Entropy

Entropy is a scientific concept that is most commonly associated with a state of disorder, randomness, or uncertainty. The term and the concept are used in diverse fields, from classical thermodynamics, where it was first recognized, to the microscopic description of nature in statistical physics, and to the principles of information theory. It has found far-ranging applications in chemistry and physics, in biological systems and their relation to life, in cosmology, economics, sociology, weather science, climate change and information systems including the transmission of information in telecommunication.
Entropy is central to the second law of thermodynamics, which states that the entropy of an isolated system left to spontaneous evolution cannot decrease with time. As a result, isolated systems evolve toward thermodynamic equilibrium, where the entropy is highest. A consequence of the second law of thermodynamics is that certain processes are irreversible.
The thermodynamic concept was referred to by Scottish scientist and engineer William Rankine in 1850 with the names thermodynamic function and heat-potential. In 1865, German physicist Rudolf Clausius, one of the leading founders of the field of thermodynamics, defined it as the quotient of an infinitesimal amount of heat to the instantaneous temperature. He initially described it as transformation-content, in German Verwandlungsinhalt, and later coined the term entropy from a Greek word for transformation.
Austrian physicist Ludwig Boltzmann explained entropy as the measure of the number of possible microscopic arrangements or states of individual atoms and molecules of a system that comply with the macroscopic condition of the system. He thereby introduced the concept of statistical disorder and probability distributions into a new field of thermodynamics, called statistical mechanics, and found the link between the microscopic interactions, which fluctuate about an average configuration, and the macroscopically observable behaviour, in the form of a simple logarithmic law, with a proportionality constant, the Boltzmann constant, which has become one of the defining universal constants for the modern International System of Units (SI).
History
In his 1803 paper Fundamental Principles of Equilibrium and Movement, the French mathematician Lazare Carnot proposed that in any machine, the accelerations and shocks of the moving parts represent losses of moment of activity; in any natural process there exists an inherent tendency towards the dissipation of useful energy. In 1824, building on that work, Lazare's son, Sadi Carnot, published Reflections on the Motive Power of Fire, which posited that in all heat-engines, whenever "caloric" (what is now known as heat) falls through a temperature difference, work or motive power can be produced from the actions of its fall from a hot to cold body. He used an analogy with how water falls in a water wheel. That was an early insight into the second law of thermodynamics. Carnot based his views of heat partially on the early 18th-century "Newtonian hypothesis" that both heat and light were types of indestructible forms of matter, which are attracted and repelled by other matter, and partially on the contemporary views of Count Rumford, who showed in 1789 that heat could be created by friction, as when cannon bores are machined. Carnot reasoned that if the body of the working substance, such as a body of steam, is returned to its original state at the end of a complete engine cycle, "no change occurs in the condition of the working body".
The first law of thermodynamics, deduced from the heat-friction experiments of James Joule in 1843, expresses the concept of energy and its conservation in all processes; the first law, however, is unsuitable to separately quantify the effects of friction and dissipation.
In the 1850s and 1860s, German physicist Rudolf Clausius objected to the supposition that no change occurs in the working body, and gave that change a mathematical interpretation, by questioning the nature of the inherent loss of usable heat when work is done, e.g., heat produced by friction. He described his observations as a dissipative use of energy, resulting in a transformation-content (Verwandlungsinhalt in German), of a thermodynamic system or working body of chemical species during a change of state. That was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass. Clausius discovered that the non-usable energy increases as steam proceeds from inlet to exhaust in a steam engine. From the prefix en-, as in 'energy', and from the Greek word τροπή [tropē], which is translated in an established lexicon as turning or change and that he rendered in German as Verwandlung, a word often translated into English as transformation, in 1865 Clausius coined the name of that property as entropy. The word was adopted into the English language in 1868.
Later, scientists such as Ludwig Boltzmann, Josiah Willard Gibbs, and James Clerk Maxwell gave entropy a statistical basis. In 1877, Boltzmann visualized a probabilistic way to measure the entropy of an ensemble of ideal gas particles, in which he defined entropy as proportional to the natural logarithm of the number of microstates such a gas could occupy. The proportionality constant in this definition, called the Boltzmann constant, has become one of the defining universal constants for the modern International System of Units (SI). Henceforth, the essential problem in statistical thermodynamics has been to determine the distribution of a given amount of energy E over N identical systems. Constantin Carathéodory, a Greek mathematician, linked entropy with a mathematical definition of irreversibility, in terms of trajectories and integrability.
Etymology
In 1865, Clausius named the concept of "the differential of a quantity which depends on the configuration of the system", entropy (Entropie) after the Greek word for 'transformation'. He gave "transformational content" (Verwandlungsinhalt) as a synonym, paralleling his "thermal and ergonal content" (Wärme- und Werkinhalt) as the name of U, but preferring the term entropy as a close parallel of the word energy, as he found the concepts nearly "analogous in their physical significance". This term was formed by replacing the root of ἔργον ('ergon', 'work') by that of τροπή ('tropy', 'transformation').
In more detail, Clausius explained his choice of "entropy" as a name as follows:
I prefer going to the ancient languages for the names of important scientific quantities, so that they may mean the same thing in all living tongues. I propose, therefore, to call S the entropy of a body, after the Greek word "transformation". I have designedly coined the word entropy to be similar to energy, for these two quantities are so analogous in their physical significance, that an analogy of denominations seems to me helpful.
Leon Cooper added that in this way "he succeeded in coining a word that meant the same thing to everybody: nothing".
Definitions and descriptions
The concept of entropy is described by two principal approaches, the macroscopic perspective of classical thermodynamics, and the microscopic description central to statistical mechanics. The classical approach defines entropy in terms of macroscopically measurable physical properties, such as bulk mass, volume, pressure, and temperature. The statistical definition of entropy defines it in terms of the statistics of the motions of the microscopic constituents of a system — modelled at first classically, e.g. Newtonian particles constituting a gas, and later quantum-mechanically (photons, phonons, spins, etc.). The two approaches form a consistent, unified view of the same phenomenon as expressed in the second law of thermodynamics, which has found universal applicability to physical processes.
State variables and functions of state
Many thermodynamic properties are defined by physical variables that define a state of thermodynamic equilibrium; these are essentially state variables. State variables depend only on the equilibrium condition, not on the path evolution to that state. State variables can be functions of state, also called state functions, in the sense that one state variable is a mathematical function of other state variables. Often, if some properties of a system are determined, they are sufficient to determine the state of the system and thus other properties' values. For example, the temperature and pressure of a given quantity of gas determine its state, and thus also its volume via the ideal gas law. A system composed of a pure substance of a single phase at a particular uniform temperature and pressure is determined, and is thus a particular state, and has a particular volume. The fact that entropy is a function of state makes it useful. In the Carnot cycle, the working fluid returns to the same state that it had at the start of the cycle, hence the change or line integral of any state function, such as entropy, over this reversible cycle is zero.
Reversible process
The entropy change of a system excluding its surroundings can be well-defined as a small portion of heat transferred to the system during a reversible process divided by the temperature of the system during this heat transfer:

$$ dS = \frac{\delta Q_\text{rev}}{T} $$

The reversible process is quasistatic (i.e., it occurs without any dissipation, deviating only infinitesimally from the thermodynamic equilibrium), and it may conserve total entropy. For example, in the Carnot cycle, while the heat flow from a hot reservoir to a cold reservoir represents the increase in the entropy in a cold reservoir, the work output, if reversibly and perfectly stored, represents the decrease in the entropy which could be used to operate the heat engine in reverse, returning to the initial state; thus the total entropy change may still be zero at all times if the entire process is reversible.
In contrast, an irreversible process increases the total entropy of the system and surroundings. Any process that happens quickly enough to deviate from thermal equilibrium cannot be reversible: the total entropy increases, and the potential for maximum work to be done during the process is lost.
Carnot cycle
The concept of entropy arose from Rudolf Clausius's study of the Carnot cycle, which is a thermodynamic cycle performed by a Carnot heat engine as a reversible heat engine. In a Carnot cycle, heat $Q_\text{H}$ is transferred from a hot reservoir to a working gas at the constant temperature $T_\text{H}$ during the isothermal expansion stage, and heat $Q_\text{C}$ is transferred from the working gas to a cold reservoir at the constant temperature $T_\text{C}$ during the isothermal compression stage. According to Carnot's theorem, a heat engine with two thermal reservoirs can produce work if and only if there is a temperature difference between the reservoirs. Originally, Carnot did not distinguish between the heats $Q_\text{H}$ and $Q_\text{C}$, as he assumed caloric theory to be valid and hence that the total heat in the system was conserved. But in fact, the magnitude of the heat $Q_\text{H}$ is greater than the magnitude of the heat $Q_\text{C}$. Through the efforts of Clausius and Kelvin, the work $W$ done by a reversible heat engine was found to be the product of the Carnot efficiency (i.e., the efficiency of all reversible heat engines with the same pair of thermal reservoirs) and the heat $Q_\text{H}$ absorbed by the working body of the engine during isothermal expansion:

$$ W = \left(1 - \frac{T_\text{C}}{T_\text{H}}\right) Q_\text{H} $$

To derive the Carnot efficiency, Kelvin had to evaluate the ratio of the work output to the heat absorbed during the isothermal expansion with the help of the Carnot–Clapeyron equation, which contained an unknown function called the Carnot function. The possibility that the Carnot function could be the temperature as measured from a zero point of temperature was suggested by Joule in a letter to Kelvin. This allowed Kelvin to establish his absolute temperature scale.
It is known that the work produced by an engine over a cycle equals the net heat absorbed over a cycle. Thus, with the sign convention for heat $Q$ transferred in a thermodynamic process ($Q > 0$ for absorption and $Q < 0$ for dissipation) we get:

$$ W - Q_\text{H} - Q_\text{C} = 0 $$

Since this equality holds over an entire Carnot cycle, it gave Clausius the hint that at each stage of the cycle the difference between the work and the net heat would be conserved, rather than the net heat itself. This means there exists a state function $U$ with the change $dU = \delta Q - \delta W$. It is called the internal energy and forms a central concept for the first law of thermodynamics.
Finally, comparison of both representations of the work output in a Carnot cycle gives us:

$$ \frac{Q_\text{H}}{T_\text{H}} + \frac{Q_\text{C}}{T_\text{C}} = 0 $$

Similarly to the derivation of internal energy, this equality implies the existence of a state function $S$ with a change of $dS = \frac{\delta Q}{T}$ which is conserved over an entire cycle. Clausius called this state function entropy.
In addition, the total change of entropy in both thermal reservoirs over the Carnot cycle is zero too, since the inversion of a heat transfer direction means a sign inversion for the heat transferred during the isothermal stages:

$$ \Delta S_{\text{r},\text{H}} + \Delta S_{\text{r},\text{C}} = -\frac{Q_\text{H}}{T_\text{H}} - \frac{Q_\text{C}}{T_\text{C}} = 0 $$

Here we denote the entropy change for a thermal reservoir by $\Delta S_{\text{r},i} = -Q_i / T_i$, where $i$ is either $\text{H}$ for a hot reservoir or $\text{C}$ for a cold one.
If we consider a heat engine which is less effective than the Carnot cycle (i.e., the work $W$ produced by this engine is less than the maximum predicted by Carnot's theorem), its work output is capped by the Carnot efficiency as:

$$ W < \left(1 - \frac{T_\text{C}}{T_\text{H}}\right) Q_\text{H} $$

Substitution of the work as the net heat into the inequality above gives us:

$$ \frac{Q_\text{H}}{T_\text{H}} + \frac{Q_\text{C}}{T_\text{C}} < 0 $$

or in terms of the entropy change $\Delta S_{\text{r},i}$:

$$ \Delta S_{\text{r},\text{H}} + \Delta S_{\text{r},\text{C}} > 0 $$

A Carnot cycle and the entropy as derived above prove to be useful in the study of any classical thermodynamic heat engine: other cycles, such as an Otto, Diesel or Brayton cycle, could be analysed from the same standpoint. Notably, any machine or cyclic process converting heat into work (i.e., a heat engine) that is claimed to produce an efficiency greater than that of Carnot is not viable, due to violation of the second law of thermodynamics.
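To make the inequality concrete, here is a minimal Python sketch comparing a reversible engine with a deliberately degraded one; the reservoir temperatures and heat input are illustrative values chosen for this example, not data from the text.

```python
# Sketch: Carnot efficiency and reservoir entropy bookkeeping.
# Temperatures and heat input are illustrative, not from the article.
T_hot, T_cold = 500.0, 300.0          # reservoir temperatures, K
Q_hot = 1000.0                        # heat absorbed from the hot reservoir, J

eta_carnot = 1.0 - T_cold / T_hot     # Carnot efficiency: the maximum possible
W_reversible = eta_carnot * Q_hot     # work output of a reversible engine
Q_cold_rev = Q_hot - W_reversible     # heat rejected to the cold reservoir

# Total reservoir entropy change vanishes for the reversible engine:
dS_rev = -Q_hot / T_hot + Q_cold_rev / T_cold
print(f"Carnot efficiency: {eta_carnot:.2f}, reversible dS: {dS_rev:.2e} J/K")

# An irreversible engine does less work, rejects more heat,
# and the total entropy of the reservoirs increases:
W_irrev = 0.5 * W_reversible
Q_cold_irrev = Q_hot - W_irrev
dS_irrev = -Q_hot / T_hot + Q_cold_irrev / T_cold
print(f"Irreversible dS: {dS_irrev:.3f} J/K  (> 0, as the second law requires)")
```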
For further analysis of sufficiently discrete systems, such as an assembly of particles, statistical thermodynamics must be used. Additionally, descriptions of devices operating near the limit of de Broglie waves, e.g. photovoltaic cells, have to be consistent with quantum statistics.
Classical thermodynamics
The thermodynamic definition of entropy was developed in the early 1850s by Rudolf Clausius and essentially describes how to measure the entropy of an isolated system in thermodynamic equilibrium with its parts. Clausius created the term entropy as an extensive thermodynamic variable that was shown to be useful in characterizing the Carnot cycle. Heat transfer in the isotherm steps (isothermal expansion and isothermal compression) of the Carnot cycle was found to be proportional to the temperature of a system (known as its absolute temperature). This relationship was expressed in an increment of entropy that is equal to incremental heat transfer divided by temperature. Entropy was found to vary in the thermodynamic cycle but eventually returned to the same value at the end of every cycle. Thus it was found to be a function of state, specifically a thermodynamic state of the system.
While Clausius based his definition on a reversible process, there are also irreversible processes that change entropy. Following the second law of thermodynamics, entropy of an isolated system always increases for irreversible processes. The difference between an isolated system and closed system is that energy may not flow to and from an isolated system, but energy flow to and from a closed system is possible. Nevertheless, for both closed and isolated systems, and indeed, also in open systems, irreversible thermodynamic processes may occur.
According to the Clausius equality, for a reversible cyclic thermodynamic process:

$$ \oint \frac{\delta Q_\text{rev}}{T} = 0 $$

which means the line integral $\int_L \frac{\delta Q_\text{rev}}{T}$ is path-independent. Thus we can define a state function $S$, called entropy:

$$ dS = \frac{\delta Q_\text{rev}}{T} $$

Therefore, thermodynamic entropy has the dimension of energy divided by temperature, and the unit joule per kelvin (J/K) in the International System of Units (SI).
To find the entropy difference between any two states of the system, the integral must be evaluated for some reversible path between the initial and final states. Since entropy is a state function, the entropy change of the system for an irreversible path is the same as for a reversible path between the same two states. However, the heat transferred to or from the surroundings is different, as is the entropy change of the surroundings.
We can calculate the change of entropy only by integrating the above formula. To obtain the absolute value of the entropy, we consider the third law of thermodynamics: perfect crystals at absolute zero have an entropy $S = 0$.
From a macroscopic perspective, in classical thermodynamics the entropy is interpreted as a state function of a thermodynamic system: that is, a property depending only on the current state of the system, independent of how that state came to be achieved. In any process where the system gives up an amount $\Delta E$ of energy to the surroundings at the temperature $T$, its entropy falls by $\Delta S$, and at least $T \cdot \Delta S$ of that energy must be given up to the system's surroundings as heat. Otherwise, this process cannot go forward. In classical thermodynamics, the entropy of a system is defined if and only if it is in a thermodynamic equilibrium (though a chemical equilibrium is not required: for example, the entropy of a mixture of two moles of hydrogen and one mole of oxygen in standard conditions is well-defined).
Statistical mechanics
The statistical definition was developed by Ludwig Boltzmann in the 1870s by analysing the statistical behaviour of the microscopic components of the system. Boltzmann showed that this definition of entropy was equivalent to the thermodynamic entropy to within a constant factor—known as the Boltzmann constant. In short, the thermodynamic definition of entropy provides the experimental verification of entropy, while the statistical definition of entropy extends the concept, providing an explanation and a deeper understanding of its nature.
The interpretation of entropy in statistical mechanics is the measure of uncertainty, disorder, or mixedupness in the phrase of Gibbs, which remains about a system after its observable macroscopic properties, such as temperature, pressure and volume, have been taken into account. For a given set of macroscopic variables, the entropy measures the degree to which the probability of the system is spread out over different possible microstates. In contrast to the macrostate, which characterizes plainly observable average quantities, a microstate specifies all molecular details about the system including the position and momentum of every molecule. The more such states are available to the system with appreciable probability, the greater the entropy. In statistical mechanics, entropy is a measure of the number of ways a system can be arranged, often taken to be a measure of "disorder" (the higher the entropy, the higher the disorder). This definition describes the entropy as being proportional to the natural logarithm of the number of possible microscopic configurations of the individual atoms and molecules of the system (microstates) that could cause the observed macroscopic state (macrostate) of the system. The constant of proportionality is the Boltzmann constant.
The Boltzmann constant, and therefore entropy, have dimensions of energy divided by temperature, which has a unit of joules per kelvin (J⋅K−1) in the International System of Units (or kg⋅m2⋅s−2⋅K−1 in terms of base units). The entropy of a substance is usually given as an intensive property — either entropy per unit mass (SI unit: J⋅K−1⋅kg−1) or entropy per unit amount of substance (SI unit: J⋅K−1⋅mol−1).
Specifically, entropy is a logarithmic measure for the system with a number of states, each with a probability $p_i$ of being occupied (usually given by the Boltzmann distribution):

$$ S = -k_\text{B} \sum_i p_i \ln p_i $$

where $k_\text{B}$ is the Boltzmann constant and the summation is performed over all possible microstates of the system.
In case states are defined in a continuous manner, the summation is replaced by an integral over all possible states, or equivalently we can consider the expected value of the logarithm of the probability that a microstate is occupied:

$$ S = -k_\text{B} \langle \ln p \rangle $$

This definition assumes the basis states to be picked in a way that there is no information on their relative phases. In the general case the expression is:

$$ S = -k_\text{B} \operatorname{Tr}(\hat{\rho} \ln \hat{\rho}) $$

where $\hat{\rho}$ is a density matrix, $\operatorname{Tr}$ is a trace operator and $\ln$ is a matrix logarithm. The density matrix formalism is not required if the system is in thermal equilibrium, so long as the basis states are chosen to be eigenstates of the Hamiltonian. For most practical purposes this can be taken as the fundamental definition of entropy since all other formulae for $S$ can be derived from it, but not vice versa.
In what has been called the fundamental postulate in statistical mechanics, among system microstates of the same energy (i.e., degenerate microstates) each microstate is assumed to be populated with equal probability $p_i = 1/\Omega$, where $\Omega$ is the number of microstates whose energy equals that of the system. Usually, this assumption is justified for an isolated system in thermodynamic equilibrium. Then in the case of an isolated system the previous formula reduces to:

$$ S = k_\text{B} \ln \Omega $$

In thermodynamics, such a system is one with a fixed volume, number of molecules, and internal energy, called a microcanonical ensemble.
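A short Python sketch can connect the two formulas: it evaluates the Gibbs entropy of a Boltzmann distribution over a handful of arbitrary illustrative energy levels, and checks that the equal-probability case collapses to the Boltzmann formula $S = k_\text{B} \ln \Omega$.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def gibbs_entropy(probs):
    """S = -k_B * sum(p_i ln p_i) over microstates with probability p_i > 0."""
    return -k_B * sum(p * math.log(p) for p in probs if p > 0)

# Boltzmann distribution over arbitrary illustrative energy levels (joules):
T = 300.0
energies = [0.0, 1e-21, 2e-21, 3e-21]
weights = [math.exp(-E / (k_B * T)) for E in energies]
Z = sum(weights)                      # partition function
probs = [w / Z for w in weights]
print("Gibbs entropy:", gibbs_entropy(probs), "J/K")

# Equal-probability case: S reduces to the Boltzmann formula k_B ln(Omega).
omega = 4
uniform = [1.0 / omega] * omega
assert abs(gibbs_entropy(uniform) - k_B * math.log(omega)) < 1e-30
```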
The most general interpretation of entropy is as a measure of the extent of uncertainty about a system. The equilibrium state of a system maximizes the entropy because it does not reflect all information about the initial conditions, except for the conserved variables. This uncertainty is not of the everyday subjective kind, but rather the uncertainty inherent to the experimental method and interpretative model.
The interpretative model has a central role in determining entropy. The qualifier "for a given set of macroscopic variables" above has deep implications when two observers use different sets of macroscopic variables. For example, consider observer A using the variables $U$, $V$ and $W$, and observer B using the variables $U$, $V$, $W$, $X$. If observer B changes variable $X$, then observer A will see a violation of the second law of thermodynamics, since he does not possess information about variable $X$ and its influence on the system. In other words, one must choose a complete set of macroscopic variables to describe the system, i.e. every independent parameter that may change during the experiment.
Entropy can also be defined for any Markov process with reversible dynamics and the detailed balance property.
In Boltzmann's 1896 Lectures on Gas Theory, he showed that this expression gives a measure of entropy for systems of atoms and molecules in the gas phase, thus providing a measure for the entropy of classical thermodynamics.
Entropy of a system
Entropy arises directly from the Carnot cycle. It can also be described as the reversible heat divided by temperature. Entropy is a fundamental function of state.
In a thermodynamic system, pressure and temperature tend to become uniform over time because the equilibrium state has higher probability (more possible combinations of microstates) than any other state.
As an example, for a glass of ice water in air at room temperature, the difference in temperature between the warm room (the surroundings) and the cold glass of ice and water (the system and not part of the room) decreases as portions of the thermal energy from the warm surroundings spread to the cooler system of ice and water. Over time the temperature of the glass and its contents and the temperature of the room become equal. In other words, the entropy of the room has decreased as some of its energy has been dispersed to the ice and water, of which the entropy has increased.
However, as calculated in the example, the entropy of the system of ice and water has increased more than the entropy of the surrounding room has decreased. In an isolated system such as the room and ice water taken together, the dispersal of energy from warmer to cooler always results in a net increase in entropy. Thus, when the "universe" of the room and ice water system has reached a temperature equilibrium, the entropy change from the initial state is at a maximum. The entropy of the thermodynamic system is a measure of how far the equalisation has progressed.
Thermodynamic entropy is a non-conserved state function that is of great importance in the sciences of physics and chemistry. Historically, the concept of entropy evolved to explain why some processes (permitted by conservation laws) occur spontaneously while their time reversals (also permitted by conservation laws) do not; systems tend to progress in the direction of increasing entropy. For isolated systems, entropy never decreases. This fact has several important consequences in science: first, it prohibits "perpetual motion" machines; and second, it implies the arrow of entropy has the same direction as the arrow of time. Increases in the total entropy of system and surroundings correspond to irreversible changes, because some energy is expended as waste heat, limiting the amount of work a system can do.
Unlike many other functions of state, entropy cannot be directly observed but must be calculated. The absolute standard molar entropy of a substance can be calculated from the measured temperature dependence of its heat capacity. The molar entropy of ions is obtained as a difference in entropy from a reference state defined as zero entropy. The second law of thermodynamics states that the entropy of an isolated system must increase or remain constant. Therefore, entropy is not a conserved quantity: for example, in an isolated system with non-uniform temperature, heat might irreversibly flow and the temperature become more uniform such that entropy increases. Chemical reactions cause changes in entropy, and system entropy, in conjunction with enthalpy, plays an important role in determining in which direction a chemical reaction spontaneously proceeds.
One dictionary definition of entropy is that it is "a measure of thermal energy per unit temperature that is not available for useful work" in a cyclic process. For instance, a substance at uniform temperature is at maximum entropy and cannot drive a heat engine. A substance at non-uniform temperature is at a lower entropy (than if the heat distribution is allowed to even out) and some of the thermal energy can drive a heat engine.
A special case of entropy increase, the entropy of mixing, occurs when two or more different substances are mixed. If the substances are at the same temperature and pressure, there is no net exchange of heat or work – the entropy change is entirely due to the mixing of the different substances. At a statistical mechanical level, this results due to the change in available volume per particle with mixing.
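For ideal substances mixed at the same temperature and pressure, this reduces to the standard ideal-mixing formula $\Delta S_\text{mix} = -n_\text{total} R \sum_i x_i \ln x_i$ in terms of the mole fractions $x_i$; a minimal Python sketch, with illustrative amounts:

```python
import math

R = 8.314462618  # ideal gas constant, J/(mol K)

def entropy_of_mixing(moles):
    """Ideal entropy of mixing: dS = -n_total * R * sum(x_i ln x_i)."""
    n_total = sum(moles)
    return -n_total * R * sum((n / n_total) * math.log(n / n_total)
                              for n in moles if n > 0)

# Mixing one mole each of two different ideal gases at the same T and p:
print(entropy_of_mixing([1.0, 1.0]), "J/K")  # 2 R ln 2, about 11.5 J/K
```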
Equivalence of definitions
Proofs of equivalence between the entropy in statistical mechanics — the Gibbs entropy formula:

$$ S = -k_\text{B} \sum_i p_i \ln p_i $$

and the entropy in classical thermodynamics:

$$ dS = \frac{\delta Q_\text{rev}}{T} $$

together with the fundamental thermodynamic relation are known for the microcanonical ensemble, the canonical ensemble, the grand canonical ensemble, and the isothermal–isobaric ensemble. These proofs are based on the probability density of microstates of the generalised Boltzmann distribution and the identification of the thermodynamic internal energy as the ensemble average $U = \langle E \rangle$. Thermodynamic relations are then employed to derive the well-known Gibbs entropy formula. However, the equivalence between the Gibbs entropy formula and the thermodynamic definition of entropy is not a fundamental thermodynamic relation but rather a consequence of the form of the generalized Boltzmann distribution.
Furthermore, it has been shown that the definition of entropy in statistical mechanics is the only entropy that is equivalent to the classical thermodynamic entropy under a certain set of postulates.
Second law of thermodynamics
The second law of thermodynamics requires that, in general, the total entropy of any system does not decrease other than by increasing the entropy of some other system. Hence, in a system isolated from its environment, the entropy of that system tends not to decrease. It follows that heat cannot flow from a colder body to a hotter body without the application of work to the colder body. Secondly, it is impossible for any device operating on a cycle to produce net work from a single temperature reservoir; the production of net work requires flow of heat from a hotter reservoir to a colder reservoir, or a single expanding reservoir undergoing adiabatic cooling, which performs adiabatic work. As a result, there is no possibility of a perpetual motion machine. It follows that a reduction in the increase of entropy in a specified process, such as a chemical reaction, means that it is energetically more efficient.
It follows from the second law of thermodynamics that the entropy of a system that is not isolated may decrease. An air conditioner, for example, may cool the air in a room, thus reducing the entropy of the air of that system. The heat expelled from the room (the system), which the air conditioner transports and discharges to the outside air, always makes a bigger contribution to the entropy of the environment than the decrease of the entropy of the air of that system. Thus, the total of entropy of the room plus the entropy of the environment increases, in agreement with the second law of thermodynamics.
In mechanics, the second law in conjunction with the fundamental thermodynamic relation places limits on a system's ability to do useful work. The entropy change of a system at temperature $T$ absorbing an infinitesimal amount of heat $\delta q$ in a reversible way is given by $\delta q / T$. More explicitly, an energy $T_R S$ is not available to do useful work, where $T_R$ is the temperature of the coldest accessible reservoir or heat sink external to the system. For further discussion, see Exergy.
Statistical mechanics demonstrates that entropy is governed by probability, thus allowing for a decrease in disorder even in an isolated system. Although this is possible, such an event has a small probability of occurring, making it unlikely.
The applicability of the second law of thermodynamics is limited to systems in or sufficiently near an equilibrium state, so that they have a defined entropy. Some inhomogeneous systems out of thermodynamic equilibrium still satisfy the hypothesis of local thermodynamic equilibrium, so that the entropy density is locally defined as an intensive quantity. For such systems, a principle of maximum time rate of entropy production may apply. It states that such a system may evolve to a steady state that maximises its time rate of entropy production. This does not mean that such a system is necessarily always in a condition of maximum time rate of entropy production; it means that it may evolve to such a steady state.
Applications
The fundamental thermodynamic relation
The entropy of a system depends on its internal energy and its external parameters, such as its volume. In the thermodynamic limit, this fact leads to an equation relating the change in the internal energy $U$ to changes in the entropy and the external parameters. This relation is known as the fundamental thermodynamic relation. If external pressure $p$ bears on the volume $V$ as the only external parameter, this relation is:

$$ dU = T\,dS - p\,dV $$

Since both internal energy and entropy are monotonic functions of temperature $T$, implying that the internal energy is fixed when one specifies the entropy and the volume, this relation is valid even if the change from one state of thermal equilibrium to another with infinitesimally larger entropy and volume happens in a non-quasistatic way (so during this change the system may be very far out of thermal equilibrium and then the whole-system entropy, pressure, and temperature may not exist).
The fundamental thermodynamic relation implies many thermodynamic identities that are valid in general, independent of the microscopic details of the system. Important examples are the Maxwell relations and the relations between heat capacities.
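As one concrete instance, a Maxwell relation follows directly from the fundamental relation by passing to the Helmholtz free energy; a short sketch of the derivation in the notation used above:

```latex
% From dU = T\,dS - p\,dV, define the Helmholtz free energy F = U - TS, so
%   dF = dU - T\,dS - S\,dT = -S\,dT - p\,dV .
% Since dF is an exact differential, its mixed second derivatives agree,
% which yields one of the Maxwell relations:
\left(\frac{\partial S}{\partial V}\right)_T
  = \left(\frac{\partial p}{\partial T}\right)_V
```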
Entropy in chemical thermodynamics
Thermodynamic entropy is central in chemical thermodynamics, enabling changes to be quantified and the outcome of reactions predicted. The second law of thermodynamics states that entropy in an isolated system — the combination of a subsystem under study and its surroundings — increases during all spontaneous chemical and physical processes. The Clausius equation $\delta q_\text{rev}/T = \Delta S$ introduces the measurement of entropy change, which describes the direction and quantifies the magnitude of simple changes such as heat transfer between systems — always from the hotter body to the cooler one spontaneously.
Thermodynamic entropy is an extensive property, meaning that it scales with the size or extent of a system. In many processes it is useful to specify the entropy as an intensive property independent of the size, as a specific entropy characteristic of the type of system studied. Specific entropy may be expressed relative to a unit of mass, typically the kilogram (unit: J⋅kg−1⋅K−1). Alternatively, in chemistry, it is also referred to one mole of substance, in which case it is called the molar entropy with a unit of J⋅mol−1⋅K−1.
Thus, when one mole of substance at about 0 K is warmed by its surroundings to 298 K, the sum of the incremental values of $q_\text{rev}/T$ constitute each element's or compound's standard molar entropy, an indicator of the amount of energy stored by a substance at 298 K. Entropy change also measures the mixing of substances as a summation of their relative quantities in the final mixture.
Entropy is equally essential in predicting the extent and direction of complex chemical reactions. For such applications, $\Delta S$ must be incorporated in an expression that includes both the system and its surroundings:

$$ \Delta S_\text{universe} = \Delta S_\text{surroundings} + \Delta S_\text{system} $$

Via additional steps this expression becomes the equation of Gibbs free energy change for reactants and products in the system at the constant pressure and temperature $T$:

$$ \Delta G = \Delta H - T\,\Delta S $$

where $\Delta H$ is the enthalpy change and $\Delta S$ is the entropy change.
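A minimal numerical sketch of this spontaneity criterion follows; the $\Delta H$ and $\Delta S$ values are illustrative placeholders, not measured data.

```python
# Spontaneity via Gibbs free energy: dG = dH - T*dS (constant T and p).
# Illustrative values, not measured data.
dH = -92_000.0   # enthalpy change, J/mol
dS = -199.0      # entropy change, J/(mol K)

for T in (298.0, 500.0, 1000.0):
    dG = dH - T * dS
    verdict = "spontaneous" if dG < 0 else "non-spontaneous"
    print(f"T = {T:6.1f} K: dG = {dG:10.0f} J/mol -> {verdict}")
```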
World's technological capacity to store and communicate entropic information
A 2011 study in Science estimated the world's technological capacity to store and communicate optimally compressed information, normalised on the most effective compression algorithms available in the year 2007, therefore estimating the entropy of the technologically available sources. The authors estimate that humankind's technological capacity to store information grew from 2.6 (entropically compressed) exabytes in 1986 to 295 (entropically compressed) exabytes in 2007. The world's technological capacity to receive information through one-way broadcast networks grew from 432 exabytes of (entropically compressed) information in 1986 to 1.9 zettabytes in 2007. The world's effective capacity to exchange information through two-way telecommunication networks grew from 281 petabytes of (entropically compressed) information in 1986 to 65 (entropically compressed) exabytes in 2007.
Entropy balance equation for open systems
In chemical engineering, the principles of thermodynamics are commonly applied to "open systems", i.e. those in which heat, work, and mass flow across the system boundary. In general, a flow of heat $\dot{Q}$, a flow of shaft work $\dot{W}_\text{S}$ and pressure–volume work across the system boundaries cause changes in the entropy of the system. Heat transfer entails entropy transfer $\dot{Q}/T$, where $T$ is the absolute thermodynamic temperature of the system at the point of the heat flow. If there are mass flows across the system boundaries, they also influence the total entropy of the system. This account, in terms of heat and work, is valid only for cases in which the work and heat transfers are by paths physically distinct from the paths of entry and exit of matter from the system.
To derive a generalised entropy balance equation, we start with the general balance equation for the change in any extensive quantity $\theta$ in a thermodynamic system, a quantity that may be either conserved, such as energy, or non-conserved, such as entropy. The basic generic balance expression states that $d\theta/dt$, i.e. the rate of change of $\theta$ in the system, equals the rate at which $\theta$ enters the system at the boundaries, minus the rate at which $\theta$ leaves the system across the system boundaries, plus the rate at which $\theta$ is generated within the system. For an open thermodynamic system in which heat and work are transferred by paths separate from the paths for transfer of matter, using this generic balance equation, with respect to the rate of change with time $t$ of the extensive quantity entropy $S$, the entropy balance equation is:

$$ \frac{dS}{dt} = \sum_{k=1}^{K} \dot{M}_k \hat{S}_k + \frac{\dot{Q}}{T} + \dot{S}_\text{gen} $$

where $\sum_{k=1}^{K} \dot{M}_k \hat{S}_k$ is the net rate of entropy flow due to the flows of mass into and out of the system with entropy per unit mass $\hat{S}_k$, $\dot{Q}/T$ is the rate of entropy flow due to the flow of heat across the system boundary and $\dot{S}_\text{gen}$ is the rate of entropy generation within the system, e.g. by chemical reactions, phase transitions, internal heat transfer or frictional effects such as viscosity.
In the case of multiple heat flows the term $\dot{Q}/T$ is replaced by $\sum_j \dot{Q}_j/T_j$, where $\dot{Q}_j$ is the heat flow through the $j$-th port into the system and $T_j$ is the temperature at the $j$-th port.
The nomenclature "entropy balance" is misleading and often deemed inappropriate because entropy is not a conserved quantity. In other words, the term $\dot{S}_\text{gen}$ is never a known quantity but always a derived one based on the expression above. Therefore, the open system version of the second law is more appropriately described as the "entropy generation equation", since it specifies that:

$$ \dot{S}_\text{gen} \geq 0 $$

with zero for a reversible process and positive values for an irreversible one.
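As a sketch of how the balance is used in practice, the following Python fragment computes the generation term for a hypothetical steady-state unit with one inlet, one outlet and a single heat port; all stream data are invented for illustration.

```python
# Steady-state entropy generation for an open system with one inlet, one
# outlet, and one heat port: 0 = m_dot*(s_in - s_out) + Q_dot/T + S_dot_gen.
# All numbers are illustrative, not from the article.
m_dot = 2.0                    # mass flow rate, kg/s
s_in, s_out = 1.25e3, 1.19e3   # specific entropies, J/(kg K)
Q_dot = -50e3                  # heat removed from the system, W
T_boundary = 350.0             # temperature where heat crosses the boundary, K

S_dot_gen = m_dot * (s_out - s_in) - Q_dot / T_boundary
print(f"Entropy generation rate: {S_dot_gen:.1f} W/K")
assert S_dot_gen >= 0, "a negative value would violate the second law"
```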
Entropy change formulas for simple processes
For certain simple transformations in systems of constant composition, the entropy changes are given by simple formulas.
Isothermal expansion or compression of an ideal gas
For the expansion (or compression) of an ideal gas from an initial volume $V_0$ and pressure $P_0$ to a final volume $V$ and pressure $P$ at any constant temperature, the change in entropy is given by:

$$ \Delta S = n R \ln\frac{V}{V_0} = -n R \ln\frac{P}{P_0} $$

Here $n$ is the amount of gas (in moles) and $R$ is the ideal gas constant. These equations also apply for expansion into a finite vacuum or a throttling process, where the temperature, internal energy and enthalpy for an ideal gas remain constant.
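A quick numerical check of the volume form, for one mole doubling its volume (illustrative values):

```python
import math

R = 8.314462618  # ideal gas constant, J/(mol K)

# Isothermal expansion of an ideal gas: dS = n R ln(V/V0) = -n R ln(P/P0).
n, V0, V = 1.0, 1.0, 2.0   # 1 mol doubling its volume
dS = n * R * math.log(V / V0)
print(f"dS = {dS:.2f} J/K")  # R ln 2, about 5.76 J/K
```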
Cooling and heating
For pure heating or cooling of any system (gas, liquid or solid) at constant pressure from an initial temperature $T_0$ to a final temperature $T$, the entropy change is:

$$ \Delta S = n C_P \ln\frac{T}{T_0} $$
provided that the constant-pressure molar heat capacity (or specific heat) $C_P$ is constant and that no phase transition occurs in this temperature interval.
Similarly at constant volume, the entropy change is:

$$ \Delta S = n C_v \ln\frac{T}{T_0} $$

where the constant-volume molar heat capacity $C_v$ is constant and there is no phase change.
At low temperatures near absolute zero, heat capacities of solids quickly drop off to near zero, so the assumption of constant heat capacity does not apply.
Since entropy is a state function, the entropy change of any process in which temperature and volume both vary is the same as for a path divided into two steps – heating at constant volume and expansion at constant temperature. For an ideal gas, the total entropy change is:

$$ \Delta S = n C_v \ln\frac{T}{T_0} + n R \ln\frac{V}{V_0} $$

Similarly if the temperature and pressure of an ideal gas both vary:

$$ \Delta S = n C_P \ln\frac{T}{T_0} - n R \ln\frac{P}{P_0} $$
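The state-function property can be verified numerically: the two formulas above must give the same $\Delta S$ for the same change of state. A sketch for a monatomic ideal gas, with illustrative initial and final states:

```python
import math

R = 8.314462618          # J/(mol K)
n = 1.0                  # amount of (monatomic, ideal) gas, mol
C_v = 1.5 * R            # constant-volume molar heat capacity (monatomic)
C_p = C_v + R            # constant-pressure molar heat capacity

# Illustrative change of state: (T0, V0) -> (T, V), with p = nRT/V.
T0, V0 = 300.0, 1.0
T, V = 450.0, 2.0
p0, p = n * R * T0 / V0, n * R * T / V

# Path 1: heat at constant volume, then expand at constant temperature.
dS_TV = n * C_v * math.log(T / T0) + n * R * math.log(V / V0)
# Path 2: the equivalent temperature-and-pressure form.
dS_Tp = n * C_p * math.log(T / T0) - n * R * math.log(p / p0)

assert abs(dS_TV - dS_Tp) < 1e-9   # entropy is a state function
print(f"dS = {dS_TV:.3f} J/K")
```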
Phase transitions
Reversible phase transitions occur at constant temperature and pressure. The reversible heat is the enthalpy change for the transition, and the entropy change is the enthalpy change divided by the thermodynamic temperature. For fusion (i.e., melting) of a solid to a liquid at the melting point $T_\text{m}$, the entropy of fusion is:

$$ \Delta S_\text{fus} = \frac{\Delta H_\text{fus}}{T_\text{m}} $$

Similarly, for vaporisation of a liquid to a gas at the boiling point $T_\text{b}$, the entropy of vaporisation is:

$$ \Delta S_\text{vap} = \frac{\Delta H_\text{vap}}{T_\text{b}} $$
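For example, for the melting of water ice, taking the commonly tabulated values $\Delta H_\text{fus} \approx 6.01$ kJ/mol and $T_\text{m} = 273.15$ K gives roughly 22 J/(mol·K); a one-line check:

```python
# Entropy of fusion for water ice: dS_fus = dH_fus / T_m.
# dH_fus ~ 6.01 kJ/mol and T_m = 273.15 K are standard handbook values.
dH_fus, T_m = 6010.0, 273.15
print(f"dS_fus = {dH_fus / T_m:.1f} J/(mol K)")  # about 22.0 J/(mol K)
```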
Approaches to understanding entropy
As a fundamental aspect of thermodynamics and physics, several different approaches to entropy beyond that of Clausius and Boltzmann are valid.
Standard textbook definitions
The following is a list of additional definitions of entropy from a collection of textbooks:
a measure of energy dispersal at a specific temperature.
a measure of disorder in the universe or of the availability of the energy in a system to do work.
a measure of a system's thermal energy per unit temperature that is unavailable for doing useful work.
In Boltzmann's analysis in terms of constituent particles, entropy is a measure of the number of possible microscopic states (or microstates) of a system in thermodynamic equilibrium.
Order and disorder
Entropy is often loosely associated with the amount of order or disorder, or of chaos, in a thermodynamic system. The traditional qualitative description of entropy is that it refers to changes in the state of the system and is a measure of "molecular disorder" and the amount of wasted energy in a dynamical energy transformation from one state or form to another. In this direction, several recent authors have derived exact entropy formulas to account for and measure disorder and order in atomic and molecular assemblies. One of the simpler entropy order/disorder formulas is that derived in 1984 by thermodynamic physicist Peter Landsberg, based on a combination of thermodynamics and information theory arguments. He argues that when constraints operate on a system, such that it is prevented from entering one or more of its possible or permitted states, as contrasted with its forbidden states, the measure of the total amount of "disorder" and "order" in the system are each given by:

$$ \text{Disorder} = \frac{C_\text{D}}{C_\text{I}} \qquad \text{Order} = 1 - \frac{C_\text{O}}{C_\text{I}} $$

Here, $C_\text{D}$ is the "disorder" capacity of the system, which is the entropy of the parts contained in the permitted ensemble, $C_\text{I}$ is the "information" capacity of the system, an expression similar to Shannon's channel capacity, and $C_\text{O}$ is the "order" capacity of the system.
Energy dispersal
The concept of entropy can be described qualitatively as a measure of energy dispersal at a specific temperature. Similar terms have been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantised energy levels.
Ambiguities in the terms disorder and chaos, which usually have meanings directly opposed to equilibrium, contribute to widespread confusion and hamper comprehension of entropy for most students. As the second law of thermodynamics shows, in an isolated system internal portions at different temperatures tend to adjust to a single uniform temperature and thus produce equilibrium. A recently developed educational approach avoids ambiguous terms and describes such spreading out of energy as dispersal, which leads to loss of the differentials required for work even though the total energy remains constant in accordance with the first law of thermodynamics (compare discussion in next section). Physical chemist Peter Atkins, in his textbook Physical Chemistry, introduces entropy with the statement that "spontaneous changes are always accompanied by a dispersal of energy or matter and often both".
Relating entropy to energy usefulness
It is possible (in a thermal context) to regard lower entropy as a measure of the effectiveness or usefulness of a particular quantity of energy. Energy supplied at a higher temperature (i.e. with low entropy) tends to be more useful than the same amount of energy available at a lower temperature. Mixing a hot parcel of a fluid with a cold one produces a parcel of intermediate temperature, in which the overall increase in entropy represents a "loss" that can never be replaced.
As the entropy of the universe is steadily increasing, its total energy is becoming less useful. Eventually, this is theorised to lead to the heat death of the universe.
Entropy and adiabatic accessibility
A definition of entropy based entirely on the relation of adiabatic accessibility between equilibrium states was given by E. H. Lieb and J. Yngvason in 1999. This approach has several predecessors, including the pioneering work of Constantin Carathéodory from 1909 and the monograph by R. Giles. In the setting of Lieb and Yngvason, one starts by picking, for a unit amount of the substance under consideration, two reference states $X_0$ and $X_1$ such that the latter is adiabatically accessible from the former but not conversely. Defining the entropies of the reference states to be 0 and 1 respectively, the entropy of a state $X$ is defined as the largest number $\lambda$ such that $X$ is adiabatically accessible from a composite state consisting of an amount $\lambda$ in the state $X_1$ and a complementary amount, $(1 - \lambda)$, in the state $X_0$. A simple but important result within this setting is that entropy is uniquely determined, apart from a choice of unit and an additive constant for each chemical element, by the following properties: it is monotonic with respect to the relation of adiabatic accessibility, additive on composite systems, and extensive under scaling.
Entropy in quantum mechanics
In quantum statistical mechanics, the concept of entropy was developed by John von Neumann and is generally referred to as "von Neumann entropy":

$$ S = -k_\text{B} \operatorname{Tr}(\hat{\rho} \ln \hat{\rho}) $$

where $\hat{\rho}$ is the density matrix, $\operatorname{Tr}$ is the trace operator and $k_\text{B}$ is the Boltzmann constant.
This upholds the correspondence principle, because in the classical limit, when the phases between the basis states are purely random, this expression is equivalent to the familiar classical definition of entropy for states with classical probabilities $p_i$:

$$ S = -k_\text{B} \sum_i p_i \ln p_i $$

i.e. in such a basis the density matrix is diagonal.
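Because the density matrix is Hermitian, the trace can be evaluated through its eigenvalues; a brief numpy sketch, working in units where $k_\text{B} = 1$:

```python
import numpy as np

k_B = 1.0  # work in units where the Boltzmann constant is 1

def von_neumann_entropy(rho):
    """S = -k_B Tr(rho ln rho), computed via the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # 0 ln 0 -> 0 by convention
    return -k_B * np.sum(evals * np.log(evals))

# A pure state has zero entropy...
pure = np.array([[1.0, 0.0], [0.0, 0.0]])
# ...while the maximally mixed qubit state has entropy k_B ln 2.
mixed = np.eye(2) / 2.0

print(von_neumann_entropy(pure))   # 0.0
print(von_neumann_entropy(mixed))  # ln 2, about 0.693
```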
Von Neumann established a rigorous mathematical framework for quantum mechanics with his 1932 work Mathematische Grundlagen der Quantenmechanik (Mathematical Foundations of Quantum Mechanics). He provided in this work a theory of measurement, where the usual notion of wave function collapse is described as an irreversible process (the so-called von Neumann or projective measurement). Using this concept, in conjunction with the density matrix, he extended the classical concept of entropy into the quantum domain.
Information theory
When viewed in terms of information theory, the entropy state function is the amount of information in the system that is needed to fully specify the microstate of the system. Entropy is the measure of the amount of missing information before reception. Often called Shannon entropy, it was originally devised by Claude Shannon in 1948 to study the size of information of a transmitted message. The definition of information entropy is expressed in terms of a discrete set of probabilities $p_i$ so that:

$$ H = -\sum_i p_i \log p_i $$

where the base of the logarithm determines the units (for example, the binary logarithm corresponds to bits).
In the case of transmitted messages, these probabilities were the probabilities that a particular message was actually transmitted, and the entropy of the message system was a measure of the average size of information of a message. For the case of equal probabilities (i.e. each message is equally probable), the Shannon entropy (in bits) is just the number of binary questions needed to determine the content of the message.
Most researchers consider information entropy and thermodynamic entropy directly linked to the same concept, while others argue that they are distinct. Both expressions are mathematically similar. If $W$ is the number of microstates that can yield a given macrostate, and each microstate has the same a priori probability, then that probability is $p = 1/W$. The Shannon entropy (in nats) is:

$$ H = -\sum_{i=1}^{W} p \ln p = \ln W $$

and if entropy is measured in units of $k_\text{B}$ per nat, then the entropy is given by:

$$ S = k_\text{B} \ln W $$

which is the Boltzmann entropy formula, where $k_\text{B}$ is the Boltzmann constant, which may be interpreted as the thermodynamic entropy per nat. Some authors argue for dropping the word entropy for the $H$ function of information theory and using Shannon's other term, "uncertainty", instead.
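A small Python sketch illustrating both readings, the binary-question count in bits and the uniform-case link to the Boltzmann formula:

```python
import math

def shannon_entropy(probs, base=2):
    """H = -sum(p_i log p_i); base 2 gives bits, base e gives nats."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# Four equally likely messages need log2(4) = 2 binary questions (2 bits):
print(shannon_entropy([0.25] * 4))            # 2.0 bits

# The same uniform case in nats is ln(W); multiplying by the Boltzmann
# constant recovers the Boltzmann entropy formula S = k_B ln W.
k_B = 1.380649e-23
W = 4
print(k_B * shannon_entropy([1 / W] * W, base=math.e))  # k_B ln 4, in J/K
```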
Measurement
The entropy of a substance can be measured, although in an indirect way. The measurement, known as entropymetry, is done on a closed system with a constant number of particles $N$ and a constant volume $V$, and it uses the definition of temperature in terms of entropy, while limiting energy exchange to heat ($dU \to \delta Q$):

$$ \frac{1}{T} = \left(\frac{\partial S}{\partial U}\right)_{V,N} \quad \Rightarrow \quad dS = \frac{\delta Q}{T} $$

The resulting relation describes how the entropy changes $dS$ when a small amount of energy $\delta Q$ is introduced into the system at a certain temperature $T$.
The process of measurement goes as follows. First, a sample of the substance is cooled as close to absolute zero as possible. At such temperatures, the entropy approaches zero due to the definition of temperature. Then, small amounts of heat are introduced into the sample and the change in temperature is recorded, until the temperature reaches a desired value (usually 25 °C). The obtained data allows the user to integrate the equation above, yielding the absolute value of entropy of the substance at the final temperature. This value of entropy is called calorimetric entropy.
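A sketch of the final integration step, applying the trapezoidal rule to the integrand $C_p/T$; the heat-capacity data below are invented for illustration, not measurements:

```python
# Calorimetric entropy: S(T_final) = integral from ~0 K of C_p(T)/T dT.
# The (T, C_p) pairs below are invented illustrative data, not measurements.
T_data  = [10.0, 50.0, 100.0, 150.0, 200.0, 250.0, 298.15]   # K
Cp_data = [0.4,  12.0, 24.0,  31.0,  35.0,  37.5,  39.0]     # J/(mol K)

S = 0.0
for i in range(len(T_data) - 1):
    # trapezoidal rule applied to the integrand C_p / T
    f0 = Cp_data[i] / T_data[i]
    f1 = Cp_data[i + 1] / T_data[i + 1]
    S += 0.5 * (f0 + f1) * (T_data[i + 1] - T_data[i])

print(f"Calorimetric entropy at 298.15 K: ~{S:.1f} J/(mol K)")
```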
Interdisciplinary applications
Although the concept of entropy was originally a thermodynamic concept, it has been adapted in other fields of study, including information theory, psychodynamics, thermoeconomics/ecological economics, and evolution.
Philosophy and theoretical physics
Entropy is the only quantity in the physical sciences that seems to imply a particular direction of progress, sometimes called an arrow of time. As time progresses, the second law of thermodynamics states that the entropy of an isolated system never decreases in large systems over significant periods of time. Hence, from this perspective, entropy measurement is thought of as a clock in these conditions.
Biology
Chiavazzo et al. proposed that where cave spiders choose to lay their eggs can be explained through entropy minimisation.
Entropy has been proven useful in the analysis of base pair sequences in DNA. Many entropy-based measures have been shown to distinguish between different structural regions of the genome, differentiate between coding and non-coding regions of DNA, and can also be applied for the recreation of evolutionary trees by determining the evolutionary distance between different species.
Cosmology
Assuming that a finite universe is an isolated system, the second law of thermodynamics states that its total entropy is continually increasing. It has been speculated, since the 19th century, that the universe is fated to a heat death in which all the energy ends up as a homogeneous distribution of thermal energy so that no more work can be extracted from any source.
If the universe can be considered to have generally increasing entropy, then – as Roger Penrose has pointed out – gravity plays an important role in the increase because gravity causes dispersed matter to accumulate into stars, which collapse eventually into black holes. The entropy of a black hole is proportional to the surface area of the black hole's event horizon. Jacob Bekenstein and Stephen Hawking have shown that black holes have the maximum possible entropy of any object of equal size. This makes them likely end points of all entropy-increasing processes, if they are totally effective matter and energy traps. However, the escape of energy from black holes might be possible due to quantum activity (see Hawking radiation).
The role of entropy in cosmology remains a controversial subject since the time of Ludwig Boltzmann. Recent work has cast some doubt on the heat death hypothesis and the applicability of any simple thermodynamic model to the universe in general. Although entropy does increase in the model of an expanding universe, the maximum possible entropy rises much more rapidly, moving the universe further from the heat death with time, not closer. This results in an "entropy gap" pushing the system further away from the posited heat death equilibrium. Other complicating factors, such as the energy density of the vacuum and macroscopic quantum effects, are difficult to reconcile with thermodynamical models, making any predictions of large-scale thermodynamics extremely difficult.
Current theories suggest the entropy gap to have been originally opened up by the early rapid exponential expansion of the universe.
Economics
Romanian American economist Nicholas Georgescu-Roegen, a progenitor in economics and a paradigm founder of ecological economics, made extensive use of the entropy concept in his magnum opus on The Entropy Law and the Economic Process. Due to Georgescu-Roegen's work, the laws of thermodynamics form an integral part of the ecological economics school. Although his work was blemished somewhat by mistakes, a full chapter on the economics of Georgescu-Roegen has approvingly been included in one elementary physics textbook on the historical development of thermodynamics.
In economics, Georgescu-Roegen's work has generated the term 'entropy pessimism'. Since the 1990s, leading ecological economist and steady-state theorist Herman Daly – a student of Georgescu-Roegen – has been the economics profession's most influential proponent of the entropy pessimism position.
Equation of state

In physics and chemistry, an equation of state is a thermodynamic equation relating state variables, which describe the state of matter under a given set of physical conditions, such as pressure, volume, temperature, or internal energy. Most modern equations of state are formulated in the Helmholtz free energy. Equations of state are useful in describing the properties of pure substances and mixtures in liquids, gases, and solid states as well as the state of matter in the interior of stars.
Overview
At present, there is no single equation of state that accurately predicts the properties of all substances under all conditions. An example of an equation of state that correlates densities of gases and liquids to temperatures and pressures is the ideal gas law, which is roughly accurate for weakly polar gases at low pressures and moderate temperatures. This equation becomes increasingly inaccurate at higher pressures and lower temperatures, and fails to predict condensation from a gas to a liquid.
The general form of an equation of state may be written as

$$ f(p, V, T) = 0 $$

where $p$ is the pressure, $V$ the volume, and $T$ the temperature of the system. Other variables may also be used in that form. The form is directly related to the Gibbs phase rule, that is, the number of independent variables depends on the number of substances and phases in the system.
An equation used to model this relationship is called an equation of state. In most cases this model will comprise some empirical parameters that are usually adjusted to measurement data. Equations of state can also describe solids, including the transition of solids from one crystalline state to another. Equations of state are also used for the modeling of the state of matter in the interior of stars, including neutron stars, dense matter (quark–gluon plasmas) and radiation fields. A related concept is the perfect fluid equation of state used in cosmology.
Equations of state are applied in many fields such as process engineering and petroleum industry as well as pharmaceutical industry.
Any consistent set of units may be used, although SI units are preferred. Absolute temperature refers to the use of the Kelvin (K), with zero being absolute zero.
$n$, number of moles of a substance
$V_m = V/n$, molar volume, the volume of 1 mole of gas or liquid
$R$, ideal gas constant ≈ 8.3144621 J/(mol·K)
$p_c$, pressure at the critical point
$V_c$, molar volume at the critical point
$T_c$, absolute temperature at the critical point
Historical background
Boyle's law was one of the earliest formulations of an equation of state. In 1662, the Irish physicist and chemist Robert Boyle performed a series of experiments employing a J-shaped glass tube, which was sealed on one end. Mercury was added to the tube, trapping a fixed quantity of air in the short, sealed end of the tube. Then the volume of gas was measured as additional mercury was added to the tube. The pressure of the gas could be determined by the difference between the mercury level in the short end of the tube and that in the long, open end. Through these experiments, Boyle noted that the gas volume varied inversely with the pressure. In mathematical form, this can be stated as:

$$ pV = \text{constant} $$

The above relationship has also been attributed to Edme Mariotte and is sometimes referred to as Mariotte's law. However, Mariotte's work was not published until 1676.
In 1787 the French physicist Jacques Charles found that oxygen, nitrogen, hydrogen, carbon dioxide, and air expand to roughly the same extent over the same 80-kelvin interval. This is known today as Charles's law. Later, in 1802, Joseph Louis Gay-Lussac published results of similar experiments, indicating a linear relationship between volume and temperature:

$$ \frac{V_1}{T_1} = \frac{V_2}{T_2} $$

Dalton's law (1801) of partial pressure states that the pressure of a mixture of gases is equal to the sum of the pressures of all of the constituent gases alone.
Mathematically, this can be represented for n species as:

ptotal = p1 + p2 + ⋯ + pn.

In 1834, Émile Clapeyron combined Boyle's law and Charles' law into the first statement of the ideal gas law. Initially, the law was formulated as pVm = R(TC + 267) (with temperature expressed in degrees Celsius), where R is the gas constant. However, later work revealed that the number should actually be closer to 273.2, and the Celsius scale was then defined with 0 °C = 273.15 K, giving:

pVm = R(TC + 273.15 K).

In 1873, J. D. van der Waals introduced the first equation of state derived by the assumption of a finite volume occupied by the constituent molecules. His new formula revolutionized the study of equations of state, and was the starting point of cubic equations of state, which most famously continued via the Redlich–Kwong equation of state and the Soave modification of Redlich–Kwong.
The van der Waals equation of state can be written as

(p + a/Vm²)(Vm − b) = RT

where a is a parameter describing the attractive energy between particles and b is a parameter describing the volume of the particles.
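A short Python sketch evaluating the van der Waals pressure; the CO2 parameter values used here are commonly tabulated figures, but should be treated as illustrative assumptions:

```python
# van der Waals pressure: p = R*T/(Vm - b) - a/Vm**2
R = 8.314462  # J/(mol K)

def vdw_pressure(T, Vm, a, b):
    """Pressure of a van der Waals gas at temperature T and molar volume Vm."""
    return R * T / (Vm - b) - a / Vm**2

# Parameters for CO2 (illustrative values):
a_co2 = 0.3640      # Pa m^6 / mol^2, attraction parameter
b_co2 = 4.267e-5    # m^3 / mol, covolume (finite molecular size)

T, Vm = 300.0, 1e-3  # 300 K, 1 L/mol
print(vdw_pressure(T, Vm, a_co2, b_co2))  # ~2.24e6 Pa
print(R * T / Vm)                         # ideal gas value: ~2.49e6 Pa
```

The attraction term a/Vm² lowers the pressure below the ideal gas value, while the covolume b raises it; which effect wins depends on temperature and density.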
Ideal gas law
Classical ideal gas law
The classical ideal gas law may be written

pV = nRT.

In the form shown above, the equation of state is thus

f(p, V, T) = pV − nRT = 0.
If the calorically perfect gas approximation is used, then the ideal gas law may also be expressed as follows

p = ρ(γ − 1)e

where ρ is the mass density of the gas (proportional to the number density, the number of atoms or molecules per unit volume), γ is the (constant) adiabatic index (ratio of specific heats), e = cvT is the internal energy per unit mass (the "specific internal energy"), cv is the specific heat capacity at constant volume, and cp is the specific heat capacity at constant pressure.
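The two forms are consistent by construction, as the following sketch shows (the air-like property values are assumptions for the example):

```python
# Two equivalent statements of the ideal gas law.
R = 8.314462          # J/(mol K), universal gas constant
M = 0.0289647         # kg/mol, molar mass of dry air (assumed value)

T = 300.0             # K
rho = 1.2             # kg/m^3, mass density
gamma = 1.4           # adiabatic index for a diatomic gas

# Form 1: p = rho * (R/M) * T  (pV = nRT expressed per unit volume)
p1 = rho * (R / M) * T

# Form 2 (calorically perfect gas): p = rho * (gamma - 1) * e,
# with specific internal energy e = cv * T and cv = (R/M)/(gamma - 1).
cv = (R / M) / (gamma - 1)
e = cv * T
p2 = rho * (gamma - 1) * e

print(p1, p2)  # identical by construction: ~103.3 kPa each
```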
Quantum ideal gas law
Since the classical ideal gas law is well suited for atomic and molecular gases in most cases, let us describe the equation of state for elementary particles with mass m and spin s that takes into account quantum effects. In the following, the upper sign will always correspond to Fermi–Dirac statistics and the lower sign to Bose–Einstein statistics. The equation of state of such gases with N particles occupying a volume V with temperature T and pressure p is given by

p = ∓ (2s + 1) kBT Li5/2(∓z) / λ³

where kB is the Boltzmann constant, λ = h/√(2πmkBT) is the thermal de Broglie wavelength, Li is the polylogarithm, and z = e^(μ/kBT) is the fugacity, in which the chemical potential μ is given by the following implicit function:

N/V = ∓ (2s + 1) Li3/2(∓z) / λ³.

In the limiting case where z ≪ 1, this equation of state will reduce to that of the classical ideal gas. It can be shown that in the same limit the above equation of state reduces to

pV = NkBT [1 ± λ³N / (2^(5/2) (2s + 1) V) + ⋯]

With a fixed number density N/V, decreasing the temperature causes, in a Fermi gas, an increase in pressure above its classical value, implying an effective repulsion between particles (an apparent repulsion due to quantum exchange effects, not actual interactions, since interaction forces are neglected in an ideal gas), and, in a Bose gas, a decrease in pressure below its classical value, implying an effective attraction. The quantum nature of this equation lies in its dependence on s and ħ.
Cubic equations of state
Cubic equations of state are called such because they can be rewritten as a cubic function of the molar volume Vm. Cubic equations of state originated from the van der Waals equation of state; hence, all cubic equations of state can be considered modified van der Waals equations of state. There is a very large number of such cubic equations of state. For process engineering, cubic equations of state remain highly relevant today, e.g. the Peng–Robinson equation of state and the Soave–Redlich–Kwong equation of state.
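As an illustration of the cubic structure, the van der Waals equation can be rearranged to pVm³ − (pb + RT)Vm² + aVm − ab = 0 and solved numerically for the molar volume; the parameter values below are illustrative:

```python
import numpy as np

# The van der Waals equation rewritten as a cubic in the molar volume Vm:
#   p*Vm**3 - (p*b + R*T)*Vm**2 + a*Vm - a*b = 0
# In the two-phase region it has three real roots: the smallest is the
# liquid molar volume and the largest the vapour molar volume.

R = 8.314462

def vdw_molar_volumes(p, T, a, b):
    coeffs = [p, -(p * b + R * T), a, -a * b]
    roots = np.roots(coeffs)
    real = np.sort(roots[np.abs(roots.imag) < 1e-10].real)
    return real[real > b]  # keep only physically meaningful roots (Vm > b)

# CO2-like parameters (illustrative values): here a single vapour-like
# root, ~1.04e-3 m^3/mol, is returned.
print(vdw_molar_volumes(2e6, 280.0, 0.3640, 4.267e-5))
```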
Virial equations of state
Virial equation of state
Although usually not the most convenient equation of state, the virial equation is important because it can be derived directly from statistical mechanics. It may be written as

pVm/(RT) = A + B/Vm + C/Vm² + ⋯

This equation is also called the Kamerlingh Onnes equation. If appropriate assumptions are made about the mathematical form of intermolecular forces, theoretical expressions can be developed for each of the coefficients. A is the first virial coefficient, which has a constant value of 1 and makes the statement that when volume is large, all fluids behave like ideal gases. The second virial coefficient B corresponds to interactions between pairs of molecules, C to triplets, and so on. Accuracy can be increased indefinitely by considering higher order terms. The coefficients B, C, D, etc. are functions of temperature only.
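A minimal sketch of how a truncated virial series is evaluated; the coefficient values are invented placeholders, chosen only to show the structure of the expansion:

```python
# Truncated virial series for the compressibility factor:
#   Z = p*Vm/(R*T) = 1 + B/Vm + C/Vm**2
# B and C are temperature dependent; the values below are placeholders.

def compressibility(Vm, B, C=0.0):
    return 1.0 + B / Vm + C / Vm**2

# A negative B (net attraction between molecule pairs) lowers Z below
# the ideal gas value of 1:
print(compressibility(Vm=1e-3, B=-1.2e-4))  # 0.88
```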
The BWR equation of state
In the BWR equation of state, p is the pressure and ρ is the molar density.
Values of the various parameters can be found in reference materials. The BWR equation of state has also frequently been used for the modelling of the Lennard-Jones fluid. There are several extensions and modifications of the classical BWR equation of state available.
The Benedict–Webb–Rubin–Starling equation of state is a modified BWR equation of state.
Note that in this virial equation, the fourth and fifth virial terms are zero. The second virial coefficient is monotonically decreasing as temperature is lowered. The third virial coefficient is monotonically increasing as temperature is lowered.
The Lee–Kesler equation of state is based on the corresponding states principle, and is a modification of the BWR equation of state.
Physically based equations of state
There is a large number of physically based equations of state available today. Most of those are formulated in the Helmholtz free energy as a function of temperature and density (and, for mixtures, additionally the composition). The Helmholtz energy is formulated as a sum of multiple terms modelling different types of molecular interaction or molecular structure, e.g. the formation of chains or dipolar interactions. Hence, physically based equations of state model the effect of molecular size, attraction, and shape, as well as hydrogen bonding and polar interactions of fluids. In general, physically based equations of state give more accurate results than traditional cubic equations of state, especially for systems containing liquids or solids. Most physically based equations of state are built on a monomer term describing the Lennard-Jones fluid or the Mie fluid.
Perturbation theory-based models
Perturbation theory is frequently used for modelling dispersive interactions in an equation of state. There is a large number of perturbation-theory-based equations of state available today, e.g. for the classical Lennard-Jones fluid. The two most important theories used for these types of equations of state are the Barker–Henderson perturbation theory and the Weeks–Chandler–Andersen perturbation theory.
Statistical associating fluid theory (SAFT)
An important contribution to physically based equations of state is the statistical associating fluid theory (SAFT), which contributes a Helmholtz energy term describing association (a.k.a. hydrogen bonding) in fluids; it can also be applied for modelling chain formation (in the limit of infinite association strength). The SAFT equation of state was developed using statistical mechanical methods (in particular the perturbation theory of Wertheim) to describe the interactions between molecules in a system. The idea of a SAFT equation of state was first proposed by Chapman et al. in 1988 and 1989. Many different versions of the SAFT models have been proposed, but all use the same chain and association terms derived by Chapman et al.
Multiparameter equations of state
Multiparameter equations of state are empirical equations of state that can be used to represent pure fluids with high accuracy. Multiparameter equations of state are empirical correlations of experimental data and are usually formulated in the Helmholtz free energy. The functional form of these models is in most parts not physically motivated. They can usually be applied in both liquid and gaseous states. Empirical multiparameter equations of state represent the Helmholtz energy of the fluid as the sum of ideal gas and residual terms. Both terms are explicit in temperature and density:

a(T, ρ) = a⁰(T, ρ) + aʳ(T, ρ)

with the reduced variables

δ = ρ/ρr,  τ = Tr/T.
The reducing density ρr and reducing temperature Tr are in most cases the critical values for the pure fluid. Because integration of the multiparameter equations of state is not required and thermodynamic properties can be determined using classical thermodynamic relations, there are few restrictions on the functional form of the ideal or residual terms. Typical multiparameter equations of state use upwards of 50 fluid-specific parameters, but are able to represent the fluid's properties with high accuracy. Multiparameter equations of state are available currently for about 50 of the most common industrial fluids including refrigerants. The IAPWS95 reference equation of state for water is also a multiparameter equation of state. Mixture models for multiparameter equations of state exist as well. Yet, multiparameter equations of state applied to mixtures are known to exhibit artifacts at times.
One example of such an equation of state is the form proposed by Span and Wagner.
This is a somewhat simpler form that is intended to be used more in technical applications. Equations of state that require a higher accuracy use a more complicated form with more terms.
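To make the structure concrete, here is a minimal Python sketch of how a Helmholtz-explicit model is evaluated; the one-term residual and the reducing parameters are invented placeholders, not a fitted equation of state:

```python
# Helmholtz-explicit equation of state: properties follow from derivatives
# of the reduced Helmholtz energy alpha(delta, tau) = alpha0 + alphar,
# e.g. pressure: p = rho*R*T * (1 + delta * d(alphar)/d(delta)).
R = 8.314462  # J/(mol K)

def alphar(delta, tau, n=0.1, d=1, t=0.5):
    # Toy residual term of the typical polynomial form n * delta^d * tau^t;
    # real models sum dozens of such terms with fitted coefficients.
    return n * delta**d * tau**t

def pressure(rho, T, rho_r, T_r):
    delta, tau = rho / rho_r, T_r / T
    h = 1e-6  # central finite difference for d(alphar)/d(delta)
    dalphar_ddelta = (alphar(delta + h, tau) - alphar(delta - h, tau)) / (2 * h)
    return rho * R * T * (1.0 + delta * dalphar_ddelta)

# Reducing parameters roughly at CO2's critical point (illustrative only):
print(pressure(rho=10_000.0, T=300.0, rho_r=10_625.0, T_r=304.13))  # Pa
```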
List of further equations of state
Stiffened equation of state
When considering water under very high pressures, in situations such as underwater nuclear explosions, sonic shock lithotripsy, and sonoluminescence, the stiffened equation of state is often used:

p = ρ(γ − 1)e − γp⁰

where e is the internal energy per unit mass, γ is an empirically determined constant typically taken to be about 6.1, and p⁰ is another constant, representing the molecular attraction between water molecules. The magnitude of the correction is about 2 gigapascals (20,000 atmospheres).

The equation is stated in this form because the speed of sound in water is given by c² = γ(p + p⁰)/ρ.
Thus water behaves as though it is an ideal gas that is already under about 20,000 atmospheres (2 GPa) pressure, and explains why water is commonly assumed to be incompressible: when the external pressure changes from 1 atmosphere to 2 atmospheres (100 kPa to 200 kPa), the water behaves as an ideal gas would when changing from 20,001 to 20,002 atmospheres (2000.1 MPa to 2000.2 MPa).
This equation mispredicts the specific heat capacity of water but few simple alternatives are available for severely nonisentropic processes such as strong shocks.
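A short sketch of the stiffened equation of state and its sound speed, using the constants quoted above; it illustrates why modest pressure changes barely affect water:

```python
import math

# Stiffened equation of state for water, p = rho*(gamma - 1)*e - gamma*p0,
# and the speed of sound that follows from it, c = sqrt(gamma*(p + p0)/rho).
gamma = 6.1      # empirical constant for water (from the text)
p0 = 2.0e9       # Pa, molecular-attraction correction (~20,000 atm)

def stiffened_pressure(rho, e):
    """Pressure from mass density rho and specific internal energy e."""
    return rho * (gamma - 1.0) * e - gamma * p0

def sound_speed(p, rho=1000.0):
    return math.sqrt(gamma * (p + p0) / rho)

# Because p0 >> atmospheric pressure, c barely changes between 1 and 2 atm,
# which is the sense in which water is "effectively incompressible" here.
print(sound_speed(1.0e5), sound_speed(2.0e5))
```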
Morse oscillator equation of state
An equation of state for the Morse oscillator has been derived; it takes the form of a second-order virial expansion. The first-order virial parameter depends on the temperature, while the second-order virial parameter depends on the parameters of the Morse oscillator in addition to the absolute temperature; the expansion is written in terms of the fractional volume of the system.
Ultrarelativistic equation of state
An ultrarelativistic fluid has equation of state

p = ρm cs²

where p is the pressure, ρm is the mass density, and cs is the speed of sound.
Ideal Bose equation of state
The equation of state for an ideal Bose gas is

pVm = RT [Liα+1(z)/ζ(α)] (T/Tc)^α
where α is an exponent specific to the system (e.g. in the absence of a potential field, α = 3/2), z is exp(μ/kBT) where μ is the chemical potential, Li is the polylogarithm, ζ is the Riemann zeta function, and Tc is the critical temperature at which a Bose–Einstein condensate begins to form.
Jones–Wilkins–Lee equation of state for explosives (JWL equation)
The equation of state from Jones–Wilkins–Lee is used to describe the detonation products of explosives.
The ratio V = ρe/ρ is defined using ρe, the density of the explosive (solid part), and ρ, the density of the detonation products. The parameters A, B, R1, and R2 are given by several references. In addition, the initial density (solid part) ρ0, speed of detonation VD, Chapman–Jouguet pressure PCJ and the chemical energy per unit volume of the explosive e0 are given in such references. These parameters are obtained by fitting the JWL-EOS to experimental results; typical parameters for common explosives are tabulated in such references.
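The JWL pressure is commonly written as a sum of two exponentials in V plus an energy term; the sketch below assumes that standard form, and the TNT-like parameter values are quoted only for illustration:

```python
import math

# JWL pressure as a function of V = rho_e / rho (solid density over
# product density). The TNT-like numbers below are illustrative.
A, B = 3.712e11, 3.231e9   # Pa
R1, R2, omega = 4.15, 0.95, 0.30
E0 = 7.0e9                 # J/m^3, chemical energy per unit volume

def jwl_pressure(V):
    return (A * (1 - omega / (R1 * V)) * math.exp(-R1 * V)
            + B * (1 - omega / (R2 * V)) * math.exp(-R2 * V)
            + omega * E0 / V)

print(jwl_pressure(1.0))  # pressure at the initial (unexpanded) volume, Pa
```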
Others
Tait equation for water and other liquids. Several equations are referred to as the Tait equation.
Murnaghan equation of state
Birch–Murnaghan equation of state
Stacey–Brennan–Irvine equation of state
Modified Rydberg equation of state
Adapted polynomial equation of state
Johnson–Holmquist equation of state
Mie–Grüneisen equation of state
Anton-Schmidt equation of state
State-transition equation
| Physical sciences | Thermodynamics | Physics |
9920 | https://en.wikipedia.org/wiki/Electronic%20oscillator | Electronic oscillator | An electronic oscillator is an electronic circuit that produces a periodic, oscillating or alternating current (AC) signal, usually a sine wave, square wave or a triangle wave, powered by a direct current (DC) source. Oscillators are found in many electronic devices, such as radio receivers, television sets, radio and television broadcast transmitters, computers, computer peripherals, cellphones, radar, and many other devices.
Oscillators are often characterized by the frequency of their output signal:
A low-frequency oscillator (LFO) is an oscillator that generates a frequency below approximately 20 Hz. This term is typically used in the field of audio synthesizers, to distinguish it from an audio frequency oscillator.
An audio oscillator produces frequencies in the audio range, 20 Hz to 20 kHz.
A radio frequency (RF) oscillator produces signals above the audio range, more generally in the range of 100 kHz to 100 GHz.
There are two general types of electronic oscillators: the linear or harmonic oscillator, and the nonlinear or relaxation oscillator. The two types are fundamentally different in how oscillation is produced, as well as in the characteristic type of output signal that is generated.
The most-common linear oscillator in use is the crystal oscillator, in which the output frequency is controlled by a piezo-electric resonator consisting of a vibrating quartz crystal. Crystal oscillators are ubiquitous in modern electronics, being the source for the clock signal in computers and digital watches, as well as a source for the signals generated in radio transmitters and receivers. As a crystal oscillator's “native” output waveform is sinusoidal, a signal-conditioning circuit may be used to convert the output to other waveform types, such as the square wave typically utilized in computer clock circuits.
Harmonic oscillators
Linear or harmonic oscillators generate a sinusoidal (or nearly-sinusoidal) signal. There are two types:
Feedback oscillator
The most common form of linear oscillator is an electronic amplifier such as a transistor or operational amplifier connected in a feedback loop with its output fed back into its input through a frequency selective electronic filter to provide positive feedback. When the power supply to the amplifier is switched on initially, electronic noise in the circuit provides a non-zero signal to get oscillations started. The noise travels around the loop and is amplified and filtered until very quickly it converges on a sine wave at a single frequency.
Feedback oscillator circuits can be classified according to the type of frequency selective filter they use in the feedback loop:
In an RC oscillator circuit, the filter is a network of resistors and capacitors. RC oscillators are mostly used to generate lower frequencies, for example in the audio range. Common types of RC oscillator circuits are the phase-shift oscillator and the Wien bridge oscillator. LR oscillators, which use inductor and resistor filters, also exist; however, they are much less common, because an inductor large enough to be appropriate for use at lower frequencies is impractical.
In an LC oscillator circuit, the filter is a tuned circuit (often called a tank circuit) consisting of an inductor (L) and capacitor (C) connected together, which acts as a resonator. Charge flows back and forth between the capacitor's plates through the inductor, so the tuned circuit can store electrical energy oscillating at its resonant frequency. The amplifier adds power to compensate for resistive energy losses in the circuit and supplies the power for the output signal. LC oscillators are often used at radio frequencies, when a tunable frequency source is necessary, such as in signal generators, tunable radio transmitters and the local oscillators in radio receivers. Typical LC oscillator circuits are the Hartley, Colpitts and Clapp circuits.
In a crystal oscillator circuit the filter is a piezoelectric crystal (commonly a quartz crystal). The crystal mechanically vibrates as a resonator, and its frequency of vibration determines the oscillation frequency. Since the resonant frequency of the crystal is determined by its dimensions, crystal oscillators are fixed-frequency oscillators; their frequency can only be adjusted over a tiny range of less than one percent. Crystals have a very high Q-factor and also better temperature stability than tuned circuits, so crystal oscillators have much better frequency stability than LC or RC oscillators. Crystal oscillators are the most common type of linear oscillator, used to stabilize the frequency of most radio transmitters, and to generate the clock signal in computers and quartz clocks. Crystal oscillators often use the same circuits as LC oscillators, with the crystal replacing the tuned circuit; the Pierce oscillator circuit is also commonly used. Quartz crystals are generally limited to frequencies of 30 MHz or below. Other types of resonators, dielectric resonators and surface acoustic wave (SAW) devices, are used to control higher frequency oscillators, up into the microwave range. For example, SAW oscillators are used to generate the radio signal in cell phones.
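For the LC tank circuit described above, the resonant frequency follows from f0 = 1/(2π√(LC)); a minimal Python sketch with hypothetical component values:

```python
import math

# Resonant frequency of an LC tank circuit: f0 = 1 / (2*pi*sqrt(L*C)).
def resonant_frequency(L, C):
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Hypothetical tank for an AM-band local oscillator: 100 uH with 260 pF.
print(f"{resonant_frequency(100e-6, 260e-12) / 1e3:.0f} kHz")  # ~987 kHz
```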
Negative-resistance oscillator
In addition to the feedback oscillators described above, which use two-port amplifying active elements such as transistors and operational amplifiers, linear oscillators can also be built using one-port (two terminal) devices with negative resistance, such as magnetron tubes, tunnel diodes, IMPATT diodes and Gunn diodes. Negative-resistance oscillators are usually used at high frequencies in the microwave range and above, since at these frequencies feedback oscillators perform poorly due to excessive phase shift in the feedback path.
In negative-resistance oscillators, a resonant circuit, such as an LC circuit, crystal, or cavity resonator, is connected across a device with negative differential resistance, and a DC bias voltage is applied to supply energy. A resonant circuit by itself is "almost" an oscillator; it can store energy in the form of electronic oscillations if excited, but because it has electrical resistance and other losses the oscillations are damped and decay to zero. The negative resistance of the active device cancels the (positive) internal loss resistance in the resonator, in effect creating a resonator circuit with no damping, which generates spontaneous continuous oscillations at its resonant frequency.
The negative-resistance oscillator model is not limited to one-port devices like diodes; feedback oscillator circuits with two-port amplifying devices such as transistors and tubes also have negative resistance. At high frequencies, three terminal devices such as transistors and FETs are also used in negative resistance oscillators. At high frequencies these devices do not need a feedback loop, but with certain loads applied to one port can become unstable at the other port and show negative resistance due to internal feedback. The negative resistance port is connected to a tuned circuit or resonant cavity, causing them to oscillate. High-frequency oscillators in general are designed using negative-resistance techniques.
List of harmonic oscillator circuits
Some of the many harmonic oscillator circuits are listed below:
Armstrong oscillator, a.k.a. Meissner oscillator
Hartley oscillator
Colpitts oscillator
Clapp oscillator
Seiler oscillator
Vackář oscillator
Pierce oscillator
Tri-tet oscillator
Cathode follower oscillator
Wien bridge oscillator
Phase-shift oscillator
Cross-coupled oscillator
Dynatron oscillator
Opto-electronic oscillator
Robinson oscillator
Relaxation oscillator
A nonlinear or relaxation oscillator produces a non-sinusoidal output, such as a square, sawtooth or triangle wave. It consists of an energy-storing element (a capacitor or, more rarely, an inductor) and a nonlinear switching device (a latch, Schmitt trigger, or negative resistance element) connected in a feedback loop. The switching device periodically charges the storage element with energy and when its voltage or current reaches a threshold discharges it again, thus causing abrupt changes in the output waveform. Although in the past negative resistance devices like the unijunction transistor, thyratron tube or neon lamp were used, today relaxation oscillators are mainly built with integrated circuits like the 555 timer IC.
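As a concrete sketch of a relaxation oscillator's timing, the commonly quoted approximation for the 555 timer IC in its astable connection is shown below; the resistor and capacitor values are hypothetical:

```python
# Approximate output frequency of the classic 555 astable (relaxation)
# oscillator: the capacitor charges through R1 + R2 and discharges
# through R2, giving f ~ 1.44 / ((R1 + 2*R2) * C).
def astable_555_frequency(R1, R2, C):
    t_high = 0.693 * (R1 + R2) * C   # charge phase
    t_low = 0.693 * R2 * C           # discharge phase
    return 1.0 / (t_high + t_low)

# Hypothetical part values: 10 kOhm, 47 kOhm and 10 nF.
print(f"{astable_555_frequency(10e3, 47e3, 10e-9):.0f} Hz")  # ~1.39 kHz
```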
Square-wave relaxation oscillators are used to provide the clock signal for sequential logic circuits such as timers and counters, although crystal oscillators are often preferred for their greater stability. Triangle-wave or sawtooth oscillators are used in the timebase circuits that generate the horizontal deflection signals for cathode-ray tubes in analogue oscilloscopes and television sets. They are also used in voltage-controlled oscillators (VCOs), inverters and switching power supplies, dual-slope analog to digital converters (ADCs), and in function generators to generate square and triangle waves for testing equipment. In general, relaxation oscillators are used at lower frequencies and have poorer frequency stability than linear oscillators.
Ring oscillators are built of a ring of active delay stages, such as inverters. Generally the ring has an odd number of inverting stages, so that there is no single stable state for the internal ring voltages. Instead, a single transition propagates endlessly around the ring.
Some of the more common relaxation oscillator circuits are listed below:
Multivibrator
Pearson–Anson oscillator
Ring oscillator
Delay-line oscillator
Royer oscillator
Voltage-controlled oscillator (VCO)
An oscillator can be designed so that the oscillation frequency can be varied over some range by an input voltage or current. These voltage controlled oscillators are widely used in phase-locked loops, in which the oscillator's frequency can be locked to the frequency of another oscillator. These are ubiquitous in modern communications circuits, used in filters, modulators, demodulators, and forming the basis of frequency synthesizer circuits which are used to tune radios and televisions.
Radio frequency VCOs are usually made by adding a varactor diode to the tuned circuit or resonator in an oscillator circuit. Changing the DC voltage across the varactor changes its capacitance, which changes the resonant frequency of the tuned circuit. Voltage controlled relaxation oscillators can be constructed by charging and discharging the energy storage capacitor with a voltage controlled current source. Increasing the input voltage increases the rate of charging the capacitor, decreasing the time between switching events.
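A rough sketch of this tuning mechanism, using a textbook abrupt-junction model for the varactor capacitance (all component values and model parameters are illustrative assumptions):

```python
import math

# Varactor tuning in an LC VCO: the diode's junction capacitance falls as
# reverse bias rises, which raises the tank's resonant frequency.
C0, Vj, n = 100e-12, 0.7, 0.5   # zero-bias capacitance, built-in potential, grading exponent
L = 1e-6                        # tank inductance, 1 uH

def varactor_capacitance(V_reverse):
    # Abrupt-junction model: C(V) = C0 / (1 + V/Vj)**n
    return C0 / (1.0 + V_reverse / Vj) ** n

def vco_frequency(V_tune):
    C = varactor_capacitance(V_tune)
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

for v in (1.0, 4.0, 9.0):
    print(f"{v:>4} V -> {vco_frequency(v) / 1e6:.1f} MHz")
```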
Theory of feedback oscillators
A feedback oscillator circuit consists of two parts connected in a feedback loop: an amplifier A and an electronic filter β(jω). The filter's purpose is to limit the frequencies that can pass through the loop so the circuit only oscillates at the desired frequency. Since the filter and wires in the circuit have resistance they consume energy and the amplitude of the signal drops as it passes through the filter. The amplifier is needed to increase the amplitude of the signal to compensate for the energy lost in the other parts of the circuit, so the loop will oscillate, as well as supply energy to the load attached to the output.
Frequency of oscillation - the Barkhausen criterion
To determine the frequency(s) at which a feedback oscillator circuit will oscillate, the feedback loop is thought of as broken at some point (see diagrams) to give an input and output port (for accuracy, the output port must be terminated with an impedance equal to the input port). A sine wave vi is applied to the input, and the amplitude and phase of the sine wave after going through the loop are calculated:

vo = Aβ(jω) vi

and so

vo / vi = Aβ(jω).

Since in the complete circuit vo is connected to vi, for oscillations to exist

vo = vi.

The ratio of output to input of the loop, Aβ(jω), is called the loop gain. So the condition for oscillation is that the loop gain must be one:

Aβ(jω) = 1.

Since Aβ(jω) is a complex number with two parts, a magnitude and an angle, the above equation actually consists of two conditions:

The magnitude of the gain (amplification) around the loop at ω0 must be unity:

|Aβ(jω0)| = 1    (1)

so that after a trip around the loop the sine wave is the same amplitude. It can be seen intuitively that if the loop gain were greater than one, the amplitude of the sinusoidal signal would increase as it travels around the loop, resulting in a sine wave that grows exponentially with time, without bound. If the loop gain were less than one, the signal would decrease around the loop, resulting in an exponentially decaying sine wave that decreases to zero.

The sine wave at the end of the loop must be in phase with the wave at the beginning of the loop. Since the sine wave is periodic and repeats every 2π radians, this means that the phase shift around the loop at the oscillation frequency ω0 must be zero or a multiple of 2π radians (360°):

∠Aβ(jω0) = 2πn,  n = 0, 1, 2, …    (2)
Equations (1) and (2) are called the Barkhausen stability criterion. It is a necessary but not a sufficient criterion for oscillation, so there are some circuits which satisfy these equations that will not oscillate. An equivalent condition often used instead of the Barkhausen condition is that the circuit's closed loop transfer function (the circuit's complex impedance at its output) have a pair of poles on the imaginary axis.
In general, the phase shift of the feedback network increases with increasing frequency so there are only a few discrete frequencies (often only one) which satisfy the second equation. If the amplifier gain is high enough that the loop gain is unity (or greater, see Startup section) at one of these frequencies, the circuit will oscillate at that frequency. Many amplifiers such as common-emitter transistor circuits are "inverting", meaning that their output voltage decreases when their input increases. In these the amplifier provides 180° phase shift, so the circuit will oscillate at the frequency at which the feedback network provides the other 180° phase shift.
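The two Barkhausen conditions can be checked numerically. The sketch below models an idealized phase-shift oscillator, an inverting amplifier (contributing 180°) driving three buffered RC high-pass stages, and sweeps frequency to find where the network supplies the remaining 180°; the component values are hypothetical:

```python
import numpy as np

# Idealized phase-shift oscillator: inverting amplifier (180 deg) plus
# three buffered RC high-pass stages supplying the other 180 deg.
R, C = 10e3, 10e-9
w = 2 * np.pi * np.logspace(2.0, 4.5, 200_000)   # frequency sweep, rad/s

stage = 1j * w * R * C / (1 + 1j * w * R * C)    # one buffered RC stage
phase = 3 * np.angle(stage)                      # total network phase, 0..270 deg
i = np.argmin(np.abs(phase - np.pi))             # where the network gives 180 deg

f0 = w[i] / (2 * np.pi)
gain_needed = 1.0 / np.abs(stage[i]) ** 3        # |A| for unity loop gain
print(f"f0 = {f0:.0f} Hz, required gain = {gain_needed:.2f}")
# Analytic check: w0*R*C = 1/sqrt(3) -> f0 ~ 919 Hz, gain = 8.
```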
At frequencies well below the poles of the amplifying device, the amplifier will act as a pure gain A, but if the oscillation frequency is near the amplifier's cutoff frequency, the active device can no longer be considered a 'pure gain', and it will contribute some phase shift to the loop.
An alternate mathematical stability test sometimes used instead of the Barkhausen criterion is the Nyquist stability criterion. This has a wider applicability than the Barkhausen, so it can identify some of the circuits which pass the Barkhausen criterion but do not oscillate.
Frequency stability
Temperature changes, other environmental changes, aging, and manufacturing tolerances will cause component values to "drift" away from their designed values. Changes in frequency-determining components such as the tank circuit in LC oscillators will cause the oscillation frequency to change, so for a constant frequency these components must have stable values. How stable the oscillator's frequency is to other changes in the circuit, such as changes in values of other components, gain of the amplifier, the load impedance, or the supply voltage, is mainly dependent on the Q factor ("quality factor") of the feedback filter. Since the amplitude of the output is constant due to the nonlinearity of the amplifier (see Startup section below), changes in component values cause changes in the phase shift of the feedback loop. Since oscillation can only occur at frequencies where the phase shift is a multiple of 360°, shifts in component values cause the oscillation frequency to change to bring the loop phase back to 360n°. The amount of frequency change caused by a given phase change depends on the slope of the loop phase curve at ω0, which is determined by the Q of the feedback filter:

dφ/dω at ω0 ≈ 2Q/ω0

so a given phase disturbance Δφ forces a frequency shift of roughly Δω ≈ ω0Δφ/(2Q).
RC oscillators have the equivalent of a very low Q, so the phase changes very slowly with frequency; therefore a given phase change will cause a large change in the frequency. In contrast, LC oscillators have tank circuits with high Q (~10²). This means the phase shift of the feedback network increases rapidly with frequency near the resonant frequency of the tank circuit. So a large change in phase causes only a small change in frequency. Therefore, the circuit's oscillation frequency is very close to the natural resonant frequency of the tuned circuit, and doesn't depend much on other components in the circuit. The quartz crystal resonators used in crystal oscillators have even higher Q (10⁴ to 10⁶) and their frequency is very stable and independent of other circuit components.
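Using the phase-slope relation reconstructed above, a short sketch compares how much frequency shift a one-degree phase disturbance forces at different Q values; the numbers are illustrative:

```python
import math

# Frequency shift needed to re-absorb a loop phase disturbance, using
# d(phi)/d(omega) ~ 2Q/omega0 near resonance: delta_f ~ f0 * dphi / (2*Q).
def frequency_shift(f0, dphi_deg, Q):
    dphi = math.radians(dphi_deg)
    return f0 * dphi / (2.0 * Q)

f0, dphi = 1e6, 1.0          # 1 MHz oscillator, 1 degree phase disturbance
for Q in (1, 100, 100_000):  # roughly: RC network, LC tank, quartz crystal
    print(f"Q = {Q:>7}: shift = {frequency_shift(f0, dphi, Q):10.3f} Hz")
```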
Tunability
The frequency of RC and LC oscillators can be tuned over a wide range by using variable components in the filter. A microwave cavity can be tuned mechanically by moving one of the walls. In contrast, a quartz crystal is a mechanical resonator whose resonant frequency is mainly determined by its dimensions, so a crystal oscillator's frequency is only adjustable over a very narrow range, a tiny fraction of one percent.
Its frequency can be changed slightly by using a trimmer capacitor in series or parallel with the crystal.
Startup and amplitude of oscillation
The Barkhausen criterion above, eqs. (1) and (2), merely gives the frequencies at which steady-state oscillation is possible, but says nothing about the amplitude of the oscillation, whether the amplitude is stable, or whether the circuit will start oscillating when the power is turned on. For a practical oscillator two additional requirements are necessary:
In order for oscillations to start up in the circuit from zero, the circuit must have "excess gain"; the loop gain for small signals must be greater than one at its oscillation frequency
For stable operation, the feedback loop must include a nonlinear component which reduces the gain back to unity as the amplitude increases to its operating value.
A typical rule of thumb is to make the small signal loop gain at the oscillation frequency 2 or 3. When the power is turned on, oscillation is started by the power turn-on transient or random electronic noise present in the circuit. Noise guarantees that the circuit will not remain "balanced" precisely at its unstable DC equilibrium point (Q point) indefinitely. Due to the narrow passband of the filter, the response of the circuit to a noise pulse will be sinusoidal; it will excite a small sine wave of voltage in the loop. Since for small signals the loop gain is greater than one, the amplitude of the sine wave increases exponentially.
During startup, while the amplitude of the oscillation is small, the circuit is approximately linear, so the analysis used in the Barkhausen criterion is applicable. When the amplitude becomes large enough that the amplifier becomes nonlinear, generating harmonic distortion, technically the frequency domain analysis used in normal amplifier circuits is no longer applicable, so the "gain" of the circuit is undefined. However the filter attenuates the harmonic components produced by the nonlinearity of the amplifier, so the fundamental frequency component mainly determines the loop gain (this is the "harmonic balance" analysis technique for nonlinear circuits).
The sine wave cannot grow indefinitely; in all real oscillators some nonlinear process in the circuit limits its amplitude, reducing the gain as the amplitude increases, resulting in stable operation at some constant amplitude. In most oscillators this nonlinearity is simply the saturation (limiting or clipping) of the amplifying device, the transistor, vacuum tube or op-amp. The maximum voltage swing of the amplifier's output is limited by the DC voltage provided by its power supply. Another possibility is that the output may be limited by the amplifier slew rate.
As the amplitude of the output nears the power supply voltage rails, the amplifier begins to saturate on the peaks (top and bottom) of the sine wave, flattening or "clipping" the peaks. To achieve the maximum amplitude sine wave output from the circuit, the amplifier should be biased midway between its clipping levels. For example, an op amp should be biased midway between the two supply voltage rails. A common-emitter transistor amplifier's collector voltage should be biased midway between cutoff and saturation levels.
Since the output of the amplifier can no longer increase with increasing input, further increases in amplitude cause the equivalent gain of the amplifier and thus the loop gain to decrease. The amplitude of the sine wave, and the resulting clipping, continues to grow until the loop gain is reduced to unity, |Aβ| = 1, satisfying the Barkhausen criterion, at which point the amplitude levels off and steady state operation is achieved, with the output a slightly distorted sine wave with peak amplitude determined by the supply voltage. This is a stable equilibrium; if the amplitude of the sine wave increases for some reason, increased clipping of the output causes the loop gain to drop below one temporarily, reducing the sine wave's amplitude back to its unity-gain value. Similarly if the amplitude of the wave decreases, the decreased clipping will cause the loop gain to increase above one, increasing the amplitude.
The amount of harmonic distortion in the output is dependent on how much excess loop gain the circuit has:
If the small signal loop gain is made close to one, just slightly greater, the output waveform will have minimum distortion, and the frequency will be most stable and independent of supply voltage and load impedance. However, the oscillator may be slow starting up, and a small decrease in gain due to a variation in component values may prevent it from oscillating.
If the small signal loop gain is made significantly greater than one, the oscillator starts up faster, but more severe clipping of the sine wave occurs, and thus the resulting distortion of the output waveform increases. The oscillation frequency becomes more dependent on the supply voltage and current drawn by the load.
An exception to the above are high Q oscillator circuits such as crystal oscillators; the narrow bandwidth of the crystal removes the harmonics from the output, producing a 'pure' sinusoidal wave with almost no distortion even with large loop gains.
Design procedure
Since oscillators depend on nonlinearity for their operation, the usual linear frequency domain circuit analysis techniques used for amplifiers based on the Laplace transform, such as root locus and gain and phase plots (Bode plots), cannot capture their full behavior. To determine startup and transient behavior and calculate the detailed shape of the output waveform, electronic circuit simulation computer programs like SPICE are used. A typical design procedure for oscillator circuits is to use linear techniques such as the Barkhausen stability criterion or Nyquist stability criterion to design the circuit, use a rule of thumb to choose the loop gain, then simulate the circuit on computer to make sure it starts up reliably and to determine the nonlinear aspects of operation such as harmonic distortion. Component values are tweaked until the simulation results are satisfactory. The distorted oscillations of real-world (nonlinear) oscillators are called limit cycles and are studied in nonlinear control theory.
Amplitude-stabilized oscillators
In applications where a 'pure' very low distortion sine wave is needed, such as precision signal generators, a nonlinear component is often used in the feedback loop that provides a 'slow' gain reduction with amplitude. This stabilizes the loop gain at an amplitude below the saturation level of the amplifier, so it does not saturate and "clip" the sine wave. Resistor-diode networks and FETs are often used for the nonlinear element. An older design uses a thermistor or an ordinary incandescent light bulb; both provide a resistance that increases with temperature as the current through them increases.
As the amplitude of the signal current through them increases during oscillator startup, the increasing resistance of these devices reduces the loop gain. The essential characteristic of all these circuits is that the nonlinear gain-control circuit must have a long time constant, much longer than a single period of the oscillation. Therefore, over a single cycle they act as virtually linear elements, and so introduce very little distortion. The operation of these circuits is somewhat analogous to an automatic gain control (AGC) circuit in a radio receiver. The Wien bridge oscillator is a widely used circuit in which this type of gain stabilization is used.
Frequency limitations
At high frequencies it becomes difficult to physically implement feedback oscillators because of shortcomings of the components. Since at high frequencies the tank circuit has very small capacitance and inductance, parasitic capacitance and parasitic inductance of component leads and PCB traces become significant. These may create unwanted feedback paths between the output and input of the active device, creating instability and oscillations at unwanted frequencies (parasitic oscillation). Parasitic feedback paths inside the active device itself, such as the interelectrode capacitance between output and input, make the device unstable. The input impedance of the active device falls with frequency, so it may load the feedback network. As a result, stable feedback oscillators are difficult to build for frequencies above 500 MHz, and negative resistance oscillators are usually used for frequencies above this.
History
The first practical oscillators were based on electric arcs, which were used for lighting in the 19th century. The current through an arc light is unstable due to its negative resistance, and often breaks into spontaneous oscillations, causing the arc to make hissing, humming or howling sounds which had been noticed by Humphry Davy in 1821, Benjamin Silliman in 1822, Auguste Arthur de la Rive in 1846, and David Edward Hughes in 1878. Ernst Lecher in 1888 showed that the current through an electric arc could be oscillatory.
An oscillator was built by Elihu Thomson in 1892 by placing an LC tuned circuit in parallel with an electric arc and included a magnetic blowout. Independently, in the same year, George Francis FitzGerald realized that if the damping resistance in a resonant circuit could be made zero or negative, the circuit would produce oscillations, and, unsuccessfully, tried to build a negative resistance oscillator with a dynamo, what would now be called a parametric oscillator. The arc oscillator was rediscovered and popularized by William Duddell in 1900. Duddell, a student at London Technical College, was investigating the hissing arc effect. He attached an LC circuit (tuned circuit) to the electrodes of an arc lamp, and the negative resistance of the arc excited oscillation in the tuned circuit. Some of the energy was radiated as sound waves by the arc, producing a musical tone. Duddell demonstrated his oscillator before the London Institution of Electrical Engineers by sequentially connecting different tuned circuits across the arc to play the national anthem "God Save the Queen". Duddell's "singing arc" did not generate frequencies above the audio range. In 1902 Danish physicists Valdemar Poulsen and P. O. Pederson were able to increase the frequency produced into the radio range by operating the arc in a hydrogen atmosphere with a magnetic field, inventing the Poulsen arc radio transmitter, the first continuous wave radio transmitter, which was used through the 1920s.
The vacuum-tube feedback oscillator was invented around 1912, when it was discovered that feedback ("regeneration") in the recently invented audion (triode) vacuum tube could produce oscillations. At least six researchers independently made this discovery, although not all of them can be said to have a role in the invention of the oscillator. In the summer of 1912, Edwin Armstrong observed oscillations in audion radio receiver circuits and went on to use positive feedback in his invention of the regenerative receiver. Austrian Alexander Meissner independently discovered positive feedback and invented oscillators in March 1913. Irving Langmuir at General Electric observed feedback in 1913. Fritz Lowenstein may have preceded the others with a crude oscillator in late 1911. In Britain, H. J. Round patented amplifying and oscillating circuits in 1913. In August 1912, Lee De Forest, the inventor of the audion, had also observed oscillations in his amplifiers, but he didn't understand the significance and tried to eliminate it until he read Armstrong's patents in 1914, which he promptly challenged. Armstrong and De Forest fought a protracted legal battle over the rights to the "regenerative" oscillator circuit which has been called "the most complicated patent litigation in the history of radio". De Forest ultimately won before the Supreme Court in 1934 on technical grounds, but most sources regard Armstrong's claim as the stronger one.
The first and most widely used relaxation oscillator circuit, the astable multivibrator, was invented in 1917 by French engineers Henri Abraham and Eugene Bloch. They called their cross-coupled, dual-vacuum-tube circuit a multivibrateur, because the square-wave signal it produced was rich in harmonics, compared to the sinusoidal signal of other vacuum-tube oscillators.
Vacuum-tube feedback oscillators became the basis of radio transmission by 1920. However, the triode vacuum tube oscillator performed poorly above 300 MHz because of interelectrode capacitance. To reach higher frequencies, new "transit time" (velocity modulation) vacuum tubes were developed, in which electrons traveled in "bunches" through the tube. The first of these was the Barkhausen–Kurz oscillator (1920), the first tube to produce power in the UHF range. The most important and widely used were the klystron (R. and S. Varian, 1937) and the cavity magnetron (J. Randall and H. Boot, 1940).
Mathematical conditions for feedback oscillations, now called the Barkhausen criterion, were derived by Heinrich Georg Barkhausen in 1921. He also showed that all linear oscillators must have negative resistance. The first analysis of a nonlinear electronic oscillator model, the Van der Pol oscillator, was done by Balthasar van der Pol in 1927. He originated the term "relaxation oscillation" and was first to distinguish between linear and relaxation oscillators. He showed that the stability of the oscillations (limit cycles) in actual oscillators was due to the nonlinearity of the amplifying device. Further advances in mathematical analysis of oscillation were made by Hendrik Wade Bode and Harry Nyquist in the 1930s. In 1969 Kaneyuki Kurokawa derived necessary and sufficient conditions for oscillation in negative-resistance circuits, which form the basis of modern microwave oscillator design.
| Technology | Functional circuits | null |
9927 | https://en.wikipedia.org/wiki/Endomembrane%20system | Endomembrane system | The endomembrane system is composed of the different membranes (endomembranes) that are suspended in the cytoplasm within a eukaryotic cell. These membranes divide the cell into functional and structural compartments, or organelles. In eukaryotes the organelles of the endomembrane system include: the nuclear membrane, the endoplasmic reticulum, the Golgi apparatus, lysosomes, vesicles, endosomes, and plasma (cell) membrane among others. The system is defined more accurately as the set of membranes that forms a single functional and developmental unit, either being connected directly, or exchanging material through vesicle transport. Importantly, the endomembrane system does not include the membranes of plastids or mitochondria, but might have evolved partially from the actions of the latter (see below).
The nuclear membrane contains a lipid bilayer that encompasses the contents of the nucleus. The endoplasmic reticulum (ER) is a synthesis and transport organelle that branches into the cytoplasm in plant and animal cells. The Golgi apparatus is a series of multiple compartments where molecules are packaged for delivery to other cell components or for secretion from the cell. Vacuoles, which are found in both plant and animal cells (though much bigger in plant cells), are responsible for maintaining the shape and structure of the cell as well as storing waste products. A vesicle is a relatively small, membrane-enclosed sac that stores or transports substances. The cell membrane is a protective barrier that regulates what enters and leaves the cell. There is also an organelle known as the Spitzenkörper that is only found in fungi, and is connected with hyphal tip growth.
In prokaryotes endomembranes are rare, although in many photosynthetic bacteria the plasma membrane is highly folded and most of the cell cytoplasm is filled with layers of light-gathering membrane. These light-gathering membranes may even form enclosed structures called chlorosomes in green sulfur bacteria. Another example is the complex "pepin" system of Thiomargarita species, especially T. magnifica.
The organelles of the endomembrane system are related through direct contact or by the transfer of membrane segments as vesicles. Despite these relationships, the various membranes are not identical in structure and function. The thickness, molecular composition, and metabolic behavior of a membrane are not fixed; they may be modified several times during the membrane's life. One unifying characteristic the membranes share is a lipid bilayer, with proteins attached to either side or traversing them.
History of the concept
Most lipids are synthesized in yeast either in the endoplasmic reticulum, lipid particles, or the mitochondrion, with little or no lipid synthesis occurring in the plasma membrane or nuclear membrane. Sphingolipid biosynthesis begins in the endoplasmic reticulum, but is completed in the Golgi apparatus. The situation is similar in mammals, with the exception of the first few steps in ether lipid biosynthesis, which occur in peroxisomes. The various membranes that enclose the other subcellular organelles must therefore be constructed by transfer of lipids from these sites of synthesis. However, although it is clear that lipid transport is a central process in organelle biogenesis, the mechanisms by which lipids are transported through cells remain poorly understood.
The first proposal that the membranes within cells form a single system that exchanges material between its components was by Morré and Mollenhauer in 1974. This proposal was made as a way of explaining how the various lipid membranes are assembled in the cell, with these membranes being assembled through lipid flow from the sites of lipid synthesis. The idea of lipid flow through a continuous system of membranes and vesicles was an alternative to the various membranes being independent entities that are formed from transport of free lipid components, such as fatty acids and sterols, through the cytosol. Importantly, the transport of lipids through the cytosol and lipid flow through a continuous endomembrane system are not mutually exclusive processes and both may occur in cells.
Components of the system
Nuclear envelope
The nuclear envelope surrounds the nucleus, separating its contents from the cytoplasm. It has two membranes, each a lipid bilayer with associated proteins. The outer nuclear membrane is continuous with the rough endoplasmic reticulum membrane, and like that structure, features ribosomes attached to the surface. The outer membrane is also continuous with the inner nuclear membrane since the two layers are fused together at numerous tiny holes called nuclear pores that perforate the nuclear envelope. These pores are about 120 nm in diameter and regulate the passage of molecules between the nucleus and cytoplasm, permitting some to pass through the membrane, but not others. Since the nuclear pores are located in an area of high traffic, they play an important role in cell physiology. The space between the outer and inner membranes is called the perinuclear space and is joined with the lumen of the rough ER.
The nuclear envelope's structure is determined by a network of intermediate filaments (protein filaments). This network is organized into a mesh-like lining called the nuclear lamina, which binds to chromatin, integral membrane proteins, and other nuclear components along the inner surface of the nucleus. The nuclear lamina is thought to help materials inside the nucleus reach the nuclear pores and in the disintegration of the nuclear envelope during mitosis and its reassembly at the end of the process.
The nuclear pores are highly efficient at selectively allowing the passage of materials to and from the nucleus, because the nuclear envelope has a considerable amount of traffic. RNA and ribosomal subunits must be continually transferred from the nucleus to the cytoplasm. Histones, gene regulatory proteins, DNA and RNA polymerases, and other substances essential for nuclear activities must be imported from the cytoplasm. The nuclear envelope of a typical mammalian cell contains 3000–4000 pore complexes. If the cell is synthesizing DNA each pore complex needs to transport about 100 histone molecules per minute. If the cell is growing rapidly, each complex also needs to transport about 6 newly assembled large and small ribosomal subunits per minute from the nucleus to the cytosol, where they are used to synthesize proteins.
Endoplasmic reticulum
The endoplasmic reticulum (ER) is a membranous synthesis and transport organelle that is an extension of the nuclear envelope. More than half the total membrane in eukaryotic cells is accounted for by the ER. The ER is made up of flattened sacs and branching tubules that are thought to interconnect, so that the ER membrane forms a continuous sheet enclosing a single internal space. This highly convoluted space is called the ER lumen and is also referred to as the ER cisternal space. The lumen takes up about ten percent of the entire cell volume. The endoplasmic reticulum membrane allows molecules to be selectively transferred between the lumen and the cytoplasm, and since it is connected to the nuclear envelope, it provides a channel between the nucleus and the cytoplasm.
The ER has a central role in producing, processing, and transporting biochemical compounds for use inside and outside of the cell. Its membrane is the site of production of all the transmembrane proteins and lipids for many of the cell's organelles, including the ER itself, the Golgi apparatus, lysosomes, endosomes, secretory vesicles, and the plasma membrane. Furthermore, almost all of the proteins that will exit the cell, plus those destined for the lumen of the ER, Golgi apparatus, or lysosomes, are originally delivered to the ER lumen. Consequently, many of the proteins found in the cisternal space of the endoplasmic reticulum lumen are there only temporarily as they pass on their way to other locations. Other proteins, however, constantly remain in the lumen and are known as endoplasmic reticulum resident proteins. These special proteins contain a specialized retention signal made up of a specific sequence of amino acids that enables them to be retained by the organelle. An example of an important endoplasmic reticulum resident protein is the chaperone protein known as BiP which identifies other proteins that have been improperly built or processed and keeps them from being sent to their final destinations.
The ER is involved in cotranslational sorting of proteins. A polypeptide which contains an ER signal sequence is recognised by the signal recognition particle which halts the production of the protein. The SRP transports the nascent protein to the ER membrane where it is released through a membrane channel and translation resumes.
There are two distinct, though connected, regions of ER that differ in structure and function: smooth ER and rough ER. The rough endoplasmic reticulum is so named because the cytoplasmic surface is covered with ribosomes, giving it a bumpy appearance when viewed through an electron microscope. The smooth ER appears smooth since its cytoplasmic surface lacks ribosomes.
Functions of the smooth ER
In the great majority of cells, purely smooth ER regions are scarce; the ER is often partly smooth and partly rough. Such regions are sometimes called transitional ER because they contain ER exit sites from which transport vesicles carrying newly synthesized proteins and lipids bud off for transport to the Golgi apparatus. In certain specialized cells, however, the smooth ER is abundant and has additional functions. The smooth ER of these specialized cells functions in diverse metabolic processes, including synthesis of lipids, metabolism of carbohydrates, and detoxification of drugs and poisons.
Enzymes of the smooth ER are vital to the synthesis of lipids, including oils, phospholipids, and steroids. Sex hormones of vertebrates and the steroid hormones secreted by the adrenal glands are among the steroids produced by the smooth ER in animal cells. The cells that synthesize these hormones are rich in smooth ER.
Liver cells are another example of specialized cells that contain an abundance of smooth ER. These cells provide an example of the role of smooth ER in carbohydrate metabolism. Liver cells store carbohydrates in the form of glycogen. The breakdown of glycogen eventually leads to the release of glucose from the liver cells, which is important in the regulation of sugar concentration in the blood. However, the primary product of glycogen breakdown is glucose-1-phosphate. This is converted to glucose-6-phosphate and then an enzyme of the liver cell's smooth ER removes the phosphate from the glucose, so that it can then leave the cell.
Enzymes of the smooth ER can also help detoxify drugs and poisons. Detoxification usually involves the addition of a hydroxyl group to a drug, making the drug more soluble and thus easier to purge from the body. One extensively studied detoxification reaction is carried out by the cytochrome P450 family of enzymes, which catalyze oxidation reactions on water-insoluble drugs or metabolites that would otherwise accumulate to toxic levels in cell membranes.
In muscle cells, a specialized smooth ER (sarcoplasmic reticulum) forms a membranous compartment (cisternal space) into which calcium ions are pumped. When a muscle cell becomes stimulated by a nerve impulse, calcium goes back across this membrane into the cytosol and generates the contraction of the muscle cell.
Functions of the rough ER
Many types of cells export proteins produced by ribosomes attached to the rough ER. The ribosomes assemble amino acids into protein units, which are carried into the rough ER for further adjustments. These proteins may be either transmembrane proteins, which become embedded in the membrane of the endoplasmic reticulum, or water-soluble proteins, which are able to pass through the membrane into the lumen. Those that reach the inside of the endoplasmic reticulum are folded into the correct three-dimensional conformation. Chemicals, such as carbohydrates or sugars, are added, then the endoplasmic reticulum either transports the completed proteins, called secretory proteins, to areas of the cell where they are needed, or they are sent to the Golgi apparatus for further processing and modification.
Once secretory proteins are formed, the ER membrane separates them from the proteins that will remain in the cytosol. Secretory proteins depart from the ER enfolded in the membranes of vesicles that bud like bubbles from the transitional ER. These vesicles in transit to another part of the cell are called transport vesicles. An alternative mechanism for transport of lipids and proteins out of the ER are through lipid transfer proteins at regions called membrane contact sites where the ER becomes closely and stably associated with the membranes of other organelles, such as the plasma membrane, Golgi or lysosomes.
In addition to making secretory proteins, the rough ER makes membranes, which grow in place by the addition of proteins and phospholipids. As polypeptides intended to be membrane proteins grow from the ribosomes, they are inserted into the ER membrane itself and are kept there by their hydrophobic portions. The rough ER also produces its own membrane phospholipids; enzymes built into the ER membrane assemble phospholipids. The ER membrane expands and can be transferred by transport vesicles to other components of the endomembrane system.
Golgi apparatus
The Golgi apparatus (also known as the Golgi body and the Golgi complex) is composed of separate sacs called cisternae. Its shape is similar to a stack of pancakes. The number of these stacks varies with the specific function of the cell. The Golgi apparatus is used by the cell for further protein modification. The section of the Golgi apparatus that receives the vesicles from the ER is known as the cis face, and is usually near the ER. The opposite end of the Golgi apparatus is called the trans face; this is where the modified compounds leave. The trans face usually faces the plasma membrane, which is where most of the substances the Golgi apparatus modifies are sent.
Vesicles sent off by the ER containing proteins are further altered at the Golgi apparatus and then prepared for secretion from the cell or transport to other parts of the cell. Proteins can undergo a variety of changes on their journey through the enzyme-covered space of the Golgi apparatus. The modification and synthesis of the carbohydrate portions of glycoproteins is common in protein processing. The Golgi apparatus removes and substitutes sugar monomers, producing a large variety of oligosaccharides. In addition to modifying proteins, the Golgi also manufactures macromolecules itself. In plant cells, the Golgi produces pectins and other polysaccharides needed by the plant structure.
Once the modification process is completed, the Golgi apparatus sorts the products of its processing and sends them to various parts of the cell. Molecular identification labels or tags are added by the Golgi enzymes to help with this. After everything is organized, the Golgi apparatus sends off its products by budding vesicles from its trans face.
Vacuoles
Vacuoles, like vesicles, are membrane-bound sacs within the cell. They are larger than vesicles and their specific function varies. The operations of vacuoles are different for plant and animal vacuoles.
In plant cells, vacuoles cover anywhere from 30% to 90% of the total cell volume. Most mature plant cells contain one large central vacuole encompassed by a membrane called the tonoplast. Vacuoles of plant cells act as storage compartments for the nutrients and waste of a cell. The solution that these molecules are stored in is called the cell sap. Pigments that color the cell are sometimes located in the cell sap. Vacuoles can also increase the size of the cell, which elongates as water is added, and they control the turgor pressure (the osmotic pressure that keeps the cell wall from caving in). Like lysosomes of animal cells, vacuoles have an acidic pH and contain hydrolytic enzymes. The pH of vacuoles enables them to perform homeostatic procedures in the cell. For example, when the pH in the cell's environment drops, the H+ ions surging into the cytosol can be transferred to a vacuole in order to keep the cytosol's pH constant.
In animals, vacuoles serve in exocytosis and endocytosis processes. Endocytosis refers to when substances are taken into the cell, whereas for exocytosis substances are moved from the cell into the extracellular space. Material to be taken in is surrounded by the plasma membrane and then transferred to a vacuole. There are two types of endocytosis: phagocytosis (cell eating) and pinocytosis (cell drinking). In phagocytosis, cells engulf large particles such as bacteria. Pinocytosis is the same process, except the substances being ingested are in fluid form.
Vesicles
Vesicles are small membrane-enclosed transport units that can transfer molecules between different compartments. Most vesicles transfer the membranes assembled in the endoplasmic reticulum to the Golgi apparatus, and then from the Golgi apparatus to various locations.
There are various types of vesicles each with a different protein configuration. Most are formed from specific regions of membranes. When a vesicle buds off from a membrane it contains specific proteins on its cytosolic surface. Each membrane a vesicle travels to contains a marker on its cytosolic surface. This marker corresponds with the proteins on the vesicle traveling to the membrane. Once the vesicle finds the membrane, they fuse.
There are three well known types of vesicles. They are clathrin-coated, COPI-coated, and COPII-coated vesicles. Each performs different functions in the cell. For example, clathrin-coated vesicles transport substances between the Golgi apparatus and the plasma membrane. COPI- and COPII-coated vesicles are frequently used for transportation between the ER and the Golgi apparatus.
Lysosomes
Lysosomes are organelles that contain hydrolytic enzymes that are used for intracellular digestion. The main functions of a lysosome are to process molecules taken in by the cell and to recycle worn out cell parts. The enzymes inside of lysosomes are acid hydrolases which require an acidic environment for optimal performance. Lysosomes provide such an environment by maintaining a pH of 5.0 inside of the organelle. If a lysosome were to rupture, the enzymes released would not be very active because of the cytosol's neutral pH. However, if numerous lysosomes leaked the cell could be destroyed from autodigestion.
Lysosomes carry out intracellular digestion, in a process called phagocytosis (from the Greek phagein, 'to eat', and kytos, 'vessel', referring here to the cell), by fusing with a vacuole and releasing their enzymes into the vacuole. Through this process, sugars, amino acids, and other monomers pass into the cytosol and become nutrients for the cell. Lysosomes also use their hydrolytic enzymes to recycle the cell's obsolete organelles in a process called autophagy. The lysosome engulfs another organelle and uses its enzymes to take apart the ingested material. The resulting organic monomers are then returned to the cytosol for reuse. The last function of a lysosome is to digest the cell itself through autolysis.
Spitzenkörper
The spitzenkörper is a component of the endomembrane system found only in fungi, and is associated with hyphal tip growth. It is a phase-dark body that is composed of an aggregation of membrane-bound vesicles containing cell wall components, serving as a point of assemblage and release of such components intermediate between the Golgi and the cell membrane. The spitzenkörper is motile and generates new hyphal tip growth as it moves forward.
Plasma membrane
The plasma membrane is a phospholipid bilayer membrane that separates the cell from its environment and regulates the transport of molecules and signals into and out of the cell. Embedded in the membrane are proteins that perform the functions of the plasma membrane. The plasma membrane is not a fixed or rigid structure; the molecules that compose the membrane are capable of lateral movement. This movement and the multiple components of the membrane are why it is referred to as a fluid mosaic. Smaller molecules such as carbon dioxide, water, and oxygen can pass through the plasma membrane freely by diffusion or osmosis. Larger molecules needed by the cell are assisted by proteins through active transport.
The plasma membrane of a cell has multiple functions. These include transporting nutrients into the cell, allowing waste to leave, blocking unneeded materials from entering the cell, preventing needed materials from leaving the cell, maintaining the pH of the cytosol, and preserving the osmotic pressure of the cytosol. Transport proteins, which allow some materials to pass through but not others, are used for these functions. These proteins use ATP hydrolysis to pump materials against their concentration gradients.
In addition to these universal functions, the plasma membrane has a more specific role in multicellular organisms. Glycoproteins on the membrane assist the cell in recognizing other cells, in order to exchange metabolites and form tissues. Other proteins on the plasma membrane allow attachment to the cytoskeleton and extracellular matrix, a function that maintains cell shape and fixes the location of membrane proteins. Enzymes that catalyze reactions are also found on the plasma membrane. Receptor proteins on the membrane have a shape that matches with a chemical messenger, resulting in various cellular responses.
Evolution
The origin of the endomembrane system is linked to the origin of eukaryotes themselves, and the origin of eukaryotes to the endosymbiotic origin of mitochondria. Many models have been put forward to explain the origin of the endomembrane system. The most recent concept suggests that the endomembrane system evolved from outer membrane vesicles (OMVs) secreted by the endosymbiotic mitochondrion, which became enclosed within infoldings of the host prokaryote (themselves a result of the ingestion of the endosymbiont). This OMV-based model for the origin of the endomembrane system is currently the one that requires the fewest novel inventions at eukaryote origin, and it explains the many connections of mitochondria with other compartments of the cell. Currently, this "inside-out" hypothesis (which states that the alphaproteobacteria, the ancestral mitochondria, were engulfed by the blebs of an asgardarchaeon, and that the blebs later fused, leaving infoldings that would eventually become the endomembrane system) is favored over the "outside-in" one (which suggested that the endomembrane system arose from infoldings of the archaeal membrane).
| Biology and health sciences | Organelles | Biology |
9931 | https://en.wikipedia.org/wiki/Amplifier | Amplifier | An amplifier, electronic amplifier or (informally) amp is an electronic device that can increase the magnitude of a signal (a time-varying voltage or current). It is a two-port electronic circuit that uses electric power from a power supply to increase the amplitude (magnitude of the voltage or current) of a signal applied to its input terminals, producing a proportionally greater amplitude signal at its output. The amount of amplification provided by an amplifier is measured by its gain: the ratio of output voltage, current, or power to input. An amplifier is defined as a circuit that has a power gain greater than one.
An amplifier can be either a separate piece of equipment or an electrical circuit contained within another device. Amplification is fundamental to modern electronics, and amplifiers are widely used in almost all electronic equipment. Amplifiers can be categorized in different ways. One is by the frequency of the electronic signal being amplified. For example, audio amplifiers amplify signals in the audio (sound) range of less than 20 kHz, RF amplifiers amplify frequencies in the radio frequency range between 20 kHz and 300 GHz, and servo amplifiers and instrumentation amplifiers may work with very low frequencies down to direct current. Amplifiers can also be categorized by their physical placement in the signal chain; a preamplifier may precede other signal processing stages, for example, while a power amplifier is usually used after other amplifier stages to provide enough output power for the final use of the signal. The first practical electrical device which could amplify was the triode vacuum tube, invented in 1906 by Lee De Forest, which led to the first amplifiers around 1912. Today most amplifiers use transistors.
History
Vacuum tubes
The first practical amplifying device was the triode vacuum tube, invented in 1906 by Lee De Forest, which led to the first amplifiers around 1912. Vacuum tubes were used in almost all amplifiers until the 1960s–1970s, when transistors replaced them. Today, most amplifiers use transistors, but vacuum tubes continue to be used in some applications.
The development of audio communication technology in the form of the telephone, first patented in 1876, created the need to increase the amplitude of electrical signals to extend the transmission of signals over increasingly long distances. In telegraphy, this problem had been solved with intermediate devices at stations that replenished the dissipated energy by operating a signal recorder and transmitter back-to-back, forming a relay, so that a local energy source at each intermediate station powered the next leg of transmission.
For duplex transmission, i.e. sending and receiving in both directions, bi-directional relay repeaters were developed starting with the work of C. F. Varley for telegraphic transmission. Duplex transmission was essential for telephony and the problem was not satisfactorily solved until 1904, when H. E. Shreeve of the American Telephone and Telegraph Company improved existing attempts at constructing a telephone repeater consisting of back-to-back carbon-granule transmitter and electrodynamic receiver pairs. The Shreeve repeater was first tested on a line between Boston and Amesbury, MA, and more refined devices remained in service for some time. After the turn of the century it was found that negative resistance mercury lamps could amplify, and were also tried in repeaters, with little success.
The development of thermionic valves, which began around 1902, provided an entirely electronic method of amplifying signals. The first practical version of such devices was the Audion triode, invented in 1906 by Lee De Forest, which led to the first amplifiers around 1912. Since the only previous device which was widely used to strengthen a signal was the relay used in telegraph systems, the amplifying vacuum tube was first called an electron relay. The terms amplifier and amplification, derived from the Latin amplificare (to enlarge or expand), were first used for this new capability around 1915, when triodes became widespread.
The amplifying vacuum tube revolutionized electrical technology. It made possible long-distance telephone lines, public address systems, radio broadcasting, talking motion pictures, practical audio recording, radar, television, and the first computers. For 50 years virtually all consumer electronic devices used vacuum tubes. Early tube amplifiers often had positive feedback (regeneration), which could increase gain but also make the amplifier unstable and prone to oscillation. Much of the mathematical theory of amplifiers was developed at Bell Telephone Laboratories during the 1920s to 1940s. Distortion levels in early amplifiers were high, usually around 5%, until 1934, when Harold Black developed negative feedback; this allowed the distortion levels to be greatly reduced, at the cost of lower gain. Other advances in the theory of amplification were made by Harry Nyquist and Hendrik Wade Bode.
The vacuum tube was virtually the only amplifying device, other than specialized power devices such as the magnetic amplifier and amplidyne, for 40 years. Power control circuitry used magnetic amplifiers until the latter half of the twentieth century when power semiconductor devices became more economical, with higher operating speeds. The old Shreeve electroacoustic carbon repeaters were used in adjustable amplifiers in telephone subscriber sets for the hearing impaired until the transistor provided smaller and higher quality amplifiers in the 1950s.
Transistors
The first working transistor was a point-contact transistor invented by John Bardeen and Walter Brattain in 1947 at Bell Labs, where William Shockley later invented the bipolar junction transistor (BJT) in 1948. They were followed by the invention of the metal–oxide–semiconductor field-effect transistor (MOSFET) by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959. Due to MOSFET scaling, the ability to scale down to increasingly small sizes, the MOSFET has since become the most widely used amplifier.
The replacement of bulky electron tubes with transistors during the 1960s and 1970s created a revolution in electronics, making possible a large class of portable electronic devices, such as the transistor radio developed in 1954. Today, use of vacuum tubes is limited to some high power applications, such as radio transmitters, as well as some musical instrument and high-end audiophile amplifiers.
Beginning in the 1970s, more and more transistors were connected on a single chip thereby creating higher scales of integration (such as small-scale, medium-scale and large-scale integration) in integrated circuits. Many amplifiers commercially available today are based on integrated circuits.
For special purposes, other active elements have been used. For example, in the early days of satellite communication, parametric amplifiers were used. The core circuit was a diode whose capacitance was changed by an RF signal created locally. Under certain conditions, this RF signal provided energy that was modulated by the extremely weak satellite signal received at the earth station.
Advances in digital electronics since the late 20th century provided new alternatives to the conventional linear-gain amplifiers by using digital switching to vary the pulse-shape of fixed amplitude signals, resulting in devices such as the Class-D amplifier.
Ideal
In principle, an amplifier is an electrical two-port network that produces a signal at the output port that is a replica of the signal applied to the input port, but increased in magnitude.
The input port can be idealized as either being a voltage input, which takes no current, with the output proportional to the voltage across the port; or a current input, with no voltage across it, in which the output is proportional to the current through the port. The output port can be idealized as being either a dependent voltage source, with zero source resistance and its output voltage dependent on the input; or a dependent current source, with infinite source resistance and the output current dependent on the input. Combinations of these choices lead to four types of ideal amplifiers, represented in idealized form by the four types of dependent source used in linear analysis: the voltage amplifier (a voltage-controlled voltage source), the current amplifier (a current-controlled current source), the transconductance amplifier (a voltage-controlled current source), and the transresistance amplifier (a current-controlled voltage source).
Each type of amplifier in its ideal form has an ideal input and output resistance that is the same as that of the corresponding dependent source: the ideal voltage amplifier has infinite input resistance and zero output resistance; the ideal current amplifier has zero input resistance and infinite output resistance; the ideal transconductance amplifier has infinite input and output resistance; and the ideal transresistance amplifier has zero input and output resistance.
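As a purely illustrative aside, the four ideal types can be modeled as one-line transfer functions. This is a toy Python sketch; the gain symbols (mu, beta, gm, rm) are conventional names assumed here, not notation from this article.

def voltage_amplifier(v_in, mu):
    return mu * v_in        # VCVS: output voltage proportional to input voltage

def current_amplifier(i_in, beta):
    return beta * i_in      # CCCS: output current proportional to input current

def transconductance_amplifier(v_in, gm):
    return gm * v_in        # VCCS: output current proportional to input voltage

def transresistance_amplifier(i_in, rm):
    return rm * i_in        # CCVS: output voltage proportional to input current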
In real amplifiers the ideal impedances are not possible to achieve, but these ideal elements can be used to construct equivalent circuits of real amplifiers by adding impedances (resistance, capacitance and inductance) to the input and output. For any particular circuit, a small-signal analysis is often used to find the actual impedance. A small-signal AC test current Ix is applied to the input or output node, all external sources are set to AC zero, and the corresponding alternating voltage Vx across the test current source determines the impedance seen at that node as R = Vx / Ix.
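That test procedure is easy to sketch numerically. The following minimal Python fragment uses illustrative phasor values (a 1 mA test current assumed to produce 2.2 V at -15 degrees); they are not measurements of any particular circuit.

import cmath

def node_impedance(v_x, i_x):
    # Impedance seen at the node: ratio of the measured test-voltage phasor
    # to the applied test-current phasor, Z = Vx / Ix.
    return v_x / i_x

i_x = 1e-3                                    # 1 mA AC test current (assumed)
v_x = cmath.rect(2.2, -15 * cmath.pi / 180)   # 2.2 V at -15 degrees (assumed)
z = node_impedance(v_x, i_x)
print(f"|Z| = {abs(z):.0f} ohm at {cmath.phase(z) * 180 / cmath.pi:.1f} degrees")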
Amplifiers designed to attach to a transmission line at input and output, especially RF amplifiers, do not fit into this classification approach. Rather than dealing with voltage or current individually, they ideally couple with an input or output impedance matched to the transmission line impedance, that is, match ratios of voltage to current. Many real RF amplifiers come close to this ideal. Although, for a given appropriate source and load impedance, RF amplifiers can be characterized as amplifying voltage or current, they fundamentally are amplifying power.
Properties
Amplifier properties are given by parameters that include:
Gain, the ratio between the magnitude of output and input signals
Bandwidth, the width of the useful frequency range
Efficiency, the ratio between the power of the output and total power consumption
Linearity, the extent to which the proportion between input and output amplitude is the same for high amplitude and low amplitude input
Noise, a measure of undesired noise mixed into the output
Output dynamic range, the ratio of the largest and the smallest useful output levels
Slew rate, the maximum rate of change of the output
Rise time, settling time, ringing and overshoot that characterize the step response
Stability, the ability to avoid self-oscillation
Amplifiers are described according to the properties of their inputs, their outputs, and how they relate. All amplifiers have gain, a multiplication factor that relates the magnitude of some property of the output signal to a property of the input signal. The gain may be specified as the ratio of output voltage to input voltage (voltage gain), output power to input power (power gain), or some combination of current, voltage, and power. In many cases the property of the output that varies is dependent on the same property of the input, making the gain unitless (though often expressed in decibels (dB)).
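To make the decibel conversions concrete, here is a minimal Python sketch of the standard formulas; the input and output values are arbitrary examples, not figures from this article.

import math

def voltage_gain_db(v_out, v_in):
    return 20 * math.log10(v_out / v_in)   # amplitude (voltage) ratio in dB

def power_gain_db(p_out, p_in):
    return 10 * math.log10(p_out / p_in)   # power ratio in dB

print(voltage_gain_db(1.0, 0.01))    # 40.0 dB for a 100x voltage gain
print(power_gain_db(100.0, 0.01))    # 40.0 dB for a 10,000x power ratio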
Most amplifiers are designed to be linear. That is, they provide constant gain for any normal input level and output signal. If an amplifier's gain is not linear, the output signal can become distorted. There are, however, cases where variable gain is useful. Certain signal processing applications use exponential gain amplifiers.
Amplifiers are usually designed to function well in a specific application, for example: radio and television transmitters and receivers, high-fidelity ("hi-fi") stereo equipment, microcomputers and other digital equipment, and guitar and other instrument amplifiers. Every amplifier includes at least one active device, such as a vacuum tube or transistor.
Negative feedback
Negative feedback is a technique used in most modern amplifiers to increase bandwidth, reduce distortion, and control gain. In a negative feedback amplifier part of the output is fed back and added to the input in the opposite phase, subtracting from the input. The main effect is to reduce the overall gain of the system. However, any unwanted signals introduced by the amplifier, such as distortion, are also fed back. Since they are not part of the original input, they are added to the input in opposite phase, subtracting them from the input. In this way, negative feedback also reduces nonlinearity, distortion and other errors introduced by the amplifier. Large amounts of negative feedback can reduce errors to the point that the response of the amplifier itself becomes almost irrelevant as long as it has a large gain, and the output performance of the system (the "closed loop performance") is defined entirely by the components in the feedback loop. This technique is used particularly with operational amplifiers (op-amps).
Non-feedback amplifiers can achieve only about 1% distortion for audio-frequency signals. With negative feedback, distortion can typically be reduced to 0.001%. Noise, even crossover distortion, can be practically eliminated. Negative feedback also compensates for changing temperatures, and degrading or nonlinear components in the gain stage, but any change or nonlinearity in the components in the feedback loop will affect the output. Indeed, the ability of the feedback loop to define the output is used to make active filter circuits.
Another advantage of negative feedback is that it extends the bandwidth of the amplifier. The concept of feedback is used in operational amplifiers to precisely define gain, bandwidth, and other parameters entirely based on the components in the feedback loop.
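These effects follow from the classic single-pole feedback relations, sketched below in Python; the open-loop gain, feedback fraction and bandwidth are assumed purely for illustration.

def closed_loop_gain(a_open, beta):
    # Standard feedback relation: A_cl = A / (1 + A*beta).
    # Distortion arising inside the loop is reduced by roughly the same
    # (1 + A*beta) factor by which the gain drops.
    return a_open / (1 + a_open * beta)

def closed_loop_bandwidth(bw_open, a_open, beta):
    # For a single-pole amplifier, the bandwidth is extended by the same
    # factor (1 + A*beta), so the gain-bandwidth product is preserved.
    return bw_open * (1 + a_open * beta)

a, beta, bw = 100_000.0, 0.01, 10.0         # assumed open-loop values
print(closed_loop_gain(a, beta))            # ~99.9, set almost entirely by 1/beta
print(closed_loop_bandwidth(bw, a, beta))   # ~10 kHz, up from 10 Hz open loop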
Negative feedback can be applied at each stage of an amplifier to stabilize the operating point of active devices against minor changes in power-supply voltage or device characteristics.
Some feedback, positive or negative, is unavoidable and often undesirable—introduced, for example, by parasitic elements, such as inherent capacitance between input and output of devices such as transistors, and capacitive coupling of external wiring. Excessive frequency-dependent positive feedback can produce parasitic oscillation and turn an amplifier into an oscillator.
Categories
Active devices
All amplifiers include some form of active device: this is the device that does the actual amplification. The active device can be a vacuum tube, discrete solid state component, such as a single transistor, or part of an integrated circuit, as in an op-amp.
Transistor amplifiers (or solid state amplifiers) are the most common type of amplifier in use today. A transistor is used as the active element. The gain of the amplifier is determined by the properties of the transistor itself as well as the circuit it is contained within.
Common active devices in transistor amplifiers include bipolar junction transistors (BJTs) and metal oxide semiconductor field-effect transistors (MOSFETs).
Applications are numerous. Some common examples are audio amplifiers in a home stereo or public address system, RF high-power generation for semiconductor equipment, and RF and microwave applications such as radio transmitters.
Transistor-based amplification can be realized using various configurations: for example a bipolar junction transistor can realize common base, common collector or common emitter amplification; a MOSFET can realize common gate, common source or common drain amplification. Each configuration has different characteristics.
Vacuum-tube amplifiers (also known as tube amplifiers or valve amplifiers) use a vacuum tube as the active device. While semiconductor amplifiers have largely displaced valve amplifiers for low-power applications, valve amplifiers can be much more cost effective in high power applications such as radar, countermeasures equipment, and communications equipment. Many microwave amplifiers are specially designed valve amplifiers, such as the klystron, gyrotron, traveling wave tube, and crossed-field amplifier, and these microwave valves provide much greater single-device power output at microwave frequencies than solid-state devices. Vacuum tubes remain in use in some high end audio equipment, as well as in musical instrument amplifiers, due to a preference for "tube sound".
Magnetic amplifiers are devices somewhat similar to a transformer where one winding is used to control the saturation of a magnetic core and hence alter the impedance of the other winding. They have largely fallen out of use due to development in semiconductor amplifiers but are still useful in HVDC control, and in nuclear power control circuitry due to not being affected by radioactivity.
Negative resistances can be used as amplifiers, such as the tunnel diode amplifier.
Power amplifiers
A power amplifier is an amplifier designed primarily to increase the power available to a load. In practice, amplifier power gain depends on the source and load impedances, as well as the inherent voltage and current gain. A radio frequency (RF) amplifier design typically optimizes impedances for power transfer, while audio and instrumentation amplifier designs normally optimize input and output impedance for least loading and highest signal integrity. An amplifier that is said to have a gain of 20 dB might have a voltage gain of 20 dB and an available power gain of much more than 20 dB (power ratio of 100)—yet actually deliver a much lower power gain if, for example, the input is from a 600 Ω microphone and the output connects to a 47 kΩ input socket for a power amplifier. In general, the power amplifier is the last 'amplifier' or actual circuit in a signal chain (the output stage) and is the amplifier stage that requires attention to power efficiency. Efficiency considerations lead to the various classes of power amplifiers based on the biasing of the output transistors or tubes: see power amplifier classes below.
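A rough numerical sketch of that microphone scenario follows; it assumes, purely for illustration, that the amplifier's input impedance equals the 600 Ω source impedance, which the text does not state.

import math

v_gain = 10.0       # a 20 dB voltage gain
z_in = 600.0        # assumed amplifier input impedance, matching the microphone
z_load = 47_000.0   # the 47 kohm input socket it drives

# Pout/Pin = (Vout^2 / Zload) / (Vin^2 / Zin) = v_gain^2 * Zin / Zload
power_gain = v_gain**2 * z_in / z_load
print(f"{power_gain:.2f}x, i.e. {10 * math.log10(power_gain):.1f} dB")
# ~1.28x (~1.1 dB): far below 20 dB, since the high-impedance load draws little current.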
Audio power amplifiers are typically used to drive loudspeakers. They will often have two output channels and deliver equal power to each. An RF power amplifier is found in radio transmitter final stages. A servo motor controller amplifies a control voltage to adjust the speed of a motor, or the position of a motorized system.
Operational amplifiers (op-amps)
An operational amplifier is an amplifier circuit which typically has very high open loop gain and differential inputs. Op amps have become very widely used as standardized "gain blocks" in circuits due to their versatility; their gain, bandwidth and other characteristics can be controlled by feedback through an external circuit. Though the term today commonly applies to integrated circuits, the original operational amplifier design used valves, and later designs used discrete transistor circuits.
A fully differential amplifier is similar to the operational amplifier, but also has differential outputs. These are usually constructed using BJTs or FETs.
Distributed amplifiers
These use balanced transmission lines to separate individual single stage amplifiers, the outputs of which are summed by the same transmission line. The transmission line is a balanced type, with the input at one end and on one side only of the balanced transmission line, and the output at the opposite end and on the opposite side. The gain of each stage adds linearly to the output rather than multiplying one on the other as in a cascade configuration. This allows a higher bandwidth to be achieved than could otherwise be realised even with the same gain stage elements.
Switched mode amplifiers
These nonlinear amplifiers have much higher efficiencies than linear amps, and are used where the power saving justifies the extra complexity. Class-D amplifiers are the main example of this type of amplification.
Negative resistance amplifier
A negative resistance amplifier is a type of regenerative amplifier that can use the feedback between the transistor's source and gate to transform a capacitive impedance on the transistor's source to a negative resistance on its gate. Compared to other types of amplifiers, a negative resistance amplifier will require only a tiny amount of power to achieve very high gain, maintaining a good noise figure at the same time.
Applications
Video amplifiers
Video amplifiers are designed to process video signals and have varying bandwidths depending on whether the video signal is for SDTV, EDTV, HDTV 720p or 1080i/p, etc. The specification of the bandwidth itself depends on what kind of filter is used and at which point (−1 dB or −3 dB, for example) the bandwidth is measured. Certain requirements for step response and overshoot are necessary for an acceptable TV image.
Microwave amplifiers
Traveling wave tube amplifiers (TWTAs) are used for high power amplification at low microwave frequencies. They typically can amplify across a broad spectrum of frequencies; however, they are usually not as tunable as klystrons.
Klystrons are specialized linear-beam vacuum-devices, designed to provide high power, widely tunable amplification of millimetre and sub-millimetre waves. Klystrons are designed for large scale operations and despite having a narrower bandwidth than TWTAs, they have the advantage of coherently amplifying a reference signal so its output may be precisely controlled in amplitude, frequency and phase.
Solid-state devices such as silicon short-channel MOSFETs like double-diffused metal–oxide–semiconductor (DMOS) FETs, GaAs FETs, SiGe and GaAs heterojunction bipolar transistors (HBTs), HEMTs, IMPATT diodes, and others, are used especially at lower microwave frequencies and power levels on the order of watts, specifically in applications like portable RF terminals/cell phones and access points where size and efficiency are the drivers. New materials like gallium nitride (GaN), or GaN on silicon or on silicon carbide (SiC), are emerging in HEMT transistors and in applications where improved efficiency, wide bandwidth, and operation from roughly a few GHz to a few tens of GHz with output power from a few watts to a few hundred watts are needed.
Depending on the amplifier specifications and size requirements, microwave amplifiers can be realised as monolithic integrated circuits, integrated as modules, based on discrete parts, or as any combination of those.
The maser is a non-electronic microwave amplifier.
Musical instrument amplifiers
Instrument amplifiers are a range of audio power amplifiers used to increase the sound level of musical instruments, for example guitars, during performances. An amplifier's tone mainly comes from the order and amount in which it applies EQ and distortion.
Classification of amplifier stages and systems
Common terminal
One set of classifications for amplifiers is based on which device terminal is common to both the input and the output circuit. In the case of bipolar junction transistors, the three classes are common emitter, common base, and common collector. For field-effect transistors, the corresponding configurations are common source, common gate, and common drain; for vacuum tubes, common cathode, common grid, and common plate.
The common emitter (or common source, common cathode, etc.) is most often configured to provide amplification of a voltage applied between base and emitter, and the output signal taken between collector and emitter is inverted relative to the input. The common collector arrangement applies the input voltage between base and collector, and takes the output voltage between emitter and collector. This causes negative feedback, and the output voltage tends to follow the input voltage. This arrangement is also used because the input presents a high impedance and does not load the signal source, though the voltage amplification is less than one. The common-collector circuit is, therefore, better known as an emitter follower, source follower, or cathode follower.
Unilateral or bilateral
An amplifier whose output exhibits no feedback to its input side is described as 'unilateral'. The input impedance of a unilateral amplifier is independent of load, and output impedance is independent of signal source impedance.
An amplifier that uses feedback to connect part of the output back to the input is a bilateral amplifier. Bilateral amplifier input impedance depends on the load, and output impedance on the signal source impedance.
All amplifiers are bilateral to some degree; however they may often be modeled as unilateral under operating conditions where feedback is small enough to neglect for most purposes, simplifying analysis (see the common base article for an example).
Inverting or non-inverting
Another way to classify amplifiers is by the phase relationship of the input signal to the output signal. An 'inverting' amplifier produces an output 180 degrees out of phase with the input signal (that is, a polarity inversion or mirror image of the input as seen on an oscilloscope). A 'non-inverting' amplifier maintains the phase of the input signal waveforms. An emitter follower is a type of non-inverting amplifier, indicating that the signal at the emitter of a transistor is following (that is, matching with unity gain but perhaps an offset) the input signal. A voltage follower is also a non-inverting amplifier with unity gain.
This description can apply to a single stage of an amplifier, or to a complete amplifier system.
Function
Other amplifiers may be classified by their function or output characteristics. These functional descriptions usually apply to complete amplifier systems or sub-systems and rarely to individual stages.
A servo amplifier indicates an integrated feedback loop to actively control the output at some desired level. A DC servo indicates use at frequencies down to DC levels, where the rapid fluctuations of an audio or RF signal do not occur. These are often used in mechanical actuators, or devices such as DC motors that must maintain a constant speed or torque. An AC servo amplifier can do this for some AC motors.
A linear amplifier responds to different frequency components independently, and does not generate harmonic distortion or intermodulation distortion. No amplifier can provide perfect linearity (even the most linear amplifier has some nonlinearities, since the amplifying devices—transistors or vacuum tubes—follow nonlinear power laws such as square-laws and rely on circuitry techniques to reduce those effects).
A nonlinear amplifier generates significant distortion and so changes the harmonic content; there are situations where this is useful. Amplifier circuits intentionally providing a non-linear transfer function include:
a device like a silicon controlled rectifier or a transistor used as a switch may be employed to turn a load such as a lamp fully on or off based on a threshold in a continuously variable input.
a non-linear amplifier in an analog computer or true RMS converter for example can provide a special transfer function, such as logarithmic or square-law.
a Class C RF amplifier may be chosen because it can be very efficient, though it is non-linear. Following such an amplifier with a so-called tank tuned circuit can reduce unwanted harmonics (distortion) sufficiently to make it useful in transmitters, or some desired harmonic may be selected by setting the resonant frequency of the tuned circuit to that harmonic rather than to the fundamental frequency, as in frequency multiplier circuits.
Automatic gain control circuits require an amplifier's gain be controlled by the time-averaged amplitude so that the output amplitude varies little when weak stations are being received. The non-linearities are assumed arranged so the relatively small signal amplitude suffers from little distortion (cross-channel interference or intermodulation) yet is still modulated by the relatively large gain-control DC voltage.
AM detector circuits that use amplification such as anode-bend detectors, precision rectifiers and infinite impedance detectors (so excluding unamplified detectors such as cat's-whisker detectors), as well as peak detector circuits, rely on changes in amplification based on the signal's instantaneous amplitude to derive a direct current from an alternating current input.
Operational amplifier comparator and detector circuits.
A wideband amplifier has a precise amplification factor over a wide frequency range, and is often used to boost signals for relay in communications systems. A narrowband amp amplifies a specific narrow range of frequencies, to the exclusion of other frequencies.
An RF amplifier amplifies signals in the radio frequency range of the electromagnetic spectrum, and is often used to increase the sensitivity of a receiver or the output power of a transmitter.
An audio amplifier amplifies audio frequencies. This category subdivides into small-signal amplification and power amps that are optimised for driving speakers, sometimes with multiple amps grouped together as separate or bridgeable channels to accommodate different audio reproduction requirements. Frequently used terms within audio amplifiers include:
Preamplifier (preamp.), which may include a phono preamp with RIAA equalization, or tape head preamps with CCIR equalisation filters. They may include filters or tone control circuitry.
Power amplifier (normally drives loudspeakers), headphone amplifiers, and public address amplifiers.
Stereo amplifiers imply two channels of output (left and right), though the term simply means "solid" sound (referring to three-dimensional)—so quadraphonic stereo was used for amplifiers with four channels. 5.1 and 7.1 systems refer to Home theatre systems with 5 or 7 normal spatial channels, plus a subwoofer channel.
Buffer amplifiers, which may include emitter followers, provide a high impedance input for a device (perhaps another amplifier, or perhaps an energy-hungry load such as lights) that would otherwise draw too much current from the source. Line drivers are a type of buffer that feeds long or interference-prone interconnect cables, possibly with differential outputs through twisted pair cables.
Interstage coupling method
Amplifiers are sometimes classified by the coupling method of the signal at the input, output, or between stages. Different types of these include:
Resistive-capacitive (RC) coupled amplifier, using a network of resistors and capacitors. By design these amplifiers cannot amplify DC signals, as the capacitors block the DC component of the input signal. RC-coupled amplifiers were used very often in circuits with vacuum tubes or discrete transistors. In the days of the integrated circuit, a few more transistors on a chip are much cheaper and smaller than a capacitor.
Inductive-capacitive (LC) coupled amplifier, using a network of inductors and capacitors. This kind of amplifier is most often used in selective radio-frequency circuits.
Transformer coupled amplifier, using a transformer to match impedances or to decouple parts of the circuits. Quite often LC-coupled and transformer-coupled amplifiers cannot be distinguished, as a transformer is a kind of inductor.
Direct coupled amplifier, using no impedance or bias matching components. This class of amplifier was very uncommon in the vacuum tube days, when the anode (output) voltage was greater than several hundred volts and the grid (input) voltage sat a few volts negative. They were therefore used only if the gain was specified down to DC (e.g., in an oscilloscope). In the context of modern electronics, developers are encouraged to use directly coupled amplifiers whenever possible. In FET and CMOS technologies direct coupling is dominant, since gates of MOSFETs theoretically pass no current through themselves; therefore, the DC component of the input signal is automatically filtered.
Frequency range
Depending on the frequency range and other properties amplifiers are designed according to different principles.
Frequency ranges down to DC are used only when this property is needed. Amplifiers for direct current signals are vulnerable to minor variations in the properties of components with time. Special methods, such as chopper-stabilized amplifiers, are used to prevent objectionable drift in the amplifier's properties for DC. "DC-blocking" capacitors can be added to remove DC and sub-sonic frequencies from audio amplifiers.
Depending on the frequency range specified different design principles must be used. Up to the MHz range only "discrete" properties need be considered; e.g., a terminal has an input impedance.
As soon as any connection within the circuit gets longer than perhaps 1% of the wavelength of the highest specified frequency (e.g., at 100 MHz the wavelength is 3 m, so the critical connection length is approx. 3 cm) design properties radically change. For example, a specified length and width of a PCB trace can be used as a selective or impedance-matching entity.
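The rule of thumb is easy to check numerically; the short Python sketch below assumes free-space propagation (signals on a real board travel somewhat slower, so the critical length there is shorter still).

C = 299_792_458.0   # speed of light in free space, m/s

def critical_length_m(f_hz, fraction=0.01):
    # A connection becomes "electrically long" at roughly 1% of the
    # wavelength of the highest frequency of interest.
    wavelength = C / f_hz
    return fraction * wavelength

print(critical_length_m(100e6))   # ~0.03 m, i.e. the ~3 cm quoted above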
Above a few hundred MHz, it gets difficult to use discrete elements, especially inductors. In most cases, PCB traces of very closely defined shapes are used instead (stripline techniques).
The frequency range handled by an amplifier might be specified in terms of bandwidth (normally implying a response that is 3 dB down when the frequency reaches the specified bandwidth), or by specifying a frequency response that is within a certain number of decibels between a lower and an upper frequency (e.g. "20 Hz to 20 kHz plus or minus 1 dB").
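As a sketch of how such a specification might be checked against measured response data, the Python helper below and its sample points are hypothetical, written only for illustration.

def meets_spec(response, f_lo=20.0, f_hi=20_000.0, tol_db=1.0):
    # response: mapping of frequency in Hz -> deviation from midband gain in dB.
    # True if every measured point inside the band stays within +/- tol_db.
    return all(abs(dev) <= tol_db
               for f, dev in response.items()
               if f_lo <= f <= f_hi)

measured = {20: -0.8, 1_000: 0.0, 20_000: -0.9}   # illustrative measurements
print(meets_spec(measured))   # True: within "20 Hz to 20 kHz plus or minus 1 dB"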
Power amplifier classes
Power amplifier circuits (output stages) are classified as A, B, AB and C for analog designs—and class D and E for switching designs. The power amplifier classes are based on the proportion of each input cycle (conduction angle) during which an amplifying device passes current. The image of the conduction angle derives from amplifying a sinusoidal signal. If the device is always on, the conducting angle is 360°. If it is on for only half of each cycle, the angle is 180°. The angle of flow is closely related to the amplifier power efficiency.
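In code form, the classification by conduction angle reads as follows; this minimal Python sketch simply restates the boundaries given above.

def amplifier_class(conduction_angle_deg):
    # Class A conducts for the full cycle, class B for exactly half,
    # class AB for more than half, and class C for less than half.
    if conduction_angle_deg >= 360:
        return "A"
    if conduction_angle_deg > 180:
        return "AB"
    if conduction_angle_deg == 180:
        return "B"
    return "C"

for angle in (360, 270, 180, 120):
    print(f"{angle} degrees -> class {amplifier_class(angle)}")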
Example amplifier circuit
The practical amplifier circuit shown above could be the basis for a moderate-power audio amplifier. It features a typical (though substantially simplified) design as found in modern amplifiers, with a class-AB push–pull output stage, and uses some overall negative feedback. Bipolar transistors are shown, but this design would also be realizable with FETs or valves.
The input signal is coupled through capacitor C1 to the base of transistor Q1. The capacitor allows the AC signal to pass, but blocks the DC bias voltage established by resistors R1 and R2 so that any preceding circuit is not affected by it. Q1 and Q2 form a differential amplifier (an amplifier that multiplies the difference between two inputs by some constant), in an arrangement known as a long-tailed pair. This arrangement is used to conveniently allow the use of negative feedback, which is fed from the output to Q2 via R7 and R8.
The negative feedback into the difference amplifier allows the amplifier to compare the input to the actual output. The amplified signal from Q1 is directly fed to the second stage, Q3, which is a common emitter stage that provides further amplification of the signal and the DC bias for the output stages, Q4 and Q5. R6 provides the load for Q3 (a better design would probably use some form of active load here, such as a constant-current sink). So far, all of the amplifier is operating in class A. The output pair are arranged in class-AB push–pull, also called a complementary pair. They provide the majority of the current amplification (while consuming low quiescent current) and directly drive the load, connected via DC-blocking capacitor C2. The diodes D1 and D2 provide a small amount of constant voltage bias for the output pair, just biasing them into the conducting state so that crossover distortion is minimized. That is, the diodes push the output stage firmly into class-AB mode (assuming that the base-emitter drop of the output transistors is reduced by heat dissipation).
This design is simple, but a good basis for a practical design because it automatically stabilises its operating point, since feedback internally operates from DC up through the audio range and beyond. Further circuit elements would probably be found in a real design that would roll off the frequency response above the needed range to prevent the possibility of unwanted oscillation. Also, the use of fixed diode bias as shown here can cause problems if the diodes are not both electrically and thermally matched to the output transistors: if the output transistors turn on too much, they can easily overheat and destroy themselves, as the full current from the power supply is not limited at this stage.
A common solution to help stabilise the output devices is to include some emitter resistors, typically one ohm or so. Calculating the values of the circuit's resistors and capacitors is done based on the components employed and the intended use of the amp.
| Technology | Components | null |
9996 | https://en.wikipedia.org/wiki/Embroidery | Embroidery | Embroidery is the art of decorating fabric or other materials using a needle to stitch thread or yarn. Embroidery may also incorporate other materials such as pearls, beads, quills, and sequins. In modern days, embroidery is usually seen on hats, clothing, blankets, and handbags. Embroidery is available in a wide variety of thread and yarn colours. It is often used to personalize gifts or clothing items.
Some of the basic techniques or stitches of the earliest embroidery are chain stitch, buttonhole or blanket stitch, running stitch, satin stitch, and cross stitch. Those stitches remain the fundamental techniques of hand embroidery today.
History
Origins
The process used to tailor, patch, mend and reinforce cloth fostered the development of sewing techniques, and the decorative possibilities of sewing led to the art of embroidery. Indeed, the remarkable stability of basic embroidery stitches over time has often been noted.
The art of embroidery has been found worldwide, and several early examples have been identified. Works in China have been dated to the Warring States period (5th–3rd century BC). In a garment from Migration period Sweden, roughly 300–700 AD, the edges of bands of trimming are reinforced with running stitch, back stitch, stem stitch, tailor's buttonhole stitch, and whip stitch, but it is uncertain whether this work simply reinforced the seams or should be interpreted as decorative embroidery.
Historical applications and techniques
Depending on time, location and materials available, embroidery could be the domain of a few experts or a widespread, popular technique. This flexibility led to a variety of works, from the royal to the mundane. Examples of high-status items include elaborately embroidered clothing, religious objects, and household items, which often were seen as a mark of wealth and status.
In medieval England, Opus Anglicanum, a technique used by professional workshops and guilds, was used to embellish textiles used in church rituals. In 16th century England, some books, usually bibles or other religious texts, had embroidered bindings. The Bodleian Library in Oxford contains one presented to Queen Elizabeth I in 1583. It also owns a copy of The Epistles of Saint Paul, whose cover was reputedly embroidered by the Queen.
In 18th-century England and its colonies, with the rise of the merchant class and the wider availability of luxury materials, rich embroideries began to appear in a secular context. These embroideries took the form of items displayed in private homes of well-to-do citizens, as opposed to a church or royal setting. Even so, the embroideries themselves may still have had religious themes. Samplers employing fine silks were produced by the daughters of wealthy families. Embroidery was a skill marking a girl's path into womanhood as well as conveying rank and social standing.
Embroidery was an important art and signified social status in the Medieval Islamic world as well. The 17th-century Turkish traveler Evliya Çelebi called it the "craft of the two hands". In cities such as Damascus, Cairo and Istanbul, embroidery was visible on handkerchiefs, uniforms, flags, calligraphy, shoes, robes, tunics, horse trappings, slippers, sheaths, pouches, covers, and even on leather belts. Craftsmen embroidered items with gold and silver thread. Embroidery cottage industries, some employing over 800 people, grew to supply these items.
In the 16th century, in the reign of the Mughal Emperor Akbar, his chronicler Abu al-Fazl ibn Mubarak wrote about embroidery in the famous Ain-i-Akbari.
Conversely, embroidery is also a folk art, using materials that were accessible to nonprofessionals. Examples include Hardanger embroidery from Norway; Merezhka from Ukraine; Mountmellick embroidery from Ireland; Nakshi kantha from Bangladesh and West Bengal; Achachi from Peru; and Brazilian embroidery. Many techniques had a practical use such as Sashiko from Japan, which was used as a way to reinforce clothing.
While historically viewed as a pastime or hobby intended just for women, embroidery has often been used as a form of biography. Women who were unable to access a formal education or, at times, writing implements were often taught embroidery and used it as a means of documenting their lives by telling stories through their stitching. In terms of documenting the histories of marginalized groups, especially women of color both within the United States and around the world, embroidery is a means of studying the everyday lives of those who largely went unstudied throughout much of history.
21st century
Since the late 2010s, there has been a growth in the popularity of embroidering by hand. As a result of visual social media such as Pinterest and Instagram, artists are able to share their work more extensively, which has inspired younger generations to pick up needle and thread.
Contemporary embroidery artists believe hand embroidery has grown in popularity as a result of an increasing need for relaxation and digitally disconnecting practices. Many people are also using embroidery to creatively upcycle and repair clothing, to help counteract over-consumption and fashion industry waste.
Modern hand embroidery, as opposed to cross-stitching, is characterized by a more "liberal" approach, where stitches are more freely combined in unconventional ways to create various textures and designs.
Modern canvas work tends to follow symmetrical counted stitching patterns with designs emerging from the repetition of one or just a few similar stitches in a variety of hues. In contrast, many forms of surface embroidery make use of a wide range of stitching patterns in a single piece of work.
Climate crisis
Training women in traditional embroidery skills in Inner Mongolia was begun by Bai Jingying as a reaction to the financial pressures caused by the impact of climate change, including desertification, in the region.
Classification
Embroidery can be classified according to what degree the design takes into account the nature of the base material and by the relationship of stitch placement to the fabric. The main categories are free or surface embroidery, counted-thread embroidery, and needlepoint or canvas work.
In free or surface embroidery, designs are applied without regard to the weave of the underlying fabric. Examples include crewel and traditional Chinese and Japanese embroidery.
Counted-thread embroidery patterns are created by making stitches over a predetermined number of threads in the foundation fabric. Counted-thread embroidery is more easily worked on an even-weave foundation fabric such as embroidery canvas, aida cloth, or specially woven cotton and linen fabrics. Examples include cross-stitch and some forms of blackwork embroidery.
While similar to counted thread in regards to technique, in canvas work or needlepoint, threads are stitched through a fabric mesh to create a dense pattern that completely covers the foundation fabric. Examples of canvas work include bargello and Berlin wool work.
Embroidery can also be classified by the similarity of its appearance. In drawn thread work and cutwork, the foundation fabric is deformed or cut away to create holes that are then embellished with embroidery, often with thread in the same color as the foundation fabric. When created with white thread on white linen or cotton, this work is collectively referred to as whitework. However, whitework can either be counted or free. Hardanger embroidery is a counted embroidery and the designs are often geometric. Conversely, styles such as Broderie anglaise are similar to free embroidery, with floral or abstract designs that are not dependent on the weave of the fabric.
Traditional hand embroidery around the world
Materials and tools
Materials
The fabrics and yarns used in traditional embroidery vary from place to place. Wool, linen, and silk have been in use for thousands of years for both fabric and yarn. Today, embroidery thread is manufactured in cotton, rayon, and novelty yarns as well as in traditional wool, linen, and silk. Ribbon embroidery uses narrow ribbon in silk or silk/organza blend ribbon, most commonly to create floral motifs.
Surface embroidery techniques such as chain stitch and couching or laid-work are the most economical of expensive yarns; couching is generally used for goldwork. Canvas work techniques, in which large amounts of yarn are buried on the back of the work, use more materials but provide a sturdier and more substantial finished textile.
Tools
A needle is the main stitching tool in embroidery, and comes in various sizes and types.
In both canvas work and surface embroidery an embroidery hoop or frame can be used to stretch the material and ensure even stitching tension that prevents pattern distortion.
Machine embroidery
The development of machine embroidery and its mass production came about in stages during the Industrial Revolution. The first embroidery machine was the hand embroidery machine, invented in France in 1832 by Josué Heilmann. The next evolutionary step was the schiffli embroidery machine, which borrowed from the sewing machine and the Jacquard loom to fully automate its operation. The manufacture of machine-made embroideries in St. Gallen in eastern Switzerland flourished in the latter half of the 19th century. Both St. Gallen, Switzerland and Plauen, Germany were important centers for machine embroidery and embroidery machine development. Many Swiss and Germans immigrated to Hudson County, New Jersey in the early twentieth century and developed a machine embroidery industry there. Schiffli machines have continued to evolve and are still used for industrial-scale embroidery.
Contemporary embroidery is stitched with a computerized embroidery machine using patterns digitized with embroidery software. In machine embroidery, different types of "fills" add texture and design to the finished work. Machine embroidery is used to add logos and monograms to business shirts or jackets, gifts, and team apparel as well as to decorate household items for the bed and bath and other linens, draperies, and decorator fabrics that mimic the elaborate hand embroidery of the past.
Machine embroidery is most typically done with rayon thread, although polyester thread can also be used. Cotton thread, on the other hand, is prone to breaking and is avoided.
There has also been a development in free-hand machine embroidery: new machines have been designed that allow the user to create free-motion embroidery, which has its place in textile arts, quilting, dressmaking, home furnishings and more. Users can use embroidery software to digitize embroidery designs. These digitized designs are then transferred to the embroidery machine with the help of a flash drive, and the machine then embroiders the selected design onto the fabric.
In literature
In Greek mythology the goddess Athena is said to have passed down the art of embroidery (along with weaving) to humans, leading to the famed competition between herself and the mortal Arachne.
Gallery
| Technology | Techniques_2 | null |
10008 | https://en.wikipedia.org/wiki/Electrode | Electrode | An electrode is an electrical conductor used to make contact with a nonmetallic part of a circuit (e.g. a semiconductor, an electrolyte, a vacuum or air). Electrodes are essential parts of batteries that can consist of a variety of materials (chemicals) depending on the type of battery.
Michael Faraday coined the term "electrode" in 1833; the word recalls the Greek ἤλεκτρον (ēlektron, "amber") and ὁδός (hodós, "path, way").
The electrophore, invented by Johan Wilcke in 1762, was an early version of an electrode used to study static electricity.
Anode and cathode in electrochemical cells
Electrodes are an essential part of any battery. The first electrochemical battery was devised by Alessandro Volta and was aptly named the Voltaic cell. This battery consisted of a stack of copper and zinc electrodes separated by brine-soaked paper disks. Due to fluctuations in the voltage it provided, the Voltaic cell was not very practical. The first practical battery was invented in 1836 and named the Daniell cell after John Frederic Daniell. It still made use of the zinc–copper electrode combination. Since then, many more batteries have been developed using various materials. All of them still rest on the same basis: two electrodes, an anode and a cathode.
Anode (-)
'Anode' was coined by William Whewell at Michael Faraday's request, derived from the Greek words ἄνω (ano), 'upwards' and ὁδός (hodós), 'a way'. The anode is the electrode through which the conventional current enters from the electrical circuit of an electrochemical cell (battery) into the non-metallic cell. The electrons then flow to the other side of the battery. Benjamin Franklin surmised that the electrical flow moved from positive to negative. Electrons flow away from the anode while the conventional current flows towards it; from both it can be concluded that the charge of the anode is negative. The electrons entering the anode come from the oxidation reaction that takes place next to it.
Cathode (+)
The cathode is in many ways the opposite of the anode. The name (also coined by Whewell) comes from the Greek words κάτω (kato), 'downwards' and ὁδός (hodós), 'a way'. It is the positive electrode, meaning the electrons flow from the electrical circuit through the cathode into the non-metallic part of the electrochemical cell. At the cathode, the reduction reaction takes place: the electrons arrive from the wire connected to the cathode and are absorbed by the oxidizing agent.
Primary cell
A primary cell is a battery designed to be used once and then discarded, because the electrochemical reactions taking place at its electrodes are not reversible. An example of a primary cell is the discardable alkaline battery commonly used in flashlights. It consists of a zinc anode and a manganese dioxide cathode, and ZnO is formed as it discharges.
The half-reactions are:
Zn(s) + 2OH−(aq) → ZnO(s) + H2O(l) + 2e− [E°oxidation = +1.28 V]
2MnO2(s) + H2O(l) + 2e− → Mn2O3(s) + 2OH−(aq) [E°reduction = +0.15 V]
Overall reaction:
Zn(s) + 2MnO2(s) → ZnO(s) + Mn2O3(s) [E°total = +1.43 V]
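As a quick check of the listed values, the overall standard cell potential is simply the sum of the two half-reaction potentials:
E°total = E°oxidation + E°reduction = (+1.28 V) + (+0.15 V) = +1.43 V.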
The ZnO is prone to clumping and will give a less efficient discharge if the battery is recharged. It is possible to recharge these batteries, but manufacturers advise against it due to safety concerns. Other primary cells include zinc–carbon, zinc–chloride, and lithium iron disulfide.
Secondary cell
In contrast to a primary cell, a secondary cell can be recharged. The first was the lead–acid battery, invented in 1859 by French physicist Gaston Planté. This type of battery is still the most widely used in automobiles, among other applications. The cathode consists of lead dioxide (PbO2) and the anode of solid lead. Other commonly used rechargeable batteries are nickel–cadmium, nickel–metal hydride, and lithium-ion. The last of these is explained more thoroughly in this article due to its importance.
Marcus' theory of electron transfer
Marcus theory, originally developed by Nobel laureate Rudolph A. Marcus, explains the rate at which an electron can move from one chemical species to another; for the purposes of this article, this can be seen as 'jumping' from the electrode to a species in the solvent, or vice versa.
We can represent the problem as calculating the rate of transfer of an electron from a donor (D) to an acceptor (A):
D + A → D+ + A−
The potential energy of the system is a function of the translational, rotational, and vibrational coordinates of the reacting species and the molecules of the surrounding medium, collectively called the reaction coordinates; these coordinates form the abscissa of the usual Marcus free-energy diagram. From classical electron transfer theory, the reaction rate constant (probability of reaction) can be calculated, if a non-adiabatic process and parabolic potential energy curves are assumed, by finding the point of intersection (Qx) of those curves. One important point, noted by Marcus when he developed the theory, is that the electron transfer must obey the law of conservation of energy and the Franck–Condon principle.
Doing this and then rearranging leads to the expression of the free energy of activation (ΔG‡) in terms of the overall free energy of the reaction (ΔG⁰):

ΔG‡ = (λ + ΔG⁰)² / (4λ)

in which λ is the reorganisation energy. Inserting this result into the classically derived Arrhenius equation

k = A exp(−ΔG‡ / kBT)

leads to

k = A exp(−(λ + ΔG⁰)² / (4λ kBT))
with A being the pre-exponential factor, which is usually determined experimentally, although a semi-classical derivation provides more information, as explained below.
This classically derived result qualitatively reproduced observations of a maximum electron transfer rate under the condition −ΔG⁰ = λ. For a more extensive mathematical treatment one could read the paper by Newton; for an interpretation of the result and a closer look at the physical meaning of the reorganisation energy λ, one can read the paper by Marcus.
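To make the classical result concrete, here is a minimal Python sketch that evaluates k = A exp(−(λ + ΔG⁰)²/(4λ kBT)); the values of A, λ, and T are invented for illustration and are not taken from the text:

import math

def marcus_rate(dG0_eV, lam_eV, A=1.0, T=298.0):
    # Classical Marcus rate: k = A * exp(-(lam + dG0)^2 / (4 * lam * kB * T))
    kB = 8.617e-5  # Boltzmann constant in eV/K
    dG_act = (lam_eV + dG0_eV) ** 2 / (4.0 * lam_eV)  # free energy of activation
    return A * math.exp(-dG_act / (kB * T))

# With lam = 1.0 eV the rate peaks at dG0 = -1.0 eV (activationless transfer);
# making the reaction even more exergonic slows it down again (the inverted region).
for dG0 in (-0.5, -1.0, -1.5):
    print(dG0, marcus_rate(dG0, lam_eV=1.0))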
The situation at hand can be described more accurately using the displaced harmonic oscillator model, in which quantum tunneling is allowed. This is needed to explain why electron transfers still occur even at temperatures near zero kelvin, in contradiction to the classical theory.
Without going into too much detail on how the derivation is done, it rests on using Fermi's golden rule from time-dependent perturbation theory with the full Hamiltonian of the system. By looking at the overlap of the wavefunctions of the reactants and the products (the right and the left side of the chemical reaction), one can determine when their energies are the same and electron transfer is allowed. As touched on before, this must happen because only then is conservation of energy obeyed. Skipping over a few mathematical steps, the probability of electron transfer can be calculated (albeit with some difficulty) using the following formula:

k_ET = (2π/ħ) |J|² F

with J being the electronic coupling constant describing the interaction between the two states (reactants and products) and F being the line shape function. Taking the classical limit of this expression, meaning ħω ≪ kBT, and making some substitutions, an expression is obtained that is very similar to the classically derived formula, as expected.
The main difference is that the pre-exponential factor is now described by more physical parameters instead of the experimental factor A. One is once again referred to the sources listed below for a more in-depth and rigorous mathematical derivation and interpretation.
Efficiency
The physical properties of electrodes are mainly determined by the material of the electrode and its topology. The required properties depend on the application, and therefore many kinds of electrodes are in circulation. The defining property for a material to be used as an electrode is that it be conductive. Any conducting material such as metals, semiconductors, graphite or conductive polymers can therefore be used as an electrode. Often electrodes consist of a combination of materials, each with a specific task. Typical constituents are the active materials, the particles that are oxidized or reduced; conductive agents, which improve the conductivity of the electrode; and binders, which are used to contain the active particles within the electrode. The efficiency of electrochemical cells is judged by a number of properties; important quantities are the self-discharge time, the discharge voltage and the cycle performance. The physical properties of the electrodes play an important role in determining these quantities. Important properties of the electrodes are the electrical resistivity, the specific heat capacity (c_p), the electrode potential and the hardness. Of course, for technological applications, the cost of the material is also an important factor. The values of these properties at room temperature (T = 293 K) for some commonly used materials are listed in the table below.
Surface effects
The surface topology of the electrode plays an important role in determining the efficiency of an electrode. The efficiency of the electrode can be reduced due to contact resistance. To create an efficient electrode it is therefore important to design it such that it minimizes the contact resistance.
Manufacturing
The production of electrodes for Li-ion batteries is done in various steps as follows:
The various constituents of the electrode are mixed into a solvent. This mixture is designed such that it improves the performance of the electrodes. Common components of this mixture are:
The active electrode particles.
A binder used to contain the active electrode particles.
A conductive agent used to improve the conductivity of the electrode.
The mixture created is known as an ‘electrode slurry’.
The electrode slurry above is coated onto a conductor, which acts as the current collector in the electrochemical cell. Typical current collectors are copper for the anode and aluminum for the cathode.
After the slurry has been applied to the conductor it is dried and then pressed to the required thickness.
Structure of the electrode
For a given selection of constituents of the electrode, the final efficiency is determined by the internal structure of the electrode. The important factors in the internal structure in determining the performance of the electrode are:
Clustering of the active material and the conductive agent. In order for all the components of the slurry to perform their task, they should all be spread out evenly within the electrode.
An even distribution of the conductive agent over the active material. This makes sure that the conductivity of the electrode is optimal.
The adherence of the electrode to the current collectors. The adherence makes sure that the electrode does not dissolve into the electrolyte.
The density of the active material. A balance should be found between the amount of active material, the conductive agent and the binder. Since the active material is the key factor in the electrode, the slurry should be designed such that the density of the active material is as high as possible while the conductive agent and the binder still function properly.
These properties can be influenced in the production of the electrodes in a number of manners. The most important step in the manufacturing of the electrodes is creating the electrode slurry. As can be seen above, the important properties of the electrode all have to do with the even distribution of the components of the electrode. Therefore, it is very important that the electrode slurry be as homogeneous as possible. Multiple procedures have been developed to improve this mixing stage and current research is still being done.
Electrodes in lithium ion batteries
A modern application of electrodes is in lithium-ion batteries (Li-ion batteries). A Li-ion battery is a kind of electrochemical cell in which lithium ions carry the charge between the electrodes.
Furthermore, a Li-ion battery is an example of a secondary cell since it is rechargeable. It can act as either a galvanic or an electrolytic cell. Li-ion batteries use lithium ions as the solute in the electrolyte, dissolved in an organic solvent. Lithium electrodes were first studied by Gilbert N. Lewis and Frederick G. Keyes in 1913. In the following century these electrodes were used to create and study the first Li-ion batteries. Li-ion batteries are very popular due to their great performance. Applications include mobile phones and electric cars. Due to their popularity, much research is being done to reduce the cost and increase the safety of Li-ion batteries. An integral part of Li-ion batteries are their anodes and cathodes; much research is therefore being done into increasing the efficiency and safety of these electrodes specifically, and into reducing their cost.
Cathodes
In Li-ion batteries, the cathode consists of an intercalated lithium compound (a layered material consisting of layers of molecules composed of lithium and other elements). A common element in these compounds is cobalt; another frequently used element is manganese. The best choice of compound usually depends on the application of the battery. Advantages of cobalt-based compounds over manganese-based compounds are their high specific heat capacity, high volumetric heat capacity, low self-discharge rate, high discharge voltage and high cycle durability. There are, however, also drawbacks to cobalt-based compounds, such as their high cost and their low thermostability. Manganese has similar advantages and a lower cost, but its main problem is that it tends to dissolve into the electrolyte over time. For this reason, cobalt is still the most common element used in lithium compounds. Much research is being done into finding new materials that can be used to create cheaper and longer-lasting Li-ion batteries. For example, Chinese and American researchers have demonstrated that ultralong single-wall carbon nanotubes significantly enhance lithium iron phosphate cathodes. By creating a highly efficient conductive network that securely binds lithium iron phosphate particles, adding carbon nanotubes as a conductive additive at a dosage of just 0.5 wt.% helps cathodes achieve a remarkable rate capacity of 161.5 mAh g−1 at 0.5 C and 130.2 mAh g−1 at 5 C, while maintaining 87.4% capacity retention after 200 cycles at 2 C.
Anodes
The anodes used in mass-produced Li-ion batteries are either carbon based (usually graphite) or made of spinel lithium titanate (Li4Ti5O12). Graphite anodes have been successfully implemented in many modern commercially available batteries due to graphite's low cost, longevity and high energy density. However, graphite presents issues of dendrite growth, which risks shorting the battery and poses a safety issue. Li4Ti5O12 has the second-largest market share of anodes due to its stability and good rate capability, but it suffers from challenges such as low capacity. During the early 2000s, silicon anode research began picking up pace, and silicon became one of the decade's most promising candidates for future lithium-ion battery anodes. Silicon has one of the highest gravimetric capacities compared to graphite and Li4Ti5O12, as well as a high volumetric one. Furthermore, silicon has the advantage of operating under a reasonable open circuit voltage without parasitic lithium reactions. However, silicon anodes have a major issue of volumetric expansion during lithiation of around 360%. This expansion may pulverize the anode, resulting in poor performance. To fix this problem, scientists have looked into varying the dimensionality of the silicon; many studies have investigated Si nanowires, Si tubes, and Si sheets. As a result, composite hierarchical Si anodes have become the major technology for future applications in lithium-ion batteries. By the early 2020s, the technology was reaching commercial levels, with factories being built for mass production of anodes in the United States. Furthermore, metallic lithium is another possible candidate for the anode. It boasts a higher specific capacity than silicon, but comes with the drawback of working with highly unstable metallic lithium. Similarly to graphite anodes, dendrite formation is a major limitation of metallic lithium, with the solid electrolyte interphase being a major design challenge. In the end, if stabilized, metallic lithium would be able to produce batteries that hold the most charge while being the lightest. In recent years, researchers have conducted several studies on the use of single-wall carbon nanotubes (SWCNTs) as conductive additives. These SWCNTs help to preserve electron conduction, ensure stable electrochemical reactions, and maintain uniform volume changes during cycling, effectively reducing anode pulverization.
Mechanical properties
A common failure mechanism of batteries is mechanical shock, which breaks either the electrode or the system's container, leading to poor conductivity and electrolyte leakage. However, the relevance of the mechanical properties of electrodes goes beyond resistance to collisions. During standard operation, the incorporation of ions into electrodes leads to a change in volume; this is well exemplified by Si electrodes in lithium-ion batteries, which expand by around 300% during lithiation. Such change may lead to deformations in the lattice and, therefore, stresses in the material. The origin of stresses may be geometric constraints in the electrode or inhomogeneous plating of the ion. This phenomenon is very concerning as it may lead to electrode fracture and performance loss. Thus, mechanical properties are crucial to enable the development of new electrodes for long-lasting batteries. A possible strategy for measuring the mechanical behavior of electrodes during operation is nanoindentation. The method is able to analyze how the stresses evolve during the electrochemical reactions, making it a valuable tool in evaluating possible pathways for coupling mechanical behavior and electrochemistry.
More than just affecting the electrode's morphology, stresses can also impact electrochemical reactions. While chemical driving forces are usually larger in magnitude than mechanical energies, this is not true for Li-ion batteries. A study by Larché established a direct relation between the applied stress and the chemical potential of the electrode. Though it neglects multiple variables, such as the variation of elastic constraints, it subtracts from the total chemical potential the elastic energy induced by the stress:

μ = μ⁰ + kT ln(γx) − Ωσ

In this equation, μ represents the chemical potential, with μ⁰ being its reference value. T stands for the temperature and k the Boltzmann constant. The term γ inside the logarithm is the activity and x is the ratio of the ion to the total composition of the electrode. The novel term Ω is the partial molar volume of the ion in the host and σ corresponds to the mean stress felt by the system. The result of this equation is that diffusion, which depends on the chemical potential, is affected by the added stress, which therefore changes the battery's performance. Furthermore, mechanical stresses may also impact the electrode's solid-electrolyte-interphase layer, the interface that regulates ion and charge transfer and can be degraded by stress. When that happens, more ions in the solution are consumed to re-form it, diminishing the overall efficiency of the system.
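As a numerical illustration of Larché's relation (a minimal sketch; every parameter value below is invented for illustration and is not taken from any measurement):

import math

kB = 1.380649e-23  # Boltzmann constant, J/K

def chemical_potential(mu0, T, gamma, x, omega, sigma):
    # mu = mu0 + kB*T*ln(gamma*x) - omega*sigma  (Larche's stress correction)
    # omega: partial molar volume of the ion in the host (m^3)
    # sigma: mean stress felt by the host material (Pa)
    return mu0 + kB * T * math.log(gamma * x) - omega * sigma

# A tensile mean stress (sigma > 0) lowers the chemical potential, so ions
# tend to diffuse toward regions under tension; the shift is simply -omega*sigma.
unstressed = chemical_potential(0.0, 298.0, 1.0, 0.5, 1.0e-29, 0.0)
stressed = chemical_potential(0.0, 298.0, 1.0, 0.5, 1.0e-29, 1.0e8)
print(stressed - unstressed)  # about -1.0e-21 J, i.e. -omega * sigma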
Other anodes and cathodes
In a vacuum tube or a semiconductor having polarity (diodes, electrolytic capacitors) the anode is the positive (+) electrode and the cathode the negative (−). The electrons enter the device through the cathode and exit the device through the anode. Many devices have other electrodes to control operation, e.g., base, gate, control grid.
In a three-electrode cell, a counter electrode, also called an auxiliary electrode, is used only to make a connection to the electrolyte so that a current can be applied to the working electrode. The counter electrode is usually made of an inert material, such as a noble metal or graphite, to keep it from dissolving.
Welding electrodes
In arc welding, an electrode is used to conduct current through a workpiece to fuse two pieces together. Depending upon the process, the electrode is either consumable, in the case of gas metal arc welding or shielded metal arc welding, or non-consumable, such as in gas tungsten arc welding. For a direct current system, the weld rod or stick may be a cathode for a filling type weld or an anode for other welding processes. For an alternating current arc welder, the welding electrode would not be considered an anode or cathode.
Alternating current electrodes
For electrical systems which use alternating current, the electrodes are the connections from the circuitry to the object to be acted upon by the electric current but are not designated anode or cathode because the direction of flow of the electrons changes periodically, usually many times per second.
Chemically modified electrodes
Chemically modified electrodes are electrodes that have their surfaces chemically modified to change the electrode's physical, chemical, electrochemical, optical, electrical, and transportive properties. These electrodes are used for advanced purposes in research and investigation.
Uses
Electrodes are used to provide current through nonmetal objects to alter them in numerous ways and to measure conductivity for numerous purposes. Examples include:
Electrodes for fuel cells
Electrodes for medical purposes, such as EEG (for recording brain activity), ECG (recording heart beats), ECT (electrical brain stimulation), defibrillator (recording and delivering cardiac stimulation)
Electrodes for electrophysiology techniques in biomedical research
Electrodes for execution by the electric chair
Electrodes for electroplating
Electrodes for arc welding
Electrodes for cathodic protection
Electrodes for grounding
Electrodes for chemical analysis using electrochemical methods
Nanoelectrodes for high-precision measurements in nanoelectrochemistry
Inert electrodes for electrolysis (made of platinum)
Membrane electrode assembly
Electrodes for Taser electroshock weapon
| Physical sciences | Electrical circuits | Physics |
10024 | https://en.wikipedia.org/wiki/MDMA | MDMA | 3,4-Methylenedioxymethamphetamine (MDMA), commonly known as ecstasy (tablet form), and molly (crystal form), is an empathogen–entactogenic drug with stimulant and minor psychedelic properties. In studies, it has been used alongside psychotherapy in the treatment of post-traumatic stress disorder (PTSD) and social anxiety in autism spectrum disorder. The purported pharmacological effects that may be prosocial include altered sensations, increased energy, empathy, and pleasure. When taken by mouth, effects begin in 30 to 45 minutes and last three to six hours.
MDMA was first synthesized in 1912 by Merck chemist Anton Köllisch. It was used to enhance psychotherapy beginning in the 1970s and became popular as a street drug in the 1980s. MDMA is commonly associated with dance parties, raves, and electronic dance music. Tablets sold as ecstasy may be mixed with other substances such as ephedrine, amphetamine, and methamphetamine. In 2016, about 21 million people between the ages of 15 and 64 used ecstasy (0.3% of the world population). This was broadly similar to the percentage of people who use cocaine or amphetamines, but lower than for cannabis or opioids. In the United States, as of 2017, about 7% of people have used MDMA at some point in their lives and 0.9% have used it in the last year. The lethal risk from one dose of MDMA is estimated to be from 1 death in 20,000 instances to 1 death in 50,000 instances.
Short-term adverse effects include grinding of the teeth, blurred vision, sweating, and a rapid heartbeat, and extended use can also lead to addiction, memory problems, paranoia, and difficulty sleeping. Deaths have been reported due to increased body temperature and dehydration. Following use, people often feel depressed and tired, although this effect does not appear in clinical use, suggesting that it is not a direct result of MDMA administration. MDMA acts primarily by increasing the release of the neurotransmitters serotonin, dopamine, and norepinephrine in parts of the brain. It belongs to the substituted amphetamine class of drugs. MDMA is structurally similar to mescaline (a psychedelic), methamphetamine (a stimulant), and endogenous monoamine neurotransmitters such as serotonin, norepinephrine, and dopamine.
MDMA has limited approved medical uses in a small number of countries, but is illegal in most jurisdictions. In the United States, the Food and Drug Administration (FDA) is evaluating the drug for clinical use. Canada has allowed limited distribution of MDMA upon application to and approval by Health Canada. In Australia, it may be prescribed in the treatment of PTSD by specifically authorised psychiatrists.
Effects
In general, MDMA users report feeling the onset of subjective effects within 30 to 60 minutes of oral consumption and reaching peak effect at 75 to 120 minutes, which then plateaus for about 3.5 hours. The desired short-term psychoactive effects of MDMA have been reported to include:
Euphoria – a sense of general well-being and happiness
Increased self-confidence, sociability, and perception of facilitated communication
Entactogenic effects—increased empathy or feelings of closeness with others and oneself
Dilated pupils
Relaxation and reduced anxiety
Increased emotionality
A sense of inner peace
Mild hallucination
Enhanced sensation, perception, or sexuality
Altered sense of time
The experience elicited by MDMA depends on the dose, setting, and user. The variability of the induced altered state is lower compared to other psychedelics. For example, MDMA used at parties is associated with high motor activity, reduced sense of identity, and poor awareness of surroundings. Use of MDMA individually or in small groups in a quiet environment, and when concentrating, is associated with increased lucidity, concentration, sensitivity to aesthetic aspects of the environment, enhanced awareness of emotions, and improved capability of communication. In psychotherapeutic settings, MDMA effects have been characterized by infantile ideas, mood lability, and memories and moods connected with childhood experiences.
MDMA has been described as an "empathogenic" drug because of its empathy-producing effects. Results of several studies show the effects of increased empathy with others. In tests of medium and high doses, MDMA showed increases along the hedonic and arousal continuum. Its effect of increasing sociability is consistent, while its effects on empathy have been more mixed.
Uses
Recreational
MDMA is often considered the drug of choice within the rave culture and is also used at clubs, festivals, and house parties. In the rave environment, the sensory effects of music and lighting are often highly synergistic with the drug. The psychedelic amphetamine quality of MDMA offers multiple appealing aspects to users in the rave setting. Some users enjoy the feeling of mass communion from the inhibition-reducing effects of the drug, while others use it as party fuel because of the drug's stimulatory effects. MDMA is used less often than other stimulants, typically less than once per week.
MDMA is sometimes taken in conjunction with other psychoactive drugs such as LSD, psilocybin mushrooms, 2C-B, and ketamine. The combination with LSD is called "candy-flipping". The combination with 2C-B is called "nexus flipping". For this combination, most people take the MDMA first, wait until the peak is over, and then take the 2C-B.
MDMA is often co-administered with alcohol, methamphetamine, and prescription drugs such as SSRIs, with which MDMA has several drug-drug interactions. Three life-threatening cases of MDMA co-administered with ritonavir have been reported; ritonavir has severe and dangerous drug-drug interactions with a wide range of psychoactive, antipsychotic, and non-psychoactive drugs.
Medical
As of 2023, MDMA therapies have only been approved for research purposes, with no widely accepted medical indications, although this varies by jurisdiction. Before it was widely banned, it saw limited use in psychotherapy. In 2017 the United States Food and Drug Administration (FDA) granted breakthrough therapy designation for MDMA-assisted psychotherapy for post-traumatic stress disorder (PTSD), with some preliminary evidence that MDMA may facilitate psychotherapy efficacy for PTSD. Pilot studies indicate that MDMA-assisted psychotherapy may be beneficial in treating social anxiety in autistic adults. In these pilot studies, the vast majority of participants reported increased feelings of empathy that persisted after the therapy sessions.
Other
Small doses of MDMA are used by some religious practitioners as an entheogen to enhance prayer or meditation. MDMA has been used as an adjunct to New Age spiritual practices.
Forms
MDMA has become widely known as ecstasy (shortened "E", "X", or "XTC"), usually referring to its tablet form, although this term may also include the presence of possible adulterants or diluents. The UK term "mandy" and the US term "molly" colloquially refer to MDMA in a crystalline powder form that is thought to be free of adulterants. MDMA is also sold in the form of the hydrochloride salt, either as loose crystals or in gelcaps. MDMA tablets can sometimes be found in a shaped form that may depict characters from popular culture. These are sometimes collectively referred to as "fun tablets".
Partly due to the global supply shortage of sassafras oil—a problem largely assuaged by use of improved or alternative modern methods of synthesis—the purity of substances sold as molly has been found to vary widely. Some of these substances contain methylone, ethylone, MDPV, mephedrone, or any other of the group of compounds commonly known as bath salts, in addition to, or in place of, MDMA. Powdered MDMA ranges from pure MDMA to crushed tablets with 30–40% purity. MDMA tablets typically have low purity due to bulking agents that are added to dilute the drug and increase profits (notably lactose) and binding agents. Tablets sold as ecstasy sometimes contain 3,4-methylenedioxyamphetamine (MDA), 3,4-methylenedioxyethylamphetamine (MDEA), other amphetamine derivatives, caffeine, opiates, or painkillers. Some tablets contain little or no MDMA. The proportion of seized ecstasy tablets with MDMA-like impurities has varied annually and by country. The average content of MDMA in a preparation is 70 to 120 mg, with the purity having increased since the 1990s.
MDMA is usually consumed by mouth. It is also sometimes snorted.
Adverse effects
Short-term
Acute adverse effects are usually the result of high or multiple doses, although single dose toxicity can occur in susceptible individuals. The most serious short-term physical health risks of MDMA are hyperthermia and dehydration. Cases of life-threatening or fatal hyponatremia (excessively low sodium concentration in the blood) have developed in MDMA users attempting to prevent dehydration by consuming excessive amounts of water without replenishing electrolytes.
The immediate adverse effects of MDMA use can include:
Bruxism (grinding and clenching of the teeth)
Dehydration
Diarrhea
Erectile dysfunction
Hyperthermia
Increased wakefulness or insomnia
Increased perspiration and sweating
Increased heart rate and blood pressure
Increased psychomotor activity
Loss of appetite
Nausea and vomiting
Visual and auditory hallucinations (rarely)
Other adverse effects that may occur or persist for up to a week following cessation of moderate MDMA use include:
Physiological
Insomnia
Loss of appetite
Tiredness or lethargy
Trismus (lockjaw)
Psychological
Anhedonia
Anxiety or paranoia
Depression
Impulsiveness
Irritability
Memory impairment
Restlessness
Administration of MDMA to mice causes DNA damage in their brain, especially when the mice are sleep deprived. Even at the very low doses that are comparable to those self-administered by humans, MDMA causes oxidative stress and both single and double-strand breaks in the DNA of the hippocampus region of the mouse brain.
Long-term
The long-term effects of MDMA on human brain structure and function have not been fully determined. However, there is consistent evidence of structural and functional deficits in MDMA users with high lifetime exposure. These structural or functional changes appear to be dose-dependent and may be less prominent in MDMA users with only a moderate (typically <50 doses used and <100 tablets consumed) lifetime exposure. Nonetheless, moderate MDMA use may still be neurotoxic, and what constitutes moderate use is not clearly established.
Furthermore, it is not clear yet whether "typical" recreational users of MDMA (1 to 2 pills of 75 to 125mg MDMA or analogue every 1 to 4 weeks) will develop neurotoxic brain lesions. Long-term exposure to MDMA in humans has been shown to produce marked neurodegeneration in striatal, hippocampal, prefrontal, and occipital serotonergic axon terminals. Neurotoxic damage to serotonergic axon terminals has been shown to persist for more than two years. Elevations in brain temperature from MDMA use are positively correlated with MDMA-induced neurotoxicity. However, most studies on MDMA and serotonergic neurotoxicity in humans focus more on heavy users who consume as much as seven times or more the amount that most users report taking. The evidence for the presence of serotonergic neurotoxicity in casual users who take lower doses less frequently is not conclusive.
However, adverse neuroplastic changes to brain microvasculature and white matter have been observed to occur in humans using low doses of MDMA. Reduced gray matter density in certain brain structures has also been noted in human MDMA users. Global reductions in gray matter volume, thinning of the parietal and orbitofrontal cortices, and decreased hippocampal activity have been observed in long-term users. The effects established so far for recreational ecstasy use lie in the moderate-to-severe range for serotonin transporter reduction.
Impairments in multiple aspects of cognition, including attention, learning, memory, visual processing, and sleep, have been found in regular MDMA users. The magnitude of these impairments is correlated with lifetime MDMA usage and are partially reversible with abstinence. Several forms of memory are impaired by chronic ecstasy use; however, the effects for memory impairments in ecstasy users are generally small overall. MDMA use is also associated with increased impulsivity and depression.
Serotonin depletion following MDMA use can cause depression in subsequent days. In some cases, depressive symptoms persist for longer periods. Some studies indicate repeated recreational use of ecstasy is associated with depression and anxiety, even after quitting the drug. Depression is one of the main reasons for cessation of use.
At high doses, MDMA induces a neuroimmune response that, through several mechanisms, increases the permeability of the blood–brain barrier, thereby making the brain more susceptible to environmental toxins and pathogens. In addition, MDMA has immunosuppressive effects in the peripheral nervous system and pro-inflammatory effects in the central nervous system.
MDMA may increase the risk of cardiac valvulopathy in heavy or long-term users due to activation of serotonin 5-HT2B receptors. MDMA induces cardiac epigenetic changes in DNA methylation, particularly hypermethylation changes.
Reinforcement disorders
Approximately 60% of MDMA users experience withdrawal symptoms when they stop taking MDMA. Some of these symptoms include fatigue, loss of appetite, depression, and trouble concentrating. Tolerance to some of the desired and adverse effects of MDMA is expected to occur with consistent MDMA use. A 2007 delphic analysis of a panel of experts in pharmacology, psychiatry, law, policing and others estimated MDMA to have a psychological dependence and physical dependence potential roughly three-fourths to four-fifths that of cannabis.
MDMA has been shown to induce ΔFosB in the nucleus accumbens. Because MDMA releases dopamine in the striatum, the mechanisms by which it induces ΔFosB in the nucleus accumbens are analogous to those of other dopaminergic psychostimulants. Therefore, chronic use of MDMA at high doses can result in altered brain structure and drug addiction, which occur as a consequence of ΔFosB overexpression in the nucleus accumbens. MDMA is less addictive than other stimulants such as methamphetamine and cocaine. Compared with amphetamine, MDMA and its metabolite MDA are less reinforcing.
One study found approximately 15% of chronic MDMA users met the DSM-IV diagnostic criteria for substance dependence. However, there is little evidence for a specific diagnosable MDMA dependence syndrome because MDMA is typically used relatively infrequently.
There are currently no medications to treat MDMA addiction.
During pregnancy
MDMA is a moderately teratogenic drug (i.e., it is toxic to the fetus). In utero exposure to MDMA is associated with neurotoxicity, cardiotoxicity, and impaired motor functioning. Motor delays may be temporary during infancy or long-term. The severity of these developmental delays increases with heavier MDMA use. MDMA has been shown to promote the survival of fetal dopaminergic neurons in culture.
Overdose
MDMA overdose symptoms vary widely due to the involvement of multiple organ systems. Some of the more overt overdose symptoms are listed in the table below. The number of instances of fatal MDMA intoxication is low relative to its usage rates. In most fatalities, MDMA was not the only drug involved. Acute toxicity is mainly caused by serotonin syndrome and sympathomimetic effects. Sympathomimetic side effects can be managed with carvedilol. MDMA's toxicity in overdose may be exacerbated by caffeine, with which it is frequently cut in order to increase volume. A scheme for management of acute MDMA toxicity has been published focusing on treatment of hyperthermia, hyponatraemia, serotonin syndrome, and multiple organ failure.
Interactions
A number of drug interactions can occur between MDMA and other drugs, including serotonergic drugs. MDMA also interacts with drugs which inhibit CYP450 enzymes, like ritonavir (Norvir), particularly CYP2D6 inhibitors. Life-threatening reactions and death have occurred in people who took MDMA while on ritonavir. Bupropion, a strong CYP2D6 inhibitor, has been found to increase MDMA exposure with administration of MDMA. Concurrent use of MDMA high dosages with another serotonergic drug can result in a life-threatening condition called serotonin syndrome. Severe overdose resulting in death has also been reported in people who took MDMA in combination with certain monoamine oxidase inhibitors (MAOIs), such as phenelzine (Nardil), tranylcypromine (Parnate), or moclobemide (Aurorix, Manerix). Serotonin reuptake inhibitors (SRIs) such as citalopram (Celexa), duloxetine (Cymbalta), fluoxetine (Prozac), and paroxetine (Paxil) have been shown to block most of the subjective effects of MDMA. Norepinephrine reuptake inhibitors (NRIs) such as reboxetine (Edronax) have been found to reduce emotional excitation and feelings of stimulation with MDMA but do not appear to influence its entactogenic or mood-elevating effects.
MDMA induces the release of monoamine neurotransmitters and thereby acts as an indirectly acting sympathomimetic and produces a variety of cardiostimulant effects. It dose-dependently increases heart rate, blood pressure, and cardiac output. SRIs like citalopram and paroxetine, as well as the serotonin 5-HT2A receptor antagonist ketanserin, have been found to partially block the increases in heart rate and blood pressure with MDMA. It is notable in this regard that serotonergic psychedelics such as psilocybin, which act as serotonin 5-HT2A receptor agonists, likewise have sympathomimetic effects. The NRI reboxetine and the serotonin–norepinephrine reuptake inhibitor (SNRI) duloxetine block MDMA-induced increases in heart rate and blood pressure. Conversely, bupropion, a norepinephrine–dopamine reuptake inhibitor (NDRI) with only weak dopaminergic activity, reduced MDMA-induced heart rate and circulating norepinephrine increases but did not affect MDMA-induced blood pressure increases. On the other hand, the robust NDRI methylphenidate, which has sympathomimetic effects of its own, has been found to augment the cardiovascular effects and increases in circulating norepinephrine and epinephrine levels induced by MDMA.
The non-selective beta blocker pindolol blocked MDMA-induced increases in heart rate but not blood pressure. The α2-adrenergic receptor agonist clonidine did not affect the cardiovascular effects of MDMA, though it reduced blood pressure. The α1-adrenergic receptor antagonists doxazosin and prazosin blocked or reduced MDMA-induced blood pressure increases but augmented MDMA-induced heart rate and cardiac output increases. The dual α1- and β-adrenergic receptor blocker carvedilol reduced MDMA-induced heart rate and blood pressure increases. In contrast to the cases of serotonergic and noradrenergic agents, the dopamine D2 receptor antagonist haloperidol did not affect the cardiovascular responses to MDMA. Due to the theoretical risk of "unopposed α-stimulation" and possible consequences like coronary vasospasm, it has been suggested that dual α1- and β-adrenergic receptor antagonists like carvedilol and labetalol, rather than selective beta blockers, should be used in the management of stimulant-induced sympathomimetic toxicity, for instance in the context of overdose.
Pharmacology
Pharmacodynamics
MDMA is an entactogen or empathogen, as well as a stimulant, euphoriant, and weak psychedelic. It is a substrate of the monoamine transporters (MATs) and acts as a monoamine releasing agent (MRA). The drug is specifically a well-balanced serotonin–norepinephrine–dopamine releasing agent (SNDRA). To a lesser extent, MDMA also acts as a serotonin–norepinephrine–dopamine reuptake inhibitor (SNDRI). MDMA enters monoaminergic neurons via the MATs and then, via poorly understood mechanisms, reverses the direction of these transporters to produce efflux of the monoamine neurotransmitters rather than the usual reuptake. Induction of monoamine efflux by amphetamines in general may involve intracellular Na+ and Ca2+ elevation and PKC and CaMKIIα activation. MDMA also acts on the vesicular monoamine transporter 2 (VMAT2) on synaptic vesicles to increase the cytosolic concentrations of the monoamine neurotransmitters available for efflux. By inducing release and reuptake inhibition of serotonin, norepinephrine, and dopamine, MDMA increases levels of these neurotransmitters in the brain and periphery.
In addition to its actions as an SNDRA, MDMA directly but more modestly interacts with a number of monoamine and other receptors. It is a low-potency partial agonist of the serotonin 5-HT2 receptors, including of the serotonin 5-HT2A, 5-HT2B, and 5-HT2C receptors. The drug also interacts with α2-adrenergic receptors, with the sigma σ1 and σ2 receptors, and with the imidazoline I1 receptor. It is thought that agonism of the serotonin 5-HT2A receptor by MDMA may mediate the weak psychedelic effects of the drug in humans. However, findings in this area appear to be conflicting. Likewise, findings on MDMA and induction of the head-twitch response (HTR), a behavioral proxy of psychedelic-like effects, are contradictory in animals. Along with the preceding receptor interactions, MDMA is a potent partial agonist of the rodent trace amine-associated receptor 1 (TAAR1). Conversely however, it is far weaker in terms of potency as an agonist of the human TAAR1. Moreover, MDMA acts as a weak partial agonist or antagonist of the human TAAR1 rather than as an efficacious agonist. In relation to this, MDMA has been said to be inactive as a human TAAR1 agonist. TAAR1 activation is thought to auto-inhibit and constrain the effects of amphetamines that possess TAAR1 agonism, for instance MDMA in rodents.
Elevation of serotonin, norepinephrine, and dopamine levels by MDMA is believed to mediate most of the drug's effects, including its entactogenic, stimulant, euphoriant, hyperthermic, and sympathomimetic effects. The entactogenic effects of MDMA, including increased sociability, empathy, feelings of closeness, and reduced aggression, are thought to be mainly due to induction of serotonin release. The exact serotonin receptors responsible for these effects are unclear, but may include the serotonin 5-HT1A receptor, 5-HT1B receptor, and 5-HT2A receptor, as well as 5-HT1A receptor-mediated oxytocin release and consequent activation of the oxytocin receptor. Induction of dopamine release is thought to be importantly involved in the stimulant and euphoriant effects of MDMA, while induction of norepinephrine release and serotonin 5-HT2A receptor stimulation are believed to mediate its sympathomimetic effects. MDMA has been associated with a unique subjective "magic" or euphoria that few or no other known entactogens are said to fully reproduce. The mechanisms underlying this property of MDMA are unknown, but it has been theorized to be due to a very specific mixture and balance of pharmacological activities, including combined serotonin, norepinephrine, and dopamine release and direct serotonin receptor agonism. Repeated activation of serotonin 5-HT2B receptors by MDMA is thought to result in risk of valvular heart disease (VHD) and primary pulmonary hypertension (PPH). MDMA has been associated with serotonergic neurotoxicity. This may be due to formation of toxic MDMA metabolites and/or induction of simultaneous serotonin and dopamine release, with consequent uptake of dopamine into serotonergic neurons and breakdown into toxic species.
MDMA is a racemic mixture of two enantiomers, (S)-MDMA and (R)-MDMA. (S)-MDMA is much more potent as an SNDRA in vitro and in producing MDMA-like subjective effects in humans than (R)-MDMA. By contrast, (R)-MDMA acts as a lower-potency serotonin–norepinephrine releasing agent (SNRA) with weak or negligible effects on dopamine. Relatedly, (R)-MDMA shows weak or negligible stimulant-like and rewarding effects in animals. Both (S)-MDMA and (R)-MDMA produce entactogen-type effects in animals and humans. In addition, both (S)-MDMA and (R)-MDMA are weak agonists of the serotonin 5-HT2 receptors. (R)-MDMA is more potent and efficacious as a serotonin 5-HT2A and 5-HT2B receptor agonist than (S)-MDMA, whereas (S)-MDMA is somewhat more potent as an agonist of the serotonin 5-HT2C receptor. Despite its greater serotonin 5-HT2A receptor agonism however, (R)-MDMA did not produce more psychedelic-like effects than (S)-MDMA in humans.
MDMA produces 3,4-methylenedioxyamphetamine (MDA) as a minor active metabolite. Peak levels of MDA are about 5 to 10% of those of MDMA and total exposure to MDA is almost 10% of that of MDMA with oral MDMA administration. As a result, MDA may contribute to some extent to the effects of MDMA. MDA is an entactogen, stimulant, and weak psychedelic similarly to MDMA. Like MDMA, it acts as a potent and well-balanced SNDRA and as a weak serotonin 5-HT2 receptor agonist. However, MDA shows much more potent and efficacious serotonin 5-HT2A, 5-HT2B, and 5-HT2C receptor agonism than MDMA. Accordingly, MDA produces greater psychedelic effects than MDMA in humans and might particularly contribute to the mild psychedelic-like effects of MDMA. On the other hand, MDA may also be importantly involved in toxicity of MDMA, such as cardiac valvulopathy.
The duration of action of MDMA (3–6 hours) is much shorter than its elimination half-life (8–9 hours) would imply. In relation to this, MDMA's duration and the offset of its effects appear to be determined more by rapid acute tolerance than by circulating drug concentrations. Similar findings have been made for amphetamine and methamphetamine. One mechanism by which tolerance to MDMA may occur is internalization of the serotonin transporter (SERT). Although MDMA and serotonin are not significant TAAR1 agonists in humans, TAAR1 activation by MDMA may result in SERT internalization.
Pharmacokinetics
The MDMA concentration in the bloodstream starts to rise after about 30 minutes and reaches its maximum between 1.5 and 3 hours after ingestion. It is then slowly metabolized and excreted, with levels of MDMA and its metabolites decreasing to half their peak concentration over the next several hours. The duration of action of MDMA is about 3 to 6 hours. Brain serotonin levels are depleted after MDMA administration but typically return to normal within 24 to 48 hours.
Metabolites of MDMA that have been identified in humans include 3,4-methylenedioxyamphetamine (MDA), 4-hydroxy-3-methoxymethamphetamine (HMMA), 4-hydroxy-3-methoxyamphetamine (HMA), 3,4-dihydroxyamphetamine (DHA) (also called alpha-methyldopamine (α-Me-DA)), 3,4-methylenedioxyphenylacetone (MDP2P), and 3,4-methylenedioxy-N-hydroxyamphetamine (MDOH). The contributions of these metabolites to the psychoactive and toxic effects of MDMA are an area of active research. 80% of MDMA is metabolised in the liver, and about 20% is excreted unchanged in the urine.
MDMA is known to be metabolized by two main metabolic pathways: (1) O-demethylenation followed by catechol-O-methyltransferase (COMT)-catalyzed methylation or glucuronide/sulfate conjugation; and (2) N-dealkylation, deamination, and oxidation to the corresponding benzoic acid derivatives conjugated with glycine. The metabolism may be primarily by cytochrome P450 (CYP450) enzymes CYP2D6 and CYP3A4 and COMT. Complex, nonlinear pharmacokinetics arise via autoinhibition of CYP2D6 and CYP2D8, resulting in zeroth order kinetics at higher doses. It is thought that this can result in sustained and higher concentrations of MDMA if the user takes consecutive doses of the drug.
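To make the nonlinear-kinetics point concrete, here is a minimal Python sketch of saturable (Michaelis–Menten-type) elimination, the standard model for enzyme-limited clearance; the rate parameters are invented for illustration and are not clinical values:

def simulate_elimination(c0, vmax, km, dt=0.01, t_end=24.0):
    # Integrate dC/dt = -vmax * C / (km + C) with a simple Euler step.
    # When C >> km the decay is nearly constant-rate (zeroth order);
    # when C << km it is proportional to C (first order, exponential).
    c, t, history = c0, 0.0, []
    while t <= t_end:
        history.append((round(t, 2), round(c, 4)))
        c -= vmax * c / (km + c) * dt
        t += dt
    return history

# With these made-up numbers, doubling the starting concentration more than
# doubles the time needed to fall to a given level, mirroring how consecutive
# doses can produce disproportionately sustained drug concentrations.
trace = simulate_elimination(c0=2.0, vmax=0.3, km=0.5)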
MDMA and metabolites are primarily excreted as conjugates, such as sulfates and glucuronides. MDMA is a chiral compound and has been almost exclusively administered as a racemate. However, the two enantiomers have been shown to exhibit different kinetics. The disposition of MDMA may also be stereoselective, with the S-enantiomer having a shorter elimination half-life and greater excretion than the R-enantiomer. Evidence suggests that the area under the blood plasma concentration versus time curve (AUC) was two to four times higher for the (R)-enantiomer than the (S)-enantiomer after a 40mg oral dose in human volunteers. Likewise, the plasma half-life of (R)-MDMA was significantly longer than that of the (S)-enantiomer (5.8±2.2 hours vs 3.6±0.9 hours). However, because MDMA excretion and metabolism have nonlinear kinetics, the half-lives would be higher at more typical doses (100mg is sometimes considered a typical dose).
Chemistry
MDMA is in the substituted methylenedioxyphenethylamine and substituted amphetamine classes of chemicals. As a free base, MDMA is a colorless oil insoluble in water. The most common salt of MDMA is the hydrochloride salt; pure MDMA hydrochloride is water-soluble and appears as a white or off-white powder or crystal.
Synthesis
There are numerous methods available to synthesize MDMA via different intermediates. The original MDMA synthesis described in Merck's patent involves brominating safrole to 1-(3,4-methylenedioxyphenyl)-2-bromopropane and then reacting this adduct with methylamine. Most illicit MDMA is synthesized using MDP2P (3,4-methylenedioxyphenyl-2-propanone) as a precursor. MDP2P in turn is generally synthesized from piperonal, safrole or isosafrole. One method is to isomerize safrole to isosafrole in the presence of a strong base, and then oxidize isosafrole to MDP2P. Another method uses the Wacker process to oxidize safrole directly to the MDP2P intermediate with a palladium catalyst. Once the MDP2P intermediate has been prepared, a reductive amination leads to racemic MDMA (an equal parts mixture of (R)-MDMA and (S)-MDMA). Relatively small quantities of essential oil are required to make large amounts of MDMA. The essential oil of Ocotea cymbarum, for example, typically contains between 80 and 94% safrole. This allows 500mL of the oil to produce between 150 and 340 grams of MDMA.
Detection in body fluids
MDMA and MDA may be quantitated in blood, plasma or urine to monitor for use, confirm a diagnosis of poisoning or assist in the forensic investigation of a traffic or other criminal violation or a sudden death. Some drug abuse screening programs rely on hair, saliva, or sweat as specimens. Most commercial amphetamine immunoassay screening tests cross-react significantly with MDMA or its major metabolites, but chromatographic techniques can easily distinguish and separately measure each of these substances. The concentrations of MDA in the blood or urine of a person who has taken only MDMA are, in general, less than 10% those of the parent drug.
History
Early research and use
MDMA was first synthesized in 1912 by Merck chemist Anton Köllisch. At the time, Merck was interested in developing substances that stopped abnormal bleeding. Merck wanted to avoid an existing patent held by Bayer for one such compound: hydrastinine. Köllisch developed a preparation of a hydrastinine analogue, methylhydrastinine, at the request of fellow lab members, Walther Beckh and Otto Wolfes. MDMA (called methylsafrylamin, safrylmethylamin or N-Methyl-a-Methylhomopiperonylamin in Merck laboratory reports) was an intermediate compound in the synthesis of methylhydrastinine. Merck was not interested in MDMA itself at the time. On 24 December 1912, Merck filed two patent applications that described the synthesis and some chemical properties of MDMA and its subsequent conversion to methylhydrastinine.
Merck records indicate its researchers returned to the compound sporadically. A 1920 Merck patent describes a chemical modification to MDMA. In 1927, Max Oberlin studied the pharmacology of MDMA while searching for substances with effects similar to adrenaline or ephedrine, the latter being structurally similar to MDMA. Compared to ephedrine, Oberlin observed that it had similar effects on vascular smooth muscle tissue, stronger effects at the uterus, and no "local effect at the eye". MDMA was also found to have effects on blood sugar levels comparable to high doses of ephedrine. Oberlin concluded that the effects of MDMA were not limited to the sympathetic nervous system. Research was stopped "particularly due to a strong price increase of safrylmethylamine", which was still used as an intermediate in methylhydrastinine synthesis. Albert van Schoor performed simple toxicological tests with the drug in 1952, most likely while researching new stimulants or circulatory medications. After pharmacological studies, research on MDMA was not continued. In 1959, Wolfgang Fruhstorfer synthesized MDMA for pharmacological testing while researching stimulants. It is unclear if Fruhstorfer investigated the effects of MDMA in humans.
Outside of Merck, other researchers began to investigate MDMA. In 1953 and 1954, the United States Army commissioned a study of toxicity and behavioral effects in animals injected with mescaline and several analogues, including MDMA. Conducted at the University of Michigan in Ann Arbor, these investigations were declassified in October 1969 and published in 1973. A 1960 Polish paper by Biniecki and Krajewski describing the synthesis of MDMA as an intermediate was the first published scientific paper on the substance.
MDMA may have been in non-medical use in the western United States in 1968. An August 1970 report at a meeting of crime laboratory chemists indicates MDMA was being used recreationally in the Chicago area by 1970. MDMA likely emerged as a substitute for its analog 3,4-methylenedioxyamphetamine (MDA), a drug at the time popular among users of psychedelics which was made a Schedule 1 controlled substance in the United States in 1970.
Shulgin's research
American chemist and psychopharmacologist Alexander Shulgin reported he synthesized MDMA in 1965 while researching methylenedioxy compounds at Dow Chemical Company, but did not test the psychoactivity of the compound at this time. Around 1970, Shulgin sent instructions for N-methylated MDA (MDMA) synthesis to the founder of a Los Angeles chemical company who had requested them. This individual later provided these instructions to a client in the Midwest. Shulgin may have suspected he played a role in the emergence of MDMA in Chicago.
Shulgin first heard of the psychoactive effects of N-methylated MDA around 1975 from a young student who reported "amphetamine-like content". Around 30 May 1976, Shulgin again heard about the effects of N-methylated MDA, this time from a graduate student in a medicinal chemistry group he advised at San Francisco State University who directed him to the University of Michigan study. She and two close friends had consumed 100mg of MDMA and reported positive emotional experiences. Following the self-trials of a colleague at the University of San Francisco, Shulgin synthesized MDMA and tried it himself in September and October 1976. Shulgin first reported on MDMA in a presentation at a conference in Bethesda, Maryland in December 1976. In 1978, he and David E. Nichols published a report on the drug's psychoactive effect in humans. They described MDMA as inducing "an easily controlled altered state of consciousness with emotional and sensual overtones" comparable "to marijuana, to psilocybin devoid of the hallucinatory component, or to low levels of MDA".
While not finding his own experiences with MDMA particularly powerful, Shulgin was impressed with the drug's disinhibiting effects and thought it could be useful in therapy. Believing MDMA allowed users to strip away habits and perceive the world clearly, Shulgin called the drug "window". Shulgin occasionally used MDMA for relaxation, referring to it as "my low-calorie martini", and gave the drug to friends, researchers, and others who he thought could benefit from it. One such person was Leo Zeff, a psychotherapist who had been known to use psychedelic substances in his practice. When he tried the drug in 1977, Zeff was impressed with the effects of MDMA and came out of his semi-retirement to promote its use in therapy. Over the following years, Zeff traveled around the United States and occasionally to Europe, eventually training an estimated four thousand psychotherapists in the therapeutic use of MDMA. Zeff named the drug Adam, believing it put users in a state of primordial innocence.
Psychotherapists who used MDMA believed the drug eliminated the typical fear response and increased communication. Sessions were usually held in the home of the patient or the therapist. The role of the therapist was minimized in favor of patient self-discovery accompanied by MDMA-induced feelings of empathy. Depression, substance use disorders, relationship problems, premenstrual syndrome, and autism were among several psychiatric disorders that MDMA-assisted therapy was reported to treat. According to psychiatrist George Greer, therapists who used MDMA in their practice were impressed by the results. Anecdotally, MDMA was said to greatly accelerate therapy. According to David Nutt, MDMA was widely used in the western US in couples counseling, and was called empathy. Only later was the term ecstasy used for it, coinciding with rising opposition to its use.
Rising recreational use
In the late 1970s and early 1980s, "Adam" spread through personal networks of psychotherapists, psychiatrists, users of psychedelics, and yuppies. Hoping MDMA could avoid criminalization like LSD and mescaline, psychotherapists and experimenters attempted to limit the spread of MDMA and information about it while conducting informal research. Early MDMA distributors were deterred from large scale operations by the threat of possible legislation. Between the 1970s and the mid-1980s, this network of MDMA users consumed an estimated 500,000 doses.
A small recreational market for MDMA developed by the late 1970s, consuming perhaps 10,000 doses in 1976. By the early 1980s MDMA was being used in Boston and New York City nightclubs such as Studio 54 and Paradise Garage. Into the early 1980s, as the recreational market slowly expanded, production of MDMA was dominated by a small group of therapeutically minded Boston chemists. Having commenced production in 1976, this "Boston Group" did not keep up with growing demand and shortages frequently occurred.
Perceiving a business opportunity, Michael Clegg, the Southwest distributor for the Boston Group, started his own "Texas Group" backed financially by Texas friends. In 1981, Clegg coined "Ecstasy" as a slang term for MDMA to increase its marketability. Starting in 1983, the Texas Group mass-produced MDMA in a Texas lab or imported it from California and marketed tablets using pyramid sales structures and toll-free numbers. MDMA could be purchased via credit card and taxes were paid on sales. Under the brand name "Sassyfras", MDMA tablets were sold in brown bottles. The Texas Group advertised "Ecstasy parties" at bars and discos, describing MDMA as a "fun drug" and "good to dance to". MDMA was openly distributed in Austin and Dallas–Fort Worth area bars and nightclubs, becoming popular with yuppies, college students, and gays.
Recreational use also increased after several cocaine dealers switched to distributing MDMA following experiences with the drug. A California laboratory that analyzed confidentially submitted drug samples first detected MDMA in 1975. Over the following years the number of MDMA samples increased, eventually exceeding the number of MDA samples in the early 1980s. By the mid-1980s, MDMA use had spread to colleges around the United States.
Media attention and scheduling
United States
In an early media report on MDMA published in 1982, a Drug Enforcement Administration (DEA) spokesman stated the agency would ban the drug if enough evidence for abuse could be found. By mid-1984, MDMA use was becoming more noticed. Bill Mandel reported on "Adam" in a 10 June San Francisco Chronicle article, but misidentified the drug as methyloxymethylenedioxyamphetamine (MMDA). In the next month, the World Health Organization identified MDMA as the only substance out of twenty phenethylamines to be seized a significant number of times.
After a year of planning and data collection, MDMA was proposed for scheduling by the DEA on 27 July 1984 with a request for comments and objections. The DEA was surprised when a number of psychiatrists, psychotherapists, and researchers objected to the proposed scheduling and requested a hearing. In a Newsweek article published the next year, a DEA pharmacologist stated that the agency had been unaware of its use among psychiatrists. An initial hearing was held on 1 February 1985 at the DEA offices in Washington, D.C., with administrative law judge Francis L. Young presiding. It was decided there to hold three more hearings that year: Los Angeles on 10 June, Kansas City, Missouri on 10–11 July, and Washington, D.C., on 8–11 October.
Sensational media attention was given to the proposed criminalization and the reaction of MDMA proponents, effectively advertising the drug. In response to the proposed scheduling, the Texas Group increased production from 1985 estimates of 30,000 tablets a month to as many as 8,000 per day, potentially making two million ecstasy tablets in the months before MDMA was made illegal. By some estimates the Texas Group distributed 500,000 tablets per month in Dallas alone. According to one participant in an ethnographic study, the Texas Group produced more MDMA in eighteen months than all other distribution networks combined across their entire histories. By May 1985, MDMA use was widespread in California, Texas, southern Florida, and the northeastern United States. According to the DEA there was evidence of use in twenty-eight states and Canada. Urged by Senator Lloyd Bentsen, the DEA announced an emergency Schedule I classification of MDMA on 31 May 1985. The agency cited increased distribution in Texas, escalating street use, and new evidence of MDA (an analog of MDMA) neurotoxicity as reasons for the emergency measure. The ban took effect one month later on 1 July 1985 in the midst of Nancy Reagan's "Just Say No" campaign.
As a result of several expert witnesses testifying that MDMA had an accepted medical usage, the administrative law judge presiding over the hearings recommended that MDMA be classified as a Schedule III substance. Despite this, DEA administrator John C. Lawn overruled the recommendation and classified the drug as Schedule I. Harvard psychiatrist Lester Grinspoon then sued the DEA, claiming that it had ignored the medical uses of MDMA. The federal court sided with Grinspoon, calling Lawn's argument "strained" and "unpersuasive", and vacated MDMA's Schedule I status. Nevertheless, less than a month later Lawn reviewed the evidence and reclassified MDMA as Schedule I again, arguing that the testimony of several psychiatrists, who claimed over 200 cases in which MDMA had been used in a therapeutic context with positive results, could be dismissed because those cases had not been published in medical journals. In 2017, the FDA granted breakthrough therapy designation for MDMA-assisted psychotherapy for PTSD, although this designation has been questioned.
United Nations
While engaged in scheduling debates in the United States, the DEA also pushed for international scheduling. In 1985 the World Health Organization's Expert Committee on Drug Dependence recommended that MDMA be placed in Schedule I of the 1971 United Nations Convention on Psychotropic Substances. The committee made this recommendation on the basis of the pharmacological similarity of MDMA to previously scheduled drugs, reports of illicit trafficking in Canada, drug seizures in the United States, and lack of well-defined therapeutic use. While intrigued by reports of psychotherapeutic uses for the drug, the committee viewed the studies as lacking appropriate methodological design and encouraged further research. Committee chairman Paul Grof dissented, believing international control was not warranted at the time and a recommendation should await further therapeutic data. The Commission on Narcotic Drugs added MDMA to Schedule I of the convention on 11 February 1986.
Post-scheduling
The use of MDMA in Texas clubs declined rapidly after criminalization, although by 1991 the drug remained popular among young middle-class whites and in nightclubs. In 1985, MDMA use became associated with acid house on the Spanish island of Ibiza. Thereafter in the late 1980s, the drug spread alongside rave culture to the UK and then to other European and American cities. Illicit MDMA use became increasingly widespread among young adults in universities and later, in high schools. Since the mid-1990s, MDMA has become the most widely used amphetamine-type drug by college students and teenagers. MDMA became one of the four most widely used illicit drugs in the US, along with cocaine, heroin, and cannabis.
According to some estimates as of 2004, only marijuana attracts more first time users in the US.
After MDMA was criminalized, most medical use stopped, although some therapists continued to prescribe the drug illegally. Later, Charles Grob initiated an ascending-dose safety study in healthy volunteers. Subsequent FDA-approved MDMA studies in humans have taken place in the United States in Detroit (Wayne State University), Chicago (University of Chicago), San Francisco (UCSF and California Pacific Medical Center), Baltimore (NIDA–NIH Intramural Program), and South Carolina. Studies have also been conducted in Switzerland (University Hospital of Psychiatry, Zürich), the Netherlands (Maastricht University), and Spain (Universitat Autònoma de Barcelona).
"Molly", short for 'molecule', was recognized as a slang term for crystalline or powder MDMA in the 2000s.
In 2010, the BBC reported that use of MDMA had decreased in the UK in previous years. This may be due to increased seizures during use and decreased production of the precursor chemicals used to manufacture MDMA. Unwitting substitution with other drugs, such as mephedrone and methamphetamine, as well as legal alternatives to MDMA, such as BZP, MDPV, and methylone, are also thought to have contributed to its decrease in popularity.
In 2017 it was found that some pills being sold as MDMA contained pentylone, which can cause very unpleasant agitation and paranoia.
According to David Nutt, when safrole was restricted by the United Nations in order to reduce the supply of MDMA, producers in China began using anethole instead, but this gives para-methoxyamphetamine (PMA, also known as "Dr Death"), which is much more toxic than MDMA and can cause overheating, muscle spasms, seizures, unconsciousness, and death. People wanting MDMA are sometimes sold PMA instead.
Society and culture
Legal status
MDMA is legally controlled in most of the world under the UN Convention on Psychotropic Substances and other international agreements, although exceptions exist for research and limited medical use. In general, the unlicensed use, sale or manufacture of MDMA are all criminal offences.
Australia
In Australia, MDMA was rescheduled on 1 July 2023 as a schedule 8 substance (available on prescription) when used in the treatment of PTSD, while remaining a schedule 9 substance (prohibited) for all other uses. For the treatment of PTSD, MDMA can only be prescribed by psychiatrists with specific training and authorisation.
In 1986, MDMA was declared an illegal substance because of its allegedly harmful effects and potential for misuse. Any non-authorised sale, use or manufacture is strictly prohibited by law. Permits for research uses on humans must be approved by a recognized ethics committee on human research.
In Western Australia, under the Misuse of Drugs Act 1981, 4.0 g of MDMA is the quantity that determines the court of trial, 2.0 g gives rise to a presumption of intent to sell or supply, and 28.0 g is deemed trafficking under Australian law.
The Australian Capital Territory passed legislation to decriminalise the possession of small amounts of MDMA, which took effect in October 2023.
United Kingdom
In the United Kingdom, MDMA was made illegal in 1977 by a modification order to the existing Misuse of Drugs Act 1971. Although MDMA was not named explicitly in this legislation, the order extended the definition of Class A drugs to include various ring-substituted phenethylamines. The drug is therefore illegal to sell, buy, or possess without a licence in the UK. Penalties include a maximum of seven years and/or unlimited fine for possession; life and/or unlimited fine for production or trafficking.
Some researchers, such as David Nutt, have criticized the scheduling of MDMA, which Nutt considers a relatively harmless drug. An editorial he wrote in the Journal of Psychopharmacology, comparing the risk of harm from horse riding (1 adverse event in 350) with that from ecstasy (1 in 10,000), resulted in his dismissal from the ACMD as well as the resignation of several of his colleagues.
United States
In the United States, MDMA is listed in Schedule I of the Controlled Substances Act. In a 2011 federal court hearing, the American Civil Liberties Union successfully argued that the sentencing guideline for MDMA/ecstasy is based on outdated science, leading to excessive prison sentences. Other courts have upheld the sentencing guidelines. The United States District Court for the Eastern District of Tennessee explained its ruling by noting that "an individual federal district court judge simply cannot marshal resources akin to those available to the Commission for tackling the manifold issues involved with determining a proper drug equivalency."
Netherlands
In the Netherlands, the Expert Committee on the List (Expertcommissie Lijstensystematiek Opiumwet) issued a report in June 2011 which discussed the evidence for harm and the legal status of MDMA, arguing in favor of maintaining it on List I.
Canada
In Canada, MDMA is listed in Schedule I of the Controlled Drugs and Substances Act, as it is an analogue of amphetamine. The Act was updated as a result of the Safe Streets and Communities Act, which moved amphetamines from Schedule III to Schedule I in March 2012. In 2022, the federal government granted British Columbia a three-year exemption legalizing the possession of small amounts of MDMA in the province from February 2023 until February 2026.
Demographics
In 2014, 3.5% of 18 to 25 year-olds had used MDMA in the United States. In the European Union as of 2018, 4.1% of adults (15–64 years old) had used MDMA at least once in their life, and 0.8% had used it in the last year. Among young adults, 1.8% had used MDMA in the last year.
In Europe, an estimated 37% of regular club-goers aged 14 to 35 used MDMA in the past year according to the 2015 European Drug report. The highest one-year prevalence of MDMA use in Germany in 2012 was 1.7% among people aged 25 to 29 compared with a population average of 0.4%. Among adolescent users in the United States between 1999 and 2008, girls were more likely to use MDMA than boys.
Economics
Europe
In 2008 the European Monitoring Centre for Drugs and Drug Addiction noted that although there were some reports of tablets being sold for as little as €1, most countries in Europe then reported typical retail prices in the range of €3 to €9 per tablet, typically containing 25–65mg of MDMA. By 2014 the EMCDDA reported that the range was more usually between €5 and €10 per tablet, typically containing 57–102mg of MDMA, although MDMA in powder form was becoming more common.
North America
The United Nations Office on Drugs and Crime stated in its 2014 World Drug Report that US ecstasy retail prices range from US$1 to $70 per pill, or from $15,000 to $32,000 per kilogram. A new research area named Drug Intelligence aims to automatically monitor distribution networks based on image processing and machine learning techniques, in which an Ecstasy pill picture is analyzed to detect correlations among different production batches. These novel techniques allow police scientists to facilitate the monitoring of illicit distribution networks.
Most of the MDMA in the United States is produced in British Columbia, Canada, and imported by Canada-based Asian transnational criminal organizations. The market for MDMA in the United States is relatively small compared to methamphetamine, cocaine, and heroin. In the United States, about 0.9 million people used ecstasy in 2010.
Australia
MDMA is particularly expensive in Australia, costing A$15–A$30 per tablet. In terms of purity data for Australian MDMA, the average is around 34%, ranging from less than 1% to about 85%. The majority of tablets contain 70–85mg of MDMA. Most MDMA enters Australia from the Netherlands, the UK, Asia, and the US.
Corporate logos on pills
A number of ecstasy manufacturers brand their pills with a logo, often being the logo of an unrelated corporation. Some pills depict logos of products or media popular with children, such as Shaun the Sheep.
Research directions
A 2014 review of the safety and efficacy of MDMA as a treatment for various disorders, particularly post-traumatic stress disorder (PTSD), indicated that MDMA has therapeutic efficacy in some patients. Four clinical trials provide moderate evidence in support of this treatment. Some authors have concluded that because of MDMA's potential to cause lasting harm in humans (e.g., serotonergic neurotoxicity and persistent memory impairment), "considerably more research must be performed" on its efficacy in PTSD treatment to determine if the potential treatment benefits outweigh its potential to harm a patient. Other authors have argued that the neurotoxic effects of MDMA are dose-dependent, with lower doses exhibiting lower neurotoxicity or even neuroprotection, and that MDMA assisted psychotherapy is considerably safer than current treatments.
Animal models suggest that postnatal exposure may ameliorate social impairments in autism.
Recent evidence suggests the safe and potentially effective use of MDMA to treat the negative symptoms of schizophrenia. Unlike other treatments for mental illness, MDMA would be intended to be used infrequently and alongside psychotherapy in treatment.
| Biology and health sciences | Recreational drugs | Health |
10043 | https://en.wikipedia.org/wiki/Estimator | Estimator | In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data: thus the rule (the estimator), the quantity of interest (the estimand) and its result (the estimate) are distinguished. For example, the sample mean is a commonly used estimator of the population mean.
There are point and interval estimators. The point estimators yield single-valued results. This is in contrast to an interval estimator, where the result would be a range of plausible values. "Single value" does not necessarily mean "single number", but includes vector valued or function valued estimators.
Estimation theory is concerned with the properties of estimators; that is, with defining properties that can be used to compare different estimators (different rules for creating estimates) for the same quantity, based on the same data. Such properties can be used to determine the best rules to use under given circumstances. However, in robust statistics, statistical theory goes on to consider the balance between having good properties, if tightly defined assumptions hold, and having worse properties that hold under wider conditions.
Background
An "estimator" or "point estimate" is a statistic (that is, a function of the data) that is used to infer the value of an unknown parameter in a statistical model. A common way of phrasing it is "the estimator is the method selected to obtain an estimate of an unknown parameter".
The parameter being estimated is sometimes called the estimand. It can be either finite-dimensional (in parametric and semi-parametric models), or infinite-dimensional (semi-parametric and non-parametric models). If the parameter is denoted $\theta$, then the estimator is traditionally written by adding a circumflex over the symbol: $\hat{\theta}$. Being a function of the data, the estimator is itself a random variable; a particular realization of this random variable is called the "estimate". Sometimes the words "estimator" and "estimate" are used interchangeably.
The definition places virtually no restrictions on which functions of the data can be called the "estimators". The attractiveness of different estimators can be judged by looking at their properties, such as unbiasedness, mean square error, consistency, asymptotic distribution, etc. The construction and comparison of estimators are the subjects of the estimation theory. In the context of decision theory, an estimator is a type of decision rule, and its performance may be evaluated through the use of loss functions.
When the word "estimator" is used without a qualifier, it usually refers to point estimation. The estimate in this case is a single point in the parameter space. There also exists another type of estimator: interval estimators, where the estimates are subsets of the parameter space.
The problem of density estimation arises in two applications. Firstly, in estimating the probability density functions of random variables and secondly in estimating the spectral density function of a time series. In these problems the estimates are functions that can be thought of as point estimates in an infinite dimensional space, and there are corresponding interval estimation problems.
Definition
Suppose a fixed parameter $\theta$ needs to be estimated. Then an "estimator" is a function that maps the sample space to a set of sample estimates. An estimator of $\theta$ is usually denoted by the symbol $\hat{\theta}$. It is often convenient to express the theory using the algebra of random variables: thus if $X$ is used to denote a random variable corresponding to the observed data, the estimator (itself treated as a random variable) is symbolised as a function of that random variable, $\hat{\theta}(X)$. The estimate for a particular observed data value $x$ (i.e. for $X = x$) is then $\hat{\theta}(x)$, which is a fixed value. Often an abbreviated notation is used in which $\hat{\theta}$ is interpreted directly as a random variable, but this can cause confusion.
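To make the distinction concrete, here is a minimal Python sketch (illustrative only; the data values are made up): the estimator is the rule, and the estimate is the fixed number obtained by applying that rule to an observed sample.

def sample_mean(xs):
    # The estimator: a rule that maps any sample to a single number.
    return sum(xs) / len(xs)

observed = [2.1, 1.9, 2.4, 2.0]   # a particular observed sample x
estimate = sample_mean(observed)  # the estimate: a fixed value (here 2.1)
print(estimate)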
Quantified properties
The following definitions and attributes are relevant.
Error
For a given sample $x$, the "error" of the estimator $\hat{\theta}$ is defined as
$e(x) = \hat{\theta}(x) - \theta,$
where $\theta$ is the parameter being estimated. The error, $e$, depends not only on the estimator (the estimation formula or procedure), but also on the sample.
Mean squared error
The mean squared error of $\hat{\theta}$ is defined as the expected value (probability-weighted average, over all samples) of the squared errors; that is,
$\operatorname{MSE}(\hat{\theta}) = \operatorname{E}\left[(\hat{\theta}(X) - \theta)^2\right].$
It is used to indicate how far, on average, the collection of estimates are from the single parameter being estimated. Consider the following analogy. Suppose the parameter is the bull's-eye of a target, the estimator is the process of shooting arrows at the target, and the individual arrows are estimates (samples). Then high MSE means the average distance of the arrows from the bull's eye is high, and low MSE means the average distance from the bull's eye is low. The arrows may or may not be clustered. For example, even if all arrows hit the same point, yet grossly miss the target, the MSE is still relatively large. However, if the MSE is relatively low then the arrows are likely more highly clustered (than highly dispersed) around the target.
Sampling deviation
For a given sample $x$, the sampling deviation of the estimator $\hat{\theta}$ is defined as
$d(x) = \hat{\theta}(x) - \operatorname{E}(\hat{\theta}(X)) = \hat{\theta}(x) - \operatorname{E}(\hat{\theta}),$
where $\operatorname{E}(\hat{\theta}(X))$ is the expected value of the estimator. The sampling deviation, $d$, depends not only on the estimator, but also on the sample.
Variance
The variance of $\hat{\theta}$ is the expected value of the squared sampling deviations; that is, $\operatorname{Var}(\hat{\theta}) = \operatorname{E}\left[(\hat{\theta} - \operatorname{E}(\hat{\theta}))^2\right]$. It is used to indicate how far, on average, the collection of estimates are from the expected value of the estimates. (Note the difference between MSE and variance.) If the parameter is the bull's-eye of a target, and the arrows are estimates, then a relatively high variance means the arrows are dispersed, and a relatively low variance means the arrows are clustered. Even if the variance is low, the cluster of arrows may still be far off-target, and even if the variance is high, the diffuse collection of arrows may still be unbiased. Finally, even if all arrows grossly miss the target, if they nevertheless all hit the same point, the variance is zero.
Bias
The bias of $\hat{\theta}$ is defined as $B(\hat{\theta}) = \operatorname{E}(\hat{\theta}) - \theta$. It is the distance between the average of the collection of estimates and the single parameter being estimated. The bias of $\hat{\theta}$ is a function of the true value of $\theta$, so saying that the bias of $\hat{\theta}$ is $b$ means that for every $\theta$ the bias of $\hat{\theta}$ is $b$.
There are two kinds of estimators: biased estimators and unbiased estimators. Whether an estimator is biased or not can be identified by the relationship between $\operatorname{E}(\hat{\theta}) - \theta$ and 0:
If $\operatorname{E}(\hat{\theta}) - \theta \neq 0$, $\hat{\theta}$ is biased.
If $\operatorname{E}(\hat{\theta}) - \theta = 0$, $\hat{\theta}$ is unbiased.
The bias is also the expected value of the error, since $\operatorname{E}(\hat{\theta}) - \theta = \operatorname{E}(\hat{\theta} - \theta)$. If the parameter is the bull's eye of a target and the arrows are estimates, then a relatively high absolute value for the bias means the average position of the arrows is off-target, and a relatively low absolute bias means the average position of the arrows is on target. They may be dispersed, or may be clustered. The relationship between bias and variance is analogous to the relationship between accuracy and precision.
The estimator $\hat{\theta}$ is an unbiased estimator of $\theta$ if and only if $B(\hat{\theta}) = 0$. Bias is a property of the estimator, not of the estimate. Often, people refer to a "biased estimate" or an "unbiased estimate", but they really are talking about an "estimate from a biased estimator", or an "estimate from an unbiased estimator". Also, people often confuse the "error" of a single estimate with the "bias" of an estimator. That the error for one estimate is large does not mean the estimator is biased. In fact, even if all estimates have astronomical absolute values for their errors, if the expected value of the error is zero, the estimator is unbiased. Also, an estimator's being biased does not preclude the error of an estimate from being zero in a particular instance. The ideal situation is to have an unbiased estimator with low variance, and also try to limit the number of samples where the error is extreme (that is, to have few outliers). Yet unbiasedness is not essential. Often, if just a little bias is permitted, then an estimator can be found with lower mean squared error and/or fewer outlier sample estimates.
An alternative to the version of "unbiased" above, is "median-unbiased", where the median of the distribution of estimates agrees with the true value; thus, in the long run half the estimates will be too low and half too high. While this applies immediately only to scalar-valued estimators, it can be extended to any measure of central tendency of a distribution: see median-unbiased estimators.
In a practical problem, $\hat{\theta}$ can always have a functional relationship with $\theta$. For example, suppose a genetic theory states there is a type of leaf (starchy green) that occurs with probability $p_1 = \tfrac{1}{4}(\theta + 2)$, with $0 < \theta < 1$.
Then, for $n$ leaves, the random variable $N_1$, the number of starchy green leaves, can be modeled with a $\mathrm{Bin}(n, p_1)$ distribution. This count can be used to express the following estimator for $\theta$: $\hat{\theta} = \tfrac{4}{n} N_1 - 2$. One can show that $\hat{\theta}$ is an unbiased estimator for $\theta$:
$\operatorname{E}[\hat{\theta}] = \operatorname{E}\left[\tfrac{4}{n} N_1 - 2\right] = \tfrac{4}{n}\operatorname{E}[N_1] - 2 = \tfrac{4}{n}\, n p_1 - 2 = 4 p_1 - 2 = 4 \cdot \tfrac{1}{4}(\theta + 2) - 2 = \theta.$
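As a sanity check of the example above, the following Python sketch simulates the leaf counts and confirms empirically that the estimator averages out to the true parameter. The model follows the reconstruction given above, and the particular values (theta = 0.6, n = 50 leaves) are illustrative assumptions.

import random

def simulate_theta_hat(theta, n, trials=20_000, seed=0):
    rng = random.Random(seed)
    p1 = (theta + 2) / 4
    total = 0.0
    for _ in range(trials):
        n1 = sum(rng.random() < p1 for _ in range(n))  # draw N1 ~ Binomial(n, p1)
        total += 4 * n1 / n - 2                        # theta_hat for this sample
    return total / trials                              # empirical E[theta_hat]

print(simulate_theta_hat(theta=0.6, n=50))  # should be close to 0.6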
Unbiased
A desired property for estimators is unbiasedness: the estimator has no systematic tendency to produce estimates larger or smaller than the true parameter. Additionally, among unbiased estimators, those with smaller variance are preferred, because their estimates tend to be closer to the "true" value of the parameter. The unbiased estimator with the smallest variance is known as the minimum-variance unbiased estimator (MVUE).
To check whether an estimator $T$ of a parameter of interest $\theta$ is unbiased, it suffices to verify that $\operatorname{E}[T] - \theta = 0$, i.e. that $\operatorname{E}[T] = \theta$. Unbiasedness alone, however, does not determine which estimator to prefer: if the sampling distributions of two estimators overlap and both are centered around $\theta$, the one with the smaller spread around $\theta$ would be the preferred unbiased estimator.
Expectation
When the quantity of interest is the expectation $\mu$ of the model distribution, the sample mean is an unbiased estimator: $\bar{X} = \tfrac{1}{n}\sum_{i=1}^{n} X_i$, with $\operatorname{E}[\bar{X}] = \mu$.
Variance
Similarly, when the quantity of interest is the variance $\sigma^2$ of the model distribution, the sample variance $S^2 = \tfrac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})^2$ is an unbiased estimator: $\operatorname{E}[S^2] = \sigma^2$.
Note that we divide by $n - 1$ because dividing by $n$ would yield an estimator with a negative bias, which would thus produce estimates that are too small for $\sigma^2$. It should also be mentioned that even though $S^2$ is unbiased for $\sigma^2$, the reverse is not true: $S$ is not an unbiased estimator of the standard deviation $\sigma$.
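A small simulation makes the n − 1 correction concrete: dividing the sum of squared deviations by n systematically underestimates the variance, while dividing by n − 1 does not. The normal population and the parameter values below are illustrative choices, not part of the article.

import random

def compare_variance_estimators(n=5, trials=50_000, sigma=2.0, seed=1):
    rng = random.Random(seed)
    biased_total = unbiased_total = 0.0
    for _ in range(trials):
        sample = [rng.gauss(0.0, sigma) for _ in range(n)]
        m = sum(sample) / n
        ss = sum((x - m) ** 2 for x in sample)  # sum of squared deviations
        biased_total += ss / n                  # divide by n: biased low
        unbiased_total += ss / (n - 1)          # divide by n - 1: unbiased
    return biased_total / trials, unbiased_total / trials

print(compare_variance_estimators())  # roughly (3.2, 4.0); the true variance is 4.0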
Relationships among the quantities
The mean squared error, variance, and bias are related: $\operatorname{MSE}(\hat{\theta}) = \operatorname{Var}(\hat{\theta}) + (B(\hat{\theta}))^2$, i.e. mean squared error = variance + square of bias. In particular, for an unbiased estimator, the variance equals the mean squared error.
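The identity mean squared error = variance + square of bias is easy to verify numerically. The sketch below uses a deliberately biased estimator, the sample maximum as an estimator of the upper bound theta of a Uniform(0, theta) distribution; this model is an illustrative choice, not an example from the article.

import random

def decompose(theta=10.0, n=20, trials=50_000, seed=2):
    rng = random.Random(seed)
    estimates = [max(rng.uniform(0, theta) for _ in range(n)) for _ in range(trials)]
    mean_est = sum(estimates) / trials
    bias = mean_est - theta                                     # negative: the maximum underestimates theta
    var = sum((e - mean_est) ** 2 for e in estimates) / trials  # empirical variance
    mse = sum((e - theta) ** 2 for e in estimates) / trials     # empirical MSE
    return bias, var, mse, var + bias ** 2

bias, var, mse, check = decompose()
print(f"bias={bias:.4f}  var={var:.4f}  mse={mse:.4f}  var+bias^2={check:.4f}")

The last two printed numbers should agree up to simulation noise.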
The standard deviation of an estimator of $\theta$ (the square root of the variance), or an estimate of the standard deviation of an estimator of $\theta$, is called the standard error of $\hat{\theta}$.
The bias-variance tradeoff arises in choosing model complexity and in diagnosing over-fitting and under-fitting. It is mainly used in the field of supervised learning and predictive modelling to diagnose the performance of algorithms.
Behavioral properties
Consistency
A consistent estimator is an estimator whose sequence of estimates converge in probability to the quantity being estimated as the index (usually the sample size) grows without bound. In other words, increasing the sample size increases the probability of the estimator being close to the population parameter.
Mathematically, a sequence of estimators $\{t_n;\ n \ge 0\}$ is a consistent estimator for parameter $\theta$ if and only if, for all $\epsilon > 0$, no matter how small, we have
$\lim_{n\to\infty} \Pr\left\{ \left| t_n - \theta \right| < \epsilon \right\} = 1.$
The consistency defined above may be called weak consistency. The sequence is strongly consistent, if it converges almost surely to the true value.
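Weak consistency can be illustrated empirically: as the sample size grows, the probability that the sample mean lies within a fixed epsilon of the true mean approaches 1. The exponential population, epsilon = 0.1 and the sample sizes below are illustrative assumptions.

import random

def coverage(n, eps=0.1, true_mean=1.0, trials=2_000, seed=3):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xs = [rng.expovariate(1.0 / true_mean) for _ in range(n)]  # population mean = true_mean
        if abs(sum(xs) / n - true_mean) < eps:
            hits += 1
    return hits / trials

for n in (10, 100, 1000):
    print(n, coverage(n))  # the fraction within 0.1 of the true mean rises toward 1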
An estimator that converges to a multiple of a parameter can be made into a consistent estimator by multiplying the estimator by a scale factor, namely the true value divided by the asymptotic value of the estimator. This occurs frequently in estimation of scale parameters by measures of statistical dispersion.
Fisher consistency
An estimator can be considered Fisher consistent as long as the estimator is the same functional of the empirical distribution function as the true distribution function. Following the formula:
$\hat{\theta} = h(F_n), \qquad \theta = h(F_\theta),$
where $F_n$ and $F_\theta$ are the empirical distribution function and theoretical distribution function, respectively.
An easy example to see if some estimator is Fisher consistent is to check the consistency of mean and variance. For example, to check Fisher consistency for the mean, confirm that $\hat{\mu} = \bar{X}$, the mean of the empirical distribution; to check it for the variance, confirm that $\hat{\sigma}^2 = \tfrac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^2$, the variance of the empirical distribution.
Asymptotic normality
An asymptotically normal estimator is a consistent estimator whose distribution around the true parameter $\theta$ approaches a normal distribution with standard deviation shrinking in proportion to $1/\sqrt{n}$ as the sample size $n$ grows. Using $\xrightarrow{D}$ to denote convergence in distribution, $t_n$ is asymptotically normal if
$\sqrt{n}(t_n - \theta) \xrightarrow{D} N(0, V)$
for some $V$.
In this formulation V/n can be called the asymptotic variance of the estimator. However, some authors also call V the asymptotic variance.
Note that convergence will not necessarily have occurred for any finite "n", therefore this value is only an approximation to the true variance of the estimator, while in the limit the asymptotic variance (V/n) is simply zero. To be more specific, the distribution of the estimator $t_n$ converges weakly to a Dirac delta function centered at $\theta$.
The central limit theorem implies asymptotic normality of the sample mean as an estimator of the true mean.
More generally, maximum likelihood estimators are asymptotically normal under fairly weak regularity conditions — see the asymptotics section of the maximum likelihood article. However, not all estimators are asymptotically normal; the simplest examples are found when the true value of a parameter lies on the boundary of the allowable parameter region.
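The normal approximation can be checked by simulation: for a skewed population, sqrt(n) * (sample mean − mu) / sigma behaves approximately like a standard normal once n is moderately large, so roughly 95% of its values fall within ±1.96. The exponential population and the settings below are illustrative assumptions.

import math
import random

def fraction_within_1_96(n=200, trials=10_000, seed=4):
    rng = random.Random(seed)
    mu, sigma = 1.0, 1.0  # mean and standard deviation of an Exponential(1) population
    inside = 0
    for _ in range(trials):
        xbar = sum(rng.expovariate(1.0) for _ in range(n)) / n
        z = math.sqrt(n) * (xbar - mu) / sigma  # standardized sample mean
        if abs(z) <= 1.96:
            inside += 1
    return inside / trials

print(fraction_within_1_96())  # close to 0.95, as for a standard normal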
Efficiency
The efficiency of an estimator is used to estimate the quantity of interest in a "minimum error" manner. In reality, there is not an explicit best estimator; there can only be a better estimator. Whether the efficiency of an estimator is better or not is based on the choice of a particular loss function, and it is reflected by two naturally desirable properties of estimators: to be unbiased and have minimal mean squared error (MSE). These cannot in general both be satisfied simultaneously: a biased estimator may have a lower mean squared error than any unbiased estimator (see estimator bias).
This equation relates the mean squared error with the estimator bias:
$\operatorname{MSE}(\hat{\theta}) = (B(\hat{\theta}))^2 + \operatorname{Var}(\hat{\theta})$
The first term is the mean squared error; the second term is the square of the estimator bias; and the third term is the variance of the estimator. The quality of an estimator can be identified from a comparison of the variances, the squares of the estimator biases, or the MSEs. The variance of a good estimator (good efficiency) is smaller than the variance of a bad estimator (bad efficiency); the squared bias of a good estimator is smaller than that of a bad estimator; and the MSE of a good estimator is smaller than that of a bad estimator. Suppose there are two estimators, where $\hat{\theta}_1$ is the good estimator and $\hat{\theta}_2$ is the bad estimator. The above relationship can be expressed by the following formulas:
$\operatorname{Var}(\hat{\theta}_1) < \operatorname{Var}(\hat{\theta}_2)$
$\left|B(\hat{\theta}_1)\right| < \left|B(\hat{\theta}_2)\right|$
$\operatorname{MSE}(\hat{\theta}_1) < \operatorname{MSE}(\hat{\theta}_2)$
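As a concrete comparison of two estimators of the same quantity, the sketch below contrasts the sample mean and the sample median as estimators of the center of a normal distribution. Both are unbiased here, but the mean has the smaller variance; for large normal samples the ratio of their variances approaches 2/pi, about 0.64. The model and settings are illustrative choices.

import random
import statistics

def estimator_variances(n=101, trials=20_000, mu=0.0, sigma=1.0, seed=5):
    rng = random.Random(seed)
    means, medians = [], []
    for _ in range(trials):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        means.append(sum(xs) / n)              # estimator 1: sample mean
        medians.append(statistics.median(xs))  # estimator 2: sample median
    return statistics.pvariance(means), statistics.pvariance(medians)

var_mean, var_median = estimator_variances()
print(var_mean, var_median, var_mean / var_median)  # ratio roughly 0.64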
Besides using formulas to identify the efficiency of an estimator, it can also be assessed graphically. If an estimator is efficient, a plot of the frequency of its estimates against their values shows a curve with high frequency at the center and low frequency on the two sides. If an estimator is not efficient, the corresponding curve is flatter and more spread out. To put it simply, the good estimator has a narrow curve, while the bad estimator has a wide one; plotting the two curves on one graph with a shared y-axis makes the difference obvious.
Among unbiased estimators, there often exists one with the lowest variance, called the minimum variance unbiased estimator (MVUE). In some cases an unbiased efficient estimator exists, which, in addition to having the lowest variance among unbiased estimators, satisfies the Cramér–Rao bound, which is an absolute lower bound on variance for statistics of a variable.
Concerning such "best unbiased estimators", see also Cramér–Rao bound, Gauss–Markov theorem, Lehmann–Scheffé theorem, Rao–Blackwell theorem.
Robustness
| Mathematics | Statistics | null |
10045 | https://en.wikipedia.org/wiki/Emerald | Emerald | Emerald is a gemstone and a variety of the mineral beryl (Be3Al2(SiO3)6) colored green by trace amounts of chromium or sometimes vanadium. Beryl has a hardness of 7.5–8 on the Mohs scale. Most emeralds have many inclusions, so their toughness (resistance to breakage) is classified as generally poor. Emerald is a cyclosilicate.
Etymology
The word "emerald" is derived (via and ), from Vulgar Latin: esmaralda/esmaraldus, a variant of Latin smaragdus, which was via (smáragdos; "green gem"). The Greek word may have a Semitic, Sanskrit or Persian origin. According to Webster's Dictionary the term emerald was first used in the 14th century.
Properties determining value
Emeralds, like all colored gemstones, are graded using four basic parameters known as "the four Cs": color, clarity, cut and carat weight. Normally, in grading colored gemstones, color is by far the most important criterion. However, in the grading of emeralds, clarity is considered a close second. A fine emerald must possess not only a pure verdant green hue as described below, but also a high degree of transparency to be considered a top gemstone.
This member of the beryl family ranks among the traditional "big four" gems along with diamonds, rubies and sapphires.
In the 1960s, the American jewelry industry changed the definition of emerald to include the green vanadium-bearing beryl. As a result, vanadium emeralds purchased as emeralds in the United States are not recognized as such in the United Kingdom and Europe. In America, the distinction between traditional emeralds and the new vanadium kind is often reflected in the use of terms such as "Colombian emerald".
Color
In gemology, color is divided into three components: hue, saturation, and tone. Emeralds occur in hues ranging from yellow-green to blue-green, with the primary hue necessarily being green. Yellow and blue are the normal secondary hues found in emeralds. Only gems that are medium to dark in tone are considered emeralds; light-toned gems are known instead by the species name green beryl. The finest emeralds are approximately 75% tone on a scale where 0% tone is colorless and 100% is opaque black. In addition, a fine emerald will be saturated and have a hue that is bright (vivid). Gray is the normal saturation modifier or mask found in emeralds; a grayish-green hue is a dull-green hue.
Clarity
Emeralds tend to have numerous inclusions and surface-breaking fissures. Unlike diamonds, where the loupe standard (i.e., 10× magnification) is used to grade clarity, emeralds are graded by eye. Thus, if an emerald has no visible inclusions to the eye (assuming normal visual acuity) it is considered flawless. Stones that lack surface breaking fissures are extremely rare and therefore almost all emeralds are treated ("oiled", see below) to enhance the apparent clarity. The inclusions and fissures within an emerald are sometimes described as jardin (French for garden), because of their mossy appearance. Imperfections are unique for each emerald and can be used to identify a particular stone. Eye-clean stones of a vivid primary green hue (as described above), with no more than 15% of any secondary hue or combination (either blue or yellow) of a medium-dark tone, command the highest prices. The relative non-uniformity motivates the cutting of emeralds in cabochon form, rather than faceted shapes. Faceted emeralds are most commonly given an oval cut, or the signature emerald cut, a rectangular cut with facets around the top edge.
Treatments
Most emeralds are oiled as part of the post-lapidary process, in order to fill in surface-reaching cracks so that clarity and stability are improved. Cedar oil, having a similar refractive index, is often used in this widely adopted practice. Other liquids, including synthetic oils and polymers with refractive indexes close to that of emeralds, such as Opticon, are also used. The least expensive emeralds are often treated with epoxy resins, which are effective for filling stones with many fractures. These treatments are typically applied in a vacuum chamber under mild heat, to open the pores of the stone and allow the fracture-filling agent to be absorbed more effectively. The U.S. Federal Trade Commission requires the disclosure of this treatment when an oil-treated emerald is sold. The use of oil is traditional and largely accepted by the gem trade, although oil-treated emeralds are worth much less than untreated emeralds of similar quality. Untreated emeralds must also be accompanied by a certificate from a licensed, independent gemology laboratory. Other treatments, for example the use of green-tinted oil, are not acceptable in the trade. Gems are graded on a four-step scale; none, minor, moderate and highly enhanced. These categories reflect levels of enhancement, not clarity. A gem graded none on the enhancement scale may still exhibit visible inclusions. Laboratories apply these criteria differently. Some gemologists consider the mere presence of oil or polymers to constitute enhancement. Others may ignore traces of oil if the presence of the material does not improve the look of the gemstone.
Emerald mines
Emeralds in antiquity were mined in Ancient Egypt at locations on Mount Smaragdus since 1500 BC, and India and Austria since at least the 14th century AD. The Egyptian mines were exploited on an industrial scale by the Roman and Byzantine Empires, and later by Islamic conquerors. Mining in Egypt ceased with the discovery of the Colombian deposits. Today, only ruins remain in Egypt.
Colombia is by far the world's largest producer of emeralds, constituting 50–95% of the world production, with the number depending on the year, source and grade. Emerald production in Colombia has increased drastically in the last decade, increasing by 78% from 2000 to 2010. The three main emerald mining areas in Colombia are Muzo, Coscuez, and Chivor. Rare "trapiche" emeralds are found in Colombia, distinguished by ray-like spokes of dark impurities.
Zambia is the world's second biggest producer, with its Kafubu River area deposits (Kagem Mines) southwest of Kitwe responsible for 20% of the world's production of gem-quality stones in 2004. In the first half of 2011, the Kagem Mines produced 3.74 tons of emeralds.
Emeralds are found all over the world in countries such as Afghanistan, Australia, Austria, Brazil, Bulgaria, Cambodia, Canada, China, Egypt, Ethiopia, France, Germany, India, Kazakhstan, Madagascar, Mozambique, Namibia, Nigeria, Norway, Pakistan, Russia, Somalia, South Africa, Spain, Switzerland, Tanzania, the United States, Zambia, and Zimbabwe. In the US, emeralds have been found in Connecticut, Montana, Nevada, North Carolina, and South Carolina. In 1998, emeralds were discovered in the Yukon Territory of Canada.
Origin determinations
Since the onset of concerns regarding diamond origins, research has been conducted to determine if the mining location could be determined for an emerald already in circulation. Traditional research used qualitative guidelines such as an emerald's color, style and quality of cutting, type of fracture filling, and the anthropological origins of the artifacts bearing the mineral to determine the emerald's mine location. More recent studies using energy-dispersive X-ray spectroscopy methods have uncovered trace chemical element differences between emeralds, including ones mined in close proximity to one another. American gemologist David Cronin and his colleagues have extensively examined the chemical signatures of emeralds resulting from fluid dynamics and subtle precipitation mechanisms, and their research demonstrated the chemical homogeneity of emeralds from the same mining location and the statistical differences that exist between emeralds from different mining locations, including those between the three locations: Muzo, Coscuez, and Chivor, in Colombia, South America.
Synthetic emerald
Both hydrothermal and flux-growth synthetics have been produced, and a method has been developed for producing an emerald overgrowth on colorless beryl. The first commercially successful emerald synthesis process was that of Carroll Chatham, likely involving a lithium vanadate flux process, as Chatham's emeralds do not have any water and contain traces of vanadate, molybdenum and vanadium. The other large producer of flux emeralds was Pierre Gilson Sr., whose products have been on the market since 1964. Gilson's emeralds are usually grown on natural colorless beryl seeds, which are coated on both sides. Growth occurs at the rate of 1 mm per month; a typical seven-month growth run produces emerald crystals 7 mm thick.
Hydrothermal synthetic emeralds have been attributed to IG Farben, Nacken, Tairus, and others, but the first satisfactory commercial product was that of Johann Lechleitner of Innsbruck, Austria, which appeared on the market in the 1960s. These stones were initially sold under the names "Emerita" and "Symeralds", and they were grown as a thin layer of emerald on top of natural colorless beryl stones. Later, from 1965 to 1970, the Linde Division of Union Carbide produced completely synthetic emeralds by hydrothermal synthesis. According to their patents (attributable to E.M. Flanigen), acidic conditions are essential to prevent the chromium (which is used as the colorant) from precipitating. Also, it is important that the silicon-containing nutrient be kept away from the other ingredients to prevent nucleation and confine growth to the seed crystals. Growth occurs by a diffusion-reaction process, assisted by convection. The largest producer of hydrothermal emeralds today is Tairus, which has succeeded in synthesizing emeralds with chemical composition similar to emeralds in alkaline deposits in Colombia, and whose products are thus known as “Colombian created emeralds” or “Tairus created emeralds”. Luminescence in ultraviolet light is considered a supplementary test when making a natural versus synthetic determination, as many, but not all, natural emeralds are inert to ultraviolet light. Many synthetics are also UV inert.
Synthetic emeralds are often referred to as "created", as their chemical and gemological composition is the same as their natural counterparts. The U.S. Federal Trade Commission (FTC) has very strict regulations as to what can and what cannot be called a "synthetic" stone. The FTC says: "§ 23.23(c) It is unfair or deceptive to use the word "laboratory-grown", "laboratory-created", "[manufacturer name]-created", or "synthetic" with the name of any natural stone to describe any industry product unless such industry product has essentially the same optical, physical, and chemical properties as the stone named."
Historical and cultural references
Emerald is regarded as the traditional birthstone for May as well as the traditional gemstone for the astrological sign of Taurus.
Traditional alchemical lore ascribes several uses and characteristics to emeralds:
The virtue of the Emerald is to counteract poison. They say that if a venomous animal should look at it, it will become blinded. The gem also acts as a preservative against epilepsy; it cures leprosy, strengthens sight and memory, checks copulation, during which act it will break, if worn at the time on the finger.
According to French writer Brantôme (c. 1540–1614), Hernán Cortés had one of the emeralds he had looted from Mexico engraved with the text Inter Natos Mulierum non surrexit major ("Among those born of woman there hath not arisen a greater," Matthew 11:11), in reference to John the Baptist. Brantôme considered engraving such a beautiful and simple product of nature sacrilegious and considered this act the cause of Cortés's loss in 1541 of an extremely precious pearl, and even of the death of King Charles IX of France, who died (1574) soon afterward.
In American author L. Frank Baum's 1900 children's novel The Wonderful Wizard of Oz, and the 1939 MGM film adaptation, the protagonist must travel to an Emerald City to meet the eponymous character, the Wizard.
The chief deity of one of India's most famous temples, the Meenakshi Amman Temple in Madurai, is the goddess Meenakshi, whose idol is traditionally thought to be made of emerald.
| Physical sciences | Mineral gemstones | null |
10048 | https://en.wikipedia.org/wiki/Ethanol | Ethanol | Ethanol (also called ethyl alcohol, grain alcohol, drinking alcohol, or simply alcohol) is an organic compound with the chemical formula CH3CH2OH. It is an alcohol, with its formula also written as C2H5OH or EtOH, where Et stands for ethyl. Ethanol is a volatile, flammable, colorless liquid with a characteristic wine-like odor and pungent taste.
As a psychoactive depressant, it is the active ingredient in alcoholic beverages, and the second most consumed drug globally behind caffeine.
Ethanol is naturally produced by the fermentation process of sugars by yeasts or via petrochemical processes such as ethylene hydration. Historically it was used as a general anesthetic, and has modern medical applications as an antiseptic, disinfectant, solvent for some medications, and antidote for methanol poisoning and ethylene glycol poisoning. It is used as a chemical solvent and in the synthesis of organic compounds, and as a fuel source for lamps, stoves, and internal combustion engines. Ethanol also can be dehydrated to make ethylene, an important chemical feedstock. As of 2023, world production of ethanol fuel was , coming mostly from the U.S. (51%) and Brazil (26%).
Name
Ethanol is the systematic name defined by the International Union of Pure and Applied Chemistry for a compound consisting of an alkyl group with two carbon atoms (prefix "eth-"), having a single bond between them (infix "-an-") and an attached −OH functional group (suffix "-ol").
The "eth-" prefix and the qualifier "ethyl" in "ethyl alcohol" originally came from the name "ethyl" assigned in 1834 to the group − by Justus Liebig. He coined the word from the German name Aether of the compound −O− (commonly called "ether" in English, more specifically called "diethyl ether"). According to the Oxford English Dictionary, Ethyl is a contraction of the Ancient Greek αἰθήρ (, "upper air") and the Greek word ὕλη (, "wood, raw material", hence "matter, substance"). Ethanol was coined as a result of a resolution on naming alcohols and phenols that was adopted at the International Conference on Chemical Nomenclature that was held in April 1892 in Geneva, Switzerland.
The term alcohol now refers to a wider class of substances in chemistry nomenclature, but in common parlance it remains the name of ethanol. It is a medieval loan from Arabic al-kuḥl, a powdered ore of antimony used since antiquity as a cosmetic, and retained that meaning in Middle Latin. The use of 'alcohol' for ethanol (in full, "alcohol of wine") was first recorded in 1753. Before the late 18th century the term alcohol generally referred to any sublimated substance.
Uses
Recreational drug
As a central nervous system depressant, ethanol is one of the most commonly consumed psychoactive drugs. Despite alcohol's psychoactive, addictive, and carcinogenic properties, it is readily available and legal for sale in many countries. There are laws regulating the sale, exportation/importation, taxation, manufacturing, consumption, and possession of alcoholic beverages. The most common regulation is prohibition for minors.
In mammals, ethanol is primarily metabolized in the liver and stomach by ADH enzymes. These enzymes catalyze the oxidation of ethanol into acetaldehyde (ethanal):
CH3CH2OH + NAD+ → CH3CHO + NADH + H+
When present in significant concentrations, this metabolism of ethanol is additionally aided by the cytochrome P450 enzyme CYP2E1 in humans, while trace amounts are also metabolized by catalase. The resulting intermediate, acetaldehyde, is a known carcinogen, and poses significantly greater toxicity in humans than ethanol itself. Many of the symptoms typically associated with alcohol intoxication—as well as many of the health hazards typically associated with the long-term consumption of ethanol—can be attributed to acetaldehyde toxicity in humans.
The subsequent oxidation of acetaldehyde into acetate is performed by aldehyde dehydrogenase (ALDH) enzymes. A mutation in the ALDH2 gene that encodes for an inactive or dysfunctional form of this enzyme affects roughly 50 % of east Asian populations, contributing to the characteristic alcohol flush reaction that can cause temporary reddening of the skin as well as a number of related, and often unpleasant, symptoms of acetaldehyde toxicity. This mutation is typically accompanied by another mutation in the ADH enzyme ADH1B in roughly 80 % of east Asians, which improves the catalytic efficiency of converting ethanol into acetaldehyde.
Medical
Ethanol is the oldest known sedative, used as an oral general anesthetic during surgery in ancient Mesopotamia and in medieval times. Mild intoxication starts at a blood alcohol concentration of 0.03–0.05%, and an anesthetic coma is induced at about 0.4%. This use carries the high risk of deadly alcohol intoxication, pulmonary aspiration and vomiting, which led to use of alternatives in antiquity, such as opium and cannabis, and later diethyl ether, starting in the 1840s.
Ethanol is used as an antiseptic in medical wipes and hand sanitizer gels for its bactericidal and anti-fungal effects. Ethanol kills microorganisms by dissolving their membrane lipid bilayer and denaturing their proteins, and is effective against most bacteria, fungi and viruses. It is ineffective against bacterial spores, which can be treated with hydrogen peroxide.
A solution of 70% ethanol is more effective than pure ethanol because ethanol relies on water molecules for optimal antimicrobial activity. Absolute ethanol may inactivate microbes without destroying them because the alcohol is unable to fully permeate the microbe's membrane. Ethanol can also be used as a disinfectant and antiseptic by inducing cell dehydration through disruption of the osmotic balance across the cell membrane, causing water to leave the cell, leading to cell death.
Ethanol may be administered as an antidote to ethylene glycol poisoning and methanol poisoning. It does so by acting as a competitive inhibitor against methanol and ethylene glycol for alcohol dehydrogenase (ADH). Though it has more side effects, ethanol is less expensive and more readily available than fomepizole in the role.
Ethanol is used to dissolve many water-insoluble medications and related compounds. Liquid preparations of pain medications, cough and cold medicines, and mouth washes, for example, may contain up to 25% ethanol and may need to be avoided in individuals with adverse reactions to ethanol such as alcohol-induced respiratory reactions. Ethanol is present mainly as an antimicrobial preservative in over 700 liquid preparations of medicine including acetaminophen, iron supplements, ranitidine, furosemide, mannitol, phenobarbital, trimethoprim/sulfamethoxazole and over-the-counter cough medicine.
Some medicinal solutions of ethanol are also known as tinctures.
Energy source
The largest single use of ethanol is as an engine fuel and fuel additive. Brazil in particular relies heavily upon the use of ethanol as an engine fuel, due in part to its role as one of the world's leading producers of ethanol. Gasoline sold in Brazil contains at least 25% anhydrous ethanol. Hydrous ethanol (about 95% ethanol and 5% water) can be used as fuel in more than 90% of new gasoline-fueled cars sold in the country.
The US and many other countries primarily use E10 (10% ethanol, sometimes known as gasohol) and E85 (85% ethanol) ethanol/gasoline mixtures. Over time, it is believed that a material portion of the annual market for gasoline will begin to be replaced with fuel ethanol.
Australian law limits the use of pure ethanol from sugarcane waste to 10 % in automobiles. Older cars (and vintage cars designed to use a slower burning fuel) should have the engine valves upgraded or replaced.
According to an industry advocacy group, ethanol as a fuel reduces harmful tailpipe emissions of carbon monoxide, particulate matter, oxides of nitrogen, and other ozone-forming pollutants. Argonne National Laboratory analyzed greenhouse gas emissions of many different engine and fuel combinations, and found that a biodiesel/petrodiesel blend (B20) showed a reduction of 8%, a conventional E85 ethanol blend a reduction of 17%, and cellulosic ethanol a reduction of 64%, compared with pure gasoline. Ethanol has a much greater research octane number (RON) than gasoline, meaning it is less prone to pre-ignition; this allows for greater ignition advance, which yields more torque and efficiency, in addition to lower carbon emissions.
Ethanol combustion in an internal combustion engine yields many of the products of incomplete combustion produced by gasoline, plus significantly larger amounts of formaldehyde and related species such as acetaldehyde. This leads to significantly greater photochemical reactivity and more ground-level ozone. These data have been assembled into The Clean Fuels Report comparison of fuel emissions, which shows that ethanol exhaust generates 2.14 times as much ozone as gasoline exhaust. When this is added into the custom Localized Pollution Index of The Clean Fuels Report, the local pollution rating of ethanol (pollution that contributes to smog) is 1.7, where gasoline is 1.0 and higher numbers signify greater pollution. The California Air Resources Board formalized this issue in 2008 by recognizing control standards for formaldehydes as an emissions control group, much like the conventional NOx and reactive organic gases (ROGs).
More than 20% of Brazilian cars are able to use 100% ethanol as fuel, which includes ethanol-only engines and flex-fuel engines. Flex-fuel engines in Brazil are able to work with all ethanol, all gasoline or any mixture of both. In the United States, flex-fuel vehicles can run on 0% to 85% ethanol (15% gasoline) since higher ethanol blends are not yet allowed or efficient. Brazil supports this fleet of ethanol-burning automobiles with large national infrastructure that produces ethanol from domestically grown sugarcane.
Ethanol's high miscibility with water makes it unsuitable for shipping through existing pipelines in the way liquid hydrocarbons are. Mechanics have seen increased cases of damage to small engines (in particular, the carburetor) and attribute the damage to the increased water retention of ethanol-blended fuel.
Ethanol was commonly used as fuel in early bipropellant rocket (liquid-propelled) vehicles, in conjunction with an oxidizer such as liquid oxygen. The German A-4 ballistic rocket of World War II (better known by its propaganda name V-2), which is credited as having begun the space age, used ethanol as the main constituent of its B-Stoff fuel. Under such nomenclature, the ethanol was mixed with 25% water to reduce the combustion chamber temperature. The A-4 design team helped develop U.S. rockets following World War II, including the ethanol-fueled Redstone rocket, which launched the first U.S. astronaut on a suborbital spaceflight. Alcohols fell into general disuse as more energy-dense rocket fuels were developed, although ethanol was used in recent experimental lightweight rocket-powered racing aircraft.
Commercial fuel cells operate on reformed natural gas, hydrogen or methanol. Ethanol is an attractive alternative due to its wide availability, low cost, high purity and low toxicity. There is a wide range of fuel cell concepts that have entered trials including direct-ethanol fuel cells, auto-thermal reforming systems and thermally integrated systems. The majority of work is being conducted at a research level although there are a number of organizations at the beginning of the commercialization of ethanol fuel cells.
Ethanol fireplaces can be used for home heating or for decoration. Ethanol can also be used as stove fuel for cooking.
Other uses
Ethanol is an important industrial ingredient. It has widespread use as a precursor for other organic compounds such as ethyl halides, ethyl esters, diethyl ether, acetic acid, and ethyl amines. It is considered a universal solvent, as its molecular structure allows for the dissolving of both polar, hydrophilic and nonpolar, hydrophobic compounds. As ethanol also has a low boiling point, it is easy to remove from a solution that has been used to dissolve other compounds, making it a popular extracting agent for botanical oils. Cannabis oil extraction methods often use ethanol as an extraction solvent, and also as a post-processing solvent to remove oils, waxes, and chlorophyll from solution in a process known as winterization.
Ethanol is found in paints, tinctures, markers, and personal care products such as mouthwashes, perfumes and deodorants. Polysaccharides precipitate from aqueous solution in the presence of alcohol, and ethanol precipitation is used for this reason in the purification of DNA and RNA. Because of its low freezing point of about −114 °C and low toxicity, ethanol is sometimes used in laboratories (with dry ice or other coolants) as a cooling bath to keep vessels at temperatures below the freezing point of water. For the same reason, it is also used as the active fluid in alcohol thermometers.
Chemistry
Ethanol is a 2-carbon alcohol. Its molecular formula is C2H6O, usually written as the condensed structural formula CH3CH2OH (an ethyl group linked to a hydroxyl group), which indicates that the carbon of a methyl group (CH3−) is attached to the carbon of a methylene group (−CH2−), which is attached to the oxygen of a hydroxyl group (−OH). It is a constitutional isomer of dimethyl ether. Ethanol is sometimes abbreviated as EtOH, using the common organic chemistry notation of representing the ethyl group (C2H5−) with Et.
Physical properties
Ethanol is a volatile, colorless liquid that has a slight odor. It burns with a smokeless blue flame that is not always visible in normal light. The physical properties of ethanol stem primarily from the presence of its hydroxyl group and the shortness of its carbon chain. Ethanol's hydroxyl group is able to participate in hydrogen bonding, rendering it more viscous and less volatile than less polar organic compounds of similar molecular weight, such as propane. Ethanol's adiabatic flame temperature for combustion in air is 2082 °C or 3779 °F.
Ethanol is slightly more refractive than water, having a refractive index of 1.36242 (at λ = 589.3 nm). The triple point for ethanol is 150 K at a pressure of 4.3 × 10⁻⁹ bar.
Solvent properties
Ethanol is a versatile solvent, miscible with water and with many organic solvents, including acetic acid, acetone, benzene, carbon tetrachloride, chloroform, diethyl ether, ethylene glycol, glycerol, nitromethane, pyridine, and toluene. Its main use as a solvent is in making tincture of iodine, cough syrups, etc. It is also miscible with light aliphatic hydrocarbons, such as pentane and hexane, and with aliphatic chlorides such as trichloroethane and tetrachloroethylene.
Ethanol's miscibility with water contrasts with the immiscibility of longer-chain alcohols (five or more carbon atoms), whose water miscibility decreases sharply as the number of carbons increases. The miscibility of ethanol with alkanes is limited to alkanes up to undecane: mixtures with dodecane and higher alkanes show a miscibility gap below a certain temperature (about 13 °C for dodecane). The miscibility gap tends to get wider with higher alkanes, and the temperature for complete miscibility increases.
Ethanol-water mixtures have less volume than the sum of their individual components at the given fractions. Mixing equal volumes of ethanol and water results in only 1.92 volumes of mixture. Mixing ethanol and water is exothermic, with up to 777 J/mol being released at 298 K.
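The contraction can be reproduced with a simple mass balance: mass is conserved on mixing even though volume is not. The sketch below is illustrative only; the pure-component densities are standard 20 °C values, and the density of the resulting ~44 wt% solution (about 0.93 g/mL) is an assumed handbook-style figure, not a number from this article.

```python
# Illustrative mass balance for the volume contraction described above.
RHO_ETHANOL = 0.789  # g/mL at 20 °C (standard value)
RHO_WATER = 0.998    # g/mL at 20 °C (standard value)
RHO_MIXTURE = 0.93   # g/mL for the ~44 wt% ethanol result (assumption)

def mixed_volume_ml(v_ethanol: float, v_water: float) -> float:
    """Mass is conserved on mixing; volume is not. Returns mixture volume in mL."""
    total_mass = v_ethanol * RHO_ETHANOL + v_water * RHO_WATER
    return total_mass / RHO_MIXTURE

v = mixed_volume_ml(50.0, 50.0)
print(f"50 mL + 50 mL -> {v:.1f} mL, i.e. {v / 50.0:.2f} volumes")  # ~1.92 volumes
```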
Hydrogen bonding causes pure ethanol to be hygroscopic to the extent that it readily absorbs water from the air. The polar nature of the hydroxyl group causes ethanol to dissolve many ionic compounds, notably sodium and potassium hydroxides, magnesium chloride, calcium chloride, ammonium chloride, ammonium bromide, and sodium bromide. Sodium and potassium chlorides are slightly soluble in ethanol. Because the ethanol molecule also has a nonpolar end, it will also dissolve nonpolar substances, including most essential oils and numerous flavoring, coloring, and medicinal agents.
The addition of even a few percent of ethanol to water sharply reduces the surface tension of water. This property partially explains the "tears of wine" phenomenon. When wine is swirled in a glass, ethanol evaporates quickly from the thin film of wine on the wall of the glass. As the wine's ethanol content decreases, its surface tension increases and the thin film "beads up" and runs down the glass in channels rather than as a smooth sheet.
Azeotrope with water
At atmospheric pressure, mixtures of ethanol and water form an azeotrope at about 89.4 mol% ethanol (95.6% ethanol by mass, 97% alcohol by volume), with a boiling point of 351.3 K (78.1 °C). At lower pressure, the composition of the ethanol-water azeotrope shifts to more ethanol-rich mixtures. The minimum-pressure azeotrope has an ethanol fraction of 100% and a boiling point of 306 K (33 °C), corresponding to a pressure of roughly 70 torr (9.333 kPa). Below this pressure, there is no azeotrope, and it is possible to distill absolute ethanol from an ethanol-water mixture.
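The azeotropic composition can be estimated from vapor–liquid equilibrium: at an azeotrope the liquid and vapor compositions coincide, which under modified Raoult's law reduces to γ1·P1sat = γ2·P2sat. The sketch below solves this condition with the van Laar activity model and Antoine vapor pressures; the parameter values are literature-style numbers chosen for illustration, not figures from this article, so the result is only approximate.

```python
import math

# Sketch: estimate the ethanol(1)/water(2) azeotrope composition near the
# normal boiling temperature. Antoine constants (P in mmHg, T in °C) and
# van Laar parameters are assumed, literature-style values.
A12, A21 = 1.6798, 0.9227  # van Laar parameters for ethanol/water (assumed)

def p_sat(a, b, c, t_c):
    """Antoine equation; returns vapor pressure in mmHg."""
    return 10 ** (a - b / (t_c + c))

def gammas(x1):
    """Van Laar activity coefficients for the binary mixture."""
    x2 = 1.0 - x1
    d = A12 * x1 + A21 * x2
    return (math.exp(A12 * (A21 * x2 / d) ** 2),
            math.exp(A21 * (A12 * x1 / d) ** 2))

def azeotrope_x1(t_c=78.1):
    p1 = p_sat(8.20417, 1642.89, 230.300, t_c)  # ethanol
    p2 = p_sat(8.07131, 1730.63, 233.426, t_c)  # water
    f = lambda x1: gammas(x1)[0] * p1 - gammas(x1)[1] * p2
    lo, hi = 0.5, 1.0  # f changes sign on this interval
    for _ in range(60):  # bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"azeotrope at x1 ≈ {azeotrope_x1():.2f}")  # near 0.9 (observed: 0.894)
```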
Flammability
An ethanol–water solution will catch fire if heated above a temperature called its flash point and an ignition source is then applied to it. For 20% alcohol by mass (about 25% by volume), this occurs close to room temperature. The flash point of pure ethanol is about 13 °C, but it may be influenced very slightly by atmospheric composition such as pressure and humidity. Ethanol mixtures can ignite below average room temperature. Ethanol is considered a flammable liquid (Class 3 Hazardous Material) in concentrations above 2.35% by mass (3.0% by volume; 6 proof). Dishes using burning alcohol for culinary effects are called flambé.
Natural occurrence
Ethanol is a byproduct of the metabolic process of yeast. As such, ethanol will be present in any yeast habitat. Ethanol can commonly be found in overripe fruit. Ethanol produced by symbiotic yeast can be found in bertam palm blossoms. Although some animal species, such as the pentailed treeshrew, exhibit ethanol-seeking behaviors, most show either no interest in, or avoidance of, food sources containing ethanol. Ethanol is also produced during the germination of many plants as a result of natural anaerobiosis.
Ethanol has been detected in outer space, forming an icy coating around dust grains in interstellar clouds.
Minute quantities (averaging 196 ppb) of endogenous ethanol and acetaldehyde have been found in the exhaled breath of healthy volunteers. Auto-brewery syndrome, also known as gut fermentation syndrome, is a rare medical condition in which intoxicating quantities of ethanol are produced through endogenous fermentation within the digestive system.
Production
Ethanol is produced both as a petrochemical, through the hydration of ethylene, and via biological processes, by fermenting sugars with yeast. Which process is more economical depends on prevailing prices of petroleum and grain feedstocks.
Sources
In 2006, 69% of the world supply of ethanol came from Brazil and the U.S. Brazilian ethanol is produced from sugarcane, which has relatively high yields (830% more fuel than the fossil fuels used to produce it) compared with some other energy crops. Sugarcane not only has a greater concentration of sucrose than corn (by about 30%), but is also much easier to extract. The bagasse generated by the process is not discarded, but is burned by power plants to produce electricity. Bagasse burning accounts for around 9% of the electricity produced in Brazil.
In the 1970s most industrial ethanol in the U.S. was made as a petrochemical, but in the 1980s the U.S. introduced subsidies for corn-based ethanol. According to the Renewable Fuels Association, as of 30 October 2007, 131 grain ethanol bio-refineries were operating in the U.S., with an additional 72 construction projects underway that would add substantial new capacity over the following 18 months.
In India ethanol is made from sugarcane. Sweet sorghum is another potential source of ethanol, and is suitable for growing in dryland conditions. The International Crops Research Institute for the Semi-Arid Tropics is investigating the possibility of growing sorghum as a source of fuel, food, and animal feed in arid parts of Asia and Africa. Sweet sorghum has one-third the water requirement of sugarcane over the same time period. It also requires about 22% less water than corn. The world's first sweet sorghum ethanol distillery began commercial production in 2007 in Andhra Pradesh, India.
Ethanol has been produced in the laboratory by converting carbon dioxide via biological and electrochemical reactions.
Hydration
Ethanol can be produced from petrochemical feed stocks, primarily by the acid-catalyzed hydration of ethylene. It is often referred to as synthetic ethanol.
The catalyst is most commonly phosphoric acid, adsorbed onto a porous support such as silica gel or diatomaceous earth. This catalyst was first used for large-scale ethanol production by the Shell Oil Company in 1947. The reaction is carried out in the presence of high-pressure steam at about 300 °C, where a 5:3 ethylene-to-steam ratio is maintained. This process was used on an industrial scale by Union Carbide Corporation and others. It is no longer practiced in the US as fermentation ethanol produced from corn is more economical.
In an older process, first practiced on the industrial scale in 1930 by Union Carbide but now almost entirely obsolete, ethylene was hydrated indirectly by reacting it with concentrated sulfuric acid to produce ethyl sulfate, which was hydrolyzed to yield ethanol and regenerate the sulfuric acid:
C2H4 + H2SO4 → CH3CH2OSO3H
CH3CH2OSO3H + H2O → CH3CH2OH + H2SO4
Fermentation
Ethanol in alcoholic beverages and fuel is produced by fermentation. Certain species of yeast (e.g., Saccharomyces cerevisiae) metabolize sugars such as glucose, fructose, and sucrose, producing ethanol and carbon dioxide. The chemical equations below summarize the conversion:
C6H12O6 → 2 CH3CH2OH + 2 CO2
C12H22O11 + H2O → 4 CH3CH2OH + 4 CO2
Fermentation is the process of culturing yeast under favorable thermal conditions to produce alcohol; it is carried out at around 35–40 °C. Toxicity of ethanol to yeast limits the ethanol concentration obtainable by brewing; higher concentrations, therefore, are obtained by fortification or distillation. The most ethanol-tolerant yeast strains can survive up to approximately 18% ethanol by volume.
To produce ethanol from starchy materials such as cereals, the starch must first be converted into sugars. In brewing beer, this has traditionally been accomplished by allowing the grain to germinate, or malt, which produces the enzyme amylase. When the malted grain is mashed, the amylase converts the remaining starches into sugars.
Sugars for ethanol fermentation can be obtained from cellulose. Deployment of this technology could turn a number of cellulose-containing agricultural by-products, such as corncobs, straw, and sawdust, into renewable energy resources. Other agricultural residues such as sugarcane bagasse and energy crops such as switchgrass may also be fermentable sugar sources.
Testing
Breweries and biofuel plants employ two methods for measuring ethanol concentration. Infrared ethanol sensors measure the vibrational frequency of dissolved ethanol using the C−H band at about 2900 cm⁻¹. This method uses a relatively inexpensive solid-state sensor that compares the C−H band with a reference band to calculate the ethanol content. The calculation makes use of the Beer–Lambert law. Alternatively, by measuring the density of the starting material and the density of the product with a hydrometer, the change in specific gravity during fermentation indicates the alcohol content. This inexpensive and indirect method has a long history in the beer brewing industry.
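As a concrete example of the hydrometer method, brewers commonly convert the drop in specific gravity into alcohol by volume with a linear approximation. The 131.25 factor below is a widely used brewing rule of thumb (an assumption here, not a value from this article) that holds for typical beer-strength fermentations.

```python
# Sketch of the hydrometer method described above: the drop in specific
# gravity during fermentation indicates how much sugar became ethanol.
def abv_from_gravity(original_gravity: float, final_gravity: float) -> float:
    """Approximate alcohol by volume (%) from hydrometer readings."""
    return (original_gravity - final_gravity) * 131.25  # brewing rule of thumb

# Example: a wort at 1.050 that ferments down to 1.010
print(f"ABV ≈ {abv_from_gravity(1.050, 1.010):.1f}%")  # ≈ 5.2%
```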
Purification
Ethylene hydration or brewing produces an ethanol–water mixture. For most industrial and fuel uses, the ethanol must be purified. Fractional distillation at atmospheric pressure can concentrate ethanol to 95.6% by weight (89.5 mole%). This mixture is an azeotrope with a boiling point of 78.1 °C, and cannot be further purified by distillation. Addition of an entraining agent, such as benzene, cyclohexane, or heptane, allows a new ternary azeotrope comprising the ethanol, water, and the entraining agent to be formed. This lower-boiling ternary azeotrope is removed preferentially, leading to water-free ethanol.
Apart from distillation, ethanol may be dried by addition of a desiccant, such as molecular sieves, cellulose, or cornmeal. The desiccants can be dried and reused. Molecular sieves can be used to selectively adsorb the water from the 95.6% ethanol solution. Molecular sieves of pore size 3 Ångström, a type of zeolite, effectively sequester water molecules while excluding ethanol molecules. Heating the wet sieves drives out the water, allowing regeneration of their desiccant capability.
Membranes can also be used to separate ethanol and water. Membrane-based separations are not subject to the limitations of the water-ethanol azeotrope because the separations are not based on vapor-liquid equilibria. Membranes are often used in the so-called hybrid membrane distillation process. This process uses a pre-concentration distillation column as the first separating step. The further separation is then accomplished with a membrane operated either in vapor permeation or pervaporation mode. Vapor permeation uses a vapor membrane feed and pervaporation uses a liquid membrane feed.
A variety of other techniques have been discussed, including the following:
Salting out using potassium carbonate, which is insoluble in ethanol, causes a phase separation between ethanol and water. This leaves only a very small potassium carbonate impurity in the alcohol, which can be removed by distillation. This method is very useful in purification of ethanol by distillation, as ethanol forms an azeotrope with water.
Direct electrochemical reduction of carbon dioxide to ethanol under ambient conditions using copper nanoparticles on a carbon nanospike film as the catalyst;
Extraction of ethanol from grain mash by supercritical carbon dioxide;
Pervaporation;
Fractional freezing is also used to concentrate fermented alcoholic solutions, such as traditionally made Applejack (beverage);
Pressure swing adsorption.
Grades of ethanol
Pure ethanol and alcoholic beverages are heavily taxed as psychoactive drugs, but ethanol has many uses that do not involve its consumption. To relieve the tax burden on these uses, most jurisdictions waive the tax when an agent has been added to the ethanol to render it unfit to drink. These include bittering agents such as denatonium benzoate and toxins such as methanol, naphtha, and pyridine. Products of this kind are called denatured alcohol.
Absolute or anhydrous alcohol refers to ethanol with a low water content. There are various grades with maximum water contents ranging from 1% to a few parts per million (ppm). If azeotropic distillation is used to remove water, it will contain trace amounts of the material separation agent (e.g. benzene). Absolute alcohol is not intended for human consumption. Absolute ethanol is used as a solvent for laboratory and industrial applications, where water will react with other chemicals, and as fuel alcohol. Spectroscopic ethanol is an absolute ethanol with a low absorbance in ultraviolet and visible light, fit for use as a solvent in ultraviolet-visible spectroscopy. Pure ethanol is classed as 200 proof in the US, equivalent to 175 degrees proof in the UK system. Rectified spirit, an azeotropic composition of 96% ethanol containing 4% water, is used instead of anhydrous ethanol for various purposes. Spirits of wine are about 94% ethanol (188 proof). The impurities are different from those in 95% (190 proof) laboratory ethanol.
Reactions
Ethanol is classified as a primary alcohol, meaning that the carbon that its hydroxyl group attaches to has at least two hydrogen atoms attached to it as well. Many ethanol reactions occur at its hydroxyl group.
Ester formation
In the presence of acid catalysts, ethanol reacts with carboxylic acids to produce ethyl esters and water:
RCOOH + HOCH2CH3 → RCOOCH2CH3 + H2O
This reaction, which is conducted on large scale industrially, requires the removal of the water from the reaction mixture as it is formed. Esters react in the presence of an acid or base to give back the alcohol and a salt. This reaction is known as saponification because it is used in the preparation of soap. Ethanol can also form esters with inorganic acids. Diethyl sulfate and triethyl phosphate are prepared by treating ethanol with sulfur trioxide and phosphorus pentoxide respectively. Diethyl sulfate is a useful ethylating agent in organic synthesis. Ethyl nitrite, prepared from the reaction of ethanol with sodium nitrite and sulfuric acid, was formerly used as a diuretic.
Dehydration
In the presence of acid catalysts, alcohols can be converted to alkenes; for example, ethanol is converted to ethylene. Typically solid acids such as alumina are used.
CH3CH2OH → H2C=CH2 + H2O
Since water is removed from the same molecule, the reaction is known as intramolecular dehydration. Intramolecular dehydration of an alcohol requires a high temperature and the presence of an acid catalyst such as sulfuric acid. Ethylene produced from sugar-derived ethanol (primarily in Brazil) competes with ethylene produced from petrochemical feedstocks such as naphtha and ethane. At a lower temperature than that of intramolecular dehydration, intermolecular alcohol dehydration may occur producing a symmetrical ether. This is a condensation reaction. In the following example, diethyl ether is produced from ethanol:
2 CH3CH2OH → CH3CH2OCH2CH3 + H2O
Combustion
Complete combustion of ethanol forms carbon dioxide and water:
C2H5OH (l) + 3 O2 (g) → 2 CO2 (g) + 3 H2O (l); −ΔcH = 1371 kJ/mol = 29.8 kJ/g = 327 kcal/mol = 7.1 kcal/g
C2H5OH (l) + 3 O2 (g) → 2 CO2 (g) + 3 H2O (g); −ΔcH = 1236 kJ/mol = 26.8 kJ/g = 295.4 kcal/mol = 6.41 kcal/g
Specific heat = 2.44 kJ/(kg·K)
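The per-gram and kilocalorie figures above follow from the molar enthalpies by unit conversion alone, using ethanol's molar mass of about 46.07 g/mol and 1 kcal = 4.184 kJ; a minimal check:

```python
# Unit-conversion check of the combustion enthalpies quoted above.
M_ETHANOL = 46.07    # g/mol
KJ_PER_KCAL = 4.184

for product, dh in (("liquid water product", 1371.0), ("water vapor product", 1236.0)):
    print(f"{product}: {dh / M_ETHANOL:.1f} kJ/g, {dh / KJ_PER_KCAL:.0f} kcal/mol")
# -> 29.8 kJ/g / ~328 kcal/mol and 26.8 kJ/g / ~295 kcal/mol,
#    in agreement with the rounded values in the equations above.
```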
Acid-base chemistry
Ethanol is a neutral molecule and the pH of a solution of ethanol in water is nearly 7.00. Ethanol can be quantitatively converted to its conjugate base, the ethoxide ion (CH3CH2O−), by reaction with an alkali metal such as sodium:
2 CH3CH2OH + 2 Na → 2 CH3CH2ONa + H2
or a very strong base such as sodium hydride:
CH3CH2OH + NaH → CH3CH2ONa + H2
The acidities of water and ethanol are nearly the same, as indicated by their pKa of 15.7 and 16 respectively. Thus, sodium ethoxide and sodium hydroxide exist in an equilibrium that is closely balanced:
CH3CH2OH + NaOH ⇌ CH3CH2ONa + H2O
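How closely balanced this equilibrium is can be quantified directly from the pKa difference; a one-line check using the values quoted above:

```python
# Equilibrium constant for CH3CH2OH + OH- <-> CH3CH2O- + H2O, from the
# pKa values quoted above (water 15.7, ethanol 16): K = 10**(pKa_w - pKa_EtOH).
K = 10 ** (15.7 - 16.0)
print(f"K ≈ {K:.2f}")  # ≈ 0.50: neither side of the equilibrium is strongly favored
```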
Halogenation
Ethanol is not used industrially as a precursor to ethyl halides, but the reactions are illustrative. Ethanol reacts with hydrogen halides to produce ethyl halides such as ethyl chloride and ethyl bromide via an SN2 reaction:
CH3CH2OH + HCl → CH3CH2Cl + H2O
HCl requires a catalyst such as zinc chloride.
HBr requires refluxing with a sulfuric acid catalyst. Ethyl halides can, in principle, also be produced by treating ethanol with more specialized halogenating agents, such as thionyl chloride or phosphorus tribromide.
CH3CH2OH + SOCl2 → CH3CH2Cl + SO2 + HCl
Upon treatment with halogens in the presence of base, ethanol gives the corresponding haloform (CHX3, where X = Cl, Br, I). This conversion is called the haloform reaction.
An intermediate in the reaction with chlorine is the aldehyde called chloral, which forms chloral hydrate upon reaction with water:
4 Cl2 + CH3CH2OH → CCl3CHO + 5 HCl
CCl3CHO + H2O → CCl3C(OH)2H
Oxidation
Ethanol can be oxidized to acetaldehyde and further oxidized to acetic acid, depending on the reagents and conditions. This oxidation is of no importance industrially, but in the human body, these oxidation reactions are catalyzed by the enzyme liver ADH. The oxidation product of ethanol, acetic acid, is a nutrient for humans, being a precursor to acetyl CoA, where the acetyl group can be spent as energy or used for biosynthesis.
Metabolism
Ethanol is similar to macronutrients such as proteins, fats, and carbohydrates in that it provides calories. When consumed and metabolized, it contributes 7 kilocalories per gram via ethanol metabolism.
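As a worked example of the 7 kcal/g figure, the ethanol-derived calories in a drink can be estimated from its volume, strength, and ethanol's density. The density value is standard; the example drink size is an assumption for illustration, not taken from this article.

```python
# Calories contributed by ethanol alone, using the 7 kcal/g figure above.
KCAL_PER_G = 7.0
RHO_ETHANOL = 0.789  # g/mL, standard density of ethanol

def ethanol_kcal(volume_ml: float, abv_percent: float) -> float:
    grams = volume_ml * (abv_percent / 100.0) * RHO_ETHANOL
    return grams * KCAL_PER_G

# Example: 355 mL at 5% ABV (roughly a can of beer; an assumed illustration)
print(f"355 mL at 5% ABV: ~{ethanol_kcal(355, 5.0):.0f} kcal from ethanol")  # ~98 kcal
```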
Safety
Ethanol is very flammable and should not be used around an open flame.
Pure ethanol will irritate the skin and eyes. Nausea, vomiting, and intoxication are symptoms of ingestion. Long-term use by ingestion can result in serious liver damage. Atmospheric concentrations above one part per thousand are above the European Union occupational exposure limits.
History
The fermentation of sugar into ethanol is one of the earliest biotechnologies employed by humans. Ethanol has historically been identified variously as spirit of wine or ardent spirits, and as aqua vitae or aqua vita. The intoxicating effects of its consumption have been known since ancient times. Ethanol has been used by humans since prehistory as the intoxicating ingredient of alcoholic beverages. Dried residue on 9,000-year-old pottery found in China suggests that Neolithic people consumed alcoholic beverages.
The inflammable nature of the exhalations of wine was already known to ancient natural philosophers such as Aristotle (384–322 BCE), Theophrastus (c. 371–287 BCE), and Pliny the Elder (23/24–79 CE). However, this did not immediately lead to the isolation of ethanol, despite the development of more advanced distillation techniques in second- and third-century Roman Egypt. An important recognition, first found in one of the writings attributed to Jābir ibn Ḥayyān (ninth century CE), was that by adding salt to boiling wine, which increases the wine's relative volatility, the flammability of the resulting vapors may be enhanced. The distillation of wine is attested in Arabic works attributed to al-Kindī (c. 801–873 CE) and to al-Fārābī (c. 872–950), and in the 28th book of al-Zahrāwī's (Latin: Abulcasis, 936–1013) Kitāb al-Taṣrīf (later translated into Latin as Liber servatoris). In the twelfth century, recipes for the production of aqua ardens ("burning water", i.e., ethanol) by distilling wine with salt started to appear in a number of Latin works, and by the end of the thirteenth century it had become a widely known substance among Western European chemists.
The works of Taddeo Alderotti (1223–1296) describe a method for concentrating ethanol involving repeated fractional distillation through a water-cooled still, by which an ethanol purity of 90% could be obtained. The medicinal properties of ethanol were studied by Arnald of Villanova (1240–1311 CE) and John of Rupescissa (c. 1310–1366), the latter of whom regarded it as a life-preserving substance able to prevent all diseases (the aqua vitae or "water of life", also called by John the quintessence of wine). In China, archaeological evidence indicates that the true distillation of alcohol began during the Jin (1115–1234) or Southern Song (1127–1279) dynasties. A still has been found at an archaeological site in Qinglong, Hebei, dating to the 12th century. In India, the true distillation of alcohol was introduced from the Middle East, and was in wide use in the Delhi Sultanate by the 14th century.
In 1796, German-Russian chemist Johann Tobias Lowitz obtained pure ethanol by mixing partially purified ethanol (the alcohol-water azeotrope) with an excess of anhydrous alkali and then distilling the mixture over low heat. French chemist Antoine Lavoisier described ethanol as a compound of carbon, hydrogen, and oxygen, and in 1807 Nicolas-Théodore de Saussure determined ethanol's chemical formula. Fifty years later, Archibald Scott Couper published the structural formula of ethanol, one of the first structural formulas determined.
Ethanol was first prepared synthetically in 1825 by Michael Faraday. He found that sulfuric acid could absorb large volumes of coal gas. He gave the resulting solution to Henry Hennell, a British chemist, who found in 1826 that it contained "sulphovinic acid" (ethyl hydrogen sulfate). In 1828, Hennell and the French chemist Georges-Simon Serullas independently discovered that sulphovinic acid could be decomposed into ethanol. Thus, in 1825 Faraday had unwittingly discovered that ethanol could be produced from ethylene (a component of coal gas) by acid-catalyzed hydration, a process similar to current industrial ethanol synthesis.
Ethanol was used as lamp fuel in the U.S. as early as 1840, but a tax levied on industrial alcohol during the Civil War made this use uneconomical. The tax was repealed in 1906. Use as an automotive fuel dates back to 1908, with the Ford Model T able to run on petrol (gasoline) or ethanol. It fuels some spirit lamps.
Ethanol intended for industrial use is often produced from ethylene. Ethanol has widespread use as a solvent of substances intended for human contact or consumption, including scents, flavorings, colorings, and medicines. In chemistry, it is both a solvent and a feedstock for the synthesis of other products. It has a long history as a fuel for heat and light, and more recently as a fuel for internal combustion engines.
Empirical formula
In chemistry, the empirical formula of a chemical compound is the simplest whole number ratio of atoms present in a compound. A simple example of this concept is that the empirical formula of sulfur monoxide, or SO, is simply SO, as is the empirical formula of disulfur dioxide, S2O2. Thus, sulfur monoxide and disulfur dioxide, both compounds of sulfur and oxygen, have the same empirical formula. However, their molecular formulas, which express the number of atoms in each molecule of a chemical compound, are not the same.
An empirical formula makes no mention of the arrangement or number of atoms. It is standard for many ionic compounds, like calcium chloride (CaCl2), and for macromolecules, such as silicon dioxide (SiO2).
The molecular formula, on the other hand, shows the number of each type of atom in a molecule. The structural formula shows the arrangement of the molecule. It is also possible for different types of compounds to have equal empirical formulas.
In the early days of chemistry, information regarding the composition of compounds came from elemental analysis, which gives information about the relative amounts of elements present in a compound, which can be written as percentages or mole ratios. However, chemists were not able to determine the exact amounts of these elements and were only able to know their ratios, hence the name "empirical formula". Since ionic compounds are extended networks of anions and cations, all formulas of ionic compounds are empirical.
Examples
Glucose (C6H12O6), ribose (C5H10O5), acetic acid (C2H4O2), and formaldehyde (CH2O) all have different molecular formulas but the same empirical formula: CH2O. This is the actual molecular formula for formaldehyde, but acetic acid has double the number of atoms, ribose has five times the number of atoms, and glucose has six times the number of atoms.
Calculation example
A chemical analysis of a sample of methyl acetate provides the following elemental data: 48.64% carbon (C), 8.16% hydrogen (H), and 43.20% oxygen (O). For the purposes of determining empirical formulas, it's assumed that we have 100 grams of the compound. If this is the case, the percentages will be equal to the mass of each element in grams.
Step 1: Change each percentage to an expression of the mass of each element in grams. That is, 48.64% C becomes 48.64 g C, 8.16% H becomes 8.16 g H, and 43.20% O becomes 43.20 g O.
Step 2: Convert the amount of each element in grams to its amount in moles: 48.64 g C ÷ 12.011 g/mol = 4.05 mol C; 8.16 g H ÷ 1.008 g/mol = 8.10 mol H; 43.20 g O ÷ 16.00 g/mol = 2.70 mol O.
Step 3: Divide each of the resulting values by the smallest of these values (2.70), giving 1.5 for C, 3 for H, and 1 for O.
Step 4: If necessary, multiply these numbers by integers in order to get whole numbers; if an operation is done to one of the numbers, it must be done to all of them. Here, multiplying by 2 gives 3 for C, 6 for H, and 2 for O.
Thus, the empirical formula of methyl acetate is C3H6O2. This formula also happens to be methyl acetate's molecular formula.
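The four steps above translate directly into code; this is a minimal sketch (standard atomic masses, simple integer search) rather than a general-purpose tool:

```python
from math import isclose

# Minimal sketch of the empirical-formula procedure described above.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol, standard values

def empirical_formula(mass_percent: dict) -> str:
    # Steps 1-2: treat percentages as grams per 100 g and convert to moles.
    moles = {el: g / ATOMIC_MASS[el] for el, g in mass_percent.items()}
    # Step 3: divide by the smallest mole count.
    smallest = min(moles.values())
    ratios = {el: n / smallest for el, n in moles.items()}
    # Step 4: scale by small integers until every ratio is nearly whole.
    for factor in range(1, 11):
        scaled = {el: r * factor for el, r in ratios.items()}
        if all(isclose(v, round(v), abs_tol=0.05) for v in scaled.values()):
            return "".join(f"{el}{round(v) if round(v) > 1 else ''}"
                           for el, v in scaled.items())
    raise ValueError("no small integer multiple found")

# Methyl acetate analysis from the text:
print(empirical_formula({"C": 48.64, "H": 8.16, "O": 43.20}))  # -> C3H6O2
```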
Erythromycin
Erythromycin is an antibiotic used for the treatment of a number of bacterial infections. This includes respiratory tract infections, skin infections, chlamydia infections, pelvic inflammatory disease, and syphilis. It may also be used during pregnancy to prevent Group B streptococcal infection in the newborn, and to improve delayed stomach emptying. It can be given intravenously and by mouth. An eye ointment is routinely recommended after delivery to prevent eye infections in the newborn.
Common side effects include abdominal cramps, vomiting, and diarrhea. More serious side effects may include Clostridioides difficile colitis, liver problems, prolonged QT, and allergic reactions. It is generally safe in those who are allergic to penicillin. Erythromycin also appears to be safe to use during pregnancy. While generally regarded as safe during breastfeeding, its use by the mother during the first two weeks of life may increase the risk of pyloric stenosis in the baby. This risk also applies if taken directly by the baby during this age. It is in the macrolide family of antibiotics and works by decreasing bacterial protein production.
Erythromycin was first isolated in 1952 from the bacterium Saccharopolyspora erythraea. It is on the World Health Organization's List of Essential Medicines. In 2022, it was the 271st most commonly prescribed medication in the United States, with more than 800,000 prescriptions.
Medical uses
Erythromycin can be used to treat bacteria responsible for causing infections of the skin and upper respiratory tract, including Streptococcus, Staphylococcus, Haemophilus and Corynebacterium genera. The following represents MIC susceptibility data for a few medically significant bacteria:
Haemophilus influenzae: 0.015 to 256 μg/ml
Staphylococcus aureus: 0.023 to 1024 μg/ml
Streptococcus pyogenes: 0.004 to 256 μg/ml
Corynebacterium minutissimum: 0.015 to 64 μg/ml
It may be useful in treating gastroparesis due to this promotility effect. It has been shown to improve feeding intolerances in those who are critically ill. Intravenous erythromycin may also be used in endoscopy to help clear stomach contents to enhance endoscopic visualization, potentially improving diagnostic accuracy and subsequent management.
Available forms
Erythromycin is available in enteric-coated tablets, slow-release capsules, oral suspensions, ophthalmic solutions, ointments, gels, enteric-coated capsules, non enteric-coated tablets, non enteric-coated capsules, and injections.
The following erythromycin combinations are available for oral dosage:
erythromycin base (capsules, tablets)
erythromycin estolate (capsules, oral suspension, tablets), contraindicated during pregnancy
erythromycin ethylsuccinate (oral suspension, tablets)
erythromycin stearate (oral suspension, tablets)
For injection, the available combinations are:
erythromycin gluceptate
erythromycin lactobionate
For ophthalmic use:
erythromycin base (ointment)
Adverse effects
Gastrointestinal disturbances, such as diarrhea, nausea, abdominal pain, and vomiting, are very common because erythromycin is a motilin agonist.
More serious side effects include arrhythmia with prolonged QT intervals, including torsades de pointes, and reversible deafness. Allergic reactions range from urticaria to anaphylaxis. Cholestasis and Stevens–Johnson syndrome are some other rare side effects that may occur.
Studies have shown evidence both for and against the association of pyloric stenosis and exposure to erythromycin prenatally and postnatally. Exposure to erythromycin (especially long courses at antimicrobial doses, and also through breastfeeding) has been linked to an increased probability of pyloric stenosis in young infants. Erythromycin used for feeding intolerance in young infants has not been associated with hypertrophic pyloric stenosis.
Erythromycin estolate has been associated with reversible hepatotoxicity in pregnant women in the form of elevated serum glutamic-oxaloacetic transaminase and is not recommended during pregnancy. Some evidence suggests similar hepatotoxicity in other populations.
It can also affect the central nervous system, causing psychotic reactions, nightmares, and night sweats.
Interactions
Erythromycin is metabolized by enzymes of the cytochrome P450 system, in particular, by isozymes of the CYP3A superfamily. The activity of the CYP3A enzymes can be induced or inhibited by certain drugs (e.g., dexamethasone), which can cause it to affect the metabolism of many different drugs, including erythromycin. If other CYP3A substrates — drugs that are broken down by CYP3A — such as simvastatin (Zocor), lovastatin (Mevacor), or atorvastatin (Lipitor) — are taken concomitantly with erythromycin, levels of the substrates increase, often causing adverse effects. A noted drug interaction involves erythromycin and simvastatin, resulting in increased simvastatin levels and the potential for rhabdomyolysis. Another group of CYP3A4 substrates are drugs used for migraine such as ergotamine and dihydroergotamine; their adverse effects may be more pronounced if erythromycin is associated.
Earlier case reports on sudden death prompted a study on a large cohort that confirmed a link between erythromycin, ventricular tachycardia, and sudden cardiac death in patients also taking drugs that prolong the metabolism of erythromycin (like verapamil or diltiazem) by interfering with CYP3A4. Hence, erythromycin should not be administered to people using these drugs, or drugs that also prolong the QT interval. Other examples include terfenadine (Seldane, Seldane-D), astemizole (Hismanal), cisapride (Propulsid, withdrawn in many countries for prolonging the QT time) and pimozide (Orap). Interactions with theophylline, which is used mostly in asthma, were also shown.
Erythromycin and doxycycline can have a synergistic effect when combined, killing bacteria (E. coli) with a higher potency than the sum of the two drugs would suggest. This synergistic relationship is only temporary. After approximately 72 hours, the relationship shifts to become antagonistic, whereby a 50/50 combination of the two drugs kills fewer bacteria than if the two drugs were administered separately.
It may alter the effectiveness of combined oral contraceptive pills because of its effect on the gut flora. A review found that when erythromycin was given with certain oral contraceptives, there was an increase in the maximum serum concentrations and AUC of estradiol and dienogest.
Erythromycin is an inhibitor of the cytochrome P450 system, which means it can have a rapid effect on levels of other drugs metabolised by this system, e.g., warfarin.
Pharmacology
Mechanism of action
Erythromycin displays bacteriostatic activity, i.e. it inhibits the growth of bacteria, especially at higher concentrations. By binding to the 50S subunit of the bacterial rRNA complex, protein synthesis and the subsequent structure and function processes critical for life or replication are inhibited. Erythromycin interferes with aminoacyl translocation, preventing the transfer of the tRNA bound at the A site of the rRNA complex to the P site of the rRNA complex. Without this translocation, the A site remains occupied, thus the addition of an incoming tRNA and its attached amino acid to the nascent polypeptide chain is inhibited. This interferes with the production of functionally useful proteins, which is the basis of this antimicrobial action.
Erythromycin increases gut motility by binding to the motilin receptor; thus, it is a motilin receptor agonist in addition to its antimicrobial properties. It can therefore be administered intravenously as a stomach-emptying stimulant.
Pharmacokinetics
Erythromycin is easily inactivated by gastric acid; therefore, all orally administered formulations are given as either enteric-coated or more-stable salts or esters, such as erythromycin ethylsuccinate. Erythromycin is very rapidly absorbed, and diffuses into most tissues and phagocytes. Due to the high concentration in phagocytes, erythromycin is actively transported to the site of infection, where, during active phagocytosis, large concentrations of erythromycin are released.
Metabolism
Most erythromycin is metabolised by demethylation in the liver by the hepatic enzyme CYP3A4. Its main elimination route is in the bile, with little renal excretion (2%–15% of the drug unchanged). Erythromycin's elimination half-life ranges between 1.5 and 2.0 hours and is between 5 and 6 hours in patients with end-stage renal disease. Erythromycin levels peak in the serum 4 hours after dosing; ethylsuccinate peaks 0.5–2.5 hours after dosing, but can be delayed if digested with food.
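The half-life figures above imply simple first-order elimination, under which the fraction of drug remaining after time t is 0.5^(t/t½). A minimal sketch follows; the 12-hour time point is an arbitrary illustration, not a value from this article.

```python
# First-order elimination implied by the half-life figures above.
def remaining_fraction(t_hours: float, t_half_hours: float) -> float:
    """Fraction of the initial dose still present after t_hours."""
    return 0.5 ** (t_hours / t_half_hours)

# Normal elimination (t1/2 ≈ 2 h) vs. end-stage renal disease (t1/2 ≈ 6 h),
# evaluated 12 hours after a dose:
for t_half in (2.0, 6.0):
    print(f"t1/2 = {t_half} h: {remaining_fraction(12.0, t_half):.1%} remains")
# -> about 1.6% vs. 25%
```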
Erythromycin crosses the placenta and enters breast milk. The American Association of Pediatrics determined erythromycin is safe to take while breastfeeding. Absorption in pregnant patients has been shown to be variable, frequently resulting in levels lower than in nonpregnant patients.
Chemistry
Composition
Standard-grade erythromycin is primarily composed of four related compounds known as erythromycins A, B, C, and D. Each of these compounds can be present in varying amounts and can differ by lot. Erythromycin A has been found to have the most antibacterial activity, followed by erythromycin B. Erythromycins C and D are about half as active as erythromycin A. Some of these related compounds have been purified and can be studied and researched individually.
Synthesis
Over the three decades after the discovery of erythromycin A and its activity as an antimicrobial, many attempts were made to synthesize it in the laboratory. The presence of 10 stereogenic carbons and several points of distinct substitution has made the total synthesis of erythromycin A a formidable task. Complete syntheses of erythromycin-related structures and precursors such as 6-deoxyerythronolide B have been accomplished, giving way to possible syntheses of different erythromycins and other macrolide antimicrobials. Woodward and co-workers successfully completed the synthesis of erythromycin A, which was published in 1981.
History
In 1949 Abelardo B. Aguilar, a Filipino scientist, sent some soil samples to his employer at Eli Lilly. Aguilar managed to isolate erythromycin from the metabolic products of a strain of Streptomyces erythreus (designation changed to Saccharopolyspora erythraea) found in the samples. Aguilar received no further credit or compensation for his discovery.
The scientist was allegedly promised a trip to the company's manufacturing plant in Indianapolis, but it was never fulfilled. In a letter to the company's president, Aguilar wrote: “A leave of absence is all I ask as I do not wish to sever my connection with a great company which has given me wonderful breaks in life.” The request was not granted.
Aguilar reached out to Eli Lilly again in 1993, requesting royalties from sales of the drug over the years, intending to use them to put up a foundation for poor and sickly Filipinos. This request was also denied. He died in September of the same year.
Lilly filed for patent protection on the compound, and the patent was granted in 1953. The product was launched commercially in 1952 under the brand name Ilosone (after the Philippine region of Iloilo where it was originally collected). Erythromycin was formerly also called Ilotycin.
The antibiotic clarithromycin was invented by scientists at the Japanese drug company Taisho Pharmaceutical in the 1970s as a result of their efforts to overcome the acid instability of erythromycin.
Society and culture
Economics
It is available as a generic medication.
In the United States, in 2014, the price increased to seven dollars per 500 mg tablet.
The US price of erythromycin rose on three occasions between 2010 and 2015, from 24 cents per 500 mg tablet in 2010 to $8.96 in 2015. In 2017, a Kaiser Health News study found that the per-unit cost of dozens of generics doubled or even tripled from 2015 to 2016, increasing spending by the Medicaid program. Due to price increases by drug manufacturers, Medicaid paid on average $2,685,330 more for erythromycin in 2016 compared with 2015 (not including rebates). In the US by 2018, generic drug prices had climbed another 5% on average.
The UK price listed in the BNF for erythromycin 500 mg tablets was £36.40 for 100 tablets (36.4 pence each). This price is not paid by NHS patients: there is no NHS prescription charge in Scotland, Wales, and Northern Ireland, while NHS patients in England without an exemption are liable for a flat-rate prescription charge, most recently £9.90 for each prescribed medicine.
Brand names
Brand names include Robimycin, E-Mycin, E.E.S. Granules, E.E.S.-200, E.E.S.-400, E.E.S.-400 Filmtab, Erymax, Ery-Tab, Eryc, Ranbaxy, Erypar, EryPed, Eryped 200, Eryped 400, Erythrocin Stearate Filmtab, Erythrocot, E-Base, Erythroped, Ilosone, MY-E, Pediamycin, Zineryt, Abboticin, Abboticin-ES, Erycin, PCE Dispertab, Stiemycine, Acnasol, and Tiloryth.
Veterinary uses
Erythromycin is also used in fishcare for the "broad spectrum treatment and control of bacterial disease". Body slime, mouth fungus, furunculosis, bacterial gill disease, and hemorrhagic septicaemia are all examples of bacterial diseases in fish that may be treated and controlled with this therapy. The usage of erythromycin in fishcare is mainly limited to therapies targeting gram-positive bacteria.
Eurostar
Eurostar is an international high-speed rail service in Western Europe, connecting Belgium, France, Germany, the Netherlands and the United Kingdom.
The service is operated by the Eurostar Group which was formed from the merger of Eurostar, which operated trains through the Channel Tunnel to the United Kingdom, and Thalys which operated in Western Europe. The operator is exploring future network expansions and aims to double passenger numbers by 2030.
History
Conception and planning
The history of the Eurostar brand can be traced to the choice in 1986 of a rail tunnel to provide a cross-channel link between Britain and France. A previous attempt to construct a tunnel between the two nations had begun in 1974, but was quickly aborted. Construction began afresh in 1988. Eurotunnel was created to manage and own the tunnel, which was finished in 1993, the official opening taking place on 6 May 1994.
In addition to the tunnel's shuttle trains carrying cars and lorries between Folkestone and Calais, the tunnel opened up the possibility of through passenger and freight train services between places further afield. British Rail and France's SNCF contracted with Eurotunnel to use half the tunnel's capacity for this purpose. In 1987, Britain, France and Belgium set up an International Project Group to specify a train to provide an international high-speed passenger service through the tunnel. France had been operating high-speed TGV services since 1981, and had begun construction of a new high-speed line between Paris and the Channel Tunnel, LGV Nord; French TGV technology was chosen as the basis for the new trains. An order for 30 trainsets, to be manufactured in France but with some British and Belgian components, was placed in December 1989. On 20 June 1993, the first Eurostar test train travelled through the tunnel to the UK. Various technical difficulties in running the new trains on British tracks were quickly overcome.
Launch of service
On 14 November 1994, Eurostar services began running from Waterloo International station in London, to Paris Nord, as well as Brussels-South railway station. The train service started with a limited Discovery service; the full daily service started from 28 May 1995.
In 1995, Eurostar was achieving an average end-to-end speed of roughly 170 km/h (105 mph) from London to Paris.
On 8 January 1996, Eurostar launched services from a second railway station in the UK when Ashford International was opened.
Also in 1996, Eurostar commenced its year-round service to Disneyland Paris, with the first train running on 29 June. The following year saw the introduction of services to the French Alps during the winter.
On 20 July 2002 a summer seasonal service to Avignon-Centre was launched. The service ran until 2014 after which it was replaced on 1 May 2015 by an expanded service calling at Avignon TGV and also serving Lyon and Marseille.
On 23 September 2003, passenger services began running on the first completed section of High Speed 1. Following a high-profile opening ceremony and a large advertising campaign, on 14 November 2007, Eurostar services in London transferred from Waterloo to the extended and extensively refurbished London St Pancras International.
Direct services from London to Amsterdam (returning to Brussels only) were launched on 4 April 2018. This service was made a return service on 26 October 2020.
Records achieved
The Channel Tunnel used by Eurostar services holds the record for having the longest underwater section of any tunnel in the world, and it is the third-longest railway tunnel (behind the Seikan Tunnel and the Gotthard Base Tunnel) in the world.
On 30 July 2003, a Eurostar train set a new British rail speed record of 334.7 km/h (208 mph) on the first section of High Speed 1, between the Channel Tunnel and Fawkham Junction in north Kent, two months before official public services began running.
On 16 May 2006, Eurostar set a new record for the longest non-stop high-speed journey, travelling from London to Cannes in 7 hours 25 minutes.
On 4 September 2007, a record-breaking train left Paris Nord at 10:44 (09:44 BST) and reached London St Pancras International in 2 hours 3 minutes 39 seconds, carrying journalists and railway workers. This record trip was also the first passenger-carrying arrival at the new London St Pancras International station. On 20 September 2007, Eurostar broke another record when it completed the journey from Brussels to London in 1 hour 43 minutes.
Regional Eurostar and Nightstar
The original proposals for Eurostar included direct services to Paris and Brussels from cities north of London: from Manchester Piccadilly via Birmingham New Street on the West Coast Main Line, and from Leeds and Edinburgh Waverley via Newcastle on the East Coast Main Line.
Seven 14-coach "North of London" Eurostar trains for these Regional Eurostar services were built, but these services never came to fruition. Predicted journey times of almost nine hours for Glasgow to Paris at the time of growth of low-cost air travel during the 1990s made the plans commercially unviable against the cheaper and quicker airlines. Other reasons that have been suggested for these services having never been run were both government policies and the disruptive privatisation of British Rail. Three of the Regional Eurostar units were leased by Great North Eastern Railway (GNER) to increase domestic services from London King's Cross to York and later Leeds. The lease expired in December 2005, and most of the North of London sets were transferred to SNCF for TGV services in northern France.
An international Nightstar sleeper train was also planned; this would have travelled the same routes as Regional Eurostar, plus the Great Western Main Line. These trains were also deemed commercially unviable, and the scheme was abandoned with no services ever operated. In 2000, the coaches were sold to Via Rail in Canada.
Merger with Thalys
On 27 September 2019, the heads of two of Eurostar's major shareholders, Guillaume Pepy of SNCF and the chair of SNCB, publicised that Eurostar was planning to come together with its sister company, the Franco-Belgian transnational rail service Thalys. The arrangement was to merge their operations under the working title of "Green Speed" and expand services beyond the core London–Paris–Brussels–Amsterdam network, creating a grand Western European high-speed rail service covering the UK, France, Belgium, the Netherlands and Germany and serving up to 30 million customers by 2030.
Thalys assisted Eurostar with onward connections between Amsterdam and Brussels, and to provide the Amsterdam to London service, in lieu of passport and customs checks at Amsterdam Centraal station.
In September 2020, the merger between Thalys and Eurostar International was confirmed, a year after Thalys announced its intention to merge with the cross-Channel provider subject to gaining European Commission clearance, to form "Green Speed". SNCF and SNCB already hold a controlling shareholding in Eurostar. In October 2021, it was announced that, following the completion of the merger, the Thalys brand would be discontinued, with all of the new operation's services to be operated under the Eurostar name but with each service's own liveries.
In October 2023, the Eurostar brand replaced Thalys, operating as one network and combining ticket sales in a single system.
Corporate structure
Eurostar was originally operated as a collaboration of three separate French, British and Belgian corporate entities. On 1 September 2010, Eurostar was incorporated as a single corporate entity, Eurostar International Limited (EIL), replacing the joint operation between Eurostar (UK) Ltd (EUKL), SNCF and SNCB/NMBS. EIL is ultimately owned by SNCF (55%), Caisse de dépôt et placement du Québec (CDPQ) (30%), Hermes Infrastructure (10%) and SNCB (5%).
Impact of COVID-19
By January 2021, Eurostar ridership went down to less than 1% of pre-pandemic levels. The combined financial troubles and lack of ridership caused by the COVID-19 pandemic led to Eurostar seeking governmental assistance from Britain's Treasury and Department for Transport, even though Britain sold its 40% Eurostar holding in 2015. Eurostar's appeal included granting the company access to Bank of England-backed loans and a temporary reduction in track access charges for use of the UK's high-speed rail line. Despite being majority-owned by the French state railway, SNCF, Eurostar was thought to have already exhausted options for governmental assistance from Paris, but both the French transport minister and the UK Department for Transport confirmed they were working on further plans to maintain the service.
By the end of 2022, Eurostar had debts of €964m, following French bailouts and commercial loans. Ridership returned to around 8 million in 2022; however, this figure was still 3 million below 2019 levels. Since the COVID-19 pandemic, Eurostar has not served the Ashford International or Ebbsfleet International stations in the UK, or Calais-Fréthun in France, and has withdrawn its Disneyland Paris and Avignon services, as part of plans to focus on the most profitable routes.
Mainline routes
LGV Nord (France)
The LGV Nord is a French high-speed rail line that connects Paris with HSL 1 at the Belgium–France border and with the Channel Tunnel. It opened in 1993. Of all French high-speed lines, LGV Nord sees the widest variety of high-speed rolling stock and is quite busy; a proposed cut-off bypassing Lille, which would reduce Eurostar journey times between Paris and London, is called LGV Picardie.
Channel Tunnel
The Channel Tunnel is the only rail connection between Great Britain and the European mainland. It joins LGV Nord in France with High Speed 1 in Britain. Tunnelling began in 1988, and the tunnel was officially opened by British sovereign, Elizabeth II, and the French President, François Mitterrand, on 6 May 1994.
It is owned by Getlink, which charges Eurostar a toll for its use. Within the Channel Tunnel, Eurostar trains operate at a reduced speed of 160 km/h (100 mph) for safety reasons.
Since the launch of Eurostar services, severe disruptions and cancellations have been caused by fires breaking out within the Channel Tunnel, such as in 1996 and 2008.
HSL 1 (Belgium)
HSL 1 connects Brussels with the French border. 88 km (55 mi) long (71 km (44 mi) of dedicated high-speed tracks, 17 km (11 mi) of modernised lines), it began service on 14 December 1997. The line has appreciably shortened rail journeys, with the trip from Paris to Brussels now taking 1 hour 22 minutes. In combination with the LGV Nord, it has also shortened international journeys to France and London.
HSL 2 (Belgium)
HSL 2 runs between Leuven and Ans. 95 km (59 mi) long (61 km (38 mi) dedicated high-speed tracks, 34 km (21 mi) modernised lines) it began service on 15 December 2002. Combined with HSL 3 to the German border, the combined eastward high speed lines have greatly accelerated journeys between Brussels, Paris and Germany.
HSL 3 (Belgium)
HSL 3 connects Liège to the German border. 56 km (35 mi) long (42 km (26 mi) dedicated high-speed tracks, 14 km (8.7 mi) modernised lines), it was completed on 15 December 2007, but trains did not start to use it until 14 June 2009. HSL 3 is used by international Eurostar and ICE trains only.
Cologne–Aachen high-speed line
The Cologne–Aachen high-speed line is not a newly built railway line, but a project to upgrade the existing railway line which was opened in 1841 by the Rhenish Railway Company. The line inside Germany has a length of about 70 kilometres (43 mi). The first 40 km (25 mi) from Cologne to Düren have been rebuilt. Since 2002 the line allows for speeds up to 250 km/h (155 mph). Separate tracks have been built parallel to the high-speed tracks for local S-Bahn traffic. The remaining line from Düren to Aachen allows speeds up to 160 km/h (100 mph) with some slower sections.
High Speed 1 (United Kingdom)
High Speed 1, formerly known as the Channel Tunnel Rail Link (CTRL), is a British high-speed rail line that connects London with the Channel Tunnel. It opened in two stages. The first section, between the tunnel and north Kent, opened in September 2003, cutting journey times by 21 minutes. On 14 November 2007, commercial services began over the whole of High Speed 1, reducing journey times by a further 20 minutes. The line's London terminal is London St Pancras International, which was redeveloped for the project.
HSL-Zuid (Netherlands)
The HSL-Zuid is a Dutch high-speed railway line that connects Amsterdam with the HSL 4 at the Belgium–Netherlands border. It opened on 7 September 2009.
Services
Frequency
Eurostar offers up to 15 weekday London–Paris services (19 on Fridays), including nine non-stop (13 on Fridays). There are also nine (ten on Fridays) London–Brussels services, of which two run non-stop (continuing to Amsterdam) and a further two call at Lille only. Four services daily operate to Amsterdam via Brussels and Rotterdam, some calling at Lille. There were also seasonal services: in the winter, "Snow trains" aimed at skiers ran weekly to Bourg-Saint-Maurice, Aime-la-Plagne and Moûtiers, arriving in the Alps in the evening and leaving again the same evening to arrive in London the following morning. This service was suspended during the COVID-19 pandemic. It resumed for the 2023/24 ski season, but with no through train: passengers instead change trains at Lille-Europe.
In February 2018, Eurostar announced the start of its long-planned service from London to Amsterdam, with an initial two trains per day from April of that year running between London St Pancras International and Amsterdam Centraal. This launched as a one-way service: the return trains carried passengers to Rotterdam and Brussels Midi/Zuid, made a 28-minute stop (not deemed long enough to process UK-bound passengers), and then carried a different set of passengers from Brussels to London. Initially, passengers travelling back took a Thalys service to Brussels Midi/Zuid, where they could join the Eurostar. This was due to the lack of facilities for juxtaposed controls by the UK Border Force at Amsterdam Centraal and Rotterdam Centraal. On 4 February 2020, the Dutch Minister of Infrastructure and Water Management, Cora van Nieuwenhuizen, and the UK Transport Secretary, Grant Shapps, announced that juxtaposed controls would be established at Amsterdam Centraal and Rotterdam Centraal. The direct train from Amsterdam was originally due to launch on 30 April 2020, and from Rotterdam on 18 May 2020, although both were later postponed to 26 October 2020 due to the COVID-19 pandemic.
Since 14 November 2007, all Eurostar trains have been routed via High Speed 1 to or from the redeveloped London terminus at London St Pancras International, which at a cost of £800 million was extensively rebuilt and extended to cope with long Eurostar trains.
It had been intended to retain some Eurostar services at Waterloo International, but this was ruled out on cost grounds.
Completion of High Speed 1 increased the potential number of trains serving London. Separation of Eurostar from British domestic services through Kent meant that timetabling was no longer affected by peak-hour restrictions.
Fares
Eurostar's fares were significantly higher in its early years; the cheapest fare in 1994 was £99 return.
In 2002, Eurostar was planning cheaper fares, an example of which was an offer of £50 day returns from London to Paris or Brussels.
By March 2003, the cheapest fare from the UK was £59 return, available all year round. In June 2009 it was announced that one-way fares would be available from £31. Competition between Eurostar and airline services was a large factor in ticket prices being reduced from their initial levels.
Business Premier fares also slightly undercut air fares on similar routes, targeted at regular business travellers.
In 2009, Eurostar greatly increased its budget ticket availability to help maintain and grow its dominant market share.
The Eurostar ticketing system is very complex, being distributed through no fewer than 48 individual sales systems.
Eurostar is a member of the Amadeus CRS distribution system, making its tickets available alongside those of airlines worldwide.
Eurostar has two sub-classes of first class, Standard Premier and Business Premier; benefits include faster check-in and meals served at-seat, as well as improved furnishings and carriage interiors.
This rebranding was part of Eurostar's marketing drive to attract more business professionals. Increasingly, business travellers in groups have been chartering private carriages rather than booking individual seats on the train.
Service connections
Without Regional Eurostar services operating the North of London trainsets across the rest of Britain, Eurostar has instead developed its connections with other transport services, integrating with traditional UK rail operators' schedules and routes so that passengers can use Eurostar as a quick connection to further destinations on the continent.
All three main terminals used by the Eurostar service – London St Pancras International, Paris Nord, and Brussels-South – are served by domestic trains and by local urban transport networks such as the London Underground, Paris Metro, Brussels Metro and Amsterdam Metro.
Integration with other operators
Standard Eurostar tickets no longer include free onward connections to or from any other station in Belgium: this is now available for a flat-rate supplement, currently £5.50.
Through-tickets
Eurostar offers through-tickets to specific destinations by train; that is, a single contract for multi-leg journeys with certain passenger rights and protections.
Eurostar has announced several partnerships with other rail services, most notably Thalys connections at Lille and Brussels for passengers travelling beyond current Eurostar routes towards the Netherlands and Germany.
In 2002, Eurostar initiated the Eurostar-Plus program, offering through-tickets for onward journeys from Lille and Paris to dozens of destinations in France.
Through-tickets are also available from 68 British towns and cities to destinations in France and Belgium.
In May 2009, Eurostar announced that a formal connection to Switzerland had been established in a partnership with Lyria, under which Lyria operates TGV services from Lille to the Swiss Alps for Eurostar connections.
In May 2019, Eurostar announced the end of its agreement with Deutsche Bahn that had allowed passengers to travel on a through-ticket by train from the UK via Brussels to Germany and on to Austria and Switzerland. Under the agreement, passengers could travel on a single through-ticket with passenger rights in case of disruption to one leg. Through-tickets ceased to be sold on 9 November 2019.
Railteam
Eurostar is a member of Railteam, a marketing alliance formed in July 2007 of seven European high-speed rail operators.
The alliance plans to allow tickets to be booked from one end of Europe to the other on a single website. In June 2009, London and Continental Railways, together with the Eurostar UK operations it owned, was fully nationalised by the UK government.
Air-rail alliances
In September 2024, Eurostar signed a memorandum of understanding to join SkyTeam as its first non-airline partner. This cooperation will enable integrated intermodal transport (air-rail) in the UK, France and the Netherlands.
Controls and security
Because the UK is not part of the European Union or the Schengen Area, and because the Netherlands, Belgium and France are not part of the Common Travel Area, all cross-channel Eurostar passengers must go through border controls. Both the British Government and the Schengen governments concerned (Belgium, Netherlands and France) have legal obligations to check the travel documents of those entering and leaving their respective countries.
To allow passengers to walk off the train without arrival checks in most cases, juxtaposed controls ordinarily take place at the embarkation station.
To comply with UK law, there are full security checks similar to those at airports, consisting of bag X-rays and walk-through metal detectors. The recommended check-in time is 90–120 minutes, except for business class where it is 45–60 minutes; these are much longer than previously because of extra checks in place due to Brexit and the COVID-19 pandemic.
Eurostar passengers travelling within the Schengen area on trains towards London bypass border checks, and enter the pre-allocated cars at the rear of the train, which are reserved for these passengers. This area is then searched at Lille and all passengers removed. This arrangement was set up after numerous people entered the UK without prior authorisation, by buying a ticket from Brussels to Lille or Calais but remaining on the train until London – an issue exacerbated by Belgian police threatening to arrest UK Border Agency staff at Brussels-South if they tried to prevent passengers whom they suspected of attempting to exploit this loophole from boarding Eurostar trains. Travel from Calais or Lille towards Brussels and the Netherlands has no border or security control. On 7 July 2020, a modified agreement was signed in Brussels that includes The Netherlands in the previous agreement. This allows for juxtaposed controls in Amsterdam and Rotterdam like those in Brussels and Paris.
When the tripartite agreements were signed, the Belgian Government said that it had serious questions about the compatibility of this agreement with the Schengen Convention and the principle of free movement of people enshrined in various European treaties.
On 30 June 2009, Eurostar raised concerns at the UK House of Commons Home Affairs Select Committee that it was illegal under French law to collect the information required by the UK government under the e-Borders scheme, and the company would be unable to cooperate.
On the northbound Disneyland and ski trains, the security check and French passport check took place at the origin, while the UK passport check took place at the UK arrival stations. These were the only routes where passengers were not cleared by UK border officials before crossing the Channel.
On the northbound Marseille-London train, there was no facility for security or passport checks at the southern French stations, so passengers left the train at Lille-Europe, taking all their belongings with them, and underwent security and border checks there before rejoining the train which waited at the station for just over an hour.
On several occasions, people have tried to stow away illegally on board the train, sometimes in large groups, trying to enter the UK; border monitoring and security is therefore extremely tight.
Eurostar says its security measures are comprehensive and well funded.
Operational performance
Eurostar's punctuality has fluctuated from year to year, but usually remains over 90%; in the first quarter of 1999, 89% of services operated were on time, and in the second quarter it reached 92%. Eurostar's best punctuality record was 97.35%, between 16 and 22 August 2004. In 2006, it was 92.7%, and in 2007, 91.5% were on time. In the first quarter of 2009, 96% of Eurostar services were punctual, compared with rival air routes' 76%.
An advantage held by Eurostar is the convenience and speed of the service: with shorter check-in times than at most airports and hence quicker boarding and less queueing and high punctuality, it takes less time to travel between central London and central Paris by high-speed rail than by air. Eurostar now has a dominant share of the combined rail–air market on its routes to Paris and Brussels. In 2004, it had a 66% share of the London–Paris market, and a 59% share of the London–Brussels market. In 2007, it achieved record market shares of 71% for London–Paris and 65% for London–Brussels routes.
Eurostar's passenger numbers initially failed to meet predictions. In 1996, London and Continental Railways forecast that passenger numbers would reach 21.4 million annually by 2004, but only 7.3 million were achieved. Eighty-two million passengers used Waterloo International station from its opening in 1994 to its closure in 2007. 2008 was a record year for Eurostar, with a 10.3% rise in passenger numbers, attributed to the use of High Speed 1 and the move to London St Pancras International. Eurostar then saw an 11.5% fall in passenger numbers during the first three months of 2009, attributed to the 2008 Channel Tunnel fire and the 2009 recession.
As a result of the poor economic conditions, Eurostar received state aid in May 2009 to cancel out some of the accumulated debt from the High Speed 1 construction programme. Later that year, during snowy conditions in the run-up to Christmas, thousands of passengers were left stranded as several trains broke down and many more were cancelled. In an independent review commissioned by Eurostar, the company came in for serious criticism about its handling of the incident and lack of plans for such a scenario.
In 2006, the Department for Transport predicted that, by 2037, annual cross-channel passenger numbers would probably reach 16 million, considerably less optimistic than London and Continental Railways's original 1996 forecast. In 2007 Eurostar set a target of carrying 10 million passengers by 2010.
The company cited several factors to support this objective, such as improved journey times, punctuality and station facilities. It stated that passengers in general are becoming increasingly aware of the environmental effects of air travel, that Eurostar services emit much less carbon dioxide, and that its remaining carbon emissions are now offset, making its services carbon neutral. Further expansion of the high-speed rail network in Europe, such as the HSL-Zuid line between Belgium and the Netherlands, continues to bring more destinations within rail-competitive range, giving Eurostar the possibility of opening up new services in future.
The following chart presents the estimated number of passengers annually transported by the Eurostar service since 1995:
In 2019, cumulative ridership since 1994 surpassed 200 million. Eleven million passengers travelled on its international services during 2018, the highest ever and a 7% increase on the 10.3 million carried in 2017.
Awards and accolades
Eurostar has been hailed as having set new standards in international rail travel and has won praise several times over for its high standards, although it had previously struggled with its reputation and brand image.
Eurostar won the Train Operator of the Year award in the HSBC Rail Awards for 2005. In 2006, Eurostar's Environment Group was set up, with the aim of making changes in the Eurostar services' daily running to decrease negative environmental impact. The organisation initially set itself a target of reducing carbon emissions per passenger journey by 25% by 2012. Drivers were trained in techniques to achieve maximum energy efficiency, and lighting was minimised; the provider of the bulk of the energy for the Channel Tunnel was switched to nuclear power stations in France. Eurostar later raised its target to a 35 percent reduction in emissions per passenger journey by 2012, putting itself ahead of other railway companies in this field and thereby winning the 2007 Network Rail Efficiency Award. At the grand opening ceremony of London St Pancras International, one of the Eurostar trains was given the name 'Tread Lightly', said to symbolise the trains' smaller environmental impact compared with planes. By 2008, Eurostar's environmental credentials had become highly developed and promoted.
Since then, Eurostar has received multiple awards. It was declared the Best Train Company in the joint Guardian/Observer Travel Awards 2008 and earned a spot on the Sunday Times' Best Green Companies List (2009). Other awards include: ICARUS’ Environmental Award for Best Rail Provider (2009), Guardian & Observer Travel Award for Best Train Company (2009), Travel Weekly's Golden Globes Award for Best Rail Operator (2010), World Travel Market's Responsible Tourism Award for Best Low Carbon Initiative (2011), TNT Magazine's Gold Backpack Award for Favourite Travel Transport (2012), World Travel Awards Europe's Leading Passenger Rail Operator (2011), National Rail Awards Train of the Year (2017), PETA's Travel Award for Best Travel Experience (2019), Mobile Industry Awards' Distributor of the Year (2020).
Environmental initiatives
In 2007, Eurostar said it would become the world's first carbon-neutral train service through the launch of "Tread Lightly", an environmental programme with the goal of reducing the service's carbon-dioxide emissions by 25% by 2012. The programme included reducing power consumption on its rolling stock; sourcing more electricity from lower-emission generators; adding new controls on lighting, heating, and air conditioning; reducing paper usage via electronic tickets; recycling water and employee uniforms; and sourcing all food on board from Britain, France, or Belgium. Eurostar also funded three renewable energy projects in developing regions: a windfarm in Tamil Nadu, India; a micro-hydropower project in China; and a plan to improve the fuel consumption of three-wheeler taxis in Indonesia.
In 2019, Eurostar removed all single-use plastics from its trains between London and Paris. Now the trains serve only wooden cutlery, recyclable cans of water, glass wine bottles, paper-based coffee cups, and eco-friendly food packaging. Eurostar partnered with the Woodland Trust, ReforestAction, and Trees for All in 2020, with the goal of planting 20,000 trees each year in woodlands along its routes across the UK, Belgium, and the Netherlands. Since Tread Lightly launched, Eurostar has reduced its carbon footprint by over 40% and now emits up to 90% less greenhouse gas emissions than the equivalent flight.
In 2023, however, Eurostar's cycle-booking arrangements were described as “farcical”.
Domestic journeys on London services
Eurostar is not permitted to carry passengers on London services for journeys within a single country, so passengers cannot travel (for example) from Lille to Marne-la-Vallée–Chessy, London to Ashford, or Rotterdam to Amsterdam on a London service. Lille to Brussels is the only international intra-Schengen journey that Eurostar offers for sale on London services.
Fleet
Fleet details
Current fleet
Eurostar e300
Built between 1992 and 1996, Eurostar's original fleet consisted of 38 EMU trains, designated Class 373 in the United Kingdom and TGV TMST in France. The units have also been branded as the Eurostar e300 by Eurostar since 2015. There are two variants:
31 "Inter-Capital" sets consisting of two power cars and eighteen passenger carriages. These trains are long and can carry 750 passengers: 206 in first class, 544 in standard class.
Seven shorter "North of London" sets with two power cars and fourteen passenger carriages, designed to operate the aborted Regional Eurostar services. These sets have a capacity of 558 seats: 114 in first class and 444 in standard class.
Each train has a unique four-digit number starting with "3" (3xxx). This designates the train as a Mark 3 TGV (Mark 1 being SNCF TGV Sud-Est; Mark 2 being SNCF TGV Atlantique). The second digit denotes the country of ownership, as decoded in the sketch after this list:
30xx UK
31xx Belgium
32xx France
33xx Regional Eurostar
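For illustration only, the numbering scheme above can be decoded mechanically. The following Python sketch is a hypothetical helper (not part of any Eurostar or SNCF system); the function name and messages are invented for this example:

```python
# Hypothetical decoder for the Class 373 / TGV TMST set-numbering scheme
# described above. Illustrative only; not an official Eurostar tool.
OWNERS = {"0": "UK", "1": "Belgium", "2": "France", "3": "Regional Eurostar"}

def decode_set_number(number: str) -> str:
    """Describe a four-digit Eurostar e300 set number of the form 3xxx."""
    if len(number) != 4 or not number.isdigit() or number[0] != "3":
        raise ValueError("expected a four-digit number starting with 3")
    owner = OWNERS.get(number[1])
    if owner is None:
        raise ValueError(f"unknown ownership digit: {number[1]}")
    return f"Mark 3 TGV (TMST), ownership: {owner}, unit {number[2:]}"

print(decode_set_number("3101"))  # Mark 3 TGV (TMST), ownership: Belgium, unit 01
```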
The trains are essentially modified TGV sets and can operate at up to 300 km/h on high-speed lines, and at a lower speed in the Channel Tunnel. It is possible to exceed the 300 km/h speed limit, but only with special permission from the safety authorities in the respective country.
Speed limits in the Channel Tunnel are dictated by air resistance, energy (heat) dissipation and the need to share the tunnel with other, slower trains. The trains were designed with Channel Tunnel safety in mind, and consist of two independent "half-sets", each with its own power car. In the event of a serious fire on board while travelling through the tunnel, passengers would be transferred into the undamaged half of the train, which would then be detached and driven out of the tunnel to safety.
If the undamaged part were the rear half of the train, it would be driven by the chef de train (conductor), who is a fully authorised driver and occupies the rear driving cab while the train travels through the tunnel for this purpose.
As the Class 374 units have entered service the Class 373 fleet has gradually been reduced. Eleven remain in regular service with 17 scrapped and ten in storage.
Fleet updates
In 2004–2005 the "Inter-Capital" sets still in daily use for international services were refurbished with a new interior designed by Philippe Starck.
The original grey-yellow scheme in Standard class and grey-red of First/Premium First were replaced with a grey-brown look in Standard and grey-burnt-orange in First class. Power points were added to seats in First class and to coaches 5 and 14 in Standard class. Premium First class was renamed Business Premier.
In 2008, Eurostar announced that it would be carrying out a mid-life refurbishment of its Class 373 trains to allow the fleet to remain in service beyond 2020.
This was to include the 28 units making up the Eurostar fleet, but not the three Class 373/1 units used by SNCF or the seven Class 373/2 "North of London" sets.
As part of the refurbishment, the Italian company Pininfarina was contracted to redesign the interiors, and The Yard Creative was selected to design the new buffet cars.
On 11 May 2009 Eurostar revealed the new look for its first-class compartments.
The first refurbished train was due in service in 2012, and Eurostar planned to complete the entire process by 2014. On 13 November 2014 Eurostar announced the first refurbished trains would not re-enter the fleet until the 3rd or 4th quarter of 2015 due to delays at the completion centre. The last refurbished e300 eventually re-entered service in April 2019.
Eurostar e320
In addition to the announced mid-life update of the existing Class 373 fleet, Eurostar in 2009 began looking to purchase eight new trainsets. Any new trains would need to meet the same safety rules governing passage through the Channel Tunnel as the existing Class 373 fleet. The replacement of the Class 373 trains was decided jointly by the French Transport Ministry and the UK Department for Transport. The new trains will be equipped to use the new ERTMS in-cab signalling system, due to be fitted to High Speed 1 around 2040.
On 7 October 2010, it was reported that Eurostar had selected Siemens as preferred bidder to supply ten Siemens Velaro trainsets at a cost of €600 million. These would be sixteen-car, self-propelled trainsets built to meet Channel Tunnel requirements. The e320 trainsets have a top speed of 320 km/h and 902 seats, compared with the e300 fleet's top speed of 300 km/h and seating capacity of 750. The e320 trainsets would also be quadri-current, adding the ability to run on the system used in Germany and allowing for an expanded route network, including services between London and Cologne.
The selection of Siemens would see it break into the French high-speed market for the first time, as all French high-speed operators use TGV derivatives produced by Alstom. Alstom attempted legal action to prevent Eurostar from acquiring the German-built trains, claiming that the Siemens sets would breach Channel Tunnel safety rules, but the case was thrown out by the High Court in London. On 4 November 2010, Alstom lodged a complaint with the European Commission over the tendering process. Alstom then started further legal action claiming that the Eurostar tender process was "ineffective"; the High Court rejected the second suit in July 2011. In April 2012, Alstom said it would call off court actions against Siemens.
On 13 November 2014, Eurostar announced the purchase of an additional seven e320s for delivery in the second half of 2016. At the same time, Eurostar announced the first five e320s from the original order of ten would be available by December 2015, with the remaining five entering service by May 2016. Of the five sets ready by December 2015, three of them were planned to be used on London-Paris and London-Brussels routes.
Future fleet
In May 2024, Eurostar announced its intention to order up to 50 new trains.
Past fleet
Accidents and incidents
A number of technical incidents have affected Eurostar services over the years, but there has only been one major accident involving a service operated by Eurostar, a derailment in June 2000. Other incidents in the Channel Tunnel – such as the 1996 and 2008 Channel Tunnel fires – have affected Eurostar services but were not directly related to Eurostar's operations. However, the breakdowns in the tunnel, which resulted in cessation of service and inconvenience to thousands of passengers, in the run-up to Christmas 2009, proved a public-relations disaster.
2000
On 5 June 2000, a Eurostar train travelling from Paris to London derailed at speed on the LGV Nord high-speed line. Fourteen people were treated for light injuries or shock; there were no fatalities or major injuries. The articulated design of the trainset was credited with maintaining stability during the incident, and the entire train stayed upright. The derailment was caused by a traction link on the second bogie of the front power car coming loose, allowing components of the transmission system on that bogie to strike the track.
2009
During the December 2009 European snowfall, five Eurostar trains broke down inside the Channel Tunnel after leaving France, and one in Kent, on 18 December. Although the trains had been winterised, the systems had not coped with the conditions. Over 2,000 passengers were stuck inside failed trains inside the tunnel, and over 75,000 had their services disrupted. All Eurostar services were cancelled from Saturday 19 December to Monday 21 December 2009. An independent review, published on 12 February 2010, was critical of the contingency plans in place for assisting passengers stranded by the delays, calling them "insufficient".
Future developments
Eurostar expansion
Eurostar and Thalys merged in 2023, with the intention to double combined passenger numbers from 14.8 million to 30 million.
In an interview with Eurostar's former Chief Executive Nicolas Petrovic in the Financial Times in May 2012, an intention for cross-Channel Eurostar to serve ten new destinations was expressed, including Amsterdam, Frankfurt, Cologne, Lyon, Marseille and Geneva, along with a likely second hub to be created in Brussels. London-Amsterdam services launched in 2018.
In March 2016, in an interview with Bloomberg, Eurostar's Chief Executive expressed interest in operating a direct train service between London and Bordeaux, but not before 2019. Journey time was said to be around 4.5 hours using the new LGV Sud Europe Atlantique.
Operational difficulties with UK-Schengen trains
The e320 trains allow Eurostar the possibility of London to Germany services in the future, but implementing such new services is complex. The UK is neither part of the Schengen Agreement, which allows unrestricted movement across borders of member countries, nor a member of the EU. This means that travellers between the UK and EU must pass through full border identification, visa and customs controls for their departure and arrival countries, while travellers between stations within the Schengen area do not. The logistics of providing space and time for these controls while conforming to the requirements of free travel within the Schengen area makes implementing new services operationally complex. The "Lille loophole" solution requires Eurostar customers travelling from Brussels to Lille to be segregated and guarded from other passengers for their journey. Similarly, when the Amsterdam to London route began, it was direct in only one direction: passengers had to disembark in Brussels to go through the juxtaposed controls. The direct connection was subject to talks between the UK and Dutch governments, and juxtaposed controls buildings were constructed on platforms at Amsterdam Centraal and Rotterdam Centraal, opening on 26 October 2020. These were both closed on 15 June 2024 and are planned to remain closed until 9 February 2025 due to major track works at Amsterdam Centraal. Eurostar stated direct Rotterdam to London services could not be maintained due to the much smaller customs facility at Rotterdam, leaving around 760 of the 902 seats on each train empty.
The difficulties that Eurostar faces in expanding its services between the UK and the EU would also be faced by any potential competitors. Trains must use platforms that are physically isolated, a constraint which other intra-EU operators do not face. In addition, the British authorities are required to make security and passport checks on passengers before they board the train, which might deter domestic passengers. Compounding the difficulties are the Channel Tunnel safety rules, the major ones being the "half-train rule" and the "length rule". The "half-train rule" stipulated that passenger trains had to be able to split in an emergency. Class 373 trains were designed as two half-sets which, when coupled, form a complete train, enabling them to be split easily in the event of an emergency in the tunnel, with the unaffected set able to be driven out. The half-train rule was finally abolished in May 2010. However, the "length rule", which requires passenger trains to be long enough to span the distance between the safety doors in the tunnel and to have a through corridor, was retained, preventing potential operators from applying to run services with existing fleets, as the majority of both TGV and ICE trains are too short.
Competition
Following the liberalisation of international rail travel by European Union directives in 2010, various operators have announced proposals for competition with Eurostar.
Deutsche Bahn (DB) intended to run services from London to Frankfurt and Amsterdam (two of the biggest air travel markets in Europe), with trains 'splitting & joining' in Brussels. In July 2010, DB announced that it intended to make a test run with a high-speed ICE-3MF train through the Channel Tunnel in October 2010 in preparation for possible future operations. The trial ran on 19 October 2010 with a Class 406 ICE train specially liveried with a British "Union flag" decal. The train was then put on display for the press at London St Pancras International. However, this was not the class of train planned for the proposed service; DB instead proposed to use Class 407 ICE units, specially adapted to meet the stricter Channel Tunnel safety standards.
DB scrapped the plan, mainly due to advance passport check requirements. DB had hoped that immigration checks could be done on board, but British authorities required immigration and security checks to be done at Lille-Europe station, taking at least 30 minutes.
In 2021, Renfe, the national operator of Spain, announced it was proposing competing London to Paris services. In 2022, Getlink, the owner of the Channel Tunnel, had reportedly considered purchasing trains suitable for competing services and leasing them to rival operators, while in 2023 Mobico Group, the owner of National Express, was also reported to be considering cross-Channel services named 'Evolyn'.
Long term possibilities
Stratford International station
Eurostar trains do not currently call at Stratford International, which was intended to be a London stop for the regional Eurostars when the station was constructed. This was to be reviewed following the 2012 Olympics; however, in 2013, Eurostar claimed that its 'business would be hit' by stopping trains there.
Regional Eurostar
Although the original plan for Regional Eurostar services to destinations north of London was abandoned, the significantly improved journey times available since the opening of High Speed 1 (which is physically connected at London St Pancras International to both the East Coast Main Line and, via the North London Line, the West Coast Main Line), together with the increased maximum speeds on the West Coast Main Line since the 2000s, may make potential Regional Eurostar services more commercially viable. This would be even more likely if proposals for a new high-speed line from London to the north of Britain are adopted.
Simon Montague, Eurostar's Director of Communications, commented that "...International services to the regions are only likely once High Speed 2 is built." However, as of 2014, the plans for High Speed 2 did not allow for a direct rail link between that new line and High Speed 1, meaning passengers would still be required to change at London Euston and make their own way to London St Pancras.
Key pieces of infrastructure still belong to LCR via its subsidiary London & Continental Stations and Property, such as the Manchester International Depot, and Eurostar (UK) still owns several track access rights and the rights to paths on both the East Coast Main Line and the West Coast Main Line.
While no announcement has been made of plans to start Regional Eurostar services, it remains a possibility for the future. In the meantime, the closest equivalent to Regional Eurostar services are same-station connections with East Midlands Railway and Thameslink, changing at London St Pancras. The construction of a new concourse at the adjacent London King's Cross improved interchange with London St Pancras and provided London North Eastern Railway, Great Northern, Hull Trains and Grand Central services with easier connections to Eurostar.
LGV Picardie
LGV Picardie is a proposed high-speed line between Paris and Calais via Amiens. By cutting off the corner of the LGV Nord at Lille, it would enable Eurostar trains to save 20 minutes on the journey between Paris and Calais, bringing the London–Paris journey time under 2 hours. In 2008 the French Government announced its future investment plans for new LGVs to be built up to 2020; LGV Picardie was not included but was listed as planned in the longer term.
| Technology | High-speed rail | null |
10100 | https://en.wikipedia.org/wiki/Equinox | Equinox | A solar equinox is a moment in time when the Sun crosses the Earth's equator, which is to say, appears directly above the equator, rather than north or south of the equator. On the day of the equinox, the Sun appears to rise "due east" and set "due west". This occurs twice each year, around 20 March and 23 September.
More precisely, an equinox is traditionally defined as the time when the plane of Earth's equator passes through the geometric center of the Sun's disk. Equivalently, this is the moment when Earth's rotation axis is directly perpendicular to the Sun-Earth line, tilting neither toward nor away from the Sun. In modern times, since the Moon (and to a lesser extent the planets) causes Earth's orbit to vary slightly from a perfect ellipse, the equinox is officially defined by the Sun's more regular ecliptic longitude rather than by its declination. The instants of the equinoxes are currently defined to be when the apparent geocentric longitude of the Sun is 0° and 180°.
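As a rough illustration of this definition, the instant of an equinox can be located numerically. The sketch below uses a standard low-precision approximation for the Sun's apparent ecliptic longitude (good to about 0.01°, i.e. a few minutes of time); the constants are the usual Astronomical Almanac values, and everything else (function names, the 2024 guess dates) is illustrative:

```python
import math
from datetime import datetime, timedelta, timezone

def sun_ecliptic_longitude(dt):
    """Approximate apparent ecliptic longitude of the Sun, in degrees.

    Low-precision Astronomical Almanac formula; n is days since J2000.0.
    """
    n = (dt - datetime(2000, 1, 1, 12, tzinfo=timezone.utc)).total_seconds() / 86400.0
    L = (280.460 + 0.9856474 * n) % 360.0                # mean longitude
    g = math.radians((357.528 + 0.9856003 * n) % 360.0)  # mean anomaly
    return (L + 1.915 * math.sin(g) + 0.020 * math.sin(2.0 * g)) % 360.0

def find_equinox(guess, target_deg):
    """Bisect for the instant when the Sun's longitude crosses 0 or 180 degrees."""
    def offset(dt):  # signed angular distance from the target crossing
        return (sun_ecliptic_longitude(dt) - target_deg + 180.0) % 360.0 - 180.0
    lo, hi = guess - timedelta(days=5), guess + timedelta(days=5)
    for _ in range(50):
        mid = lo + (hi - lo) / 2
        if offset(lo) * offset(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return lo

print(find_equinox(datetime(2024, 3, 20, tzinfo=timezone.utc), 0.0))    # March equinox
print(find_equinox(datetime(2024, 9, 22, tzinfo=timezone.utc), 180.0))  # September equinox
```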
The word is derived from the Latin aequinoctium, from aequus (equal) and nox (night). On the day of an equinox, daytime and nighttime are of approximately equal duration all over the planet. Contrary to popular belief, they are not exactly equal, because of the angular size of the Sun, atmospheric refraction, and the rapidly changing duration of the length of day that occurs at most latitudes around the equinoxes. Long before conceiving this equality, equatorial cultures noted the day when the Sun rises due east and sets due west, and indeed this happens on the day closest to the astronomically defined event. As a consequence, according to a properly constructed and aligned sundial, the daytime duration is 12 hours.
In the Northern Hemisphere, the March equinox is called the vernal or spring equinox while the September equinox is called the autumnal or fall equinox. In the Southern Hemisphere, the reverse is true. During the year, equinoxes alternate with solstices. Leap years and other factors cause the dates of both events to vary slightly.
Hemisphere-neutral names are northward equinox for the March equinox, indicating that at that moment the solar declination is crossing the celestial equator in a northward direction, and southward equinox for the September equinox, indicating that at that moment the solar declination is crossing the celestial equator in a southward direction.
Daytime is increasing fastest at the vernal equinox and decreasing fastest at the autumnal equinox.
Equinoxes on Earth
General
Systematically observing the sunrise, people discovered that it occurs between two extreme locations at the horizon and eventually noted the midpoint between the two. Later it was realized that this happens on a day when the duration of the day and the night are practically equal and the word "equinox" comes from Latin aequus, meaning "equal", and nox, meaning "night".
In the northern hemisphere, the vernal equinox (March) conventionally marks the beginning of spring in most cultures and is considered the start of the New Year in the Assyrian calendar, Hindu, and the Persian or Iranian calendars, while the autumnal equinox (September) marks the beginning of autumn. Ancient Greek calendars too had the beginning of the year either at the autumnal or vernal equinox and some at solstices. The Antikythera mechanism predicts the equinoxes and solstices.
The equinoxes are the only times when the solar terminator (the "edge" between night and day) is perpendicular to the equator. As a result, the northern and southern hemispheres are equally illuminated.
For the same reason, this is also the time when the Sun rises for an observer at one of Earth's rotational poles and sets at the other. For a brief period lasting approximately four days, both North and South Poles are in daylight. For example, in 2021 sunrise on the North Pole is 18 March 07:09 UTC, and sunset on the South Pole is 22 March 13:08 UTC. Also in 2021, sunrise on the South Pole is 20 September 16:08 UTC, and sunset on the North Pole is 24 September 22:30 UTC.
In other words, the equinoxes are the only times when the subsolar point is on the equator, meaning that the Sun is exactly overhead at a point on the equatorial line. The subsolar point crosses the equator moving northward at the March equinox and southward at the September equinox.
Date
When Julius Caesar established the Julian calendar in 45 BC, he set 25 March as the date of the spring equinox; this was already the starting day of the year in the Persian and Indian calendars. Because the Julian year is longer than the tropical year by about 11.3 minutes on average (or 1 day in 128 years), the calendar "drifted" with respect to the two equinoxes – so that in 300 AD the spring equinox occurred on about 21 March, and by the 1580s AD it had drifted backwards to 11 March.
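The quoted drift rate is simple arithmetic, checked in the short sketch below (the 1,625-year span from 45 BC to the 1580s is an illustrative round figure):

```python
# Check the Julian-calendar drift figures quoted above.
minutes_per_year = 11.3                    # Julian year minus tropical year
print(round(24 * 60 / minutes_per_year))   # ~127: about "1 day in 128 years"

# Accumulated drift from 45 BC to the 1580s, roughly 1,625 Julian years:
print(round(1625 * minutes_per_year / (24 * 60), 1))   # ~12.7 days of drift
```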
This drift induced Pope Gregory XIII to establish the modern Gregorian calendar. The Pope wanted to continue to conform with the edicts of the Council of Nicaea in 325 AD concerning the date of Easter, which means he wanted to move the vernal equinox to the date on which it fell at that time (21 March is the day allocated to it in the Easter table of the Julian calendar), and to maintain it at around that date in the future, which he achieved by reducing the number of leap years from 100 to 97 every 400 years. However, there remained a small residual variation in the date and time of the vernal equinox of about ±27 hours from its mean position, virtually all because the distribution of 24 hour centurial leap-days causes large jumps (see Gregorian calendar leap solstice).
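The reduction from 100 to 97 leap years per 400 follows directly from the Gregorian century rule, as a minimal sketch confirms:

```python
def is_gregorian_leap(year: int) -> bool:
    """Gregorian rule: divisible by 4, except century years not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Count the leap years in one full 400-year Gregorian cycle.
print(sum(is_gregorian_leap(y) for y in range(2000, 2400)))  # 97
```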
Modern dates
The dates of the equinoxes change progressively during the leap-year cycle, because the Gregorian calendar year is not commensurate with the period of the Earth's revolution about the Sun. It is only after a complete Gregorian leap-year cycle of 400 years that the seasons commence at approximately the same time. In the 21st century the earliest March equinox will be 19 March 2096, while the latest was 21 March 2003. The earliest September equinox will be 21 September 2096 while the latest was 23 September 2003 (Universal Time).
Names
Vernal equinox and autumnal equinox: these classical names are direct derivatives of Latin (ver = spring, and autumnus = autumn). These are the historically universal and still most widely used terms for the equinoxes, but are potentially confusing because in the southern hemisphere the vernal equinox does not occur in spring and the autumnal equinox does not occur in autumn. The equivalent common-language English terms spring equinox and autumn (or fall) equinox are even more ambiguous. It has become increasingly common for people to refer to the September equinox in the southern hemisphere as the vernal equinox.
March equinox and September equinox: names referring to the months of the year in which they occur, with no ambiguity as to which hemisphere is the context. They are still not universal, however, as not all cultures use a solar-based calendar where the equinoxes occur every year in the same month (as they do not in the Islamic calendar and Hebrew calendar, for example). Although the terms have become very common in the 21st century, they were sometimes used at least as long ago as the mid-20th century.
Northward equinox and southward equinox: names referring to the apparent direction of motion of the Sun. The northward equinox occurs in March when the Sun crosses the equator from south to north, and the southward equinox occurs in September when the Sun crosses the equator from north to south. These terms can be used unambiguously for other planets. They are rarely seen, although were first proposed over 100 years ago.
First point of Aries and first point of Libra: names referring to the astrological signs the Sun is entering. However, the precession of the equinoxes has shifted these points into the constellations Pisces and Virgo, respectively.
Length of equinoctial day and night
On the date of the equinox, the center of the Sun spends a roughly equal amount of time above and below the horizon at every location on the Earth, so night and day are about the same length. Sunrise and sunset can be defined in several ways, but a widespread definition is the time that the top limb of the Sun is level with the horizon. With this definition, the day is longer than the night at the equinoxes:
From the Earth, the Sun appears as a disc rather than a point of light, so when the centre of the Sun is below the horizon, its upper edge may be visible. Sunrise, which begins daytime, occurs when the top of the Sun's disk appears above the eastern horizon. At that instant, the disk's centre is still below the horizon.
The Earth's atmosphere refracts sunlight. As a result, an observer sees daylight before the top of the Sun's disk appears above the horizon.
In sunrise/sunset tables, the atmospheric refraction is assumed to be 34 arcminutes, and the assumed semidiameter (apparent radius) of the Sun is 16 arcminutes. (The apparent radius varies slightly depending on time of year, slightly larger at perihelion in January than aphelion in July, but the difference is comparatively small.) Their combination means that when the upper limb of the Sun is on the visible horizon, its centre is 50 arcminutes below the geometric horizon, which is the intersection with the celestial sphere of a horizontal plane through the eye of the observer.
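Putting these numbers together for an observer on the equator at an equinox reproduces the figure quoted in the next paragraph. The sketch below solves the standard sunrise hour-angle relation, cos H = (sin h0 − sin φ sin δ)/(cos φ cos δ), with latitude φ = 0, declination δ = 0, and the Sun's centre at altitude h0 = −50′:

```python
import math

# Day length at the equator on the equinox, with sunrise/sunset defined as the
# instant the Sun's centre sits 50 arcminutes (34' refraction + 16' semidiameter)
# below the geometric horizon.
h0 = math.radians(-50.0 / 60.0)                       # altitude of the Sun's centre
H = math.degrees(math.acos(math.sin(h0)))             # hour angle at sunrise, phi = dec = 0
day_hours = 2.0 * H / 15.0                            # the Sun moves 15 degrees per hour
night_hours = 24.0 - day_hours
print(round((day_hours - night_hours) * 60.0, 1))     # ~13.3 min: day longer than night
```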
These effects make the day about 14 minutes longer than the night at the equator and longer still towards the poles. The real equality of day and night only happens in places far enough from the equator to have a seasonal difference in day length of at least 7 minutes, actually occurring a few days towards the winter side of each equinox. One result of this is that, at latitudes below ±2.0 degrees, all the days of the year are longer than the nights.
The times of sunset and sunrise vary with the observer's location (longitude and latitude), so the dates when day and night are equal also depend upon the observer's location.
A third correction for the visual observation of a sunrise (or sunset) is the angle between the apparent horizon as seen by an observer and the geometric (or sensible) horizon. This is known as the dip of the horizon and varies from 3 arcminutes for a viewer standing on the sea shore to 160 arcminutes for a mountaineer on Everest. The effect of a larger dip on taller objects (reaching over 2½° of arc on Everest) accounts for the phenomenon of snow on a mountain peak turning gold in the sunlight long before the lower slopes are illuminated.
The date on which the day and night are exactly the same is known as an equilux; the neologism, believed to have been coined in the 1980s, achieved more widespread recognition in the 21st century. At the most precise measurements, a true equilux is rare, because the lengths of day and night change more rapidly than any other time of the year around the equinoxes. In the mid-latitudes, daylight increases or decreases by about three minutes per day at the equinoxes, and thus adjacent days and nights only reach within one minute of each other. The date of the closest approximation of the equilux varies slightly by latitude; in the mid-latitudes, it occurs a few days before the spring equinox and after the fall equinox in each respective hemisphere.
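The "about three minutes per day" figure can be reproduced from the geometric day-length formula. The sketch below evaluates it at latitude 45° (an illustrative mid-latitude choice), using the fact that the solar declination changes by roughly 23.44° × 2π/365.24 ≈ 0.4° per day near an equinox:

```python
import math

def day_length_hours(lat_deg, decl_deg):
    """Geometric day length (Sun's centre, no refraction) from the sunset hour angle."""
    lat, decl = math.radians(lat_deg), math.radians(decl_deg)
    cos_h = -math.tan(lat) * math.tan(decl)
    return 2.0 * math.degrees(math.acos(max(-1.0, min(1.0, cos_h)))) / 15.0

# Near an equinox the declination increases by ~0.4 deg per day, so compare
# day lengths one day apart at 45 degrees north:
today = day_length_hours(45.0, 0.0)
tomorrow = day_length_hours(45.0, 0.4)
print(round((tomorrow - today) * 60.0, 1))   # ~3.2 minutes more daylight per day
```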
Auroras
Mirror-image conjugate auroras have been observed during the equinoxes.
Cultural aspects
The equinoxes are sometimes regarded as the start of spring and autumn. A number of traditional harvest festivals are celebrated on the date of the equinoxes.
People in countries including Iran, Afghanistan, and Tajikistan celebrate Nowruz, which falls on the spring equinox of the northern hemisphere. This day marks the new year in the Solar Hijri calendar.
Religious architecture is often determined by the equinox; the Angkor Wat Equinox during which the sun rises in a perfect alignment over Angkor Wat in Cambodia is one such example.
Catholic churches, since the recommendations of Charles Borromeo, have often chosen the equinox as their reference point for the orientation of churches.
Effects on satellites
One effect of equinoctial periods is the temporary disruption of communications satellites. For all geostationary satellites, there are a few days around the equinox when the Sun goes directly behind the satellite relative to Earth (i.e. within the beam-width of the ground-station antenna) for a short period each day. The Sun's immense power and broad radiation spectrum overload the Earth station's reception circuits with noise and, depending on antenna size and other factors, temporarily disrupt or degrade the circuit. The duration of those effects varies but can range from a few minutes to an hour. (For a given frequency band, a larger antenna has a narrower beam-width and hence experiences shorter duration "Sun outage" windows.)
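The outage duration scales with the ground antenna's beamwidth. A back-of-envelope sketch using the common 70·λ/D rule of thumb for half-power beamwidth (the 12 GHz frequency and 2.4 m dish below are illustrative values, not from the source):

```python
# Rule-of-thumb half-power beamwidth of a parabolic dish: ~70 * lambda / D degrees.
c = 3.0e8               # speed of light, m/s
freq_hz = 12.0e9        # Ku-band downlink (illustrative)
dish_m = 2.4            # dish diameter (illustrative)
beamwidth_deg = 70.0 * (c / freq_hz) / dish_m

# Relative to the ground station, the Sun drifts past the satellite position at
# roughly the diurnal rate, 360 deg / 24 h = 0.25 deg per minute.
outage_min = beamwidth_deg / 0.25
print(f"{beamwidth_deg:.2f} deg beamwidth -> about {outage_min:.1f} min per day")
```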
Satellites in geostationary orbit also experience difficulties maintaining power around the equinoxes, because they then pass through Earth's shadow and must rely on battery power. At other times of the year the satellite passes north or south of the shadow, because Earth's axis is not perpendicular to the Sun–Earth line; during the equinox, since geostationary satellites are situated above the Equator, they spend the longest time of the year in Earth's shadow.
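The maximum shadow time follows from simple geometry. The sketch below treats Earth's shadow as a cylinder (a slight simplification, since the umbra actually narrows with distance):

```python
import math

# Maximum eclipse duration for a geostationary satellite near an equinox.
r_geo_km = 42164.0      # geostationary orbital radius
r_earth_km = 6378.0     # Earth's equatorial radius
half_angle_deg = math.degrees(math.asin(r_earth_km / r_geo_km))  # shadow half-width
sidereal_day_min = 23 * 60 + 56
print(round(2.0 * half_angle_deg / 360.0 * sidereal_day_min))    # ~69 minutes in shadow
```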
Equinoxes on other planets
Equinoxes are defined on any planet with a tilted rotational axis. A dramatic example is Saturn, where the equinox places its ring system edge-on facing the Sun. As a result, the rings are visible only as a thin line when seen from Earth. When seen from above – a view seen during an equinox for the first time from the Cassini space probe in 2009 – they receive very little sunshine; indeed, they receive more planetshine than light from the Sun. This phenomenon occurs once every 14.7 years on average, and can last a few weeks before and after the exact equinox. Saturn's most recent equinox was on 11 August 2009, and its next will take place on 6 May 2025.
Mars's most recent equinoxes were on 12 January 2024 (northern autumn), and on 26 December 2022 (northern spring).
| Physical sciences | Celestial sphere | null |
10103 | https://en.wikipedia.org/wiki/Electroweak%20interaction | Electroweak interaction | In particle physics, the electroweak interaction or electroweak force is the unified description of two of the fundamental interactions of nature: electromagnetism (electromagnetic interaction) and the weak interaction. Although these two forces appear very different at everyday low energies, the theory models them as two different aspects of the same force. Above the unification energy, on the order of 246 GeV, they would merge into a single force. Thus, if the temperature is high enough – approximately 10^15 K – then the electromagnetic force and weak force merge into a combined electroweak force.
During the quark epoch (shortly after the Big Bang), the electroweak force split into the electromagnetic and weak force. It is thought that the required temperature of 10^15 K has not been seen widely throughout the universe since before the quark epoch; the highest human-made temperature in thermal equilibrium has been reached at the Large Hadron Collider.
Sheldon Glashow, Abdus Salam, and Steven Weinberg were awarded the 1979 Nobel Prize in Physics for their contributions to the unification of the weak and electromagnetic interaction between elementary particles, known as the Weinberg–Salam theory. The existence of the electroweak interactions was experimentally established in two stages, the first being the discovery of neutral currents in neutrino scattering by the Gargamelle collaboration in 1973, and the second in 1983 by the UA1 and the UA2 collaborations that involved the discovery of the W and Z gauge bosons in proton–antiproton collisions at the converted Super Proton Synchrotron. In 1999, Gerardus 't Hooft and Martinus Veltman were awarded the Nobel prize for showing that the electroweak theory is renormalizable.
History
After the Wu experiment in 1956 discovered parity violation in the weak interaction, a search began for a way to relate the weak and electromagnetic interactions. Extending his doctoral advisor Julian Schwinger's work, Sheldon Glashow first experimented with introducing two different symmetries, one chiral and one achiral, and combined them such that their overall symmetry was unbroken. This did not yield a renormalizable theory, and its gauge symmetry had to be broken by hand as no spontaneous mechanism was known, but it predicted a new particle, the Z boson. This received little notice, as it matched no experimental finding.
In 1964, Salam and John Clive Ward had the same idea, but predicted a massless photon and three massive gauge bosons with a manually broken symmetry. Later around 1967, while investigating spontaneous symmetry breaking, Weinberg found a set of symmetries predicting a massless, neutral gauge boson. Initially rejecting such a particle as useless, he later realized his symmetries produced the electroweak force, and he proceeded to predict rough masses for the W and Z bosons. Significantly, he suggested this new theory was renormalizable. In 1971, Gerard 't Hooft proved that spontaneously broken gauge symmetries are renormalizable even with massive gauge bosons.
Formulation
Mathematically, electromagnetism is unified with the weak interactions as a Yang–Mills field with an SU(2) × U(1) gauge group, which describes the formal operations that can be applied to the electroweak gauge fields without changing the dynamics of the system. These fields are the weak isospin fields $W_1$, $W_2$, and $W_3$, and the weak hypercharge field $B$.
This invariance is known as electroweak symmetry.
The generators of SU(2) and U(1) are given the name weak isospin (labeled $T$) and weak hypercharge (labeled $Y_W$) respectively. These then give rise to the gauge bosons that mediate the electroweak interactions – the three W bosons of weak isospin ($W_1$, $W_2$, and $W_3$) and the $B$ boson of weak hypercharge, respectively – all of which are "initially" massless. These are not physical fields yet, before spontaneous symmetry breaking and the associated Higgs mechanism.
In the Standard Model, the observed physical particles, the and bosons, and the photon, are produced through the spontaneous symmetry breaking of the electroweak symmetry SU(2) × U(1) to U(1), effected by the Higgs mechanism (see also Higgs boson), an elaborate quantum-field-theoretic phenomenon that "spontaneously" alters the realization of the symmetry and rearranges degrees of freedom.
The electric charge arises as the particular (nontrivial) linear combination of $Y_W$ (weak hypercharge) and the $T_3$ component of weak isospin that does not couple to the Higgs boson. That is to say: the Higgs and the electromagnetic field have no effect on each other, at the level of the fundamental forces ("tree level"), while any other combination of the hypercharge and the weak isospin must interact with the Higgs. This causes an apparent separation between the weak force, which interacts with the Higgs, and electromagnetism, which does not. Mathematically, the electric charge is a specific combination of the hypercharge and $T_3$; in a common normalisation, $Q = T_3 + \tfrac{1}{2} Y_W$.
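As a concrete check of this combination (using the $Q = T_3 + \tfrac{1}{2}Y_W$ normalisation above; other conventions rescale $Y_W$), the familiar fermion charges follow from the doublet hypercharges $Y_W = -1$ (leptons) and $Y_W = +\tfrac13$ (quarks):

```latex
% Worked check of Q = T_3 + Y_W/2 for the left-handed doublets
\begin{align*}
  \nu_L\ (T_3 = +\tfrac12,\ Y_W = -1):&\quad Q = \tfrac12 - \tfrac12 = 0,\\
  e_L\ (T_3 = -\tfrac12,\ Y_W = -1):&\quad Q = -\tfrac12 - \tfrac12 = -1,\\
  u_L\ (T_3 = +\tfrac12,\ Y_W = +\tfrac13):&\quad Q = \tfrac12 + \tfrac16 = +\tfrac23,\\
  d_L\ (T_3 = -\tfrac12,\ Y_W = +\tfrac13):&\quad Q = -\tfrac12 + \tfrac16 = -\tfrac13.
\end{align*}
```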
U(1)$_{em}$ (the symmetry group of electromagnetism only) is defined to be the group generated by this special linear combination, and the symmetry described by the U(1)$_{em}$ group is unbroken, since it does not directly interact with the Higgs.
The above spontaneous symmetry breaking makes the $W_3$ and $B$ bosons coalesce into two different physical bosons with different masses – the $Z^0$ boson and the photon ($\gamma$):

$$\begin{pmatrix} \gamma \\ Z^0 \end{pmatrix} = \begin{pmatrix} \cos\theta_W & \sin\theta_W \\ -\sin\theta_W & \cos\theta_W \end{pmatrix} \begin{pmatrix} B \\ W_3 \end{pmatrix},$$

where $\theta_W$ is the weak mixing angle. The axes representing the particles have essentially just been rotated, in the $(W_3, B)$ plane, by the angle $\theta_W$. This also introduces a mismatch between the mass of the $Z^0$ and the mass of the $W^{\pm}$ particles (denoted as $M_Z$ and $M_W$, respectively):

$$M_Z = \frac{M_W}{\cos\theta_W}.$$
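A quick tree-level consistency check of this mass relation, using rounded experimental inputs ($\sin^2\theta_W \approx 0.23$ and $M_Z \approx 91.19$ GeV; these numbers are standard but are inputs here, not derived):

```latex
% Tree-level check of M_W = M_Z cos(theta_W)
\begin{align*}
  \cos\theta_W &= \sqrt{1 - \sin^2\theta_W} \approx \sqrt{1 - 0.23} \approx 0.877,\\
  M_W &\approx 91.19\ \text{GeV} \times 0.877 \approx 80.0\ \text{GeV},
\end{align*}
% close to the measured ~80.4 GeV; the remaining difference comes from
% higher-order (radiative) corrections beyond this tree-level relation.
```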
The $W_1$ and $W_2$ bosons, in turn, combine to produce the charged massive bosons $W^{\pm}$:

$$W^{\pm} = \frac{1}{\sqrt{2}}\left(W_1 \mp i W_2\right).$$
Lagrangian
Before electroweak symmetry breaking
The Lagrangian for the electroweak interactions is divided into four parts before electroweak symmetry breaking becomes manifest:

$$\mathcal{L}_{EW} = \mathcal{L}_g + \mathcal{L}_f + \mathcal{L}_h + \mathcal{L}_y.$$
The $\mathcal{L}_g$ term describes the interaction between the three W vector bosons and the B vector boson:

$$\mathcal{L}_g = -\tfrac{1}{4} W_{\mu\nu}^{a} W^{a\,\mu\nu} - \tfrac{1}{4} B_{\mu\nu} B^{\mu\nu},$$

where $W^{a\,\mu\nu}$ ($a = 1, 2, 3$) and $B^{\mu\nu}$ are the field strength tensors for the weak isospin and weak hypercharge gauge fields.
$\mathcal{L}_f$ is the kinetic term for the Standard Model fermions. The interaction of the gauge bosons and the fermions is through the gauge covariant derivative:

$$\mathcal{L}_f = \overline{Q}_i\, i\slashed{D}\, Q_i + \overline{u}_i\, i\slashed{D}\, u_i + \overline{d}_i\, i\slashed{D}\, d_i + \overline{L}_i\, i\slashed{D}\, L_i + \overline{e}_i\, i\slashed{D}\, e_i,$$

where the subscript $i$ runs over the three generations of fermions; $Q$, $u$, and $d$ are the left-handed doublet, right-handed singlet up, and right-handed singlet down quark fields; and $L$ and $e$ are the left-handed doublet and right-handed singlet electron fields.
The Feynman slash $\slashed{D}$ means the contraction of the 4-gradient with the Dirac matrices,

$$\slashed{D} = \gamma^\mu D_\mu,$$

and the covariant derivative (excluding the gluon gauge field for the strong interaction) is defined as

$$D_\mu = \partial_\mu - i\,\frac{g'}{2}\, Y_W\, B_\mu - i\,\frac{g}{2}\, T_a\, W_\mu^a.$$

Here $Y_W$ is the weak hypercharge and the $T_a$ are the components of the weak isospin.
The $\mathcal{L}_h$ term describes the Higgs field $h$ and its interactions with itself and the gauge bosons:

$$\mathcal{L}_h = |D_\mu h|^2 - \lambda \left( |h|^2 - \frac{v^2}{2} \right)^{2},$$

where $v$ is the vacuum expectation value.
The $\mathcal{L}_y$ term describes the Yukawa interaction with the fermions,

$$\mathcal{L}_y = -\, y_{u\,ij}\, \epsilon^{ab}\, h_b^\dagger\, \overline{Q}_{ia}\, u_j^c - y_{d\,ij}\, h\, \overline{Q}_i\, d_j^c - y_{e\,ij}\, h\, \overline{L}_i\, e_j^c + \mathrm{h.c.},$$

and generates their masses, manifest when the Higgs field acquires a nonzero vacuum expectation value, discussed next. The $y_{f\,ij}$, for $f = u, d, e$, are matrices of Yukawa couplings.
After electroweak symmetry breaking
The Lagrangian reorganizes itself as the Higgs field acquires a non-vanishing vacuum expectation value dictated by the potential of the previous section. As a result of this rewriting, the symmetry breaking becomes manifest. In the history of the universe, this is believed to have happened shortly after the hot big bang, when the universe was at a temperature of approximately 10^15 K (assuming the Standard Model of particle physics).
Due to its complexity, this Lagrangian is best described by breaking it up into several parts as follows.
The kinetic term $\mathcal{L}_K$ contains all the quadratic terms of the Lagrangian, which include the dynamic terms (the partial derivatives) and the mass terms (conspicuously absent from the Lagrangian before symmetry breaking):

$$\mathcal{L}_K = \sum_f \overline{f}\,\left(i\slashed{\partial} - m_f\right) f - \frac{1}{4} A_{\mu\nu} A^{\mu\nu} - \frac{1}{2} W^+_{\mu\nu} W^{-\,\mu\nu} + m_W^2\, W^+_\mu W^{-\,\mu} - \frac{1}{4} Z_{\mu\nu} Z^{\mu\nu} + \frac{1}{2} m_Z^2\, Z_\mu Z^\mu + \frac{1}{2}\,(\partial^\mu H)(\partial_\mu H) - \frac{1}{2} m_H^2\, H^2,$$

where the sum runs over all the fermions of the theory (quarks and leptons), and the fields $A_{\mu\nu}$, $Z_{\mu\nu}$, $W^-_{\mu\nu}$, and $W^+_{\mu\nu} \equiv \left(W^-_{\mu\nu}\right)^\dagger$ are given as

$$X_{\mu\nu}^{a} = \partial_\mu X_\nu^{a} - \partial_\nu X_\mu^{a} + g f^{abc} X_\mu^{b} X_\nu^{c},$$

with $X$ to be replaced by the relevant field ($A$, $Z$, $W^{\pm}$) and $f^{abc}$ by the structure constants of the appropriate gauge group (the last term vanishes for the abelian fields $A$ and $Z$).
The neutral current and charged current components of the Lagrangian contain the interactions between the fermions and gauge bosons,
where e = g sin θ_W = g' cos θ_W. The electromagnetic current is
where Q^f are the fermions' electric charges.
The neutral weak current is
where T_3^f is the fermions' weak isospin.
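For reference, the two neutral currents take the standard form (a sketch, not the article's own equation; f runs over the fermion species and the subscript L denotes the left-handed projection):

```latex
j_\mu^{\mathrm{em}} = \sum_f Q^f\,\bar{f}\,\gamma_\mu f,
\qquad
j_\mu^{3} = \sum_f T_3^f\,\bar{f}_L\,\gamma_\mu f_L
```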
The charged current part of the Lagrangian is given by
where ν_R is the right-handed singlet neutrino field, and the CKM matrix determines the mixing between mass and weak eigenstates of the quarks.
The Higgs self-interaction part contains the Higgs three-point and four-point self-interaction terms,
The Higgs-gauge part contains the Higgs interactions with gauge vector bosons,
The triple-gauge part contains the gauge three-point self-interactions,
The quartic-gauge part contains the gauge four-point self-interactions,
The Yukawa part contains the Yukawa interactions between the fermions and the Higgs field,
| Physical sciences | Particle physics: General | Physics |
10106 | https://en.wikipedia.org/wiki/Earthquake | Earthquake | An earthquake (also called a quake, tremor, or temblor) is the shaking of the Earth's surface resulting from a sudden release of energy in the lithosphere that creates seismic waves. Earthquakes can range in intensity, from those so weak they cannot be felt, to those violent enough to propel objects and people into the air, damage critical infrastructure, and wreak destruction across entire cities. The seismic activity of an area is the frequency, type, and size of earthquakes experienced over a particular time. The seismicity at a particular location in the Earth is the average rate of seismic energy release per unit volume.
In its most general sense, the word earthquake is used to describe any seismic event that generates seismic waves. Earthquakes can occur naturally or be induced by human activities, such as mining, fracking, and nuclear tests. The initial point of rupture is called the hypocenter or focus, while the ground level directly above it is the epicenter. Earthquakes are primarily caused by geological faults, but also by volcanic activity, landslides, and other seismic events. The frequency, type, and size of earthquakes in an area define its seismic activity, reflecting the average rate of seismic energy release.
Significant historical earthquakes include the 1556 Shaanxi earthquake in China, with over 830,000 fatalities, and the 1960 Valdivia earthquake in Chile, the largest ever recorded at 9.5 magnitude. Earthquakes result in various effects, such as ground shaking and soil liquefaction, leading to significant damage and loss of life. When the epicenter of a large earthquake is located offshore, the seabed may be displaced sufficiently to cause a tsunami. Earthquakes can trigger landslides. Earthquakes' occurrence is influenced by tectonic movements along faults, including normal, reverse (thrust), and strike-slip faults, with energy release and rupture dynamics governed by the elastic-rebound theory.
Efforts to manage earthquake risks involve prediction, forecasting, and preparedness, including seismic retrofitting and earthquake engineering to design structures that withstand shaking. The cultural impact of earthquakes spans myths, religious beliefs, and modern media, reflecting their profound influence on human societies. Similar seismic phenomena, known as marsquakes and moonquakes, have been observed on other celestial bodies, indicating the universality of such events beyond Earth.
Terminology
An earthquake is the shaking of the surface of Earth resulting from a sudden release of energy in the lithosphere that creates seismic waves. Earthquakes may also be referred to as quakes, tremors, or temblors. The word tremor is also used for non-earthquake seismic rumbling.
In its most general sense, an earthquake is any seismic event—whether natural or caused by humans—that generates seismic waves. Earthquakes are caused mostly by the rupture of geological faults but also by other events such as volcanic activity, landslides, mine blasts, fracking and nuclear tests. An earthquake's point of initial rupture is called its hypocenter or focus. The epicenter is the point at ground level directly above the hypocenter.
The seismic activity of an area is the frequency, type, and size of earthquakes experienced over a particular time. The seismicity at a particular location in the Earth is the average rate of seismic energy release per unit volume.
Major examples
One of the most devastating earthquakes in recorded history was the 1556 Shaanxi earthquake, which occurred on 23 January 1556 in Shaanxi, China. More than 830,000 people died. Most houses in the area were yaodongs—dwellings carved out of loess hillsides—and many victims were killed when these structures collapsed. The 1976 Tangshan earthquake, which killed between 240,000 and 655,000 people, was the deadliest of the 20th century.
The 1960 Chilean earthquake is the largest earthquake that has been measured on a seismograph, reaching 9.5 magnitude on 22 May 1960. Its epicenter was near Cañete, Chile. The energy released was approximately twice that of the next most powerful earthquake, the Good Friday earthquake (27 March 1964), which was centered in Prince William Sound, Alaska. The ten largest recorded earthquakes have all been megathrust earthquakes; however, of these ten, only the 2004 Indian Ocean earthquake is simultaneously one of the deadliest earthquakes in history.
Earthquakes that caused the greatest loss of life, while powerful, were deadly because of their proximity to either heavily populated areas or the ocean, where earthquakes often create tsunamis that can devastate communities thousands of kilometers away. Regions most at risk for great loss of life include those where earthquakes are relatively rare but powerful, and poor regions with lax, unenforced, or nonexistent seismic building codes.
Occurrence
Tectonic earthquakes occur anywhere on the Earth where there is sufficient stored elastic strain energy to drive fracture propagation along a fault plane. The sides of a fault move past each other smoothly and aseismically only if there are no irregularities or asperities along the fault surface that increase the frictional resistance. Most fault surfaces do have such asperities, which leads to a form of stick-slip behavior. Once the fault has locked, continued relative motion between the plates leads to increasing stress and, therefore, stored strain energy in the volume around the fault surface. This continues until the stress has risen sufficiently to break through the asperity, suddenly allowing sliding over the locked portion of the fault, releasing the stored energy. This energy is released as a combination of radiated elastic strain seismic waves, frictional heating of the fault surface, and cracking of the rock, thus causing an earthquake. This process of gradual build-up of strain and stress punctuated by occasional sudden earthquake failure is referred to as the elastic-rebound theory. It is estimated that only 10 percent or less of an earthquake's total energy is radiated as seismic energy. Most of the earthquake's energy is used to power the earthquake fracture growth or is converted into heat generated by friction. Therefore, earthquakes lower the Earth's available elastic potential energy and raise its temperature, though these changes are negligible compared to the conductive and convective flow of heat out from the Earth's deep interior.
Fault types
There are three main types of fault, all of which may cause an interplate earthquake: normal, reverse (thrust), and strike-slip. Normal and reverse faulting are examples of dip-slip, where the displacement along the fault is in the direction of dip and where movement on them involves a vertical component. Many earthquakes are caused by movement on faults that have components of both dip-slip and strike-slip; this is known as oblique slip. The topmost, brittle part of the Earth's crust, and the cool slabs of the tectonic plates that are descending into the hot mantle, are the only parts of our planet that can store elastic energy and release it in fault ruptures. Rocks hotter than about flow in response to stress; they do not rupture in earthquakes. The maximum observed lengths of ruptures and mapped faults (which may break in a single rupture) are approximately . Examples are the earthquakes in Alaska (1957), Chile (1960), and Sumatra (2004), all in subduction zones. The longest earthquake ruptures on strike-slip faults, like the San Andreas Fault (1857, 1906), the North Anatolian Fault in Turkey (1939), and the Denali Fault in Alaska (2002), are about half to one third as long as the lengths along subducting plate margins, and those along normal faults are even shorter.
Normal faults
Normal faults occur mainly in areas where the crust is being extended such as a divergent boundary. Earthquakes associated with normal faults are generally less than magnitude 7. Maximum magnitudes along many normal faults are even more limited because many of them are located along spreading centers, as in Iceland, where the thickness of the brittle layer is only about .
Reverse faults
Reverse faults occur in areas where the crust is being shortened such as at a convergent boundary. Reverse faults, particularly those along convergent boundaries, are associated with the most powerful earthquakes (called megathrust earthquakes) including almost all of those of magnitude 8 or more. Megathrust earthquakes are responsible for about 90% of the total seismic moment released worldwide.
Strike-slip faults
Strike-slip faults are steep structures where the two sides of the fault slip horizontally past each other; transform boundaries are a particular type of strike-slip fault. Strike-slip faults, particularly continental transforms, can produce major earthquakes up to about magnitude 8. Strike-slip faults tend to be oriented near vertically, resulting in an approximate width of within the brittle crust. Thus, earthquakes with magnitudes much larger than 8 are not possible.
In addition, there exists a hierarchy of stress levels in the three fault types. Thrust faults are generated by the highest, strike-slip by intermediate, and normal faults by the lowest stress levels. This can easily be understood by considering the direction of the greatest principal stress, the direction of the force that "pushes" the rock mass during the faulting. In the case of normal faults, the rock mass is pushed down in a vertical direction, thus the pushing force (greatest principal stress) equals the weight of the rock mass itself. In the case of thrusting, the rock mass "escapes" in the direction of the least principal stress, namely upward, lifting the rock mass, and thus, the overburden equals the least principal stress. Strike-slip faulting is intermediate between the other two types described above. This difference in stress regime in the three faulting environments can contribute to differences in stress drop during faulting, which contributes to differences in the radiated energy, regardless of fault dimensions.
Energy released
For every unit increase in magnitude, there is a roughly thirty-fold increase in the energy released. For instance, an earthquake of magnitude 6.0 releases approximately 32 times more energy than a 5.0 magnitude earthquake and a 7.0 magnitude earthquake releases 1,000 times more energy than a 5.0 magnitude earthquake. An 8.6-magnitude earthquake releases the same amount of energy as 10,000 atomic bombs of the size used in World War II.
This is so because the energy released in an earthquake, and thus its magnitude, is proportional to the area of the fault that ruptures and the stress drop. Therefore, the longer the length and the wider the width of the faulted area, the larger the resulting magnitude. The most important parameter controlling the maximum earthquake magnitude on a fault, however, is not the maximum available length, but the available width because the latter varies by a factor of 20. Along converging plate margins, the dip angle of the rupture plane is very shallow, typically about 10 degrees. Thus, the width of the plane within the top brittle crust of the Earth can reach (such as in Japan, 2011, or in Alaska, 1964), making the most powerful earthquakes possible.
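As a rough numerical illustration of the scaling described above (a sketch, not part of the original article), seismic energy grows by a factor of about 10^1.5 ≈ 31.6 per unit of magnitude, so the ratios quoted in the text can be reproduced directly:

```python
def energy_ratio(m1: float, m2: float) -> float:
    """Approximate ratio of seismic energy released by an event of
    magnitude m1 relative to one of magnitude m2, using the standard
    scaling in which log10(E) grows by ~1.5 per magnitude unit."""
    return 10 ** (1.5 * (m1 - m2))

print(energy_ratio(6.0, 5.0))  # ~31.6, i.e. "about 32 times"
print(energy_ratio(7.0, 5.0))  # ~1000 times
```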
Focus
The majority of tectonic earthquakes originate in the Ring of Fire at depths not exceeding tens of kilometers. Earthquakes occurring at a depth of less than are classified as "shallow-focus" earthquakes, while those with a focal depth between are commonly termed "mid-focus" or "intermediate-depth" earthquakes. In subduction zones, where older and colder oceanic crust descends beneath another tectonic plate, deep-focus earthquakes may occur at much greater depths (ranging from ). These seismically active areas of subduction are known as Wadati–Benioff zones. Deep-focus earthquakes occur at a depth where the subducted lithosphere should no longer be brittle, due to the high temperature and pressure. A possible mechanism for the generation of deep-focus earthquakes is faulting caused by olivine undergoing a phase transition into a spinel structure.
Volcanic activity
Earthquakes often occur in volcanic regions and are caused there, both by tectonic faults and the movement of magma in volcanoes. Such earthquakes can serve as an early warning of volcanic eruptions, as during the 1980 eruption of Mount St. Helens. Earthquake swarms can serve as markers for the location of the flowing magma throughout the volcanoes. These swarms can be recorded by seismometers and tiltmeters (a device that measures ground slope) and used as sensors to predict imminent or upcoming eruptions.
Rupture dynamics
A tectonic earthquake begins as an area of initial slip on the fault surface that forms the focus. Once the rupture has been initiated, it begins to propagate away from the focus, spreading out along the fault surface. Lateral propagation will continue until either the rupture reaches a barrier, such as the end of a fault segment, or a region on the fault where there is insufficient stress to allow continued rupture. For larger earthquakes, the depth extent of rupture will be constrained downwards by the brittle-ductile transition zone and upwards by the ground surface. The mechanics of this process are poorly understood because it is difficult either to recreate such rapid movements in a laboratory or to record seismic waves close to a nucleation zone due to strong ground motion.
In most cases, the rupture speed approaches, but does not exceed, the shear wave (S wave) velocity of the surrounding rock. There are a few exceptions to this:
Supershear earthquakes
Supershear earthquake ruptures are known to have propagated at speeds greater than the S wave velocity. These have so far all been observed during large strike-slip events. The unusually wide zone of damage caused by the 2001 Kunlun earthquake has been attributed to the effects of the sonic boom developed in such earthquakes.
Slow earthquakes
Slow earthquake ruptures travel at unusually low velocities. A particularly dangerous form of slow earthquake is the tsunami earthquake, observed where the relatively low felt intensities, caused by the slow propagation speed of some great earthquakes, fail to alert the population of the neighboring coast, as in the 1896 Sanriku earthquake.
Co-seismic overpressuring and effect of pore pressure
During an earthquake, high temperatures can develop at the fault plane, increasing pore pressure and consequently the vaporization of the groundwater already contained within the rock. In the coseismic phase, such an increase can significantly affect slip evolution and speed; in the post-seismic phase it can control the aftershock sequence because, after the main event, the pore pressure increase slowly propagates into the surrounding fracture network.
From the point of view of the Mohr-Coulomb strength theory, an increase in fluid pressure reduces the normal stress acting on the fault plane that holds it in place, and fluids can exert a lubricating effect. As thermal overpressurization may provide positive feedback between slip and strength fall at the fault plane, a common opinion is that it may enhance the faulting process instability. After the mainshock, the pressure gradient between the fault plane and the neighboring rock causes a fluid flow that increases pore pressure in the surrounding fracture networks; such an increase may trigger new faulting processes by reactivating adjacent faults, giving rise to aftershocks. Analogously, artificial pore pressure increase, by fluid injection in Earth's crust, may induce seismicity.
Tidal forces
Tides may trigger some seismicity.
Clusters
Most earthquakes form part of a sequence, related to each other in terms of location and time. Most earthquake clusters consist of small tremors that cause little to no damage, but there is a theory that earthquakes can recur in a regular pattern. Earthquake clustering has been observed, for example, in Parkfield, California where a long-term research study is being conducted around the Parkfield earthquake cluster.
Aftershocks
An aftershock is an earthquake that occurs after a previous earthquake, the mainshock. Rapid changes of stress between rocks and the stress from the original earthquake are the main causes of these aftershocks, along with the adjustment of the crust around the ruptured fault plane to the effects of the mainshock. An aftershock occurs in the same region as the mainshock but is always of smaller magnitude; even so, aftershocks can be powerful enough to cause further damage to buildings already weakened by the mainshock. If an aftershock is larger than the mainshock, the aftershock is redesignated as the mainshock and the original mainshock is redesignated as a foreshock.
Swarms
Earthquake swarms are sequences of earthquakes striking in a specific area within a short period. They are different from earthquakes followed by a series of aftershocks by the fact that no single earthquake in the sequence is the main shock, so none has a notably higher magnitude than another. An example of an earthquake swarm is the 2004 activity at Yellowstone National Park. In August 2012, a swarm of earthquakes shook Southern California's Imperial Valley, showing the most recorded activity in the area since the 1970s.
Sometimes a series of earthquakes occur in what has been called an earthquake storm, where the earthquakes strike a fault in clusters, each triggered by the shaking or stress redistribution of the previous earthquakes. Similar to aftershocks but on adjacent segments of fault, these storms occur over the course of years, with some of the later earthquakes as damaging as the early ones. Such a pattern was observed in the sequence of about a dozen earthquakes that struck the North Anatolian Fault in Turkey in the 20th century and has been inferred for older anomalous clusters of large earthquakes in the Middle East.
Frequency
It is estimated that around 500,000 earthquakes occur each year, detectable with current instrumentation. About 100,000 of these can be felt. Minor earthquakes occur very frequently around the world in places like California and Alaska in the U.S., as well as in El Salvador, Mexico, Guatemala, Chile, Peru, Indonesia, the Philippines, Iran, Pakistan, the Azores in Portugal, Turkey, New Zealand, Greece, Italy, India, Nepal, and Japan. Larger earthquakes occur less frequently, the relationship being exponential; for example, roughly ten times as many earthquakes larger than magnitude 4 occur than earthquakes larger than magnitude 5. In the (low seismicity) United Kingdom, for example, it has been calculated that the average recurrences are:
an earthquake of 3.7–4.6 every year, an earthquake of 4.7–5.5 every 10 years, and an earthquake of 5.6 or larger every 100 years. This is an example of the Gutenberg–Richter law.
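A minimal sketch of the Gutenberg–Richter relation just named, assuming a b-value of 1 (a typical value; the a-value below is purely illustrative and is not derived from the UK figures above):

```python
def annual_count(magnitude: float, a: float = 4.0, b: float = 1.0) -> float:
    """Expected number of earthquakes per year with magnitude >= `magnitude`,
    from the Gutenberg-Richter law: log10(N) = a - b*M."""
    return 10 ** (a - b * magnitude)

# With b = 1, each extra magnitude unit cuts the expected count tenfold:
print(annual_count(4.0) / annual_count(5.0))  # ~10
```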
The number of seismic stations has increased from about 350 in 1931 to many thousands today. As a result, many more earthquakes are reported than in the past, but this is because of the vast improvement in instrumentation, rather than an increase in the number of earthquakes. The United States Geological Survey (USGS) estimates that, since 1900, there have been an average of 18 major earthquakes (magnitude 7.0–7.9) and one great earthquake (magnitude 8.0 or greater) per year, and that this average has been relatively stable. In recent years, the number of major earthquakes per year has decreased, though this is probably a statistical fluctuation rather than a systematic trend. More detailed statistics on the size and frequency of earthquakes are available from the United States Geological Survey. A recent increase in the number of major earthquakes has been noted, which could be explained by a cyclical pattern of periods of intense tectonic activity, interspersed with longer periods of low intensity. However, accurate recordings of earthquakes only began in the early 1900s, so it is too early to categorically state that this is the case.
Most of the world's earthquakes (90%, and 81% of the largest) take place in the horseshoe-shaped zone called the circum-Pacific seismic belt, known as the Pacific Ring of Fire, which for the most part bounds the Pacific plate. Massive earthquakes tend to occur along other plate boundaries too, such as along the Himalayan Mountains.
With the rapid growth of mega-cities such as Mexico City, Tokyo, and Tehran in areas of high seismic risk, some seismologists are warning that a single earthquake may claim the lives of up to three million people.
Induced seismicity
While most earthquakes are caused by the movement of the Earth's tectonic plates, human activity can also produce earthquakes. Activities both above ground and below may change the stresses and strains on the crust, including building reservoirs, extracting resources such as coal or oil, and injecting fluids underground for waste disposal or fracking. Most of these earthquakes have small magnitudes. The 5.7 magnitude 2011 Oklahoma earthquake is thought to have been caused by disposing wastewater from oil production into injection wells, and studies point to the state's oil industry as the cause of other earthquakes in the past century. A Columbia University paper suggested that the 8.0 magnitude 2008 Sichuan earthquake was induced by loading from the Zipingpu Dam, though the link has not been conclusively proved.
Measurement and location
The instrumental scales used to describe the size of an earthquake began with the Richter scale in the 1930s. It is a relatively simple measurement of an event's amplitude, and its use has become minimal in the 21st century. Seismic waves travel through the Earth's interior and can be recorded by seismometers at great distances. The surface-wave magnitude was developed in the 1950s as a means to measure remote earthquakes and to improve the accuracy for larger events. The moment magnitude scale not only measures the amplitude of the shock but also takes into account the seismic moment (total rupture area, average slip of the fault, and rigidity of the rock). The Japan Meteorological Agency seismic intensity scale, the Medvedev–Sponheuer–Karnik scale, and the Mercalli intensity scale are based on the observed effects and are related to the intensity of shaking.
Intensity and magnitude
The shaking of the earth is a common phenomenon that has been experienced by humans from the earliest of times. Before the development of strong-motion accelerometers, the intensity of a seismic event was estimated based on the observed effects. Magnitude and intensity are not directly related and are calculated using different methods. The magnitude of an earthquake is a single value that describes the size of the earthquake at its source. Intensity is the measure of shaking at different locations around the earthquake. Intensity values vary from place to place, depending on the distance from the earthquake and the underlying rock or soil makeup.
The first scale for measuring earthquake magnitudes was developed by Charles Francis Richter in 1935. Subsequent scales (seismic magnitude scales) have retained a key feature, where each unit represents a ten-fold difference in the amplitude of the ground shaking and a 32-fold difference in energy. Subsequent scales are also adjusted to have approximately the same numeric value within the limits of the scale.
Although the mass media commonly reports earthquake magnitudes as "Richter magnitude" or "Richter scale", standard practice by most seismological authorities is to express an earthquake's strength on the moment magnitude scale, which is based on the actual energy released by an earthquake, the static seismic moment.
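For concreteness, a small sketch of how a moment magnitude is obtained from a scalar seismic moment, using the standard IASPEI formula (the moment value below is illustrative only and is not taken from the article):

```python
import math

def moment_magnitude(seismic_moment_nm: float) -> float:
    """Moment magnitude Mw from the scalar seismic moment M0 in newton-metres:
    Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(seismic_moment_nm) - 9.1)

# An illustrative moment of 1.1e21 N*m corresponds to roughly Mw 8.0
print(round(moment_magnitude(1.1e21), 1))
```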
Seismic waves
Every earthquake produces different types of seismic waves, which travel through rock with different velocities:
Longitudinal P waves (shock- or pressure waves)
Transverse S waves (both body waves)
Surface waves – (Rayleigh and Love waves)
Speed of seismic waves
Propagation velocity of the seismic waves through solid rock ranges from approx. up to , depending on the density and elasticity of the medium. In the Earth's interior, the shock- or P waves travel much faster than the S waves (approx. relation 1.7:1). The differences in travel time from the epicenter to the observatory are a measure of the distance and can be used to image both sources of earthquakes and structures within the Earth. Also, the depth of the hypocenter can be computed roughly.
P wave speed
Upper crust soils and unconsolidated sediments: per second
Upper crust solid rock: per second
Lower crust: per second
Deep mantle: per second.
S waves speed
Light sediments: per second
Earth's crust: per second
Deep mantle: per second
Seismic wave arrival
As a consequence, the first waves of a distant earthquake arrive at an observatory via the Earth's mantle.
On average, the distance in kilometers to the earthquake is roughly eight times the number of seconds between the P- and S-wave arrivals. Slight deviations are caused by inhomogeneities of subsurface structure. By such analysis of seismograms, the Earth's core was located in 1913 by Beno Gutenberg.
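A sketch of the rule of thumb just described (the factor of 8 km per second of S-minus-P delay is the approximation given in the text; in practice it varies with the local velocity structure):

```python
def epicentral_distance_km(sp_delay_seconds: float) -> float:
    """Rough epicentral distance from the delay between P- and S-wave
    arrivals, using the ~8 km per second rule of thumb."""
    return 8.0 * sp_delay_seconds

print(epicentral_distance_km(30.0))  # a 30 s S-P delay suggests roughly 240 km
```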
S waves and later arriving surface waves do most of the damage compared to P waves. P waves squeeze and expand the material in the same direction they are traveling, whereas S waves shake the ground up and down and back and forth.
Location and reporting
Earthquakes are not only categorized by their magnitude but also by the place where they occur. The world is divided into 754 Flinn–Engdahl regions (F-E regions), which are based on political and geographical boundaries as well as seismic activity. More active zones are divided into smaller F-E regions whereas less active zones belong to larger F-E regions.
Standard reporting of earthquakes includes its magnitude, date and time of occurrence, geographic coordinates of its epicenter, depth of the epicenter, geographical region, distances to population centers, location uncertainty, several parameters that are included in USGS earthquake reports (number of stations reporting, number of observations, etc.), and a unique event ID.
Although relatively slow seismic waves have traditionally been used to detect earthquakes, scientists realized in 2016 that gravitational measurement could provide instantaneous detection of earthquakes, and confirmed this by analyzing gravitational records associated with the 2011 Tohoku-Oki ("Fukushima") earthquake.
Effects
The effects of earthquakes include, but are not limited to, the following:
Shaking and ground rupture
Shaking and ground rupture are the main effects created by earthquakes, principally resulting in more or less severe damage to buildings and other rigid structures. The severity of the local effects depends on the complex combination of the earthquake magnitude, the distance from the epicenter, and the local geological and geomorphological conditions, which may amplify or reduce wave propagation. The ground-shaking is measured by ground acceleration.
Specific local geological, geomorphological, and geostructural features can induce high levels of shaking on the ground surface even from low-intensity earthquakes. This effect is called site or local amplification. It is principally due to the transfer of the seismic motion from hard deep soils to soft superficial soils and the effects of seismic energy focalization owing to the typical geometrical setting of such deposits.
Ground rupture is a visible breaking and displacement of the Earth's surface along the trace of the fault, which may be of the order of several meters in the case of major earthquakes. Ground rupture is a major risk for large engineering structures such as dams, bridges, and nuclear power stations and requires careful mapping of existing faults to identify any that are likely to break the ground surface within the life of the structure.
Soil liquefaction
Soil liquefaction occurs when, because of the shaking, water-saturated granular material (such as sand) temporarily loses its strength and transforms from a solid to a liquid. Soil liquefaction may cause rigid structures, like buildings and bridges, to tilt or sink into the liquefied deposits. For example, in the 1964 Alaska earthquake, soil liquefaction caused many buildings to sink into the ground, eventually collapsing upon themselves.
Human impacts
Physical damage from an earthquake will vary depending on the intensity of shaking in a given area and the type of population. Underserved and developing communities frequently experience more severe impacts (and longer lasting) from a seismic event compared to well-developed communities. Impacts may include:
Injuries and loss of life
Damage to critical infrastructure (short and long-term)
Roads, bridges, and public transportation networks
Water, power, sewer and gas interruption
Communication systems
Loss of critical community services including hospitals, police, and fire
General property damage
Collapse or destabilization (potentially leading to future collapse) of buildings
With these impacts and others, the aftermath may bring disease, a lack of basic necessities, mental consequences such as panic attacks and depression to survivors, and higher insurance premiums. Recovery times will vary based on the level of damage and the socioeconomic status of the impacted community.
Landslides
Earthquakes can produce slope instability leading to landslides, a major geological hazard. Landslide danger may persist while emergency personnel are attempting rescue work.
Fires
Earthquakes can cause fires by damaging electrical power or gas lines. In the event of water mains rupturing and a loss of pressure, it may also become difficult to stop the spread of a fire once it has started. For example, more deaths in the 1906 San Francisco earthquake were caused by fire than by the earthquake itself.
Tsunami
Tsunamis are long-wavelength, long-period sea waves produced by the sudden or abrupt movement of large volumes of water—including when an earthquake occurs at sea. In the open ocean, the distance between wave crests can surpass , and the wave periods can vary from five minutes to one hour. Such tsunamis travel 600–800 kilometers per hour (373–497 miles per hour), depending on water depth. Large waves produced by an earthquake or a submarine landslide can overrun nearby coastal areas in a matter of minutes. Tsunamis can also travel thousands of kilometers across open ocean and wreak destruction on far shores hours after the earthquake that generated them.
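The dependence of tsunami speed on water depth follows from the shallow-water wave approximation, v = sqrt(g·d). The sketch below (not from the original article; the depths are illustrative) shows that typical open-ocean depths give speeds in the 600–800 km/h range quoted above:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed_kmh(depth_m: float) -> float:
    """Shallow-water wave speed sqrt(g*d), converted to km/h."""
    return math.sqrt(G * depth_m) * 3.6

print(round(tsunami_speed_kmh(3000)))  # ~618 km/h
print(round(tsunami_speed_kmh(5000)))  # ~797 km/h
```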
Ordinarily, subduction earthquakes under magnitude 7.5 do not cause tsunamis, although some instances of this have been recorded. Most destructive tsunamis are caused by earthquakes of magnitude 7.5 or more.
Floods
Floods may be secondary effects of earthquakes if dams are damaged. Earthquakes may cause landslips to dam rivers, which collapse and cause floods.
The terrain below the Sarez Lake in Tajikistan is in danger of catastrophic flooding if the landslide dam formed by the earthquake, known as the Usoi Dam, were to fail during a future earthquake. Impact projections suggest the flood could affect roughly five million people.
Management
Prediction
Earthquake prediction is a branch of the science of seismology concerned with the specification of the time, location, and magnitude of future earthquakes within stated limits. Many methods have been developed for predicting the time and place in which earthquakes will occur. Despite considerable research efforts by seismologists, scientifically reproducible predictions cannot yet be made to a specific day or month. Popular belief holds that earthquakes are preceded by distinctive "earthquake weather" or tend to strike in the early morning, but neither pattern is supported by scientific evidence.
Forecasting
While forecasting is usually considered to be a type of prediction, earthquake forecasting is often differentiated from earthquake prediction. Earthquake forecasting is concerned with the probabilistic assessment of general earthquake hazards, including the frequency and magnitude of damaging earthquakes in a given area over years or decades. For well-understood faults the probability that a segment may rupture during the next few decades can be estimated.
Earthquake warning systems have been developed that can provide regional notification of an earthquake in progress, but before the ground surface has begun to move, potentially allowing people within the system's range to seek shelter before the earthquake's impact is felt.
Preparedness
The objective of earthquake engineering is to foresee the impact of earthquakes on buildings, bridges, tunnels, roadways, and other structures, and to design such structures to minimize the risk of damage. Existing structures can be modified by seismic retrofitting to improve their resistance to earthquakes. Earthquake insurance can provide building owners with financial protection against losses resulting from earthquakes. Emergency management strategies can be employed by a government or organization to mitigate risks and prepare for consequences.
Artificial intelligence may help to assess buildings and plan precautionary operations. The Igor expert system is part of a mobile laboratory that supports the procedures leading to the seismic assessment of masonry buildings and the planning of retrofitting operations on them. It has been applied to assess buildings in Lisbon, Rhodes, and Naples.
Individuals can also take preparedness steps like securing water heaters and heavy items that could injure someone, locating shutoffs for utilities, and being educated about what to do when the shaking starts. For areas near large bodies of water, earthquake preparedness encompasses the possibility of a tsunami caused by a large earthquake.
In culture
Historical views
From the lifetime of the Greek philosopher Anaxagoras in the 5th century BCE to the 14th century CE, earthquakes were usually attributed to "air (vapors) in the cavities of the Earth." Thales of Miletus (625–547 BCE) was the only documented person who believed that earthquakes were caused by tension between the earth and water. Other theories existed, including the Greek philosopher Anaximenes' (585–526 BCE) beliefs that short incline episodes of dryness and wetness caused seismic activity. The Greek philosopher Democritus (460–371 BCE) blamed water in general for earthquakes. Pliny the Elder called earthquakes "underground thunderstorms".
Mythology and religion
In Norse mythology, earthquakes were explained as the violent struggle of the god Loki. When Loki, god of mischief and strife, murdered Baldr, god of beauty and light, he was punished by being bound in a cave with a poisonous serpent placed above his head dripping venom. Loki's wife Sigyn stood by him with a bowl to catch the poison, but whenever she had to empty the bowl, the poison dripped on Loki's face, forcing him to jerk his head away and thrash against his bonds, which caused the earth to tremble.
In Greek mythology, Poseidon was the cause and god of earthquakes. When he was in a bad mood, he struck the ground with a trident, causing earthquakes and other calamities. He also used earthquakes to punish and inflict fear upon people as revenge.
In Japanese mythology, Namazu (鯰) is a giant catfish who causes earthquakes. Namazu lives in the mud beneath the earth and is guarded by the god Kashima, who restrains the fish with a stone. When Kashima lets his guard fall, Namazu thrashes about, causing violent earthquakes.
In the New Testament, Matthew's Gospel refers to earthquakes occurring both after the death of Jesus (Matthew 27:51, 54) and at his resurrection (Matthew 28:2). Earthquakes form part of the picture through which Jesus portrays the beginning of the end of time.
In popular culture
In modern popular culture, the portrayal of earthquakes is shaped by the memory of great cities laid waste, such as Kobe in 1995 or San Francisco in 1906. Fictional earthquakes tend to strike suddenly and without warning. For this reason, stories about earthquakes generally begin with the disaster and focus on its immediate aftermath, as in Short Walk to Daylight (1972), The Ragged Edge (1968) or Aftershock: Earthquake in New York (1999). A notable example is Heinrich von Kleist's classic novella, The Earthquake in Chile, which describes the destruction of Santiago in 1647. Haruki Murakami's short fiction collection After the Quake depicts the consequences of the Kobe earthquake of 1995.
The most popular single earthquake in fiction is the hypothetical "Big One" expected of California's San Andreas Fault someday, as depicted in the novels Richter 10 (1996), Goodbye California (1977), 2012 (2009), and San Andreas (2015), among other works. Jacob M. Appel's widely anthologized short story, A Comparative Seismology, features a con artist who convinces an elderly woman that an apocalyptic earthquake is imminent.
Contemporary depictions of earthquakes in film are variable in the manner in which they reflect human psychological reactions to the actual trauma that can be caused to directly afflicted families and their loved ones. Disaster mental health response research emphasizes the need to be aware of the different roles of loss of family and key community members, loss of home and familiar surroundings, and loss of essential supplies and services to maintain survival. Particularly for children, the clear availability of caregiving adults who can protect, nourish, and clothe them in the aftermath of the earthquake and help them make sense of what has befallen them is more important to their emotional and physical health than the simple giving of provisions. As was observed after other disasters involving destruction and loss of life and their media depictions, recently observed in the 2010 Haiti earthquake, it is also believed to be important not to pathologize the reactions to loss and displacement or disruption of governmental administration and services, but rather to validate the reactions to support constructive problem-solving and reflection.
Outside of Earth
Phenomena similar to earthquakes have been observed on other planets (e.g., marsquakes on Mars) and on the Moon (e.g., moonquakes).
| Physical sciences | Earth science | null |
10116 | https://en.wikipedia.org/wiki/Endocytosis | Endocytosis | Endocytosis is a cellular process in which substances are brought into the cell. The material to be internalized is surrounded by an area of cell membrane, which then buds off inside the cell to form a vesicle containing the ingested materials. Endocytosis includes pinocytosis (cell drinking) and phagocytosis (cell eating). It is a form of active transport.
History
The term was proposed by De Duve in 1963. Phagocytosis was discovered by Élie Metchnikoff in 1882.
Pathways
Endocytosis pathways can be subdivided into four categories: namely, receptor-mediated endocytosis (also known as clathrin-mediated endocytosis), caveolae, pinocytosis, and phagocytosis.
Clathrin-mediated endocytosis is mediated by the production of small (approx. 100 nm in diameter) vesicles that have a morphologically characteristic coat made up of the cytosolic protein clathrin. Clathrin-coated vesicles (CCVs) are found in virtually all cells and form domains of the plasma membrane termed clathrin-coated pits. Coated pits can concentrate large extracellular molecules that have different receptors responsible for the receptor-mediated endocytosis of ligands, e.g. low density lipoprotein, transferrin, growth factors, antibodies and many others.
Studies in mammalian cells confirm a reduction in clathrin coat size in an increased-tension environment. In addition, they suggest that the two apparently distinct clathrin assembly modes, namely coated pits and coated plaques, observed in experimental investigations might be a consequence of varied tensions in the plasma membrane.
Caveolae are the most commonly reported non-clathrin-coated plasma membrane buds, which exist on the surface of many, but not all cell types. They consist of the cholesterol-binding protein caveolin (Vip21) with a bilayer enriched in cholesterol and glycolipids. Caveolae are small (approx. 50 nm in diameter) flask-shape pits in the membrane that resemble the shape of a cave (hence the name caveolae). They can constitute up to a third of the plasma membrane area of the cells of some tissues, being especially abundant in smooth muscle, type I pneumocytes, fibroblasts, adipocytes, and endothelial cells. Uptake of extracellular molecules is also believed to be specifically mediated via receptors in caveolae.
Potocytosis is a form of receptor-mediated endocytosis that uses caveolae vesicles to bring molecules of various sizes into the cell. Unlike most endocytosis that uses caveolae to deliver contents of vesicles to lysosomes or other organelles, material endocytosed via potocytosis is released into the cytosol.
Pinocytosis, which usually occurs from highly ruffled regions of the plasma membrane, is the invagination of the cell membrane to form a pocket, which then pinches off into the cell to form a vesicle (0.5–5 μm in diameter) filled with a large volume of extracellular fluid and molecules within it (equivalent to ~100 CCVs). The filling of the pocket occurs in a non-specific manner. The vesicle then travels into the cytosol and fuses with other vesicles such as endosomes and lysosomes.
Phagocytosis is the process by which cells bind and internalize particulate matter larger than around 0.75 μm in diameter, such as small-sized dust particles, cell debris, microorganisms and apoptotic cells. These processes involve the uptake of larger membrane areas than clathrin-mediated endocytosis and caveolae pathway.
More recent experiments have suggested that these morphological descriptions of endocytic events may be inadequate, and a more appropriate method of classification may be based upon whether particular pathways are dependent on clathrin and dynamin.
Dynamin-dependent clathrin-independent pathways include FEME, UFE, ADBE, EGFR-NCE and IL2Rβ uptake.
Dynamin-independent clathrin-independent pathways include the CLIC/GEEC pathway (regulated by Graf1), as well as MEND and macropinocytosis.
Clathrin-mediated endocytosis is the only pathway dependent on both clathrin and dynamin.
Principal components
The endocytic pathway of mammalian cells consists of distinct membrane compartments, which internalize molecules from the plasma membrane and recycle them back to the surface (as in early endosomes and recycling endosomes), or sort them to degradation (as in late endosomes and lysosomes). The principal components of the endocytic pathway are:
Early endosomes are the first compartment of the endocytic pathway. Early endosomes are often located in the periphery of the cell, and receive most types of vesicles coming from the cell surface. They have a characteristic tubulo-vesicular structure (vesicles up to 1 μm in diameter with connected tubules of approx. 50 nm diameter) and a mildly acidic pH. They are principally sorting organelles where many endocytosed ligands dissociate from their receptors in the acid pH of the compartment, and from which many of the receptors recycle to the cell surface (via tubules). It is also the site of sorting into transcytotic pathway to later compartments (like late endosomes or lysosomes) via transvesicular compartments (like multivesicular bodies (MVB) or endosomal carrier vesicles (ECVs)).
Late endosomes receive endocytosed material en route to lysosomes, usually from early endosomes in the endocytic pathway, from the trans-Golgi network (TGN) in the biosynthetic pathway, and from phagosomes in the phagocytic pathway. Late endosomes often contain proteins characteristic of this late compartment, including lysosomal membrane glycoproteins and acid hydrolases. They are acidic (approx. pH 5.5), and are part of the trafficking pathway of mannose-6-phosphate receptors. Late endosomes are thought to mediate a final set of sorting events prior to the delivery of material to lysosomes.
Lysosomes are the last compartment of the endocytic pathway. Their chief function is to break down cellular waste products, fats, carbohydrates, proteins, and other macromolecules into simple compounds. These are then returned to the cytoplasm as new cell-building materials. To accomplish this, lysosomes use some 40 different types of hydrolytic enzymes, all of which are manufactured in the endoplasmic reticulum, modified in the Golgi apparatus and function in an acidic environment. The approximate pH of a lysosome is 4.8 and by electron microscopy (EM) usually appear as large vacuoles (1-2 μm in diameter) containing electron dense material. They have a high content of lysosomal membrane proteins and active lysosomal hydrolases, but no mannose-6-phosphate receptor. They are generally regarded as the principal hydrolytic compartment of the cell.
It was recently found that an eisosome serves as a portal of endocytosis in yeast.
Clathrin-mediated
The major route for endocytosis in most cells, and the best-understood, is that mediated by the molecule clathrin. This large protein assists in the formation of a coated pit on the inner surface of the plasma membrane of the cell. This pit then buds into the cell to form a coated vesicle in the cytoplasm of the cell. In so doing, it brings into the cell not only a small area of the surface of the cell but also a small volume of fluid from outside the cell.
Coats function to deform the donor membrane to produce a vesicle, and they also function in the selection of the vesicle cargo. Coat complexes that have been well characterized so far include coat protein-I (COP-I), COP-II, and clathrin. Clathrin coats are involved in two crucial transport steps: (i) receptor-mediated and fluid-phase endocytosis from the plasma membrane to early endosome and (ii) transport from the TGN to endosomes. In endocytosis, the clathrin coat is assembled on the cytoplasmic face of the plasma membrane, forming pits that invaginate to pinch off (scission) and become free CCVs. In cultured cells, the assembly of a CCV takes ~1 min, and several hundred to a thousand or more can form every minute. The main scaffold component of the clathrin coat is the 190-kD protein called clathrin heavy chain (CHC), which is associated with a 25-kD protein called clathrin light chain (CLC), forming three-legged trimers called triskelions.
Vesicles selectively concentrate and exclude certain proteins during formation and are not representative of the membrane as a whole. AP2 adaptors are multisubunit complexes that perform this function at the plasma membrane. The best-understood receptors that are found concentrated in coated vesicles of mammalian cells are the LDL receptor (which removes LDL from circulating blood), the transferrin receptor (which brings ferric ions bound by transferrin into the cell) and certain hormone receptors (such as that for EGF).
At any one moment, about 25% of the plasma membrane of a fibroblast is made up of coated pits. As a coated pit has a life of about a minute before it buds into the cell, a fibroblast takes up its surface by this route about once every 50 minutes. Coated vesicles formed from the plasma membrane have a diameter of about 100 nm and a lifetime measured in a few seconds. Once the coat has been shed, the remaining vesicle fuses with endosomes and proceeds down the endocytic pathway. The actual budding-in process, whereby a pit is converted to a vesicle, is carried out by clathrin, assisted by a set of cytoplasmic proteins, which includes dynamin and adaptors such as adaptin.
Coated pits and vesicles were first seen in thin sections of tissue in the electron microscope by Thomas F Roth and Keith R. Porter. The importance of them for the clearance of LDL from blood was discovered by Richard G. Anderson, Michael S. Brown and Joseph L. Goldstein in 1977. Coated vesicles were first purified by Barbara Pearse, who discovered the clathrin coat molecule in 1976.
Processes and components
Caveolin proteins like caveolin-1 (CAV1), caveolin-2 (CAV2), and caveolin-3 (CAV3) play significant roles in the caveolar formation process. More specifically, CAV1 and CAV2 are responsible for caveolae formation in non-muscle cells while CAV3 functions in muscle cells. The process starts with CAV1 being synthesized in the ER where it forms detergent-resistant oligomers. Then, these oligomers travel through the Golgi complex before arriving at the cell surface to aid in caveolar formation. Caveolae formation is also reversible through disassembly under certain conditions such as increased plasma membrane tension. These conditions depend on the type of tissues that are expressing the caveolar function. For example, not all tissues that have caveolar proteins have a caveolar structure, such as the blood-brain barrier.
Though there are many morphological features conserved among caveolae, the functions of each CAV protein are diverse. One common feature among caveolins is their hydrophobic stretches of potential hairpin structures that are made of α-helices. The insertion of these hairpin-like α-helices forms a caveolae coat which leads to membrane curvature. In addition to insertion, caveolins are also capable of oligomerization which further plays a role in membrane curvature. Recent studies have also discovered that polymerase I, transcript release factor, and serum deprivation protein response also play a role in the assembly of caveolae. Besides caveolae assembly, researchers have also discovered that CAV1 proteins can also influence other endocytic pathways. When CAV1 binds to Cdc42, CAV1 inactivates it and regulates Cdc42 activity during membrane trafficking events.
Mechanisms
The process of cell uptake depends on the tilt and chirality of constituent molecules to induce membrane budding. Since such chiral and tilted lipid molecules are likely to be in a "raft" form, researchers suggest that caveolae formation also follows this mechanism since caveolae are also enriched in raft constituents. When caveolin proteins bind to the inner leaflet via cholesterol, the membrane starts to bend, leading to spontaneous curvature. This effect is due to the force distribution generated when the caveolin oligomer binds to the membrane. The force distribution then alters the tension of the membrane which leads to budding and eventually vesicle formation.
| Biology and health sciences | Cell processes | Biology |
10134 | https://en.wikipedia.org/wiki/Electromagnetic%20spectrum | Electromagnetic spectrum | The electromagnetic spectrum is the full range of electromagnetic radiation, organized by frequency or wavelength. The spectrum is divided into separate bands, with different names for the electromagnetic waves within each band. From low to high frequency these are: radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays. The electromagnetic waves in each of these bands have different characteristics, such as how they are produced, how they interact with matter, and their practical applications.
Radio waves, at the low-frequency end of the spectrum, have the lowest photon energy and the longest wavelengths—thousands of kilometers, or more. They can be emitted and received by antennas, and pass through the atmosphere, foliage, and most building materials.
Gamma rays, at the high-frequency end of the spectrum, have the highest photon energies and the shortest wavelengths—much smaller than an atomic nucleus. Gamma rays, X-rays, and extreme ultraviolet rays are called ionizing radiation because their high photon energy is able to ionize atoms, causing chemical reactions. Longer-wavelength radiation such as visible light is nonionizing; the photons do not have sufficient energy to ionize atoms.
Throughout most of the electromagnetic spectrum, spectroscopy can be used to separate waves of different frequencies, so that the intensity of the radiation can be measured as a function of frequency or wavelength. Spectroscopy is used to study the interactions of electromagnetic waves with matter.
History and discovery
Humans have always been aware of visible light and radiant heat but for most of history it was not known that these phenomena were connected or were representatives of a more extensive principle. The ancient Greeks recognized that light traveled in straight lines and studied some of its properties, including reflection and refraction. Light was intensively studied from the beginning of the 17th century leading to the invention of important instruments like the telescope and microscope. Isaac Newton was the first to use the term spectrum for the range of colours that white light could be split into with a prism. Starting in 1666, Newton showed that these colours were intrinsic to light and could be recombined into white light. A debate arose over whether light had a wave nature or a particle nature with René Descartes, Robert Hooke and Christiaan Huygens favouring a wave description and Newton favouring a particle description. Huygens in particular had a well developed theory from which he was able to derive the laws of reflection and refraction. Around 1801, Thomas Young measured the wavelength of a light beam with his two-slit experiment thus conclusively demonstrating that light was a wave.
In 1800, William Herschel discovered infrared radiation. He was studying the temperature of different colours by moving a thermometer through light split by a prism. He noticed that the highest temperature was beyond red. He theorized that this temperature change was due to "calorific rays", a type of light ray that could not be seen. The next year, Johann Ritter, working at the other end of the spectrum, noticed what he called "chemical rays" (invisible light rays that induced certain chemical reactions). These behaved similarly to visible violet light rays, but were beyond them in the spectrum. They were later renamed ultraviolet radiation.
The study of electromagnetism began in 1820 when Hans Christian Ørsted discovered that electric currents produce magnetic fields (Oersted's law). Light was first linked to electromagnetism in 1845, when Michael Faraday noticed that the polarization of light traveling through a transparent material responded to a magnetic field (see Faraday effect). During the 1860s, James Clerk Maxwell developed four partial differential equations (Maxwell's equations) for the electromagnetic field. Two of these equations predicted the possibility and behavior of waves in the field. Analyzing the speed of these theoretical waves, Maxwell realized that they must travel at a speed that was about the known speed of light. This startling coincidence in value led Maxwell to make the inference that light itself is a type of electromagnetic wave. Maxwell's equations predicted an infinite range of frequencies of electromagnetic waves, all traveling at the speed of light. This was the first indication of the existence of the entire electromagnetic spectrum.
Maxwell's predicted waves included waves at very low frequencies compared to infrared, which in theory might be created by oscillating charges in an ordinary electrical circuit of a certain type. Attempting to prove Maxwell's equations and detect such low frequency electromagnetic radiation, in 1886, the physicist Heinrich Hertz built an apparatus to generate and detect what are now called radio waves. Hertz found the waves and was able to infer (by measuring their wavelength and multiplying it by their frequency) that they traveled at the speed of light. Hertz also demonstrated that the new radiation could be both reflected and refracted by various dielectric media, in the same manner as light. For example, Hertz was able to focus the waves using a lens made of tree resin. In a later experiment, Hertz similarly produced and measured the properties of microwaves. These new types of waves paved the way for inventions such as the wireless telegraph and the radio.
In 1895, Wilhelm Röntgen noticed a new type of radiation emitted during an experiment with an evacuated tube subjected to a high voltage. He called this radiation "x-rays" and found that they were able to travel through parts of the human body but were reflected or stopped by denser matter such as bones. Before long, many uses were found for this radiography.
The last portion of the electromagnetic spectrum was filled in with the discovery of gamma rays. In 1900, Paul Villard was studying the radioactive emissions of radium when he identified a new type of radiation that he at first thought consisted of particles similar to known alpha and beta particles, but with the power of being far more penetrating than either. However, in 1910, British physicist William Henry Bragg demonstrated that gamma rays are electromagnetic radiation, not particles, and in 1914, Ernest Rutherford (who had named them gamma rays in 1903 when he realized that they were fundamentally different from charged alpha and beta particles) and Edward Andrade measured their wavelengths, and found that gamma rays were similar to X-rays, but with shorter wavelengths.
The wave-particle debate was rekindled in 1901 when Max Planck proposed that light is emitted and absorbed only in discrete "quanta", now called photons, implying that light has a particle nature. This idea was made explicit by Albert Einstein in 1905, but was never accepted by Planck and many other contemporaries. The modern position of science is that electromagnetic radiation has both a wave and a particle nature, the wave-particle duality. The contradictions arising from this position are still being debated by scientists and philosophers.
Range
Electromagnetic waves are typically described by any of the following three physical properties: the frequency f, wavelength λ, or photon energy E. Frequencies observed in astronomy range from about 2.4×10²³ Hz (1 GeV gamma rays) down to the local plasma frequency of the ionized interstellar medium (~1 kHz). Wavelength is inversely proportional to the wave frequency, so gamma rays have very short wavelengths that are fractions of the size of atoms, whereas wavelengths on the opposite end of the spectrum can be indefinitely long. Photon energy is directly proportional to the wave frequency, so gamma-ray photons have the highest energy (around a billion electron volts), while radio wave photons have very low energy (around a femtoelectronvolt). These relations are illustrated by the following equations:

f = c/λ,  or  f = E/h,  or  E = hc/λ,

where:
c is the speed of light in vacuum (c ≈ 2.998×10⁸ m/s)
h is the Planck constant (h ≈ 6.626×10⁻³⁴ J·s).
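As a concrete illustration of these relations, here is a minimal Python sketch (the function names and the example values are chosen for this illustration only) that converts among frequency, wavelength, and photon energy:

    # Minimal sketch of the conversions f = c/lambda and E = h*f.
    C = 2.998e8        # speed of light in vacuum, m/s
    H = 6.626e-34      # Planck constant, J*s
    EV = 1.602e-19     # joules per electronvolt

    def frequency_from_wavelength(wavelength_m):
        """f = c / lambda"""
        return C / wavelength_m

    def photon_energy_ev(frequency_hz):
        """E = h * f, expressed in electronvolts"""
        return H * frequency_hz / EV

    # Example: green light at 550 nm
    f = frequency_from_wavelength(550e-9)    # ~5.45e14 Hz
    print(f, photon_energy_ev(f))            # ~2.25 eV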
Whenever electromagnetic waves travel in a medium with matter, their wavelength is decreased. Wavelengths of electromagnetic radiation, whatever medium they are traveling through, are usually quoted in terms of the vacuum wavelength, although this is not always explicitly stated.
Generally, electromagnetic radiation is classified by wavelength into radio wave, microwave, infrared, visible light, ultraviolet, X-rays and gamma rays. The behavior of EM radiation depends on its wavelength. When EM radiation interacts with single atoms and molecules, its behavior also depends on the amount of energy per quantum (photon) it carries.
Spectroscopy can detect a much wider region of the EM spectrum than the visible wavelength range of 400 nm to 700 nm in a vacuum. A common laboratory spectroscope can detect wavelengths from 2 nm to 2500 nm. Detailed information about the physical properties of objects, gases, or even stars can be obtained from this type of device. Spectroscopes are widely used in astrophysics. For example, many hydrogen atoms emit a radio wave photon that has a wavelength of 21.12 cm. Also, frequencies of 30 Hz and below can be produced by, and are important in the study of, certain stellar nebulae, while far higher frequencies have been detected from astrophysical sources.
Regions
The types of electromagnetic radiation are broadly classified into the following classes (regions, bands or types):
Gamma radiation
X-ray radiation
Ultraviolet radiation
Visible light (light that humans can see)
Infrared radiation
Microwave radiation
Radio waves
This classification goes in the increasing order of wavelength, which is characteristic of the type of radiation.
There are no precisely defined boundaries between the bands of the electromagnetic spectrum; rather they fade into each other like the bands in a rainbow. Radiation of each frequency and wavelength (or in each band) has a mix of properties of the two regions of the spectrum that bound it. For example, red light resembles infrared radiation, in that it can excite and add energy to some chemical bonds and indeed must do so to power the chemical mechanisms responsible for photosynthesis and the working of the visual system.
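Since the bands are delimited approximately by wavelength, the classification can be expressed as a simple lookup. The following Python sketch uses cut-offs in line with the ranges quoted in this article; per the caveat above, the exact boundary values are conventional rather than physical:

    # Rough classifier; band edges are approximate and overlap in practice.
    BANDS = [                      # (upper wavelength bound in metres, name)
        (10e-12, "gamma ray"),     # below ~10 pm (convention; no sharp edge)
        (10e-9,  "X-ray"),
        (400e-9, "ultraviolet"),
        (750e-9, "visible light"),
        (1e-3,   "infrared"),
        (0.1,    "microwave"),
    ]

    def classify(wavelength_m):
        for upper_bound, name in BANDS:
            if wavelength_m < upper_bound:
                return name
        return "radio wave"

    print(classify(550e-9))   # visible light
    print(classify(0.2112))   # radio wave (the 21 cm hydrogen line)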
In atomic and nuclear physics, the distinction between X-rays and gamma rays is based on sources: photons generated by nuclear decay or other nuclear and subnuclear/particle processes are termed gamma rays, whereas X-rays are generated by electronic transitions involving energetically deep inner atomic electrons. Transitions in muonic atoms are also said to produce X-rays. In astrophysics, energies below 100 keV are called X-rays and higher energies are gamma rays.
The region of the spectrum where electromagnetic radiation is observed may differ from the region in which it was emitted, due to the relative velocity of the source and observer (the Doppler shift), relative gravitational potential (gravitational redshift), or expansion of the universe (cosmological redshift). For example, the cosmic microwave background, relic blackbody radiation from the era of recombination, started out at energies around 1 eV, but has since undergone enough cosmological redshift to put it into the microwave region of the spectrum for observers on Earth.
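The cosmological-redshift example can be made quantitative with the relation λ_obs = (1 + z)·λ_emit. The Python sketch below assumes the standard value z ≈ 1100 for the era of recombination, which is not stated in this article:

    # Cosmological redshift applied to the CMB example above.
    HC_EV_NM = 1239.8          # h*c in eV*nm, so lambda(nm) = 1239.8 / E(eV)

    def redshifted_wavelength_nm(energy_emitted_ev, z):
        """Observed wavelength: lambda_obs = (1 + z) * lambda_emit."""
        lambda_emit = HC_EV_NM / energy_emitted_ev
        return (1 + z) * lambda_emit

    # ~1 eV photons from the era of recombination, redshift z ~ 1100:
    print(redshifted_wavelength_nm(1.0, 1100))  # ~1.37e6 nm ~ 1.4 mm (microwave)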
Rationale for names
Electromagnetic radiation interacts with matter in different ways across the spectrum. These types of interaction are so different that historically different names have been applied to different parts of the spectrum, as though these were different types of radiation. Thus, although these "different kinds" of electromagnetic radiation form a quantitatively continuous spectrum of frequencies and wavelengths, the spectrum remains divided for practical reasons arising from these qualitative interaction differences.
Types of radiation
Radio waves
Radio waves are emitted and received by antennas, which consist of conductors such as metal rod resonators. In artificial generation of radio waves, an electronic device called a transmitter generates an alternating electric current which is applied to an antenna. The oscillating electrons in the antenna generate oscillating electric and magnetic fields that radiate away from the antenna as radio waves. In reception of radio waves, the oscillating electric and magnetic fields of a radio wave couple to the electrons in an antenna, pushing them back and forth, creating oscillating currents which are applied to a radio receiver. Earth's atmosphere is mainly transparent to radio waves, except for layers of charged particles in the ionosphere which can reflect certain frequencies.
Radio waves are extremely widely used to transmit information across distances in radio communication systems such as radio broadcasting, television, two way radios, mobile phones, communication satellites, and wireless networking. In a radio communication system, a radio frequency current is modulated with an information-bearing signal in a transmitter by varying either the amplitude, frequency or phase, and applied to an antenna. The radio waves carry the information across space to a receiver, where they are received by an antenna and the information extracted by demodulation in the receiver. Radio waves are also used for navigation in systems like Global Positioning System (GPS) and navigational beacons, and locating distant objects in radiolocation and radar. They are also used for remote control, and for industrial heating.
The use of the radio spectrum is strictly regulated by governments, coordinated by the International Telecommunication Union (ITU) which allocates frequencies to different users for different uses.
Microwaves
Microwaves are radio waves of short wavelength, from about 10 centimeters to one millimeter, in the SHF and EHF frequency bands. Microwave energy is produced with klystron and magnetron tubes, and with solid state devices such as Gunn and IMPATT diodes. Although they are emitted and absorbed by short antennas, they are also absorbed by polar molecules, coupling to vibrational and rotational modes, resulting in bulk heating. Unlike higher frequency waves such as infrared and visible light which are absorbed mainly at surfaces, microwaves can penetrate into materials and deposit their energy below the surface. This effect is used to heat food in microwave ovens, and for industrial heating and medical diathermy. Microwaves are the main wavelengths used in radar, and are used for satellite communication, and wireless networking technologies such as Wi-Fi. The copper cables (transmission lines) which are used to carry lower-frequency radio waves to antennas have excessive power losses at microwave frequencies, and metal pipes called waveguides are used to carry them. Although at the low end of the band the atmosphere is mainly transparent, at the upper end of the band absorption of microwaves by atmospheric gases limits practical propagation distances to a few kilometers.
Terahertz radiation or sub-millimeter radiation is a region of the spectrum from about 100 GHz to 30 terahertz (THz) between microwaves and far infrared which can be regarded as belonging to either band. Until recently, the range was rarely studied and few sources existed for microwave energy in the so-called terahertz gap, but applications such as imaging and communications are now appearing. Scientists are also looking to apply terahertz technology in the armed forces, where high-frequency waves might be directed at enemy troops to incapacitate their electronic equipment. Terahertz radiation is strongly absorbed by atmospheric gases, making this frequency range useless for long-distance communication.
Infrared radiation
The infrared part of the electromagnetic spectrum covers the range from roughly 300 GHz to 400 THz (1 mm – 750 nm). It can be divided into three parts:
Far-infrared, from 300 GHz to 30 THz (1 mm – 10 μm). The lower part of this range may also be called microwaves or terahertz waves. This radiation is typically absorbed by so-called rotational modes in gas-phase molecules, by molecular motions in liquids, and by phonons in solids. The water in Earth's atmosphere absorbs so strongly in this range that it renders the atmosphere in effect opaque. However, there are certain wavelength ranges ("windows") within the opaque range that allow partial transmission, and these can be used for astronomy. The wavelength range from approximately 200 μm up to a few mm is often referred to as submillimetre in astronomy, with far infrared reserved for wavelengths below 200 μm.
Mid-infrared, from 30 THz to 120 THz (10–2.5 μm). Hot objects (black-body radiators) can radiate strongly in this range, and human skin at normal body temperature radiates strongly at the lower end of this region. This radiation is absorbed by molecular vibrations, where the different atoms in a molecule vibrate around their equilibrium positions. This range is sometimes called the fingerprint region, since the mid-infrared absorption spectrum of a compound is very specific for that compound.
Near-infrared, from 120 THz to 400 THz (2,500–750 nm). Physical processes that are relevant for this range are similar to those for visible light. The highest frequencies in this region can be detected directly by some types of photographic film, and by many types of solid state image sensors for infrared photography and videography.
Visible light
Above infrared in frequency comes visible light. The Sun emits its peak power in the visible region, although integrating the entire emission power spectrum through all wavelengths shows that the Sun emits slightly more infrared than visible light. By definition, visible light is the part of the EM spectrum the human eye is the most sensitive to. Visible light (and near-infrared light) is typically absorbed and emitted by electrons in molecules and atoms that move from one energy level to another. This action allows the chemical mechanisms that underlie human vision and plant photosynthesis. The light that excites the human visual system is a very small portion of the electromagnetic spectrum. A rainbow shows the optical (visible) part of the electromagnetic spectrum; infrared (if it could be seen) would be located just beyond the red side of the rainbow whilst ultraviolet would appear just beyond the opposite violet end.
Electromagnetic radiation with a wavelength between 380 nm and 760 nm (400–790 terahertz) is detected by the human eye and perceived as visible light. Other wavelengths, especially near infrared (longer than 760 nm) and ultraviolet (shorter than 380 nm) are also sometimes referred to as light, especially when the visibility to humans is not relevant. White light is a combination of lights of different wavelengths in the visible spectrum. Passing white light through a prism splits it up into the several colours of light observed in the visible spectrum between 400 nm and 780 nm.
If radiation having a frequency in the visible region of the EM spectrum reflects off an object, say, a bowl of fruit, and then strikes the eyes, this results in visual perception of the scene. The brain's visual system processes the multitude of reflected frequencies into different shades and hues, and through this insufficiently understood psychophysical phenomenon, most people perceive a bowl of fruit.
At most wavelengths, however, the information carried by electromagnetic radiation is not directly detected by human senses. Natural sources produce EM radiation across the spectrum, and technology can also manipulate a broad range of wavelengths. Optical fiber transmits light that, although not necessarily in the visible part of the spectrum (it is usually infrared), can carry information. The modulation is similar to that used with radio waves.
Ultraviolet radiation
Next in frequency comes ultraviolet (UV). In frequency (and thus energy), UV rays sit between the violet end of the visible spectrum and the X-ray range. The UV wavelength spectrum ranges from 399 nm to 10 nm and is divided into three sections: UVA, UVB, and UVC.
UV is the lowest energy range energetic enough to ionize atoms, separating electrons from them, and thus causing chemical reactions. UV, X-rays, and gamma rays are thus collectively called ionizing radiation; exposure to them can damage living tissue. UV can also cause substances to glow with visible light; this is called fluorescence. UV fluorescence is used in forensics to detect evidence such as blood and urine at a crime scene. UV fluorescence is also used to detect counterfeit money and IDs, as these are marked with material that glows under UV.
In the middle range of UV, the rays cannot ionize atoms but can break chemical bonds, making molecules unusually reactive. Sunburn, for example, is caused by the disruptive effects of middle-range UV radiation on skin cells, and this damage is the main cause of skin cancer. UV rays in the middle range can irreparably damage the complex DNA molecules in cells, producing thymine dimers, which makes UV a very potent mutagen. Because of the skin cancer caused by UV, the sunscreen industry arose to combat UV damage. Mid-UV wavelengths are called UVB, and UVB lights such as germicidal lamps are used to kill germs and also to sterilize water.
The Sun emits UV radiation (about 10% of its total power), including extremely short-wavelength UV that could potentially destroy most life on land (ocean water would provide some protection for life there). However, most of the Sun's damaging UV wavelengths are absorbed by the atmosphere before they reach the surface. The highest-energy (shortest-wavelength) ranges of UV (called "vacuum UV") are absorbed by nitrogen and, at longer wavelengths, by simple diatomic oxygen in the air. Most of the UV in the mid-range of energy is blocked by the ozone layer, which absorbs strongly in the important 200–315 nm range, the lower-energy part of which is too long in wavelength for ordinary dioxygen in air to absorb. This leaves less than 3% of sunlight at sea level in UV, all of it at the lower energies: UV-A, along with some UV-B. The very lowest energy range of UV, between 315 nm and visible light (called UV-A), is not blocked well by the atmosphere, but does not cause sunburn and does less biological damage. However, it is not harmless; it creates oxygen radicals, mutations and skin damage.
X-rays
After UV come X-rays, which, like the upper ranges of UV, are also ionizing. However, due to their higher energies, X-rays can also interact with matter by means of the Compton effect. Hard X-rays have shorter wavelengths than soft X-rays, and as they can pass through many substances with little absorption, they can be used to 'see through' objects with 'thicknesses' less than the equivalent of a few meters of water. One notable use is diagnostic X-ray imaging in medicine (a process known as radiography). X-rays are useful as probes in high-energy physics. In astronomy, the accretion disks around neutron stars and black holes emit X-rays, enabling studies of these phenomena. X-rays are also emitted by stellar coronae and are strongly emitted by some types of nebulae. However, X-ray telescopes must be placed outside the Earth's atmosphere to see astronomical X-rays, since the great depth of Earth's atmosphere (with an areal density of 1000 g/cm², equivalent to 10 meters of water) is opaque to X-rays. This is sufficient to block almost all astronomical X-rays (and also astronomical gamma rays—see below).
Gamma rays
After hard X-rays come gamma rays, which were discovered by Paul Ulrich Villard in 1900. These are the most energetic photons, with no defined lower limit to their wavelength. In astronomy they are valuable for studying high-energy objects or regions; however, as with X-rays, this can only be done with telescopes outside the Earth's atmosphere. Gamma rays are used experimentally by physicists for their penetrating ability and are produced by a number of radioisotopes. They are used for irradiation of foods and seeds for sterilization, and in medicine they are occasionally used in radiation cancer therapy. More commonly, gamma rays are used for diagnostic imaging in nuclear medicine, an example being PET scans. The wavelength of gamma rays can be measured with high accuracy through the effects of Compton scattering.
| Physical sciences | Electrodynamics | null |
10192 | https://en.wikipedia.org/wiki/Explosive | Explosive | An explosive (or explosive material) is a reactive substance that contains a great amount of potential energy that can produce an explosion if released suddenly, usually accompanied by the production of light, heat, sound, and pressure. An explosive charge is a measured quantity of explosive material, which may either be composed solely of one ingredient or be a mixture containing at least two substances.
The potential energy stored in an explosive material may, for example, be:
chemical energy, such as nitroglycerin or grain dust
pressurized gas, such as a gas cylinder, aerosol can, or boiling liquid expanding vapor explosion
nuclear energy, such as in the fissile isotopes uranium-235 and plutonium-239
Explosive materials may be categorized by the speed at which they expand. Materials that detonate (the front of the chemical reaction moves faster through the material than the speed of sound) are said to be "high explosives" and materials that deflagrate are said to be "low explosives". Explosives may also be categorized by their sensitivity. Sensitive materials that can be initiated by a relatively small amount of heat or pressure are primary explosives and materials that are relatively insensitive are secondary or tertiary explosives.
A wide variety of chemicals can explode; a smaller number are manufactured specifically for the purpose of being used as explosives. The remainder are too dangerous, sensitive, toxic, expensive, unstable, or prone to decomposition or degradation over short time spans.
In contrast, materials that burn without exploding are merely combustible or flammable.
The distinction, however, is not very clear. Certain materials—dusts, powders, gases, or volatile organic liquids—may be simply combustible or flammable under ordinary conditions, but become explosive in specific situations or forms, such as dispersed airborne clouds, or confinement or sudden release.
History
Early thermal weapons, such as Greek fire, have existed since ancient times. At its roots, the history of chemical explosives lies in the history of gunpowder. During the Tang dynasty in the 9th century, Taoist Chinese alchemists were eagerly trying to find the elixir of immortality. In the process, they stumbled upon the explosive invention of black powder, made from charcoal, saltpeter, and sulfur, in 1044. Gunpowder was the first form of chemical explosives, and by 1161 the Chinese were using explosives for the first time in warfare. The Chinese incorporated explosives fired from bamboo or bronze tubes known as bamboo firecrackers. The Chinese also inserted live rats inside the bamboo firecrackers; when fired toward the enemy, the flaming rats created great psychological ramifications—scaring enemy soldiers away and causing cavalry units to go wild.
The first useful explosive stronger than black powder was nitroglycerin, developed in 1847. Since nitroglycerin is a liquid and highly unstable, it was replaced in practice by nitrocellulose, trinitrotoluene (TNT) in 1863, smokeless powder, dynamite in 1867, and gelignite (the latter two being sophisticated stabilized preparations of nitroglycerin rather than chemical alternatives, both invented by Alfred Nobel). World War I saw the adoption of TNT in artillery shells. World War II saw extensive use of new explosives.
In turn, these have largely been replaced by more powerful explosives such as C-4 and PETN. However, C-4 and PETN react with metal and catch fire easily, yet unlike TNT, C-4 and PETN are waterproof and malleable.
Applications
Commercial
The largest commercial application of explosives is mining. Whether the mine is on the surface or is buried underground, the detonation or deflagration of either a high or low explosive in a confined space can be used to liberate a fairly specific sub-volume of a brittle material (rock) in a much larger volume of the same or similar material. The mining industry tends to use nitrate-based explosives such as emulsions of fuel oil and ammonium nitrate solutions, mixtures of ammonium nitrate prills (fertilizer pellets) and fuel oil (ANFO) and gelatinous suspensions or slurries of ammonium nitrate and combustible fuels.
In materials science and engineering, explosives are used in cladding (explosion welding). A thin plate of some material is placed atop a thick layer of a different material, both layers typically of metal. Atop the thin layer is placed an explosive. At one end of the layer of explosive, the explosion is initiated. The two metallic layers are forced together at high speed and with great force. The explosion spreads from the initiation site throughout the explosive. Ideally, this produces a metallurgical bond between the two layers.
As the length of time the shock wave spends at any point is small, mixing of the two metals and their surface chemistries occurs through some fraction of the depth, and they tend to be mixed in some way. It is possible that some fraction of the surface material from either layer is eventually ejected when the end of the material is reached. Hence, the mass of the now "welded" bilayer may be less than the sum of the masses of the two initial layers.
There are applications where a shock wave and electrostatics can result in high-velocity projectiles, such as in an electrostatic particle accelerator.
Military
Civilian
Safety
Types
Chemical
An explosion is a type of spontaneous chemical reaction that, once initiated, is driven by both a large exothermic change (great release of heat) and a large positive entropy change (great quantities of gases are released) in going from reactants to products, thereby constituting a thermodynamically favorable process in addition to one that propagates very rapidly. Thus, explosives are substances that contain a large amount of energy stored in chemical bonds. The energetic stability of the gaseous products and hence their generation comes from the formation of strongly bonded species like carbon monoxide, carbon dioxide, and nitrogen gas, which contain strong double and triple bonds having bond strengths of nearly 1 MJ/mole. Consequently, most commercial explosives are organic compounds containing –NO2, –ONO2 and –NHNO2 groups that, when detonated, release gases like the aforementioned (e.g., nitroglycerin, TNT, HMX, PETN, nitrocellulose).
An explosive is classified as a low or high explosive according to its rate of combustion: low explosives burn rapidly (or deflagrate), while high explosives detonate. While these definitions are distinct, the problem of precisely measuring rapid decomposition makes practical classification of explosives difficult. For a reaction to be classified as a detonation as opposed to just a deflagration, the propagation of the reaction shockwave through the material being tested must be faster than the speed of sound through that material. The speed of sound through a liquid or solid material is usually orders of magnitude faster than the speed of sound through air or other gases.
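The detonation criterion described above reduces to a single comparison between the speed of the reaction front and the speed of sound in the material. A minimal Python sketch, using illustrative numbers only rather than measured data:

    # Detonation vs. deflagration: a reaction front faster than the local
    # speed of sound in the material is a detonation.
    def classify_reaction(front_speed_ms, sound_speed_in_material_ms):
        if front_speed_ms > sound_speed_in_material_ms:
            return "detonation (high explosive)"
        return "deflagration (low explosive)"

    # Illustrative values: a ~6,900 m/s front (the TNT figure quoted later
    # in this article) against an assumed ~1,500 m/s material sound speed.
    print(classify_reaction(6_900, 1_500))   # detonation (high explosive)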
Traditional explosives mechanics is based on the shock-sensitive rapid oxidation of carbon and hydrogen to carbon dioxide, carbon monoxide and water in the form of steam. Nitrates typically provide the required oxygen to burn the carbon and hydrogen fuel. High explosives tend to have the oxygen, carbon and hydrogen contained in one organic molecule, and less sensitive explosives like ANFO are combinations of fuel (carbon and hydrogen fuel oil) and ammonium nitrate. A sensitizer such as powdered aluminum may be added to an explosive to increase the energy of the detonation. Once detonated, the nitrogen portion of the explosive formulation emerges as nitrogen gas and toxic nitric oxides.
Decomposition
The chemical decomposition of an explosive may take years, days, hours, or a fraction of a second. The slower processes of decomposition take place in storage and are of interest only from a stability standpoint. Of more interest are the other two rapid forms besides decomposition: deflagration and detonation.
Deflagration
In deflagration, decomposition of the explosive material is propagated by a flame front which moves relatively slowly through the explosive material, at speeds less than the speed of sound within the substance (itself usually well above the 340 m/s speed of sound in air for most liquid or solid materials), in contrast to detonation, which occurs at speeds greater than the speed of sound. Deflagration is a characteristic of low explosive material.
Detonation
This term is used to describe an explosive phenomenon whereby the decomposition is propagated by a shock wave traversing the explosive material at speeds greater than the speed of sound within the substance. The shock front passes through the high explosive material at supersonic speeds, typically thousands of metres per second.
Exotic
In addition to chemical explosives, there are a number of more exotic explosive materials, and exotic methods of causing explosions. Examples include nuclear explosives, and abruptly heating a substance to a plasma state with a high-intensity laser or electric arc.
Laser- and arc-heating are used in laser detonators, exploding-bridgewire detonators, and exploding foil initiators, where a shock wave and then detonation in conventional chemical explosive material is created by laser- or electric-arc heating. Laser and electric energy are not currently used in practice to generate most of the required energy, but only to initiate reactions.
Properties
To determine the suitability of an explosive substance for a particular use, its physical properties must first be known. The usefulness of an explosive can only be appreciated when the properties and the factors affecting them are fully understood. Some of the more important characteristics are listed below:
Sensitivity
Sensitivity refers to the ease with which an explosive can be ignited or detonated, i.e., the amount and intensity of shock, friction, or heat that is required. When the term sensitivity is used, care must be taken to clarify what kind of sensitivity is under discussion. The relative sensitivity of a given explosive to impact may vary greatly from its sensitivity to friction or heat. Some of the test methods used to determine sensitivity relate to:
Impact – Sensitivity is expressed in terms of the distance through which a standard weight must be dropped onto the material to cause it to explode.
Friction – Sensitivity is expressed in terms of the amount of pressure applied to the material in order to create enough friction to cause a reaction.
Heat – Sensitivity is expressed in terms of the temperature at which decomposition of the material occurs.
Specific explosives (usually but not always highly sensitive on one or more of the three above axes) may be idiosyncratically sensitive to such factors as pressure drop, acceleration, the presence of sharp edges or rough surfaces, incompatible materials, or, in rare cases, nuclear or electromagnetic radiation. These factors present special hazards that may rule out any practical utility.
Sensitivity is an important consideration in selecting an explosive for a particular purpose. The explosive in an armor-piercing projectile must be relatively insensitive, or the shock of impact would cause it to detonate before it penetrated to the point desired. The explosive lenses around nuclear charges are also designed to be highly insensitive, to minimize the risk of accidental detonation.
Sensitivity to initiation
The index of the capacity of an explosive to be initiated into detonation in a sustained manner. It is defined by the power of the detonator which is certain to prime the explosive to a sustained and continuous detonation. Reference is made to the Sellier-Bellot scale, which consists of a series of 10 detonators, from No. 1 to No. 10, each of which corresponds to an increasing charge weight. In practice, most of the explosives on the market today are sensitive to a No. 8 detonator, where the charge corresponds to 2 grams of mercury fulminate.
Velocity of detonation
The velocity with which the reaction process propagates through the mass of the explosive. Most commercial mining explosives have detonation velocities ranging from 1,800 m/s to 8,000 m/s. Today, velocity of detonation can be measured with high accuracy. Together with density, it is an important element influencing the yield of the energy transmitted, for both atmospheric over-pressure and ground acceleration. By definition, a "low explosive", such as black powder or smokeless gunpowder, has a burn rate of 171–631 m/s. In contrast, a "high explosive", whether a primary, such as detonating cord, or a secondary, such as TNT or C-4, has a significantly higher burn rate of about 6,900–8,092 m/s.
Stability
Stability is the ability of an explosive to be stored without deterioration.
The following factors affect the stability of an explosive:
Chemical constitution. In the strictest technical sense, the word "stability" is a thermodynamic term referring to the energy of a substance relative to a reference state or to some other substance. However, in the context of explosives, stability commonly refers to ease of detonation, which is concerned with chemical kinetics (i.e., rate of decomposition). It is perhaps best, then, to differentiate between the terms thermodynamically stable and kinetically stable by referring to the former as "inert." Contrarily, a kinetically unstable substance is said to be "labile." It is generally recognized that certain groups like nitro (–NO2), nitrate (–ONO2), and azide (–N3), are intrinsically labile. Kinetically, there exists a low activation barrier to the decomposition reaction. Consequently, these compounds exhibit high sensitivity to flame or mechanical shock. The chemical bonding in these compounds is characterized as predominantly covalent and thus they are not thermodynamically stabilized by a high ionic-lattice energy. Furthermore, they generally have positive enthalpies of formation and there is little mechanistic hindrance to internal molecular rearrangement to yield the more thermodynamically stable (more strongly bonded) decomposition products. For example, in lead azide, Pb(N3)2, the nitrogen atoms are already bonded to one another, so decomposition into Pb and N2[1] is relatively easy.
Temperature of storage. The rate of decomposition of explosives increases at higher temperatures. All standard military explosives may be considered to have a high degree of stability at temperatures from –10 to +35 °C, but each has a high temperature at which its rate of thermal decomposition rapidly accelerates and stability is reduced. As a rule of thumb, most explosives become dangerously unstable at temperatures above 70 °C.
Exposure to sunlight. When exposed to the ultraviolet rays of sunlight, many explosive compounds containing nitrogen groups rapidly decompose, affecting their stability.
Electrical discharge. Electrostatic or spark sensitivity to initiation is common in a number of explosives. Static or other electrical discharge may be sufficient to cause a reaction, even detonation, under some circumstances. As a result, safe handling of explosives and pyrotechnics usually requires proper electrical grounding of the operator.
Power, performance, and strength
The term power or performance as applied to an explosive refers to its ability to do work. In practice it is defined as the explosive's ability to accomplish what is intended in the way of energy delivery (i.e., fragment projection, air blast, high-velocity jet, underwater shock and bubble energy, etc.). Explosive power or performance is evaluated by a tailored series of tests to assess the material for its intended use. Of the tests listed below, cylinder expansion and air-blast tests are common to most testing programs, and the others support specific applications.
Cylinder expansion test. A standard amount of explosive is loaded into a long hollow cylinder, usually of copper, and detonated at one end. Data is collected concerning the rate of radial expansion of the cylinder and the maximum cylinder wall velocity. This also establishes the Gurney energy or 2E.
Cylinder fragmentation. A standard steel cylinder is loaded with explosive and detonated in a sawdust pit. The fragments are collected and the size distribution analyzed.
Detonation pressure (Chapman–Jouguet condition). Detonation pressure data derived from measurements of shock waves transmitted into water by the detonation of cylindrical explosive charges of a standard size.
Determination of critical diameter. This test establishes the minimum physical size a charge of a specific explosive must be to sustain its own detonation wave. The procedure involves the detonation of a series of charges of different diameters until difficulty in detonation wave propagation is observed.
Massive-diameter detonation velocity. Detonation velocity is dependent on loading density (c), charge diameter, and grain size. The hydrodynamic theory of detonation used in predicting explosive phenomena does not include the diameter of the charge, and therefore describes the detonation velocity of a charge of massive (effectively unbounded) diameter. This procedure requires the firing of a series of charges of the same density and physical structure, but different diameters, and the extrapolation of the resulting detonation velocities to predict the detonation velocity of a charge of a massive diameter.
Pressure versus scaled distance. A charge of a specific size is detonated and its pressure effects measured at a standard distance. The values obtained are compared with those for TNT.
Impulse versus scaled distance. A charge of a specific size is detonated and its impulse (the area under the pressure-time curve) measured as a function of distance. The results are tabulated and expressed as TNT equivalents.
Relative bubble energy (RBE). A 5 to 50 kg charge is detonated in water and piezoelectric gauges measure peak pressure, time constant, impulse, and energy.
The RBE may be defined as

RBE = (Kx / Ks)³

where K is the bubble expansion period for an experimental (x) or a standard (s) charge.
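Computationally this is a one-liner; the following Python sketch uses made-up bubble periods purely for illustration:

    # Relative bubble energy: RBE = (Kx / Ks)**3.
    def relative_bubble_energy(k_experimental, k_standard):
        return (k_experimental / k_standard) ** 3

    # Illustrative periods (e.g., in milliseconds); only their ratio matters.
    print(relative_bubble_energy(105.0, 100.0))   # ~1.158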
Brisance
In addition to strength, explosives display a second characteristic, which is their shattering effect or brisance (from the French briser, "to break"). Brisance is important in determining the effectiveness of an explosion in fragmenting shells, bomb casings, and grenades. The rapidity with which an explosive reaches its peak pressure (power) is a measure of its brisance. Brisance values are primarily employed in France and Russia.
The sand crush test is commonly employed to determine the relative brisance in comparison to TNT. No test is capable of directly comparing the explosive properties of two or more compounds; it is important to examine the data from several such tests (sand crush, trauzl, and so forth) in order to gauge relative brisance. True values for comparison require field experiments.
Density
Density of loading refers to the mass of an explosive per unit volume. Several methods of loading are available, including pellet loading, cast loading, and press loading, the choice being determined by the characteristics of the explosive. Dependent upon the method employed, an average density of the loaded charge can be obtained that is within 80–99% of the theoretical maximum density of the explosive. High load density can reduce sensitivity by making the mass more resistant to internal friction. However, if density is increased to the extent that individual crystals are crushed, the explosive may become more sensitive. Increased load density also permits the use of more explosive, thereby increasing the power of the warhead. It is possible to compress an explosive beyond a point of sensitivity, known also as dead-pressing, in which the material is no longer capable of being reliably initiated, if at all.
Volatility
Volatility is the readiness with which a substance vaporizes. Excessive volatility often results in the development of pressure within rounds of ammunition and separation of mixtures into their constituents. Volatility affects the chemical composition of the explosive such that a marked reduction in stability may occur, which results in an increase in the danger of handling.
Hygroscopicity and water resistance
The introduction of water into an explosive is highly undesirable since it reduces the sensitivity, strength, and velocity of detonation of the explosive. Hygroscopicity is a measure of a material's moisture-absorbing tendencies. Moisture affects explosives adversely by acting as an inert material that absorbs heat when vaporized, and by acting as a solvent medium that can cause undesired chemical reactions. Sensitivity, strength, and velocity of detonation are reduced by inert materials that reduce the continuity of the explosive mass. When the moisture content evaporates during detonation, cooling occurs, which reduces the temperature of reaction. Stability is also affected by the presence of moisture since moisture promotes decomposition of the explosive and, in addition, causes corrosion of the explosive's metal container.
Explosives considerably differ from one another as to their behavior in the presence of water. Gelatin dynamites containing nitroglycerine have a degree of water resistance. Explosives based on ammonium nitrate have little or no water resistance as ammonium nitrate is highly soluble in water and is hygroscopic.
Toxicity
Many explosives are toxic to some extent. Manufacturing inputs can also be organic compounds or hazardous materials that require special handling due to risks (such as carcinogens). The decomposition products, residual solids, or gases of some explosives can be toxic, whereas others are harmless, such as carbon dioxide and water.
Examples of harmful by-products are:
Heavy metals, such as lead, mercury, and barium from primers (observed in high-volume firing ranges)
Nitric oxides from TNT
Perchlorates when used in large quantities
"Green explosives" seek to reduce environment and health impacts. An example of such is the lead-free primary explosive copper(I) 5-nitrotetrazolate, an alternative to lead azide.
Explosive train
Explosive material may be incorporated in the explosive train of a device or system. An example is a pyrotechnic lead igniting a booster, which causes the main charge to detonate.
Volume of products of explosion
The most widely used explosives are condensed liquids or solids converted to gaseous products by explosive chemical reactions and the energy released by those reactions. The gaseous products of complete reaction are typically carbon dioxide, steam, and nitrogen. Gaseous volumes computed by the ideal gas law tend to be too large at high pressures characteristic of explosions. Ultimate volume expansion may be estimated at three orders of magnitude, or one liter per gram of explosive. Explosives with an oxygen deficit will generate soot or gases like carbon monoxide and hydrogen, which may react with surrounding materials such as atmospheric oxygen. Attempts to obtain more precise volume estimates must consider the possibility of such side reactions, condensation of steam, and aqueous solubility of gases like carbon dioxide.
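As a rough worked example of the ideal-gas estimate mentioned above, the Python sketch below assumes an illustrative 0.03 mol of gaseous product per gram of explosive; that figure is an assumption for the example, not a value from this article:

    # Ideal-gas estimate of product gas volume, V = nRT/P. As noted above,
    # this is a loose bound at the high pressures typical of explosions.
    R = 8.314      # gas constant, J/(mol*K)

    def gas_volume_litres(moles_gas, temperature_k=273.15, pressure_pa=101_325):
        return moles_gas * R * temperature_k / pressure_pa * 1000.0

    # ~0.03 mol of gas per gram is an illustrative figure; at standard
    # conditions it yields roughly the liter-per-gram scale quoted above.
    print(gas_volume_litres(0.03))   # ~0.67 L per gram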
Oxygen balance (OB% or Ω)
Oxygen balance is an expression that is used to indicate the degree to which an explosive can be oxidized. If an explosive molecule contains just enough oxygen to convert all of its carbon to carbon dioxide, all of its hydrogen to water, and all of its metal to metal oxide with no excess, the molecule is said to have a zero oxygen balance. The molecule is said to have a positive oxygen balance if it contains more oxygen than is needed and a negative oxygen balance if it contains less oxygen than is needed. The sensitivity, strength, and brisance of an explosive are all somewhat dependent upon oxygen balance and tend to approach their maxima as oxygen balance approaches zero.
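The article describes oxygen balance qualitatively; one commonly used form of the formula (an assumption here, since it is not given above) is OB% = (−1600/MW)·(2·nC + nH/2 + nMetal − nO). A Python sketch applying it to TNT:

    # Common oxygen-balance formula (assumed form; nitrogen leaves as N2
    # and needs no oxygen, so it does not appear in the expression).
    def oxygen_balance_percent(mol_weight, n_carbon, n_hydrogen, n_oxygen,
                               n_metal=0):
        return -1600.0 / mol_weight * (2 * n_carbon + n_hydrogen / 2.0
                                       + n_metal - n_oxygen)

    # TNT, C7H5N3O6, MW ~227.13 g/mol:
    print(oxygen_balance_percent(227.13, 7, 5, 6))   # ~-74% (oxygen-deficient)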
Chemical composition
A chemical explosive may consist of either a chemically pure compound, such as nitroglycerin, or a mixture of a fuel and an oxidizer, such as black powder or grain dust and air.
Pure compounds
Some chemical compounds are unstable in that, when shocked, they react, possibly to the point of detonation. Each molecule of the compound dissociates into two or more new molecules (generally gases) with the release of energy.
Nitroglycerin: A highly unstable and sensitive liquid
Acetone peroxide: A very unstable white organic peroxide
TNT: Yellow insensitive crystals that can be melted and cast without detonation
Cellulose nitrate: A nitrated polymer which can be a high or low explosive depending on nitration level and conditions
RDX, PETN, HMX: Very powerful explosives which can be used pure or in plastic explosives
C-4 (or Composition C-4): An RDX plastic explosive plasticized to be adhesive and malleable
The above compositions may describe most of the explosive material, but a practical explosive will often include small percentages of other substances. For example, dynamite is a mixture of highly sensitive nitroglycerin with sawdust, powdered silica, or most commonly diatomaceous earth, which act as stabilizers. Plastics and polymers may be added to bind powders of explosive compounds; waxes may be incorporated to make them safer to handle; aluminium powder may be introduced to increase total energy and blast effects. Explosive compounds are also often "alloyed": HMX or RDX powders may be mixed (typically by melt-casting) with TNT to form Octol or Cyclotol.
Oxidized fuel
An oxidizer is a pure substance (molecule) that in a chemical reaction can contribute some atoms of one or more oxidizing elements, in which the fuel component of the explosive burns. On the simplest level, the oxidizer may itself be an oxidizing element, such as gaseous or liquid oxygen.
Black powder: Potassium nitrate, charcoal and sulfur
Flash powder: Fine metal powder (usually aluminium or magnesium) and a strong oxidizer (e.g. potassium chlorate or perchlorate)
Ammonal: Ammonium nitrate and aluminium powder
Armstrong's mixture: Potassium chlorate and red phosphorus. This is a very sensitive mixture. It is a primary high explosive in which sulfur is substituted for some or all of the phosphorus to slightly decrease sensitivity.
Sprengel explosives: A very general class incorporating any strong oxidizer and highly reactive fuel, although in practice the name was most commonly applied to mixtures of chlorates and nitroaromatics.
ANFO: Ammonium nitrate and fuel oil
Cheddites: Chlorates or perchlorates and oil
Oxyliquits: Mixtures of organic materials and liquid oxygen
Panclastites: Mixtures of organic materials and dinitrogen tetroxide
Availability and cost
The availability and cost of explosives are determined by the availability of the raw materials and the cost, complexity, and safety of the manufacturing operations.
Classification
By sensitivity
Primary
A primary explosive is an explosive that is extremely sensitive to stimuli such as impact, friction, heat, static electricity, or electromagnetic radiation. Some primary explosives are also known as contact explosives. A relatively small amount of energy is required for initiation. As a very general rule, primary explosives are considered to be those compounds that are more sensitive than PETN. As a practical measure, primary explosives are sufficiently sensitive that they can be reliably initiated with a blow from a hammer; however, PETN can also usually be initiated in this manner, so this is only a very broad guideline. Additionally, several compounds, such as nitrogen triiodide, are so sensitive that they cannot even be handled without detonating. Nitrogen triiodide is so sensitive that it can be reliably detonated by exposure to alpha radiation.
Primary explosives are often used in detonators or to trigger larger charges of less sensitive secondary explosives. Primary explosives are commonly used in blasting caps and percussion caps to translate a physical shock signal. In other situations, different signals such as electrical or physical shock, or, in the case of laser detonation systems, light, are used to initiate an action, i.e., an explosion. A small quantity, usually milligrams, is sufficient to initiate a larger charge of explosive that is usually safer to handle.
Examples of primary high explosives are:
Acetone peroxide
Alkali metal ozonides
Ammonium permanganate
Ammonium chlorate
Azidotetrazolates
Azoclathrates
Benzoyl peroxide
Benzvalene
3,5-Bis(trinitromethyl)tetrazole
Chlorine oxides
Copper(I) acetylide
Copper(II) azide
Cumene hydroperoxide
Cycloprop(-2-)enyl nitrate (CXP or CPN)
Cyanogen azide
Cyanuric triazide
Diacetyl peroxide
1-Diazidocarbamoyl-5-azidotetrazole
Diazodinitrophenol
Diazomethane
Diethyl ether peroxide
4-Dimethylaminophenylpentazole
Disulfur dinitride
Ethyl azide
Explosive antimony
Fluorine perchlorate
Fulminic acid
Halogen azides:
Fluorine azide
Chlorine azide
Bromine azide
Iodine azide
Hexamethylene triperoxide diamine
Hydrazoic acid
Hypofluorous acid
Lead azide
Lead styphnate
Lead picrate
Manganese heptoxide
Mercury(II) fulminate
Mercury nitride
Methyl ethyl ketone peroxide
Nickel hydrazine nitrate
Nickel hydrazine perchlorate
Nitrogen trihalides:
Nitrogen trichloride
Nitrogen tribromide
Nitrogen triiodide
Nitroglycerin
Nitronium perchlorate
Nitrosyl perchlorate
Nitrotetrazolate-N-oxides
Pentazenium hexafluoroarsenate
Peroxy acids
Peroxymonosulfuric acid
Selenium tetraazide
Silicon tetraazide
Silver azide
Silver acetylide
Silver fulminate
Silver nitride
Tellurium tetraazide
tert-Butyl hydroperoxide
Tetraamine copper complexes
Tetraazidomethane
Tetrazene explosive
Tetrazoles
Titanium tetraazide
Triazidomethane
Oxides of xenon:
Xenon dioxide
Xenon oxytetrafluoride
Xenon tetroxide
Xenon trioxide
Secondary
A secondary explosive is less sensitive than a primary explosive and requires substantially more energy to be initiated. Because they are less sensitive, they are usable in a wider variety of applications and are safer to handle and store. Secondary explosives are used in larger quantities in an explosive train and are usually initiated by a smaller quantity of a primary explosive.
Examples of secondary explosives include TNT and RDX.
Tertiary
Tertiary explosives, also called blasting agents, are so insensitive to shock that they cannot be reliably detonated by practical quantities of primary explosive, and instead require an intermediate explosive booster of secondary explosive. These are often used for safety and the typically lower costs of material and handling. The largest consumers are large-scale mining and construction operations.
Most tertiaries include a fuel and an oxidizer. ANFO can be a tertiary explosive if its reaction rate is slow.
By velocity
Low
Low explosives (or low-order explosives) are compounds wherein the rate of decomposition proceeds through the material at less than the speed of sound. The decomposition is propagated by a flame front (deflagration) which travels much more slowly through the explosive material than a shock wave of a high explosive. Under normal conditions, low explosives undergo deflagration at rates that vary from a few centimetres per second to several hundred metres per second. It is possible for them to deflagrate very quickly, producing an effect similar to a detonation. This can happen under higher pressure (such as when gunpowder deflagrates inside the confined space of a bullet casing, accelerating the bullet to well beyond the speed of sound) or temperature.
A low explosive is usually a mixture of a combustible substance and an oxidant that decomposes rapidly (deflagration); however, they burn more slowly than a high explosive, which has an extremely fast burn rate.
Low explosives are normally employed as propellants. Included in this group are petroleum products such as propane and gasoline, gunpowder (including smokeless powder), and light pyrotechnics, such as flares and fireworks. Low explosives can also replace high explosives in certain applications, including gas pressure blasting.
High
High explosives (HE, or high-order explosives) are explosive materials that detonate, meaning that the explosive shock front passes through the material at a supersonic speed. High explosives detonate with explosive velocities of roughly 3 to 9 km/s. For instance, TNT has a detonation (burn) rate of approximately 6.9 km/s (22,600 feet per second), detonating cord of 6.7 km/s (22,000 feet per second), and C-4 about 8.0 km/s (26,000 feet per second). They are normally employed in mining, demolition, and military applications. The term high explosive is in contrast with the term low explosive, which explodes (deflagrates) at a lower rate.
High explosives can be divided into two explosives classes differentiated by sensitivity: primary explosive and secondary explosive. Although tertiary explosives (such as ANFO at 3,200 m/s) can technically meet the explosive velocity definition, they are not considered high explosives in regulatory contexts.
Countless high-explosive compounds are chemically possible, but commercially and militarily important ones have included NG, TNT, TNP, TNX, RDX, HMX, PETN, TATP, TATB, and HNS.
By physical form
Explosives are often characterized by the physical form that the explosives are produced or used in. These use forms are commonly categorized as:
Pressings
Castings
Plastic or polymer bonded
Plastic explosives, a.k.a. putties
Rubberized
Extrudable
Binary
Blasting agents
Slurries and gels
Dynamites
Shipping label classifications
Shipping labels and tags may include both United Nations and national markings.
United Nations markings include numbered Hazard Class and Division (HC/D) codes and alphabetic Compatibility Group codes. Though the two are related, they are separate and distinct. Any Compatibility Group designator can be assigned to any Hazard Class and Division. An example of this hybrid marking would be a consumer firework, which is labeled as 1.4G or 1.4S.
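Because these markings follow a fixed pattern (the numeric division followed by a compatibility letter), they are easy to parse mechanically. A minimal Python sketch, with the regular expression kept deliberately loose (it also admits unassigned letters such as I and M):

    # Split a UN Class 1 marking such as "1.4G" into its Hazard
    # Class/Division and Compatibility Group parts.
    import re

    def parse_un_marking(marking):
        match = re.fullmatch(r"(\d\.\d)([A-NS])", marking)
        if not match:
            raise ValueError(f"not a Class 1 marking: {marking!r}")
        hazard_division, compatibility_group = match.groups()
        return hazard_division, compatibility_group

    print(parse_un_marking("1.4G"))   # ('1.4', 'G')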
Examples of national markings would include United States Department of Transportation (U.S. DOT) codes.
United Nations (UN) GHS Hazard Class and Division
The UN GHS Hazard Class and Division (HC/D) is a numeric designator within a hazard class indicating the character, predominance of associated hazards, and potential for causing personnel casualties and property damage. It is an internationally accepted system that communicates, using a minimum of markings, the primary hazard associated with a substance.
Listed below are the Divisions for Class 1 (Explosives):
1.1 Mass Detonation Hazard. With HC/D 1.1, it is expected that if one item in a container or pallet inadvertently detonates, the explosion will sympathetically detonate the surrounding items. The explosion could propagate to all or the majority of the items stored together, causing a mass detonation. There will also be fragments from the item's casing and/or structures in the blast area.
1.2 Non-mass explosion, fragment-producing. HC/D 1.2 is further divided into three subdivisions, HC/D 1.2.1, 1.2.2 and 1.2.3, to account for the magnitude of the effects of an explosion.
1.3 Mass fire, minor blast or fragment hazard. Propellants and many pyrotechnic items fall into this category. If one item in a package or stack initiates, it will usually propagate to the other items, creating a mass fire.
1.4 Moderate fire, no blast or fragment. HC/D 1.4 items are listed in the table as explosives with no significant hazard. Most small arms ammunition (including loaded weapons) and some pyrotechnic items fall into this category. If the energetic material in these items inadvertently initiates, most of the energy and fragments will be contained within the storage structure or the item containers themselves.
1.5 Mass detonation hazard, very insensitive.
1.6 Detonation hazard without mass detonation hazard, extremely insensitive.
To see an entire UNO Table, browse Paragraphs 3–8 and 3–9 of NAVSEA OP 5, Vol. 1, Chapter 3.
Class 1 Compatibility Group
Compatibility Group codes are used to indicate storage compatibility for HC/D Class 1 (explosive) materials. Letters are used to designate 13 compatibility groups as follows.
A: Primary explosive substance (1.1A).
B: An article containing a primary explosive substance and not containing two or more effective protective features. Some articles, such as detonator assemblies for blasting and primers, cap-type, are included. (1.1B, 1.2B, 1.4B).
C: Propellant explosive substance or other deflagrating explosive substance or article containing such explosive substance (1.1C, 1.2C, 1.3C, 1.4C). These are bulk propellants, propelling charges, and devices containing propellants with or without means of ignition. Examples include single-based propellant, double-based propellant, triple-based propellant, and composite propellants, solid propellant rocket motors and ammunition with inert projectiles.
D: Secondary detonating explosive substance or black powder or article containing a secondary detonating explosive substance, in each case without means of initiation and without a propelling charge, or article containing a primary explosive substance and containing two or more effective protective features. (1.1D, 1.2D, 1.4D, 1.5D).
E: Article containing a secondary detonating explosive substance without means of initiation, with a propelling charge (other than one containing flammable liquid, gel or hypergolic liquid) (1.1E, 1.2E, 1.4E).
F: Article containing a secondary detonating explosive substance with its means of initiation, with a propelling charge (other than one containing flammable liquid, gel or hypergolic liquid) or without a propelling charge (1.1F, 1.2F, 1.3F, 1.4F).
G: Pyrotechnic substance or article containing a pyrotechnic substance, or article containing both an explosive substance and an illuminating, incendiary, tear-producing or smoke-producing substance (other than a water-activated article or one containing white phosphorus, phosphide or flammable liquid or gel or hypergolic liquid) (1.1G, 1.2G, 1.3G, 1.4G). Examples include Flares, signals, incendiary or illuminating ammunition and other smoke and tear producing devices.
H: Article containing both an explosive substance and white phosphorus (1.2H, 1.3H). These articles will spontaneously combust when exposed to the atmosphere.
J: Article containing both an explosive substance and flammable liquid or gel (1.1J, 1.2J, 1.3J). This excludes liquids or gels which are spontaneously flammable when exposed to water or the atmosphere, which belong in group H. Examples include liquid or gel filled incendiary ammunition, fuel-air explosive (FAE) devices, and flammable liquid fueled missiles.
K: Article containing both an explosive substance and a toxic chemical agent (1.2K, 1.3K)
L: Explosive substance or article containing an explosive substance and presenting a special risk (e.g., due to water-activation or presence of hypergolic liquids, phosphides, or pyrophoric substances) needing isolation of each type (1.1L, 1.2L, 1.3L). Damaged or suspect ammunition of any group belongs in this group.
N: Articles containing only extremely insensitive detonating substances (1.6N).
S: Substance or article so packed or designed that any hazardous effects arising from accidental functioning are limited to the extent that they do not significantly hinder or prohibit fire fighting or other emergency response efforts in the immediate vicinity of the package (1.4S).
Regulation
The legality of possessing or using explosives varies by jurisdiction. Various countries around the world have enacted explosives laws and require licenses to manufacture, distribute, store, use, or possess explosives or their ingredients.
Netherlands
In the Netherlands, the civil and commercial use of explosives is covered under the Wet explosieven voor civiel gebruik (explosives for civil use Act), in accordance with EU directive nr. 93/15/EEG (Dutch). The illegal use of explosives is covered under the Wet Wapens en Munitie (Weapons and Munition Act) (Dutch).
United Kingdom
The new Explosives Regulations 2014 (ER 2014) came into force on 1 October 2014 and defines "explosive" as:
United States
During World War I, numerous laws were created to regulate war related industries and increase security within the United States. In 1917, the 65th United States Congress created many laws, including the Espionage Act of 1917 and Explosives Act of 1917.
The Explosives Act of 1917 (session 1, chapter 83) was signed on 6 October 1917 and went into effect on 16 November 1917. The legal summary is "An Act to prohibit the manufacture, distribution, storage, use, and possession in time of war of explosives, providing regulations for the safe manufacture, distribution, storage, use, and possession of the same, and for other purposes". This was the first federal regulation of licensing explosives purchases. The act was deactivated after World War I ended.
After the United States entered World War II, the Explosives Act of 1917 was reactivated. In 1947, the act was deactivated by President Truman.
The Organized Crime Control Act of 1970 transferred many explosives regulations to the Bureau of Alcohol, Tobacco and Firearms (ATF) of the Department of the Treasury. The bill became effective in 1971.
Currently, regulations are governed by Title 18 of the United States Code and Title 27 of the Code of Federal Regulations:
"Importation, Manufacture, Distribution and Storage of Explosive Materials" (18 U.S.C. Chapter 40).
"Commerce in Explosives" (27 C.F.R. Chapter II, Part 555).
List of explosives
Compounds
Acetylides
Copper(I) acetylide, Dichloroacetylene, Silver acetylide
Fulminates
Fulminic Acid, Fulminating Gold, Mercury(II) fulminate, Platinum fulminate, Potassium fulminate, Silver fulminate
Nitro
MonoNitro: Nitroguanidine, Nitroethane, Nitromethane, Nitropropane, Nitrourea
DiNitro: Diazo dinitro phenol, Dinitrobenzene, Dinitroethylene urea, DNN, Dinitrophenol, Dinitrophenolate, DNPH, Dinitroresorcinol, Dinitropentano nitrile, Polydinitropropyl acrylate, Dinitro cerine, Dipicryl sulfone, Dipicrylamine, EDNP, KDNBF, BEAF, DADNE
TriNitro: RDX, Diaminotrinitrobenzene, Triaminotrinitrobenzene, Lead styphnate, Lead picrate, Trinitroaniline, Trinitroanisole, TNAS, TNB, TNBA, Styphnic acid, MC, Trinitroethyl formal, TNOC, TNOF, TNP, TNT, TNN, TNPG, TNR, BTNEN, BTNEC, Ammonium picrate, TNS
TetraNitro: Tetryl, HMX
HexaNitro: HNS, HNIW, HHTDD
HeptaNitro: Heptanitrocubane
OctaNitro: Octanitrocubane
Nitrosos
Tetranitrosos: R-salt
Nitrates
Mononitrates: Ammonium nitrate, Methyl ammonium nitrate, Urea Nitrate
Dinitrates: Diethyleneglycol dinitrate, Ethylenediamine dinitrate, Ethylene dinitramine, Ethylene glycol dinitrate, Hexamethylenetetramine dinitrate, Triethylene glycol dinitrate
Trinitrates: 1,2,4-Butanetriol trinitrate, Trimethylolethane trinitrate, Nitroglycerin
Tetranitrates: Erythritol tetranitrate, Pentaerythritol tetranitrate, Tetranitratoxycarbon
Pentanitrates: Xylitol pentanitrate
Polynitrates: Nitrocellulose, Nitrostarch, Mannitol hexanitrate
Amines
Tertiary Amines: Nitrogen tribromide, Nitrogen trichloride, Nitrogen triiodide, Nitrogen trisulfide, Selenium nitride, Silver nitride
Diamines: Disulfur dinitride
Tetramines: Tetrazene, Tetrazole, Azidoazide azide
Pentamines: Pentazenium
Octamines: Octaazacubane, 1,1'-Azobis-1,2,3-triazole
Azides
Inorganic: Chlorine azide, Copper(II) azide, Fluorine azide, Hydrazoic acid, Lead(II) azide, Silver azide, Sodium azide, Rubidium azide, Selenium tetraazide, Silicon tetraazide, Tellurium tetraazide, Titanium tetraazide
Organic: Cyanuric triazide, Cyanogen azide, Ethyl azide, Tetraazidomethane
Peroxides
Acetone peroxide (TATP), Cumene hydroperoxide, Diacetyl peroxide, Dibenzoyl peroxide, Diethyl ether peroxide, Hexamethylene triperoxide diamine, Methyl ethyl ketone peroxide, Tert-butyl hydroperoxide, Tetramethylene diperoxide dicarbamide
Oxides
Xenon oxytetrafluoride, Xenon dioxide, Xenon trioxide, Xenon tetroxide
Unsorted
Alkali metal Ozonides
Ammonium chlorate
Ammonium perchlorate
Ammonium permanganate
Azidotetrazolates
Azoclathrates
Benzvalene
Chlorine oxides
DMAPP
Fluorine perchlorate
Fulminating gold
Fulminating silver (several substances)
Hexafluoroantimonate
Hexafluoroarsenate
Hypofluorous acid
Manganese heptoxide
Mercury nitride
Nitronium perchlorate
Nitrotetrazolate-N-Oxides
Peroxy acids
Peroxymonosulfuric acid
Tetramine copper complexes
Tetrasulfur tetranitride
Mixtures
Aluminum Orphorite, Amatex, Amatol, Ammonal, Armstrong's mixture, ANFO, ANNMAL, Astrolite
Baranol, Baratol, Ballistite, Butyl tetryl
Carbonite, Composition A, Composition B, Composition C, Composition 1, Composition 2, Composition 3, Composition 4, Composition 5, Composition H6, Cordtex, Cyclotol
Danubit, Detasheet, Detonating cord, Dualin, Dunnite, Dynamite
Ecrasite, Ednatol
Flash powder
Gelignite, Gunpowder
Hexanite, Hydromite 600
Kinetite
Minol
Octol, Oxyliquit
Panclastite, Pentolite, Picratol, PNNM, Pyrotol
Schneiderite, Semtex, Shellite
Tannerite, Titadine, Tovex, Torpex, Tritonal
Elements and isotopes
Alkali metals
Explosive antimony
Plutonium-239
Uranium-235
| Technology | Energy | null |
10201 | https://en.wikipedia.org/wiki/Exothermic%20process | Exothermic process | In thermodynamics, an exothermic process is a thermodynamic process or reaction that releases energy from the system to its surroundings, usually in the form of heat, but also in the form of light (e.g. a spark, flame, or flash), electricity (e.g. a battery), or sound (e.g. the explosion heard when burning hydrogen). The term exothermic was first coined by 19th-century French chemist Marcellin Berthelot.
The opposite of an exothermic process is an endothermic process, one that absorbs energy, usually in the form of heat. The concept is frequently applied in the physical sciences to chemical reactions where chemical bond energy is converted to thermal energy (heat).
Two types of chemical reactions
Exothermic and endothermic describe two types of chemical reactions or systems found in nature, as follows:
Exothermic
An exothermic reaction occurs when heat is released to the surroundings. According to the IUPAC, an exothermic reaction is "a reaction for which the overall standard enthalpy change ΔH⚬ is negative". Some examples of exothermic processes are fuel combustion, condensation and nuclear fission, which is used in nuclear power plants to release large amounts of energy.
Endothermic
In an endothermic reaction or system, energy is taken from the surroundings in the course of the reaction, usually driven by a favorable entropy increase in the system. An example of an endothermic reaction is a first aid cold pack, in which the reaction of two chemicals, or dissolving of one in another, requires calories from the surroundings, and the reaction cools the pouch and surroundings by absorbing heat from them.
Photosynthesis, the process that allows plants to convert carbon dioxide and water to sugar and oxygen, is an endothermic process: plants absorb radiant energy from the sun and use it in an endothermic, otherwise non-spontaneous process. The chemical energy stored can be freed by the inverse (spontaneous) process: combustion of sugar, which gives carbon dioxide, water and heat (radiant energy).
Energy release
Exothermic refers to a transformation in which a closed system releases energy (heat) to the surroundings, expressed by Q < 0.
When the transformation occurs at constant pressure and without exchange of electrical energy, the heat is equal to the enthalpy change, i.e. Q = ΔH,
while at constant volume, according to the first law of thermodynamics it equals the internal energy (U) change, i.e. Q = ΔU.
In an adiabatic system (i.e. a system that does not exchange heat with the surroundings), an otherwise exothermic process results in an increase in temperature of the system.
In exothermic chemical reactions, the heat that is released by the reaction takes the form of electromagnetic energy or kinetic energy of molecules. The transition of electrons from one quantum energy level to another causes light to be released. This light is equivalent in energy to some of the stabilization energy of the chemical reaction, i.e. the bond energy. This light that is released can be absorbed by other molecules in solution to give rise to molecular translations and rotations, which gives rise to the classical understanding of heat. In an exothermic reaction, the activation energy (energy needed to start the reaction) is less than the energy that is subsequently released, so there is a net release of energy.
Examples
Some examples of exothermic processes are:
Combustion of fuels such as wood, coal and oil/petroleum
The thermite reaction
The reaction of alkali metals and other highly electropositive metals with water
Condensation of rain from water vapor
Mixing water and strong acids or strong bases
The reaction of acids and bases
Dehydration of carbohydrates by sulfuric acid
The setting of cement and concrete
Some polymerization reactions such as the setting of epoxy resin
The reaction of most metals with halogens or oxygen
Nuclear fusion in hydrogen bombs and in stellar cores (to iron)
Nuclear fission of heavy elements
The reaction between zinc and hydrochloric acid
Respiration (breaking down of glucose to release energy in cells)
Implications for chemical reactions
Chemical exothermic reactions are generally more spontaneous than their counterparts, endothermic reactions.
In a thermochemical reaction that is exothermic, the heat may be listed among the products of the reaction.
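For instance, the combustion of methane releases roughly 890 kJ per mole of methane burned (a commonly tabulated standard enthalpy of combustion), so the thermochemical equation can be written with the heat on the product side:

    CH4(g) + 2 O2(g) → CO2(g) + 2 H2O(l) + 890 kJ        (ΔH° ≈ −890 kJ/mol)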
| Physical sciences | Thermodynamics | Physics |
10225 | https://en.wikipedia.org/wiki/Elliptic%20curve | Elliptic curve | In mathematics, an elliptic curve is a smooth, projective, algebraic curve of genus one, on which there is a specified point O. An elliptic curve is defined over a field K and describes points in K², the Cartesian product of K with itself. If the field's characteristic is different from 2 and 3, then the curve can be described as a plane algebraic curve which consists of solutions (x, y) for:
y² = x³ + ax + b
for some coefficients a and b in K. The curve is required to be non-singular, which means that the curve has no cusps or self-intersections. (This is equivalent to the condition 4a³ + 27b² ≠ 0, that is, x³ + ax + b being square-free in x.) It is always understood that the curve is really sitting in the projective plane, with the point O being the unique point at infinity. Many sources define an elliptic curve to be simply a curve given by an equation of this form. (When the coefficient field has characteristic 2 or 3, the above equation is not quite general enough to include all non-singular cubic curves; see below.)
An elliptic curve is an abelian variety – that is, it has a group law defined algebraically, with respect to which it is an abelian group – and O serves as the identity element.
If y² = f(x), where f is any polynomial of degree three in x with no repeated roots, the solution set is a nonsingular plane curve of genus one, an elliptic curve. If f has degree four and is square-free this equation again describes a plane curve of genus one; however, it has no natural choice of identity element. More generally, any algebraic curve of genus one, for example the intersection of two quadric surfaces embedded in three-dimensional projective space, is called an elliptic curve, provided that it is equipped with a marked point to act as the identity.
Using the theory of elliptic functions, it can be shown that elliptic curves defined over the complex numbers correspond to embeddings of the torus into the complex projective plane. The torus is also an abelian group, and this correspondence is also a group isomorphism.
Elliptic curves are especially important in number theory, and constitute a major area of current research; for example, they were used in Andrew Wiles's proof of Fermat's Last Theorem. They also find applications in elliptic curve cryptography (ECC) and integer factorization.
An elliptic curve is not an ellipse in the sense of a projective conic, which has genus zero: see elliptic integral for the origin of the term. However, there is a natural representation of real elliptic curves with shape invariant as ellipses in the hyperbolic plane . Specifically, the intersections of the Minkowski hyperboloid with quadric surfaces characterized by a certain constant-angle property produce the Steiner ellipses in (generated by orientation-preserving collineations). Further, the orthogonal trajectories of these ellipses comprise the elliptic curves with , and any ellipse in described as a locus relative to two foci is uniquely the elliptic curve sum of two Steiner ellipses, obtained by adding the pairs of intersections on each orthogonal trajectory. Here, the vertex of the hyperboloid serves as the identity on each trajectory curve.
Topologically, a complex elliptic curve is a torus, while a complex ellipse is a sphere.
Elliptic curves over the real numbers
Although the formal definition of an elliptic curve requires some background in algebraic geometry, it is possible to describe some features of elliptic curves over the real numbers using only introductory algebra and geometry.
In this context, an elliptic curve is a plane curve defined by an equation of the form
y² = x³ + ax + b
after a linear change of variables (a and b are real numbers). This type of equation is called a Weierstrass equation, and said to be in Weierstrass form, or Weierstrass normal form.
The definition of elliptic curve also requires that the curve be non-singular. Geometrically, this means that the graph has no cusps, self-intersections, or isolated points. Algebraically, this holds if and only if the discriminant, Δ = −16(4a³ + 27b²), is not equal to zero.
The discriminant is zero when 4a³ + 27b² = 0.
(Although the factor −16 is irrelevant to whether or not the curve is non-singular, this definition of the discriminant is useful in a more advanced study of elliptic curves.)
The real graph of a non-singular curve has two components if its discriminant is positive, and one component if it is negative. For example, the curve y² = x³ − x has discriminant 64 (two components), while y² = x³ − x + 1 has discriminant −368 (one component). Following the convention at Conic_section#Discriminant, elliptic curves require that the discriminant is negative.
The group law
When working in the projective plane, the equation in homogeneous coordinates becomes:
(Y/Z)² = (X/Z)³ + a(X/Z) + b
This equation is not defined on the line at infinity, but we can multiply by Z³ to get one that is:
ZY² = X³ + aXZ² + bZ³
This resulting equation is defined on the whole projective plane, and the curve it defines projects onto the elliptic curve of interest. To find its intersection with the line at infinity, we can just posit Z = 0. This implies X³ = 0, which in a field means X = 0. Y on the other hand can take any value, and thus all triplets (0, Y, 0) satisfy the equation. In projective geometry this set is simply the point [0 : 1 : 0], which is thus the unique intersection of the curve with the line at infinity.
Since the curve is smooth, hence continuous, it can be shown that this point at infinity is the identity element of a group structure whose operation is geometrically described as follows:
Since the curve is symmetric about the x-axis, given any point P, we can take −P to be the point opposite it. We then have −O = O, as [0 : −1 : 0], the point symmetric to O = [0 : 1 : 0], represents the same projective point.
If P and Q are two points on the curve, then we can uniquely describe a third point P + Q in the following way. First, draw the line that intersects P and Q. This will generally intersect the cubic at a third point, R. We then take P + Q to be −R, the point opposite R.
This definition for addition works except in a few special cases related to the point at infinity and intersection multiplicity. The first is when one of the points is O. Here, we define P + O = P = O + P, making O the identity of the group. If P = Q we only have one point, thus we cannot define the line between them. In this case, we use the tangent line to the curve at this point as our line. In most cases, the tangent will intersect a second point R and we can take its opposite. If P and Q are opposites of each other, we define P + Q = O. Lastly, if P is an inflection point (a point where the concavity of the curve changes), we take R to be P itself and P + P is then simply the point opposite P.
Let K be a field over which the curve is defined (that is, the coefficients of the defining equation or equations of the curve are in K) and denote the curve by E. Then the K-rational points of E are the points on E whose coordinates all lie in K, including the point at infinity. The set of K-rational points is denoted by E(K). E(K) is a group, because properties of polynomial equations show that if P is in E(K), then −P is also in E(K), and if two of P, Q, and R are in E(K), then so is the third. Additionally, if K is a subfield of L, then E(K) is a subgroup of E(L).
Algebraic interpretation
The above groups can be described algebraically as well as geometrically. Given the curve y² = x³ + ax + b over the field K (whose characteristic we assume to be neither 2 nor 3), and points P = (xP, yP) and Q = (xQ, yQ) on the curve, assume first that xP ≠ xQ (case 1). Let y = sx + d be the equation of the line that intersects P and Q, which has the following slope:
s = (yP − yQ) / (xP − xQ)
The line equation and the curve equation intersect at the points xP, xQ, and xR, so the equations have identical y values at these values.
(sx + d)² = x³ + ax + b
which is equivalent to
x³ − s²x² − 2sdx + ax + b − d² = 0
Since xP, xQ, and xR are solutions, this equation has its roots at exactly the same x values as
(x − xP)(x − xQ)(x − xR) = 0
and because both equations are cubics they must be the same polynomial up to a scalar. Then equating the coefficients of x² in both equations
−s² = −(xP + xQ + xR)
and solving for the unknown xR:
xR = s² − xP − xQ.
yR follows from the line equation
yR = yP + s(xR − xP)
and this is an element of K, because s is; the sum P + Q is then the opposite point (xR, −yR).
If xP = xQ, then there are two options: if yP = −yQ (case 3), including the case where yP = yQ = 0 (case 4), then the sum is defined as 0; thus, the inverse of each point on the curve is found by reflecting it across the x-axis.
If yP = yQ ≠ 0, then Q = P and P + Q = 2P (case 2 using P as Q). The slope is given by the tangent to the curve at (xP, yP):
s = (3xP² + a) / (2yP)
A more general expression for s that works in both case 1 and case 2 is
s = (xP² + xP·xQ + xQ² + a) / (yP + yQ)
where the equality to (yP − yQ)/(xP − xQ) relies on P and Q obeying y² = x³ + ax + b.
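As an illustration of these formulas, the chord-and-tangent addition can be carried out mechanically. The following minimal Python sketch (the function name and the use of exact Fraction arithmetic are illustrative choices, not a standard API) implements cases 1–4 for a curve y² = x³ + ax + b:

    from fractions import Fraction

    def ec_add(P, Q, a):
        """Add two points on y^2 = x^3 + a*x + b; None stands for the point at infinity O."""
        if P is None:                      # case: P = O
            return Q
        if Q is None:                      # case: Q = O
            return P
        (xp, yp), (xq, yq) = P, Q
        if xp == xq and yp == -yq:         # cases 3 and 4: opposite points, P + Q = O
            return None
        if P == Q:                         # case 2: tangent slope
            s = (3 * xp * xp + a) / (2 * yp)
        else:                              # case 1: chord slope
            s = (yq - yp) / (xq - xp)
        xr = s * s - xp - xq
        yr = s * (xp - xr) - yp            # reflection of the third intersection point
        return (xr, yr)

    # Example on y^2 = x^3 - 2x + 1 (a = -2): P = (0, 1) and Q = (1, 0) lie on the curve.
    a = Fraction(-2)
    print(ec_add((Fraction(0), Fraction(1)), (Fraction(1), Fraction(0)), a))   # the point (0, -1), again on the curve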
Non-Weierstrass curves
For the curve y² = x³ + b2x² + b4x + b6 (the general form of an elliptic curve with characteristic 3), the formulas are similar, with s = (xP² + xP·xQ + xQ² + b2·xP + b2·xQ + b4) / (yP + yQ) and xR = s² − b2 − xP − xQ.
For a general cubic curve not in Weierstrass normal form, we can still define a group structure by designating one of its nine inflection points as the identity . In the projective plane, each line will intersect a cubic at three points when accounting for multiplicity. For a point , is defined as the unique third point on the line passing through and . Then, for any and , is defined as where is the unique third point on the line containing and .
For an example of the group law over a non-Weierstrass curve, see Hessian curves.
Elliptic curves over the rational numbers
A curve E defined over the field of rational numbers is also defined over the field of real numbers. Therefore, the law of addition (of points with real coordinates) by the tangent and secant method can be applied to E. The explicit formulae show that the sum of two points P and Q with rational coordinates has again rational coordinates, since the line joining P and Q has rational coefficients. This way, one shows that the set of rational points of E forms a subgroup of the group of real points of E.
Integral points
This section is concerned with points P = (x, y) of E such that x is an integer.
For example, the equation y2 = x3 + 17 has eight integral solutions with y > 0:
(x, y) = (−2, 3), (−1, 4), (2, 5), (4, 9), (8, 23), (43, 282), (52, 375), (5234, 378661).
As another example, Ljunggren's equation, a curve whose Weierstrass form is y2 = x3 − 2x, has only four solutions with y ≥ 0 :
(x, y) = (0, 0), (−1, 1), (2, 2), (338, 6214).
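Such small solutions can be found by direct search. The short Python sketch below (with an arbitrarily chosen search bound, as an illustration only) lists the integral points with y > 0 on y² = x³ + 17:

    from math import isqrt

    def integral_points(c, xmax):
        """Integral points (x, y) with y > 0 on y^2 = x^3 + c for |x| <= xmax."""
        points = []
        for x in range(-xmax, xmax + 1):
            rhs = x ** 3 + c
            if rhs <= 0:
                continue
            y = isqrt(rhs)
            if y * y == rhs:
                points.append((x, y))
        return points

    print(integral_points(17, 6000))   # expected to reproduce the eight solutions listed above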
The structure of rational points
Rational points can be constructed by the method of tangents and secants detailed above, starting with a finite number of rational points. More precisely the Mordell–Weil theorem states that the group E(Q) is a finitely generated (abelian) group. By the fundamental theorem of finitely generated abelian groups it is therefore a finite direct sum of copies of Z and finite cyclic groups.
The proof of the theorem involves two parts. The first part shows that for any integer m > 1, the quotient group E(Q)/mE(Q) is finite (this is the weak Mordell–Weil theorem). Second, introducing a height function h on the rational points E(Q) defined by h(P0) = 0 and h(P) = log max(|p|, |q|) if P (unequal to the point at infinity P0) has as abscissa the rational number x = p/q (with coprime p and q). This height function h has the property that h(mP) grows roughly like the square of m. Moreover, only finitely many rational points with height smaller than any constant exist on E.
The proof of the theorem is thus a variant of the method of infinite descent and relies on the repeated application of Euclidean divisions on E: let P ∈ E(Q) be a rational point on the curve, writing P as the sum 2P1 + Q1 where Q1 is a fixed representant of P in E(Q)/2E(Q), the height of P1 is about 1/4 of the one of P (more generally, replacing 2 by any m > 1, and 1/4 by 1/m²). Redoing the same with P1, that is to say P1 = 2P2 + Q2, then P2 = 2P3 + Q3, etc. finally expresses P as an integral linear combination of points Qi and of points whose height is bounded by a fixed constant chosen in advance: by the weak Mordell–Weil theorem and the second property of the height function P is thus expressed as an integral linear combination of a finite number of fixed points.
The theorem however doesn't provide a method to determine any representatives of E(Q)/mE(Q).
The rank of E(Q), that is the number of copies of Z in E(Q) or, equivalently, the number of independent points of infinite order, is called the rank of E. The Birch and Swinnerton-Dyer conjecture is concerned with determining the rank. One conjectures that it can be arbitrarily large, even if only examples with relatively small rank are known. The elliptic curve with the currently largest exactly-known rank is
y2 + xy + y = x3 − x2 − x +
It has rank 20, found by Noam Elkies and Zev Klagsbrun in 2020. Curves of rank higher than 20 have been known since 1994, with lower bounds on their ranks ranging from 21 to 29, but their exact ranks are not known and in particular it is not proven which of them have higher rank than the others or which is the true "current champion".
As for the groups constituting the torsion subgroup of E(Q), the following is known: the torsion subgroup of E(Q) is one of the 15 following groups (a theorem due to Barry Mazur): Z/NZ for N = 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, or 12, or Z/2Z × Z/2NZ with N = 1, 2, 3, 4. Examples for every case are known. Moreover, elliptic curves whose Mordell–Weil groups over Q have the same torsion groups belong to a parametrized family.
The Birch and Swinnerton-Dyer conjecture
The Birch and Swinnerton-Dyer conjecture (BSD) is one of the Millennium problems of the Clay Mathematics Institute. The conjecture relies on analytic and arithmetic objects defined by the elliptic curve in question.
At the analytic side, an important ingredient is a function of a complex variable, L, the Hasse–Weil zeta function of E over Q. This function is a variant of the Riemann zeta function and Dirichlet L-functions. It is defined as an Euler product, with one factor for every prime number p.
For a curve E over Q given by a minimal equation
y² + a1xy + a3y = x³ + a2x² + a4x + a6
with integral coefficients ai, reducing the coefficients modulo p defines an elliptic curve over the finite field Fp (except for a finite number of primes p, where the reduced curve has a singularity and thus fails to be elliptic, in which case E is said to be of bad reduction at p).
The zeta function of an elliptic curve over a finite field Fp is, in some sense, a generating function assembling the information of the number of points of E with values in the finite field extensions Fpn of Fp. It is given by
Z(E/Fp, T) = exp( Σ_{n ≥ 1} #E(Fpn) Tⁿ / n )
The interior sum of the exponential resembles the development of the logarithm and, in fact, the so-defined zeta function is a rational function in T:
Z(E/Fp, T) = (1 − apT + pT²) / ((1 − T)(1 − pT))
where the 'trace of Frobenius' term ap is defined to be the difference between the 'expected' number of points and the actual number of points on the elliptic curve over Fp, viz.
ap = p + 1 − #E(Fp),
or equivalently,
#E(Fp) = p + 1 − ap.
We may define the same quantities and functions over an arbitrary finite field of characteristic p, with q replacing p everywhere.
The L-function of E over Q is then defined by collecting this information together, for all primes p. It is defined by
L(E, s) = Π_{p ∤ N} (1 − ap p^(−s) + p^(1−2s))^(−1) · Π_{p | N} (1 − ap p^(−s))^(−1)
where N is the conductor of E, i.e. the product of primes with bad reduction (the primes p | N), in which case ap is defined differently from the method above: see Silverman (1986) below.
For example has bad reduction at 17, because has .
This product converges for Re(s) > 3/2 only. Hasse's conjecture affirms that the L-function admits an analytic continuation to the whole complex plane and satisfies a functional equation relating, for any s, L(E, s) to L(E, 2 − s). In 1999 this was shown to be a consequence of the proof of the Shimura–Taniyama–Weil conjecture, which asserts that every elliptic curve over Q is a modular curve, which implies that its L-function is the L-function of a modular form whose analytic continuation is known. One can therefore speak about the values of L(E, s) at any complex number s.
At s=1 (the conductor product can be discarded as it is finite), the L-function becomes
The Birch and Swinnerton-Dyer conjecture relates the arithmetic of the curve to the behaviour of this L-function at s = 1. It affirms that the vanishing order of the L-function at s = 1 equals the rank of E and predicts the leading term of the Laurent series of L(E, s) at that point in terms of several quantities attached to the elliptic curve.
Much like the Riemann hypothesis, the truth of the BSD conjecture would have multiple consequences, including the following two:
A congruent number is defined as an odd square-free integer n which is the area of a right triangle with rational side lengths. It is known that n is a congruent number if and only if the elliptic curve has a rational point of infinite order; assuming BSD, this is equivalent to its L-function having a zero at s = 1. Tunnell has shown a related result: assuming BSD, n is a congruent number if and only if the number of triplets of integers (x, y, z) satisfying is twice the number of triples satisfying . The interest in this statement is that the condition is easy to check.
In a different direction, certain analytic methods allow for an estimation of the order of zero in the center of the critical strip for certain L-functions. Admitting BSD, these estimations correspond to information about the rank of families of the corresponding elliptic curves. For example: assuming the generalized Riemann hypothesis and BSD, the average rank of curves given by is smaller than 2.
Elliptic curves over finite fields
Let K = Fq be the finite field with q elements and E an elliptic curve defined over K. While the precise number of rational points of an elliptic curve E over K is in general difficult to compute, Hasse's theorem on elliptic curves gives the following inequality:
|#E(K) − (q + 1)| ≤ 2√q
In other words, the number of points on the curve grows proportionally to the number of elements in the field. This fact can be understood and proven with the help of some general theory; see local zeta function and étale cohomology for example.
The set of points E(Fq) is a finite abelian group. It is always cyclic or the product of two cyclic groups. For example, the curve defined by
y² = x³ − x
over F71 has 72 points (71 affine points including (0,0) and one point at infinity) over this field, whose group structure is given by Z/2Z × Z/36Z. The number of points on a specific curve can be computed with Schoof's algorithm.
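For a prime this small, the points can simply be enumerated. The sketch below (plain Python, assuming the curve y² = x³ − x cited above) counts them and checks Hasse's inequality:

    def count_points(p, a, b):
        """Number of points on y^2 = x^3 + a*x + b over F_p, including the point at infinity."""
        counts = [0] * p                       # counts[t] = number of y with y^2 = t (mod p)
        for y in range(p):
            counts[(y * y) % p] += 1
        total = 1                              # start with the point at infinity
        for x in range(p):
            total += counts[(x ** 3 + a * x + b) % p]
        return total

    n = count_points(71, -1, 0)                # y^2 = x^3 - x over F_71
    print(n)                                   # 72, so the trace a_71 is 0
    print(abs(n - (71 + 1)) <= 2 * 71 ** 0.5)  # Hasse's bound holds: True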
Studying the curve over the field extensions of Fq is facilitated by the introduction of the local zeta function of E over Fq, defined by a generating series (also see above)
Z(E/K, T) = exp( Σ_{n ≥ 1} #E(Kn) Tⁿ / n )
where the field Kn is the (unique up to isomorphism) extension of K = Fq of degree n (that is, Kn = Fqn).
The zeta function is a rational function in T. To see this, consider the integer a such that
#E(Fq) = q + 1 − a.
There is a complex number α such that
1 − aT + qT² = (1 − αT)(1 − ᾱT)
where ᾱ is the complex conjugate, and so we have
Z(E/K, T) = (1 − αT)(1 − ᾱT) / ((1 − T)(1 − qT))
We choose so that its absolute value is , that is , and that . Note that .
can then be used in the local zeta function as its values when raised to the various powers of can be said to reasonably approximate the behaviour of , in that
Using the Taylor series for the natural logarithm,
Then , so finally
For example, the zeta function of E : y2 + y = x3 over the field F2 is given by
which follows from:
as , then , so .
The functional equation is
As we are only interested in the behaviour of , we can use a reduced zeta function
and so
which leads directly to the local L-functions
The Sato–Tate conjecture is a statement about how the error term in Hasse's theorem varies with the different primes q, if an elliptic curve E over Q is reduced modulo q. It was proven (for almost all such curves) in 2006 due to the results of Taylor, Harris and Shepherd-Barron, and says that the error terms are equidistributed.
Elliptic curves over finite fields are notably applied in cryptography and for the factorization of large integers. These algorithms often make use of the group structure on the points of E. Algorithms that are applicable to general groups, for example the group of invertible elements in finite fields, F*q, can thus be applied to the group of points on an elliptic curve. For example, the discrete logarithm is such an algorithm. The interest in this is that choosing an elliptic curve allows for more flexibility than choosing q (and thus the group of units in Fq). Also, the group structure of elliptic curves is generally more complicated.
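As an illustration of the group arithmetic these applications rely on (a sketch only; the helper names are hypothetical and real libraries use far more careful, constant-time implementations), scalar multiplication by the double-and-add method over a prime field can be written as:

    def add_mod(P, Q, a, p):
        """Point addition on y^2 = x^3 + a*x + b over F_p; None is the point at infinity."""
        if P is None:
            return Q
        if Q is None:
            return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                                      # opposite points
        if P == Q:
            s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
        else:
            s = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
        x3 = (s * s - x1 - x2) % p
        y3 = (s * (x1 - x3) - y1) % p
        return (x3, y3)

    def scalar_mult(k, P, a, p):
        """Compute k*P by double-and-add (the basic operation behind ECDH and ECDSA)."""
        R = None
        while k > 0:
            if k & 1:
                R = add_mod(R, P, a, p)
            P = add_mod(P, P, a, p)
            k >>= 1
        return R

    # On y^2 = x^3 - x over F_71, the point (0, 0) has order 2, so 2*(0, 0) = O.
    print(scalar_mult(2, (0, 0), -1, 71))   # None, i.e. the point at infinity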
Elliptic curves over a general field
Elliptic curves can be defined over any field K; the formal definition of an elliptic curve is a non-singular projective algebraic curve over K with genus 1 and endowed with a distinguished point defined over K.
If the characteristic of K is neither 2 nor 3, then every elliptic curve over K can be written in the form
y² = x³ − px − q
after a linear change of variables. Here p and q are elements of K such that the right hand side polynomial x³ − px − q does not have any double roots. If the characteristic is 2 or 3, then more terms need to be kept: in characteristic 3, the most general equation is of the form
y² = x³ + b2x² + b4x + b6
for arbitrary constants b2, b4, b6 such that the polynomial on the right-hand side has distinct roots (the notation is chosen for historical reasons). In characteristic 2, even this much is not possible, and the most general equation is
y² + a1xy + a3y = x³ + a2x² + a4x + a6
provided that the variety it defines is non-singular. If characteristic were not an obstruction, each equation would reduce to the previous ones by a suitable linear change of variables.
One typically takes the curve to be the set of all points (x,y) which satisfy the above equation and such that both x and y are elements of the algebraic closure of K. Points of the curve whose coordinates both belong to K are called K-rational points.
Many of the preceding results remain valid when the field of definition of E is a number field K, that is to say, a finite field extension of Q. In particular, the group E(K) of K-rational points of an elliptic curve E defined over K is finitely generated, which generalizes the Mordell–Weil theorem above. A theorem due to Loïc Merel shows that for a given integer d, there are (up to isomorphism) only finitely many groups that can occur as the torsion groups of E(K) for an elliptic curve defined over a number field K of degree d. More precisely, there is a number B(d) such that for any elliptic curve E defined over a number field K of degree d, any torsion point of E(K) is of order less than B(d). The theorem is effective: for d > 1, if a torsion point is of order p, with p prime, then
As for the integral points, Siegel's theorem generalizes to the following: Let E be an elliptic curve defined over a number field K, x and y the Weierstrass coordinates. Then there are only finitely many points of E(K) whose x-coordinate is in the ring of integers OK.
The properties of the Hasse–Weil zeta function and the Birch and Swinnerton-Dyer conjecture can also be extended to this more general situation.
Elliptic curves over the complex numbers
The formulation of elliptic curves as the embedding of a torus in the complex projective plane follows naturally from a curious property of Weierstrass's elliptic functions. These functions and their first derivative are related by the formula
℘′(z)² = 4℘(z)³ − g2℘(z) − g3
Here, g2 and g3 are constants; ℘(z) is the Weierstrass elliptic function and ℘′(z) its derivative. It should be clear that this relation is in the form of an elliptic curve (over the complex numbers). The Weierstrass functions are doubly periodic; that is, they are periodic with respect to a lattice Λ; in essence, the Weierstrass functions are naturally defined on a torus T = C/Λ. This torus may be embedded in the complex projective plane by means of the map
z ↦ [℘(z) : ℘′(z) : 1]
This map is a group isomorphism of the torus (considered with its natural group structure) with the chord-and-tangent group law on the cubic curve which is the image of this map. It is also an isomorphism of Riemann surfaces from the torus to the cubic curve, so topologically, an elliptic curve is a torus. If the lattice Λ is related by multiplication by a non-zero complex number c to a lattice cΛ, then the corresponding curves are isomorphic. Isomorphism classes of elliptic curves are specified by the j-invariant.
The isomorphism classes can be understood in a simpler way as well. The constants g2 and g3, called the modular invariants, are uniquely determined by the lattice, that is, by the structure of the torus. However, all real polynomials factorize completely into linear factors over the complex numbers, since the field of complex numbers is the algebraic closure of the reals. So, the elliptic curve may be written as
y² = 4(x − e1)(x − e2)(x − e3)
One finds that
and
with -invariant and is sometimes called the modular lambda function. For example, let , then which implies , , and therefore of the formula above are all algebraic numbers if involves an imaginary quadratic field. In fact, it yields the integer .
In contrast, the modular discriminant
Δ = g2³ − 27g3²
is generally a transcendental number. In particular, the value of the Dedekind eta function is
Note that the uniformization theorem implies that every compact Riemann surface of genus one can be represented as a torus. This also allows an easy understanding of the torsion points on an elliptic curve: if the lattice Λ is spanned by the fundamental periods ω1 and ω2, then the n-torsion points are the (equivalence classes of) points of the form
(a/n) ω1 + (b/n) ω2
for integers a and b in the range 0 ≤ a, b < n.
If
is an elliptic curve over the complex numbers and
then a pair of fundamental periods of can be calculated very rapidly by
is the arithmetic–geometric mean of and . At each step of the arithmetic–geometric mean iteration, the signs of arising from the ambiguity of geometric mean iterations are chosen such that where and denote the individual arithmetic mean and geometric mean iterations of and , respectively. When , there is an additional condition that .
Over the complex numbers, every elliptic curve has nine inflection points. Every line through two of these points also passes through a third inflection point; the nine points and 12 lines formed in this way form a realization of the Hesse configuration.
The Dual Isogeny
Given an isogeny
f : E → E′
of elliptic curves of degree n, the dual isogeny is an isogeny
f̂ : E′ → E
of the same degree such that
f̂ ∘ f = [n].
Here [n] denotes the multiplication-by-n isogeny, which has degree n².
Construction of the Dual Isogeny
Often only the existence of a dual isogeny is needed, but it can be explicitly given as the composition
where is the group of divisors of degree 0. To do this, we need maps given by where is the neutral point of and given by
To see that , note that the original isogeny can be written as a composite
and that since is finite of degree , is multiplication by on
Alternatively, we can use the smaller Picard group , a quotient of The map descends to an isomorphism, The dual isogeny is
Note that the relation also implies the conjugate relation Indeed, let Then But is surjective, so we must have
Algorithms that use elliptic curves
Elliptic curves over finite fields are used in some cryptographic applications as well as for integer factorization. Typically, the general idea in these applications is that a known algorithm which makes use of certain finite groups is rewritten to use the groups of rational points of elliptic curves. For more see also:
Elliptic curve cryptography
Elliptic-curve Diffie–Hellman key exchange (ECDH)
Supersingular isogeny key exchange
Elliptic curve digital signature algorithm (ECDSA)
EdDSA digital signature algorithm
Dual EC DRBG random number generator
Lenstra elliptic-curve factorization
Elliptic curve primality proving
Alternative representations of elliptic curves
Hessian curve
Edwards curve
Twisted curve
Twisted Hessian curve
Twisted Edwards curve
Doubling-oriented Doche–Icart–Kohel curve
Tripling-oriented Doche–Icart–Kohel curve
Jacobian curve
Montgomery curve
| Mathematics | Two-dimensional space | null |
10229 | https://en.wikipedia.org/wiki/Equidae | Equidae | Equidae (commonly known as the horse family) is the taxonomic family of horses and related animals, including the extant horses, asses, and zebras, and many other species known only from fossils. The family evolved more than 50 million years ago, in the Eocene epoch, from a small, multi-toed ungulate into larger, single-toed animals. All extant species are in the genus Equus, which originated in North America. Equidae belongs to the order Perissodactyla, which includes the extant tapirs and rhinoceros, and several extinct families. It is more specifically grouped within the superfamily Equoidea, the only other family being the extinct Palaeotheriidae.
The term equid refers to any member of this family, including any equine.
Evolution
The oldest known fossils assigned to Equidae were found in North America, and date from the early Eocene epoch, 54 million years ago. They were once assigned to the genus Hyracotherium, but the type species of that genus is now regarded as a palaeothere. The other species have been split off into different genera. These early equids were fox-sized animals with three toes on the hind feet, and four on the front feet. They were herbivorous browsers on relatively soft plants, and already adapted for running. The complexity of their brains suggests that they already were alert and intelligent animals. Later species reduced the number of toes, and developed teeth more suited for grinding up grasses and other tough plant food.
The equids, like other perissodactyls, are hindgut fermenters. They have evolved specialized teeth that cut and shear tough plant matter to accommodate their fibrous diet. Their seemingly inefficient digestive strategy is a result of their body size at the time it evolved: they would already have had to be relatively large mammals to be supported by such a strategy.
The family became relatively diverse during the Miocene epoch, with many new species appearing. By this time, equids were more truly horse-like, having developed the typical body shape of the modern animals. Many of these species bore the main weight of their bodies on their central third toe, with the others becoming reduced and barely touching the ground, if at all. The sole surviving genus, Equus, had evolved by the early Pleistocene epoch, and spread rapidly through the world.
Classification
Order Perissodactyla (In addition to Equidae, Perissodactyla includes four species of tapir in a single genus, as well as five living species (belonging to four genera) of rhinoceros.) † indicates extinct taxa.
Family Equidae
Subfamily †Hyracotheriinae
Genus †Epihippus
Genus †Haplohippus
Genus †Eohippus
Genus †Minippus
Genus †Orohippus
Genus †Pliolophus
Genus †Protorohippus
Genus †Sifrhippus
Genus †Xenicohippus
Subfamily †Anchitheriinae
Genus †Anchitherium
Genus †Archaeohippus
Genus †Desmatippus
Genus †Hypohippus
Genus †Kalobatippus
Genus †Megahippus
Genus †Mesohippus
Genus †Miohippus
Genus †Parahippus
Genus †Sinohippus
Subfamily Equinae
Genus †Merychippus
Genus †Scaphohippus
Genus †Acritohippus
Tribe †Hipparionini
Genus †Eurygnathohippus
Genus †Hipparion
Genus †Hippotherium
Genus †Nannippus
Genus †Neohipparion
Genus †Proboscidipparion
Genus †Pseudhipparion
Tribe Equini
Genus †Haringtonhippus
Genus †Heteropliohippus
Genus †Parapliohippus
Subtribe Protohippina
Genus †Calippus
Genus †Protohippus
Subtribe Equina
Genus †Astrohippus
Genus †Dinohippus
Genus Equus (22 species, 7 extant)
Equus ferus Wild horse
Equus ferus caballus Domestic horse
†Equus ferus ferus Tarpan
Equus ferus przewalskii Przewalski's horse
†Equus algericus
†Equus alaskae
†Equus lambei Yukon wild horse
†Equus niobrarensis
†Equus scotti
†Equus conversidens Mexican horse
†Equus semiplicatus
Subgenus †Amerhippus (this subgenus and its species are possibly synonymous with E. ferus)
†Equus andium
†Equus neogeus
†Equus insulatus
Subgenus Asinus
Equus africanus African wild ass
Equus africanus africanus Nubian wild ass
Equus africanus asinus Domestic donkey
†Equus africanus atlanticus Atlas wild ass
Equus africanus somalicus Somali wild ass
Equus hemionus Onager or Asiatic wild ass
Equus hemionus hemionus Mongolian wild ass
†Equus hemionus hemippus Syrian wild ass
Equus hemionus khur Indian wild ass
Equus hemionus kulan Turkmenian kulan
Equus hemionus onager Persian onager
Equus kiang Kiang
Equus kiang chu Northern kiang
Equus kiang kiang Western kiang
Equus kiang holdereri Eastern kiang
Equus kiang polyodon Southern kiang
†Equus hydruntinus European ass
†Equus altidens
†Equus tabeti
†Equus melkiensis
†Equus graziosii
Subgenus Hippotigris
Equus grevyi Grévy's zebra
†Equus koobiforensis
†Equus oldowayensis
Equus quagga Plains zebra
Equus quagga boehmi Grant's zebra
Equus quagga borensis Maneless zebra
Equus quagga burchellii Burchell's zebra
Equus quagga chapmani Chapman's zebra
Equus quagga crawshayi Crawshay's zebra
†Equus quagga quagga Quagga
Equus quagga selousi Selous' zebra
Equus zebra Mountain zebra
Equus zebra hartmannae Hartmann's mountain zebra
Equus zebra zebra Cape mountain zebra
†Equus capensis
†Equus mauritanicus
Subgenus †Parastylidequus
†Equus parastylidens Mooser's horse
†Subgenus Sussemionus
†Equus ovodovi
incertae sedis
†Equus simplicidens Hagerman horse
†Equus cumminsii
†Equus livenzovensis
†Equus sanmeniensis
†Equus teilhardi
†Equus numidicus
†Equus plicidens
†Equus cedralensis
†Equus stenonis group
†Equus stenonis Stenon zebra
†Equus stenonis guthi
†Equus stenonis senezensis
†Equus stenonis pamirensis (Hippotigris pamirensis)
†Equus stenonis petraloniensis
†Equus stenonis vireti
†Equus sivalensis
†Equus stehlini
†Equus sussenbornensis
†Equus verae
†Equus namadicus
†subgenus Allozebra
†Equus (A.) occidentalis western horse
†Equus (A.) excelsus
†subgenus Hesperohippus
†Equus (H.) pacificus
†Equus (H.) mexicanus
†Equus complicatus
†Equus fraternus
†Equus major
†Equus giganteus
†Equus pectinatus
†Equus crenidens
Genus †Cremohipparion
Genus †Hippidion
Genus †Pliohippus
| Biology and health sciences | Perissodactyla | null |
10238 | https://en.wikipedia.org/wiki/Exon | Exon | An exon is any part of a gene that will form a part of the final mature RNA produced by that gene after introns have been removed by RNA splicing. The term exon refers to both the DNA sequence within a gene and to the corresponding sequence in RNA transcripts. In RNA splicing, introns are removed and exons are covalently joined to one another as part of generating the mature RNA. Just as the entire set of genes for a species constitutes the genome, the entire set of exons constitutes the exome.
History
The term exon derives from the expressed region and was coined by American biochemist Walter Gilbert in 1978: "The notion of the cistron... must be replaced by that of a transcription unit containing regions which will be lost from the mature messenger – which I suggest we call introns (for intragenic regions) – alternating with regions which will be expressed – exons."
This definition was originally made for protein-coding transcripts that are spliced before being translated. The term later came to include sequences removed from rRNA and tRNA, and other ncRNA and it also was used later for RNA molecules originating from different parts of the genome that are then ligated by trans-splicing.
Contribution to genomes and size distribution
Although unicellular eukaryotes such as yeast have either no introns or very few, metazoans and especially vertebrate genomes have a large fraction of non-coding DNA. For instance, in the human genome only 1.1% of the genome is spanned by exons, whereas 24% is in introns, with 75% of the genome being intergenic DNA. This can provide a practical advantage in omics-aided health care (such as precision medicine) because it makes commercialized whole exome sequencing a smaller and less expensive challenge than commercialized whole genome sequencing. The large variation in genome size and C-value across life forms has posed an interesting challenge called the C-value enigma.
Across all eukaryotic genes in GenBank, there were (in 2002), on average, 5.48 exons per protein coding gene. The average exon encoded 30–36 amino acids. While the longest exon in the human genome is 11555 bp long, several exons have been found to be only 2 bp long. A single-nucleotide exon has been reported from the Arabidopsis genome. In humans, as with protein-coding mRNAs, most non-coding RNAs also contain multiple exons.
Structure and function
In protein-coding genes, the exons include both the protein-coding sequence and the 5′- and 3′-untranslated regions (UTR). Often the first exon includes both the 5′-UTR and the first part of the coding sequence, but exons containing only regions of 5′-UTR or (more rarely) 3′-UTR occur in some genes, i.e. the UTRs may contain introns. Some non-coding RNA transcripts also have exons and introns.
Mature mRNAs originating from the same gene need not include the same exons, since different introns in the pre-mRNA can be removed by the process of alternative splicing.
Exonization is the creation of a new exon, as a result of mutations in introns.
Experimental approaches using exons
Exon trapping or 'gene trapping' is a molecular biology technique that exploits the existence of the intron-exon splicing to find new genes. The first exon of a 'trapped' gene splices into the exon that is contained in the insertional DNA. This new exon contains the ORF for a reporter gene that can now be expressed using the enhancers that control the target gene. A scientist knows that a new gene has been trapped when the reporter gene is expressed.
Splicing can be experimentally modified so that targeted exons are excluded from mature mRNA transcripts by blocking the access of splice-directing small nuclear ribonucleoprotein particles (snRNPs) to pre-mRNA using Morpholino antisense oligos. This has become a standard technique in developmental biology. Morpholino oligos can also be targeted to prevent molecules that regulate splicing (e.g. splice enhancers, splice suppressors) from binding to pre-mRNA, altering patterns of splicing.
Common misuse of the term
Common incorrect uses of the term exon are that 'exons code for protein', or 'exons code for amino-acids' or 'exons are translated'. However, these sorts of definitions only cover protein-coding genes, and omit those exons that become part of a non-coding RNA or the untranslated region of an mRNA. Such incorrect definitions still occur in overall reputable secondary sources.
| Biology and health sciences | Molecular biology | Biology |
10251 | https://en.wikipedia.org/wiki/EDSAC | EDSAC | The Electronic Delay Storage Automatic Calculator (EDSAC) was an early British computer. Inspired by John von Neumann's seminal First Draft of a Report on the EDVAC, the machine was constructed by Maurice Wilkes and his team at the University of Cambridge Mathematical Laboratory in England. EDSAC was the second electronic digital stored-program computer, after the Manchester Mark 1, to go into regular service.
Later the project was supported by J. Lyons & Co. Ltd., intending to develop a commercially applied computer and resulting in Lyons' development of the LEO I, based on the EDSAC design. Work on EDSAC started during 1947, and it ran its first programs on 6 May 1949, when it calculated a table of square numbers and a list of prime numbers. EDSAC was finally shut down on 11 July 1958, having been superseded by EDSAC 2, which remained in use until 1965.
Technical overview
Physical components
As soon as EDSAC was operational, it began serving the university's research needs. It used mercury delay lines for memory and derated vacuum tubes for logic. Power consumption was 11 kW of electricity. Cycle time was 1.5 ms for all ordinary instructions, 6 ms for multiplication. Input was via five-hole punched tape, and output was via a teleprinter.
Initially, registers were limited to an accumulator and a multiplier register. In 1953, David Wheeler, returning from a stay at the University of Illinois, designed an index register as an extension to the original EDSAC hardware.
A magnetic-tape drive was added in 1952 but never worked sufficiently well to be of real use.
Until 1952, the available main memory (instructions and data) was only 512 18-bit words, and there was no backing store. The delay lines (or "tanks") were arranged in two batteries providing 512 words each. The second battery came into operation in 1952.
The full 1024-word delay-line store was not available until 1955 or early 1956, limiting programs to about 800 words until then.
John Lindley (diploma student 1958–1959) mentioned "the incredible difficulty we had ever to produce a single correct piece of paper tape with the crude and unreliable home-made punching, printing and verifying gear available in the late 50s".
Memory and instructions
The EDSAC's main memory consisted of 1024 locations, though only 512 locations were initially installed. Each contained 18 bits, but the topmost bit was always unavailable due to timing problems, so only 17 bits were used. An instruction consisted of a five-bit instruction code, one spare bit, a 10-bit operand (usually a memory address), and one length bit to control whether the instruction used a 17-bit or a 35-bit operand (two consecutive words, little-endian). All instruction codes were by design represented by one mnemonic letter, so that the Add instruction, for example, used the EDSAC character code for the letter A.
Internally, the EDSAC used two's complement binary numbers. Numbers were either 17 bits (one word) or 35 bits (two words) long. Unusually, the multiplier was designed to treat numbers as fixed-point fractions in the range −1 ≤ x < 1, i.e. the binary point was immediately to the right of the sign. The accumulator could hold 71 bits, including the sign, allowing two long (35-bit) numbers to be multiplied without losing any precision.
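The word layout can be illustrated with a small Python sketch (a modern illustration only, not EDSAC order code; the field order simply follows the description above, and the function names are hypothetical):

    def decode_instruction(word):
        """Split a 17-bit word into opcode (5 bits), spare bit, address (10 bits), length bit."""
        assert 0 <= word < 2 ** 17
        opcode  = (word >> 12) & 0b11111
        spare   = (word >> 11) & 0b1
        address = (word >> 1) & 0b1111111111
        length  = word & 0b1
        return opcode, spare, address, length

    def as_fraction(word):
        """Read the same 17 bits as a two's-complement fixed-point fraction, -1 <= x < 1."""
        signed = word - 2 ** 17 if word & (1 << 16) else word
        return signed / 2 ** 16

    print(decode_instruction(0b10101_0_0000101010_1))   # (21, 0, 42, 1)
    print(as_fraction(0b11000000000000000))             # -0.5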
The instructions available were:
Add
Subtract
Multiply-and-add
AND-and-add (called "Collate")
Shift left
Arithmetic shift right
Load multiplier register
Store (and optionally clear) accumulator
Conditional goto
Read input tape
Print character
Round accumulator
No-op
Stop
There was no division instruction (but various division subroutines were supplied) and no way to directly load a number into the accumulator (a "Store and zero accumulator" instruction followed by an "Add" instruction were necessary for this). There was no unconditional jump instruction, nor was there a procedure call instruction – it had not yet been invented.
Maurice Wilkes discussed relative addressing modes for the EDSAC in a paper published in 1953. He was making the proposals to facilitate the use of subroutines.
System software
The initial orders were hard-wired on a set of uniselector switches and loaded into the low words of memory at startup. By May 1949, the initial orders provided a primitive relocating assembler taking advantage of the mnemonic design described above, all in 31 words. This was the world's first assembler, and arguably the start of the global software industry. There is a simulation of EDSAC available, and a full description of the initial orders and first programs.
The first calculation done by EDSAC was a program run on 6 May 1949 to compute square numbers. The program was written by Beatrice Worsley, who had travelled from Canada to study the machine.
The machine was used by other members of the university to solve real problems, and many early techniques were developed that are now included in operating systems.
Users prepared their programs by punching them (in assembler) onto a paper tape. They soon became good at being able to hold the paper tape up to the light and read back the codes. When a program was ready, it was hung on a length of line strung up near the paper-tape reader. The machine operators, who were present during the day, selected the next tape from the line and loaded it into EDSAC. This is of course well known today as job queues. If it printed something, then the tape and the printout were returned to the user, otherwise they were informed at which memory location it had stopped. Debuggers were some time away, but a cathode-ray tube screen could be set to display the contents of a particular piece of memory. This was used to see whether a number was converging, for example. A loudspeaker was connected to the accumulator's sign bit; experienced users knew healthy and unhealthy sounds of programs, particularly programs "hung" in a loop.
After office hours certain "authorised users" were allowed to run the machine for themselves, which went on late into the night until a valve blew – which usually happened, according to one such user. This is alluded to by Fred Hoyle in his novel The Black Cloud.
Programming technique
The early programmers had to make use of techniques frowned upon today—in particular, the use of self-modifying code. As there was no index register until much later, the only way of accessing an array was to alter which memory location a particular instruction was referencing.
David Wheeler, who earned the world's first Computer Science PhD working on the project, is credited with inventing the concept of a subroutine. Users wrote programs that called a routine by jumping to the start of the subroutine with the return address (i.e. the location-plus-one of the jump itself) in the accumulator (a Wheeler Jump). By convention the subroutine expected this, and the first thing it did was to modify its concluding jump instruction to that return address. Multiple and nested subroutines could be called so long as the user knew the length of each one in order to calculate the location to jump to; recursive calls were forbidden. The user then copied the code for the subroutine from a master tape onto their own tape following the end of their own program. (However, Alan Turing discussed subroutines in a paper of 1945 on design proposals for the NPL ACE, going so far as to invent the concept of a return-address stack, which would have allowed recursion.)
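The convention can be mimicked in a few lines of Python (a schematic simulation of the idea, not EDSAC order code; here memory addresses hold small callables instead of real orders, and the names are illustrative):

    memory = {}          # address -> "order" (here: a function taking the program counter)
    accumulator = 0

    def run(start):
        pc = start
        while pc is not None:
            pc = memory[pc](pc)          # each order returns the next address, or None to stop

    def plant_subroutine(base):
        def entry(pc):
            ret = accumulator                        # return address left by the caller
            memory[base + 2] = lambda _pc: ret       # patch the concluding jump (self-modification)
            return base + 1
        def body(pc):
            print("subroutine body runs")
            return base + 2
        memory[base] = entry
        memory[base + 1] = body
        memory[base + 2] = lambda _pc: None          # placeholder jump, overwritten at run time

    def plant_caller(base, sub):
        def call(pc):
            global accumulator
            accumulator = pc + 1                     # Wheeler Jump: return address = here, plus one
            return sub
        def resume(pc):
            print("back in the caller")
            return None
        memory[base] = call
        memory[base + 1] = resume

    plant_subroutine(100)
    plant_caller(0, 100)
    run(0)        # prints "subroutine body runs", then "back in the caller"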
The lack of an index register also posed a problem to the writer of a subroutine in that they could not know in advance where in memory the subroutine would be loaded, and therefore they could not know how to address any regions of the code that were used for storage of data (so-called "pseudo-orders"). This was solved by use of an initial input routine, which was responsible for loading subroutines from punched tape into memory. On loading a subroutine, it would note the start location and increment internal memory references as required. Thus, as Wilkes wrote, "the code used to represent orders outside the machine differs from that used inside, the differences being dictated by the different requirements of the programmer on the one hand, and of the control circuits of the machine on the other".
EDSAC's programmers used special techniques to make best use of the limited available memory. For example, at the point of loading a subroutine from punched tape into memory, it might happen that a particular constant would have to be calculated, a constant that would not subsequently need recalculation. In this situation, the constant would be calculated in an "interlude". The code required to calculate the constant would be supplied along with the full subroutine. After the initial input routine had loaded the calculation-code, it would transfer control to this code. Once the constant had been calculated and written into memory, control would return to the initial input routine, which would continue to write the remainder of the subroutine into memory, but first adjusting its starting point so as to overwrite the code that had calculated the constant. This allowed quite complicated adjustments to be made to a general-purpose subroutine without making its final footprint in memory any larger than had it been tailored to a specific circumstance.
Application software
The subroutine concept led to the availability of a substantial subroutine library. By 1951, 87 subroutines in the following categories were available for general use: floating-point arithmetic; arithmetic operations on complex numbers; checking; division; exponentiation; routines relating to functions; differential equations; special functions; power series; logarithms; miscellaneous; print and layout; quadrature; read (input); nth root; trigonometric functions; counting operations (simulating repeat until loops, while loops and for loops); vectors; and matrices.
The first assembly language appeared for the EDSAC and inspired several later assembly languages.
Applications of EDSAC
EDSAC was designed specifically to form part of the Mathematical Laboratory's support service for calculation. The first scientific paper to be published using a computer for calculations was by Ronald Fisher. Wilkes and Wheeler had used EDSAC to solve a differential equation relating to gene frequencies for him. In 1951, Miller and Wheeler used the machine to discover a 79-digit prime – the largest known at the time.
The winners of three Nobel Prizes – John Kendrew and Max Perutz (Chemistry, 1962), Andrew Huxley (Medicine, 1963) and Martin Ryle (Physics, 1974) – benefitted from EDSAC's revolutionary computing power. In their prize acceptance speeches, each acknowledged the role that EDSAC had played in their research.
In the early 1960s Peter Swinnerton-Dyer used the EDSAC computer to calculate the number of points modulo p (denoted by Np) for a large number of primes p on elliptic curves whose rank was known. Based on these numerical results, Birch and Swinnerton-Dyer conjectured that Np for a curve E with rank r obeys an asymptotic law, the Birch and Swinnerton-Dyer conjecture, considered one of the top unsolved problems in mathematics as of 2024.
Games
In 1952, Sandy Douglas developed OXO, a version of noughts and crosses (tic-tac-toe) for the EDSAC, with graphical output to a VCR97 6" cathode-ray tube. This may well have been the world's first video game.
Another video game was created by Stanley Gill and involved a dot (termed a sheep) approaching a line in which one of two gates could be opened. The Stanley Gill game was controlled via the lightbeam of the EDSAC's paper-tape reader. Interrupting it (such as by the player placing their hand in it) would open the upper gate. Leaving the beam unbroken would result in the lower gate opening.
Further developments
EDSAC's successor, EDSAC 2, was commissioned in 1958.
In 1961, an EDSAC 2 version of Autocode, an ALGOL-like high-level programming language for scientists and engineers, was developed by David Hartley.
In the mid-1960s, a successor to the EDSAC 2 was planned, but the move was instead made to the Titan, a prototype Atlas 2 developed from the Atlas Computer of the University of Manchester, Ferranti, and Plessey.
EDSAC Replica Project
On 13 January 2011, the Computer Conservation Society announced that it planned to build a working replica of EDSAC, at the National Museum of Computing (TNMoC) in Bletchley Park supervised by Andrew Herbert, who studied under Maurice Wilkes. The first parts of the replica were switched on in November 2014. The EDSAC logical circuits were meticulously reconstructed through the development of a simulator and the reexamination of some rediscovered original schematics. This documentation has been released under a Creative Commons license. The ongoing project is open to visitors of the museum. In 2016, two original EDSAC operators, Margaret Marrs and Joyce Wheeler, visited the museum to assist the project. As of November 2016, commissioning of the fully completed and operational state of the replica was estimated to be the autumn of 2017. However, unforeseen project delays have resulted in an unknown date for a completed and fully operational machine.
| Technology | Early computers | null |
10274 | https://en.wikipedia.org/wiki/Enthalpy | Enthalpy | Enthalpy ($H$) is the sum of a thermodynamic system's internal energy and the product of its pressure and volume. It is a state function in thermodynamics used in many measurements in chemical, biological, and physical systems at a constant external pressure, which is conveniently provided by the large ambient atmosphere. The pressure–volume term expresses the work $W$ that was done against constant external pressure $p_{\text{ext}}$ to establish the system's physical dimensions from $V_{\text{initial}} = 0$ to some final volume $V_{\text{final}}$ (as $W = p_{\text{ext}}\,\Delta V$), i.e. to make room for it by displacing its surroundings.
The pressure-volume term is very small for solids and liquids at common conditions, and fairly small for gases. Therefore, enthalpy is a stand-in for energy in chemical systems; bond, lattice, solvation, and other chemical "energies" are actually enthalpy differences. As a state function, enthalpy depends only on the final configuration of internal energy, pressure, and volume, not on the path taken to achieve it.
In the International System of Units (SI), the unit of measurement for enthalpy is the joule. Other historical conventional units still in use include the calorie and the British thermal unit (BTU).
The total enthalpy of a system cannot be measured directly because the internal energy contains components that are unknown, not easily accessible, or are not of interest for the thermodynamic problem at hand. In practice, a change in enthalpy is the preferred expression for measurements at constant pressure, because it simplifies the description of energy transfer. When transfer of matter into or out of the system is also prevented and no electrical or mechanical (stirring shaft or lift pumping) work is done, at constant pressure the enthalpy change equals the energy exchanged with the environment by heat.
In chemistry, the standard enthalpy of reaction is the enthalpy change when reactants in their standard states ($p = 1$ bar; usually $T = 298$ K) change to products in their standard states.
This quantity is the standard heat of reaction at constant pressure and temperature, but it can be measured by calorimetric methods even if the temperature does vary during the measurement, provided that the initial and final pressure and temperature correspond to the standard state. The value does not depend on the path from initial to final state because enthalpy is a state function.
Enthalpies of chemical substances are usually listed for $p = 1$ bar pressure as a standard state. Enthalpies and enthalpy changes for reactions vary as a function of temperature, but tables generally list the standard heats of formation of substances at 25 °C (298 K). For endothermic (heat-absorbing) processes, the change $\Delta H$ is a positive value; for exothermic (heat-releasing) processes it is negative.
The enthalpy of an ideal gas is independent of its pressure or volume, and depends only on its temperature, which correlates to its thermal energy. Real gases at common temperatures and pressures often closely approximate this behavior, which simplifies practical thermodynamic design and analysis.
The word "enthalpy" is derived from the Greek word enthalpein, which means "to heat".
Definition
The enthalpy $H$ of a thermodynamic system is defined as the sum of its internal energy and the product of its pressure and volume:
$H = U + pV,$
where $U$ is the internal energy, $p$ is pressure, and $V$ is the volume of the system; $pV$ is sometimes referred to as the pressure energy.
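For a sense of the magnitude of the $pV$ term, an illustrative ideal-gas estimate (not a figure quoted in the text):

```latex
% Illustrative order-of-magnitude estimate, assuming ideal-gas behaviour:
% for one mole of gas at p = 1 bar and T = 298 K,
pV = nRT = (1\,\mathrm{mol})\,(8.314\,\mathrm{J\,mol^{-1}K^{-1}})\,(298\,\mathrm{K})
\approx 2.5\,\mathrm{kJ},
% so H and U differ by only a few kJ/mol for gases, and by far less
% for solids and liquids, whose volumes are small.
```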
Enthalpy is an extensive property; it is proportional to the size of the system (for homogeneous systems). As intensive properties, the specific enthalpy $h = H/m$ is referenced to a unit of mass $m$ of the system, and the molar enthalpy is $H_m = H/n$, where $n$ is the number of moles. For inhomogeneous systems the enthalpy is the sum of the enthalpies of the component subsystems:
$H = \sum_k H_k,$
where
$H$ is the total enthalpy of all the subsystems,
$k$ refers to the various subsystems,
$H_k$ refers to the enthalpy of each subsystem.
A closed system may lie in thermodynamic equilibrium in a static gravitational field, so that its pressure $p$ varies continuously with altitude, while, because of the equilibrium requirement, its temperature $T$ is invariant with altitude. (Correspondingly, the system's gravitational potential energy density also varies with altitude.) Then the enthalpy summation becomes an integral:
$H = \int (\rho h)\, dV,$
where
$\rho$ ("rho") is density (mass per unit volume),
$h$ is the specific enthalpy (enthalpy per unit mass),
$\rho h$ represents the enthalpy density (enthalpy per unit volume),
$dV$ denotes an infinitesimally small element of volume within the system, for example, the volume of an infinitesimally thin horizontal layer.
The integral therefore represents the sum of the enthalpies of all the elements of the volume.
The enthalpy of a closed homogeneous system is its energy function $H(S, p)$, with its entropy $S$ and its pressure $p$ as natural state variables which provide a differential relation for $dH$ of the simplest form, derived as follows. We start from the first law of thermodynamics for closed systems for an infinitesimal process:
$dU = \delta Q - \delta W,$
where
$\delta Q$ is a small amount of heat added to the system,
$\delta W$ is a small amount of work performed by the system.
In a homogeneous system in which only reversible processes or pure heat transfer are considered, the second law of thermodynamics gives $\delta Q = T\,dS$, with $T$ the absolute temperature and $dS$ the infinitesimal change in entropy $S$ of the system. Furthermore, if only $pV$ work is done, $\delta W = p\,dV$. As a result,
$dU = T\,dS - p\,dV.$
Adding $d(pV)$ to both sides of this expression gives
$dU + d(pV) = T\,dS - p\,dV + d(pV),$
or
$d(U + pV) = T\,dS + V\,dp.$
So
$dH(S, p) = T\,dS + V\,dp,$
and the coefficients of the natural variable differentials $dS$ and $dp$ are just the single variables $T$ and $V$.
Other expressions
The above expression of $dH$ in terms of entropy and pressure may be unfamiliar to some readers. There are also expressions in terms of more directly measurable variables such as temperature and pressure:
$dH = C_p\,dT + V(1 - \alpha T)\,dp.$
Here $C_p$ is the heat capacity at constant pressure and $\alpha$ is the coefficient of (cubic) thermal expansion:
$\alpha = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_p.$
With this expression one can, in principle, determine the enthalpy if $C_p$ and $V$ are known as functions of $T$ and $p$. However the expression is more complicated than $dH = T\,dS + V\,dp$ because $T$ is not a natural variable for the enthalpy $H$.
At constant pressure, $dp = 0$, so that $dH = C_p\,dT$. For an ideal gas, $dH$ reduces to this form even if the process involves a pressure change, because $\alpha T = 1$.
In a more general form, the first law describes the internal energy with additional terms involving the chemical potential and the number of particles of various types. The differential statement for $dH$ then becomes
$dH = T\,dS + V\,dp + \sum_i \mu_i\,dN_i,$
where $\mu_i$ is the chemical potential per particle for a type-$i$ particle, and $N_i$ is the number of such particles. The last term can also be written as $\mu_i\,dn_i$ (with $n_i$ the number of moles of component $i$ added to the system and, in this case, $\mu_i$ the molar chemical potential) or as $\mu_i\,dm_i$ (with $m_i$ the mass of component $i$ added to the system and, in this case, $\mu_i$ the specific chemical potential).
Characteristic functions and natural state variables
The enthalpy, $H(S, p, \{N_i\})$, expresses the thermodynamics of a system in the energy representation. As a function of state, its arguments include both one intensive and several extensive state variables. The state variables $S$, $p$, and $\{N_i\}$ are said to be the natural state variables in this representation. They are suitable for describing processes in which they are determined by factors in the surroundings. For example, when a virtual parcel of atmospheric air moves to a different altitude, the pressure surrounding it changes, and the process is often so rapid that there is too little time for heat transfer. This is the basis of the so-called adiabatic approximation that is used in meteorology.
Conjugate with the enthalpy, with these arguments, the other characteristic function of state of a thermodynamic system is its entropy, as a function $S(H, p, \{N_i\})$ of the same list of variables of state, except that the entropy, $S$, is replaced in the list by the enthalpy, $H$. It expresses the entropy representation. The state variables $H$, $p$, and $\{N_i\}$ are said to be the natural state variables in this representation. They are suitable for describing processes in which they are experimentally controlled. For example, $H$ and $p$ can be controlled by allowing heat transfer, and by varying only the external pressure on the piston that sets the volume of the system.
Physical interpretation
The $U$ term is the energy of the system, and the $pV$ term can be interpreted as the work that would be required to "make room" for the system if the pressure of the environment remained constant. When a system, for example, $n$ moles of a gas of volume $V$ at pressure $p$ and temperature $T$, is created or brought to its present state from absolute zero, energy must be supplied equal to its internal energy $U$ plus $pV$, where $pV$ is the work done in pushing against the ambient (atmospheric) pressure.
In physics and statistical mechanics it may be more interesting to study the internal properties of a constant-volume system and therefore the internal energy $U$ is used.
In chemistry, experiments are often conducted at constant atmospheric pressure, and the pressure–volume work represents a small, well-defined energy exchange with the atmosphere, so that $\Delta H$ is the appropriate expression for the heat of reaction. For a heat engine, the change in its enthalpy after a full cycle is equal to zero, since the final and initial state are equal.
Relationship to heat
In order to discuss the relation between the enthalpy increase and heat supply, we return to the first law for closed systems, with the physics sign convention: $dU = \delta Q - \delta W$, where the heat $\delta Q$ is supplied by conduction, radiation, or Joule heating. We apply it to the special case with a constant pressure at the surface. In this case the work is given by $p\,dV$ (where $p$ is the pressure at the surface and $dV$ is the increase of the volume of the system). Cases of long range electromagnetic interaction require further state variables in their formulation, and are not considered here. In this case the first law reads:
$dU = \delta Q - p\,dV.$
Now,
$dH = dU + d(pV).$
So
$dH = \delta Q - p\,dV + p\,dV + V\,dp = \delta Q + V\,dp.$
If the system is under constant pressure, $dp = 0$, and consequently, the increase in enthalpy of the system is equal to the heat added:
$dH = \delta Q.$
This is why the now-obsolete term heat content was used for enthalpy in the 19th century.
Applications
In thermodynamics, one can calculate enthalpy by determining the requirements for creating a system from "nothingness"; the mechanical work required, $pV$, differs based upon the conditions that obtain during the creation of the thermodynamic system.
Energy must be supplied to remove particles from the surroundings to make space for the creation of the system, assuming that the pressure $p$ remains constant; this is the $pV$ term. The supplied energy must also provide the change in internal energy $U$, which includes activation energies, ionization energies, mixing energies, vaporization energies, chemical bond energies, and so forth. Together, these constitute the change in the enthalpy $U + pV$. For systems at constant pressure, with no external work done other than the $pV$ work, the change in enthalpy is the heat received by the system.
For a simple system with a constant number of particles at constant pressure, the difference in enthalpy is the maximum amount of thermal energy derivable from an isobaric thermodynamic process.
Heat of reaction
The total enthalpy of a system cannot be measured directly; the enthalpy change of a system is measured instead. Enthalpy change is defined by the following equation:
$\Delta H = H_f - H_i,$
where
$\Delta H$ is the "enthalpy change",
$H_f$ is the final enthalpy of the system (in a chemical reaction, the enthalpy of the products or the system at equilibrium),
$H_i$ is the initial enthalpy of the system (in a chemical reaction, the enthalpy of the reactants).
For an exothermic reaction at constant pressure, the system's change in enthalpy, $\Delta H$, is negative due to the products of the reaction having a smaller enthalpy than the reactants, and equals the heat released in the reaction if no electrical or shaft work is done. In other words, the overall decrease in enthalpy is achieved by the generation of heat.
Conversely, for a constant-pressure endothermic reaction, $\Delta H$ is positive and equal to the heat absorbed in the reaction.
From the definition of enthalpy as $H = U + pV$, the enthalpy change at constant pressure is $\Delta H = \Delta U + p\,\Delta V$. However, for most chemical reactions, the work term $p\,\Delta V$ is much smaller than the internal energy change $\Delta U$, which is approximately equal to $\Delta H$. As an example, for the combustion of carbon monoxide 2 CO(g) + O2(g) → 2 CO2(g), $\Delta H = -566.0$ kJ and $\Delta U = -563.5$ kJ.
Since the differences are so small, reaction enthalpies are often described as reaction energies and analyzed in terms of bond energies.
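The size of that difference can be checked with the ideal-gas relation; an illustrative calculation consistent with the CO example above, not a value from the source:

```latex
% For 2 CO(g) + O2(g) -> 2 CO2(g), the number of moles of gas drops
% by one, so, assuming ideal-gas behaviour at T = 298 K:
\Delta H - \Delta U = \Delta(pV) \approx \Delta n_{\text{gas}}\,RT
 = (-1\,\mathrm{mol})\,(8.314\,\mathrm{J\,mol^{-1}K^{-1}})\,(298\,\mathrm{K})
 \approx -2.5\,\mathrm{kJ},
% matching -566.0 kJ - (-563.5 kJ) = -2.5 kJ from the quoted values.
```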
Specific enthalpy
The specific enthalpy of a uniform system is defined as $h = H/m$, where $m$ is the mass of the system. Its SI unit is joule per kilogram. It can be expressed in other specific quantities by $h = u + pv$, where $u$ is the specific internal energy, $p$ is the pressure, and $v$ is specific volume, which is equal to $1/\rho$, where $\rho$ is the density.
Enthalpy changes
An enthalpy change describes the change in enthalpy observed in the constituents of a thermodynamic system when undergoing a transformation or chemical reaction. It is the difference between the enthalpy after the process has completed, i.e. the enthalpy of the products assuming that the reaction goes to completion, and the initial enthalpy of the system, namely the reactants. These processes are specified solely by their initial and final states, so that the enthalpy change for the reverse is the negative of that for the forward process.
A common standard enthalpy change is the enthalpy of formation, which has been determined for a large number of substances. Enthalpy changes are routinely measured and compiled in chemical and physical reference works, such as the CRC Handbook of Chemistry and Physics. The following is a selection of enthalpy changes commonly recognized in thermodynamics.
When used in these recognized terms the qualifier change is usually dropped and the property is simply termed enthalpy of process. Since these properties are often used as reference values it is very common to quote them for a standardized set of environmental parameters, or standard conditions, including:
A pressure of one atmosphere (1 atm or 1013.25 hPa) or 1 bar
A temperature of 25 °C or 298.15 K
A concentration of 1.0 M when the element or compound is present in solution
Elements or compounds in their normal physical states, i.e. standard state
For such standardized values the name of the enthalpy is commonly prefixed with the term standard, e.g. standard enthalpy of formation.
Chemical properties
Enthalpy of reaction - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of substance reacts completely.
Enthalpy of formation - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a compound is formed from its elementary antecedents.
Enthalpy of combustion - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a substance burns completely with oxygen.
Enthalpy of hydrogenation - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of an unsaturated compound reacts completely with an excess of hydrogen to form a saturated compound.
Enthalpy of atomization - is defined as the enthalpy change required to separate one mole of a substance into its constituent atoms completely.
Enthalpy of neutralization - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of water is formed when an acid and a base react.
Standard enthalpy of solution - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a solute is dissolved completely in an excess of solvent, so that the solution is at infinite dilution.
Standard enthalpy of denaturation (biochemistry) - is defined as the enthalpy change required to denature one mole of compound.
Enthalpy of hydration - is defined as the enthalpy change observed when one mole of gaseous ions are completely dissolved in water forming one mole of aqueous ions.
Physical properties
Enthalpy of fusion - is defined as the enthalpy change required to completely change the state of one mole of substance from solid to liquid.
Enthalpy of vaporization - is defined as the enthalpy change required to completely change the state of one mole of substance from liquid to gas.
Enthalpy of sublimation - is defined as the enthalpy change required to completely change the state of one mole of substance from solid to gas.
Lattice enthalpy - is defined as the energy required to separate one mole of an ionic compound into separated gaseous ions to an infinite distance apart (meaning no force of attraction).
Enthalpy of mixing - is defined as the enthalpy change upon mixing of two (non-reacting) chemical substances.
Open systems
In thermodynamic open systems, mass (of substances) may flow in and out of the system boundaries. The first law of thermodynamics for open systems states: The increase in the internal energy of a system is equal to the amount of energy added to the system by mass flowing in and by heating, minus the amount lost by mass flowing out and in the form of work done by the system:
$dU = \delta Q + dU_{\text{in}} - dU_{\text{out}} - \delta W,$
where $U_{\text{in}}$ is the average internal energy entering the system, and $U_{\text{out}}$ is the average internal energy leaving the system.
The region of space enclosed by the boundaries of the open system is usually called a control volume, and it may or may not correspond to physical walls. If we choose the shape of the control volume such that all flow in or out occurs perpendicular to its surface, then the flow of mass into the system performs work as if it were a piston of fluid pushing mass into the system, and the system performs work on the flow of mass out as if it were driving a piston of fluid. There are then two types of work performed: flow work described above, which is performed on the fluid (this is also often called $pV$ work), and mechanical work (shaft work), which may be performed on some mechanical device such as a turbine or pump.
These two types of work are expressed in the equation
$\delta W = d(p_{\text{out}} V_{\text{out}}) - d(p_{\text{in}} V_{\text{in}}) + \delta W_{\text{shaft}}.$
Substitution into the equation above for the control volume (cv) yields:
$dU_{\text{cv}} = \delta Q + dU_{\text{in}} + d(p_{\text{in}} V_{\text{in}}) - dU_{\text{out}} - d(p_{\text{out}} V_{\text{out}}) - \delta W_{\text{shaft}}.$
The definition of enthalpy, $H = U + pV$, permits us to use this thermodynamic potential to account for both internal energy and $pV$ work in fluids for open systems:
$dU_{\text{cv}} = \delta Q + dH_{\text{in}} - dH_{\text{out}} - \delta W_{\text{shaft}}.$
If we allow also the system boundary to move (e.g. due to moving pistons), we get a rather general form of the first law for open systems.
In terms of time derivatives, using Newton's dot notation for time derivatives, it reads:
$\frac{dU}{dt} = \sum_k \dot Q_k + \sum_k \dot H_k - \sum_k p_k \frac{dV_k}{dt} - P,$
with sums over the various places $k$ where heat is supplied, mass flows into the system, and boundaries are moving. The $\dot H_k$ terms represent enthalpy flows, which can be written as
$\dot H_k = h_k \dot m_k = H_{m,k}\,\dot n_k,$
with $\dot m_k$ the mass flow and $\dot n_k$ the molar flow at position $k$ respectively. The term $p_k\,dV_k/dt$ represents the rate of change of the system volume at position $k$ that results in $pV$ power done by the system. The parameter $P$ represents all other forms of power done by the system such as shaft power, but it can also be, say, electric power produced by an electrical power plant.
Note that the previous expression holds true only if the kinetic energy flow rate is conserved between system inlet and outlet. Otherwise, it has to be included in the enthalpy balance. During steady-state operation of a device (see turbine, pump, and engine), the average $dU/dt$ may be set equal to zero. This yields a useful expression for the average power generation for these devices in the absence of chemical reactions:
$P = \sum_k \langle \dot Q_k \rangle + \sum_k \langle \dot H_k \rangle - \sum_k \left\langle p_k \frac{dV_k}{dt} \right\rangle,$
where the angle brackets denote time averages. The technical importance of the enthalpy is directly related to its presence in the first law for open systems, as formulated above.
Diagrams
The enthalpy values of important substances can be obtained using commercial software. Practically all relevant material properties can be obtained either in tabular or in graphical form. There are many types of diagrams, such as $h$–$T$ diagrams, which give the specific enthalpy as a function of temperature for various pressures, and $h$–$p$ diagrams, which give $h$ as a function of $p$ for various $T$. One of the most common diagrams is the temperature–specific entropy diagram ($T$–$s$ diagram). It gives the melting curve and saturated liquid and vapor values together with isobars and isenthalps. These diagrams are powerful tools in the hands of the thermal engineer.
Some basic applications
The points a through h in the figure play a role in the discussion in this section.
{| class="wikitable" style="text-align:center"
|-
|Point
! T !! p !! s !! h
|- style="background:#EEEEEE;"
| Unit || K || bar || kJ/(kg·K) || kJ/kg
|-
| a || 300 || 1 || 6.85 || 461
|-
| b || 380 || 2 || 6.85 || 530
|-
| c || 300 || 200 || 5.16 || 430
|-
| d || 270 || 1 || 6.79 || 430
|-
| e || 108 || 13 || 3.55 || 100
|-
| f || 77.2 || 1 || 3.75 || 100
|-
| g || 77.2 || 1 || 2.83 || 28
|-
| h || 77.2 || 1 || 5.41 || 230
|}
Points e and g are saturated liquids, and point h is a saturated gas.
Throttling
One of the simple applications of the concept of enthalpy is the so-called throttling process, also known as Joule–Thomson expansion. It concerns a steady adiabatic flow of a fluid through a flow resistance (valve, porous plug, or any other type of flow resistance) as shown in the figure. This process is very important, since it is at the heart of domestic refrigerators, where it is responsible for the temperature drop between ambient temperature and the interior of the refrigerator. It is also the final stage in many types of liquefiers.
For a steady-state flow regime, the enthalpy flow into the system (dotted rectangle) has to equal the enthalpy flow out. Hence
$\dot m h_1 = \dot m h_2.$
Since the mass flow $\dot m$ is constant, the specific enthalpies at the two sides of the flow resistance are the same:
$h_1 = h_2,$
that is, the enthalpy per unit mass does not change during the throttling. The consequences of this relation can be demonstrated using the $T$–$s$ diagram above.
Example 1
Point c is at 200 bar and room temperature (300 K). A Joule–Thomson expansion from 200 bar to 1 bar follows a curve of constant enthalpy of roughly 425 kJ/kg (not shown in the diagram) lying between the 400 and 450 kJ/kg isenthalps and ends in point d, which is at a temperature of about 270 K. Hence the expansion from 200 bar to 1 bar cools nitrogen from 300 K to 270 K. In the valve, there is a lot of friction, and a lot of entropy is produced, but still the final temperature is below the starting value.
Example 2
Point e is chosen so that it is on the saturated liquid line with $h_e = 100$ kJ/kg. It corresponds roughly with $p = 13$ bar and $T = 108$ K. Throttling from this point to a pressure of 1 bar ends in the two-phase region (point f). This means that a mixture of gas and liquid leaves the throttling valve. Since the enthalpy is an extensive parameter, the enthalpy in f ($h_f$) is equal to the enthalpy in g ($h_g$) multiplied by the liquid fraction in f ($x_f$) plus the enthalpy in h ($h_h$) multiplied by the gas fraction in f ($1 - x_f$). So
$h_f = x_f h_g + (1 - x_f) h_h.$
With numbers: $100 = x_f \times 28 + (1 - x_f) \times 230$, so $x_f = 0.64$.
This means that the mass fraction of the liquid in the liquid–gas mixture that leaves the throttling valve is 64%.
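The lever-rule arithmetic can be checked directly from the table values; a quick illustrative computation:

```python
# Lever-rule check for Example 2, using enthalpies from the table
# above (all in kJ/kg): h_f = 100 after throttling, saturated liquid
# h_g = 28 and saturated gas h_h = 230 at 1 bar.
h_f, h_liquid, h_gas = 100.0, 28.0, 230.0

# h_f = x * h_liquid + (1 - x) * h_gas  =>  solve for liquid fraction x
x = (h_gas - h_f) / (h_gas - h_liquid)
print(f"liquid mass fraction x = {x:.2f}")   # -> 0.64
```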
Compressors
A power $P$ is applied, e.g. as electrical power. If the compression is adiabatic, the gas temperature goes up. In the reversible case it would be at constant entropy, which corresponds with a vertical line in the $T$–$s$ diagram. For example, compressing nitrogen from 1 bar (point a) to 2 bar (point b) would result in a temperature increase from 300 K to 380 K. In order to let the compressed gas exit at ambient temperature $T_a$, heat exchange, e.g. by cooling water, is necessary. In the ideal case the compression is isothermal. The average heat flow to the surroundings is $\dot Q$. Since the system is in the steady state the first law gives
$0 = -\dot Q + \dot m h_1 - \dot m h_2 + P.$
The minimal power needed for the compression is realized if the compression is reversible. In that case the second law of thermodynamics for open systems gives
$0 = -\frac{\dot Q}{T_a} + \dot m s_1 - \dot m s_2.$
Eliminating $\dot Q$ gives for the minimal power
$\frac{P_{\text{min}}}{\dot m} = h_2 - h_1 - T_a (s_2 - s_1).$
For example, compressing 1 kg of nitrogen from 1 bar to 200 bar at 300 K costs at least $(h_c - h_a) - T_a (s_c - s_a)$. With the data obtained from the $T$–$s$ diagram, we find a value of $(430 - 461) - 300 \times (5.16 - 6.85) = 476$ kJ/kg.
The relation for the power can be further simplified by writing it as
$\frac{P_{\text{min}}}{\dot m} = \int_1^2 (dh - T_a\,ds).$
With $dh = T\,ds + v\,dp$, this results in the final relation
$\frac{P_{\text{min}}}{\dot m} = \int_1^2 v\,dp.$
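Both the table-based result and the ideal-gas form of the final relation can be checked numerically; an illustrative sketch, taking the specific gas constant of nitrogen as 0.2968 kJ/(kg·K):

```python
import math

# Minimal power to compress nitrogen isothermally at T_a = 300 K from
# 1 bar (point a) to 200 bar (point c), using the table values above
# (h in kJ/kg, s in kJ/(kg K)); a sketch, not a design calculation.
T_a = 300.0
h_a, s_a = 461.0, 6.85      # 300 K, 1 bar
h_c, s_c = 430.0, 5.16      # 300 K, 200 bar

w_min = (h_c - h_a) - T_a * (s_c - s_a)
print(f"table data: w_min = {w_min:.0f} kJ/kg")        # -> 476 kJ/kg

# Ideal-gas cross-check of the final relation, w_min = R T ln(p2/p1):
R = 0.2968                  # specific gas constant of N2, kJ/(kg K)
print(f"ideal gas:  w_min = {R * T_a * math.log(200 / 1):.0f} kJ/kg")  # ~472
```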
History and etymology
The term enthalpy was coined relatively late in the history of thermodynamics, in the early 20th century. Energy was introduced in a modern sense by Thomas Young in 1802, while entropy was coined by Rudolf Clausius in 1865. Energy uses the root of the Greek word ἔργον (ergon), meaning "work", to express the idea of capacity to perform work. Entropy uses the Greek word τροπή (tropē), meaning transformation or turning. Enthalpy uses the root of the Greek word θάλπος (thalpos), "warmth, heat".
The term expresses the obsolete concept of heat content, as $dH$ refers to the amount of heat gained in a process at constant pressure only, but not in the general case when pressure is variable. J. W. Gibbs used the term "a heat function for constant pressure" for clarity.
Introduction of the concept of "heat content" is associated with Benoît Paul Émile Clapeyron and Rudolf Clausius (Clausius–Clapeyron relation, 1850).
The term enthalpy first appeared in print in 1909. It is attributed to Heike Kamerlingh Onnes, who most likely introduced it orally the year before, at the first meeting of the Institute of Refrigeration in Paris. It gained currency only in the 1920s, notably with the Mollier Steam Tables and Diagrams, published in 1927.
Until the 1920s, the symbol $H$ was used, somewhat inconsistently, for "heat" in general. The definition of $H$ as strictly limited to enthalpy or "heat content at constant pressure" was formally proposed by A. W. Porter in 1922.
| Physical sciences | Thermodynamics | null |
10290 | https://en.wikipedia.org/wiki/Emulsion | Emulsion | An emulsion is a mixture of two or more liquids that are normally immiscible (unmixable or unblendable) owing to liquid-liquid phase separation. Emulsions are part of a more general class of two-phase systems of matter called colloids. Although the terms colloid and emulsion are sometimes used interchangeably, emulsion should be used when both phases, dispersed and continuous, are liquids. In an emulsion, one liquid (the dispersed phase) is dispersed in the other (the continuous phase). Examples of emulsions include vinaigrettes, homogenized milk, liquid biomolecular condensates, and some cutting fluids for metal working.
Two liquids can form different types of emulsions. As an example, oil and water can form, first, an oil-in-water emulsion, in which the oil is the dispersed phase, and water is the continuous phase. Second, they can form a water-in-oil emulsion, in which water is the dispersed phase and oil is the continuous phase. Multiple emulsions are also possible, including a "water-in-oil-in-water" emulsion and an "oil-in-water-in-oil" emulsion.
Emulsions, being liquids, do not exhibit a static internal structure. The droplets dispersed in the continuous phase (sometimes referred to as the "dispersion medium") are usually assumed to be statistically distributed to produce roughly spherical droplets.
The term "emulsion" is also used to refer to the photo-sensitive side of photographic film. Such a photographic emulsion consists of silver halide colloidal particles dispersed in a gelatin matrix. Nuclear emulsions are similar to photographic emulsions, except that they are used in particle physics to detect high-energy elementary particles.
Etymology
The word "emulsion" comes from the Latin emulgere "to milk out", from ex "out" + mulgere "to milk", as milk is an emulsion of fat and water, along with other components, including colloidal casein micelles (a type of secreted biomolecular condensate).
Appearance and properties
Emulsions contain both a dispersed and a continuous phase, with the boundary between the phases called the "interface". Emulsions tend to have a cloudy appearance because the many phase interfaces scatter light as it passes through the emulsion. Emulsions appear white when all light is scattered equally. If the emulsion is dilute enough, higher-frequency (shorter-wavelength) light will be scattered more, and the emulsion will appear bluer – this is called the "Tyndall effect". If the emulsion is concentrated enough, the color will be distorted toward comparatively longer wavelengths, and will appear more yellow. This phenomenon is easily observable when comparing skimmed milk, which contains little fat, to cream, which contains a much higher concentration of milk fat. One example would be a mixture of water and oil.
Two special classes of emulsions – microemulsions and nanoemulsions, with droplet sizes below 100 nm – appear translucent. This property is due to the fact that light waves are scattered by the droplets only if their sizes exceed about one-quarter of the wavelength of the incident light. Since the visible spectrum of light is composed of wavelengths between 390 and 750 nanometers (nm), if the droplet sizes in the emulsion are below about 100 nm, the light can penetrate through the emulsion without being scattered. Due to their similarity in appearance, translucent nanoemulsions and microemulsions are frequently confused. Unlike translucent nanoemulsions, which require specialized equipment to be produced, microemulsions are spontaneously formed by "solubilizing" oil molecules with a mixture of surfactants, co-surfactants, and co-solvents. The required surfactant concentration in a microemulsion is, however, several times higher than that in a translucent nanoemulsion, and significantly exceeds the concentration of the dispersed phase. Because of many undesirable side-effects caused by surfactants, their presence is disadvantageous or prohibitive in many applications. In addition, the stability of a microemulsion is often easily compromised by dilution, by heating, or by changing pH levels.
Common emulsions are inherently unstable and, thus, do not tend to form spontaneously. Energy input – through shaking, stirring, homogenizing, or exposure to power ultrasound – is needed to form an emulsion. Over time, emulsions tend to revert to the stable state of the phases comprising the emulsion. An example of this is seen in the separation of the oil and vinegar components of vinaigrette, an unstable emulsion that will quickly separate unless shaken almost continuously. There are important exceptions to this rule – microemulsions are thermodynamically stable, while translucent nanoemulsions are kinetically stable.
Whether an emulsion of oil and water turns into a "water-in-oil" emulsion or an "oil-in-water" emulsion depends on the volume fraction of both phases and the type of emulsifier (surfactant) (see Emulsifier, below) present.
Instability
Emulsion stability refers to the ability of an emulsion to resist change in its properties over time. There are four types of instability in emulsions: flocculation, coalescence, creaming/sedimentation, and Ostwald ripening. Flocculation occurs when there is an attractive force between the droplets, so they form flocs, like bunches of grapes. This process can be desired, if controlled in its extent, to tune physical properties of emulsions such as their flow behaviour. Coalescence occurs when droplets bump into each other and combine to form a larger droplet, so the average droplet size increases over time. Emulsions can also undergo creaming, where the droplets rise to the top of the emulsion under the influence of buoyancy, or under the influence of the centripetal force induced when a centrifuge is used. Creaming is a common phenomenon in dairy and non-dairy beverages (e.g. milk, coffee milk, almond milk, soy milk) and usually does not change the droplet size. Sedimentation is the opposite phenomenon of creaming and is normally observed in water-in-oil emulsions. Sedimentation happens when the dispersed phase is denser than the continuous phase and the gravitational forces pull the denser globules towards the bottom of the emulsion. Like creaming, sedimentation follows Stokes' law.
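Stokes' law gives the steady rise or settling velocity of a small spherical droplet; the standard statement is quoted below for illustration, with symbol names chosen for this sketch:

```latex
% Stokes' law for the creaming/sedimentation velocity of a small
% spherical droplet (standard result, added here for illustration):
v = \frac{2\,r^{2}\,(\rho_{d} - \rho_{c})\,g}{9\,\eta}
% r: droplet radius; \rho_d, \rho_c: densities of the dispersed and
% continuous phases; g: gravitational acceleration; \eta: viscosity
% of the continuous phase. A positive v (denser droplets) means
% sedimentation; a negative v means creaming.
```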
An appropriate surface active agent (or surfactant) can increase the kinetic stability of an emulsion so that the size of the droplets does not change significantly with time. The stability of an emulsion, like a suspension, can be studied in terms of zeta potential, which indicates the repulsion between droplets or particles. If the size and dispersion of droplets does not change over time, it is said to be stable. For example, oil-in-water emulsions containing mono- and diglycerides and milk protein as surfactants showed stable oil droplet size over 28 days of storage at 25 °C.
Monitoring physical stability
The stability of emulsions can be characterized using techniques such as light scattering, focused beam reflectance measurement, centrifugation, and rheology. Each method has advantages and disadvantages.
Accelerating methods for shelf life prediction
The kinetic process of destabilization can be rather long – up to several months, or even years for some products. Often the formulator must accelerate this process in order to test products in a reasonable time during product design. Thermal methods are the most commonly used – these consist of increasing the emulsion temperature to accelerate destabilization (if below critical temperatures for phase inversion or chemical degradation). Temperature affects not only the viscosity but also the interfacial tension in the case of non-ionic surfactants or, on a broader scope, interactions between droplets within the system. Storing an emulsion at high temperatures enables the simulation of realistic conditions for a product (e.g., a tube of sunscreen emulsion in a car in the summer heat), but also accelerates destabilization processes up to 200 times.
Mechanical methods of acceleration, including vibration, centrifugation, and agitation, can also be used.
These methods are almost always empirical, without a sound scientific basis.
Emulsifiers
An emulsifier is a substance that stabilizes an emulsion by reducing the oil-water interface tension. Emulsifiers are a part of a broader group of compounds known as surfactants, or "surface-active agents". Surfactants are compounds that are typically amphiphilic, meaning they have a polar or hydrophilic (i.e., water-soluble) part and a non-polar (i.e., hydrophobic or lipophilic) part. Emulsifiers that are more soluble in water (and, conversely, less soluble in oil) will generally form oil-in-water emulsions, while emulsifiers that are more soluble in oil will form water-in-oil emulsions.
Examples of food emulsifiers are:
Egg yolk – in which the main emulsifying and thickening agent is lecithin.
Mustard – where a variety of chemicals in the mucilage surrounding the seed hull act as emulsifiers
Soy lecithin is another emulsifier and thickener
Pickering stabilization – uses particles under certain circumstances
Mono- and diglycerides – a common emulsifier found in many food products (coffee creamers, ice creams, spreads, breads, cakes)
Sodium stearoyl lactylate
DATEM (diacetyl tartaric acid esters of mono- and diglycerides) – an emulsifier used primarily in baking
Proteins – those with both hydrophilic and hydrophobic regions, e.g. sodium caseinate, as in meltable cheese product
In food emulsions, the type of emulsifier greatly affects how emulsions are structured in the stomach and how accessible the oil is for gastric lipases, thereby influencing how fast emulsions are digested and how quickly they trigger a satiety-inducing hormone response.
Detergents are another class of surfactant, and will interact physically with both oil and water, thus stabilizing the interface between the oil and water droplets in suspension. This principle is exploited in soap, to remove grease for the purpose of cleaning. Many different emulsifiers are used in pharmacy to prepare emulsions such as creams and lotions. Common examples include emulsifying wax, polysorbate 20, and ceteareth 20.
Sometimes the inner phase itself can act as an emulsifier, and the result is a nanoemulsion, where the inner phase disperses into "nano-size" droplets within the outer phase. A well-known example of this phenomenon, the "ouzo effect", happens when water is poured into a strong alcoholic anise-based beverage, such as ouzo, pastis, absinthe, arak, or raki. The anisolic compounds, which are soluble in ethanol, then form nano-size droplets and emulsify within the water. The resulting color of the drink is opaque and milky white.
Mechanisms of emulsification
A number of different chemical and physical processes and mechanisms can be involved in the process of emulsification:
Surface tension theory – according to this theory, emulsification takes place by reduction of interfacial tension between two phases
Repulsion theory – According to this theory, the emulsifier creates a film over one phase that forms globules, which repel each other. This repulsive force causes them to remain suspended in the dispersion medium
Viscosity modification – emulgents like acacia and tragacanth, which are hydrocolloids, as well as PEG (polyethylene glycol), glycerine, and other polymers like CMC (carboxymethyl cellulose), all increase the viscosity of the medium, which helps create and maintain the suspension of globules of dispersed phase
Uses
In food
Oil-in-water emulsions are common in food products:
Mayonnaise and Hollandaise sauces – these are oil-in-water emulsions stabilized with egg yolk lecithin, or with other types of food additives, such as sodium stearoyl lactylate
Homogenized milk – an emulsion of milk fat in water, with milk proteins as the emulsifier
Vinaigrette – an emulsion of vegetable oil in vinegar; if this is prepared using only oil and vinegar (i.e., without an emulsifier), an unstable emulsion results
Water-in-oil emulsions are less common in food, but still exist:
Butter – an emulsion of water in butterfat
Margarine
Other foods can be turned into products similar to emulsions, for example meat emulsion is a suspension of meat in liquid that is similar to true emulsions.
In health care
In pharmaceutics, hairstyling, personal hygiene, and cosmetics, emulsions are frequently used. These are usually oil-and-water emulsions, but which phase is dispersed and which is continuous depends in many cases on the pharmaceutical formulation. These emulsions may be called creams, ointments, liniments (balms), pastes, films, or liquids, depending mostly on their oil-to-water ratios, other additives, and their intended route of administration. The first five are topical dosage forms, and may be used on the surface of the skin, transdermally, ophthalmically, rectally, or vaginally. A highly liquid emulsion may also be used orally, or may be injected in some cases.
Microemulsions are used to deliver vaccines and kill microbes. Typical emulsions used in these techniques are nanoemulsions of soybean oil, with particles that are 400–600 nm in diameter. The process is not chemical, as with other types of antimicrobial treatments, but mechanical. The smaller the droplet the greater the surface tension and thus the greater the force required to merge with other lipids. The oil is emulsified with detergents using a high-shear mixer to stabilize the emulsion so, when they encounter the lipids in the cell membrane or envelope of bacteria or viruses, they force the lipids to merge with themselves. On a mass scale, in effect this disintegrates the membrane and kills the pathogen. The soybean oil emulsion does not harm normal human cells, or the cells of most other higher organisms, with the exceptions of sperm cells and blood cells, which are vulnerable to nanoemulsions due to the peculiarities of their membrane structures. For this reason, these nanoemulsions are not currently used intravenously (IV). The most effective application of this type of nanoemulsion is for the disinfection of surfaces. Some types of nanoemulsions have been shown to effectively destroy HIV-1 and tuberculosis pathogens on non-porous surfaces.
Applications in the pharmaceutical industry
Oral drug delivery: Emulsions may provide an efficient means of administering drugs that are poorly soluble or have low bioavailability. The increased surface area provided by an emulsion raises the dissolution and absorption rates of drugs, improving their bioavailability.
Topical formulations: Emulsions are widely utilized as bases for topical drug delivery formulations such as creams, lotions and ointments. Their incorporation allows lipophilic as well as hydrophilic drugs to be mixed together for maximum skin penetration and permeation of active ingredients.
Parenteral drug delivery: Emulsions serve as carriers for intravenous or intramuscular administration of drugs, solubilizing lipophilic ones while protecting from degradation and decreasing injection site irritation. Examples include propofol as a widely used anesthetic and lipid-based solutions used for total parenteral nutrition delivery.
Ocular drug delivery: Emulsions can be used to formulate eye drops and other ocular drug delivery systems, increasing drug retention time in the eye, easing permeation through corneal barriers, and providing sustained release of active ingredients, thus increasing therapeutic efficacy.
Nasal and pulmonary drug delivery: Emulsions can be an ideal vehicle for creating nasal sprays and inhalable drug products, enhancing drug absorption through the nasal and pulmonary mucosa while providing sustained release with reduced local irritation.
Vaccine adjuvants: Emulsions can serve as vaccine adjuvants by strengthening immune responses against specific antigens. They can enhance antigen solubility and uptake by immune cells while simultaneously providing controlled release, amplifying the immunological response.
Taste masking: Emulsions can be used to encase bitter or otherwise unpleasant-tasting drugs, masking their taste and increasing patient compliance, particularly with pediatric formulations.
Cosmeceuticals: Emulsions are widely utilized in cosmeceuticals products that combine cosmetic and pharmaceutical properties. These emulsions act as carriers for active ingredients like vitamins, antioxidants and skin lightening agents to provide improved skin penetration and increased stability.
In firefighting
Emulsifying agents are effective at extinguishing fires on small, thin-layer spills of flammable liquids (class B fires). Such agents encapsulate the fuel in a fuel-water emulsion, thereby trapping the flammable vapors in the water phase. This emulsion is achieved by applying an aqueous surfactant solution to the fuel through a high-pressure nozzle. Emulsifiers are not effective at extinguishing large fires involving bulk/deep liquid fuels, because the amount of emulsifier agent needed for extinguishment is a function of the volume of the fuel, whereas other agents such as aqueous film-forming foam need cover only the surface of the fuel to achieve vapor mitigation.
Chemical synthesis
Emulsions are used to manufacture polymer dispersions – polymer production in an emulsion 'phase' has a number of process advantages, including prevention of coagulation of product. The products of such polymerisations may themselves be used as emulsions, including primary components for glues and paints. Synthetic latexes (rubbers) are also produced by this process.
| Physical sciences | Chemical mixtures: General | null |
10294 | https://en.wikipedia.org/wiki/Encryption | Encryption | In cryptography, encryption (more specifically, encoding) is the process of transforming information in a way that, ideally, only authorized parties can decode. This process converts the original representation of the information, known as plaintext, into an alternative form known as ciphertext. Despite its goal, encryption does not itself prevent interference but denies the intelligible content to a would-be interceptor.
For technical reasons, an encryption scheme usually uses a pseudo-random encryption key generated by an algorithm. It is possible to decrypt the message without possessing the key but, for a well-designed encryption scheme, considerable computational resources and skills are required. An authorized recipient can easily decrypt the message with the key provided by the originator to recipients but not to unauthorized users.
Historically, various forms of encryption have been used to aid in cryptography. Early encryption techniques were often used in military messaging. Since then, new techniques have emerged and become commonplace in all areas of modern computing. Modern encryption schemes use the concepts of public-key and symmetric-key. Modern encryption techniques ensure security because modern computers are inefficient at cracking the encryption.
History
Ancient
One of the earliest forms of encryption is symbol replacement, which was first found in the tomb of Khnumhotep II, who lived around 1900 BC in Egypt. Symbol replacement encryption is "non-standard," which means that the symbols require a cipher or key to be understood. This type of early encryption was used throughout Ancient Greece and Rome for military purposes. One of the most famous military encryption developments was the Caesar cipher, in which a plaintext letter is shifted a fixed number of positions along the alphabet to get the encoded letter. A message encoded with this type of encryption could be decoded only by someone who knew the fixed shift.
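A minimal modern illustration of the Caesar cipher follows; the function name and sample text are hypothetical, not historical:

```python
# A minimal Caesar cipher: shift each letter a fixed number of
# positions along the alphabet, leaving other characters unchanged.
def caesar(text: str, shift: int) -> str:
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)          # spaces and punctuation pass through
    return ''.join(result)

ciphertext = caesar("ATTACK AT DAWN", 3)   # -> "DWWDFN DW GDZQ"
plaintext  = caesar(ciphertext, -3)        # decoding uses the negative shift
print(ciphertext, plaintext)
```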
Around 800 AD, Arab mathematician Al-Kindi developed the technique of frequency analysis – which was an attempt to crack ciphers systematically, including the Caesar cipher. This technique looked at the frequency of letters in the encrypted message to determine the appropriate shift: for example, the most common letter in English text is E and is therefore likely to be represented by the letter that appears most commonly in the ciphertext. This technique was rendered ineffective by the polyalphabetic cipher, described by Al-Qalqashandi (1355–1418) and Leon Battista Alberti (in 1465), which varied the substitution alphabet as encryption proceeded in order to confound such analysis.
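A sketch of the same idea in Python, assuming the ciphertext is long enough (and ordinary enough) for 'E' to dominate the plaintext letter frequencies; the sample string is an illustrative shift-3 encoding:

```python
from collections import Counter

# Frequency analysis against a Caesar cipher: assume the most frequent
# ciphertext letter stands for 'E' and recover the shift. This heuristic
# needs text long enough for English letter statistics to apply.
def guess_caesar_shift(ciphertext: str) -> int:
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    most_common = Counter(letters).most_common(1)[0][0]
    return (ord(most_common) - ord('E')) % 26

sample = "GHIHQG WKH HDVW ZDOO RI WKH FDVWOH"  # shift-3 English text
print(guess_caesar_shift(sample))              # -> 3
```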
19th–20th century
Around 1790, Thomas Jefferson theorized a cipher to encode and decode messages to provide a more secure way of military correspondence. The cipher, known today as the Wheel Cipher or the Jefferson Disk, although never actually built, was theorized as a spool that could jumble an English message up to 36 characters. The message could be decrypted by plugging in the jumbled message to a receiver with an identical cipher.
A similar device to the Jefferson Disk, the M-94, was developed in 1917 independently by US Army Major Joseph Mauborgne. This device was used in U.S. military communications until 1942.
In World War II, the Axis powers used a more advanced version of the M-94 called the Enigma Machine. The Enigma Machine was more complex because unlike the Jefferson Wheel and the M-94, each day the jumble of letters switched to a completely new combination. Each day's combination was only known by the Axis, so many thought the only way to break the code would be to try over 17,000 combinations within 24 hours. The Allies used computing power to severely limit the number of reasonable combinations they needed to check every day, leading to the breaking of the Enigma Machine.
Modern
Today, encryption is used in the transfer of communication over the Internet for security and commerce. As computing power continues to increase, computer encryption is constantly evolving to prevent eavesdropping attacks. One of the first "modern" cipher suites, DES, used a 56-bit key with 72,057,594,037,927,936 possibilities; it was cracked in 1999 by EFF's brute-force DES cracker, which required 22 hours and 15 minutes to do so. Modern encryption standards often use stronger key sizes, such as AES (up to 256-bit keys), Twofish, ChaCha20-Poly1305, and Serpent (configurable up to 256-bit keys). Cipher suites that use a 128-bit or larger key, like AES, cannot feasibly be brute-forced, because the total number of keys, 2^128, is about 3.4 × 10^38. The most likely option for cracking ciphers with a large key size is to find vulnerabilities in the cipher itself, such as inherent biases and backdoors, or to exploit physical side effects through side-channel attacks. For example, RC4, a stream cipher, was cracked due to inherent biases and vulnerabilities in the cipher.
Encryption in cryptography
In the context of cryptography, encryption serves as a mechanism to ensure confidentiality. Since data may be visible on the Internet, sensitive information such as passwords and personal communication may be exposed to potential interceptors. The process of encrypting and decrypting messages involves keys. The two main types of keys in cryptographic systems are symmetric-key and public-key (also known as asymmetric-key).
Many complex cryptographic algorithms often use simple modular arithmetic in their implementations.
Types
In symmetric-key schemes, the encryption and decryption keys are the same. Communicating parties must have the same key in order to achieve secure communication. The German Enigma Machine used a new symmetric-key each day for encoding and decoding messages. In addition to traditional encryption types, individuals can enhance their security by using VPNs or specific browser settings to encrypt their internet connection, providing additional privacy protection while browsing the web.
In public-key encryption schemes, the encryption key is published for anyone to use and encrypt messages. However, only the receiving party has access to the decryption key that enables messages to be read. Public-key encryption was first described in a secret document in 1973; beforehand, all encryption schemes were symmetric-key (also called private-key). Although published later, the work of Diffie and Hellman appeared in a journal with a large readership, and the value of the methodology was explicitly described. The method became known as the Diffie–Hellman key exchange.
RSA (Rivest–Shamir–Adleman) is another notable public-key cryptosystem. Created in 1978, it is still used today for applications involving digital signatures. Using number theory, the RSA algorithm selects two prime numbers, which help generate both the encryption and decryption keys.
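A toy illustration of that key-generation arithmetic with deliberately tiny primes; real RSA keys use primes hundreds of digits long, and the numbers here are a textbook-style example, not a secure configuration:

```python
# Toy RSA with tiny primes, showing only the key-generation arithmetic.
p, q = 61, 53
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent 2753 (modular inverse)

m = 65                         # a message, encoded as a number < n
c = pow(m, e, n)               # encrypt: c = m^e mod n -> 2790
assert pow(c, d, n) == m       # decrypt: c^d mod n recovers 65
print(n, e, d, c)
```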
A publicly available public-key encryption application called Pretty Good Privacy (PGP) was written in 1991 by Phil Zimmermann, and distributed free of charge with source code. PGP was purchased by Symantec in 2010 and is regularly updated.
Uses
Encryption has long been used by militaries and governments to facilitate secret communication. It is now commonly used in protecting information within many kinds of civilian systems. For example, the Computer Security Institute reported that in 2007, 71% of companies surveyed used encryption for some of their data in transit, and 53% used encryption for some of their data in storage. Encryption can be used to protect data "at rest", such as information stored on computers and storage devices (e.g. USB flash drives). In recent years, there have been numerous reports of confidential data, such as customers' personal records, being exposed through loss or theft of laptops or backup drives; encrypting such files at rest helps protect them if physical security measures fail. Digital rights management systems, which prevent unauthorized use or reproduction of copyrighted material and protect software against reverse engineering (see also copy protection), is another somewhat different example of using encryption on data at rest.
Encryption is also used to protect data in transit, for example data being transferred via networks (e.g. the Internet, e-commerce), mobile telephones, wireless microphones, wireless intercom systems, Bluetooth devices and bank automatic teller machines. There have been numerous reports of data in transit being intercepted in recent years. Data should also be encrypted when transmitted across networks in order to protect against eavesdropping of network traffic by unauthorized users.
Data erasure
Conventional methods for permanently deleting data from a storage device involve overwriting the device's whole content with zeros, ones, or other patterns – a process which can take a significant amount of time, depending on the capacity and the type of storage medium. Cryptography offers a way of making the erasure almost instantaneous. This method is called crypto-shredding. An example implementation of this method can be found on iOS devices, where the cryptographic key is kept in a dedicated 'effaceable storage'. Because the key is stored on the same device, this setup on its own does not offer full privacy or security protection if an unauthorized person gains physical access to the device.
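A minimal sketch of the idea in Python, again assuming the third-party cryptography package; "erasing" the data reduces to destroying the much smaller key:

    # Crypto-shredding sketch: encrypted data becomes unrecoverable the moment
    # every copy of its key is destroyed -- no need to overwrite the data itself.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()                       # kept in separate, erasable storage
    stored_blob = Fernet(key).encrypt(b"customer records")

    key = None    # "shred" the key (conceptually; a real system must also
                  # erase it from the underlying hardware storage)
    # stored_blob may remain on disk, but without the key it is just noise.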
Limitations
Encryption is used in the 21st century to protect digital data and information systems. As computing power has increased over the years, encryption technology has become more advanced and secure. However, this same growth in computing power has also exposed a potential limitation of today's encryption methods.
The length of the encryption key is an indicator of the strength of the encryption method. For example, the key of the original encryption standard, DES (Data Encryption Standard), was 56 bits long, meaning it had 2^56 possible combinations. With today's computing power, a 56-bit key is no longer secure, being vulnerable to brute-force attacks.
Quantum computing uses properties of quantum mechanics in order to process large amounts of data simultaneously. Quantum computing has been found to achieve computing speeds thousands of times faster than today's supercomputers. This computing power presents a challenge to today's encryption technology. For example, RSA encryption uses the multiplication of very large prime numbers to create a semiprime number for its public key. Decoding this key without its private key requires this semiprime number to be factored, which can take a very long time with modern computers. It would take a supercomputer anywhere from weeks to months to factor this key. However, quantum computing can use quantum algorithms to factor this semiprime number in the same amount of time it takes for normal computers to generate it. This would make all data protected by current public-key encryption vulnerable to quantum computing attacks. Other encryption techniques like elliptic curve cryptography and symmetric key encryption are also vulnerable to quantum computing, though symmetric schemes are weakened to a lesser degree.
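A pure-Python sketch of that asymmetry at toy sizes (real RSA moduli are thousands of bits long, and real attackers use far better algorithms than trial division):

    import time

    # Multiplying two primes is instant; recovering them is the hard direction.
    p, q = 1_000_003, 1_000_033
    n = p * q                        # generating the semiprime: instantaneous

    def smallest_factor(n):
        f = 3
        while n % f:                 # naive trial division over odd candidates
            f += 2
        return f

    start = time.perf_counter()
    print(smallest_factor(n))                        # 1000003
    print(f"{time.perf_counter() - start:.3f} s")    # measurable even for ~40-bit n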
While quantum computing could be a threat to encryption security in the future, quantum computing as it currently stands is still very limited. Quantum computing currently is not commercially available, cannot handle large amounts of code, and exists only as experimental computational devices rather than general-purpose computers. Furthermore, quantum computing advancements can be used in favor of encryption as well. The National Security Agency (NSA) is currently preparing post-quantum encryption standards for the future. Quantum encryption promises a level of security that will be able to counter the threat of quantum computing.
Attacks and countermeasures
Encryption is an important tool but is not sufficient alone to ensure the security or privacy of sensitive information throughout its lifetime. Most applications of encryption protect information only at rest or in transit, leaving sensitive data in clear text and potentially vulnerable to improper disclosure during processing, such as by a cloud service for example. Homomorphic encryption and secure multi-party computation are emerging techniques to compute encrypted data; these techniques are general and Turing complete but incur high computational and/or communication costs.
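As a tiny taste of computing on encrypted data, textbook RSA (without padding) happens to be multiplicatively homomorphic; a sketch reusing the toy key values from the RSA example above (real homomorphic encryption schemes are far more elaborate):

    # Multiplying textbook-RSA ciphertexts multiplies the hidden plaintexts.
    n, e, d = 3233, 17, 2753          # toy RSA key from the earlier sketch

    c1 = pow(6, e, n)                 # Enc(6)
    c2 = pow(7, e, n)                 # Enc(7)

    result = pow((c1 * c2) % n, d, n)
    assert result == 42               # Dec(Enc(6) * Enc(7)) == 6 * 7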
In response to encryption of data at rest, cyber-adversaries have developed new types of attacks. These more recent threats to encryption of data at rest include cryptographic attacks, stolen ciphertext attacks, attacks on encryption keys, insider attacks, data corruption or integrity attacks, data destruction attacks, and ransomware attacks. Data fragmentation and active defense data protection technologies attempt to counter some of these attacks, by distributing, moving, or mutating ciphertext so it is more difficult to identify, steal, corrupt, or destroy.
The debate around encryption
The question of balancing the need for national security with the right to privacy has been debated for years, since encryption has become critical in today's digital society. The modern encryption debate started around the 1990s, when the US government sought to restrict cryptography because, in its view, strong encryption would threaten national security. The debate is polarized around two opposing views: those who see strong encryption as a problem that makes it easier for criminals to hide their illegal acts online, and those who argue that encryption keeps digital communications safe. The debate heated up in 2014, when Big Tech companies like Apple and Google enabled encryption by default on their devices. This was the start of a series of controversies pitting governments, companies, and internet users against one another.
Integrity protection of ciphertexts
Encryption, by itself, can protect the confidentiality of messages, but other techniques are still needed to protect the integrity and authenticity of a message; for example, verification of a message authentication code (MAC) or a digital signature, usually produced by a hashing algorithm or a PGP signature. Authenticated encryption algorithms are designed to provide both encryption and integrity protection together. Standards for cryptographic software and hardware to perform encryption are widely available, but successfully using encryption to ensure security may be a challenging problem. A single error in system design or execution can allow successful attacks. Sometimes an adversary can obtain unencrypted information without directly undoing the encryption. See for example traffic analysis, TEMPEST, or Trojan horse.
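A minimal Python sketch of MAC verification using the standard library's hmac module (the key and ciphertext values are illustrative placeholders):

    import hashlib
    import hmac

    # Only a holder of mac_key can produce a tag that verifies; a MAC over the
    # ciphertext (encrypt-then-MAC) lets the receiver detect tampering.
    mac_key = b"shared-secret-mac-key"
    ciphertext = b"bytes produced by the encryption step"

    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()

    # The receiver recomputes the tag and compares in constant time:
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    assert hmac.compare_digest(tag, expected)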
Integrity protection mechanisms such as MACs and digital signatures must be applied to the ciphertext when it is first created, typically on the same device used to compose the message, to protect a message end-to-end along its full transmission path; otherwise, any node between the sender and the encryption agent could potentially tamper with it. Encrypting at the time of creation is only secure if the encryption device itself has correct keys and has not been tampered with. If an endpoint device has been configured to trust a root certificate that an attacker controls, for example, then the attacker can both inspect and tamper with encrypted data by performing a man-in-the-middle attack anywhere along the message's path. The common practice of TLS interception by network operators represents a controlled and institutionally sanctioned form of such an attack, but countries have also attempted to employ such attacks as a form of control and censorship.
Ciphertext length and padding
Even when encryption correctly hides a message's content and it cannot be tampered with at rest or in transit, a message's length is a form of metadata that can still leak sensitive information about the message. For example, the well-known CRIME and BREACH attacks against HTTPS were side-channel attacks that relied on information leakage via the length of encrypted content. Traffic analysis is a broad class of techniques that often employs message lengths to infer sensitive information about traffic flows by aggregating information about a large number of messages.
Padding a message's payload before encrypting it can help obscure the cleartext's true length, at the cost of increasing the ciphertext's size and introducing or increasing bandwidth overhead. Messages may be padded randomly or deterministically, with each approach having different tradeoffs. Encrypting and padding messages to form padded uniform random blobs or PURBs is a practice guaranteeing that the ciphertext leaks no metadata about its cleartext's content, and leaks asymptotically minimal information via its length.
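A minimal sketch of deterministic padding in Python, rounding payloads up to the next power of two so that many distinct lengths collapse into one observable bucket (illustrative; a real scheme must also encode the true length so the padding can later be stripped):

    # Pad a payload to the next power-of-two length to coarsen length leakage.
    def pad_to_power_of_two(payload: bytes) -> bytes:
        target = 1
        while target < len(payload):
            target *= 2
        return payload + b"\x00" * (target - len(payload))

    # All payloads from 33 to 64 bytes now produce the same padded length:
    print(len(pad_to_power_of_two(b"x" * 33)))    # 64
    print(len(pad_to_power_of_two(b"x" * 64)))    # 64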
| Technology | Cryptography | null |
10296 | https://en.wikipedia.org/wiki/Einstein%E2%80%93Podolsky%E2%80%93Rosen%20paradox | Einstein–Podolsky–Rosen paradox | The Einstein–Podolsky–Rosen (EPR) paradox is a thought experiment proposed by physicists Albert Einstein, Boris Podolsky and Nathan Rosen, which argues that the description of physical reality provided by quantum mechanics is incomplete. In a 1935 paper titled "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?", they argued for the existence of "elements of reality" that were not part of quantum theory, and speculated that it should be possible to construct a theory containing these hidden variables. Resolutions of the paradox have important implications for the interpretation of quantum mechanics.
The thought experiment involves a pair of particles prepared in what would later become known as an entangled state. Einstein, Podolsky, and Rosen pointed out that, in this state, if the position of the first particle were measured, the result of measuring the position of the second particle could be predicted. If instead the momentum of the first particle were measured, then the result of measuring the momentum of the second particle could be predicted. They argued that no action taken on the first particle could instantaneously affect the other, since this would involve information being transmitted faster than light, which is impossible according to the theory of relativity. They invoked a principle, later known as the "EPR criterion of reality", which posited that: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity." From this, they inferred that the second particle must have a definite value of both position and of momentum prior to either quantity being measured. But quantum mechanics considers these two observables incompatible and thus does not associate simultaneous values for both to any system. Einstein, Podolsky, and Rosen therefore concluded that quantum theory does not provide a complete description of reality.
The "Paradox" paper
The term "Einstein–Podolsky–Rosen paradox" or "EPR" arose from a paper written in 1934 after Einstein joined the Institute for Advanced Study, having fled the rise of Nazi Germany.
The original paper purports to describe what must happen to "two systems I and II, which we permit to interact", and after some time "we suppose that there is no longer any interaction between the two parts." The EPR description involves "two particles, A and B, [which] interact briefly and then move off in opposite directions." According to Heisenberg's uncertainty principle, it is impossible to measure both the momentum and the position of particle B exactly; however, it is possible to measure the exact position of particle A. By calculation, therefore, with the exact position of particle A known, the exact position of particle B can be known. Alternatively, the exact momentum of particle A can be measured, so the exact momentum of particle B can be worked out. As Manjit Kumar writes, "EPR argued that they had proved that ... [particle] B can have simultaneously exact values of position and momentum. ... Particle B has a position that is real and a momentum that is real. EPR appeared to have contrived a means to establish the exact values of either the momentum or the position of B due to measurements made on particle A, without the slightest possibility of particle B being physically disturbed."
EPR tried to set up a paradox to question the range of true application of quantum mechanics: quantum theory predicts that both values cannot be known for a particle, and yet the EPR thought experiment purports to show that both must nevertheless have determinate values. The EPR paper says: "We are thus forced to conclude that the quantum-mechanical description of physical reality given by wave functions is not complete." The EPR paper ends by saying: "While we have thus shown that the wave function does not provide a complete description of the physical reality, we left open the question of whether or not such a description exists. We believe, however, that such a theory is possible." The 1935 EPR paper condensed the philosophical discussion into a physical argument. The authors claim that given a specific experiment, in which the outcome of a measurement is known before the measurement takes place, there must exist something in the real world, an "element of reality", that determines the measurement outcome. They postulate that these elements of reality are, in modern terminology, local, in the sense that each belongs to a certain point in spacetime. Each element may, again in modern terminology, only be influenced by events that are located in the backward light cone of its point in spacetime (i.e. in the past). These claims are founded on assumptions about nature that constitute what is now known as local realism.
Though the EPR paper has often been taken as an exact expression of Einstein's views, it was primarily authored by Podolsky, based on discussions at the Institute for Advanced Study with Einstein and Rosen. Einstein later expressed to Erwin Schrödinger that, "it did not come out as well as I had originally wanted; rather, the essential thing was, so to speak, smothered by the formalism." Einstein would later go on to present an individual account of his local realist ideas. Shortly before the EPR paper appeared in the Physical Review, The New York Times ran a news story about it, under the headline "Einstein Attacks Quantum Theory". The story, which quoted Podolsky, irritated Einstein, who wrote to the Times, "Any information upon which the article 'Einstein Attacks Quantum Theory' in your issue of May 4 is based was given to you without authority. It is my invariable practice to discuss scientific matters only in the appropriate forum and I deprecate advance publication of any announcement in regard to such matters in the secular press."
The Times story also sought out comment from physicist Edward Condon, who said, "Of course, a great deal of the argument hinges on just what meaning is to be attached to the word 'reality' in physics." The physicist and historian Max Jammer later noted, "[I]t remains a historical fact that the earliest criticism of the EPR paper – moreover, a criticism that correctly saw in Einstein's conception of physical reality the key problem of the whole issue – appeared in a daily newspaper prior to the publication of the criticized paper itself."
Bohr's reply
The publication of the paper prompted a response by Niels Bohr, which he published in the same journal (Physical Review), in the same year, using the same title. (This exchange was only one chapter in a prolonged debate between Bohr and Einstein about the nature of quantum reality.)
He argued that EPR had reasoned fallaciously. Bohr said measurements of position and of momentum are complementary, meaning the choice to measure one excludes the possibility of measuring the other. Consequently, a fact deduced regarding one arrangement of laboratory apparatus could not be combined with a fact deduced by means of the other, and so, the inference of predetermined position and momentum values for the second particle was not valid. Bohr concluded that EPR's "arguments do not justify their conclusion that the quantum description turns out to be essentially incomplete."
Einstein's own argument
In his own publications and correspondence, Einstein indicated that he was not satisfied with the EPR paper and that Podolsky had authored most of it. He later used a different argument to insist that quantum mechanics is an incomplete theory. He explicitly de-emphasized EPR's attribution of "elements of reality" to the position and momentum of particle B, saying that "I couldn't care less" whether the resulting states of particle B allowed one to predict the position and momentum with certainty.
For Einstein, the crucial part of the argument was the demonstration of nonlocality, that the choice of measurement done in particle A, either position or momentum, would lead to two different quantum states of particle B. He argued that, because of locality, the real state of particle B could not depend on which kind of measurement was done in A and that the quantum states therefore cannot be in one-to-one correspondence with the real states. Einstein struggled unsuccessfully for the rest of his life to find a theory that could better comply with his idea of locality.
Later developments
Bohm's variant
In 1951, David Bohm proposed a variant of the EPR thought experiment in which the measurements have discrete ranges of possible outcomes, unlike the position and momentum measurements considered by EPR. The EPR–Bohm thought experiment can be explained using electron–positron pairs. Suppose we have a source that emits electron–positron pairs, with the electron sent to destination A, where there is an observer named Alice, and the positron sent to destination B, where there is an observer named Bob. According to quantum mechanics, we can arrange our source so that each emitted pair occupies a quantum state called a spin singlet. The particles are thus said to be entangled. This can be viewed as a quantum superposition of two states, which we call state I and state II. In state I, the electron has spin pointing upward along the z-axis (+z) and the positron has spin pointing downward along the z-axis (−z). In state II, the electron has spin −z and the positron has spin +z. Because it is in a superposition of states, it is impossible without measuring to know the definite state of spin of either particle in the spin singlet.
Alice now measures the spin along the z-axis. She can obtain one of two possible outcomes: +z or −z. Suppose she gets +z. Informally speaking, the quantum state of the system collapses into state I. The quantum state determines the probable outcomes of any measurement performed on the system. In this case, if Bob subsequently measures spin along the z-axis, there is 100% probability that he will obtain −z. Similarly, if Alice gets −z, Bob will get +z. There is nothing special about choosing the z-axis: according to quantum mechanics the spin singlet state may equally well be expressed as a superposition of spin states pointing in the x direction.
Whatever axis their spins are measured along, they are always found to be opposite. In quantum mechanics, the x-spin and z-spin are "incompatible observables", meaning the Heisenberg uncertainty principle applies to alternating measurements of them: a quantum state cannot possess a definite value for both of these variables. Suppose Alice measures the z-spin and obtains +z, so that the quantum state collapses into state I. Now, instead of measuring the z-spin as well, Bob measures the x-spin. According to quantum mechanics, when the system is in state I, Bob's x-spin measurement will have a 50% probability of producing +x and a 50% probability of -x. It is impossible to predict which outcome will appear until Bob actually performs the measurement. Therefore, Bob's positron will have a definite spin when measured along the same axis as Alice's electron, but when measured in the perpendicular axis its spin will be uniformly random. It seems as if information has propagated (faster than light) from Alice's apparatus to make Bob's positron assume a definite spin in the appropriate axis.
Bell's theorem
In 1964, John Stewart Bell published a paper investigating the puzzling situation at that time: on one hand, the EPR paradox purportedly showed that quantum mechanics was nonlocal, and suggested that a hidden-variable theory could heal this nonlocality. On the other hand, David Bohm had recently developed the first successful hidden-variable theory, but it had a grossly nonlocal character. Bell set out to investigate whether it was indeed possible to solve the nonlocality problem with hidden variables, and found out that first, the correlations shown in both EPR's and Bohm's versions of the paradox could indeed be explained in a local way with hidden variables, and second, that the correlations shown in his own variant of the paradox couldn't be explained by any local hidden-variable theory. This second result became known as the Bell theorem.
To understand the first result, consider the following toy hidden-variable theory introduced later by J.J. Sakurai: in it, quantum spin-singlet states emitted by the source are actually approximate descriptions for "true" physical states possessing definite values for the z-spin and x-spin. In these "true" states, the positron going to Bob always has spin values opposite to the electron going to Alice, but the values are otherwise completely random. For example, the first pair emitted by the source might be "(+z, −x) to Alice and (−z, +x) to Bob", the next pair "(−z, −x) to Alice and (+z, +x) to Bob", and so forth. Therefore, if Bob's measurement axis is aligned with Alice's, he will necessarily get the opposite of whatever Alice gets; otherwise, he will get "+" and "−" with equal probability.
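A small Python simulation of this toy model (an illustrative sketch, not part of the original literature) makes the same-axis anti-correlation concrete:

    import random

    # Each pair carries definite, opposite z- and x-spin values fixed at the source.
    def emit_pair():
        z = random.choice([+1, -1])
        x = random.choice([+1, -1])
        return {"z": z, "x": x}, {"z": -z, "x": -x}    # Alice's, Bob's particles

    pairs = [emit_pair() for _ in range(10_000)]

    # Same axis: perfect anti-correlation, matching quantum mechanics.
    assert all(alice["z"] == -bob["z"] for alice, bob in pairs)

    # Perpendicular axes: outcomes agree only about half the time.
    agree = sum(alice["z"] == bob["x"] for alice, bob in pairs) / len(pairs)
    print(f"z-vs-x agreement: {agree:.2f}")    # close to 0.50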
Bell showed, however, that such models can only reproduce the singlet correlations when Alice and Bob make measurements on the same axis or on perpendicular axes. As soon as other angles between their axes are allowed, local hidden-variable theories become unable to reproduce the quantum mechanical correlations. This difference, expressed using inequalities known as "Bell's inequalities", is in principle experimentally testable. After the publication of Bell's paper, a variety of experiments to test Bell's inequalities were carried out, notably by the group of Alain Aspect in the 1980s; all experiments conducted to date have found behavior in line with the predictions of quantum mechanics. The present view of the situation is that quantum mechanics flatly contradicts Einstein's philosophical postulate that any acceptable physical theory must fulfill "local realism". The fact that quantum mechanics violates Bell inequalities indicates that any hidden-variable theory underlying quantum mechanics must be non-local; whether this should be taken to imply that quantum mechanics itself is non-local is a matter of continuing debate.
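To see the size of the violation concretely: quantum mechanics predicts a singlet correlation E(θ) = −cos θ for detectors separated by angle θ, and the CHSH form of Bell's inequality bounds any local hidden-variable theory by |S| ≤ 2. A short Python check of the quantum prediction at the standard optimal angles:

    import math

    # Quantum singlet correlation for measurement axes separated by (a - b).
    def E(a, b):
        return -math.cos(a - b)

    # CHSH combination at the angles that maximize the quantum violation.
    a, a2 = 0.0, math.pi / 2
    b, b2 = math.pi / 4, 3 * math.pi / 4

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(abs(S))    # 2.828... = 2*sqrt(2), above the local-hidden-variable bound of 2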
Steering
Inspired by Schrödinger's treatment of the EPR paradox back in 1935, Howard M. Wiseman et al. formalised it in 2007 as the phenomenon of quantum steering. They defined steering as the situation where Alice's measurements on a part of an entangled state steer Bob's part of the state. That is, Bob's observations cannot be explained by a local hidden state model, where Bob would have a fixed quantum state in his side, that is classically correlated but otherwise independent of Alice's.
Locality
Locality has several different meanings in physics. EPR describe the principle of locality as asserting that physical processes occurring at one place should have no immediate effect on the elements of reality at another location. At first sight, this appears to be a reasonable assumption to make, as it seems to be a consequence of special relativity, which states that energy can never be transmitted faster than the speed of light without violating causality; however, it turns out that the usual rules for combining quantum mechanical and classical descriptions violate EPR's principle of locality without violating special relativity or causality. Causality is preserved because there is no way for Alice to transmit messages (i.e., information) to Bob by manipulating her measurement axis. Whichever axis she uses, she has a 50% probability of obtaining "+" and 50% probability of obtaining "−", completely at random; according to quantum mechanics, it is fundamentally impossible for her to influence what result she gets. Furthermore, Bob is able to perform his measurement only once: there is a fundamental property of quantum mechanics, the no-cloning theorem, which makes it impossible for him to make an arbitrary number of copies of the electron he receives, perform a spin measurement on each, and look at the statistical distribution of the results. Therefore, in the one measurement he is allowed to make, there is a 50% probability of getting "+" and 50% of getting "−", regardless of whether or not his axis is aligned with Alice's.
As a summary, the results of the EPR thought experiment do not contradict the predictions of special relativity. Neither the EPR paradox nor any quantum experiment demonstrates that superluminal signaling is possible; however, the principle of locality appeals powerfully to physical intuition, and Einstein, Podolsky and Rosen were unwilling to abandon it. Einstein derided the quantum mechanical predictions as "spooky action at a distance". The conclusion they drew was that quantum mechanics is not a complete theory.
Mathematical formulation
Bohm's variant of the EPR paradox can be expressed mathematically using the quantum mechanical formulation of spin. The spin degree of freedom for an electron is associated with a two-dimensional complex vector space V, with each quantum state corresponding to a vector in that space. The operators corresponding to the spin along the x, y, and z direction, denoted Sx, Sy, and Sz respectively, can be represented using the Pauli matrices:

$$S_x = \frac{\hbar}{2} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad S_y = \frac{\hbar}{2} \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad S_z = \frac{\hbar}{2} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$

where $\hbar$ is the reduced Planck constant (or the Planck constant divided by 2π).
The eigenstates of Sz are represented as

$$\left| +z \right\rangle \leftrightarrow \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad \left| -z \right\rangle \leftrightarrow \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$

and the eigenstates of Sx are represented as

$$\left| +x \right\rangle \leftrightarrow \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \left| -x \right\rangle \leftrightarrow \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}$$
The vector space of the electron-positron pair is $V \otimes V$, the tensor product of the electron's and positron's vector spaces. The spin singlet state is

$$\left| \psi \right\rangle = \frac{1}{\sqrt{2}} \left( \left| +z \right\rangle \otimes \left| -z \right\rangle - \left| -z \right\rangle \otimes \left| +z \right\rangle \right)$$

where the two terms on the right hand side are what we have referred to as state I and state II above.
From the above equations, it can be shown that the spin singlet can also be written as

$$\left| \psi \right\rangle = \frac{1}{\sqrt{2}} \left( \left| -x \right\rangle \otimes \left| +x \right\rangle - \left| +x \right\rangle \otimes \left| -x \right\rangle \right)$$

where the terms on the right hand side are what we have referred to as state Ia and state IIa.
To illustrate the paradox, we need to show that after Alice's measurement of Sz (or Sx), Bob's value of Sz (or Sx) is uniquely determined and Bob's value of Sx (or Sz) is uniformly random. This follows from the principles of measurement in quantum mechanics. When Sz is measured, the system state collapses into an eigenvector of Sz. If the measurement result is +z, this means that immediately after measurement the system state collapses to

$$\left| +z \right\rangle \otimes \left| -z \right\rangle = \left| +z \right\rangle \otimes \frac{\left| +x \right\rangle - \left| -x \right\rangle}{\sqrt{2}}$$

Similarly, if Alice's measurement result is −z, the state collapses to

$$\left| -z \right\rangle \otimes \left| +z \right\rangle = \left| -z \right\rangle \otimes \frac{\left| +x \right\rangle + \left| -x \right\rangle}{\sqrt{2}}$$
The left hand sides of both equations show that the measurement of Sz on Bob's positron is now determined: it will be −z in the first case or +z in the second case. The right hand sides of the equations show that a measurement of Sx on Bob's positron will return, in both cases, +x or −x with probability 1/2 each.
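These probabilities can be verified numerically; a short sketch, assuming NumPy is available:

    import numpy as np

    up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # Sz eigenstates
    plus_x = (up + down) / np.sqrt(2)                        # Sx eigenstates
    minus_x = (up - down) / np.sqrt(2)

    # Spin singlet: (|+z,-z> - |-z,+z>) / sqrt(2)
    singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

    print(abs(np.kron(up, down) @ singlet) ** 2)   # 0.5: Alice +z and Bob -z
    print(abs(np.kron(up, up) @ singlet) ** 2)     # 0.0: never both +z

    # After Alice finds +z, the state is |+z> x |-z>; Bob's x-spin is 50/50:
    collapsed = np.kron(up, down)
    print(abs(np.kron(up, plus_x) @ collapsed) ** 2)    # 0.5
    print(abs(np.kron(up, minus_x) @ collapsed) ** 2)   # 0.5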
| Physical sciences | Quantum mechanics | Physics |
10303 | https://en.wikipedia.org/wiki/Evaporation | Evaporation | Evaporation is a type of vaporization that occurs on the surface of a liquid as it changes into the gas phase. A high concentration of the evaporating substance in the surrounding gas significantly slows down evaporation, such as when humidity affects rate of evaporation of water. When the molecules of the liquid collide, they transfer energy to each other based on how they collide. When a molecule near the surface absorbs enough energy to overcome the vapor pressure, it will escape and enter the surrounding air as a gas. When evaporation occurs, the energy removed from the vaporized liquid will reduce the temperature of the liquid, resulting in evaporative cooling.
On average, only a fraction of the molecules in a liquid have enough heat energy to escape from the liquid. The evaporation will continue until an equilibrium is reached when the evaporation of the liquid is equal to its condensation. In an enclosed environment, a liquid will evaporate until the surrounding air is saturated.
Evaporation is an essential part of the water cycle. The sun (solar energy) drives evaporation of water from oceans, lakes, moisture in the soil, and other sources of water. In hydrology, evaporation and transpiration (which involves evaporation within plant stomata) are collectively termed evapotranspiration. Evaporation of water occurs when the surface of the liquid is exposed, allowing molecules to escape and form water vapor; this vapor can then rise up and form clouds. With sufficient energy, the liquid will turn into vapor.
Theory
For molecules of a liquid to evaporate, they must be located near the surface, they have to be moving in the proper direction, and have sufficient kinetic energy to overcome liquid-phase intermolecular forces. When only a small proportion of the molecules meet these criteria, the rate of evaporation is low. Since the kinetic energy of a molecule is proportional to its temperature, evaporation proceeds more quickly at higher temperatures. As the faster-moving molecules escape, the remaining molecules have lower average kinetic energy, and the temperature of the liquid decreases. This phenomenon is also called evaporative cooling. This is why evaporating sweat cools the human body.
Evaporation also tends to proceed more quickly with higher flow rates between the gaseous and liquid phase and in liquids with higher vapor pressure. For example, laundry on a clothes line will dry (by evaporation) more rapidly on a windy day than on a still day. Three key parts to evaporation are heat, atmospheric pressure (which determines the percent humidity), and air movement.
On a molecular level, there is no strict boundary between the liquid state and the vapor state. Instead, there is a Knudsen layer, where the phase is undetermined. Because this layer is only a few molecules thick, at a macroscopic scale a clear phase transition interface cannot be seen.
Liquids that do not evaporate visibly at a given temperature in a given gas (e.g., cooking oil at room temperature) have molecules that do not tend to transfer energy to each other in a pattern sufficient to frequently give a molecule the heat energy necessary to turn into vapor. However, these liquids are evaporating. It is just that the process is much slower and thus significantly less visible.
Evaporative equilibrium
If evaporation takes place in an enclosed area, the escaping molecules accumulate as a vapor above the liquid. Many of the molecules return to the liquid, with returning molecules becoming more frequent as the density and pressure of the vapor increases. When the process of escape and return reaches an equilibrium, the vapor is said to be "saturated", and no further change in either vapor pressure and density or liquid temperature will occur. For a system consisting of vapor and liquid of a pure substance, this equilibrium state is directly related to the vapor pressure of the substance, as given by the Clausius–Clapeyron relation:

$$\ln \left( \frac{P_2}{P_1} \right) = -\frac{\Delta H_{\mathrm{vap}}}{R} \left( \frac{1}{T_2} - \frac{1}{T_1} \right)$$
where P1, P2 are the vapor pressures at temperatures T1, T2 respectively, ΔHvap is the enthalpy of vaporization, and R is the universal gas constant. The rate of evaporation in an open system is related to the vapor pressure found in a closed system. If a liquid is heated, when the vapor pressure reaches the ambient pressure the liquid will boil.
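As a worked example, the relation lets one estimate water's vapor pressure at room temperature from its normal boiling point; a short Python sketch (treating the enthalpy of vaporization as constant, so the result is only approximate):

    import math

    R = 8.314           # J/(mol K), universal gas constant
    dHvap = 40_700      # J/mol, enthalpy of vaporization of water (approximate)

    P1, T1 = 101_325.0, 373.15    # Pa and K at water's normal boiling point
    T2 = 298.15                   # 25 degrees C

    P2 = P1 * math.exp(-dHvap / R * (1 / T2 - 1 / T1))
    print(f"{P2:.0f} Pa")   # ~3,700 Pa; the measured value is about 3,170 Pa,
                            # the gap reflecting the constant-dHvap assumption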
The ability for a molecule of a liquid to evaporate is based largely on the amount of kinetic energy an individual particle may possess. Even at lower temperatures, individual molecules of a liquid can evaporate if they have more than the minimum amount of kinetic energy required for vaporization.
Factors influencing the rate of evaporation
Note: Air is used here as a common example of the surrounding gas; however, other gases may hold that role.
Concentration of the substance evaporating in the air: If the air already has a high concentration of the substance evaporating, then the given substance will evaporate more slowly.
Flow rate of air: This is in part related to the concentration point above. If "fresh" air (i.e., air which is neither already saturated with the substance nor with other substances) is moving over the substance all the time, then the concentration of the substance in the air is less likely to go up with time, thus encouraging faster evaporation. This is the result of the boundary layer at the evaporation surface decreasing with flow velocity, decreasing the diffusion distance in the stagnant layer.
Amount of dissolved minerals: The amount of minerals dissolved in the liquid also influences the rate.
Intermolecular forces: The stronger the forces keeping the molecules together in the liquid state, the more energy a molecule must acquire to escape. This is characterized by the enthalpy of vaporization.
Pressure: Evaporation happens faster when there is less pressure on the surface keeping the molecules from launching themselves.
Surface area: A substance that has a larger surface area will evaporate faster, as there are more surface molecules per unit of volume that are potentially able to escape.
Temperature of the substance: The higher the temperature of the substance, the greater the kinetic energy of the molecules at its surface, and therefore the faster their rate of evaporation.
Photomolecular effect: The amount of light can affect evaporation. When photons hit the surface of a liquid, they can make individual molecules break free and enter the air without any need for additional heat.
In the US, the National Weather Service measures, at various outdoor locations nationwide, the actual rate of evaporation from a standardized "pan" open water surface. Others do likewise around the world. The US data is collected and compiled into an annual evaporation map. The measurements range from under 30 to over per year.
Because it typically takes place in a complex environment, where 'evaporation is an extremely rare event', the mechanism for the evaporation of water is not completely understood. Theoretical calculations require prohibitively long and large computer simulations. 'The rate of evaporation of liquid water is one of the principal uncertainties in modern climate modeling.'
Thermodynamics
Evaporation is an endothermic process, since heat is absorbed during evaporation.
Applications
Industrial applications include many printing and coating processes; recovering salts from solutions; and drying a variety of materials such as lumber, paper, cloth and chemicals.
The use of evaporation to dry or concentrate samples is a common preparatory step for many laboratory analyses such as spectroscopy and chromatography. Systems used for this purpose include rotary evaporators and centrifugal evaporators.
When clothes are hung on a laundry line, even though the ambient temperature is below the boiling point of water, water evaporates. This is accelerated by factors such as low humidity, heat (from the sun), and wind. In a clothes dryer, hot air is blown through the clothes, allowing water to evaporate very rapidly.
The matki/matka, a traditional Indian porous clay container used for storing and cooling water and other liquids.
The botijo, a traditional Spanish porous clay container designed to cool the contained water by evaporation.
Evaporative coolers, which can significantly cool a building by simply blowing dry air over a filter saturated with water.
Combustion vaporization
Fuel droplets vaporize as they receive heat by mixing with the hot gases in the combustion chamber. Heat (energy) can also be received by radiation from any hot refractory wall of the combustion chamber.
Pre-combustion vaporization
Internal combustion engines rely upon the vaporization of the fuel in the cylinders to form a fuel/air mixture in order to burn well.
The chemically correct air/fuel mixture for total burning of gasoline has been determined to be about 15 parts air to one part gasoline, or 15/1 by weight. Changing this to a volume ratio yields about 8,000 parts air to one part gasoline, or 8,000/1 by volume.
Film deposition
Thin films may be deposited by evaporating a substance and condensing it onto a substrate, or by dissolving the substance in a solvent, spreading the resulting solution thinly over a substrate, and evaporating the solvent. The Hertz–Knudsen equation is often used to estimate the rate of evaporation in these instances.
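For reference, the Hertz–Knudsen equation is commonly written (conventions for the coefficient vary between sources) as

$$J = \frac{\alpha \left( p_{\mathrm{sat}} - p \right)}{\sqrt{2 \pi m k_B T}}$$

where J is the evaporative flux (molecules per unit area per unit time), α is an empirical evaporation (sticking) coefficient, p_sat is the equilibrium vapor pressure, p is the ambient partial pressure of the vapor, m is the molecular mass, k_B is the Boltzmann constant, and T is the absolute temperature.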
| Physical sciences | Phase transitions | null |
10326 | https://en.wikipedia.org/wiki/Human%20evolution | Human evolution | Human evolution is the evolutionary process within the history of primates that led to the emergence of Homo sapiens as a distinct species of the hominid family that includes all the great apes. This process involved the gradual development of traits such as human bipedalism, dexterity, and complex language, as well as interbreeding with other hominins (a tribe of the African hominid subfamily), indicating that human evolution was not linear but weblike. The study of the origins of humans involves several scientific disciplines, including physical and evolutionary anthropology, paleontology, and genetics; the field is also known by the terms anthropogeny, anthropogenesis, and anthropogony. (The latter two terms are sometimes used to refer to the related subject of hominization.)
Primates diverged from other mammals in the Late Cretaceous period, with their earliest fossils appearing over 55 million years ago (mya), during the Paleocene. Primates produced successive clades leading to the ape superfamily, which gave rise to the hominid and the gibbon families; these diverged some 15–20 mya. African and Asian hominids (including orangutans) diverged about 14 mya. Hominins (including the Australopithecine and Panina subtribes) parted from the Gorillini tribe between 8 and 9 mya; Australopithecine (including the extinct biped ancestors of humans) separated from the Pan genus (containing chimpanzees and bonobos) 4–7 mya. The Homo genus is evidenced by the appearance of H. habilis over 2 mya, while anatomically modern humans emerged in Africa approximately 300,000 years ago.
Before Homo
Early evolution of primates
The evolutionary history of primates can be traced back 65 million years. One of the oldest known primate-like mammal species, the Plesiadapis, came from North America; another, Archicebus, came from China. Other similar basal primates were widespread in Eurasia and Africa during the tropical conditions of the Paleocene and Eocene.
David R. Begun concluded that early primates flourished in Eurasia and that a lineage leading to the African apes and humans, including Dryopithecus, migrated south from Europe or Western Asia into Africa. The surviving tropical population of primates—which is seen most completely in the Upper Eocene and lowermost Oligocene fossil beds of the Faiyum depression southwest of Cairo—gave rise to all extant primate species, including the lemurs of Madagascar, lorises of Southeast Asia, galagos or "bush babies" of Africa, and to the anthropoids, which are the Platyrrhines or New World monkeys, the Catarrhines or Old World monkeys, and the great apes, including humans and other hominids.
The earliest known catarrhine is Kamoyapithecus from the uppermost Oligocene at Eragaleit in the northern Great Rift Valley in Kenya, dated to 24 million years ago. Its ancestry is thought to be species related to Aegyptopithecus, Propliopithecus, and Parapithecus from the Faiyum, at around 35 mya. In 2010, Saadanius was described as a close relative of the last common ancestor of the crown catarrhines, and tentatively dated to 29–28 mya, helping to fill an 11-million-year gap in the fossil record.
In the Early Miocene, about 22 million years ago, the many kinds of arboreally-adapted (tree-dwelling) primitive catarrhines from East Africa suggest a long history of prior diversification. Fossils at 20 million years ago include fragments attributed to Victoriapithecus, the earliest Old World monkey. Among the genera thought to be in the ape lineage leading up to 13 million years ago are Proconsul, Rangwapithecus, Dendropithecus, Limnopithecus, Nacholapithecus, Equatorius, Nyanzapithecus, Afropithecus, Heliopithecus, and Kenyapithecus, all from East Africa.
The presence of other generalized non-cercopithecids of Middle Miocene from sites far distant, such as Otavipithecus from cave deposits in Namibia, and Pierolapithecus and Dryopithecus from France, Spain and Austria, is evidence of a wide diversity of forms across Africa and the Mediterranean basin during the relatively warm and equable climatic regimes of the Early and Middle Miocene. The youngest of the Miocene hominoids, Oreopithecus, is from coal beds in Italy that have been dated to 9 million years ago.
Molecular evidence indicates that the lineage of gibbons diverged from the line of great apes some 18–12 mya, and that of orangutans (subfamily Ponginae) diverged from the other great apes at about 12 million years; there are no fossils that clearly document the ancestry of gibbons, which may have originated in a so-far-unknown Southeast Asian hominoid population, but fossil proto-orangutans may be represented by Sivapithecus from India and Griphopithecus from Turkey, dated to around 10 mya.
Hominidae subfamily Homininae (African hominids) diverged from Ponginae (orangutans) about 14 mya. Hominins (including humans and the Australopithecine and Panina subtribes) parted from the Gorillini tribe (gorillas) between 8 and 9 mya; Australopithecine (including the extinct biped ancestors of humans) separated from the Pan genus (containing chimpanzees and bonobos) 4–7 mya. The Homo genus is evidenced by the appearance of H. habilis over 2 mya, while anatomically modern humans emerged in Africa approximately 300,000 years ago.
Divergence of the human clade from other great apes
Species close to the last common ancestor of gorillas, chimpanzees and humans may be represented by Nakalipithecus fossils found in Kenya and Ouranopithecus found in Greece. Molecular evidence suggests that between 8 and 4 million years ago, first the gorillas, and then the chimpanzees (genus Pan) split off from the line leading to the humans. Human DNA is approximately 98.4% identical to that of chimpanzees when comparing single nucleotide polymorphisms (see human evolutionary genetics). The fossil record, however, of gorillas and chimpanzees is limited; both poor preservation – rain forest soils tend to be acidic and dissolve bone – and sampling bias probably contribute to this problem.
Other hominins probably adapted to the drier environments outside the equatorial belt; and there they encountered antelope, hyenas, dogs, pigs, elephants, horses, and others. The equatorial belt contracted after about 8 million years ago, and there is very little fossil evidence for the split—thought to have occurred around that time—of the hominin lineage from the lineages of gorillas and chimpanzees. The earliest fossils argued by some to belong to the human lineage are Sahelanthropus tchadensis (7 Ma) and Orrorin tugenensis (6 Ma), followed by Ardipithecus (5.5–4.4 Ma), with species Ar. kadabba and Ar. ramidus.
It has been argued in a study of the life history of Ar. ramidus that the species provides evidence for a suite of anatomical and behavioral adaptations in very early hominins unlike any species of extant great ape. This study demonstrated affinities between the skull morphology of Ar. ramidus and that of infant and juvenile chimpanzees, suggesting the species evolved a juvenalised or paedomorphic craniofacial morphology via heterochronic dissociation of growth trajectories. It was also argued that the species provides support for the notion that very early hominins, akin to bonobos (Pan paniscus) the less aggressive species of the genus Pan, may have evolved via the process of self-domestication. Consequently, arguing against the so-called "chimpanzee referential model" the authors suggest it is no longer tenable to use chimpanzee (Pan troglodytes) social and mating behaviors in models of early hominin social evolution. When commenting on the absence of aggressive canine morphology in Ar. ramidus and the implications this has for the evolution of hominin social psychology, they wrote:
The authors argue that many of the basic human adaptations evolved in the ancient forest and woodland ecosystems of late Miocene and early Pliocene Africa. Consequently, they argue that humans may not represent evolution from a chimpanzee-like ancestor as has traditionally been supposed. This suggests many modern human adaptations represent phylogenetically deep traits and that the behavior and morphology of chimpanzees may have evolved subsequent to the split with the common ancestor they share with humans.
Genus Australopithecus
The genus Australopithecus evolved in eastern Africa around 4 million years ago before spreading throughout the continent and eventually becoming extinct 2 million years ago. During this time period various forms of australopiths existed, including Australopithecus anamensis, A. afarensis, A. sediba, and A. africanus. There is still some debate among academics whether certain African hominid species of this time, such as A. robustus and A. boisei, constitute members of the same genus; if so, they would be considered to be "robust australopiths" while the others would be considered "gracile australopiths". However, if these species do indeed constitute their own genus, then they may be given their own name, Paranthropus.
Australopithecus (4–1.8 Ma), with species A. anamensis, A. afarensis, A. africanus, A. bahrelghazali, A. garhi, and A. sediba;
Kenyanthropus (3–2.7 Ma), with species K. platyops;
Paranthropus (3–1.2 Ma), with species P. aethiopicus, P. boisei, and P. robustus
A new proposed species, Australopithecus deyiremeda, is claimed to have been discovered living at the same time period as A. afarensis. There is debate whether A. deyiremeda is a new species or is A. afarensis. Australopithecus prometheus, otherwise known as Little Foot, has recently been dated at 3.67 million years old through a new dating technique, making the genus Australopithecus as old as A. afarensis. Given the opposable big toe found on Little Foot, it seems that the specimen was a good climber. Given the night predators of the region, it is thought that it built a nesting platform at night in the trees, in a similar fashion to chimpanzees and gorillas.
Evolution of genus Homo
The earliest documented representative of the genus Homo is Homo habilis, which evolved around 2.8 million years ago, and is arguably the earliest species for which there is positive evidence of the use of stone tools. The brains of these early hominins were about the same size as that of a chimpanzee, although it has been suggested that this was the time in which the human SRGAP2 gene doubled, producing a more rapid wiring of the frontal cortex. During the next million years a process of rapid encephalization occurred, and with the arrival of Homo erectus and Homo ergaster in the fossil record, cranial capacity had doubled to 850 cm3. (Such an increase in human brain size is equivalent to each generation having 125,000 more neurons than their parents.) It is believed that H. erectus and H. ergaster were the first to use fire and complex tools, and were the first of the hominin line to leave Africa, spreading throughout Africa, Asia, and Europe in the Early Pleistocene.
According to the recent African origin theory, modern humans evolved in Africa possibly from H. heidelbergensis, H. rhodesiensis or H. antecessor and migrated out of the continent some 50,000 to 100,000 years ago, gradually replacing local populations of H. erectus, Denisova hominins, H. floresiensis, H. luzonensis and H. neanderthalensis, whose ancestors had left Africa in earlier migrations. Archaic Homo sapiens, the forerunner of anatomically modern humans, evolved in the Middle Paleolithic between 400,000 and 250,000 years ago. Recent DNA evidence suggests that several haplotypes of Neanderthal origin are present among all non-African populations, and Neanderthals and other hominins, such as Denisovans, may have contributed up to 6% of their genome to present-day humans, suggestive of a limited interbreeding between these species. According to some anthropologists, the transition to behavioral modernity with the development of symbolic culture, language, and specialized lithic technology happened around 50,000 years ago (beginning of the Upper Paleolithic), although others point to evidence of a gradual change over a longer time span during the Middle Paleolithic.
Homo sapiens is the only extant species of its genus, Homo. While some (extinct) Homo species might have been ancestors of Homo sapiens, many, perhaps most, were likely "cousins", having speciated away from the ancestral hominin line. There is yet no consensus as to which of these groups should be considered a separate species and which should be subspecies; this may be due to the dearth of fossils or to the slight differences used to classify species in the genus Homo. The Sahara pump theory (describing an occasionally passable "wet" Sahara desert) provides one possible explanation of the intermittent migration and speciation in the genus Homo.
Based on archaeological and paleontological evidence, it has been possible to infer, to some extent, the ancient dietary practices of various Homo species and to study the role of diet in physical and behavioral evolution within Homo.
Some anthropologists and archaeologists subscribe to the Toba catastrophe theory, which posits that the supereruption of Lake Toba on Sumatra in Indonesia some 70,000 years ago caused global starvation, killing the majority of humans and creating a population bottleneck that affected the genetic inheritance of all humans today. The genetic and archaeological evidence for this remains in question however. A 2023 genetic study suggests that a similar human population bottleneck of between 1,000 and 100,000 survivors occurred "around 930,000 and 813,000 years ago ... lasted for about 117,000 years and brought human ancestors close to extinction."
H. habilis and H. gautengensis
Homo habilis lived from about 2.8 to 1.4 Ma. The species evolved in South and East Africa in the Late Pliocene or Early Pleistocene, 2.5–2 Ma, when it diverged from the australopithecines with the development of smaller molars and larger brains. One of the first known hominins, it made tools from stone and perhaps animal bones, leading to its name Homo habilis (Latin for 'handy man'), bestowed by discoverer Louis Leakey. Some scientists have proposed moving this species from Homo into Australopithecus, due to the morphology of its skeleton being more adapted to living in trees rather than walking on two legs like later hominins.
In May 2010, a new species, Homo gautengensis, was discovered in South Africa.
H. rudolfensis and H. georgicus
These are proposed species names for fossils from about 1.9–1.6 Ma, whose relation to Homo habilis is not yet clear.
Homo rudolfensis refers to a single, incomplete skull from Kenya. Scientists have suggested that this was a specimen of Homo habilis, but this has not been confirmed.
Homo georgicus, from Georgia, may be an intermediate form between Homo habilis and Homo erectus, or a subspecies of Homo erectus.
H. ergaster and H. erectus
The first fossils of Homo erectus were discovered by Dutch physician Eugene Dubois in 1891 on the Indonesian island of Java. He originally named the material Anthropopithecus erectus (1892–1893, considering it at that point a chimpanzee-like fossil primate) and then Pithecanthropus erectus (1893–1894, after changing his mind based on its morphology, which he considered to be intermediate between that of humans and apes). Years later, in the 20th century, the German physician and paleoanthropologist Franz Weidenreich (1873–1948) compared in detail the characters of Dubois' Java Man, then named Pithecanthropus erectus, with the characters of the Peking Man, then named Sinanthropus pekinensis. Weidenreich concluded in 1940 that because of their anatomical similarity with modern humans it was necessary to gather all these specimens of Java and China into a single species of the genus Homo, the species H. erectus.
Homo erectus lived from about 1.8 Ma to about 70,000 years ago – which would indicate that they were probably wiped out by the Toba catastrophe; however, nearby H. floresiensis survived it. The early phase of H. erectus, from 1.8 to 1.25 Ma, is considered by some to be a separate species, H. ergaster, or as H. erectus ergaster, a subspecies of H. erectus. Many paleoanthropologists now use the term Homo ergaster for the non-Asian forms of this group, and reserve H. erectus only for those fossils that are found in Asia and meet certain skeletal and dental requirements which differ slightly from H. ergaster.
In Africa in the Early Pleistocene, 1.5–1 Ma, some populations of Homo habilis are thought to have evolved larger brains and to have made more elaborate stone tools; these differences and others are sufficient for anthropologists to classify them as a new species, Homo erectus—in Africa. This species also may have used fire to cook meat. Richard Wrangham notes that Homo seems to have been ground dwelling, with reduced intestinal length, smaller dentition, and "brains [swollen] to their current, horrendously fuel-inefficient size", and hypothesizes that control of fire and cooking, which released increased nutritional value, was the key adaptation that separated Homo from tree-sleeping Australopithecines.
H. cepranensis and H. antecessor
These are proposed as species intermediate between H. erectus and H. heidelbergensis.
H. antecessor is known from fossils from Spain and England that are dated 1.2 Ma–500 ka.
H. cepranensis refers to a single skull cap from Italy, estimated to be about 800,000 years old.
H. heidelbergensis
H. heidelbergensis ("Heidelberg Man") lived from about 800,000 to about 300,000 years ago. Also proposed as Homo sapiens heidelbergensis or Homo sapiens paleohungaricus.
H. rhodesiensis, and the Gawis cranium
H. rhodesiensis, estimated to be 300,000–125,000 years old. Most current researchers place Rhodesian Man within the group of Homo heidelbergensis, though other designations such as archaic Homo sapiens and Homo sapiens rhodesiensis have been proposed.
In February 2006 a fossil, the Gawis cranium, was found which might possibly be a species intermediate between H. erectus and H. sapiens or one of many evolutionary dead ends. The skull from Gawis, Ethiopia, is believed to be 500,000–250,000 years old. Only summary details are known, and the finders have not yet released a peer-reviewed study. Gawis man's facial features suggest that it is either an intermediate species or an example of a "Bodo man" female.
Neanderthal and Denisovan
Homo neanderthalensis, alternatively designated as Homo sapiens neanderthalensis, lived in Europe and Asia from 400,000 to about 28,000 years ago.
There are a number of clear anatomical differences between anatomically modern humans (AMH) and Neanderthal specimens, many relating to the superior Neanderthal adaptation to cold environments. Neanderthal surface to volume ratio was even lower than that among modern Inuit populations, indicating superior retention of body heat.
Neanderthals also had significantly larger brains, as shown from brain endocasts, casting doubt on their intellectual inferiority to modern humans. However, the higher body mass of Neanderthals may have required larger brain mass for body control. Also, recent research by Pearce, Stringer, and Dunbar has shown important differences in brain architecture. The larger size of the Neanderthal orbital chamber and occipital lobe suggests that they had a better visual acuity than modern humans, useful in the dimmer light of glacial Europe.
Neanderthals may have had less brain capacity available for social functions. Inferring social group size from endocranial volume (minus occipital lobe size) suggests that Neanderthal groups may have been limited to 120 individuals, compared to 144 possible relationships for modern humans. Larger social groups could imply that modern humans had less risk of inbreeding within their clan, traded over larger areas (confirmed in the distribution of stone tools), and spread social and technological innovations faster. All of these may have contributed to modern Homo sapiens replacing Neanderthal populations by 28,000 BP.
Earlier evidence from sequencing mitochondrial DNA suggested that no significant gene flow occurred between H. neanderthalensis and H. sapiens, and that the two were separate species that shared a common ancestor about 660,000 years ago. However, a sequencing of the Neanderthal genome in 2010 indicated that Neanderthals did indeed interbreed with anatomically modern humans c. 45,000–80,000 years ago, around the time modern humans migrated out from Africa, but before they dispersed throughout Europe, Asia and elsewhere. The genetic sequencing of a 40,000-year-old human skeleton from Romania showed that 11% of its genome was Neanderthal, implying the individual had a Neanderthal ancestor 4–6 generations previously, in addition to a contribution from earlier interbreeding in the Middle East. Though this interbred Romanian population seems not to have been ancestral to modern humans, the finding indicates that interbreeding happened repeatedly.
All modern non-African humans have about 1% to 4% (or 1.5% to 2.6% by more recent data) of their DNA derived from Neanderthals. This finding is consistent with recent studies indicating that the divergence of some human alleles dates to one Ma, although this interpretation has been questioned. Neanderthals and AMH Homo sapiens could have co-existed in Europe for as long as 10,000 years, during which AMH populations exploded, vastly outnumbering Neanderthals, possibly outcompeting them by sheer numbers.
In 2008, archaeologists working at the site of Denisova Cave in the Altai Mountains of Siberia uncovered a small bone fragment from the fifth finger of a juvenile member of another human species, the Denisovans. Artifacts, including a bracelet, excavated in the cave at the same level were carbon dated to around 40,000 BP. As DNA had survived in the fossil fragment due to the cool climate of the Denisova Cave, both mtDNA and nuclear DNA were sequenced.
While the divergence point of the mtDNA was unexpectedly deep in time, the full genomic sequence suggested the Denisovans belonged to the same lineage as Neanderthals, with the two diverging shortly after their line split from the lineage that gave rise to modern humans. Modern humans are known to have overlapped with Neanderthals in Europe and the Near East for possibly more than 40,000 years, and the discovery raises the possibility that Neanderthals, Denisovans, and modern humans may have co-existed and interbred. The existence of this distant branch creates a much more complex picture of humankind during the Late Pleistocene than previously thought. Evidence has also been found that as much as 6% of the DNA of some modern Melanesians derive from Denisovans, indicating limited interbreeding in Southeast Asia.
Alleles thought to have originated in Neanderthals and Denisovans have been identified at several genetic loci in the genomes of modern humans outside Africa. HLA haplotypes from Denisovans and Neanderthals represent more than half the HLA alleles of modern Eurasians, indicating strong positive selection for these introgressed alleles. Corinne Simonti at Vanderbilt University in Nashville and her team found, from the medical records of 28,000 people of European descent, that the presence of Neanderthal DNA segments may be associated with a higher rate of depression.
The flow of genes from Neanderthal populations to modern humans was not all one way. Sergi Castellano of the Max Planck Institute for Evolutionary Anthropology reported in 2016 that, while Denisovan and Neanderthal genomes are more closely related to each other than to those of modern humans, Siberian Neanderthal genomes show more similarity to modern human genes than do European Neanderthal genomes. This suggests Neanderthal populations interbred with modern humans around 100,000 years ago, probably somewhere in the Near East.
Studies of a Neanderthal child at Gibraltar show, from brain development and tooth eruption, that Neanderthal children may have matured more rapidly than those of Homo sapiens.
H. floresiensis
H. floresiensis, which lived from approximately 190,000 to 50,000 years before present (BP), has been nicknamed the hobbit for its small size, possibly a result of insular dwarfism. H. floresiensis is intriguing both for its size and its age, being an example of a recent species of the genus Homo that exhibits derived traits not shared with modern humans. In other words, H. floresiensis shares a common ancestor with modern humans, but split from the modern human lineage and followed a distinct evolutionary path. The main find was a skeleton believed to be a woman of about 30 years of age. Found in 2003, it has been dated to approximately 18,000 years old. The living woman was estimated to be one meter in height, with a brain volume of just 380 cm3 (considered small for a chimpanzee and less than a third of the H. sapiens average of 1400 cm3).
However, there is an ongoing debate over whether H. floresiensis is indeed a separate species. Some scientists hold that H. floresiensis was a modern H. sapiens with pathological dwarfism. This hypothesis is supported in part because some modern humans who live on Flores, the Indonesian island where the skeleton was found, are pygmies. This, coupled with pathological dwarfism, could have resulted in a significantly diminutive human. The other major challenge to H. floresiensis as a separate species is that it was found with tools associated only with H. sapiens.
The hypothesis of pathological dwarfism, however, fails to explain additional anatomical features that are unlike those of modern humans (diseased or not) but much like those of ancient members of our genus. Aside from cranial features, these features include the form of bones in the wrist, forearm, shoulder, knees, and feet. Additionally, this hypothesis fails to explain the find of multiple examples of individuals with these same characteristics, indicating they were common to a large population, and not limited to one individual.
In 2016, fossil teeth and a partial jaw from hominins assumed to be ancestral to H. floresiensis were discovered at Mata Menge, some distance from Liang Bua, the cave where H. floresiensis was found. They date to about 700,000 years ago and are noted by Australian archaeologist Gerrit van den Bergh for being even smaller than the later fossils.
H. luzonensis
A small number of specimens from the island of Luzon, dated 50,000 to 67,000 years ago, have recently been assigned by their discoverers, based on dental characteristics, to a novel human species, H. luzonensis.
H. sapiens
H. sapiens (the adjective sapiens is Latin for "wise" or "intelligent") emerged in Africa around 300,000 years ago, likely derived from H. heidelbergensis or a related lineage. In September 2019, scientists reported the determination, based on 260 CT scans, of a virtual skull shape for the last common ancestor of modern humans (H. sapiens), representative of the earliest modern humans, and suggested that modern humans arose between 260,000 and 350,000 years ago through a merging of populations in East and South Africa.
Between 400,000 years ago and the second interglacial period in the Middle Pleistocene, around 250,000 years ago, intracranial volume expanded and stone tool technologies became more elaborate, providing evidence for a transition from H. erectus to H. sapiens. The direct evidence suggests there was a migration of H. erectus out of Africa, then a further speciation of H. sapiens from H. erectus in Africa. A subsequent migration (both within and out of Africa) eventually replaced the earlier dispersed H. erectus. This migration and origin theory is usually referred to as the "recent single-origin hypothesis" or "out of Africa" theory. H. sapiens interbred with archaic humans both in Africa and in Eurasia, in Eurasia notably with Neanderthals and Denisovans.
The Toba catastrophe theory, which postulates a population bottleneck for H. sapiens about 70,000 years ago, was controversial from its first proposal in the 1990s and by the 2010s had very little support. Distinctive human genetic variability has arisen as the result of the founder effect, by archaic admixture and by recent evolutionary pressures.
Anatomical changes
Since Homo sapiens separated from its last common ancestor shared with chimpanzees, human evolution has been characterized by a number of morphological, developmental, physiological, behavioral, and environmental changes. Environmental (cultural) evolution, which appeared much later during the Pleistocene, played a significant role in human evolution, as observed in human transitions between subsistence systems. The most significant of these adaptations are bipedalism, increased brain size, lengthened ontogeny (gestation and infancy), and decreased sexual dimorphism. The relationship between these changes is the subject of ongoing debate. Other significant morphological changes included the evolution of a power and precision grip, a change first occurring in H. erectus.
Bipedalism
Bipedalism (walking on two legs) is the basic adaptation of the hominid line and is considered the main cause behind a suite of skeletal changes shared by all bipedal hominids. The earliest hominin with presumably primitive bipedalism is considered to be either Sahelanthropus or Orrorin, both of which arose some 6 to 7 million years ago. The non-bipedal knuckle-walkers, the gorillas and chimpanzees, diverged from the hominin line over the same period, so either Sahelanthropus or Orrorin may be our last shared ancestor with them. Ardipithecus, a full biped, arose approximately 5.6 million years ago.
The early bipeds eventually evolved into the australopithecines and still later into the genus Homo. There are several theories of the adaptation value of bipedalism. It is possible that bipedalism was favored because it freed the hands for reaching and carrying food, saved energy during locomotion, enabled long-distance running and hunting, provided an enhanced field of vision, and helped avoid hyperthermia by reducing the surface area exposed to direct sun; all features advantageous for thriving in the new savanna and woodland environment created by the East African Rift Valley uplift, in contrast to the previous closed forest habitat. A 2007 study provides support for the hypothesis that bipedalism evolved because it used less energy than quadrupedal knuckle-walking. However, recent studies suggest that bipedality without the ability to use fire would not have allowed global dispersal. With this change in gait, the legs lengthened proportionately relative to the arms, which were shortened through the removal of the need for brachiation. Another change is the shape of the big toe. Recent studies suggest that australopithecines still lived part of the time in trees, as indicated by their retention of a grasping big toe. This was progressively lost in habilines.
Anatomically, the evolution of bipedalism has been accompanied by a large number of skeletal changes, not just to the legs and pelvis, but also to the vertebral column, feet and ankles, and skull. The femur evolved into a slightly more angular position to move the center of gravity toward the geometric center of the body. The knee and ankle joints became increasingly robust to better support increased weight. To support the increased weight on each vertebra in the upright position, the human vertebral column became S-shaped and the lumbar vertebrae became shorter and wider. In the feet the big toe moved into alignment with the other toes to help in forward locomotion. The arms and forearms shortened relative to the legs, making it easier to run. The foramen magnum migrated under the skull and more anteriorly.
The most significant changes occurred in the pelvic region, where the long downward-facing iliac blade was shortened and widened as a requirement for keeping the center of gravity stable while walking; bipedal hominids therefore have a shorter but broader, bowl-like pelvis. A drawback is that the birth canal of bipedal apes is smaller than in knuckle-walking apes, though it widened in australopithecines and modern humans compared with earlier bipeds, permitting the passage of newborns despite the increase in cranial size. This widening is limited to the upper portion of the canal, since further increase would hinder normal bipedal movement.
The shortening of the pelvis and smaller birth canal evolved as a requirement for bipedalism and had significant effects on the process of human birth, which is much more difficult in modern humans than in other primates. During human birth, because of the variation in size of the pelvic region, the fetal head must be in a transverse position (compared to the mother) during entry into the birth canal and rotate about 90 degrees upon exit. The smaller birth canal became a limiting factor to brain size increases in early humans and prompted a shorter gestation period, leading to the relative immaturity of human offspring, who are unable to walk much before 12 months and have greater neoteny compared to other primates, which are mobile at a much earlier age. The increased brain growth after birth and the increased dependency of children on mothers had a major effect upon the female reproductive cycle, and upon the more frequent appearance of alloparenting in humans when compared with other hominids. Delayed human sexual maturity also led to the evolution of menopause, with one explanation, the grandmother hypothesis, proposing that elderly women could better pass on their genes by taking care of their daughters' offspring than by having more children of their own.
Encephalization
The human species eventually developed a much larger brain than that of other primates—typically, in modern humans, nearly three times the size of a chimpanzee or gorilla brain. After a period of stasis with Australopithecus anamensis and Ardipithecus, species which had smaller brains as a result of their bipedal locomotion, the pattern of encephalization started with Homo habilis, whose brain was slightly larger than that of chimpanzees. This evolution continued in Homo erectus and reached a maximum in Neanderthals, whose brains were larger even than those of modern Homo sapiens. This brain increase manifested during postnatal brain growth, far exceeding that of other apes (heterochrony). It also allowed for extended periods of social learning and language acquisition in juvenile humans, beginning as much as 2 million years ago. Encephalization may be due to a dependency on calorie-dense, difficult-to-acquire food.
Furthermore, the changes in the structure of human brains may be even more significant than the increase in size. Fossilized skulls show that the brain size in early humans fell within the range of modern humans 300,000 years ago, but the brain only attained its present-day shape between 100,000 and 35,000 years ago. The temporal lobes, which contain centers for language processing, have increased disproportionately, as has the prefrontal cortex, which has been related to complex decision-making and moderating social behavior. Encephalization has been tied to increased starches and meat in the diet; however, a 2022 meta-study called the role of meat into question. Other factors are the development of cooking, and it has been proposed that intelligence increased as a response to an increased necessity for solving social problems as human society became more complex. Changes in skull morphology, such as smaller mandibles and mandible muscle attachments, allowed more room for the brain to grow.
The increase in volume of the neocortex also included a rapid increase in size of the cerebellum. Its function has traditionally been associated with balance and fine motor control, but more recently with speech and cognition. The great apes, including hominids, have a more pronounced cerebellum relative to the neocortex than other primates. It has been suggested that, because of its function in sensory-motor control and learning complex muscular actions, the cerebellum may have underpinned human technological adaptations, including the preconditions of speech.
The immediate survival advantage of encephalization is difficult to discern, as the major brain changes from Homo erectus to Homo heidelbergensis were not accompanied by major changes in technology. It has been suggested that the changes were mainly social and behavioural, including increased empathic abilities, increases in size of social groups, and increased behavioral plasticity. Humans are unique in the ability to acquire information through social transmission and adapt that information. The emerging field of cultural evolution studies human sociocultural change from an evolutionary perspective.
Sexual dimorphism
The reduced degree of sexual dimorphism in humans is visible primarily in the reduction of the male canine tooth relative to other ape species (except gibbons) and reduced brow ridges and general robustness of males. Another important physiological change related to sexuality in humans was the evolution of hidden estrus. Humans are the only hominoids in which the female is fertile year round and in which no special signals of fertility are produced by the body (such as genital swelling or overt changes in proceptivity during estrus).
Nonetheless, humans retain a degree of sexual dimorphism in the distribution of body hair and subcutaneous fat, and in overall size, with males being around 15% larger than females. These changes taken together have been interpreted as a result of an increased emphasis on pair bonding as a possible solution to the requirement for increased parental investment due to the prolonged infancy of offspring.
Ulnar opposition
The ulnar opposition—the contact between the thumb and the tip of the little finger of the same hand—is unique to the genus Homo, including Neanderthals, the Sima de los Huesos hominins and anatomically modern humans. In other primates, the thumb is short and unable to touch the little finger. The ulnar opposition facilitates the precision grip and power grip of the human hand, underlying all skilled manipulations.
Other changes
A number of other changes have also characterized the evolution of humans, among them an increased reliance on vision rather than smell (highly reduced olfactory bulb); a longer juvenile developmental period and higher infant dependency; a smaller gut and small, misaligned teeth; a faster basal metabolism; loss of body hair; an eccrine sweat gland density ten times higher than in any other catarrhine primate, even though humans use 30% to 50% less water per day than chimpanzees and gorillas; more REM sleep but less sleep in total; a change in the shape of the dental arcade from U-shaped to parabolic; development of a chin (found in Homo sapiens alone); styloid processes; and a descended larynx. As the human hand and arms adapted to the making of tools and were used less for climbing, the shoulder blades changed too. As a side effect, this allowed human ancestors to throw objects with greater force, speed, and accuracy.
Use of tools
The use of tools has been interpreted as a sign of intelligence, and it has been theorized that tool use may have stimulated certain aspects of human evolution, especially the continued expansion of the human brain. Paleontology has yet to fully explain the expansion of this organ over millions of years, despite its extreme demands in terms of energy consumption. The brain of a modern human consumes, on average, about 13 watts (260 kilocalories per day), a fifth of the body's resting power consumption. Increased tool use would allow hunting for energy-rich meat products, and would enable processing more energy-rich plant products. Researchers have suggested that early hominins were thus under evolutionary pressure to increase their capacity to create and use tools.
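The quoted figures are mutually consistent, as a short calculation confirms. A minimal sketch in Python (the 13-watt value is from the text above; the whole-body resting metabolic rate used for comparison is an assumed illustrative value):

```python
# Sanity check: convert the brain's ~13 W power draw into kilocalories per day
# and compare it with an assumed whole-body resting metabolic rate.
BRAIN_POWER_W = 13.0            # figure quoted in the text
SECONDS_PER_DAY = 24 * 60 * 60
JOULES_PER_KCAL = 4184.0

brain_kcal_per_day = BRAIN_POWER_W * SECONDS_PER_DAY / JOULES_PER_KCAL
print(f"Brain: {brain_kcal_per_day:.0f} kcal/day")  # ~268 kcal/day, close to the quoted 260

RESTING_KCAL_PER_DAY = 1300.0   # assumed illustrative resting metabolic rate
print(f"Share of resting power: {brain_kcal_per_day / RESTING_KCAL_PER_DAY:.0%}")  # ~21%, about a fifth
```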
Precisely when early humans started to use tools is difficult to determine, because the more primitive these tools are (for example, sharp-edged stones) the more difficult it is to decide whether they are natural objects or human artifacts. There is some evidence that the australopithecines (4 Ma) may have used broken bones as tools, but this is debated.
Many species make and use tools, but it is the human genus that dominates the areas of making and using more complex tools. The oldest known tools are flakes from West Turkana, Kenya, which date to 3.3 million years ago. The next oldest stone tools are from Gona, Ethiopia, and are considered the beginning of the Oldowan technology. These tools date to about 2.6 million years ago. A Homo fossil was found near some Oldowan tools, and its age was estimated at 2.3 million years, suggesting that Homo species may indeed have created and used these tools; this is a possibility, but it does not yet represent solid evidence. The third metacarpal styloid process enables the hand bone to lock into the wrist bones, allowing for greater amounts of pressure to be applied to the wrist and hand from a grasping thumb and fingers. It allows humans the dexterity and strength to make and use complex tools. This unique anatomical feature separates humans from apes and other nonhuman primates, and is not seen in human fossils older than 1.8 million years.
Bernard Wood noted that Paranthropus co-existed with the early Homo species in the area of the "Oldowan Industrial Complex" over roughly the same span of time. Although there is no direct evidence identifying Paranthropus as the tool makers, their anatomy lends indirect evidence to their capabilities in this area. Most paleoanthropologists agree that the early Homo species were indeed responsible for most of the Oldowan tools found. They argue that when most of the Oldowan tools were found in association with human fossils, Homo was always present, but Paranthropus was not.
In 1994, Randall Susman used the anatomy of opposable thumbs as the basis for his argument that both the Homo and Paranthropus species were toolmakers. He compared bones and muscles of human and chimpanzee thumbs, finding that humans have three muscles which are lacking in chimpanzees. Humans also have thicker metacarpals with broader heads, allowing more precise grasping than the chimpanzee hand can perform. Susman posited that the modern anatomy of the human opposable thumb is an evolutionary response to the requirements associated with making and handling tools, and that both species were indeed toolmakers.
Transition to behavioral modernity
Anthropologists describe modern human behavior as including cultural and behavioral traits such as specialization of tools, use of jewellery and images (such as cave drawings), organization of living space, rituals (such as grave gifts), specialized hunting techniques, exploration of less hospitable geographical areas, and barter trade networks, as well as more general traits such as language and complex symbolic thinking. Debate continues as to whether a "revolution" led to modern humans ("the big bang of human consciousness"), or whether the evolution was more gradual.
Until about 50,000–40,000 years ago, the use of stone tools seems to have progressed stepwise. Each phase (H. habilis, H. ergaster, H. neanderthalensis) marked a new technology, followed by very slow development until the next phase. Currently paleoanthropologists are debating whether these Homo species possessed some or many modern human behaviors. They seem to have been culturally conservative, maintaining the same technologies and foraging patterns over very long periods.
Around 50,000 BP, human culture started to evolve more rapidly. The transition to behavioral modernity has been characterized by some as a "Great Leap Forward", or as the "Upper Palaeolithic Revolution", due to the sudden appearance in the archaeological record of distinctive signs of modern behavior and big game hunting. Evidence of behavioral modernity significantly earlier also exists from Africa, with older evidence of abstract imagery, widened subsistence strategies, more sophisticated tools and weapons, and other "modern" behaviors, and many scholars have recently argued that the transition to modernity occurred sooner than previously believed.
Other scholars consider the transition to have been more gradual, noting that some features had already appeared among archaic African Homo sapiens 300,000–200,000 years ago. Recent evidence suggests that the Australian Aboriginal population separated from the African population 75,000 years ago, and that they made a sea journey 60,000 years ago, which may diminish the significance of the Upper Paleolithic Revolution.
Modern humans started burying their dead, making clothing from animal hides, hunting with more sophisticated techniques (such as using pit traps or driving animals off cliffs), and cave painting. As human culture advanced, different populations innovated on existing technologies: artifacts such as fish hooks, buttons, and bone needles show signs of cultural variation, which had not been seen prior to 50,000 BP. Typically, the older H. neanderthalensis populations did not vary in their technologies, although the Châtelperronian assemblages have been found to be Neanderthal imitations of H. sapiens Aurignacian technologies.
Recent and ongoing human evolution
Anatomically modern human populations continue to evolve, as they are affected by both natural selection and genetic drift. Although selection pressure on some traits, such as resistance to smallpox, has decreased in the modern age, humans are still undergoing natural selection for many other traits. Some of these are due to specific environmental pressures, while others are related to lifestyle changes since the development of agriculture (10,000 years ago), urbanization (5,000 years ago), and industrialization (250 years ago). It has been argued that human evolution has accelerated since the development of agriculture 10,000 years ago and civilization some 5,000 years ago, resulting, it is claimed, in substantial genetic differences between different current human populations. More recent research indicates that, for some traits, the developments and innovations of human culture have driven a new form of selection that coexists with, and in some cases has largely replaced, natural selection.
Particularly conspicuous is variation in superficial characteristics, such as Afro-textured hair, or the recent evolution of light skin and blond hair in some populations, which are attributed to differences in climate. Particularly strong selective pressures have resulted in high-altitude adaptation in humans, with different adaptations arising in different isolated populations. Studies of the genetic basis show that some arose very recently, with Tibetans evolving over 3,000 years to have high proportions of an allele of EPAS1 that is adaptive to high altitudes.
Other evolution is related to endemic diseases: the presence of malaria selects for sickle cell trait (the heterozygous form of sickle cell gene), while in the absence of malaria, the health effects of sickle-cell anemia select against this trait. For another example, the population at risk of the severe debilitating disease kuru has significant over-representation of an immune variant of the prion protein gene G127V versus non-immune alleles. The frequency of this genetic variant is due to the survival of immune persons. Some reported trends remain unexplained and the subject of ongoing research in the novel field of evolutionary medicine: polycystic ovary syndrome (PCOS) reduces fertility and thus is expected to be subject to extremely strong negative selection, but its relative commonality in human populations suggests a counteracting selection pressure. The identity of that pressure remains the subject of some debate.
Recent human evolution related to agriculture includes genetic resistance to infectious disease that has appeared in human populations by crossing the species barrier from domesticated animals, as well as changes in metabolism due to changes in diet, such as lactase persistence.
Culturally-driven evolution can defy the expectations of natural selection: while human populations experience some pressure that drives a selection for producing children at younger ages, the advent of effective contraception, higher education, and changing social norms have driven the observed selection in the opposite direction. However, culturally-driven selection need not necessarily work counter to natural selection: some proposals to explain the high rate of recent human brain expansion indicate a kind of feedback whereby the brain's increased social learning efficiency encourages cultural developments that in turn demand still-greater learning efficiency, and so forth. Culturally-driven evolution has the advantage that, in addition to its genetic effects, it can be observed in the archaeological record: the development of stone tools across the Palaeolithic period connects to culturally-driven cognitive development, in the form of skill acquisition supported by the culture, and to the development of increasingly complex technologies and the cognitive ability to elaborate them.
In contemporary times, since industrialization, some trends have been observed: for instance, menopause is evolving to occur later. Other reported trends appear to include lengthening of the human reproductive period and reduction in cholesterol levels, blood glucose and blood pressure in some populations.
History of study
Before Darwin
The name of the biological genus to which humans belong, Homo, is Latin for 'human'. It was chosen originally by Carl Linnaeus in his classification system. The English word human is from the Latin humanus, the adjectival form of homo. The Latin homo derives from the Indo-European root *dhghem, or 'earth'. Linnaeus and other scientists of his time also considered the great apes to be the closest relatives of humans, based on morphological and anatomical similarities.
Darwin
The possibility of linking humans with earlier apes by descent became clear only after 1859 with the publication of Charles Darwin's On the Origin of Species, in which he argued for the idea of the evolution of new species from earlier ones. Darwin's book did not address the question of human evolution, saying only that "Light will be thrown on the origin of man and his history."
The first debates about the nature of human evolution arose between Thomas Henry Huxley and Richard Owen. Huxley argued for human evolution from apes by illustrating many of the similarities and differences between humans and other apes, and did so particularly in his 1863 book Evidence as to Man's Place in Nature. Many of Darwin's early supporters (such as Alfred Russel Wallace and Charles Lyell) did not initially agree that the origin of the mental capacities and the moral sensibilities of humans could be explained by natural selection, though this later changed. Darwin applied the theory of evolution and sexual selection to humans in his 1871 book The Descent of Man, and Selection in Relation to Sex.
First fossils
A major problem in the 19th century was the lack of fossil intermediaries. Neanderthal remains were discovered in a limestone quarry in 1856, three years before the publication of On the Origin of Species, and Neanderthal fossils had been discovered in Gibraltar even earlier, but it was originally claimed that these were the remains of a modern human who had suffered some kind of illness. Despite the 1891 discovery by Eugène Dubois of what is now called Homo erectus at Trinil, Java, it was only in the 1920s, when such fossils were discovered in Africa, that intermediate species began to accumulate. In 1925, Raymond Dart described Australopithecus africanus. The type specimen was the Taung Child, an australopithecine infant which was discovered in a cave. The child's remains were a remarkably well-preserved tiny skull and an endocast of the brain.
Although the brain was small (410 cm3), its shape was rounded, unlike that of chimpanzees and gorillas, and more like a modern human brain. Also, the specimen showed short canine teeth, and the position of the foramen magnum (the hole in the skull where the spine enters) was evidence of bipedal locomotion. All of these traits convinced Dart that the Taung Child was a bipedal human ancestor, a transitional form between apes and humans.
The East African fossils
During the 1960s and 1970s, hundreds of fossils were found in East Africa in the regions of the Olduvai Gorge and Lake Turkana. These searches were carried out by the Leakey family, with Louis Leakey and his wife Mary Leakey, and later their son Richard and daughter-in-law Meave, fossil hunters and paleoanthropologists. From the fossil beds of Olduvai and Lake Turkana they amassed specimens of the early hominins: the australopithecines and Homo species, and even H. erectus.
These finds cemented Africa as the cradle of humankind. In the late 1970s and the 1980s, Ethiopia emerged as the new hot spot of paleoanthropology after "Lucy", the most complete fossil member of the species Australopithecus afarensis, was found in 1974 by Donald Johanson near Hadar in the arid Afar Triangle region of northern Ethiopia. Although the specimen had a small brain, the pelvis and leg bones were almost identical in function to those of modern humans, showing with certainty that these hominins had walked erect. Lucy's species, Australopithecus afarensis, is thought to be more closely related to the genus Homo, whether as a direct ancestor or as a close relative of an unknown ancestor, than any other known hominid or hominin from this early time range. (The specimen was nicknamed "Lucy" after the Beatles' song "Lucy in the Sky with Diamonds", which was played loudly and repeatedly in the camp during the excavations.) The Afar Triangle area would later yield discovery of many more hominin fossils, particularly those uncovered or described by teams headed by Tim D. White in the 1990s, including Ardipithecus ramidus and A. kadabba.
In 2013, fossil skeletons of Homo naledi, an extinct species of hominin assigned (provisionally) to the genus Homo, were found in the Rising Star Cave system, a site in South Africa's Cradle of Humankind region in Gauteng province near Johannesburg. Fossils of at least fifteen individuals, amounting to 1,550 specimens, have been excavated from the cave. The species is characterized by a body mass and stature similar to small-bodied human populations, a smaller endocranial volume similar to Australopithecus, and a cranial morphology (skull shape) similar to early Homo species. The skeletal anatomy combines primitive features known from australopithecines with features known from early hominins. The individuals show signs of having been deliberately disposed of within the cave near the time of death. The fossils were dated close to 250,000 years ago, and thus are not ancestral to, but contemporary with, the first appearance of larger-brained anatomically modern humans.
The genetic revolution
The genetic revolution in studies of human evolution started when Vincent Sarich and Allan Wilson measured the strength of immunological cross-reactions of blood serum albumin between pairs of creatures, including humans and African apes (chimpanzees and gorillas). The strength of the reaction could be expressed numerically as an immunological distance, which was in turn proportional to the number of amino acid differences between homologous proteins in different species. By constructing a calibration curve of the immunological distance (ID) of species pairs with known divergence times in the fossil record, the data could be used as a molecular clock to estimate the times of divergence of pairs with poorer or unknown fossil records.
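In outline, the method amounts to fitting a line through (distance, time) pairs with well-dated fossil splits and reading off dates for undated pairs. A minimal sketch in Python, using made-up calibration numbers rather than Sarich and Wilson's actual data:

```python
# Minimal molecular-clock sketch: calibrate immunological distance (ID) against
# known fossil divergence times, then date a pair with a poor fossil record.
# All numbers here are illustrative assumptions, not the original 1967 data.
calibration = [
    # (immunological distance, divergence time in millions of years)
    (2.0, 5.0),
    (4.1, 10.0),
    (8.2, 20.0),
]

# Least-squares slope through the origin: time ~ k * distance
k = sum(d * t for d, t in calibration) / sum(d * d for d, _ in calibration)

unknown_id = 1.8  # hypothetical ID measured between two species
print(f"Estimated divergence: {k * unknown_id:.1f} Ma")  # ~4.4 Ma with these made-up numbers
```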
In their seminal 1967 paper in Science, Sarich and Wilson estimated the divergence time of humans and apes as four to five million years ago, at a time when standard interpretations of the fossil record gave this divergence as at least 10 to as much as 30 million years. Subsequent fossil discoveries, notably "Lucy", and reinterpretation of older fossil materials, notably Ramapithecus, showed the younger estimates to be correct and validated the albumin method.
Progress in DNA sequencing, specifically mitochondrial DNA (mtDNA) and then Y-chromosome DNA (Y-DNA) advanced the understanding of human origins. Application of the molecular clock principle revolutionized the study of molecular evolution.
On the basis of a separation from the orangutan between 10 and 20 million years ago, earlier studies of the molecular clock suggested that there were about 76 new (de novo) mutations per generation, that is, mutations present in children but not inherited from either parent; this evidence supported the divergence time between hominins and chimpanzees noted above. However, a 2012 study in Iceland of 78 children and their parents suggests a mutation rate of only 36 mutations per generation; this datum extends the separation between humans and chimpanzees to an earlier period, greater than 7 million years ago (Ma). Additional research with 226 offspring of wild chimpanzee populations in eight locations suggests that chimpanzees reproduce at age 26.5 years on average, which suggests the human divergence from chimpanzees occurred between 7 and 13 Ma. These data also suggest that Ardipithecus (4.5 Ma), Orrorin (6 Ma) and Sahelanthropus (7 Ma) all may be on the hominid lineage, and even that the separation may have occurred outside the East African Rift region.
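The arithmetic behind such estimates can be sketched briefly. In the snippet below, the 76 and 36 mutations-per-generation figures come from the text, while the genome size, human-chimpanzee sequence divergence, and generation time are assumed round values, so the outputs are only illustrative:

```python
# Illustrative divergence-time arithmetic from per-generation mutation counts.
GENOME_SIZE_BP = 3.0e9       # assumed haploid genome size
DIVERGENCE_PER_BP = 0.01     # assumed ~1% human-chimpanzee sequence divergence
GENERATION_YEARS = 25.0      # assumed average generation time

def split_time_ma(mutations_per_generation: float) -> float:
    """Estimated time since two lineages split, in millions of years (Ma)."""
    mu = mutations_per_generation / GENOME_SIZE_BP   # per-site, per-generation rate
    generations = DIVERGENCE_PER_BP / (2 * mu)       # divergence accrues on both lineages
    return generations * GENERATION_YEARS / 1e6

for rate in (76, 36):
    print(f"{rate} mutations/generation -> split ~{split_time_ma(rate):.0f} Ma")
# 76 per generation gives ~5 Ma; halving the rate to 36 pushes the split back to ~10 Ma
```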
Furthermore, analysis of the two species' genes in 2006 provides evidence that after human ancestors had started to diverge from chimpanzees, interspecies mating between "proto-human" and "proto-chimpanzees" nonetheless occurred regularly enough to change certain genes in the new gene pool:
A new comparison of the human and chimpanzee genomes suggests that after the two lineages separated, they may have begun interbreeding... A principal finding is that the X chromosomes of humans and chimpanzees appear to have diverged about 1.2 million years more recently than the other chromosomes.
The research suggests:
There were in fact two splits between the human and chimpanzee lineages, with the first being followed by interbreeding between the two populations and then a second split. The suggestion of a hybridization has startled paleoanthropologists, who nonetheless are treating the new genetic data seriously.
The quest for the earliest hominin
In the 1990s, several teams of paleoanthropologists were working throughout Africa looking for evidence of the earliest divergence of the hominin lineage from the great apes. In 1994, Meave Leakey discovered Australopithecus anamensis. The find was overshadowed by Tim D. White's 1995 discovery of Ardipithecus ramidus, which pushed back the fossil record to about 4.4 million years ago.
In 2000, Martin Pickford and Brigitte Senut discovered, in the Tugen Hills of Kenya, a 6-million-year-old bipedal hominin which they named Orrorin tugenensis. And in 2001, a team led by Michel Brunet discovered the skull of Sahelanthropus tchadensis, which was dated at about 7 million years ago, and which Brunet argued was bipedal, and therefore a hominin.
Human dispersal
Anthropologists in the 1980s were divided regarding some details of reproductive barriers and migratory dispersals of the genus Homo. Subsequently, genetics has been used to investigate and resolve these issues. According to the Sahara pump theory, evidence suggests that the genus Homo has migrated out of Africa at least three and possibly four times (e.g. Homo erectus, Homo heidelbergensis and two or three times for Homo sapiens). Recent evidence suggests these dispersals are closely related to fluctuating periods of climate change.
Recent evidence suggests that humans may have left Africa half a million years earlier than previously thought. A joint Franco-Indian team has found human artifacts in the Siwalik Hills north of New Delhi dating back at least 2.6 million years. This is earlier than the previous earliest finding of genus Homo at Dmanisi, in Georgia, dating to 1.85 million years. Although controversial, tools found at a Chinese cave strengthen the case that humans used tools as far back as 2.48 million years ago. This suggests that the Asian "Chopper" tool tradition, found in Java and northern China, may have left Africa before the appearance of the Acheulian hand axe.
Dispersal of modern Homo sapiens
Until genetic evidence became available, there were two dominant models for the dispersal of modern humans. The multiregional hypothesis proposed that the genus Homo contained only a single interconnected population as it does today (not separate species), and that its evolution took place worldwide continuously over the last couple of million years. This model was proposed in 1988 by Milford H. Wolpoff. In contrast, the "out of Africa" model proposed that modern H. sapiens speciated in Africa recently (that is, approximately 200,000 years ago) and that the subsequent migration through Eurasia resulted in the nearly complete replacement of other Homo species. This model has been developed by Chris Stringer and Peter Andrews.
Sequencing mtDNA and Y-DNA sampled from a wide range of indigenous populations revealed ancestral information relating to both male and female genetic heritage, and strengthened the "out of Africa" theory while weakening the views of multiregional evolutionism. Differences in the aligned genetic trees were interpreted as supporting a recent single origin.
"Out of Africa" has thus gained much support from research using female mitochondrial DNA and the male Y chromosome. After analysing genealogy trees constructed using 133 types of mtDNA, researchers concluded that all were descended from a female African progenitor, dubbed Mitochondrial Eve. "Out of Africa" is also supported by the fact that mitochondrial genetic diversity is highest among African populations.
A broad study of African genetic diversity, headed by Sarah Tishkoff, found the San people had the greatest genetic diversity among the 113 distinct populations sampled, making them one of 14 "ancestral population clusters". The research also located a possible origin of modern human migration in southwestern Africa, near the coastal border of Namibia and Angola. The fossil evidence was insufficient for archaeologist Richard Leakey to resolve the debate about exactly where in Africa modern humans first appeared. Studies of haplogroups in Y-chromosomal DNA and mitochondrial DNA have largely supported a recent African origin. All the evidence from autosomal DNA also predominantly supports a recent African origin. However, evidence for archaic admixture in modern humans, both in Africa and, later, throughout Eurasia, has recently been suggested by a number of studies.
Recent sequencing of Neanderthal and Denisovan genomes shows that some admixture with these populations has occurred. All modern human groups outside Africa have 1–4% or (according to more recent research) about 1.5–2.6% Neanderthal alleles in their genome, and some Melanesians have an additional 4–6% of Denisovan alleles. These new results do not contradict the "out of Africa" model, except in its strictest interpretation, although they make the situation more complex. After recovery from a genetic bottleneck that some researchers speculate might be linked to the Toba supervolcano catastrophe, a fairly small group left Africa and interbred with Neanderthals, probably in the Middle East, on the Eurasian steppe or even in North Africa before their departure. Their still predominantly African descendants spread to populate the world. A fraction in turn interbred with Denisovans, probably in southeastern Asia, before populating Melanesia. HLA haplotypes of Neanderthal and Denisova origin have been identified in modern Eurasian and Oceanian populations. The Denisovan EPAS1 gene has also been found in Tibetan populations. Studies of the human genome using machine learning have identified additional genetic contributions in Eurasians from an "unknown" ancestral population potentially related to the Neanderthal-Denisovan lineage.
There are still differing theories on whether there was a single exodus from Africa or several. A multiple dispersal model involves the Southern Dispersal theory, which has gained support in recent years from genetic, linguistic and archaeological evidence. In this theory, there was a coastal dispersal of modern humans from the Horn of Africa crossing the Bab-el-Mandeb to Yemen at a lower sea level around 70,000 years ago. This group helped to populate Southeast Asia and Oceania, explaining the discovery of early human sites in these areas much earlier than those in the Levant. This group seems to have been dependent upon marine resources for their survival.
Stephen Oppenheimer has proposed that a second wave of humans may have later dispersed through the Persian Gulf oases and the Zagros mountains into the Middle East. Alternatively, it may have come across the Sinai Peninsula into Asia, shortly after 50,000 years BP, resulting in the bulk of the human populations of Eurasia. It has been suggested that this second group possibly possessed a more sophisticated "big game hunting" tool technology and was less dependent on coastal food sources than the original group. Much of the evidence for the first group's expansion would have been destroyed by the rising sea levels at the end of each glacial maximum. The multiple dispersal model is contradicted by studies indicating that the populations of Eurasia and the populations of Southeast Asia and Oceania are all descended from the same mitochondrial DNA L3 lineages, which supports a single migration out of Africa that gave rise to all non-African populations.
On the basis of the early date of the Badoshan Iranian Aurignacian, Oppenheimer suggests that this second dispersal may have occurred with a pluvial period about 50,000 years before the present, with modern human big-game hunting cultures spreading up the Zagros Mountains, carrying modern human genomes from Oman, throughout the Persian Gulf, northward into Armenia and Anatolia, with a variant travelling south into Israel and to Cyrenaica.
Recent genetic evidence suggests that all modern non-African populations, including those of Eurasia and Oceania, are descended from a single wave that left Africa between 65,000 and 50,000 years ago.
Evidence
The evidence on which scientific accounts of human evolution are based comes from many fields of natural science. The main source of knowledge about the evolutionary process has traditionally been the fossil record, but since the development of genetics beginning in the 1970s, DNA analysis has come to occupy a place of comparable importance. The studies of ontogeny, phylogeny and especially evolutionary developmental biology of both vertebrates and invertebrates offer considerable insight into the evolution of all life, including how humans evolved. The specific study of the origin and life of humans is anthropology, particularly paleoanthropology which focuses on the study of human prehistory.
Evidence from genetics
The closest living relatives of humans are bonobos and chimpanzees (both genus Pan) and gorillas (genus Gorilla). With the sequencing of both the human and chimpanzee genomes, estimates of the similarity between their DNA sequences range between 95% and 99%. It is also noteworthy that mice share around 97.5% of their working DNA with humans. By using the molecular clock technique, which estimates the time required for a given number of divergent mutations to accumulate between two lineages, the approximate date of the split between lineages can be calculated.
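In its simplest form, the relationship can be written down directly. If mutations accumulate at a roughly constant per-site rate $\mu$ per year along each lineage, two lineages that split $T$ years ago will show a per-site divergence $d$ of about

$$ d \approx 2\mu T, \qquad \text{so} \qquad T \approx \frac{d}{2\mu}, $$

where the factor of two reflects that mutations accrue independently on both branches.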
The gibbons (family Hylobatidae) and then the orangutans (genus Pongo) were the first groups to split from the line leading to the hominins, including humans—followed by gorillas (genus Gorilla), and, ultimately, by the chimpanzees (genus Pan). The splitting date between hominin and chimpanzee lineages is placed by some between 4 and 8 million years ago, that is, during the Late Miocene. Speciation, however, appears to have been unusually drawn out. Initial divergence occurred sometime between 7 and 13 million years ago, but ongoing hybridization blurred the separation and delayed complete separation over several million years. Patterson (2006) dated the final divergence at 5 to 6 million years ago.
Genetic evidence has also been employed to compare species within the genus Homo, investigating gene flow between early modern humans and Neanderthals, and to enhance the understanding of the early human migration patterns and splitting dates. By comparing the parts of the genome that are not under natural selection and which therefore accumulate mutations at a fairly steady rate, it is possible to reconstruct a genetic tree incorporating the entire human species since the last shared ancestor.
Each time a certain mutation (single-nucleotide polymorphism) appears in an individual and is passed on to his or her descendants, a haplogroup is formed comprising all of the descendants of that individual who also carry the mutation. By comparing mitochondrial DNA, which is inherited only from the mother, geneticists have concluded that the last female common ancestor whose genetic marker is found in all modern humans, the so-called mitochondrial Eve, must have lived around 200,000 years ago.
Human evolutionary genetics studies how human genomes differ among individuals, the evolutionary past that gave rise to them, and their current effects. Differences between genomes have anthropological, medical and forensic implications and applications. Genetic data can provide important insight into human evolution.
In May 2023, scientists reported a more complicated pathway of human evolution than previously understood. According to the studies, humans evolved from different places and times in Africa, instead of from a single location and period of time.
Evidence from the fossil record
There is little fossil evidence for the divergence of the gorilla, chimpanzee and hominin lineages. The earliest fossils that have been proposed as members of the hominin lineage are Sahelanthropus tchadensis, dating from about 7 million years ago, Orrorin tugenensis, dating from about 6 million years ago, and Ardipithecus kadabba, dating to about 5.6 million years ago. Each of these has been argued to be a bipedal ancestor of later hominins but, in each case, the claims have been contested. It is also possible that one or more of these species are ancestors of another branch of African apes, or that they represent a shared ancestor between hominins and other apes.
The question of the relationship between these early fossil species and the hominin lineage is still to be resolved. From these early species, the australopithecines arose around 4 million years ago and diverged into robust (also called Paranthropus) and gracile branches, one of which (possibly A. garhi) probably went on to become ancestral to the genus Homo. The australopithecine species that is best represented in the fossil record is Australopithecus afarensis, with more than 100 fossil individuals represented, found from northern Ethiopia (such as the famous "Lucy"), to Kenya, and South Africa. Fossils of robust australopithecines such as A. robustus (or alternatively Paranthropus robustus) and A./P. boisei are particularly abundant in South Africa at sites such as Kromdraai and Swartkrans, and around Lake Turkana in Kenya.
The earliest member of the genus Homo is Homo habilis, which evolved around 2.8 million years ago. H. habilis is the first species for which we have positive evidence of the use of stone tools. They developed the Oldowan lithic technology, named after the Olduvai Gorge in which the first specimens were found. Some scientists consider Homo rudolfensis, a larger bodied group of fossils with similar morphology to the original H. habilis fossils, to be a separate species, while others consider them to be part of H. habilis—simply representing intraspecies variation, or perhaps even sexual dimorphism. The brains of these early hominins were about the same size as that of a chimpanzee, and their main adaptation was bipedal locomotion suited to terrestrial living.
During the next million years, a process of encephalization began and, by the arrival (about 1.9 million years ago) of H. erectus in the fossil record, cranial capacity had doubled. H. erectus were the first of the hominins to emigrate from Africa, and, from about 1.8 million years ago, this species spread through Africa, Asia, and Europe. One population of H. erectus, also sometimes classified as the separate species H. ergaster, remained in Africa and evolved into H. sapiens. It is believed that H. erectus and H. ergaster were the first to use fire and complex tools. In Eurasia, H. erectus evolved into species such as H. antecessor, H. heidelbergensis and H. neanderthalensis. The earliest fossils of anatomically modern humans are from the Middle Paleolithic, about 300,000–200,000 years ago, and include the Herto and Omo remains of Ethiopia, the Jebel Irhoud remains of Morocco, and the Florisbad remains of South Africa; later fossils, from the Skhul Cave in Israel and Southern Europe, begin around 90,000 years ago.
As modern humans spread out from Africa, they encountered other hominins such as H. neanderthalensis and the Denisovans, who may have evolved from populations of H. erectus that had left Africa around 2 million years ago. The nature of the interaction between early humans and these sister species has been a long-standing source of controversy, the question being whether humans replaced these earlier species or whether they were in fact similar enough to interbreed, in which case these earlier populations may have contributed genetic material to modern humans.
This migration out of Africa is estimated to have begun about 70–50,000 years BP and modern humans subsequently spread globally, replacing earlier hominins either through competition or hybridization. They inhabited Eurasia and Oceania by 40,000 years BP, and the Americas by at least 14,500 years BP.
Inter-species breeding
The hypothesis of interbreeding, also known as hybridization, admixture or hybrid-origin theory, has been discussed ever since the discovery of Neanderthal remains in the 19th century. The linear view of human evolution began to be abandoned in the 1970s as different species of humans were discovered that made the linear concept increasingly unlikely. In the 21st century, with the advent of molecular biology techniques and computerization, whole-genome sequencing of the Neanderthal and human genomes was performed, confirming recent admixture between different human species. In 2010, evidence based on molecular biology was published, revealing unambiguous examples of interbreeding between archaic and modern humans during the Middle Paleolithic and early Upper Paleolithic. It has been demonstrated that interbreeding happened in several independent events that included Neanderthals and Denisovans, as well as several unidentified hominins. Today, approximately 2% of DNA from all non-African populations (including Europeans, Asians, and Oceanians) is Neanderthal, with traces of Denisovan heritage. Also, 4–6% of the modern Melanesian genome is Denisovan. Comparisons of the human genome to the genomes of Neanderthals, Denisovans and apes can help identify features that set modern humans apart from other hominin species. In a 2016 comparative genomics study, a Harvard Medical School/UCLA research team produced a world map of the distribution of Denisovan and Neanderthal genes and made some predictions about where those genes may be impacting modern human biology.
For example, comparative studies in the mid-2010s found several traits related to neurological, immunological, developmental, and metabolic phenotypes that arose in archaic humans as adaptations to European and Asian environments and passed to modern humans through admixture with local hominins.
Although the narratives of human evolution are often contentious, several discoveries since 2010 show that human evolution should not be seen as a simple linear or branched progression, but as a mix of related species. In fact, genomic research has shown that hybridization between substantially diverged lineages is the rule, not the exception, in human evolution. Furthermore, it is argued that hybridization was an essential creative force in the emergence of modern humans.
Stone tools
Stone tools are first attested around 2.6 million years ago, when hominins in Eastern Africa used so-called core tools, choppers made out of round cores that had been split by simple strikes. This marks the beginning of the Paleolithic, or Old Stone Age; its end is taken to be the end of the last Ice Age, around 10,000 years ago. The Paleolithic is subdivided into the Lower Paleolithic (Early Stone Age), ending around 350,000–300,000 years ago, the Middle Paleolithic (Middle Stone Age), until 50,000–30,000 years ago, and the Upper Paleolithic (Late Stone Age), 50,000–10,000 years ago.
Archaeologists working in the Great Rift Valley in Kenya have discovered the oldest known stone tools in the world. Dated to around 3.3 million years ago, the implements are some 700,000 years older than stone tools from Ethiopia that previously held this distinction.
The period from 700,000 to 300,000 years ago is also known as the Acheulean, when H. ergaster (or erectus) made large stone hand axes out of flint and quartzite, at first quite rough (Early Acheulean), later "retouched" by additional, more-subtle strikes at the sides of the flakes. After 350,000 BP the more refined so-called Levallois technique was developed, a series of consecutive strikes, by which scrapers, slicers ("racloirs"), needles, and flattened needles were made. Finally, after about 50,000 BP, ever more refined and specialized flint tools were made by the Neanderthals and the immigrant Cro-Magnons (knives, blades, skimmers). Bone tools were also made by H. sapiens in Africa by 90,000–70,000 years ago and are also known from early H. sapiens sites in Eurasia by about 50,000 years ago.
Species list
This list is in chronological order across the table by genus. Some species/subspecies names are well-established, and some are less established – especially in genus Homo. Please see articles for more information.
| Biology and health sciences | Biology | null |
10339 | https://en.wikipedia.org/wiki/Electrochemical%20cell | Electrochemical cell | An electrochemical cell is a device that generates electrical energy from chemical reactions. Electrical energy can also be applied to these cells to cause chemical reactions to occur. Electrochemical cells that generate an electric current are called voltaic or galvanic cells and those that generate chemical reactions, via electrolysis for example, are called electrolytic cells.
Both galvanic and electrolytic cells can be thought of as having two half-cells: one hosting the oxidation reaction and the other the reduction reaction.
When one or more electrochemical cells are connected in parallel or series, they make a battery. Primary cells are single-use batteries.
Types of electrochemical cells
Galvanic cell
A galvanic cell (voltaic cell), named after Luigi Galvani and Alessandro Volta respectively, is an electrochemical cell that generates electrical energy from spontaneous redox reactions.
A wire connects two different metals (e.g. zinc and copper). Each metal is in a separate solution, often the aqueous sulphate or nitrate of that metal; more generally, any solution of a metal salt in water that conducts current will serve. A salt bridge or porous membrane connects the two solutions, maintaining electrical neutrality and avoiding charge accumulation. The difference between the metals' oxidation/reduction potentials drives the reaction until equilibrium is reached.
Key features:
spontaneous reaction
generates electric current
current flows through a wire, and ions flow through a salt bridge
anode (negative), cathode (positive)
Half cells
Galvanic cells consist of two half-cells. Each half-cell consists of an electrode and an electrolyte (both half-cells may use the same or different electrolytes).
The chemical reactions in the cell involve the electrolyte, electrodes, and/or an external substance (fuel cells may use hydrogen gas as a reactant). In a full electrochemical cell, species from one half-cell lose electrons (oxidation) to their electrode while species from the other half-cell gain electrons (reduction) from their electrode.
A salt bridge (e.g., filter paper soaked in KNO3, NaCl, or some other electrolyte) is used to ionically connect two half-cells with different electrolytes, while preventing the solutions from mixing and undergoing unwanted side reactions. An alternative to a salt bridge is to allow direct contact (and mixing) between the two half-cells, for example in simple electrolysis of water.
As electrons flow from one half-cell to the other through an external circuit, a difference in charge is established. If no ionic contact were provided, this charge difference would quickly prevent the further flow of electrons. A salt bridge allows the flow of negative or positive ions to maintain a steady-state charge distribution between the oxidation and reduction vessels, while keeping the contents otherwise separate. Other devices for achieving separation of solutions are porous pots and gelled solutions. A porous pot is used in the Bunsen cell.
Equilibrium reaction
Each half-cell has a characteristic voltage (depending on the metal and its characteristic reduction potential). Each reaction is undergoing an equilibrium reaction between different oxidation states of the ions: when equilibrium is reached, the cell cannot provide further voltage. In the half-cell performing oxidation, the closer the equilibrium lies to the ion/atom with the more positive oxidation state the more potential this reaction will provide. Likewise, in the reduction reaction, the closer the equilibrium lies to the ion/atom with the more negative oxidation state the higher the potential.
Cell potential
The cell potential can be predicted through the use of electrode potentials (the voltages of each half-cell). These half-cell potentials are defined relative to the assignment of 0 volts to the standard hydrogen electrode (SHE). (See table of standard electrode potentials). The difference in voltage between electrode potentials gives a prediction for the potential measured. When calculating the difference in voltage, one must first rewrite the half-cell reaction equations to obtain a balanced oxidation-reduction equation.
Reverse the reduction reaction with the smallest potential (to create an oxidation reaction/overall positive cell potential)
Half-reactions must be multiplied by integers to achieve electron balance.
Cell potentials have a possible range of roughly zero to 6 volts. Cells using water-based electrolytes are usually limited to cell potentials less than about 2.5 volts, because the powerful oxidizing and reducing agents that would be needed to produce a higher voltage react with water. Higher cell potentials are possible with cells using other solvents instead of water. For instance, lithium cells with a voltage of 3 volts are commonly available.
The cell potential depends on the concentration of the reactants, as well as their type. As the cell is discharged, the concentration of the reactants decreases and the cell potential also decreases.
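To make the arithmetic above concrete, the following Python sketch combines two standard reduction potentials into a standard cell potential and then applies the Nernst equation for non-standard concentrations. It is a minimal illustration using textbook values for the zinc–copper (Daniell) cell; the function names and the two-entry potential table are illustrative, not from any particular library.

```python
import math

# Standard reduction potentials in volts vs. the standard hydrogen electrode (SHE);
# textbook values for the Daniell cell couples.
E_STANDARD = {
    "Cu2+/Cu": +0.34,
    "Zn2+/Zn": -0.76,
}

R = 8.314       # gas constant, J/(mol K)
F = 96485.0     # Faraday constant, C/mol

def standard_cell_potential(cathode: str, anode: str) -> float:
    """E(cell) = E(cathode) - E(anode); positive for a spontaneous galvanic cell."""
    return E_STANDARD[cathode] - E_STANDARD[anode]

def nernst(e_standard: float, n_electrons: int, reaction_quotient: float,
           temperature: float = 298.15) -> float:
    """Nernst equation: E = E_standard - (RT / nF) * ln(Q)."""
    return e_standard - (R * temperature) / (n_electrons * F) * math.log(reaction_quotient)

e0 = standard_cell_potential("Cu2+/Cu", "Zn2+/Zn")       # 1.10 V
# As the cell discharges, [Zn2+] rises and [Cu2+] falls, so Q = [Zn2+]/[Cu2+] grows
# and the cell potential drops below the standard value.
print(f"E_standard = {e0:.2f} V")
print(f"E at Q = 100: {nernst(e0, n_electrons=2, reaction_quotient=100.0):.3f} V")
```

The printed potential falls from 1.10 V to about 1.04 V, matching the statement above that the potential decreases as the reactant concentrations fall during discharge.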
Electrolytic cell
An electrolytic cell is an electrochemical cell in which applied electrical energy drives a non-spontaneous redox reaction.
They are often used to decompose chemical compounds, in a process called electrolysis. (The Greek word "lysis" (λύσις) means "loosing" or "setting free".)
Important examples of electrolysis are the decomposition of water into hydrogen and oxygen, and of bauxite into aluminium and other chemicals. Electroplating (e.g. of copper, silver, nickel or chromium) is done using an electrolytic cell. Electrolysis is a technique that uses a direct electric current (DC).
The components of an electrolytic cell are:
an electrolyte: usually a solution of ions dissolved in water or another solvent; molten salts such as sodium chloride are also electrolytes.
two electrodes (a cathode and an anode) which are electrical terminals consisting of a suitable substance at which oxidation or reduction can take place, and maintained at two different electric potentials.
When driven by an external voltage (potential difference) applied to the electrodes, the ions in the electrolyte are attracted to the electrode with the opposite potential, where charge-transferring (also called faradaic or redox) reactions can take place. Only with a sufficient external voltage can an electrolytic cell decompose a normally stable, or inert chemical compound in the solution. Thus the electrical energy provided produces a chemical reaction which would not occur spontaneously otherwise.
Key features:
non-spontaneous reaction
driven by an externally supplied current
current flows through a wire, and ions flow through a salt bridge
anode (positive), cathode (negative)
Primary cell
A primary cell produces current by irreversible chemical reactions (e.g. small disposable batteries) and is not rechargeable.
They are used for their portability and low cost, despite their short lifetime.
Primary cells are made in a range of standard sizes to power small household appliances such as flashlights and portable radios.
As chemical reactions proceed in a primary cell, the battery uses up the chemicals that generate the power; when they are gone, the battery stops producing electricity.
Primary batteries make up about 90% of the $50 billion battery market, but secondary batteries have been gaining market share. About 15 billion primary batteries are thrown away worldwide every year, virtually all ending up in landfills. Due to the toxic heavy metals and strong acids or alkalis they contain, batteries are hazardous waste. Most municipalities classify them as such and require separate disposal. The energy needed to manufacture a battery is about 50 times greater than the energy it contains. Due to their high pollutant content compared to their small energy content, the primary battery is considered a wasteful, environmentally unfriendly technology. Mainly due to the increasing sales of wireless devices and cordless tools, which cannot be economically powered by primary batteries and come with integral rechargeable batteries, the secondary battery industry has high growth and has slowly been replacing the primary battery in high end products.
Secondary cell
A secondary cell produces current by reversible chemical reactions (e.g. the lead-acid car battery) and is rechargeable.
Lead-acid batteries are used in an automobile to start an engine and to operate the car's electrical accessories when the engine is not running. The alternator, once the car is running, recharges the battery.
It can perform as both a galvanic cell and an electrolytic cell. It is a convenient way to store electricity: when current flows one way, the levels of one or more chemicals build up (charging); while it is discharging, they are consumed and the resulting electromotive force can do work.
They are used for their high voltage, low costs, reliability, and long lifetime.
Fuel cell
A fuel cell is an electrochemical cell that reacts hydrogen fuel with oxygen or another oxidizing agent, to convert chemical energy to electricity.
Fuel cells are different from batteries in requiring a continuous source of fuel and oxygen (usually from air) to sustain the chemical reaction, whereas in a battery the chemical energy comes from chemicals already present in the battery.
Fuel cells can produce electricity continuously for as long as fuel and oxygen are supplied.
They are used for primary and backup power for commercial, industrial and residential buildings and in remote or inaccessible areas. They are also used to power fuel cell vehicles, including forklifts, automobiles, buses, boats, motorcycles and submarines.
Fuel cells are classified by the type of electrolyte they use and by the difference in startup time, which ranges from 1 second for proton-exchange membrane fuel cells (PEM fuel cells, or PEMFC) to 10 minutes for solid oxide fuel cells (SOFC).
There are many types of fuel cells, but they all consist of:
anode: At the anode a catalyst causes the fuel to undergo oxidation reactions that generate protons (positively charged hydrogen ions) and electrons. The protons flow from the anode to the cathode through the electrolyte after the reaction. At the same time, electrons are drawn from the anode to the cathode through an external circuit, producing direct current electricity.
cathode: At the cathode, another catalyst causes hydrogen ions, electrons, and oxygen to react, forming water.
electrolyte: Allows positively charged hydrogen ions (protons) to move between the two sides of the fuel cell.
A related technology is the flow battery, in which the fuel can be regenerated by recharging. Individual fuel cells produce relatively small electrical potentials, about 0.7 volts, so cells are "stacked", or placed in series, to create sufficient voltage to meet an application's requirements. In addition to electricity, fuel cells produce water, heat and, depending on the fuel source, very small amounts of nitrogen dioxide and other emissions. The energy efficiency of a fuel cell is generally between 40 and 60%; however, if waste heat is captured in a cogeneration scheme, efficiencies of up to 85% can be obtained.
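Since a single cell supplies only about 0.7 V, sizing a stack is simple division. The sketch below shows the series-stacking arithmetic described above; it is a minimal illustration with made-up target voltages rather than figures for any real product.

```python
import math

CELL_VOLTAGE = 0.7   # nominal potential of one fuel cell, volts (figure from the text)

def cells_in_series(target_voltage: float) -> int:
    """Smallest number of series-connected cells whose summed potential meets the target."""
    return math.ceil(target_voltage / CELL_VOLTAGE)

# Hypothetical bus voltages: small equipment, a telecom supply, and an EV traction bus.
for target in (12.0, 48.0, 400.0):
    n = cells_in_series(target)
    print(f"{target:6.1f} V target -> {n:4d} cells ({n * CELL_VOLTAGE:.1f} V nominal)")
```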
In 2022, the global fuel cell market was estimated to be $6.3 billion, and is expected to increase by 19.9% by 2030. Many countries are attempting to enter the market by setting renewable energy GW goals.
| Physical sciences | Electrochemistry | Chemistry |
10356 | https://en.wikipedia.org/wiki/Endothermic%20process | Endothermic process | An endothermic process is a chemical or physical process that absorbs heat from its surroundings. In terms of thermodynamics, it is a thermodynamic process with an increase in the enthalpy H (or internal energy U) of the system. In an endothermic process, the heat that a system absorbs is thermal energy transfer into the system. Thus, an endothermic reaction generally leads to a decrease in the temperature of the system and, as heat is drawn in, of its surroundings.
The term was coined by 19th-century French chemist Marcellin Berthelot. The term endothermic comes from the Greek ἔνδον (endon) meaning 'within' and θερμ- (therm) meaning 'hot' or 'warm'.
An endothermic process may be a chemical process, such as dissolving ammonium nitrate (NH4NO3) in water (H2O), or a physical process, such as the melting of ice cubes.
The opposite of an endothermic process is an exothermic process, one that releases or "gives out" energy, usually in the form of heat and sometimes as electrical energy. Thus, endo in endothermic refers to energy or heat going in, and exo in exothermic refers to energy or heat going out. In each term (endothermic and exothermic) the prefix refers to where heat (or electrical energy) goes as the process occurs.
In chemistry
Due to bonds breaking and forming during various processes (changes in state, chemical reactions), there is usually a change in energy. If the energy of the forming bonds is greater than the energy of the breaking bonds, then energy is released. This is known as an exothermic reaction. However, if more energy is needed to break the bonds than the energy being released, energy is taken up. Therefore, it is an endothermic reaction.
Details
Whether a process can occur spontaneously depends not only on the enthalpy change ΔH but also on the entropy change ΔS and the absolute temperature T. If a process is a spontaneous process at a certain temperature, the products have a lower Gibbs free energy G = H − TS than the reactants (an exergonic process), even if the enthalpy of the products is higher. Thus, an endothermic process usually requires a favorable entropy increase (ΔS > 0) in the system that overcomes the unfavorable increase in enthalpy, so that still ΔG = ΔH − TΔS < 0. While endothermic phase transitions into more disordered states of higher entropy, e.g. melting and vaporization, are common, spontaneous chemical processes at moderate temperatures are rarely endothermic. The enthalpy increase in a hypothetical strongly endothermic process usually results in ΔG = ΔH − TΔS > 0, which means that the process will not occur (unless driven by electrical or photon energy). An example of an endothermic and exergonic process is
C6H12O6 + 6 H2O -> 12 H2 + 6 CO2.
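A minimal numeric sketch of the spontaneity criterion ΔG = ΔH − TΔS discussed above, using approximate literature values for the endothermic dissolution of ammonium nitrate (ΔH ≈ +25.7 kJ/mol, ΔS ≈ +108 J/(mol·K)); the numbers are rounded textbook-style figures quoted only for illustration.

```python
def gibbs_free_energy_change(dH: float, dS: float, T: float) -> float:
    """Gibbs criterion: dG = dH - T*dS, with dH in J/mol, dS in J/(mol K), T in kelvin."""
    return dH - T * dS

# Approximate values for dissolving NH4NO3 in water: endothermic but entropy-driven.
dH = 25_700.0   # J/mol absorbed from the surroundings
dS = 108.0      # J/(mol K) gained on dissolution

for T in (273.15, 298.15, 323.15):
    dG = gibbs_free_energy_change(dH, dS, T)
    verdict = "spontaneous" if dG < 0 else "non-spontaneous"
    print(f"T = {T:6.2f} K: dG = {dG / 1000:+6.2f} kJ/mol -> {verdict}")
```

Despite the positive ΔH, the −TΔS term dominates at all three temperatures, so ΔG < 0 and the dissolution proceeds, cooling its surroundings.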
Examples
Evaporation
Sublimation
Cracking of alkanes
Thermal decomposition
Hydrolysis
Nucleosynthesis of elements heavier than nickel in stellar cores
High-energy neutrons can produce tritium from lithium-7 in an endothermic process, consuming 2.466 MeV. This was discovered when the 1954 Castle Bravo nuclear test produced an unexpectedly high yield.
Nuclear fusion of elements heavier than iron in supernovae
Dissolving together barium hydroxide and ammonium chloride
Distinction between endothermic and endotherm
The terms "endothermic" and "endotherm" are both derived from Greek "within" and "heat", but depending on context, they can have very different meanings.
In physics, thermodynamics applies to processes involving a system and its surroundings, and the term "endothermic" is used to describe a reaction where energy is taken "(with)in" by the system (vs. an "exothermic" reaction, which releases energy "outwards").
In biology, thermoregulation is the ability of an organism to maintain its body temperature, and the term "endotherm" refers to an organism that can do so from "within" by using the heat released by its internal bodily functions (vs. an "ectotherm", which relies on external, environmental heat sources) to maintain an adequate temperature.
| Physical sciences | Thermodynamics | Physics |
10363 | https://en.wikipedia.org/wiki/European%20Space%20Agency | European Space Agency | The European Space Agency (ESA) is a 23-member intergovernmental body devoted to space exploration. Founded in 1975 and headquartered in Paris, it had a staff of around 2,547 people globally as of 2023. Its 2024 annual budget was €7.79 billion.
The ESA's space flight programme includes human spaceflight (mainly through participation in the International Space Station program); the launch and operation of crewless exploration missions to other planets (such as Mars) and the Moon; Earth observation, science and telecommunication; designing launch vehicles; and maintaining a major spaceport, the Guiana Space Centre at Kourou (French Guiana), France. The main European launch vehicle Ariane 6 will be operated through Arianespace with the ESA sharing in the costs of launching and further developing this launch vehicle. The agency is also working with NASA to manufacture the Orion spacecraft service module that flies on the Space Launch System.
History
Foundation
After World War II, many European scientists left Western Europe in order to work with the United States. Although the 1950s boom made it possible for Western European countries to invest in research and specifically in space-related activities, Western European scientists realised solely national projects would not be able to compete with the two main superpowers. In 1958, only months after the Sputnik shock, Edoardo Amaldi (Italy) and Pierre Auger (France), two prominent members of the Western European scientific community, met to discuss the foundation of a common Western European space agency. The meeting was attended by scientific representatives from eight countries.
The Western European nations decided to have two agencies: one concerned with developing a launch system, ELDO (European Launcher Development Organisation), and the other the precursor of the European Space Agency, ESRO (European Space Research Organisation). The latter was established on 20 March 1964 by an agreement signed on 14 June 1962. From 1968 to 1972, ESRO launched seven research satellites, but ELDO was not able to deliver a launch vehicle. Both agencies struggled with the underfunding and diverging interests of their participants.
The ESA in its current form was founded with the ESA Convention in 1975, when ESRO was merged with ELDO. The ESA had ten founding member states: Belgium, Denmark, France, West Germany, Italy, the Netherlands, Spain, Sweden, Switzerland, and the United Kingdom. These signed the ESA Convention in 1975 and deposited the instruments of ratification by 1980, when the convention came into force. During this interval the agency functioned in a de facto fashion. The ESA launched its first major scientific mission in 1975, Cos-B, a space probe monitoring gamma-ray emissions in the universe, which was first worked on by ESRO.
Later activities
The ESA collaborated with NASA on the International Ultraviolet Explorer (IUE), the world's first high-orbit telescope, which was launched in 1978 and operated successfully for 18 years. A number of successful Earth-orbit projects followed, and in 1986 the ESA began Giotto, its first deep-space mission, to study the comets Halley and Grigg–Skjellerup. Hipparcos, a star-mapping mission, was launched in 1989 and in the 1990s SOHO, Ulysses and the Hubble Space Telescope were all jointly carried out with NASA. Later scientific missions in cooperation with NASA include the Cassini–Huygens space probe, to which the ESA contributed by building the Titan landing module Huygens.
As the successor of ELDO, the ESA has also constructed rockets for scientific and commercial payloads. Ariane 1, launched in 1979, carried mostly commercial payloads into orbit from 1984 onward. The next two versions of the Ariane rocket were intermediate stages in the development of a more advanced launch system, the Ariane 4, which operated between 1988 and 2003 and established the ESA as the world leader in commercial space launches in the 1990s. Although the succeeding Ariane 5 experienced a failure on its first flight, it has since firmly established itself within the heavily competitive commercial space launch market with 112 successful launches until 2021. The successor launch vehicle, the Ariane 6, is under development and had a successful long-firing engine test in November 2023. The ESA plans for the Ariane 6 to launch in June or July 2024.
The beginning of the new millennium saw the ESA become, along with agencies like NASA, JAXA, ISRO, the CSA and Roscosmos, one of the major participants in scientific space research. Although the ESA had relied on co-operation with NASA in previous decades, especially the 1990s, changed circumstances (such as tough legal restrictions on information sharing by the United States military) led to decisions to rely more on itself and on co-operation with Russia. A 2011 press issue thus stated:
Notable ESA programmes include SMART-1, a probe testing cutting-edge space propulsion technology, the Mars Express and Venus Express missions, as well as the development of the Ariane 5 rocket and its role in the ISS partnership. The ESA maintains its scientific and research projects mainly for astronomy-space missions such as Corot, launched on 27 December 2006, a milestone in the search for exoplanets.
On 21 January 2019, ArianeGroup and Arianespace announced a one-year contract with the ESA to study and prepare for a mission to mine the Moon for lunar regolith.
In 2021 the ESA ministerial council agreed to the "Matosinhos manifesto" which set three priority areas (referred to as accelerators) "space for a green future, a rapid and resilient crisis response, and the protection of space assets", and two further high visibility projects (referred to as inspirators) an icy moon sample return mission; and human space exploration. In the same year the recruitment process began for the 2022 European Space Agency Astronaut Group.
1 July 2023 saw the launch of the Euclid spacecraft, developed jointly with the Euclid Consortium. After 10 years of planning and building, it is designed to better understand dark energy and dark matter by accurately measuring the accelerating expansion of the universe.
Facilities
The agency's facilities date back to ESRO and are deliberately distributed among various countries and areas. The most important are the following centres:
ESA headquarters in Paris, France;
ESA science missions are based at ESTEC in Noordwijk, Netherlands;
Earth Observation missions at the ESA Centre for Earth Observation in Frascati, Italy;
ESA Mission Control (ESOC) is in Darmstadt, Germany;
The European Astronaut Centre (EAC) that trains astronauts for future missions is situated in Cologne, Germany;
The European Centre for Space Applications and Telecommunications (ECSAT), a research institute created in 2009, is located in Harwell, England, United Kingdom;
The European Space Astronomy Centre (ESAC) is located in Villanueva de la Cañada, Madrid, Spain.
The European Space Security and Education Centre (ESEC), located in Redu, Belgium;
The ESTRACK tracking and deep space communication network.
Many other facilities are operated by national space agencies in close collaboration with ESA:
Esrange near Kiruna in Sweden;
Guiana Space Centre in Kourou, France;
Toulouse Space Centre, France;
Institute of Space Propulsion in Lampoldshausen, Germany;
Columbus Control Centre in Oberpfaffenhofen, Germany.
Mission
The treaty establishing the European Space Agency reads:
The ESA is responsible for setting a unified space and related industrial policy, recommending space objectives to the member states, and integrating national programs like satellite development, into the European program as much as possible.
Jean-Jacques Dordain – ESA's Director General (2003–2015) – outlined the European Space Agency's mission in a 2003 interview:
Activities and programmes
The ESA describes its work in two overlapping ways:
For the general public, the various fields of work are described as "Activities".
Budgets are organised as "Programmes".
These are either mandatory or optional.
Activities
According to the ESA website, the activities are:
Observing the Earth
Human and Robotic Exploration
Launchers
Navigation
Space Science
Space Engineering & Technology
Operations
Telecommunications & Integrated Applications
Preparing for the Future
Space for Climate
Programmes
Mandatory
Every member country (known as a 'Member State') must contribute to these programmes:
European Space Agency Science Programme (a long-term programme of space science missions)
Technology Development Element Programme
Science Core Technology Programme
General Study Programme
European Component Initiative
Optional
Depending on their individual choices, countries can contribute to the following programmes, becoming 'Participating States'.
Employment
As of 2023, the ESA employs around 2,547 staff members, along with thousands of contractors. New employees are initially contracted for a four-year term that can be extended, up to the organization's retirement age of 63. According to the ESA's documents, staff can receive a range of benefits, such as financial childcare support, retirement plans, and financial help when relocating. The ESA also prevents employees from disclosing any private documents or correspondence to outside parties. Ars Technica's 2023 report, which contained testimonies of 18 people, suggested that there is widespread harassment of employees, and especially of contractors, by management. Since the ESA is an international organization, unaffiliated with any single nation, any form of legal action is difficult to raise against the organization.
Member states, funding and budget
Membership and contribution to the ESA
Member states participate to varying degrees in both mandatory space programmes and those that are optional. The mandatory programmes made up 25% of total expenditures, while optional space programmes made up the other 75%. The ESA has traditionally implemented a policy of "georeturn", where funds that ESA member states provide to the ESA "are returned in the form of contracts to companies in those countries."
By 2015, the ESA was an intergovernmental organisation of 22 member states.
The 2008 ESA budget amounted to €3.0 billion whilst the 2009 budget amounted to €3.6 billion. The total budget amounted to about €3.7 billion in 2010, €3.99 billion in 2011, €4.02 billion in 2012, €4.28 billion in 2013, €4.10 billion in 2014, €4.43 billion in 2015, €5.25 billion in 2016, €5.75 billion in 2017, €5.60 billion in 2018, €5.72 billion in 2019, €6.68 billion in 2020, €6.49 billion in 2021, €7.15 billion in 2022, €7.46 billion in 2023 and €7.79 billion in 2024.
English and French are the two official languages of the ESA. Additionally, official documents are also provided in German and documents regarding the Spacelab have been also provided in Italian. If found appropriate, the agency may conduct its correspondence in any language of a member state.
The following table lists all the member states and adjunct members, their ESA convention ratification dates, and their contributions as of 2024:
Non-full member states
Previously, Austria, Norway, Finland and Slovenia were associated members, all of which later joined the ESA as full members. Since January 2025 there have been four associate members: Latvia, Lithuania, Slovakia and Canada. The three European associate members have shown interest in full membership and may apply within the next few years.
Latvia
Latvia became the second current associated member on 30 June 2020, when the Association Agreement was signed by ESA Director General Jan Wörner and the Minister of Education and Science of Latvia, Ilga Šuplinska, in Riga. The Saeima ratified it on 27 July.
Lithuania
In May 2021, Lithuania became the third current associated member. As a consequence its citizens became eligible to apply to the 2022 ESA Astronaut group, applications for which were scheduled to close one week later. The deadline was therefore extended by three weeks to allow Lithuanians a fair chance to apply.
Slovakia
Slovakia's Associate membership came into effect on 13 October 2022, for an initial duration of seven years. The Association Agreement supersedes the European Cooperating State (ECS) Agreement, which entered into force upon Slovakia's subscription to the Plan for European Cooperating States Charter on 4 February 2016, a scheme introduced at ESA in 2001. The ECS Agreement was subsequently extended until 3 August 2022.
Canada
Since 1 January 1979, Canada has had the special status of a Cooperating State within the ESA. By virtue of this accord, the Canadian Space Agency takes part in the ESA's deliberative bodies and decision-making and also in the ESA's programmes and activities. Canadian firms can bid for and receive contracts to work on programmes. The accord has a provision ensuring a fair industrial return to Canada. The most recent Cooperation Agreement was signed on 15 December 2010 with a term extending to 2020. For 2014, Canada's annual assessed contribution to the ESA general budget was €6,059,449 (CAD$8,559,050). For 2017, Canada has increased its annual contribution to €21,600,000 (CAD$30,000,000).
Budget appropriation and allocation
The ESA is funded from annual contributions by national governments of members as well as from an annual contribution by the European Union (EU).
The budget of the ESA was €5.250 billion in 2016. Every 3–4 years, ESA member states agree on a budget plan for several years at an ESA member states conference. This plan can be amended in future years; however, it provides the major guideline for the ESA for several years.
Countries typically have their own space programmes that differ in how they operate organisationally and financially with the ESA. For example, the French space agency CNES has a total budget of €2,015 million, of which €755 million is paid as direct financial contribution to the ESA. Several space-related projects are joint projects between national space agencies and the ESA (e.g. COROT). Also, the ESA is not the only European governmental space organisation (for example European Union Satellite Centre and the European Union Space Programme Agency).
Enlargement
After the decision of the ESA Council of 21/22 March 2001, the procedure for the accession of European states was detailed in the document titled "The Plan for European Co-operating States (PECS)". Nations that want to become a full member of the ESA do so in three stages. First a Cooperation Agreement is signed between the country and the ESA. In this stage, the country has very limited financial responsibilities. If a country wants to co-operate more fully with the ESA, it signs a European Cooperating State (ECS) Agreement, although to be a candidate for such an agreement, a country must be European. The ECS Agreement makes companies based in the country eligible for participation in ESA procurements. The country can also participate in all ESA programmes, except for the Basic Technology Research Programme. While the financial contribution of the country concerned increases, it is still much lower than that of a full member state. The agreement is normally followed by a Plan For European Cooperating State (or PECS Charter). This is a 5-year programme of basic research and development activities aimed at improving the nation's space industry capacity. At the end of the 5-year period, the country can either begin negotiations to become a full member state or an associated state, or sign a new PECS Charter. Many countries, most of which joined the EU in 2004 or 2007, have started to co-operate with the ESA on various levels:
During the Ministerial Meeting in December 2014, ESA ministers approved a resolution calling for discussions to begin with Israel, Australia and South Africa on future association agreements. The ministers noted that "concrete cooperation is at an advanced stage" with these nations and that "prospects for mutual benefits are existing".
A separate space exploration strategy resolution calls for further co-operation with the United States, Russia and China on "LEO exploration, including a continuation of ISS cooperation and the development of a robust plan for the coordinated use of space transportation vehicles and systems for exploration purposes, participation in robotic missions for the exploration of the Moon, the robotic exploration of Mars, leading to a broad Mars Sample Return mission in which Europe should be involved as a full partner, and human missions beyond LEO in the longer term."
In August 2019, the ESA and the Australian Space Agency signed a joint statement of intent "to explore deeper cooperation and identify projects in a range of areas including deep space, communications, navigation, remote asset management, data analytics and mission support." Details of the cooperation were laid out in a framework agreement signed by the two entities.
On 17 November 2020, ESA signed a memorandum of understanding (MOU) with the South African National Space Agency (SANSA). SANSA CEO Dr. Valanathan Munsami tweeted: "Today saw another landmark event for SANSA with the signing of an MoU with the ESA. This builds on initiatives that we have been discussing for a while already and which gives effect to these. Thanks Jan for your hand of friendship and making this possible."
Launch vehicles
The ESA currently has two operational launch vehicles: Vega-C and Ariane 6. Rocket launches are carried out by Arianespace, which has 23 shareholders representing the industry that manufactures the Ariane 5 as well as CNES, at the ESA's Guiana Space Centre. Because many communication satellites have equatorial orbits, launches from French Guiana are able to take larger payloads into space than from spaceports at higher latitudes. In addition, equatorial launches give spacecraft an extra 'push' of nearly 500 m/s due to the higher rotational velocity of the Earth at the equator compared to near the Earth's poles, where rotational velocity approaches zero.
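The size of that equatorial 'push' follows directly from Earth's rotation: the surface moves eastward at v = 2πR·cos(latitude)/T, with R the equatorial radius and T the sidereal day. A small sketch follows; the physical constants are standard, the launch-site latitudes are approximate, and the simple cos(latitude) scaling ignores Earth's slight oblateness.

```python
import math

EQUATORIAL_RADIUS_M = 6_378_137.0   # Earth's equatorial radius, metres
SIDEREAL_DAY_S = 86_164.1           # one rotation relative to the stars, seconds

def surface_rotation_speed(latitude_deg: float) -> float:
    """Eastward speed of Earth's surface at the given latitude, in m/s."""
    circumference = 2.0 * math.pi * EQUATORIAL_RADIUS_M * math.cos(math.radians(latitude_deg))
    return circumference / SIDEREAL_DAY_S

for site, lat in [("Kourou (French Guiana)", 5.2),
                  ("Cape Canaveral", 28.5),
                  ("Baikonur", 45.9)]:
    print(f"{site:24s} {surface_rotation_speed(lat):5.0f} m/s")
```

Kourou comes out near 463 m/s, consistent with the 'nearly 500 m/s' figure quoted above and noticeably more than higher-latitude sites.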
Ariane 6
Ariane 6 is a heavy lift expendable launch vehicle developed by Arianespace. The Ariane 6 entered into its inaugural flight campaign on 26 April 2024 with the flight conducted on 9 July 2024.
Vega-C
Vega is the ESA's carrier for small satellites, developed by seven ESA members led by Italy. It is capable of carrying a payload with a mass of between 300 and 1500 kg to an altitude of 700 km, for low polar orbit. Its maiden launch from Kourou was on 13 February 2012. Vega began full commercial exploitation in December 2015.
The rocket has three solid propulsion stages and a liquid propulsion upper stage (the AVUM) for accurate orbital insertion and the ability to place multiple payloads into different orbits.
A larger version of the Vega launcher, Vega-C, had its first flight in July 2022. The new evolution of the rocket incorporates a larger first stage booster, the P120C replacing the P80, an upgraded Zefiro second stage, and the AVUM+ upper stage. This new variant enables larger single payloads, dual payloads, return missions, and orbital transfer capabilities.
Ariane launch vehicle development funding
Historically, the Ariane family rockets have been funded primarily "with money contributed by ESA governments seeking to participate in the program rather than through competitive industry bids. This [has meant that] governments commit multiyear funding to the development with the expectation of a roughly 90% return on investment in the form of industrial workshare." ESA is proposing changes to this scheme by moving to competitive bids for the development of the Ariane 6.
Future rocket development
Future projects include the Prometheus reusable engine technology demonstrator, Phoebus (an upgraded second stage for Ariane 6), and Themis (a reusable first stage).
Human space flight
Formation and development
At the time the ESA was formed, its main goals did not encompass human space flight; rather it considered itself to be primarily a scientific research organisation for uncrewed space exploration, in contrast to its American and Soviet counterparts. It is therefore not surprising that the first non-Soviet European in space was not an ESA astronaut on a European spacecraft: it was the Czechoslovak Vladimír Remek who in 1978 became the first person in space who was neither Soviet nor American (the first man in space being Yuri Gagarin of the Soviet Union), on a Soviet Soyuz spacecraft, followed by the Pole Mirosław Hermaszewski and the East German Sigmund Jähn in the same year. This Soviet co-operation programme, known as Intercosmos, primarily involved the participation of Eastern bloc countries. In 1982, however, Jean-Loup Chrétien became the first non-Communist Bloc astronaut on a flight to the Soviet Salyut 7 space station.
Because Chrétien did not officially fly into space as an ESA astronaut, but rather as a member of the French CNES astronaut corps, the German Ulf Merbold is considered the first ESA astronaut to fly into space. He participated in the STS-9 Space Shuttle mission that included the first use of the European-built Spacelab in 1983. STS-9 marked the beginning of an extensive ESA/NASA joint partnership that included dozens of space flights of ESA astronauts in the following years. Some of these missions with Spacelab were fully funded and organisationally and scientifically controlled by the ESA (such as two missions by Germany and one by Japan) with European astronauts as full crew members rather than guests on board. Beside paying for Spacelab flights and seats on the shuttles, the ESA continued its human space flight co-operation with the Soviet Union and later Russia, including numerous visits to Mir.
During the latter half of the 1980s, European human space flights changed from being the exception to routine and therefore, in 1990, the European Astronaut Centre in Cologne, Germany was established. It selects and trains prospective astronauts and is responsible for the co-ordination with international partners, especially with regard to the International Space Station. As of 2006, the ESA astronaut corps officially included twelve members, including nationals from most large European countries except the United Kingdom.
In 2008, the ESA started to recruit new astronauts so that final selection would be due in spring 2009. Almost 10,000 people registered as astronaut candidates before registration ended in June 2008. 8,413 fulfilled the initial application criteria. Of the applicants, 918 were chosen to take part in the first stage of psychological testing, which narrowed down the field to 192. After two-stage psychological tests and medical evaluation in early 2009, as well as formal interviews, six new members of the European Astronaut Corps were selected – five men and one woman.
Crew vehicles
In the 1980s, France pressed for an independent European crew launch vehicle. Around 1978, it was decided to pursue a reusable spacecraft model and starting in November 1987 a project to create a mini-shuttle by the name of Hermes was introduced. The craft was comparable to early proposals for the Space Shuttle and consisted of a small reusable spaceship that would carry 3 to 5 astronauts and 3 to 4 metric tons of payload for scientific experiments. With a total maximum weight of 21 metric tons it would have been launched on the Ariane 5 rocket, which was being developed at that time. It was planned solely for use in low Earth orbit space flights. The planning and pre-development phase concluded in 1991; the production phase was never fully implemented because at that time the political landscape had changed significantly. With the fall of the Soviet Union, the ESA looked forward to co-operation with Russia to build a next-generation space vehicle. Thus the Hermes programme was cancelled in 1995 after about 3 billion dollars had been spent. The Columbus space station programme had a similar fate.
In the 21st century, the ESA started new programmes in order to create its own crew vehicles, most notable among its various projects and proposals is Hopper, whose prototype by EADS, called Phoenix, has already been tested. While projects such as Hopper are neither concrete nor to be realised within the next decade, other possibilities for human spaceflight in co-operation with the Russian Space Agency have emerged. Following talks with the Russian Space Agency in 2004 and June 2005, a co-operation between the ESA and the Russian Space Agency was announced to jointly work on the Russian-designed Kliper, a reusable spacecraft that would be available for space travel beyond LEO (e.g. the moon or even Mars). It was speculated that Europe would finance part of it. A €50 million participation study for Kliper, which was expected to be approved in December 2005, was finally not approved by ESA member states. The Russian state tender for the project was subsequently cancelled in 2006.
In June 2006, ESA member states granted €15 million to the Crew Space Transportation System (CSTS) study, a two-year study to design a spacecraft capable of going beyond low Earth orbit based on the current Soyuz design. This project was pursued with Roskosmos instead of the cancelled Kliper proposal. A decision on the actual implementation and construction of the CSTS spacecraft was contemplated for 2008.
In mid-2009 EADS Astrium was awarded a €21 million study into designing a crew vehicle based on the European ATV which is believed to now be the basis of the Advanced Crew Transportation System design.
In November 2012, the ESA decided to join NASA's Orion programme. The ATV would form the basis of a propulsion unit for NASA's new crewed spacecraft. The ESA may also seek to work with NASA on Orion's launch system as well in order to secure a seat on the spacecraft for its own astronauts.
In September 2014, the ESA signed an agreement with Sierra Nevada Corporation for co-operation in Dream Chaser project. Further studies on the Dream Chaser for European Utilization or DC4EU project were funded, including the feasibility of launching a Europeanised Dream Chaser onboard Ariane 5.
Cooperation with other countries and organisations
The ESA has signed co-operation agreements with the following states that currently neither plan to integrate as tightly with ESA institutions as Canada, nor envision future membership of the ESA: Argentina, Brazil, China, India (for the Chandrayaan mission), Russia and Turkey.
Additionally, the ESA has joint projects with the EUSPA of the European Union, NASA of the United States and is participating in the International Space Station together with the United States (NASA), Russia and Japan (JAXA).
National space organisations of member states
The Centre National d'Études Spatiales (CNES) (National Centre for Space Study) is the French government space agency (administratively, a "public establishment of industrial and commercial character"). Its headquarters are in central Paris. CNES is the main participant on the Ariane project. Indeed, CNES designed and tested all Ariane family rockets (mainly from its centre in Évry near Paris)
The UK Space Agency is a partnership of the UK government departments which are active in space. Through the UK Space Agency, the partners provide delegates to represent the UK on the various ESA governing bodies. Each partner funds its own programme.
The Italian Space Agency (Agenzia Spaziale Italiana or ASI) was founded in 1988 to promote, co-ordinate and conduct space activities in Italy. Operating under the Ministry of the Universities and of Scientific and Technological Research, the agency cooperates with numerous entities active in space technology and with the president of the Council of Ministers. Internationally, the ASI provides Italy's delegation to the Council of the European Space Agency and to its subordinate bodies.
The German Aerospace Center (DLR) (German: Deutsches Zentrum für Luft- und Raumfahrt e. V.) is the national research centre for aviation and space flight of the Federal Republic of Germany and of other member states in the Helmholtz Association. Its extensive research and development projects are included in national and international cooperative programmes. In addition to its research projects, the centre is the assigned space agency of Germany bestowing headquarters of German space flight activities and its associates.
The Instituto Nacional de Técnica Aeroespacial (INTA) (National Institute for Aerospace Technique) is a Public Research Organisation specialised in aerospace research and technology development in Spain. Among other functions, it serves as a platform for space research and acts as a significant testing facility for the aeronautic and space sector in the country.
NASA
The ESA has a long history of collaboration with NASA. Since the ESA's astronaut corps was formed, the Space Shuttle was the primary launch vehicle used by the ESA's astronauts to get into space through partnership programmes with NASA. In the 1980s and 1990s, the Spacelab programme was an ESA-NASA joint research programme that had the ESA develop and manufacture orbital labs for the Space Shuttle, for several flights on which the ESA participated with astronauts in experiments.
In robotic science and exploration missions, NASA has been the ESA's main partner. Cassini–Huygens was a joint NASA-ESA mission, along with the Infrared Space Observatory, INTEGRAL, SOHO, and others. Also, the Hubble Space Telescope is a joint project of NASA and the ESA. Future ESA-NASA joint projects include the James Webb Space Telescope and the proposed Laser Interferometer Space Antenna. NASA has also supported the ESA's proposed MarcoPolo-R asteroid sample-return mission. NASA and the ESA will also likely join for a Mars sample-return mission. In October 2020, the ESA entered into a memorandum of understanding (MOU) with NASA to work together on the Artemis program, which will provide an orbiting Lunar Gateway and also accomplish the first crewed lunar landing in 50 years, whose team will include the first woman on the Moon. Astronaut selection announcements are expected within two years of the 2024 scheduled launch date. The ESA also purchases seats on the NASA-operated Commercial Crew Program. The first ESA astronaut to fly on a Commercial Crew Program mission was Thomas Pesquet, who launched aboard Crew Dragon Endeavour on the Crew-2 mission. The ESA also has seats on Crew-3 with Matthias Maurer and Crew-4 with Samantha Cristoforetti.
SpaceX
In 2023, following the successful launch of the Euclid telescope in July on a Falcon 9 rocket, the ESA approached SpaceX about launching four Galileo navigation satellites on two Falcon 9 rockets in 2024; however, this would require approval from the European Commission and all member states of the European Union to proceed.
Cooperation with other space agencies
Since China has invested more money into space activities, the Chinese Space Agency has sought international partnerships. Besides the Russian Space Agency, the ESA is one of its most important partners. The two space agencies cooperated in the development of the Double Star Mission. In 2017, the ESA sent two astronauts to China for two weeks of sea survival training with Chinese astronauts in Yantai, Shandong.
The ESA entered into a major joint venture with Russia in the form of the CSTS, the preparation of French Guiana spaceport for launches of Soyuz-2 rockets and other projects. With India, the ESA agreed to send instruments into space aboard the ISRO's Chandrayaan-1 in 2008. The ESA is also co-operating with Japan, the most notable current project in collaboration with JAXA is the BepiColombo mission to Mercury.
International Space Station
With regard to the International Space Station (ISS), the ESA is not represented by all of its member states: 11 of the 22 ESA member states currently participate in the project: Belgium, Denmark, France, Germany, Italy, Netherlands, Norway, Spain, Sweden, Switzerland and United Kingdom. Austria, Finland and Ireland chose not to participate, because of lack of interest or concerns about the expense of the project. Portugal, Luxembourg, Greece, the Czech Republic, Romania, Poland, Estonia and Hungary joined ESA after the agreement had been signed.
The ESA takes part in the construction and operation of the ISS, with contributions such as Columbus, a science laboratory module that was brought into orbit by NASA's STS-122 Space Shuttle mission, and the Cupola observatory module that was completed in July 2005 by Alenia Spazio for the ESA. The current estimates for the ISS are approaching €100 billion in total (development, construction and 10 years of maintaining the station) of which the ESA has committed to paying €8 billion. About 90% of the costs of the ESA's ISS share will be contributed by Germany (41%), France (28%) and Italy (20%). German ESA astronaut Thomas Reiter was the first long-term ISS crew member.
The ESA has developed the Automated Transfer Vehicle (ATV) for ISS resupply, with a cargo capacity of several tonnes per vehicle. The first ATV, Jules Verne, was launched on 9 March 2008 and on 3 April 2008 successfully docked with the ISS. This manoeuvre, considered a major technical feat, involved using automated systems to allow the ATV to track the ISS, moving at 27,000 km/h, and attach itself with an accuracy of 2 cm. Five vehicles were launched before the program ended with the launch of the fifth ATV, Georges Lemaître, in 2014.
As of 2020, the spacecraft establishing supply links to the ISS are the Russian Progress and Soyuz, the Japanese Kounotori (HTV), and the United States vehicles Cargo Dragon 2 and Cygnus, the latter two stemming from the Commercial Resupply Services program.
European life and physical sciences research on board the International Space Station (ISS) is mainly based on the European Programme for Life and Physical Sciences in Space, which was initiated in 2001.
Facilities
ESA Headquarters, Paris, France
European Space Operations Centre (ESOC), Darmstadt, Germany
European Space Research and Technology Centre (ESTEC), Noordwijk, Netherlands
European Space Astronomy Centre (ESAC), Madrid, Spain
European Centre for Space Applications and Telecommunications (ECSAT), Oxfordshire, United Kingdom
European Astronaut Centre (EAC), Cologne, Germany
ESA Centre for Earth Observation (ESRIN), Frascati, Italy
Guiana Space Centre (CSG), Kourou, French Guiana
European Space Tracking Network (ESTRACK)
European Data Relay System
Link between ESA and EU
The ESA is an independent space agency and not under the jurisdiction of the European Union, although they have common goals, share funding, and work together often.
The initial aim of the European Union (EU) was to make the European Space Agency an agency of the EU by 2014. While the EU and its member states fund together 86% of the budget of the ESA, it is not an EU agency. Furthermore, the ESA has several non-EU members, most notably the United Kingdom which left the EU while remaining a full member of the ESA. The ESA is partnered with the EU on its two current flagship space programmes, the Copernicus series of Earth observation satellites and the Galileo satellite navigation system, with the ESA providing technical oversight and, in the case of Copernicus, some of the funding. The EU, though, has shown an interest in expanding into new areas, whence the proposal to rename and expand its satellite navigation agency (the European GNSS Agency) into the EU Agency for the Space Programme. The proposal drew strong criticism from the ESA, as it was perceived as encroaching on the ESA's turf.
In January 2021, after years of acrimonious relations, EU and ESA officials mended their relationship, with the EU Internal Market commissioner Thierry Breton saying "The European space policy will continue to rely on the ESA and its unique technical, engineering and science expertise," and that the "ESA will continue to be the European agency for space matters. If we are to be successful in our European strategy for space, and we will be, I will need the ESA by my side." ESA director Aschbacher reciprocated, saying "I would really like to make the ESA the main agency, the go-to agency of the European Commission for all its flagship programmes." The ESA and EUSPA are now seen to have distinct roles and competencies, which will be officialised in the Financial Framework Partnership Agreement (FFPA). Whereas the ESA's focus will be on the technical elements of the EU space programmes, the EUSPA will handle the operational elements of those programmes.
Security incidents
On 3 August 1984, the ESA's Paris headquarters were severely damaged and six people were hurt when a bomb exploded. It was planted by the far-left armed Action Directe group.
On 14 December 2015, hackers from Anonymous breached the ESA's subdomains and leaked thousands of login credentials.
| Technology | Programs and launch sites | null |
10372 | https://en.wikipedia.org/wiki/Entire%20function | Entire function | In complex analysis, an entire function, also called an integral function, is a complex-valued function that is holomorphic on the whole complex plane. Typical examples of entire functions are polynomials and the exponential function, and any finite sums, products and compositions of these, such as the trigonometric functions sine and cosine and their hyperbolic counterparts sinh and cosh, as well as derivatives and integrals of entire functions such as the error function. If an entire function f(z) has a root at w, then f(z)/(z − w), taking the limit value at w, is an entire function. On the other hand, the natural logarithm, the reciprocal function, and the square root are all not entire functions, nor can they be continued analytically to an entire function.
A transcendental entire function is an entire function that is not a polynomial.
Just as meromorphic functions can be viewed as a generalization of rational fractions, entire functions can be viewed as a generalization of polynomials. In particular, if for meromorphic functions one can generalize the factorization into simple fractions (the Mittag-Leffler theorem on the decomposition of a meromorphic function), then for entire functions there is a generalization of the factorization — the Weierstrass theorem on entire functions.
Properties
Every entire function f(z) can be represented as a single power series

f(z) = Σ_{n=0}^∞ a_n z^n

that converges everywhere in the complex plane, hence uniformly on compact sets. The radius of convergence is infinite, which implies that

lim_{n→∞} |a_n|^{1/n} = 0

or, equivalently,

lim_{n→∞} (ln |a_n|)/n = −∞.

Any power series satisfying this criterion will represent an entire function.
If (and only if) the coefficients of the power series are all real then the function evidently takes real values for real arguments, and the value of the function at the complex conjugate of z will be the complex conjugate of the value at z. Such functions are sometimes called self-conjugate (the conjugate function, F*(z), being given by the complex conjugate of F(z̄)).
If the real part of an entire function is known in a neighborhood of a point then both the real and imaginary parts are known for the whole complex plane, up to an imaginary constant. For instance, if the real part is known in a neighborhood of zero, then we can find the coefficients a_n for n > 0 from the following derivatives with respect to a real variable r:

Re a_n = (1/n!) d^n/dr^n [Re f(r)] at r = 0
Im a_n = (1/n!) d^n/dr^n [Re f(r e^{−iπ/(2n)})] at r = 0
(Likewise, if the imaginary part is known in a neighborhood then the function is determined up to a real constant.) In fact, if the real part is known just on an arc of a circle, then the function is determined up to an imaginary constant.
Note however that an entire function is not determined by its real part on all curves. In particular, if the real part is given on any curve in the complex plane where the real part of some other entire function is zero, then any multiple of that function can be added to the function we are trying to determine. For example, if the curve where the real part is known is the real line, then we can add i times any self-conjugate function. If the curve forms a loop, then it is determined by the real part of the function on the loop since the only functions whose real part is zero on the curve are those that are everywhere equal to some imaginary number.
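The coefficient-recovery formulas above can be checked mechanically. The SymPy sketch below recovers the Taylor coefficients of f(z) = exp(az), with the sample choice a = 1 + 2i, from the real part of f alone, and compares against the known a_n = a^n/n!; it is a minimal illustration, and the comparison is done numerically to sidestep symbolic-simplification quirks.

```python
import sympy as sp

r = sp.symbols('r', real=True)
a = 1 + 2 * sp.I                  # sample coefficient: f(z) = exp(a z), so a_n = a**n / n!
f = lambda z: sp.exp(a * z)

for n in range(1, 5):
    # Re a_n from the n-th r-derivative of Re f(r) at r = 0
    re_an = sp.diff(sp.re(f(r)), r, n).subs(r, 0) / sp.factorial(n)
    # Im a_n from the n-th r-derivative of Re f(r*e^{-i*pi/(2n)}) at r = 0
    rotated = sp.re(f(r * sp.exp(-sp.I * sp.pi / (2 * n))))
    im_an = sp.diff(rotated, r, n).subs(r, 0) / sp.factorial(n)
    expected = a**n / sp.factorial(n)
    error = sp.N(re_an + sp.I * im_an - expected, 15)
    print(n, abs(complex(error)) < 1e-12)
```

Each line prints True: the real part along just two rays through the origin pins down every coefficient with n > 0, leaving only the imaginary constant noted above undetermined.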
The Weierstrass factorization theorem asserts that any entire function can be represented by a product involving its zeroes (or "roots").
The entire functions on the complex plane form an integral domain (in fact a Prüfer domain). They also form a commutative unital associative algebra over the complex numbers.
Liouville's theorem states that any bounded entire function must be constant.
As a consequence of Liouville's theorem, any function that is entire on the whole Riemann sphere
is constant. Thus any non-constant entire function must have a singularity at the complex point at infinity, either a pole for a polynomial or an essential singularity for a transcendental entire function. Specifically, by the Casorati–Weierstrass theorem, for any transcendental entire function and any complex there is a sequence such that
Picard's little theorem is a much stronger result: Any non-constant entire function takes on every complex number as value, possibly with a single exception. When an exception exists, it is called a lacunary value of the function. The possibility of a lacunary value is illustrated by the exponential function, which never takes on the value 0. One can take a suitable branch of the logarithm of an entire function that never hits 0, so that this will also be an entire function (according to the Weierstrass factorization theorem). The logarithm hits every complex number except possibly one number, which implies that the first function will hit any value other than 0 an infinite number of times. Similarly, a non-constant entire function that does not hit a particular value will hit every other value an infinite number of times.
Liouville's theorem is a special case of the following statement: an entire function f satisfying the bound |f(z)| ≤ M|z|^n for some constant M, some non-negative integer n, and all z of sufficiently large modulus is necessarily a polynomial of degree at most n.
Growth
Entire functions may grow as fast as any increasing function: for any increasing function
there exists an entire function such that
for all real . Such a function may be easily found of the form:
for a constant and a strictly increasing sequence of positive integers . Any such sequence defines an entire function , and if the powers are chosen appropriately we may satisfy the inequality for all real . (For instance, it certainly holds if one chooses and, for any integer one chooses an even exponent such that ).
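One convenient construction, sketched here with the constant and the exponents left as parameters to be tuned to the given increasing function, is
f(z) = c + \sum_{k=1}^{\infty} \left( \frac{z}{k} \right)^{n_k}, \qquad c > 0, \quad n_1 < n_2 < n_3 < \cdots \ \text{even positive integers.}
For any fixed z the terms are eventually dominated by a convergent geometric series (once k exceeds 2|z| each term is at most 2^{-n_k}), so the sum is entire; on the positive real axis every term is non-negative, so choosing c and the exponents n_k large enough forces the sum to exceed the given function at every real argument.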
Order and type
The order (at infinity) of an entire function is defined using the limit superior as:
where is the disk of radius and denotes the supremum norm of on . The order is a non-negative real number or infinity (except when for all ). In other words, the order of is the infimum of all such that:
The example of f(z) = exp(2z^2) shows that this does not mean f(z) = O(exp(|z|^ρ)) if f is of order ρ.
If one can also define the type:
If the order is 1 and the type is , the function is said to be "of exponential type ". If it is of order less than 1 it is said to be of exponential type 0.
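In symbols, writing ‖f‖_{∞,B_r} for the supremum of |f| on the closed disk B_r of radius r centered at the origin, the order and type are given by the standard formulas (stated here as a sketch, following the usual conventions):
\rho = \limsup_{r \to \infty} \frac{\ln \ln \| f \|_{\infty, B_r}}{\ln r}, \qquad \sigma = \limsup_{r \to \infty} \frac{\ln \| f \|_{\infty, B_r}}{r^{\rho}} .
For instance, f(z) = exp(2z^2) has ‖f‖_{∞,B_r} = exp(2r^2), so ln ln ‖f‖ grows like 2 ln r and the formulas give order 2 and type 2.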
If then the order and type can be found by the formulas
Let denote the -th derivative of . Then we may restate these formulas in terms of the derivatives at any arbitrary point :
The type may be infinite, as in the case of the reciprocal gamma function, or zero (see example below under ).
Another way to find out the order and type is Matsaev's theorem.
Examples
Here are some examples of functions of various orders:
Order ρ
For arbitrary positive numbers and one can construct an example of an entire function of order and type using:
Order 0
Non-zero polynomials
Order 1/4
where
Order 1/3
where
Order 1/2
with (for which the type is given by )
Order 1
with ()
the Bessel functions and spherical Bessel functions for integer values of
the reciprocal gamma function ( is infinite)
Order 3/2
Airy function
Order 2
with ()
The Barnes G-function ( is infinite).
Order infinity
Genus
Entire functions of finite order have Hadamard's canonical representation (Hadamard factorization theorem):
where are those roots of that are not zero (), is the order of the zero of at (the case being taken to mean ), a polynomial (whose degree we shall call ), and is the smallest non-negative integer such that the series
converges. The non-negative integer is called the genus of the entire function .
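Written out, the canonical representation takes the following shape (a sketch of the standard form, separating a zero of multiplicity m at the origin and writing q for the degree of the polynomial P):
f(z) = z^{m} e^{P(z)} \prod_{n=1}^{\infty} \left( 1 - \frac{z}{a_n} \right) \exp\!\left( \frac{z}{a_n} + \frac{1}{2}\left(\frac{z}{a_n}\right)^{2} + \cdots + \frac{1}{p}\left(\frac{z}{a_n}\right)^{p} \right),
where p is the smallest non-negative integer for which \sum_{n=1}^{\infty} |a_n|^{-(p+1)} converges, and the genus is \max(p, q).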
If the order is not an integer, then is the integer part of . If the order is a positive integer, then there are two possibilities: or .
For example, , and are entire functions of genus .
Other examples
According to J. E. Littlewood, the Weierstrass sigma function is a 'typical' entire function. This statement can be made precise in the theory of random entire functions: the asymptotic behavior of almost all entire functions is similar to that of the sigma function. Other examples include the Fresnel integrals, the Jacobi theta function, and the reciprocal Gamma function. The exponential function and the error function are special cases of the Mittag-Leffler function. According to the fundamental theorem of Paley and Wiener, Fourier transforms of functions (or distributions) with bounded support are entire functions of order and finite type.
Other examples are solutions of linear differential equations with polynomial coefficients. If the coefficient at the highest derivative is constant, then all solutions of such equations are entire functions. For example, the exponential function, sine, cosine, Airy functions and Parabolic cylinder functions arise in this way. The class of entire functions is closed with respect to compositions. This makes it possible to study dynamics of entire functions.
An entire function of the square root of a complex number is entire if the original function is even, for example .
If a sequence of polynomials all of whose roots are real converges in a neighborhood of the origin to a limit which is not identically equal to zero, then this limit is an entire function. Such entire functions form the Laguerre–Pólya class, which can also be characterized in terms of the Hadamard product, namely, belongs to this class if and only if in the Hadamard representation all are real, , and
, where and are real, and . For example, the sequence of polynomials
converges, as increases, to . The polynomials
have all real roots, and converge to . The polynomials
also converge to , showing the buildup of the Hadamard product for cosine.
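In one standard presentation (a sketch following the usual choices of these sequences), the three families of polynomials referred to above are
\left( 1 - \frac{(z-d)^2}{n} \right)^{n} \to e^{-(z-d)^2}, \qquad \prod_{m=1}^{n} \left( 1 - \frac{z^2}{\left(m - \tfrac{1}{2}\right)^{2} \pi^{2}} \right) \to \cos z, \qquad \frac{1}{2}\left[ \left(1 + \frac{iz}{n}\right)^{n} + \left(1 - \frac{iz}{n}\right)^{n} \right] \to \cos z,
with convergence as n increases; the middle product exhibits the Hadamard factorization of the cosine, whose zeros lie at the odd multiples of π/2.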
| Mathematics | Functions: General | null |
10375 | https://en.wikipedia.org/wiki/Error%20detection%20and%20correction | Error detection and correction | In information theory and coding theory with applications in computer science and telecommunications, error detection and correction (EDAC) or error control are techniques that enable reliable delivery of digital data over unreliable communication channels. Many communication channels are subject to channel noise, and thus errors may be introduced during transmission from the source to a receiver. Error detection techniques allow detecting such errors, while error correction enables reconstruction of the original data in many cases.
Definitions
Error detection is the detection of errors caused by noise or other impairments during transmission from the transmitter to the receiver.
Error correction is the detection of errors and reconstruction of the original, error-free data.
History
In classical antiquity, copyists of the Hebrew Bible were paid for their work according to the number of stichs (lines of verse). As the prose books of the Bible were hardly ever written in stichs, the copyists, in order to estimate the amount of work, had to count the letters. This also helped ensure accuracy in the transmission of the text with the production of subsequent copies. Between the 7th and 10th centuries CE a group of Jewish scribes formalized and expanded this to create the Numerical Masorah to ensure accurate reproduction of the sacred text. It included counts of the number of words in a line, section, book and groups of books, noting the middle stich of a book, word use statistics, and commentary. Standards became such that a deviation in even a single letter in a Torah scroll was considered unacceptable. The effectiveness of their error correction method was verified by the accuracy of copying through the centuries demonstrated by discovery of the Dead Sea Scrolls in 1947–1956, dating from .
The modern development of error correction codes is credited to Richard Hamming in 1947. A description of Hamming's code appeared in Claude Shannon's A Mathematical Theory of Communication and was quickly generalized by Marcel J. E. Golay.
Principles
All error-detection and correction schemes add some redundancy (i.e., some extra data) to a message, which receivers can use to check consistency of the delivered message and to recover data that has been determined to be corrupted. Error detection and correction schemes can be either systematic or non-systematic. In a systematic scheme, the transmitter sends the original (error-free) data and attaches a fixed number of check bits (or parity data), which are derived from the data bits by some encoding algorithm. If error detection is required, a receiver can simply apply the same algorithm to the received data bits and compare its output with the received check bits; if the values do not match, an error has occurred at some point during the transmission. If error correction is required, a receiver can apply the decoding algorithm to the received data bits and the received check bits to recover the original error-free data. In a system that uses a non-systematic code, the original message is transformed into an encoded message carrying the same information and that has at least as many bits as the original message.
Good error control performance requires the scheme to be selected based on the characteristics of the communication channel. Common channel models include memoryless models where errors occur randomly and with a certain probability, and dynamic models where errors occur primarily in bursts. Consequently, error-detecting and -correcting codes can be generally distinguished between random-error-detecting/correcting and burst-error-detecting/correcting. Some codes can also be suitable for a mixture of random errors and burst errors.
If the channel characteristics cannot be determined, or are highly variable, an error-detection scheme may be combined with a system for retransmissions of erroneous data. This is known as automatic repeat request (ARQ), and is most notably used in the Internet. An alternate approach for error control is hybrid automatic repeat request (HARQ), which is a combination of ARQ and error-correction coding.
Types of error correction
There are three major types of error correction:
Automatic repeat request
Automatic repeat request (ARQ) is an error control method for data transmission that makes use of error-detection codes, acknowledgment and/or negative acknowledgment messages, and timeouts to achieve reliable data transmission. An acknowledgment is a message sent by the receiver to indicate that it has correctly received a data frame.
Usually, when the transmitter does not receive the acknowledgment before the timeout occurs (i.e., within a reasonable amount of time after sending the data frame), it retransmits the frame until it is either correctly received or the error persists beyond a predetermined number of retransmissions.
Three types of ARQ protocols are Stop-and-wait ARQ, Go-Back-N ARQ, and Selective Repeat ARQ.
ARQ is appropriate if the communication channel has varying or unknown capacity, such as is the case on the Internet. However, ARQ requires the availability of a back channel, results in possibly increased latency due to retransmissions, and requires the maintenance of buffers and timers for retransmissions, which in the case of network congestion can put a strain on the server and overall network capacity.
For example, ARQ is used on shortwave radio data links in the form of ARQ-E, or combined with multiplexing as ARQ-M.
Forward error correction
Forward error correction (FEC) is a process of adding redundant data such as an error-correcting code (ECC) to a message so that it can be recovered by a receiver even when a number of errors (up to the capability of the code being used) are introduced, either during the process of transmission or on storage. Since the receiver does not have to ask the sender for retransmission of the data, a backchannel is not required in forward error correction. Error-correcting codes are used in lower-layer communication such as cellular network, high-speed fiber-optic communication and Wi-Fi, as well as for reliable storage in media such as flash memory, hard disk and RAM.
Error-correcting codes are usually distinguished between convolutional codes and block codes:
Convolutional codes are processed on a bit-by-bit basis. They are particularly suitable for implementation in hardware, and the Viterbi decoder allows optimal decoding.
Block codes are processed on a block-by-block basis. Early examples of block codes are repetition codes, Hamming codes and multidimensional parity-check codes. They were followed by a number of efficient codes, Reed–Solomon codes being the most notable due to their current widespread use. Turbo codes and low-density parity-check codes (LDPC) are relatively new constructions that can provide almost optimal efficiency.
Shannon's theorem is an important theorem in forward error correction, and describes the maximum information rate at which reliable communication is possible over a channel that has a certain error probability or signal-to-noise ratio (SNR). This strict upper limit is expressed in terms of the channel capacity. More specifically, the theorem says that there exist codes such that with increasing encoding length the probability of error on a discrete memoryless channel can be made arbitrarily small, provided that the code rate is smaller than the channel capacity. The code rate is defined as the fraction k/n of k source symbols and n encoded symbols.
The actual maximum code rate allowed depends on the error-correcting code used, and may be lower. This is because Shannon's proof was only of existential nature, and did not show how to construct codes that are both optimal and have efficient encoding and decoding algorithms.
Hybrid schemes
Hybrid ARQ is a combination of ARQ and forward error correction. There are two basic approaches:
Messages are always transmitted with FEC parity data (and error-detection redundancy). A receiver decodes a message using the parity information and requests retransmission using ARQ only if the parity data was not sufficient for successful decoding (identified through a failed integrity check).
Messages are transmitted without parity data (only with error-detection information). If a receiver detects an error, it requests FEC information from the transmitter using ARQ and uses it to reconstruct the original message.
The latter approach is particularly attractive on an erasure channel when using a rateless erasure code.
Types of error detection
Error detection is most commonly realized using a suitable hash function (or specifically, a checksum, cyclic redundancy check or other algorithm). A hash function adds a fixed-length tag to a message, which enables receivers to verify the delivered message by recomputing the tag and comparing it with the one provided.
There exists a vast variety of different hash function designs. However, some are of particularly widespread use because of either their simplicity or their suitability for detecting certain kinds of errors (e.g., the cyclic redundancy check's performance in detecting burst errors).
Minimum distance coding
A random-error-correcting code based on minimum distance coding can provide a strict guarantee on the number of detectable errors, but it may not protect against a preimage attack.
Repetition codes
A repetition code is a coding scheme that repeats the bits across a channel to achieve error-free communication. Given a stream of data to be transmitted, the data are divided into blocks of bits. Each block is transmitted some predetermined number of times. For example, to send the bit pattern 1011, the four-bit block can be repeated three times, thus producing 1011 1011 1011. If this twelve-bit pattern was received as 1010 1011 1011 – where the first block is unlike the other two – an error has occurred.
A repetition code is very inefficient and can be susceptible to problems if the error occurs in exactly the same place for each group (e.g., 1010 1010 1010 in the previous example would be detected as correct). The advantage of repetition codes is that they are extremely simple, and are in fact used in some transmissions of numbers stations.
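A minimal Python sketch of the three-fold repetition scheme just described (the block size of four bits and the three copies mirror the example above; the function names are illustrative only):

def repetition_encode(block, copies=3):
    # Transmit the same block several times in a row.
    return block * copies

def repetition_check(received, block_len=4, copies=3):
    # Split the received stream back into copies and compare them.
    blocks = [received[i * block_len:(i + 1) * block_len] for i in range(copies)]
    if all(b == blocks[0] for b in blocks):
        return blocks[0], False          # no disagreement detected
    # Majority vote per bit position gives a best guess at the sent block.
    guess = "".join(max("01", key=lambda c: [b[i] for b in blocks].count(c))
                    for i in range(block_len))
    return guess, True                   # an error was detected

sent = repetition_encode("1011")          # "101110111011"
print(repetition_check("101010111011"))   # ('1011', True): first copy corrupted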
Parity bit
A parity bit is a bit that is added to a group of source bits to ensure that the number of set bits (i.e., bits with value 1) in the outcome is even or odd. It is a very simple scheme that can be used to detect single or any other odd number (i.e., three, five, etc.) of errors in the output. An even number of flipped bits will make the parity bit appear correct even though the data is erroneous.
Parity bits added to each word sent are called transverse redundancy checks, while those added at the end of a stream of words are called longitudinal redundancy checks. For example, if each of a series of m-bit words has a parity bit added, showing whether there were an odd or even number of ones in that word, any word with a single error in it will be detected. It will not be known where in the word the error is, however. If, in addition, after each stream of n words a parity sum is sent, each bit of which shows whether there were an odd or even number of ones at that bit-position sent in the most recent group, the exact position of the error can be determined and the error corrected. This method is only guaranteed to be effective, however, if there is no more than one error in every group of n words. With more error correction bits, more errors can be detected and in some cases corrected.
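A minimal Python sketch of this two-dimensional arrangement, with a transverse parity bit per word and a longitudinal parity word over the group (the word and group sizes are arbitrary choices for illustration):

def parity_bit(bits):
    # Even parity: 1 if the number of set bits is odd, so the completed total becomes even.
    return sum(bits) % 2

def encode_group(words):
    # Append a parity bit to each word (transverse check) and a parity word
    # over all bit positions of the group (longitudinal check).
    coded = [w + [parity_bit(w)] for w in words]
    lrc = [parity_bit([w[i] for w in words]) for i in range(len(words[0]))]
    return coded, lrc

def locate_single_error(coded, lrc):
    # A single flipped data bit shows up as one bad row and one bad column.
    bad_row = next((r for r, w in enumerate(coded) if parity_bit(w) != 0), None)
    bad_col = next((c for c in range(len(lrc))
                    if parity_bit([w[c] for w in coded]) != lrc[c]), None)
    return bad_row, bad_col

words = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
coded, lrc = encode_group(words)
coded[1][2] ^= 1                         # corrupt one data bit in transit
print(locate_single_error(coded, lrc))   # (1, 2): row and column of the error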
There are also other bit-grouping techniques.
Checksum
A checksum of a message is a modular arithmetic sum of message code words of a fixed word length (e.g., byte values). The sum may be negated by means of a ones'-complement operation prior to transmission to detect unintentional all-zero messages.
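As a rough illustration, a Python sketch of such a checksum over 16-bit words with a final ones'-complement negation (the 16-bit word length is an assumption made here for concreteness, in the style of the Internet checksum):

def ones_complement_checksum(data: bytes) -> int:
    # Pad to an even length, then sum the message as 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return ~total & 0xFFFF                          # ones'-complement negation

msg = b"HELLO"
check = ones_complement_checksum(msg)
# The receiver recomputes the ones'-complement sum over the message plus the
# checksum word; a result of 0xFFFF (all ones) indicates no detected error.
print(hex(check))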
Checksum schemes include parity bits, check digits, and longitudinal redundancy checks. Some checksum schemes, such as the Damm algorithm, the Luhn algorithm, and the Verhoeff algorithm, are specifically designed to detect errors commonly introduced by humans in writing down or remembering identification numbers.
Cyclic redundancy check
A cyclic redundancy check (CRC) is a non-secure hash function designed to detect accidental changes to digital data in computer networks. It is not suitable for detecting maliciously introduced errors. It is characterized by specification of a generator polynomial, which is used as the divisor in a polynomial long division over a finite field, taking the input data as the dividend. The remainder becomes the result.
A CRC has properties that make it well suited for detecting burst errors. CRCs are particularly easy to implement in hardware and are therefore commonly used in computer networks and storage devices such as hard disk drives.
The parity bit can be seen as a special-case 1-bit CRC.
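A bitwise Python sketch of the polynomial division underlying a CRC; the 8-bit generator polynomial x^8 + x^2 + x + 1 (0x07) is chosen here only as an example:

def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    # Long division of the message by the generator polynomial over GF(2),
    # processed most-significant bit first; the 8-bit remainder is the CRC.
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

print(hex(crc8(b"123456789")))   # 0xf4, the commonly quoted check value for these parameters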
Cryptographic hash function
The output of a cryptographic hash function, also known as a message digest, can provide strong assurances about data integrity, whether changes of the data are accidental (e.g., due to transmission errors) or maliciously introduced. Any modification to the data will likely be detected through a mismatching hash value. Furthermore, given some hash value, it is typically infeasible to find some input data (other than the one given) that will yield the same hash value. If an attacker can change not only the message but also the hash value, then a keyed hash or message authentication code (MAC) can be used for additional security. Without knowing the key, it is not possible for the attacker to easily or conveniently calculate the correct keyed hash value for a modified message.
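For illustration, a short Python sketch using the standard library's hashlib and hmac modules; the SHA-256 algorithm and the key shown are arbitrary example choices:

import hashlib
import hmac

message = b"transfer 100 units to account 42"

# Unkeyed digest: detects accidental or malicious modification of the data,
# but an attacker who can also replace the digest can simply recompute it.
digest = hashlib.sha256(message).hexdigest()

# Keyed digest (MAC): without the secret key, a forger cannot produce a
# matching tag for a modified message.
key = b"example-shared-secret"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

print(digest)
print(tag)
# Constant-time comparison of a received tag against a freshly computed one.
print(hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest()))   # True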
Digital signature
Digital signatures can provide strong assurances about data integrity, whether the changes of the data are accidental or maliciously introduced.
Digital signatures are perhaps most notable for being part of the HTTPS protocol for securely browsing the web.
Error correction code
Any error-correcting code can be used for error detection. A code with minimum Hamming distance, d, can detect up to d − 1 errors in a code word. Using minimum-distance-based error-correcting codes for error detection can be suitable if a strict limit on the minimum number of errors to be detected is desired.
Codes with minimum Hamming distance d = 2 are degenerate cases of error-correcting codes and can be used to detect single errors. The parity bit is an example of a single-error-detecting code.
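A small Python sketch making the d − 1 bound concrete: the minimum pairwise Hamming distance of a block code determines how many bit errors are always detectable (the three-codeword code below is an arbitrary example):

def hamming_distance(a: str, b: str) -> int:
    # Number of positions at which two equal-length words differ.
    return sum(x != y for x, y in zip(a, b))

def minimum_distance(code):
    # Smallest distance over all pairs of distinct codewords.
    words = list(code)
    return min(hamming_distance(u, v)
               for i, u in enumerate(words) for v in words[i + 1:])

code = {"0000", "0111", "1011"}
d = minimum_distance(code)
print(f"minimum distance {d}; guaranteed to detect up to {d - 1} errors")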
Applications
Applications that require low latency (such as telephone conversations) cannot use automatic repeat request (ARQ); they must use forward error correction (FEC). By the time an ARQ system discovers an error and re-transmits it, the re-sent data will arrive too late to be usable.
Applications where the transmitter immediately forgets the information as soon as it is sent (such as most television cameras) cannot use ARQ; they must use FEC because when an error occurs, the original data is no longer available.
Applications that use ARQ must have a return channel; applications having no return channel cannot use ARQ.
Applications that require extremely low error rates (such as digital money transfers) must use ARQ due to the possibility of uncorrectable errors with FEC.
Reliability and inspection engineering also make use of the theory of error-correcting codes.
Internet
In a typical TCP/IP stack, error control is performed at multiple levels:
Each Ethernet frame uses CRC-32 error detection. Frames with detected errors are discarded by the receiver hardware.
The IPv4 header contains a checksum protecting the contents of the header. Packets with incorrect checksums are dropped within the network or at the receiver.
The checksum was omitted from the IPv6 header in order to minimize processing costs in network routing and because current link layer technology is assumed to provide sufficient error detection (see also RFC 3819).
UDP has an optional checksum covering the payload and addressing information in the UDP and IP headers. Packets with incorrect checksums are discarded by the network stack. The checksum is optional under IPv4, and required under IPv6. When omitted, it is assumed the data-link layer provides the desired level of error protection.
TCP provides a checksum for protecting the payload and addressing information in the TCP and IP headers. Packets with incorrect checksums are discarded by the network stack and eventually get retransmitted using ARQ, either explicitly (such as through three-way handshake) or implicitly due to a timeout.
Deep-space telecommunications
The development of error-correction codes was tightly coupled with the history of deep-space missions due to the extreme dilution of signal power over interplanetary distances, and the limited power availability aboard space probes. Whereas early missions sent their data uncoded, starting in 1968, digital error correction was implemented in the form of (sub-optimally decoded) convolutional codes and Reed–Muller codes. The Reed–Muller code was well suited to the noise the spacecraft was subject to (approximately matching a bell curve), and was implemented for the Mariner spacecraft and used on missions between 1969 and 1977.
The Voyager 1 and Voyager 2 missions, which started in 1977, were designed to deliver color imaging and scientific information from Jupiter and Saturn. This resulted in increased coding requirements, and thus, the spacecraft were supported by (optimally Viterbi-decoded) convolutional codes that could be concatenated with an outer Golay (24,12,8) code. The Voyager 2 craft additionally supported an implementation of a Reed–Solomon code. The concatenated Reed–Solomon–Viterbi (RSV) code allowed for very powerful error correction, and enabled the spacecraft's extended journey to Uranus and Neptune. After ECC system upgrades in 1989, both crafts used V2 RSV coding.
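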
The Consultative Committee for Space Data Systems currently recommends usage of error correction codes with performance similar to the Voyager 2 RSV code as a minimum. Concatenated codes are increasingly falling out of favor with space missions, and are replaced by more powerful codes such as Turbo codes or LDPC codes.
The different kinds of deep space and orbital missions that are conducted suggest that trying to find a one-size-fits-all error correction system will be an ongoing problem. For missions close to Earth, the nature of the noise in the communication channel is different from that which a spacecraft on an interplanetary mission experiences. Additionally, as a spacecraft increases its distance from Earth, the problem of correcting for noise becomes more difficult.
Satellite broadcasting
The demand for satellite transponder bandwidth continues to grow, fueled by the desire to deliver television (including new channels and high-definition television) and IP data. Transponder availability and bandwidth constraints have limited this growth. Transponder capacity is determined by the selected modulation scheme and the proportion of capacity consumed by FEC.
Data storage
Error detection and correction codes are often used to improve the reliability of data storage media. A parity track capable of detecting single-bit errors was present on the first magnetic tape data storage in 1951. The optimal rectangular code used in group coded recording tapes not only detects but also corrects single-bit errors. Some file formats, particularly archive formats, include a checksum (most often CRC32) to detect corruption and truncation and can employ redundancy or parity files to recover portions of corrupted data. Reed-Solomon codes are used in compact discs to correct errors caused by scratches.
Modern hard drives use Reed–Solomon codes to detect and correct minor errors in sector reads, and to recover corrupted data from failing sectors and store that data in the spare sectors. RAID systems use a variety of error correction techniques to recover data when a hard drive completely fails. Filesystems such as ZFS or Btrfs, as well as some RAID implementations, support data scrubbing and resilvering, which allows bad blocks to be detected and (hopefully) recovered before they are used. The recovered data may be re-written to exactly the same physical location, to spare blocks elsewhere on the same piece of hardware, or the data may be rewritten onto replacement hardware.
Error-correcting memory
Dynamic random-access memory (DRAM) may provide stronger protection against soft errors by relying on error-correcting codes. Such error-correcting memory, known as ECC or EDAC-protected memory, is particularly desirable for mission-critical applications, such as scientific computing, financial, medical, etc. as well as extraterrestrial applications due to the increased radiation in space.
Error-correcting memory controllers traditionally use Hamming codes, although some use triple modular redundancy. Interleaving allows distributing the effect of a single cosmic ray potentially upsetting multiple physically neighboring bits across multiple words by associating neighboring bits to different words. As long as a single-event upset (SEU) does not exceed the error threshold (e.g., a single error) in any particular word between accesses, it can be corrected (e.g., by a single-bit error-correcting code), and the illusion of an error-free memory system may be maintained.
In addition to hardware providing features required for ECC memory to operate, operating systems usually contain related reporting facilities that are used to provide notifications when soft errors are transparently recovered. One example is the Linux kernel's EDAC subsystem (previously known as Bluesmoke), which collects the data from error-checking-enabled components inside a computer system; besides collecting and reporting back the events related to ECC memory, it also supports other checksumming errors, including those detected on the PCI bus. A few systems also support memory scrubbing to catch and correct errors early before they become unrecoverable.
| Mathematics | Discrete mathematics | null |
10377 | https://en.wikipedia.org/wiki/Euclidean%20algorithm | Euclidean algorithm | In mathematics, the Euclidean algorithm, or Euclid's algorithm, is an efficient method for computing the greatest common divisor (GCD) of two integers, the largest number that divides them both without a remainder. It is named after the ancient Greek mathematician Euclid, who first described it in his Elements ().
It is an example of an algorithm, a step-by-step procedure for performing a calculation according to well-defined rules,
and is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations.
The Euclidean algorithm is based on the principle that the greatest common divisor of two numbers does not change if the larger number is replaced by its difference with the smaller number. For example, 21 is the GCD of 252 and 105 (as 252 = 21 × 12 and 105 = 21 × 5), and the same number 21 is also the GCD of 105 and 252 − 105 = 147. Since this replacement reduces the larger of the two numbers, repeating this process gives successively smaller pairs of numbers until the two numbers become equal. When that occurs, that number is the GCD of the original two numbers. By reversing the steps or using the extended Euclidean algorithm, the GCD can be expressed as a linear combination of the two original numbers, that is the sum of the two numbers, each multiplied by an integer (for example, 21 = 5 × 105 + (−2) × 252). The fact that the GCD can always be expressed in this way is known as Bézout's identity.
The version of the Euclidean algorithm described above—which follows Euclid's original presentation—may require many subtraction steps to find the GCD when one of the given numbers is much bigger than the other. A more efficient version of the algorithm shortcuts these steps, instead replacing the larger of the two numbers by its remainder when divided by the smaller of the two (with this version, the algorithm stops when reaching a zero remainder). With this improvement, the algorithm never requires more steps than five times the number of digits (base 10) of the smaller integer. This was proven by Gabriel Lamé in 1844 (Lamé's Theorem), and marks the beginning of computational complexity theory. Additional methods for improving the algorithm's efficiency were developed in the 20th century.
The Euclidean algorithm has many theoretical and practical applications. It is used for reducing fractions to their simplest form and for performing division in modular arithmetic. Computations using this algorithm form part of the cryptographic protocols that are used to secure internet communications, and in methods for breaking these cryptosystems by factoring large composite numbers. The Euclidean algorithm may be used to solve Diophantine equations, such as finding numbers that satisfy multiple congruences according to the Chinese remainder theorem, to construct continued fractions, and to find accurate rational approximations to real numbers. Finally, it can be used as a basic tool for proving theorems in number theory such as Lagrange's four-square theorem and the uniqueness of prime factorizations.
The original algorithm was described only for natural numbers and geometric lengths (real numbers), but the algorithm was generalized in the 19th century to other types of numbers, such as Gaussian integers and polynomials of one variable. This led to modern abstract algebraic notions such as Euclidean domains.
Background: greatest common divisor
The Euclidean algorithm calculates the greatest common divisor (GCD) of two natural numbers and . The greatest common divisor is the largest natural number that divides both and without leaving a remainder. Synonyms for GCD include greatest common factor (GCF), highest common factor (HCF), highest common divisor (HCD), and greatest common measure (GCM). The greatest common divisor is often written as or, more simply, as , although the latter notation is ambiguous, also used for concepts such as an ideal in the ring of integers, which is closely related to GCD.
If , then and are said to be coprime (or relatively prime). This property does not imply that or are themselves prime numbers. For example, and factor as and , so they are not prime, but their prime factors are different, so and are coprime, with no common factors other than .
Let . Since and are both multiples of , they can be written and , and there is no larger number for which this is true. The natural numbers and must be coprime, since any common factor could be factored out of and to make greater. Thus, any other number that divides both and must also divide . The greatest common divisor of and is the unique (positive) common divisor of and that is divisible by any other common divisor .
The greatest common divisor can be visualized as follows. Consider a rectangular area by , and any common divisor that divides both and exactly. The sides of the rectangle can be divided into segments of length , which divides the rectangle into a grid of squares of side length . The GCD is the largest value of for which this is possible. For illustration, a rectangular area can be divided into a grid of: squares, squares, squares, squares, squares or squares. Therefore, is the GCD of and . A rectangular area can be divided into a grid of squares, with two squares along one edge () and five squares along the other ().
The greatest common divisor of two numbers and is the product of the prime factors shared by the two numbers, where each prime factor can be repeated as many times as it divides both and . For example, since can be factored into , and can be factored into , the GCD of and equals , the product of their shared prime factors (with 3 repeated since divides both). If two numbers have no common prime factors, their GCD is (obtained here as an instance of the empty product); in other words, they are coprime. A key advantage of the Euclidean algorithm is that it can find the GCD efficiently without having to compute the prime factors. Factorization of large integers is believed to be a computationally very difficult problem, and the security of many widely used cryptographic protocols is based upon its infeasibility.
Another definition of the GCD is helpful in advanced mathematics, particularly ring theory. The greatest common divisor of two nonzero numbers and is also their smallest positive integral linear combination, that is, the smallest positive number of the form where and are integers. The set of all integral linear combinations of and is actually the same as the set of all multiples of (, where is an integer). In modern mathematical language, the ideal generated by and is the ideal generated by alone (an ideal generated by a single element is called a principal ideal, and all ideals of the integers are principal ideals). Some properties of the GCD are in fact easier to see with this description, for instance the fact that any common divisor of and also divides the GCD (it divides both terms of ). The equivalence of this GCD definition with the other definitions is described below.
The GCD of three or more numbers equals the product of the prime factors common to all the numbers, but it can also be calculated by repeatedly taking the GCDs of pairs of numbers. For example,
Thus, Euclid's algorithm, which computes the GCD of two integers, suffices to calculate the GCD of arbitrarily many integers.
Procedure
The Euclidean algorithm can be thought of as constructing a sequence of non-negative integers that begins with the two given integers and and will eventually terminate with the integer zero: with . The integer will then be the GCD and we can state . The algorithm indicates how to construct the intermediate remainders via division-with-remainder on the preceding pair by finding an integer quotient so that:
Because the sequence of non-negative integers is strictly decreasing, it eventually must terminate. In other words, since for every , and each is an integer that is strictly smaller than the preceding , there eventually cannot be a non-negative integer smaller than zero, and hence the algorithm must terminate. In fact, the algorithm will always terminate at the th step with equal to zero.
To illustrate, suppose the GCD of 1071 and 462 is requested. The sequence is initially and in order to find , we need to find integers and such that:
.
This is the quotient since . This determines and so the sequence is now . The next step is to continue the sequence to find by finding integers and such that:
.
This is the quotient since . This determines and so the sequence is now . The next step is to continue the sequence to find by finding integers and such that:
.
This is the quotient since . This determines and so the sequence is completed as as no further non-negative integer smaller than can be found. The penultimate remainder is therefore the requested GCD:
We can generalize slightly by dropping any ordering requirement on the initial two values and . If , the algorithm may continue and trivially find that as the sequence of remainders will be . If , then we can also continue since , suggesting the next remainder should be itself, and the sequence is . Normally, this would be invalid because it breaks the requirement but now we have by construction, so the requirement is automatically satisfied and the Euclidean algorithm can continue as normal. Therefore, dropping any ordering between the first two integers does not affect the conclusion that the sequence must eventually terminate because the next remainder will always satisfy and everything continues as above. The only modifications that need to be made are that only for , and that the sub-sequence of non-negative integers for is strictly decreasing, therefore excluding from both statements.
Proof of validity
The validity of the Euclidean algorithm can be proven by a two-step argument. In the first step, the final nonzero remainder is shown to divide both and . Since it is a common divisor, it must be less than or equal to the greatest common divisor . In the second step, it is shown that any common divisor of and , including , must divide ; therefore, must be less than or equal to . These two opposite inequalities imply .
To demonstrate that divides both and (the first step), divides its predecessor
since the final remainder is zero. also divides its next predecessor
because it divides both terms on the right-hand side of the equation. Iterating the same argument, divides all the preceding remainders, including and . None of the preceding remainders , , etc. divide and , since they leave a remainder. Since is a common divisor of and , .
In the second step, any natural number that divides both and (in other words, any common divisor of and ) divides the remainders . By definition, and can be written as multiples of : and , where and are natural numbers. Therefore, divides the initial remainder , since . An analogous argument shows that also divides the subsequent remainders , , etc. Therefore, the greatest common divisor must divide , which implies that . Since the first part of the argument showed the reverse (), it follows that . Thus, is the greatest common divisor of all the succeeding pairs:
.
Worked example
For illustration, the Euclidean algorithm can be used to find the greatest common divisor of 1071 and 462. To begin, multiples of 462 are subtracted from 1071 until the remainder is less than 462. Two such multiples can be subtracted (2 × 462 = 924), leaving a remainder of 147:
1071 = 2 × 462 + 147.
Then multiples of 147 are subtracted from 462 until the remainder is less than 147. Three multiples can be subtracted (3 × 147 = 441), leaving a remainder of 21:
462 = 3 × 147 + 21.
Then multiples of 21 are subtracted from 147 until the remainder is less than 21. Seven multiples can be subtracted (7 × 21 = 147), leaving no remainder:
147 = 7 × 21 + 0.
Since the last remainder is zero, the algorithm ends with 21 as the greatest common divisor of 1071 and 462. This agrees with the GCD of 21 found by prime factorization above. In tabular form, the steps are:
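The same computation as a short Python sketch that prints each division step (the inputs 1071 and 462 are the numbers of the example above):

def gcd_steps(a: int, b: int) -> int:
    # Repeated division with remainder; prints each step of the algorithm.
    while b != 0:
        q, r = divmod(a, b)
        print(f"{a} = {q} x {b} + {r}")
        a, b = b, r
    return a

print("gcd =", gcd_steps(1071, 462))
# 1071 = 2 x 462 + 147
# 462 = 3 x 147 + 21
# 147 = 7 x 21 + 0
# gcd = 21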
Visualization
The Euclidean algorithm can be visualized in terms of the tiling analogy given above for the greatest common divisor. Assume that we wish to cover an rectangle with square tiles exactly, where is the larger of the two numbers. We first attempt to tile the rectangle using square tiles; however, this leaves an residual rectangle untiled, where . We then attempt to tile the residual rectangle with square tiles. This leaves a second residual rectangle , which we attempt to tile using square tiles, and so on. The sequence ends when there is no residual rectangle, i.e., when the square tiles cover the previous residual rectangle exactly. The length of the sides of the smallest square tile is the GCD of the dimensions of the original rectangle. For example, the smallest square tile in the adjacent figure is (shown in red), and is the GCD of and , the dimensions of the original rectangle (shown in green).
Euclidean division
At every step , the Euclidean algorithm computes a quotient and remainder from two numbers and
,
where the is non-negative and is strictly less than the absolute value of . The theorem which underlies the definition of the Euclidean division ensures that such a quotient and remainder always exist and are unique.
In Euclid's original version of the algorithm, the quotient and remainder are found by repeated subtraction; that is, is subtracted from repeatedly until the remainder is smaller than . After that and are exchanged and the process is iterated. Euclidean division reduces all the steps between two exchanges into a single step, which is thus more efficient. Moreover, the quotients are not needed, thus one may replace Euclidean division by the modulo operation, which gives only the remainder. Thus the iteration of the Euclidean algorithm becomes simply
.
Implementations
Implementations of the algorithm may be expressed in pseudocode. For example, the division-based version may be programmed as
function gcd(a, b)
while b ≠ 0
t := b
b := a mod b
a := t
return a
At the beginning of the th iteration, the variable holds the latest remainder , whereas the variable holds its predecessor, . The step is equivalent to the above recursion formula . The temporary variable holds the value of while the next remainder is being calculated. At the end of the loop iteration, the variable holds the remainder , whereas the variable holds its predecessor, .
(If negative inputs are allowed, or if the mod function may return negative values, the last line must be replaced with .)
In the subtraction-based version, which was Euclid's original version, the remainder calculation () is replaced by repeated subtraction. Contrary to the division-based version, which works with arbitrary integers as input, the subtraction-based version supposes that the input consists of positive integers and stops when :
function gcd(a, b)
while a ≠ b
if a > b
a := a − b
else
b := b − a
return a
The variables and alternate holding the previous remainders and . Assume that is larger than at the beginning of an iteration; then equals , since . During the loop iteration, is reduced by multiples of the previous remainder until is smaller than . Then is the next remainder . Then is reduced by multiples of until it is again smaller than , giving the next remainder , and so on.
The recursive version is based on the equality of the GCDs of successive remainders and the stopping condition .
function gcd(a, b)
if b = 0
return a
else
return gcd(b, a mod b)
(As above, if negative inputs are allowed, or if the mod function may return negative values, the instruction must be replaced by .)
For illustration, the is calculated from the equivalent . The latter GCD is calculated from the , which in turn is calculated from the .
Method of least absolute remainders
In another version of Euclid's algorithm, the quotient at each step is increased by one if the resulting negative remainder is smaller in magnitude than the typical positive remainder. Previously, the equation
assumed that . However, an alternative negative remainder can be computed:
if or
if .
If is replaced by when , then one gets a variant of the Euclidean algorithm such that
at each step.
Leopold Kronecker has shown that this version requires the fewest steps of any version of Euclid's algorithm. More generally, it has been proven that, for any input numbers a and b, the number of steps is minimal if and only if is chosen so that , where is the golden ratio.
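A Python sketch of this variant, choosing at each step whichever remainder is smaller in magnitude (a minimal illustration, not an optimized routine):

def gcd_least_absolute(a: int, b: int) -> int:
    # At each step replace the pair (a, b) by (b, r), where r is the remainder
    # of least absolute value, i.e. either the usual remainder or its negative counterpart.
    a, b = abs(a), abs(b)
    while b != 0:
        r = a % b
        if r > b - r:          # the negative remainder r - b is smaller in magnitude
            r = r - b
        a, b = b, abs(r)
    return a

print(gcd_least_absolute(1071, 462))   # 21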
Historical development
The Euclidean algorithm is one of the oldest algorithms in common use. It appears in Euclid's Elements (c. 300 BC), specifically in Book 7 (Propositions 1–2) and Book 10 (Propositions 2–3). In Book 7, the algorithm is formulated for integers, whereas in Book 10, it is formulated for lengths of line segments. (In modern usage, one would say it was formulated there for real numbers. But lengths, areas, and volumes, represented as real numbers in modern usage, are not measured in the same units and there is no natural unit of length, area, or volume; the concept of real numbers was unknown at that time.) The latter algorithm is geometrical. The GCD of two lengths and corresponds to the greatest length that measures and evenly; in other words, the lengths and are both integer multiples of the length .
The algorithm was probably not discovered by Euclid, who compiled results from earlier mathematicians in his Elements. The mathematician and historian B. L. van der Waerden suggests that Book VII derives from a textbook on number theory written by mathematicians in the school of Pythagoras. The algorithm was probably known by Eudoxus of Cnidus (about 375 BC). The algorithm may even pre-date Eudoxus, judging from the use of the technical term ἀνθυφαίρεσις (anthyphairesis, reciprocal subtraction) in works by Euclid and Aristotle. Claude Brezinski, following remarks by Pappus of Alexandria, credits the algorithm to Theaetetus (c. 417 – c. 369 BC).
Centuries later, Euclid's algorithm was discovered independently both in India and in China, primarily to solve Diophantine equations that arose in astronomy and in making accurate calendars. In the late 5th century, the Indian mathematician and astronomer Aryabhata described the algorithm as the "pulverizer", perhaps because of its effectiveness in solving Diophantine equations. Although a special case of the Chinese remainder theorem had already been described in the Chinese book Sunzi Suanjing, the general solution was published by Qin Jiushao in his 1247 book Shushu Jiuzhang (數書九章 Mathematical Treatise in Nine Sections). The Euclidean algorithm was first described numerically and popularized in Europe in the second edition of Bachet's Problèmes plaisants et délectables (Pleasant and enjoyable problems, 1624). In Europe, it was likewise used to solve Diophantine equations and in developing continued fractions. The extended Euclidean algorithm was published by the English mathematician Nicholas Saunderson, who attributed it to Roger Cotes as a method for computing continued fractions efficiently.
In the 19th century, the Euclidean algorithm led to the development of new number systems, such as Gaussian integers and Eisenstein integers. In 1815, Carl Gauss used the Euclidean algorithm to demonstrate unique factorization of Gaussian integers, although his work was first published in 1832. Gauss mentioned the algorithm in his Disquisitiones Arithmeticae (published 1801), but only as a method for continued fractions. Peter Gustav Lejeune Dirichlet seems to have been the first to describe the Euclidean algorithm as the basis for much of number theory. Lejeune Dirichlet noted that many results of number theory, such as unique factorization, would hold true for any other system of numbers to which the Euclidean algorithm could be applied. Lejeune Dirichlet's lectures on number theory were edited and extended by Richard Dedekind, who used Euclid's algorithm to study algebraic integers, a new general type of number. For example, Dedekind was the first to prove Fermat's two-square theorem using the unique factorization of Gaussian integers. Dedekind also defined the concept of a Euclidean domain, a number system in which a generalized version of the Euclidean algorithm can be defined (as described below). In the closing decades of the 19th century, the Euclidean algorithm gradually became eclipsed by Dedekind's more general theory of ideals.
Other applications of Euclid's algorithm were developed in the 19th century. In 1829, Charles Sturm showed that the algorithm was useful in the Sturm chain method for counting the real roots of polynomials in any given interval.
The Euclidean algorithm was the first integer relation algorithm, which is a method for finding integer relations between commensurate real numbers. Several novel integer relation algorithms have been developed, such as the algorithm of Helaman Ferguson and R.W. Forcade (1979) and the LLL algorithm.
In 1969, Cole and Davie developed a two-player game based on the Euclidean algorithm, called The Game of Euclid, which has an optimal strategy. The players begin with two piles of and stones. The players take turns removing multiples of the smaller pile from the larger. Thus, if the two piles consist of and stones, where is larger than , the next player can reduce the larger pile from stones to stones, as long as the latter is a nonnegative integer. The winner is the first player to reduce one pile to zero stones.
Mathematical applications
Bézout's identity
Bézout's identity states that the greatest common divisor of two integers and can be represented as a linear sum of the original two numbers and . In other words, it is always possible to find integers and such that .
The integers and can be calculated from the quotients , , etc. by reversing the order of equations in Euclid's algorithm. Beginning with the next-to-last equation, can be expressed in terms of the quotient and the two preceding remainders, and :
.
Those two remainders can be likewise expressed in terms of their quotients and preceding remainders,
and
.
Substituting these formulae for and into the first equation yields as a linear sum of the remainders and . The process of substituting remainders by formulae involving their predecessors can be continued until the original numbers and are reached:
.
After all the remainders , , etc. have been substituted, the final equation expresses as a linear sum of and , so that .
The Euclidean algorithm, and thus Bézout's identity, can be generalized to the context of Euclidean domains.
Principal ideals and related problems
Bézout's identity provides yet another definition of the greatest common divisor of two numbers and . Consider the set of all numbers , where and are any two integers. Since and are both divisible by , every number in the set is divisible by . In other words, every number of the set is an integer multiple of . This is true for every common divisor of and . However, unlike other common divisors, the greatest common divisor is a member of the set; by Bézout's identity, choosing and gives . A smaller common divisor cannot be a member of the set, since every member of the set must be divisible by . Conversely, any multiple of can be obtained by choosing and , where and are the integers of Bézout's identity. This may be seen by multiplying Bézout's identity by m,
.
Therefore, the set of all numbers is equivalent to the set of multiples of . In other words, the set of all possible sums of integer multiples of two numbers ( and ) is equivalent to the set of multiples of . The GCD is said to be the generator of the ideal of and . This GCD definition led to the modern abstract algebraic concepts of a principal ideal (an ideal generated by a single element) and a principal ideal domain (a domain in which every ideal is a principal ideal).
Certain problems can be solved using this result. For example, consider two measuring cups of volume and . By adding/subtracting multiples of the first cup and multiples of the second cup, any volume can be measured out. These volumes are all multiples of .
Extended Euclidean algorithm
The integers and of Bézout's identity can be computed efficiently using the extended Euclidean algorithm. This extension adds two recursive equations to Euclid's algorithm
with the starting values
.
Using this recursion, Bézout's integers and are given by and , where is the step on which the algorithm terminates with .
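A compact Python sketch of the extended algorithm, carrying the two auxiliary sequences alongside the remainders (the variable names are illustrative only):

def extended_gcd(a: int, b: int):
    # Maintain (r, s, t) with the invariant r = s*a + t*b at every step.
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t   # the GCD and the Bezout coefficients

g, s, t = extended_gcd(240, 46)
print(g, s, t, s * 240 + t * 46)   # 2 -9 47 2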
The validity of this approach can be shown by induction. Assume that the recursion formula is correct up to step of the algorithm; in other words, assume that
for all less than . The th step of the algorithm gives the equation
.
Since the recursion formula has been assumed to be correct for and , they may be expressed in terms of the corresponding and variables
.
Rearranging this equation yields the recursion formula for step , as required
.
Matrix method
The integers and can also be found using an equivalent matrix method. The sequence of equations of Euclid's algorithm
can be written as a product of quotient matrices multiplying a two-dimensional remainder vector
Let represent the product of all the quotient matrices
This simplifies the Euclidean algorithm to the form
To express as a linear sum of and , both sides of this equation can be multiplied by the inverse of the matrix . The determinant of equals , since it equals the product of the determinants of the quotient matrices, each of which is negative one. Since the determinant of is never zero, the vector of the final remainders can be solved using the inverse of
Since the top equation gives
,
the two integers of Bézout's identity are and . The matrix method is as efficient as the equivalent recursion, with two multiplications and two additions per step of the Euclidean algorithm.
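A Python sketch of the matrix formulation, accumulating the product of the quotient matrices as plain nested lists and reading the Bézout coefficients from its inverse:

def gcd_matrix(a: int, b: int):
    # Accumulate M = Q1 Q2 ... Qn, each quotient matrix being [[q, 1], [1, 0]],
    # so that (a, b)^T = M (g, 0)^T, where g is the final nonzero remainder.
    m = [[1, 0], [0, 1]]
    while b != 0:
        q, r = divmod(a, b)
        m = [[m[0][0] * q + m[0][1], m[0][0]],
             [m[1][0] * q + m[1][1], m[1][0]]]
        a, b = b, r
    # det(M) is +1 or -1, so the inverse of M has integer entries; its first
    # row gives the Bezout coefficients s, t with g = s*a0 + t*b0.
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    s, t = det * m[1][1], -det * m[0][1]
    return a, s, t

g, s, t = gcd_matrix(1071, 462)
print(g, s, t, s * 1071 + t * 462)   # 21 -3 7 21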
Euclid's lemma and unique factorization
Bézout's identity is essential to many applications of Euclid's algorithm, such as demonstrating the unique factorization of numbers into prime factors. To illustrate this, suppose that a number can be written as a product of two factors and , that is, . If another number also divides but is coprime with , then must divide , by the following argument: If the greatest common divisor of and is , then integers and can be found such that
by Bézout's identity. Multiplying both sides by gives the relation:
Since divides both terms on the right-hand side, it must also divide the left-hand side, . This result is known as Euclid's lemma. Specifically, if a prime number divides , then it must divide at least one factor of . Conversely, if a number is coprime to each of a series of numbers , , ..., , then is also coprime to their product, .
Euclid's lemma suffices to prove that every number has a unique factorization into prime numbers. To see this, assume the contrary, that there are two independent factorizations of into and prime factors, respectively
.
Since each prime divides by assumption, it must also divide one of the factors; since each is prime as well, it must be that . Iteratively dividing by the factors shows that each has an equal counterpart ; the two prime factorizations are identical except for their order. The unique factorization of numbers into primes has many applications in mathematical proofs, as shown below.
Linear Diophantine equations
Diophantine equations are equations in which the solutions are restricted to integers; they are named after the 3rd-century Alexandrian mathematician Diophantus. A typical linear Diophantine equation seeks integers and such that
where , and are given integers. This can be written as an equation for in modular arithmetic:
.
Let be the greatest common divisor of and . Both terms in are divisible by ; therefore, must also be divisible by , or the equation has no solutions. By dividing both sides by , the equation can be reduced to Bezout's identity
,
where and can be found by the extended Euclidean algorithm. This provides one solution to the Diophantine equation, and .
In general, a linear Diophantine equation has no solutions, or an infinite number of solutions. To find the latter, consider two solutions, and , where
or equivalently
.
Therefore, the smallest difference between two solutions in the first unknown is , whereas the smallest difference between two solutions in the second unknown is . Thus, the solutions may be expressed as
.
By allowing to vary over all possible integers, an infinite family of solutions can be generated from a single solution . If the solutions are required to be positive integers , only a finite number of solutions may be possible. This restriction on the acceptable solutions allows some systems of Diophantine equations with more unknowns than equations to have a finite number of solutions; this is impossible for a system of linear equations when the solutions can be any real number (see Underdetermined system).
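A Python sketch combining these steps: one particular solution from the extended algorithm, then the one-parameter family of all solutions (the helper extended_gcd is defined locally for the example):

def extended_gcd(a, b):
    # Returns (g, s, t) with g = gcd(a, b) = s*a + t*b.
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

def solve_diophantine(a, b, c):
    # Solve a*x + b*y = c over the integers, if possible.
    g, s, t = extended_gcd(a, b)
    if c % g != 0:
        return None                       # no integer solutions
    x0, y0 = s * (c // g), t * (c // g)   # one particular solution
    # The full family: x = x0 + k*(b//g), y = y0 - k*(a//g) for any integer k.
    return x0, y0, b // g, a // g

print(solve_diophantine(6, 15, 9))   # (-6, 3, 5, 2): x = -6 + 5k, y = 3 - 2k for every integer k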
Multiplicative inverses and the RSA algorithm
A finite field is a set of numbers with four generalized operations. The operations are called addition, subtraction, multiplication and division and have their usual properties, such as commutativity, associativity and distributivity. An example of a finite field is the set of 13 numbers {0, 1, 2, ..., 12} using modular arithmetic. In this field, the result of any mathematical operation (addition, subtraction, multiplication, or division) is reduced modulo 13; that is, multiples of 13 are added or subtracted until the result is brought within the range 0–12. For example, the result of 5 × 7 = 35 mod 13 = 9. Such finite fields can be defined for any prime p; using more sophisticated definitions, they can also be defined for any power m of a prime p^m. Finite fields are often called Galois fields, and are abbreviated as GF(p) or GF(p^m).
In such a field with m numbers, every nonzero element a has a unique modular multiplicative inverse, a⁻¹, such that a a⁻¹ ≡ 1 mod m. This inverse can be found by solving the congruence equation ax ≡ 1 mod m, or the equivalent linear Diophantine equation

ax + my = 1.

This equation can be solved by the Euclidean algorithm, as described above. Finding multiplicative inverses is an essential step in the RSA algorithm, which is widely used in electronic commerce; specifically, solving this equation determines the integer used to decrypt the message. Although the RSA algorithm uses rings rather than fields, the Euclidean algorithm can still be used to find a multiplicative inverse where one exists. The Euclidean algorithm also has other applications in error-correcting codes; for example, it can be used as an alternative to the Berlekamp–Massey algorithm for decoding BCH and Reed–Solomon codes, which are based on Galois fields.
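A hedged sketch of the inverse computation (illustrative names; RSA itself involves further machinery not shown here):

```python
def mod_inverse(a, m):
    """Inverse of a modulo m via the extended Euclidean algorithm;
    it exists exactly when gcd(a, m) == 1."""
    old_r, r = a, m
    old_s, s = 1, 0          # running coefficients of a
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("a is not invertible modulo m")
    return old_s % m

print(mod_inverse(7, 13))    # 2, since 7 * 2 = 14 ≡ 1 (mod 13)
```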
Chinese remainder theorem
Euclid's algorithm can also be used to solve multiple linear Diophantine equations. Such equations arise in the Chinese remainder theorem, which describes a novel method to represent an integer x. Instead of representing an integer by its digits, it may be represented by its remainders xi modulo a set of N coprime numbers mi:

x ≡ x1 mod m1
x ≡ x2 mod m2
...
x ≡ xN mod mN.

The goal is to determine x from its N remainders xi. The solution is to combine the multiple equations into a single linear Diophantine equation with a much larger modulus M that is the product of all the individual moduli mi, and define Mi as

Mi = M / mi.

Thus, each Mi is the product of all the moduli except mi. The solution depends on finding N new numbers hi such that

Mi hi ≡ 1 mod mi.

With these numbers hi, any integer x can be reconstructed from its remainders xi by the equation

x ≡ (x1 M1 h1 + x2 M2 h2 + ... + xN MN hN) mod M.
Since these numbers hi are the multiplicative inverses of the Mi, they may be found using Euclid's algorithm as described in the previous subsection.
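A minimal sketch of this reconstruction (illustrative; Python's built-in pow computes the inverses hi, with Euclid's algorithm underneath):

```python
from math import prod

def crt(remainders, moduli):
    """Reconstruct x modulo M = m1*m2*...*mN from its remainders x_i,
    for pairwise coprime moduli m_i."""
    M = prod(moduli)
    x = 0
    for x_i, m_i in zip(remainders, moduli):
        M_i = M // m_i
        h_i = pow(M_i, -1, m_i)   # multiplicative inverse of M_i mod m_i
        x += x_i * M_i * h_i
    return x % M

print(crt([2, 3, 2], [3, 5, 7]))  # 23: 23 % 3 == 2, 23 % 5 == 3, 23 % 7 == 2
```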
Stern–Brocot tree
The Euclidean algorithm can be used to arrange the set of all positive rational numbers into an infinite binary search tree, called the Stern–Brocot tree.
The number 1 (expressed as a fraction 1/1) is placed at the root of the tree, and the location of any other number a/b can be found by computing gcd(a,b) using the original form of the Euclidean algorithm, in which each step replaces the larger of the two given numbers by its difference with the smaller number (not its remainder), stopping when two equal numbers are reached. A step of the Euclidean algorithm that replaces the first of the two numbers corresponds to a step in the tree from a node to its right child, and a step that replaces the second of the two numbers corresponds to a step in the tree from a node to its left child. The sequence of steps constructed in this way does not depend on whether a/b is given in lowest terms, and forms a path from the root to a node containing the number a/b. This fact can be used to prove that each positive rational number appears exactly once in this tree.
For example, 3/4 can be found by starting at the root, going to the left once, then to the right twice: the subtraction steps (3, 4) → (3, 1) → (2, 1) → (1, 1) replace the second number once and then the first number twice, giving the moves left, right, right.
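The path can be generated directly with the subtraction form of the algorithm; a small illustrative sketch:

```python
def stern_brocot_path(a, b):
    """Path from the root of the Stern–Brocot tree to a/b: replacing the
    first number is a step to the right child, the second to the left."""
    path = []
    while a != b:
        if a > b:
            a -= b
            path.append("R")
        else:
            b -= a
            path.append("L")
    return path

print(stern_brocot_path(3, 4))  # ['L', 'R', 'R'], matching the example above
```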
The Euclidean algorithm has almost the same relationship to another binary tree on the rational numbers called the Calkin–Wilf tree. The difference is that the path is reversed: instead of producing a path from the root of the tree to a target, it produces a path from the target to the root.
Continued fractions
The Euclidean algorithm has a close relationship with continued fractions. The sequence of equations can be written in the form

a/b = q0 + r0/b
b/r0 = q1 + r1/r0
r0/r1 = q2 + r2/r1
...
rN−2/rN−1 = qN.

The last term on the right-hand side always equals the inverse of the left-hand side of the next equation. Thus, the first two equations may be combined to form

a/b = q0 + 1/(q1 + r1/r0).

The third equation may be used to substitute the denominator term r1/r0, yielding

a/b = q0 + 1/(q1 + 1/(q2 + r2/r1)).

The final ratio of remainders rk/rk−1 can always be replaced using the next equation in the series, up to the final equation. The result is a continued fraction

a/b = q0 + 1/(q1 + 1/(q2 + 1/(... + 1/qN))) = [q0; q1, q2, ..., qN].

In the worked example above, the gcd(1071, 462) was calculated, and the quotients qk were 2, 3 and 7, respectively. Therefore, the fraction 1071/462 may be written

1071/462 = 2 + 1/(3 + 1/7) = [2; 3, 7]
as can be confirmed by calculation.
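The quotients, and hence the continued-fraction terms, fall out of the division steps directly; an illustrative sketch:

```python
def continued_fraction(a, b):
    """The quotients of Euclid's algorithm are the continued-fraction
    terms of a/b."""
    terms = []
    while b:
        q, r = divmod(a, b)
        terms.append(q)
        a, b = b, r
    return terms

print(continued_fraction(1071, 462))  # [2, 3, 7], i.e. 1071/462 = [2; 3, 7]
```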
Factorization algorithms
Calculating a greatest common divisor is an essential step in several integer factorization algorithms, such as Pollard's rho algorithm, Shor's algorithm, Dixon's factorization method and the Lenstra elliptic curve factorization. The Euclidean algorithm may be used to find this GCD efficiently. Continued fraction factorization uses continued fractions, which are determined using Euclid's algorithm.
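As one example of the role the GCD plays, here is a minimal sketch of Pollard's rho method (a simplified variant; production implementations add retries and refined cycle detection):

```python
from math import gcd

def pollard_rho(n, c=1):
    """Find a nontrivial factor of composite n; the gcd of |x - y| with n
    is what exposes the factor. May fail (return None) for a given c."""
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + c) % n        # tortoise: one step
        y = (y * y + c) % n        # hare: two steps
        y = (y * y + c) % n
        d = gcd(abs(x - y), n)
    return d if d != n else None   # d == n means this choice of c failed

print(pollard_rho(8051))  # 97 (8051 = 83 × 97)
```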
Algorithmic efficiency
The computational efficiency of Euclid's algorithm has been studied thoroughly. This efficiency can be described by the number of division steps the algorithm requires, multiplied by the computational expense of each step. The first known analysis of Euclid's algorithm is due to A. A. L. Reynaud in 1811, who showed that the number of division steps on input (u, v) is bounded by v; later he improved this to v/2 + 2. Émile Léger, in 1837, studied the worst case, which occurs when the inputs are consecutive Fibonacci numbers. Then, in 1841, P. J. E. Finck showed that the number of division steps is at most 2 log2 v + 1, and hence that Euclid's algorithm runs in time polynomial in the size of the input. Finck's analysis was refined by Gabriel Lamé in 1844, who showed that the number of steps required for completion is never more than five times the number h of base-10 digits of the smaller number b.
In the uniform cost model (suitable for analyzing the complexity of gcd calculation on numbers that fit into a single machine word), each step of the algorithm takes constant time, and Lamé's analysis implies that the total running time is also O(h). However, in a model of computation suitable for computation with larger numbers, the computational expense of a single remainder computation in the algorithm can be as large as O(h²). In this case the total time for all of the steps of the algorithm can be analyzed using a telescoping series, showing that it is also O(h²). Modern algorithmic techniques based on the Schönhage–Strassen algorithm for fast integer multiplication can be used to speed this up, leading to quasilinear algorithms for the GCD.
Number of steps
The number of steps to calculate the GCD of two natural numbers, a and b, may be denoted by T(a, b). If g is the GCD of a and b, then a = mg and b = ng for two coprime numbers m and n. Then

T(a, b) = T(m, n),
as may be seen by dividing all the steps in the Euclidean algorithm by g. By the same argument, the number of steps remains the same if a and b are multiplied by a common factor w: T(a, b) = T(wa, wb). Therefore, the number of steps T may vary dramatically between neighboring pairs of numbers, such as T(a, b) and T(a, b + 1), depending on the size of the two GCDs.
The recursive nature of the Euclidean algorithm gives another equation

T(a, b) = 1 + T(b, r0),
where T(x, 0) = 0 by assumption.
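The step count is easy to compute directly; a short illustrative check of the scaling property T(a, b) = T(wa, wb):

```python
def T(a, b):
    """Number of division steps of Euclid's algorithm, with T(x, 0) = 0."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps

print(T(1071, 462), T(3 * 1071, 3 * 462))  # 3 3: a common factor w changes nothing
```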
Worst-case
If the Euclidean algorithm requires N steps for a pair of natural numbers a > b > 0, the smallest values of a and b for which this is true are the Fibonacci numbers FN+2 and FN+1, respectively. More precisely, if the Euclidean algorithm requires N steps for the pair a > b, then one has a ≥ FN+2 and b ≥ FN+1. This can be shown by induction. If N = 1, b divides a with no remainder; the smallest natural numbers for which this is true are b = 1 and a = 2, which are F2 and F3, respectively. Now assume that the result holds for all values of N up to M − 1. The first step of the M-step algorithm is a = q0b + r0, and the Euclidean algorithm requires M − 1 steps for the pair b > r0. By the induction hypothesis, one has b ≥ FM+1 and r0 ≥ FM. Therefore, a = q0b + r0 ≥ b + r0 ≥ FM+1 + FM = FM+2,
which is the desired inequality.
This proof, published by Gabriel Lamé in 1844, represents the beginning of computational complexity theory, and also the first practical application of the Fibonacci numbers.
This result suffices to show that the number of steps in Euclid's algorithm can never be more than five times the number of its digits (base 10). For if the algorithm requires N steps, then b is greater than or equal to FN+1, which in turn is greater than or equal to φ^(N−1), where φ is the golden ratio. Since b ≥ φ^(N−1), then N − 1 ≤ logφ b. Since log10 φ > 1/5, (N − 1)/5 < log10 φ · logφ b = log10 b. Thus, N ≤ 5 log10 b. Thus, the Euclidean algorithm always needs less than O(h) divisions, where h is the number of digits in the smaller number b.
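The worst case can also be checked numerically; an illustrative sketch confirming that consecutive Fibonacci numbers attain the bound:

```python
def steps(a, b):
    n = 0
    while b:
        a, b = b, a % b
        n += 1
    return n

F = [0, 1]                  # F[k] is the k-th Fibonacci number
while len(F) < 20:
    F.append(F[-1] + F[-2])

for N in range(1, 8):
    print(N, steps(F[N + 2], F[N + 1]))   # the two columns agree
```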
Average
The average number of steps taken by the Euclidean algorithm has been defined in three different ways. The first definition is the average time T(a) required to calculate the GCD of a given number a and a smaller natural number b chosen with equal probability from the integers 0 to a − 1

T(a) = (1/a) Σ(0 ≤ b < a) T(a, b).
However, since T(a, b) fluctuates dramatically with the GCD of the two numbers, the averaged function T(a) is likewise "noisy".
To reduce this noise, a second average τ(a) is taken over all numbers coprime with a

τ(a) = (1/φ(a)) Σ T(a, b), with the sum over the integers 0 ≤ b < a that are coprime to a.

There are φ(a) coprime integers less than a, where φ is Euler's totient function. This tau average grows smoothly with a

τ(a) = (12/π²) ln 2 · ln a + C + O(a^(−(1/6)+ε))
with the residual error being of order a^(−(1/6)+ε), where ε is infinitesimal. The constant C in this formula is called Porter's constant and equals

C = −(1/2) + (6 ln 2/π²)(4γ − (24/π²)ζ′(2) + 3 ln 2 − 2)

where γ is the Euler–Mascheroni constant and ζ′ is the derivative of the Riemann zeta function. The leading coefficient (12/π²) ln 2 was determined by two independent methods.
Since the first average can be calculated from the tau average by summing over the divisors d of a

T(a) = (1/a) Σ(d | a) φ(d) τ(d),

it can be approximated by the formula

T(a) ≈ C + (12/π²) ln 2 (ln a − Σ(d | a) Λ(d)/d),

where Λ(d) is the Mangoldt function.
A third average Y(n) is defined as the mean number of steps required when both a and b are chosen randomly (with uniform distribution) from 1 to n

Y(n) = (1/n²) Σ(a = 1..n) Σ(b = 1..n) T(a, b) = (1/n) Σ(a = 1..n) T(a).

Substituting the approximate formula for T(a) into this equation yields an estimate for Y(n)

Y(n) ≈ (12/π²) ln 2 · ln n + 0.06.
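These averages can be probed empirically; a hedged sketch (the constant term 0.06 is the one quoted above, and sampling noise is expected):

```python
import math
import random

def steps(a, b):
    n = 0
    while b:
        a, b = b, a % b
        n += 1
    return n

n = 10**6
sample = [steps(random.randrange(1, n), random.randrange(1, n))
          for _ in range(20000)]
empirical = sum(sample) / len(sample)
predicted = (12 / math.pi**2) * math.log(2) * math.log(n) + 0.06
print(empirical, predicted)   # the two values should be close
```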
Computational expense per step
In each step k of the Euclidean algorithm, the quotient qk and remainder rk are computed for a given pair of integers rk−2 and rk−1

rk−2 = qk rk−1 + rk.

The computational expense per step is associated chiefly with finding qk, since the remainder rk can be calculated quickly from rk−2, rk−1, and qk

rk = rk−2 − qk rk−1.

The computational expense of dividing h-bit numbers scales as O(h(ℓ + 1)), where ℓ is the length of the quotient.
For comparison, Euclid's original subtraction-based algorithm can be much slower. A single integer division is equivalent to q subtractions, where q is the quotient. If the ratio of a and b is very large, the quotient is large and many subtractions will be required. On the other hand, it has been shown that the quotients are very likely to be small integers. The probability of a given quotient q is approximately log2(u/(u − 1)) where u = (q + 1)². For illustration, the probability of a quotient of 1, 2, 3, or 4 is roughly 41.5%, 17.0%, 9.3%, and 5.9%, respectively. Since the operation of subtraction is faster than division, particularly for large numbers, the subtraction-based Euclid's algorithm is competitive with the division-based version. This is exploited in the binary version of Euclid's algorithm.
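The quoted percentages follow from the formula above; a short check:

```python
import math

# probability of quotient q: log2(u/(u-1)) with u = (q+1)^2
for q in range(1, 5):
    u = (q + 1) ** 2
    print(q, f"{100 * math.log2(u / (u - 1)):.1f}%")  # 41.5% 17.0% 9.3% 5.9%
```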
Combining the estimated number of steps with the estimated computational expense per step shows that Euclid's algorithm grows quadratically (h²) with the average number of digits h in the initial two numbers a and b. Let h0, h1, ..., hN−1 represent the number of digits in the successive remainders r0, r1, ..., rN−1. Since the number of steps N grows linearly with h, the running time is bounded by

O(Σ hi(hi − hi+1 + 2)) ⊆ O(h Σ (hi − hi+1 + 2)) ⊆ O(h(h0 + 2N)) ⊆ O(h²),

where the sums run over the steps i of the algorithm.
Alternative methods
Euclid's algorithm is widely used in practice, especially for small numbers, due to its simplicity. For comparison, the efficiency of alternatives to Euclid's algorithm may be determined.
One inefficient approach to finding the GCD of two natural numbers a and b is to calculate all their common divisors; the GCD is then the largest common divisor. The common divisors can be found by dividing both numbers by successive integers from 2 to the smaller number b. The number of steps of this approach grows linearly with b, or exponentially in the number of digits. Another inefficient approach is to find the prime factors of one or both numbers. As noted above, the GCD equals the product of the prime factors shared by the two numbers a and b. Present methods for prime factorization are also inefficient; many modern cryptography systems even rely on that inefficiency.
The binary GCD algorithm is an efficient alternative that substitutes division with faster operations by exploiting the binary representation used by computers. Although this alternative also scales like O(h²), it is generally faster than the Euclidean algorithm on real computers. Additional efficiency can be gleaned by examining only the leading digits of the two numbers a and b. The binary algorithm can be extended to other bases (k-ary algorithms), with up to fivefold increases in speed. Lehmer's GCD algorithm uses the same general principle as the binary algorithm to speed up GCD computations in arbitrary bases.
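A minimal sketch of the binary GCD (illustrative; real library versions are far more heavily optimized):

```python
def binary_gcd(a, b):
    """GCD using only shifts, comparisons and subtraction."""
    if a == 0 or b == 0:
        return a or b
    k = ((a | b) & -(a | b)).bit_length() - 1   # shared factors of two
    a >>= (a & -a).bit_length() - 1             # make a odd
    while b:
        b >>= (b & -b).bit_length() - 1         # make b odd
        if a > b:
            a, b = b, a                         # keep a <= b
        b -= a                                  # difference is even (or zero)
    return a << k

print(binary_gcd(1071, 462))  # 21, agreeing with the worked example
```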
A recursive approach for very large integers (with more than 25,000 digits) leads to quasilinear integer GCD algorithms, such as those of Schönhage, and Stehlé and Zimmermann. These algorithms exploit the 2×2 matrix form of the Euclidean algorithm given above. These quasilinear methods generally scale as

O(h (log h)² log log h).
Generalizations
Although the Euclidean algorithm is used to find the greatest common divisor of two natural numbers (positive integers), it may be generalized to the real numbers, and to other mathematical objects, such as polynomials, quadratic integers and Hurwitz quaternions. In the latter cases, the Euclidean algorithm is used to demonstrate the crucial property of unique factorization, i.e., that such numbers can be factored uniquely into irreducible elements, the counterparts of prime numbers. Unique factorization is essential to many proofs of number theory.
Rational and real numbers
Euclid's algorithm can be applied to real numbers, as described by Euclid in Book 10 of his Elements. The goal of the algorithm is to identify a real number g such that two given real numbers, a and b, are integer multiples of it: a = mg and b = ng, where m and n are integers. This identification is equivalent to finding an integer relation among the real numbers a and b; that is, it determines integers s and t such that sa + tb = 0. If such an equation is possible, a and b are called commensurable lengths, otherwise they are incommensurable lengths.
The real-number Euclidean algorithm differs from its integer counterpart in two respects. First, the remainders rk are real numbers, although the quotients qk are integers as before. Second, the algorithm is not guaranteed to end in a finite number of steps. If it does, the fraction a/b is a rational number, i.e., the ratio of two integers

a/b = mg/ng = m/n,

and can be written as a finite continued fraction [q0; q1, q2, ..., qN]. If the algorithm does not stop, the fraction a/b is an irrational number and can be described by an infinite continued fraction [q0; q1, q2, ...]. Examples of infinite continued fractions are the golden ratio φ = [1; 1, 1, ...] and the square root of two, √2 = [1; 2, 2, ...]. The algorithm is unlikely to stop, since almost all ratios a/b of two real numbers are irrational.
An infinite continued fraction may be truncated at a step k, [q0; q1, ..., qk], to yield an approximation to a/b that improves as k is increased. The approximation is described by convergents mk/nk; the numerator and denominators are coprime and obey the recurrence relation

mk = qk mk−1 + mk−2
nk = qk nk−1 + nk−2,

where m−1 = n−2 = 1 and m−2 = n−1 = 0 are the initial values of the recursion. The convergent mk/nk is the best rational number approximation to a/b with denominator nk:

|a/b − mk/nk| < 1/nk².
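The recurrence is easy to run; an illustrative sketch using the golden ratio's continued fraction, whose convergents are ratios of consecutive Fibonacci numbers:

```python
def convergents(quotients):
    """Convergents m_k/n_k from m_k = q_k*m_(k-1) + m_(k-2) and
    n_k = q_k*n_(k-1) + n_(k-2)."""
    m_prev, m = 0, 1   # m_(-2), m_(-1)
    n_prev, n = 1, 0   # n_(-2), n_(-1)
    out = []
    for q in quotients:
        m_prev, m = m, q * m + m_prev
        n_prev, n = n, q * n + n_prev
        out.append((m, n))
    return out

print(convergents([1] * 6))  # [(1, 1), (2, 1), (3, 2), (5, 3), (8, 5), (13, 8)]
```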
Polynomials
Polynomials in a single variable x can be added, multiplied and factored into irreducible polynomials, which are the analogs of the prime numbers for integers. The greatest common divisor polynomial of two polynomials a(x) and b(x) is defined as the product of their shared irreducible polynomials, which can be identified using the Euclidean algorithm. The basic procedure is similar to that for integers. At each step k, a quotient polynomial qk(x) and a remainder polynomial rk(x) are identified to satisfy the recursive equation

rk−2(x) = qk(x) rk−1(x) + rk(x),

where r−2(x) = a(x) and r−1(x) = b(x). Each quotient polynomial is chosen such that each remainder is either zero or has a degree that is smaller than the degree of its predecessor: deg[rk(x)] < deg[rk−1(x)]. Since the degree is a nonnegative integer, and since it decreases with every step, the Euclidean algorithm concludes in a finite number of steps. The last nonzero remainder is the greatest common divisor of the original two polynomials, a(x) and b(x).
For example, consider the following two quartic polynomials, which each factor into two quadratic polynomials

a(x) = x⁴ − 4x³ + 4x² − 3x + 14 = (x² − 5x + 7)(x² + x + 2) and
b(x) = x⁴ + 8x³ + 12x² + 17x + 6 = (x² + 7x + 3)(x² + x + 2).

Dividing a(x) by b(x) yields a remainder r0(x). In the next step, b(x) is divided by r0(x), yielding a remainder r1(x) = x² + x + 2. Finally, dividing r0(x) by r1(x) yields a zero remainder, indicating that r1(x) is the greatest common divisor polynomial of a(x) and b(x), consistent with their factorization.
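A sketch of the polynomial version with exact rational arithmetic (coefficient lists, highest degree first; illustrative, not an optimized implementation):

```python
from fractions import Fraction

def poly_divmod(a, b):
    """Polynomial division: return (quotient, remainder) of a by b."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    q = []
    while len(a) >= len(b):
        factor = a[0] / b[0]
        q.append(factor)
        for j in range(len(b)):
            a[j] -= factor * b[j]
        a.pop(0)                    # leading coefficient is now zero
    while a and a[0] == 0:          # strip leading zeros of the remainder
        a.pop(0)
    return q, a

def poly_gcd(a, b):
    while b:
        _, r = poly_divmod(a, b)
        a, b = b, r
    return [c / a[0] for c in a]    # normalize to a monic polynomial

a = [1, -4, 4, -3, 14]  # x^4 - 4x^3 + 4x^2 - 3x + 14
b = [1, 8, 12, 17, 6]   # x^4 + 8x^3 + 12x^2 + 17x + 6
print([int(c) for c in poly_gcd(a, b)])  # [1, 1, 2], i.e. x^2 + x + 2
```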
Many of the applications described above for integers carry over to polynomials. The Euclidean algorithm can be used to solve linear Diophantine equations and Chinese remainder problems for polynomials; continued fractions of polynomials can also be defined.
The polynomial Euclidean algorithm has other applications, such as Sturm chains, a method for counting the zeros of a polynomial that lie inside a given real interval. This in turn has applications in several areas, such as the Routh–Hurwitz stability criterion in control theory.
Finally, the coefficients of the polynomials need not be drawn from integers, real numbers or even the complex numbers. For example, the coefficients may be drawn from a general field, such as the finite fields described above. The corresponding conclusions about the Euclidean algorithm and its applications hold even for such polynomials.
Gaussian integers
The Gaussian integers are complex numbers of the form α = u + vi, where u and v are ordinary integers and i is the square root of negative one. By defining an analog of the Euclidean algorithm, Gaussian integers can be shown to be uniquely factorizable, by the argument above. This unique factorization is helpful in many applications, such as deriving all Pythagorean triples or proving Fermat's theorem on sums of two squares. In general, the Euclidean algorithm is convenient in such applications, but not essential; for example, the theorems can often be proven by other arguments.
The Euclidean algorithm developed for two Gaussian integers α and β is nearly the same as that for ordinary integers, but differs in two respects. As before, we set r−2 = α and r−1 = β, and the task at each step k is to identify a quotient qk and a remainder rk such that

rk = rk−2 − qk rk−1,

where every remainder is strictly smaller than its predecessor. The first difference is that the quotients and remainders are themselves Gaussian integers, and thus are complex numbers. The quotients qk are generally found by rounding the real and imaginary parts of the exact ratio (such as the complex number α/β) to the nearest integers. The second difference lies in the necessity of defining how one complex remainder can be "smaller" than another. To do this, a norm function f(u + vi) = u² + v² is defined, which converts every Gaussian integer u + vi into an ordinary integer. After each step k of the Euclidean algorithm, the norm of the remainder f(rk) is smaller than the norm of the preceding remainder, f(rk−1). Since the norm is a nonnegative integer and decreases with every step, the Euclidean algorithm for Gaussian integers ends in a finite number of steps. The final nonzero remainder is gcd(α, β), the Gaussian integer of largest norm that divides both α and β; it is unique up to multiplication by a unit, ±1 or ±i.
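A sketch for Gaussian integers represented as pairs (u, v) = u + vi; the quotient is found by rounding the exact ratio, exactly as described above (the names are illustrative):

```python
def norm(z):
    return z[0] ** 2 + z[1] ** 2

def g_mul(u, v):
    return (u[0] * v[0] - u[1] * v[1], u[0] * v[1] + u[1] * v[0])

def g_divmod(a, b):
    """Quotient q by rounding a*conj(b)/N(b) componentwise; the remainder
    a - q*b then has norm strictly smaller than N(b)."""
    n = norm(b)
    num = g_mul(a, (b[0], -b[1]))              # a * conjugate(b)
    q = (round(num[0] / n), round(num[1] / n))
    qb = g_mul(q, b)
    return q, (a[0] - qb[0], a[1] - qb[1])

def g_gcd(a, b):
    while b != (0, 0):
        _, r = g_divmod(a, b)
        a, b = b, r
    return a

# -4+7i and 9+7i share the factor 2+3i (an illustrative pair)
g = g_gcd((-4, 7), (9, 7))
print(g, norm(g))  # (3, -2) 13: the result 3-2i is the unit multiple -i*(2+3i)
```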
Many of the other applications of the Euclidean algorithm carry over to Gaussian integers. For example, it can be used to solve linear Diophantine equations and Chinese remainder problems for Gaussian integers; continued fractions of Gaussian integers can also be defined.
Euclidean domains
A set of elements under two binary operations, denoted as addition and multiplication, is called a Euclidean domain if it forms a commutative ring and, roughly speaking, if a generalized Euclidean algorithm can be performed on them. The two operations of such a ring need not be the addition and multiplication of ordinary arithmetic; rather, they can be more general, such as the operations of a mathematical group or monoid. Nevertheless, these general operations should respect many of the laws governing ordinary arithmetic, such as commutativity, associativity and distributivity.
The generalized Euclidean algorithm requires a Euclidean function, i.e., a mapping f from the domain into the set of nonnegative integers such that, for any two nonzero elements a and b, there exist q and r in the domain such that a = qb + r and f(r) < f(b). Examples of such mappings are the absolute value for integers, the degree for univariate polynomials, and the norm for Gaussian integers above. The basic principle is that each step of the algorithm reduces f inexorably; hence, if f can be reduced only a finite number of times, the algorithm must stop in a finite number of steps. This principle relies on the well-ordering property of the non-negative integers, which asserts that every non-empty set of non-negative integers has a smallest member.
The fundamental theorem of arithmetic applies to any Euclidean domain: Any number from a Euclidean domain can be factored uniquely into irreducible elements. Any Euclidean domain is a unique factorization domain (UFD), although the converse is not true. The Euclidean domains and the UFDs are subclasses of the GCD domains, domains in which a greatest common divisor of two numbers always exists. In other words, a greatest common divisor may exist (for all pairs of elements in a domain), although it may not be possible to find it using a Euclidean algorithm. A Euclidean domain is always a principal ideal domain (PID), an integral domain in which every ideal is a principal ideal. Again, the converse is not true: not every PID is a Euclidean domain.
The unique factorization of Euclidean domains is useful in many applications. For example, the unique factorization of the Gaussian integers is convenient in deriving formulae for all Pythagorean triples and in proving Fermat's theorem on sums of two squares. Unique factorization was also a key element in an attempted proof of Fermat's Last Theorem published in 1847 by Gabriel Lamé, the same mathematician who analyzed the efficiency of Euclid's algorithm, based on a suggestion of Joseph Liouville. Lamé's approach required the unique factorization of numbers of the form x + ωy, where x and y are integers, and ω = e^(2iπ/n) is an nth root of 1, that is, ωⁿ = 1. Although this approach succeeds for some values of n (such as n = 3, the Eisenstein integers), in general such numbers do not factor uniquely. This failure of unique factorization in some cyclotomic fields led Ernst Kummer to the concept of ideal numbers and, later, Richard Dedekind to ideals.
Unique factorization of quadratic integers
The quadratic integer rings are helpful to illustrate Euclidean domains. Quadratic integers are generalizations of the Gaussian integers in which the imaginary unit i is replaced by a number ω. Thus, they have the form u + vω, where u and v are integers and ω has one of two forms, depending on a parameter D. If D does not equal a multiple of four plus one, then

ω = √D.

If, however, D does equal a multiple of four plus one, then

ω = (1 + √D)/2.
If the function f corresponds to a norm function, such as that used to order the Gaussian integers above, then the domain is known as norm-Euclidean. The norm-Euclidean rings of quadratic integers are exactly those where D is one of the values −11, −7, −3, −2, −1, 2, 3, 5, 6, 7, 11, 13, 17, 19, 21, 29, 33, 37, 41, 57, or 73. The cases D = −1 and D = −3 yield the Gaussian integers and Eisenstein integers, respectively.
If f is allowed to be any Euclidean function, then the list of possible values of D for which the domain is Euclidean is not yet known. The first example of a Euclidean domain that was not norm-Euclidean (with D = 69) was published in 1994. In 1973, Weinberger proved that a quadratic integer ring with D > 0 is Euclidean if, and only if, it is a principal ideal domain, provided that the generalized Riemann hypothesis holds.
Noncommutative rings
The Euclidean algorithm may be applied to some noncommutative rings such as the set of Hurwitz quaternions. Let α and β represent two elements from such a ring. They have a common right divisor δ if α = ξδ and β = ηδ for some choice of ξ and η in the ring. Similarly, they have a common left divisor if α = δξ and β = δη for some choice of ξ and η in the ring. Since multiplication is not commutative, there are two versions of the Euclidean algorithm, one for right divisors and one for left divisors. Choosing the right divisors, the first step in finding the right gcd(α, β) by the Euclidean algorithm can be written

ρ0 = α − ψ0β = (ξ − ψ0η)δ,

where ψ0 represents the quotient and ρ0 the remainder. Here the quotient and remainder are chosen so that (if nonzero) the remainder satisfies N(ρ0) < N(β) for a "Euclidean function" N defined analogously to the Euclidean functions of Euclidean domains in the non-commutative case. This equation shows that any common right divisor of α and β is likewise a common divisor of the remainder ρ0. The analogous equation for the left divisors would be

ρ0 = α − βψ0 = δ(ξ − ηψ0).

With either choice, the process is repeated as above until the greatest common right or left divisor is identified. As in the Euclidean domain, the "size" of the remainder ρ0 (formally, its Euclidean function or "norm") must be strictly smaller than that of β, and there must be only a finite number of possible sizes for ρ0, so that the algorithm is guaranteed to terminate.
Many results for the GCD carry over to noncommutative numbers. For example, Bézout's identity states that the right gcd(α, β) can be expressed as a linear combination of α and β. In other words, there are numbers σ and τ such that

Γright = σα + τβ.

The analogous identity for the left GCD is nearly the same:

Γleft = ασ + βτ.
Bézout's identity can be used to solve Diophantine equations. For instance, one of the standard proofs of Lagrange's four-square theorem, that every positive integer can be represented as a sum of four squares, is based on quaternion GCDs in this way.
| Mathematics | Basics | null |
10412 | https://en.wikipedia.org/wiki/Elementary%20function | Elementary function | In mathematics, an elementary function is a function of a single variable (typically real or complex) that is defined as taking sums, products, roots and compositions of finitely many polynomial, rational, trigonometric, hyperbolic, and exponential functions, and their inverses (e.g., arcsin, log, or x^(1/n)).
All elementary functions are continuous on their domains.
Elementary functions were introduced by Joseph Liouville in a series of papers from 1833 to 1841. An algebraic treatment of elementary functions was started by Joseph Fels Ritt in the 1930s. Many textbooks and dictionaries do not give a precise definition of the elementary functions, and mathematicians differ on it.
Examples
Basic examples
Elementary functions of a single variable include:
Constant functions: 2, π, e, etc.
Rational powers of x: x, x², x^(1/2), etc.
Exponential functions: e^x, a^x
Logarithms: ln x, log_a x
Trigonometric functions: sin x, cos x, tan x, etc.
Inverse trigonometric functions: arcsin x, arccos x, etc.
Hyperbolic functions: sinh x, cosh x, etc.
Inverse hyperbolic functions: arsinh x, arcosh x, etc.
All functions obtained by adding, subtracting, multiplying or dividing a finite number of any of the previous functions
All functions obtained by root extraction of a polynomial with coefficients in elementary functions
All functions obtained by composing a finite number of any of the previously listed functions
Certain elementary functions of a single complex variable z, such as √z and log z, may be multivalued. Additionally, certain classes of functions may be obtained by others using the final two rules. For example, the exponential function composed with addition, subtraction, and division provides the hyperbolic functions, while initial composition with iz instead provides the trigonometric functions.
Composite examples
Examples of elementary functions include:
Addition, e.g. (x + 1)
Multiplication, e.g. (2x)
Polynomial functions
e^(tan x)/(1 + x²) · sin(√(1 + (ln x)²))
−i ln(x + i√(1 − x²))
The last function is equal to arccos x, the inverse cosine, in the entire complex plane.
All monomials, polynomials, rational functions and algebraic functions are elementary.
The absolute value function, for real x, is also elementary as it can be expressed as the composition of a power and root of x: |x| = √(x²).
Non-elementary functions
Many mathematicians exclude non-analytic functions such as the absolute value function or discontinuous functions such as the step function, but others allow them. Some have proposed extending the set to include, for example, the Lambert W function.
Some examples of functions that are not elementary:
tetration
the gamma function
non-elementary Liouvillian functions, including
the exponential integral (Ei), logarithmic integral (Li or li) and Fresnel integrals (S and C).
the error function; the fact that it is not elementary may not be immediately obvious, but can be proven using the Risch algorithm.
other nonelementary integrals, including the Dirichlet integral and elliptic integral.
Closure
It follows directly from the definition that the set of elementary functions is closed under arithmetic operations, root extraction and composition. The elementary functions are closed under differentiation. They are not closed under limits and infinite sums. Importantly, the elementary functions are not closed under integration, as shown by Liouville's theorem; see nonelementary integral. The Liouvillian functions are defined as the elementary functions and, recursively, the integrals of the Liouvillian functions.
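Both closure facts can be observed with a computer algebra system; an illustrative check using the sympy library (assumed available):

```python
from sympy import symbols, exp, diff, integrate

x = symbols('x')
f = exp(-x**2)

print(diff(f, x))       # -2*x*exp(-x**2): differentiation stays elementary
print(integrate(f, x))  # sqrt(pi)*erf(x)/2: the integral is not elementary
```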
Differential algebra
The mathematical definition of an elementary function, or a function in elementary form, is considered in the context of differential algebra. A differential algebra is an algebra with the extra operation of derivation (algebraic version of differentiation). Using the derivation operation new equations can be written and their solutions used in extensions of the algebra. By starting with the field of rational functions, two special types of transcendental extensions (the logarithm and the exponential) can be added to the field building a tower containing elementary functions.
A differential field F is a field F0 (rational functions over the rationals Q for example) together with a derivation map u → ∂u. (Here ∂u is a new function. Sometimes the notation u′ is used.) The derivation captures the properties of differentiation, so that for any two elements of the base field, the derivation is linear

∂(u + v) = ∂u + ∂v

and satisfies the Leibniz product rule

∂(uv) = u ∂v + v ∂u.
An element h is a constant if ∂h = 0. If the base field is over the rationals, care must be taken when extending the field to add the needed transcendental constants.
A function u of a differential extension F[u] of a differential field F is an elementary function over F if the function u
is algebraic over F, or
is an exponential, that is, ∂u = u ∂a for a ∈ F, or
is a logarithm, that is, ∂u = ∂a / a for a ∈ F.
(see also Liouville's theorem)
| Mathematics | Specific functions | null |
10500 | https://en.wikipedia.org/wiki/Earless%20seal | Earless seal | The earless seals, phocids, or true seals are one of the three main groups of mammals within the seal lineage, Pinnipedia. All true seals are members of the family Phocidae. They are sometimes called crawling seals to distinguish them from the fur seals and sea lions of the family Otariidae. Seals live in the oceans of both hemispheres and, with the exception of the more tropical monk seals, are mostly confined to polar, subpolar, and temperate climates. The Baikal seal is the only species of exclusively freshwater seal.
Taxonomy and evolution
Evolution
The earliest known fossil earless seal is Noriphoca gaudini from the late Oligocene or earliest Miocene (Aquitanian) of Italy. Other early fossil phocids date from the mid-Miocene, 15 million years ago in the north Atlantic. Until recently, many researchers believed that phocids evolved separately from otariids and odobenids; and that they evolved from otter-like animals, such as Potamotherium, which inhabited European freshwater lakes. Recent evidence strongly suggests a monophyletic origin for all pinnipeds from a single ancestor, possibly Enaliarctos, most closely related to the mustelids and bears.
Monk seals and elephant seals were previously believed to have first entered the Pacific through the open straits between North and South America, with the Antarctic true seals either using the same route or travelling down the west coast of Africa. It is now thought that the monk seals, elephant seals, and Antarctic seals all evolved in the southern hemisphere, and likely dispersed to their current distributions from more southern latitudes.
Taxonomy
In the 1980s and 1990s, morphological phylogenetic analysis of the phocids led to new conclusions about the interrelatedness of the various genera. More recent molecular phylogenetic analyses have confirmed the monophyly of the two phocid subfamilies (Phocinae and Monachinae). The Monachinae (known as the "southern" seals), is composed of three tribes; the Lobodontini, Miroungini, and Monachini. The four Antarctic genera Hydrurga, Leptonychotes, Lobodon, and Ommatophoca are part of the tribe Lobodontini. Tribe Miroungini is composed of the elephant seals. The Monk seals (Monachus and Neomonachus) are all part of the tribe Monachini. Likewise, subfamily Phocinae (the "northern" seals) also includes three tribes; Erignathini (Erignathus), Cystophorini (Cystophora), and Phocini (all other phocines). More recently, five species have been split off from Phoca, forming three additional genera.
Alternatively, the three monachine tribes have been elevated to familial status, in which case the elephant seals and the Antarctic seals would be more closely related to the phocines.
Extant genera
Biology
External anatomy
Adult phocids vary in size from the ringed seal, the smallest, to the southern elephant seal, which is the largest member of the order Carnivora. Phocids have fewer teeth than land-based members of the Carnivora, although they retain powerful canines. Some species lack molars altogether.
While otariids are known for speed and maneuverability, phocids are known for efficient, economical movement. This allows most phocids to forage far from land to exploit prey resources, while otariids are tied to rich upwelling zones close to breeding sites. Phocids swim by sideways movements of their bodies, using their hind flippers to fullest effect. Their fore flippers are used primarily for steering, while their hind flippers are bound to the pelvis in such a way that they cannot bring them under their bodies to walk on them. They are more streamlined than fur seals and sea lions, so they can swim more effectively over long distances. However, because they cannot turn their hind flippers downward, they are very clumsy on land, having to wriggle with their front flippers and abdominal muscles.
Phocid respiratory and circulatory systems are adapted to allow diving to considerable depths, and they can spend a long time underwater between breaths. Air is forced from the lungs during a dive and into the upper respiratory passages, where gases cannot easily be absorbed into the bloodstream. This helps protect the seal from the bends. The middle ear is also lined with blood sinuses that inflate during diving, helping to maintain a constant pressure.
Phocids are more specialized for aquatic life than otariids. They lack external ears and have sleek, streamlined bodies. Retractable nipples, internal testicles, and an internal penile sheath provide further streamlining. A smooth layer of blubber lies underneath the skin. Phocids are able to divert blood flow to this layer to help control their temperatures.
Communication
Unlike otariids, true seals do not communicate by 'barking'. Instead, they communicate by slapping the water and grunting.
Reproduction
Phocids spend most of their time at sea, although they return to land or pack ice to breed and give birth. Pregnant females spend long periods foraging at sea, building up fat reserves, and then return to the breeding site to use their stored energy to nurse pups. However, the common seal displays a reproductive strategy similar to that used by otariids, in which the mother makes short foraging trips between nursing bouts.
Because a phocid mother's feeding grounds are often hundreds of kilometers from the breeding site, she must fast while lactating. This combination of fasting with lactation requires the mother to provide large amounts of energy to her pup at a time when she is not eating (and often, not drinking). Mothers must supply their own metabolic needs while nursing. This is a miniature version of the humpback whales' strategy, which involves fasting during their months-long migration from arctic feeding areas to tropical breeding/nursing areas and back.
Phocids produce thick, fat-rich milk that allows them to provide their pups with large amounts of energy in a short period. This allows the mother to return to the sea in time to replenish her reserves. Lactation ranges from five to seven weeks in the monk seal to just three to five days in the hooded seal. The mother ends nursing by leaving her pup at the breeding site to search for food (pups continue to nurse if given the opportunity). "Milk stealers" that suckle from unrelated, sleeping females are not uncommon; this often results in the death of the mother's pup, since a female can only feed one pup.
Growth and maturation
The pup's diet is so high in calories that it builds up a fat store. Before the pup is ready to forage, the mother abandons it, and the pup consumes its own fat for weeks or even months while it matures. Seals, like all marine mammals, need time to develop the oxygen stores, swimming muscles, and neural pathways necessary for effective diving and foraging. Seal pups typically eat no food and drink no water during this period, although some polar species eat snow. The postweaning fast ranges from two weeks in the hooded seal to 9–12 weeks in the northern elephant seal. The physiological and behavioral adaptations that allow phocid pups to endure these remarkable fasts, which are among the longest for any mammal, remain an area of active research.
Feeding strategy
Phocids make use of at least four different feeding strategies: suction feeding, grip and tear feeding, filter feeding, and pierce feeding. Each of these feeding strategies is aided by a specialized skull, mandible, and tooth morphology. However, despite morphological specialization, most phocids are opportunistic and employ multiple strategies to capture and eat prey. For example, the leopard seal, Hydrurga leptonyx, uses grip and tear feeding to prey on penguins, suction feeding to consume small fish, and filter feeding to catch krill.
| Biology and health sciences | Pinnipeds | null |
10511 | https://en.wikipedia.org/wiki/Epilepsy | Epilepsy | Epilepsy is a group of non-communicable neurological disorders characterized by recurrent epileptic seizures. An epileptic seizure is the clinical manifestation of an abnormal, excessive, and synchronized electrical discharge in the neurons. The occurrence of two or more unprovoked seizures defines epilepsy. The occurrence of just one seizure may warrant the definition (set out by the International League Against Epilepsy) in a more clinical usage where the likelihood of recurrence can be judged in advance. Epileptic seizures can vary from brief and nearly undetectable periods to long periods of vigorous shaking due to abnormal electrical activity in the brain. These episodes can result in physical injuries, either directly, such as broken bones, or through causing accidents. In epilepsy, seizures tend to recur and may have no detectable underlying cause. Isolated seizures that are provoked by a specific cause such as poisoning are not deemed to represent epilepsy. People with epilepsy may be treated differently in various areas of the world and experience varying degrees of social stigma due to the alarming nature of their symptoms.
The underlying mechanism of an epileptic seizure is excessive and abnormal neuronal activity in the cortex of the brain, which can be observed in the electroencephalogram (EEG) of an individual. The reason this occurs in most cases of epilepsy is unknown (cryptogenic); some cases occur as the result of brain injury, stroke, brain tumors, infections of the brain, or birth defects through a process known as epileptogenesis. Known genetic mutations are directly linked to a small proportion of cases. The diagnosis involves ruling out other conditions that might cause similar symptoms, such as fainting, and determining if another cause of seizures is present, such as alcohol withdrawal or electrolyte problems. This may be partly done by imaging the brain and performing blood tests. Epilepsy can often be confirmed with an EEG, but a normal reading does not rule out the condition.
Epilepsy that occurs as a result of other issues may be preventable. Seizures are controllable with medication in about 69% of cases; inexpensive anti-seizure medications are often available. In those whose seizures do not respond to medication; surgery, neurostimulation or dietary changes may be considered. Not all cases of epilepsy are lifelong, and many people improve to the point that treatment is no longer needed.
About 51 million people have epilepsy. Nearly 80% of cases occur in the developing world. In 2021, it resulted in 140,000 deaths, an increase from 125,000 in 1990. Epilepsy is more common in children and older people. In the developed world, onset of new cases occurs most frequently in babies and the elderly. In the developing world, onset is more common at the extremes of age – in younger children and in older children and young adults due to differences in the frequency of the underlying causes. About 5–10% of people will have an unprovoked seizure by the age of 80. The chance of experiencing a second seizure within two years after the first is around 40%. In many areas of the world, those with epilepsy either have restrictions placed on their ability to drive or are not permitted to drive until they are free of seizures for a specific length of time. The word epilepsy is from Ancient Greek ἐπιλαμβάνειν, 'to seize, possess, or afflict'.
Signs and symptoms
Epilepsy is characterized by a long-term risk of recurrent epileptic seizures. These seizures may present in several ways depending on the parts of the brain involved and the person's age.
Seizures
The most common type (60%) of seizures are convulsive, which involve involuntary muscle contractions. Of these, one-third begin as generalized seizures from the start, affecting both hemispheres of the brain and impairing consciousness. Two-thirds begin as focal seizures (which affect one hemisphere of the brain), which may progress to generalized seizures. The remaining 40% of seizures are non-convulsive. An example of this type is the absence seizure, which presents as a decreased level of consciousness and usually lasts about 10 seconds.
Certain experiences, known as auras, often precede focal seizures. The seizures can include sensory (visual, hearing, or smell), psychic, autonomic, and motor phenomena depending on which part of the brain is involved. Muscle jerks may start in a specific muscle group and spread to surrounding muscle groups, in which case it is known as a Jacksonian march. Automatisms may occur, which are non-consciously generated activities and mostly simple repetitive movements like smacking the lips, or more complex activities such as attempts to pick up something.
There are six main types of generalized seizures:
tonic-clonic,
tonic,
clonic,
myoclonic,
absence, and
atonic seizures.
They all involve loss of consciousness and typically happen without warning.
Tonic-clonic seizures occur with a contraction of the limbs followed by their extension and arching of the back which lasts 10–30 seconds (the tonic phase). A cry may be heard due to contraction of the chest muscles, followed by a shaking of the limbs in unison (clonic phase). Tonic seizures produce constant contractions of the muscles. A person often turns blue as breathing is stopped. In clonic seizures there is shaking of the limbs in unison. After the shaking has stopped it may take 10–30 minutes for the person to return to normal; this period is called the "postictal state" or "postictal phase." Loss of bowel or bladder control may occur during a seizure. People experiencing a seizure may bite their tongue, either the tip or on the sides; in tonic-clonic seizures, bites to the sides are more common. Tongue bites are also relatively common in psychogenic non-epileptic seizures. Psychogenic non-epileptic seizures are seizure-like behavior without an associated synchronised electrical discharge on EEG and are considered a dissociative disorder.
Myoclonic seizures involve very brief muscle spasms in either a few areas or all over. These sometimes cause the person to fall, which can cause injury. Absence seizures can be subtle with only a slight turn of the head or eye blinking with impaired consciousness; typically, the person does not fall over and returns to normal right after it ends. Atonic seizures involve losing muscle activity for greater than one second, typically occurring on both sides of the body. Rarer seizure types can cause involuntary unnatural laughter (gelastic), crying (dacrystic), or more complex experiences such as déjà vu.
About 6% of those with epilepsy have seizures that are often triggered by specific events and are known as reflex seizures. Those with reflex epilepsy have seizures that are only triggered by specific stimuli. Common triggers include flashing lights and sudden noises. In certain types of epilepsy, seizures happen more often during sleep, and in other types they occur almost only when sleeping. In 2017, the International League Against Epilepsy published new uniform guidelines for the classification of seizures as well as epilepsies along with their cause and comorbidities.
Seizure clusters
People with epilepsy may experience seizure clusters which may be broadly defined as an acute deterioration in seizure control. The prevalence of seizure clusters is uncertain given that studies have used different definitions to define them. However, estimates suggest that the prevalence may range from 5% to 50% of people with epilepsy. People with refractory epilepsy who have a high seizure frequency are at the greatest risk for having seizure clusters. Seizure clusters are associated with increased healthcare use, worse quality of life, impaired psychosocial functioning, and possibly increased mortality. Benzodiazepines are used as an acute treatment for seizure clusters.
Post-ictal
After the active portion of a seizure (the ictal state) there is typically a period of recovery during which there is confusion, referred to as the postictal period, before a normal level of consciousness returns. It usually lasts 3 to 15 minutes but may last for hours. Other common symptoms include feeling tired, headache, difficulty speaking, and abnormal behavior. Psychosis after a seizure is relatively common, occurring in 6–10% of people. Often people do not remember what happened during this time. Localized weakness, known as Todd's paralysis, may also occur after a focal seizure. It typically lasts for seconds to minutes but may rarely last for a day or two.
Psychosocial
Epilepsy can have adverse effects on social and psychological well-being. These effects may include social isolation, stigmatization, or disability. They may result in lower educational achievement and worse employment outcomes. Learning disabilities are common in those with the condition, and especially among children with epilepsy. The stigma of epilepsy can also affect the families of those with the disorder.
Certain disorders occur more often in people with epilepsy, depending partly on the epilepsy syndrome present. These include depression, anxiety, obsessive–compulsive disorder (OCD), and migraine. Attention deficit hyperactivity disorder (ADHD) affects three to five times more children with epilepsy than children without the condition. ADHD and epilepsy have significant consequences on a child's behavioral, learning, and social development. Epilepsy is also more common in children with autism.
Approximately one in three people with epilepsy have a lifetime history of a psychiatric disorder. There are believed to be multiple causes for this including pathophysiological changes related to the epilepsy itself as well as adverse experiences related to living with epilepsy (e.g., stigma, discrimination). In addition, it is thought that the relationship between epilepsy and psychiatric disorders is not unilateral but rather bidirectional. For example, people with depression have an increased risk for developing new-onset epilepsy.
The presence of comorbid depression or anxiety in people with epilepsy is associated with a poorer quality of life, increased mortality, increased healthcare use and a worse response to treatment (including surgical). Anxiety disorders and depression may explain more variability in quality of life than seizure type or frequency. There is evidence that both depression and anxiety disorders are underdiagnosed and undertreated in people with epilepsy.
Causes
Epilepsy can have both genetic and acquired causes, with the interaction of these factors in many cases. Established acquired causes include serious brain trauma, stroke, tumours, and brain problems resulting from a previous infection. In about 60% of cases, the cause is unknown. Epilepsies caused by genetic, congenital, or developmental conditions are more common among younger people, while brain tumors and strokes are more likely in older people.
Seizures may also occur as a consequence of other health problems; if they occur right around a specific cause, such as a stroke, head injury, toxic ingestion, or metabolic problem, they are known as acute symptomatic seizures and are in the broader classification of seizure-related disorders rather than epilepsy itself.
Genetics
Genetics is believed to be involved in the majority of cases, either directly or indirectly. Some epilepsies are due to a single gene defect (1–2%); most are due to the interaction of multiple genes and environmental factors. Each of the single gene defects is rare, with more than 200 in all described. Most genes involved affect ion channels, either directly or indirectly. These include genes for ion channels, enzymes, GABA, and G protein-coupled receptors.
In identical twins, if one is affected, there is a 50–60% chance that the other will also be affected. In non-identical twins, the risk is 15%. These risks are greater in those with generalized rather than focal seizures. If both twins are affected, most of the time they have the same epileptic syndrome (70–90%). Other close relatives of a person with epilepsy have a risk five times that of the general population. Between 1 and 10% of those with Down syndrome and 90% of those with Angelman syndrome have epilepsy.
Phakomatoses
Phakomatoses, also known as neurocutaneous disorders, are a group of multisystemic diseases that most prominently affect the skin and central nervous system. They are caused by defective development of the embryonic ectodermal tissue that is most often due to a single genetic mutation. The brain, as well as other neural tissue and the skin, are all derived from the ectoderm and thus defective development may result in epilepsy as well as other manifestations such as autism and intellectual disability. Some types of phakomatoses such as tuberous sclerosis complex and Sturge-Weber syndrome have a higher prevalence of epilepsy relative to others such as neurofibromatosis type 1.
Tuberous sclerosis complex is an autosomal dominant disorder that is caused by mutations in either the TSC1 or TSC2 gene and it affects approximately 1 in 6,000–10,000 live births. These mutations result in the upregulation of the mechanistic target of rapamycin (mTOR) pathway which leads to the growth of tumors in many organs including the brain, skin, heart, eyes and kidneys. In addition, abnormal mTOR activity is believed to alter neural excitability. The prevalence of epilepsy is estimated to be 80-90%. The majority of cases of epilepsy present within the first 3 years of life and are medically refractory. Relatively recent developments for the treatment of epilepsy in people with TSC include mTOR inhibitors, cannabidiol and vigabatrin. Epilepsy surgery is often pursued.
Sturge-Weber syndrome is caused by an activating somatic mutation in the GNAQ gene and it affects approximately 1 in 20,000–50,000 live births. The mutation results in vascular malformations affecting the brain, skin and eyes. The typical presentation includes a facial port-wine birthmark, ocular angiomas and cerebral vascular malformations which are most often unilateral but are bilateral in 15% of cases. The prevalence of epilepsy is 75-100% and is higher in those with bilateral involvement. Seizures typically occur within the first two years of life and are refractory in nearly half of cases. However, high rates of seizure freedom with surgery have been reported in as many as 83%.
Neurofibromatosis type 1 is the most common phakomatoses and occurs in approximately 1 in 3,000 live births. It is caused by autosomal dominant mutations in the Neurofibromin 1 gene. Clinical manifestations are variable but may include hyperpigmented skin marks, hamartomas of the iris called Lisch nodules, neurofibromas, optic pathway gliomas and cognitive impairment. The prevalence of epilepsy is estimated to be 4–7%. Seizures are typically easier to control with anti-seizure medications relative to other phakomatoses but in some refractory cases surgery may need to be pursued.
Acquired
Epilepsy may occur as a result of several other conditions, including tumors, strokes, head trauma, previous infections of the central nervous system, genetic abnormalities, and as a result of brain damage around the time of birth. Of those with brain tumors, almost 30% have epilepsy, making them the cause of about 4% of cases. The risk is greatest for tumors in the temporal lobe and those that grow slowly. Other mass lesions such as cerebral cavernous malformations and arteriovenous malformations have risks as high as 40–60%. Of those who have had a stroke, 6–10% develop epilepsy. Risk factors for post-stroke epilepsy include stroke severity, cortical involvement, hemorrhage and early seizures. Between 6 and 20% of epilepsy is believed to be due to head trauma. Mild brain injury increases the risk about two-fold while severe brain injury increases the risk seven-fold. In those who have experienced a high-powered gunshot wound to the head, the risk is about 50%.
Some evidence links epilepsy and celiac disease and non-celiac gluten sensitivity, while other evidence does not. There appears to be a specific syndrome that includes coeliac disease, epilepsy, and calcifications in the brain. A 2012 review estimates that between 1% and 6% of people with epilepsy have coeliac disease while 1% of the general population has the condition.
The risk of epilepsy following meningitis is less than 10%; it more commonly causes seizures during the infection itself. In herpes simplex encephalitis the risk of a seizure is around 50%, with a high risk of epilepsy following (up to 25%). A form of infection with the pork tapeworm (cysticercosis) in the brain is known as neurocysticercosis, and is the cause of up to half of epilepsy cases in areas of the world where the parasite is common. Epilepsy may also occur after other brain infections such as cerebral malaria, toxoplasmosis, and toxocariasis. Chronic alcohol use increases the risk of epilepsy: those who drink six units of alcohol per day have a 2.5-fold increase in risk. Other risks include Alzheimer's disease, multiple sclerosis, and autoimmune encephalitis. Getting vaccinated does not increase the risk of epilepsy. Malnutrition is a risk factor seen mostly in the developing world, although it is unclear if it is a direct cause or an association. People with cerebral palsy have an increased risk of epilepsy, with half of people with spastic quadriplegia and spastic hemiplegia having the condition.
Mechanism
Normally, brain electrical activity is non-synchronous: large numbers of neurons do not fire at the same time, but rather fire in order as signals travel through the brain. Neuron activity is regulated by various factors both within the cell and in the cellular environment. Factors within the neuron include the type, number and distribution of ion channels, changes to receptors, and changes of gene expression. Factors around the neuron include ion concentrations, synaptic plasticity, and regulation of transmitter breakdown by glial cells.
Epilepsy
The exact mechanism of epilepsy is unknown, although a little is known about its cellular and network mechanisms. It is not known under which circumstances the brain shifts into the activity of a seizure, with its excessive synchronization.
In epilepsy, the resistance of excitatory neurons to firing during the refractory period is decreased. This may occur due to changes in ion channels or to inhibitory neurons not functioning properly. This then results in a specific area from which seizures may develop, known as a "seizure focus". Another mechanism of epilepsy may be the up-regulation of excitatory circuits or down-regulation of inhibitory circuits following an injury to the brain. These secondary epilepsies occur through processes known as epileptogenesis. Failure of the blood–brain barrier may also be a causal mechanism, as it would allow substances in the blood to enter the brain.
Seizures
There is evidence that epileptic seizures are usually not a random event. Seizures are often brought on by factors (also known as triggers) such as stress, excessive alcohol use, flickering light, or a lack of sleep, among others. The term seizure threshold is used to indicate the amount of stimulus necessary to bring about a seizure; this threshold is lowered in epilepsy.
In epileptic seizures a group of neurons begin firing in an abnormal, excessive, and synchronized manner. This results in a wave of depolarization known as a paroxysmal depolarizing shift. Normally, after an excitatory neuron fires it becomes more resistant to firing for a period of time. This is due in part to the effect of inhibitory neurons, electrical changes within the excitatory neuron, and the negative effects of adenosine.
Focal seizures begin in one area of the brain while generalized seizures begin in both hemispheres. Some types of seizures may change brain structure, while others appear to have little effect. Gliosis, neuronal loss, and atrophy of specific areas of the brain are linked to epilepsy but it is unclear if epilepsy causes these changes or if these changes result in epilepsy.
The seizures can be described on different scales, from the cellular level to the whole brain. There are several concomitant factors, which on different scales can "drive" the brain into pathological states and trigger a seizure.
Diagnosis
The diagnosis of epilepsy is typically made based on observation of the seizure onset and the underlying cause. An electroencephalogram (EEG) to look for abnormal patterns of brain waves and neuroimaging (CT scan or MRI) to look at the structure of the brain are also usually part of the initial investigations. While figuring out a specific epileptic syndrome is often attempted, it is not always possible. Video and EEG monitoring may be useful in difficult cases.
Definition
Epilepsy is a disorder of the brain defined by any of the following conditions:
At least two unprovoked (or reflex) seizures occurring more than 24 hours apart
One unprovoked (or reflex) seizure and a probability of further seizures similar to the general recurrence risk (at least 60%) after two unprovoked seizures, occurring over the next 10 years
Diagnosis of an epilepsy syndrome
Furthermore, epilepsy is considered to be resolved for individuals who had an age-dependent epilepsy syndrome but are now past that age or those who have remained seizure-free for the last 10 years, with no seizure medicines for the last 5 years.
This 2014 definition of the International League Against Epilepsy (ILAE) is a clarification of the ILAE 2005 conceptual definition, according to which epilepsy is "a disorder of the brain characterized by an enduring predisposition to generate epileptic seizures and by the neurobiologic, cognitive, psychological, and social consequences of this condition. The definition of epilepsy requires the occurrence of at least one epileptic seizure."
It is, therefore, possible to outgrow epilepsy or to undergo treatment that causes epilepsy to be resolved, but with no guarantee that it will not return. In the definition, epilepsy is now called a disease, rather than a disorder. This was a decision of the executive committee of the ILAE, taken because the word disorder, while perhaps having less stigma than does disease, also does not express the degree of seriousness that epilepsy deserves.
The definition is practical in nature and is designed for clinical use. In particular, it aims to clarify when an "enduring predisposition", per the 2005 conceptual definition, is present. Researchers, statistically minded epidemiologists, and other specialized groups may choose to use the older definition or a definition of their own devising. The ILAE considers doing so perfectly allowable, so long as it is clear what definition is being used.
Applying the ILAE definition to a single seizure requires an understanding of what constitutes an enduring predisposition to the generation of epileptic seizures. The WHO, for instance, chooses to simply use the traditional definition of two unprovoked seizures.
Classification
In contrast to the classification of seizures, which focuses on what happens during a seizure, the classification of epilepsies focuses on the underlying causes. When a person is admitted to hospital after an epileptic seizure, the diagnostic workup preferably results in the seizure itself being classified (e.g. tonic-clonic) and in the underlying disease being identified (e.g. hippocampal sclerosis). The name of the diagnosis finally made depends on the available diagnostic results, the applied definitions and classifications (of seizures and epilepsies), and their respective terminology.
The International League Against Epilepsy (ILAE) provided a classification of the epilepsies and epileptic syndromes in 1989 as follows:
Localization-related epilepsies and syndromes
  Unknown cause (e.g. benign childhood epilepsy with centrotemporal spikes)
  Symptomatic/cryptogenic (e.g. temporal lobe epilepsy)
Generalized
  Unknown cause (e.g. childhood absence epilepsy)
  Cryptogenic or symptomatic (e.g. Lennox-Gastaut syndrome)
  Symptomatic (e.g. early infantile epileptic encephalopathy with burst suppression)
Epilepsies and syndromes undetermined whether focal or generalized
  With both generalized and focal seizures (e.g. epilepsy with continuous spike-waves during slow wave sleep)
Special syndromes (with situation-related seizures)
This classification was widely accepted but has also been criticized mainly because the underlying causes of epilepsy (which are a major determinant of clinical course and prognosis) were not covered in detail. In 2010 the ILAE Commission for Classification of the Epilepsies addressed this issue and divided epilepsies into three categories (genetic, structural/metabolic, unknown cause) which were refined in their 2011 recommendation into four categories and a number of subcategories reflecting recent technological and scientific advances.
Unknown cause (mostly genetic or presumed genetic origin)
  Pure epilepsies due to single gene disorders
  Pure epilepsies with complex inheritance
Symptomatic (associated with gross anatomic or pathologic abnormalities)
  Mostly genetic or developmental causation
    Childhood epilepsy syndromes
    Progressive myoclonic epilepsies
    Neurocutaneous syndromes
    Other neurologic single gene disorders
    Disorders of chromosome function
    Developmental anomalies of cerebral structure
  Mostly acquired causes
    Hippocampal sclerosis
    Perinatal and infantile causes
    Cerebral trauma, tumor or infection
    Cerebrovascular disorders
    Cerebral immunologic disorders
    Degenerative and other neurologic conditions
Provoked (a specific systemic or environmental factor is the predominant cause of the seizures)
  Provoking factors
  Reflex epilepsies
Cryptogenic (presumed symptomatic nature in which the cause has not been identified)
A revised, operational classification of seizure types has been introduced by the ILAE. It uses more clearly understood terms and defines the dichotomy of focal versus generalized onset, when possible, even without observing the seizures, based on descriptions by the patient or observers. The essential changes in terminology are that "partial" is now called "focal", with awareness used as a classifier for focal seizures. Based on their description, focal seizures are now defined as behavioral arrest, automatism, cognitive, autonomic, emotional or hyperkinetic variants, while atonic, myoclonic, clonic, infantile spasm, and tonic seizures may be either focal or generalized based on their onset. Several terms that were not clear or consistent in description were removed, such as dyscognitive, psychic, simple, and complex partial, while "secondarily generalized" was replaced by the clearer term "focal to bilateral tonic-clonic seizure". New seizure types now believed to be generalized are eyelid myoclonia, myoclonic atonic, myoclonic absence, and myoclonic tonic-clonic. Sometimes it is possible to classify seizures as focal or generalized based on presenting features even though the onset is not known. This system is based on the 1981 seizure classification as modified in 2010 and is principally the same, with an effort to improve flexibility and clarity of use so that seizure types are better understood in keeping with current knowledge.
Syndromes
Cases of epilepsy may be organized into epilepsy syndromes by the specific features that are present. These features include the age that seizures begin, the seizure types, EEG findings, among others. Identifying an epilepsy syndrome is useful as it helps determine the underlying causes as well as what anti-seizure medication should be tried.
The ability to categorize a case of epilepsy into a specific syndrome occurs more often with children since the onset of seizures is commonly early. Less serious examples are benign rolandic epilepsy (2.8 per 100,000), childhood absence epilepsy (0.8 per 100,000) and juvenile myoclonic epilepsy (0.7 per 100,000). Severe syndromes with diffuse brain dysfunction caused, at least partly, by some aspect of epilepsy, are also referred to as developmental and epileptic encephalopathies. These are associated with frequent seizures that are resistant to treatment and cognitive dysfunction, for instance Lennox–Gastaut syndrome (1–2% of all persons with epilepsy), Dravet syndrome (1 in 15,000–40,000 worldwide), and West syndrome (1–9 per 100,000). Genetics is believed to play an important role in epilepsies by a number of mechanisms. Simple and complex modes of inheritance have been identified for some of them. However, extensive screening has failed to identify many single gene variants of large effect. More recent exome and genome sequencing studies have begun to reveal a number of de novo gene mutations that are responsible for some epileptic encephalopathies, including CHD2, SYNGAP1, DNM1, GABBR2, FASN, and RYR3.
Syndromes in which causes are not clearly identified are difficult to match with categories of the current classification of epilepsy, and categorization for these cases was made somewhat arbitrarily. The idiopathic (unknown cause) category of the 2011 classification includes syndromes in which the general clinical features and/or age specificity strongly point to a presumed genetic cause. Some childhood epilepsy syndromes are included in the unknown cause category in which the cause is presumed genetic, for instance benign rolandic epilepsy. Clinical syndromes in which epilepsy is not the main feature (e.g. Angelman syndrome) were categorized as symptomatic, but it has been argued that these should be included within the idiopathic category. Classification of epilepsies, and particularly of epilepsy syndromes, will change with advances in research.
Tests
An electroencephalogram (EEG) can assist in showing brain activity suggestive of an increased risk of seizures. It is only recommended for those who are likely to have had an epileptic seizure on the basis of symptoms. In the diagnosis of epilepsy, electroencephalography may help distinguish the type of seizure or syndrome present. In children it is typically only needed after a second seizure unless specified by a specialist. It cannot be used to rule out the diagnosis and may be falsely positive in those without the condition. In certain situations it may be useful to perform the EEG while the affected individual is sleeping or sleep deprived.
Diagnostic imaging by CT scan and MRI is recommended after a first non-febrile seizure to detect structural problems in and around the brain. MRI is generally a better imaging test except when bleeding is suspected, for which CT is more sensitive and more easily available. If someone attends the emergency room with a seizure but returns to normal quickly, imaging tests may be done at a later point. If a person has a previous diagnosis of epilepsy with previous imaging, repeating the imaging is usually not needed even if there are subsequent seizures.
For adults, testing of electrolyte, blood glucose and calcium levels is important to rule out problems with these as causes. An electrocardiogram can rule out problems with the rhythm of the heart. A lumbar puncture may be useful to diagnose a central nervous system infection but is not routinely needed. In children additional tests may be required, such as urine biochemistry and blood testing looking for metabolic disorders. Together with EEG and neuroimaging, genetic testing is becoming one of the most important diagnostic techniques for epilepsy, as a diagnosis might be achieved in a relevant proportion of cases with severe epilepsies, both in children and adults. For those with negative genetic testing, it may be worthwhile to repeat or re-analyze previous genetic studies after 2–3 years.
A high blood prolactin level within the first 20 minutes following a seizure may be useful to help confirm an epileptic seizure as opposed to psychogenic non-epileptic seizure. Serum prolactin level is less useful for detecting focal seizures. If it is normal an epileptic seizure is still possible and a serum prolactin does not separate epileptic seizures from syncope. It is not recommended as a routine part of the diagnosis of epilepsy.
Differential diagnosis
Diagnosis of epilepsy can be difficult. A number of other conditions may present very similar signs and symptoms to seizures, including syncope, hyperventilation, migraines, narcolepsy, panic attacks and psychogenic non-epileptic seizures (PNES). In particular, syncope can be accompanied by a short episode of convulsions. Nocturnal frontal lobe epilepsy, often misdiagnosed as nightmares, was considered to be a parasomnia but later identified to be an epilepsy syndrome. Attacks of the movement disorder paroxysmal dyskinesia may be taken for epileptic seizures. The cause of a drop attack can be, among many others, an atonic seizure.
Children may have behaviors that are easily mistaken for epileptic seizures but are not. These include breath-holding spells, bedwetting, night terrors, tics and shudder attacks. Gastroesophageal reflux may cause arching of the back and twisting of the head to the side in infants, which may be mistaken for tonic-clonic seizures.
Misdiagnosis is frequent (occurring in about 5 to 30% of cases). Different studies showed that in many cases seizure-like attacks in apparent treatment-resistant epilepsy have a cardiovascular cause. Approximately 20% of the people seen at epilepsy clinics have PNES and of those who have PNES about 10% also have epilepsy; separating the two based on the seizure episode alone without further testing is often difficult.
Prevention
While many cases are not preventable, efforts to reduce head injuries, provide good care around the time of birth, and reduce environmental parasites such as the pork tapeworm may be effective. After brain injuries, there is a limited window of time to intervene with treatments to prevent epilepsy, similar to the therapeutic approach used in stroke therapy. Epileptogenesis may occur rapidly, further narrowing this window, but a delayed process known as "secondary epileptogenesis" can influence the progression and severity of epilepsy, offering opportunities for intervention even after its onset. Current research focuses on identifying methods and targets to prevent or slow epilepsy development. Promising treatments include drugs such as TrkB inhibitors, losartan, statins, isoflurane, anti-inflammatory and anti-oxidative drugs, the SV2A modulator levetiracetam, and epigenetic interventions. Efforts in one part of Central America to decrease rates of pork tapeworm resulted in a 50% decrease in new cases of epilepsy. Yoga-based Nadi Shodhana Pranayama (Alternate Nostril Breathing) may positively affect the nervous system and help manage seizure disorders, and regular exercise has also been suggested to support healthy brain function.
Complications
Epilepsy can be dangerous when a seizure occurs at certain times. The risk of drowning or of being involved in a motor vehicle collision is higher. People with epilepsy are also more likely to have psychological problems. Other complications include aspiration pneumonia and difficulty learning.
Management
Epilepsy is usually treated with daily medication once a second seizure has occurred, while medication may be started after the first seizure in those at high risk for subsequent seizures. Supporting people's self-management of their condition may be useful. In drug-resistant cases different management options may be considered, including special diets, the implantation of a neurostimulator, or neurosurgery.
First aid
Rolling people with an active tonic-clonic seizure onto their side and into the recovery position helps prevent fluids from getting into the lungs. Putting fingers, a bite block or tongue depressor in the mouth is not recommended as it might make the person vomit or result in the rescuer being bitten. Efforts should be taken to prevent further self-injury. Spinal precautions are generally not needed.
If a seizure lasts longer than 5 minutes or if there are more than two seizures in 5 minutes without a return to a normal level of consciousness between them, it is considered a medical emergency known as status epilepticus. This may require medical help to keep the airway open and protected; a nasopharyngeal airway may be useful for this. At home the recommended initial medication for seizure of a long duration is midazolam placed in the nose or mouth. Diazepam may also be used rectally. In hospital, intravenous lorazepam is preferred.
If two doses of benzodiazepines are not effective, other medications such as phenytoin are recommended. Convulsive status epilepticus that does not respond to initial treatment typically requires admission to the intensive care unit and treatment with stronger agents such as midazolam infusion, ketamine, thiopentone or propofol. Most institutions have a preferred pathway or protocol to be used in a seizure emergency like status epilepticus. These protocols have been found to be effective in reducing time to delivery of treatment.
Medications
The mainstay treatment of epilepsy is anticonvulsant medications, possibly for the person's entire life. The choice of anticonvulsant is based on seizure type, epilepsy syndrome, other medications used, other health problems, and the person's age and lifestyle. A single medication is recommended initially; if this is not effective, switching to a single other medication is recommended. Two medications at once is recommended only if a single medication does not work. In about half, the first agent is effective; a second single agent helps in about 13% and a third or two agents at the same time may help an additional 4%. About 30% of people continue to have seizures despite anticonvulsant treatment.
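As a rough sanity check of how these figures combine, the following sketch tallies the cumulative response rates; the percentages are the article's approximate values (assumed inputs), not patient-level data:

```python
# A quick consistency check of the quoted figures (a sketch; the percentages
# are the article's approximate values, not exact data).
first_agent = 0.50     # seizure-free on the first medication
second_agent = 0.13    # helped by switching to a second single agent
third_or_combo = 0.04  # helped by a third agent, or by two agents at once

controlled = first_agent + second_agent + third_or_combo
print(f"controlled: {controlled:.0%}")         # 67%
print(f"still seizing: {1 - controlled:.0%}")  # 33%, close to "about 30%"
```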
There are a number of medications available including phenytoin, carbamazepine and valproate. Evidence suggests that phenytoin, carbamazepine, and valproate may be equally effective in both focal and generalized seizures. Controlled release carbamazepine appears to work as well as immediate release carbamazepine, and may have fewer side effects. In the United Kingdom, carbamazepine or lamotrigine are recommended as first-line treatment for focal seizures, with levetiracetam and valproate as second-line due to issues of cost and side effects. Valproate is recommended first-line for generalized seizures with lamotrigine being second-line. In those with absence seizures, ethosuximide or valproate are recommended; valproate is particularly effective in myoclonic seizures and tonic or atonic seizures. If seizures are well-controlled on a particular treatment, it is not usually necessary to routinely check the medication levels in the blood.
The least expensive anticonvulsant is phenobarbital at around US$5 a year. The World Health Organization gives it a first-line recommendation in the developing world and it is commonly used there. Access, however, may be difficult as some countries label it as a controlled drug.
Adverse effects from medications are reported in 10% to 90% of people, depending on how and from whom the data is collected. Most adverse effects are dose-related and mild. Some examples include mood changes, sleepiness, or an unsteadiness in gait. Certain medications have side effects that are not related to dose, such as rashes, liver toxicity, or suppression of the bone marrow. Up to a quarter of people stop treatment due to adverse effects. Some medications are associated with birth defects when used in pregnancy. Many of the commonly used medications, such as valproate, phenytoin, carbamazepine, phenobarbital, and gabapentin, have been reported to cause an increased risk of birth defects, especially when used during the first trimester. Despite this, treatment is often continued once effective, because the risk of untreated epilepsy is believed to be greater than the risk of the medications. Among the antiepileptic medications, levetiracetam and lamotrigine seem to carry the lowest risk of causing birth defects.
Slowly stopping medications may be reasonable in some people who do not have a seizure for two to four years; however, around a third of people have a recurrence, most often during the first six months. Stopping is possible in about 70% of children and 60% of adults. Measuring medication levels is not generally needed in those whose seizures are well controlled.
Surgery
Epilepsy surgery should be considered for any person with epilepsy who is medically refractory. People with epilepsy are evaluated on a case-by-case basis in centres that are familiar with and have expertise in epilepsy surgery. Results from a 2023 systematic review found that surgical interventions for children aged 1–36 months with drug-resistant epilepsy can lead to significant seizure reduction or freedom, especially when other treatments have failed. Epilepsy surgery may be an option for people with focal seizures that remain a problem despite other treatments. These other treatments include at least a trial of two or three medications. The goal of surgery has been total control of seizures. However, most physicians believe that even palliative surgery where the burden of seizures is reduced significantly can help in achieving developmental progress or reversal of developmental stagnation in children with drug-resistant epilepsy and this may be achieved in 60–70% of cases. Common procedures include cutting out the hippocampus via an anterior temporal lobe resection, removal of tumors, and removing parts of the neocortex. Some procedures such as a corpus callosotomy are attempted in an effort to decrease the number of seizures rather than cure the condition. Following surgery, medications may be slowly withdrawn in many cases.
Neurostimulation
Neurostimulation via neuro-cybernetic prosthesis implantation may be another option in those who are not candidates for surgery, providing chronic, pulsatile electrical stimulation of specific nerve or brain regions, alongside standard care. Three types of neurotherapy have been used in those who do not respond to medications: vagus nerve stimulation (VNS), anterior thalamic stimulation, and closed-loop responsive stimulation (RNS).
Vagus nerve stimulation
Non-pharmacological modulation of neurotransmitters via high-level VNS (h-VNS) may reduce seizure frequency in children and adults who do not respond to medical and/or surgical therapy, when compared with low-level VNS (l-VNS). In a 2022 Cochrane review of four randomized controlled trials, with moderate certainty of evidence, people receiving h-VNS treatment were 73% more likely (13% more likely to 164% more likely) to experience a reduction in seizure frequency by at least 50% (the minimum threshold defined for individual clinical response). Potentially 249 (163 to 380) per 1000 people with drug-resistant epilepsy may achieve a 50% reduction in seizures following h-VNS, benefiting an additional 105 per 1000 people compared with l-VNS.
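The per-1000 figures follow from the relative-risk arithmetic. In the sketch below, the l-VNS response rate of 144 per 1000 is an assumption back-calculated from the review's numbers (249 − 105), not a figure quoted directly by the source:

```python
# Reproducing the review's per-1000 figures from the relative risk (a sketch).
baseline = 144                          # assumed l-VNS responders per 1000
rr, rr_low, rr_high = 1.73, 1.13, 2.64  # "73% more likely (13% to 164%)"

print(round(baseline * rr))             # 249 responders per 1000 with h-VNS
print(round(baseline * rr_low))         # 163 (lower bound)
print(round(baseline * rr_high))        # 380 (upper bound)
print(round(baseline * rr) - baseline)  # 105 additional responders per 1000
```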
This outcome was limited by the number of studies available, and the quality of one trial in particular, wherein three people received l-VNS in error. A sensitivity analysis suggested that the best case scenario was that the likelihood of clinical response to h-VNS may be 91% (27% to 189%) higher than those receiving l-VNS. In the worst-case scenario, the likelihood of clinical response to h-VNS was still 61% higher (7% higher to 143% higher) than l-VNS.
Despite the potential benefit of h-VNS treatment, the Cochrane review also found that the risk of several adverse effects was greater than with l-VNS. There was moderate certainty of evidence that the risk of voice alteration or hoarseness may be 2.17 (1.49 to 3.17) times higher than in people receiving l-VNS. Dyspnoea risk was also 2.45 (1.07 to 5.60) times that of l-VNS recipients, although the low number of events and studies meant that the certainty of evidence was low. The risk of rebound-withdrawal symptoms, coughing, pain and paraesthesia was unclear.
Diet
There is promising evidence that a ketogenic diet (high-fat, low-carbohydrate, adequate-protein) decreases the number of seizures and eliminates seizures in some; however, further research is necessary. A 2022 systematic review of the literature has found some evidence to support that a ketogenic diet or modified Atkins diet can be helpful in the treatment of epilepsy in some infants. These types of diets may be beneficial for children with drug-resistant epilepsy; the use for adults remains uncertain. The most commonly reported adverse effects were vomiting, constipation and diarrhoea. It is unclear why this diet works. In people with coeliac disease or non-celiac gluten sensitivity and occipital calcifications, a gluten-free diet may decrease the frequency of seizures.
Other
Avoidance therapy consists of minimizing or eliminating triggers. For example, those who are sensitive to light may have success with using a small television, avoiding video games, or wearing dark glasses. Operant-based biofeedback based on the EEG waves has some support in those who do not respond to medications. Psychological methods should not, however, be used to replace medications.
Exercise has been proposed as possibly useful for preventing seizures, with some data to support this claim. Some dogs, commonly referred to as seizure dogs, may help during or after a seizure. It is not clear if dogs have the ability to predict seizures before they occur.
There is moderate-quality evidence supporting the use of psychological interventions along with other treatments in epilepsy. This can improve quality of life, enhance emotional wellbeing, and reduce fatigue in adults and adolescents. Psychological interventions may also improve seizure control for some individuals by promoting self-management and adherence.
As an add-on therapy in those who are not well controlled with other medications, cannabidiol appears to be useful in some children. In 2018 the FDA approved this product for Lennox–Gastaut syndrome and Dravet syndrome.
There are a few studies on the use of dexamethasone for the successful treatment of drug-resistant seizures in both adults and children.
Alternative medicine
Alternative medicine approaches, including acupuncture, routine vitamins, and yoga, have no reliable evidence to support their use in epilepsy. Melatonin is likewise insufficiently supported by evidence: the trials were of poor methodological quality and it was not possible to draw any definitive conclusions.
Several supplements (with varied reliability of evidence) have been reported to be helpful for drug-resistant epilepsy. These include high-dose omega-3, berberine, Manuka honey, reishi and lion's mane mushrooms, curcumin, vitamin E, coenzyme Q-10, and resveratrol. In theory, these may work by reducing inflammation or oxidative stress, two of the major mechanisms contributing to epilepsy.
Contraception and pregnancy
Women of child-bearing age, including those with epilepsy, are at risk of unintended pregnancies if they are not using an effective form of contraception. Women with epilepsy may experience a temporary increase in seizure frequency when they begin hormonal contraception.
Some anti-seizure medications interact with enzymes in the liver and cause the drugs in hormonal contraception to be broken down more quickly. These enzyme-inducing drugs make hormonal contraception less effective, and this is particularly hazardous if the anti-seizure medication is associated with birth defects. Potent enzyme-inducing anti-seizure medications include carbamazepine, eslicarbazepine acetate, oxcarbazepine, phenobarbital, phenytoin, primidone, and rufinamide. The drugs perampanel and topiramate can be enzyme-inducing at higher doses. Conversely, hormonal contraception can lower the amount of the anti-seizure medication lamotrigine circulating in the body, making it less effective. The failure rate of oral contraceptives, when used correctly, is 1%, but this increases to 3–6% in women with epilepsy. Overall, intrauterine devices (IUDs) are preferred for women with epilepsy who are not intending to become pregnant.
Women with epilepsy, especially if they have other medical conditions, may have a slightly lower, but still high, chance of becoming pregnant. Women with infertility have about the same chance of success with in vitro fertilisation or other forms of assisted reproductive technology as women without epilepsy. There may be a higher risk of pregnancy loss.
Once pregnant, there are two main concerns related to pregnancy. The first concern is about the risk of seizures during pregnancy, and the second concern is that the anti-seizure medications may result in birth defects. Most women with epilepsy must continue treatment with anti-seizure drugs, and the treatment goal is to balance the need to prevent seizures with the need to prevent drug-induced birth defects.
Pregnancy does not seem to change seizure frequency very much. When seizures happen, however, they can cause some pregnancy complications, such as pre-term births or the babies being smaller than usual when they are born.
All pregnancies have a risk of birth defects, e.g., due to smoking during pregnancy. In addition to this typical level of risk, some anti-seizure drugs significantly increase the risk of birth defects and intrauterine growth restriction, as well as developmental, neurocognitive, and behavioral disorders. Most women with epilepsy receive safe and effective treatment and have typical, healthy children. The highest risks are associated with specific anti-seizure drugs, such as valproic acid and carbamazepine, and with higher doses. Folic acid supplementation, such as through prenatal vitamins, reduces the risk. Planning pregnancies in advance gives women with epilepsy an opportunity to switch to a lower-risk treatment program and to reduce drug doses.
Although anti-seizure drugs can be found in breast milk, women with epilepsy can breastfeed their babies, and the benefits usually outweigh the risks.
Prognosis
Epilepsy cannot usually be cured, but medication can control seizures effectively in about 70% of cases. Of those with generalized seizures, more than 80% can be well controlled with medications, while this is true in only 50% of people with focal seizures. One predictor of long-term outcome is the number of seizures that occur in the first six months. Other factors increasing the risk of a poor outcome include little response to the initial treatment, generalized seizures, a family history of epilepsy, psychiatric problems, and waves on the EEG representing generalized epileptiform activity. According to the ILAE, epilepsy is considered to be resolved if an individual with epilepsy has been seizure-free for 10 years and off anticonvulsants for 5 years.
In the developing world, 75% of people are either untreated or not appropriately treated. In Africa, 90% do not get treatment. This is partly related to appropriate medications not being available or being too expensive.
Mortality
People with epilepsy may have a higher risk of premature death compared to those without the condition. This risk is estimated to be between 1.6 and 4.1 times greater than that of the general population. The greatest increase in mortality from epilepsy is among the elderly. Those with epilepsy due to an unknown cause have a relatively low increase in risk.
Mortality is often related to the underlying cause of the seizures, status epilepticus, suicide, trauma, and sudden unexpected death in epilepsy (SUDEP). Death from status epilepticus is primarily due to an underlying problem rather than missing doses of medications. The risk of suicide is between two and six times higher in those with epilepsy; the cause of this is unclear. SUDEP appears to be partly related to the frequency of generalized tonic-clonic seizures and accounts for about 15% of epilepsy-related deaths; it is unclear how to decrease its risk.
Risk factors for SUDEP include nocturnal generalized tonic-clonic seizures, frequent seizures, sleeping alone, and medically intractable epilepsy.
In the United Kingdom, it is estimated that 40–60% of deaths are possibly preventable. In the developing world, many deaths are due to untreated epilepsy leading to falls or status epilepticus.
Epidemiology
Epilepsy is one of the most common serious neurological disorders, affecting about 50 million people. It affects 1% of the population by age 20 and 3% of the population by age 75. It is more common in males than females, with the overall difference being small. Most of those with the disorder (80%) are in low-income populations or the developing world.
The estimated prevalence of active epilepsy is in the range 3–10 per 1,000, with active epilepsy defined as someone with epilepsy who has had at least one unprovoked seizure in the last five years. Epilepsy begins each year in 40–70 per 100,000 in developed countries and 80–140 per 100,000 in developing countries. Poverty is a risk and includes both being from a poor country and being poor relative to others within one's country. In the developed world epilepsy most commonly starts either in the young or in the old. In the developing world its onset is more common in older children and young adults due to the higher rates of trauma and infectious diseases. In developed countries the number of cases a year has decreased in children and increased among the elderly between the 1970s and 2003. This has been attributed partly to better survival following strokes in the elderly.
History
The oldest medical records show that epilepsy has been affecting people at least since the beginning of recorded history. Throughout ancient history, the condition was thought to be of a spiritual cause. The world's oldest description of an epileptic seizure comes from a text in Akkadian (a language used in ancient Mesopotamia) and was written around 2000 BC. The person described in the text was diagnosed as being under the influence of a moon god, and underwent an exorcism. Epileptic seizures are listed in the Code of Hammurabi as a reason for which a purchased slave may be returned for a refund, and the Edwin Smith Papyrus describes cases of individuals with epileptic convulsions.
The oldest known detailed record of the condition itself is in the Sakikku, a Babylonian cuneiform medical text from 1067–1046 BC. This text gives signs and symptoms, details treatment and likely outcomes, and describes many features of the different seizure types. As the Babylonians had no biomedical understanding of the nature of epilepsy, they attributed the seizures to possession by evil spirits and called for treating the condition through spiritual means. Around 900 BC, Punarvasu Atreya described epilepsy as loss of consciousness; this definition was carried forward into the Ayurvedic text of Charaka Samhita.
The ancient Greeks had contradictory views of the condition. They thought of epilepsy as a form of spiritual possession, but also associated the condition with genius and the divine. One of the names they gave to it was the sacred disease. Epilepsy appears within Greek mythology: it is associated with the Moon goddesses Selene and Artemis, who afflicted those who upset them. The Greeks thought that important figures such as Julius Caesar and Hercules had the condition. The notable exception to this divine and spiritual view was that of the school of Hippocrates. In the fifth century BC, Hippocrates rejected the idea that the condition was caused by spirits. In his landmark work On the Sacred Disease, he proposed that epilepsy was not divine in origin and instead was a medically treatable problem originating in the brain. He accused those who attributed a sacred cause to the condition of spreading ignorance through a belief in superstitious magic. Hippocrates proposed that heredity was important as a cause, described worse outcomes if the condition presents at an early age, and made note of the physical characteristics as well as the social shame associated with it. Instead of referring to it as the sacred disease, he used the term great disease, giving rise to the modern term grand mal, used for tonic–clonic seizures. Despite his work detailing the physical origins of the condition, his view was not accepted at the time. Evil spirits continued to be blamed until at least the 17th century.
In Ancient Rome people did not eat or drink with the same pottery as that used by someone who was affected. People of the time would spit on their chest believing that this would keep the problem from affecting them. According to Apuleius and other ancient physicians, to detect epilepsy, it was common to light a piece of gagates, whose smoke would trigger the seizure. Occasionally a spinning potter's wheel was used, perhaps a reference to photosensitive epilepsy.
In most cultures, persons with epilepsy have been stigmatized, shunned, or even imprisoned. As late as the second half of the 20th century, in Tanzania and other parts of Africa epilepsy was associated with possession by evil spirits, witchcraft, or poisoning and was believed by many to be contagious. In the Salpêtrière, the birthplace of modern neurology, Jean-Martin Charcot found people with epilepsy side by side with the mentally ill, those with chronic syphilis, and the criminally insane. In Ancient Rome, epilepsy was known as the morbus comitialis ('disease of the assembly hall') and was seen as a curse from the gods. In northern Italy, epilepsy was traditionally known as Saint Valentine's malady. In at least the 1840s in the United States of America, epilepsy was known as the falling sickness or the falling fits, and was considered a form of medical insanity. Around the same time period, epilepsy was known in France under several other names, and people with epilepsy were referred to by names alluding to the seizures and loss of consciousness of an epileptic episode.
In the mid-19th century, the first effective anti-seizure medication, bromide, was introduced. The first modern treatment, phenobarbital, was developed in 1912, with phenytoin coming into use in 1938.
Society and culture
Stigma
Social stigma is commonly experienced, around the world, by those with epilepsy. It can affect people economically, socially and culturally. In India and China, epilepsy may be used as justification to deny marriage. People in some areas still believe those with epilepsy to be cursed. In parts of Africa, such as Tanzania and Uganda, epilepsy is claimed to be associated with possession by evil spirits, witchcraft, or poisoning and is incorrectly believed by many to be contagious. Before 1971 in the United Kingdom, epilepsy was considered grounds for the annulment of marriage. The stigma may result in some people with epilepsy denying that they have ever had seizures. A 2024 cross-sectional study revealed that 64.8% of relatives of epilepsy patients experienced moderate stigma and held moderately positive attitudes toward epilepsy. The study found that higher levels of stigma among participants were associated with more negative attitudes toward the condition. Additionally, relatives of patients who experienced frequent seizures (one or more per month) faced greater stigma, while those of patients who did not adhere to their medication regimen exhibited more negative attitudes toward epilepsy.
Economics
Seizures result in direct economic costs of about one billion dollars in the United States. Epilepsy resulted in economic costs in Europe of around 15.5 billion euros in 2004. In India epilepsy is estimated to result in costs of US$1.7 billion or 0.5% of the GDP. It is the cause of about 1% of emergency department visits (2% for emergency departments for children) in the United States.
Vehicles
Those with epilepsy are at about twice the risk of being involved in a motor vehicle collision and thus in many areas of the world are not allowed to drive, or are only able to drive if certain conditions are met. Diagnostic delay has been suggested to be a cause of some potentially avoidable motor vehicle collisions, since at least one study showed that most motor vehicle accidents occurred in those with undiagnosed non-motor seizures, as opposed to those with motor seizures, at epilepsy onset. In some places physicians are required by law to report if a person has had a seizure to the licensing body, while in others the requirement is only that they encourage the person in question to report it themselves. Countries that require physician reporting include Sweden, Austria, Denmark and Spain. Countries that require the individual to report include the UK and New Zealand, and physicians may report if they believe the individual has not already. In Canada, the United States and Australia the requirements around reporting vary by province or state. If seizures are well controlled, most feel allowing driving is reasonable. The amount of time a person must be free from seizures before they can drive varies by country. Many countries require one to three years without seizures. In the United States the time needed without a seizure is determined by each state and is between three months and one year.
Those with epilepsy or seizures are typically denied a pilot license.
In Canada if an individual has had no more than one seizure, they may be considered after five years for a limited license if all other testing is normal. Those with febrile seizures and drug related seizures may also be considered.
In the United States, the Federal Aviation Administration does not allow those with epilepsy to get a commercial pilot license. Rarely, exceptions can be made for persons who have had an isolated seizure or febrile seizures and have remained free of seizures into adulthood without medication.
In the United Kingdom, a full national private pilot license requires the same standards as a professional driver's license. This requires a period of ten years without seizures while off medications. Those who do not meet this requirement may acquire a restricted license if free from seizures for five years.
Support organizations
There are organizations that provide support for people and families affected by epilepsy. The Out of the Shadows campaign, a joint effort by the World Health Organization, the ILAE and the International Bureau for Epilepsy, provides help internationally. In the United States, the Epilepsy Foundation is a national organization that works to increase the acceptance of those with the disorder, their ability to function in society and to promote research for a cure. The Epilepsy Foundation, some hospitals, and some individuals also run support groups in the United States. In Australia, the Epilepsy Foundation provides support, delivers education and training and funds research for people living with epilepsy.
International Epilepsy Day (World Epilepsy Day) began in 2015 and occurs on the second Monday in February.
Purple Day, a different worldwide epilepsy awareness day, was initiated by a nine-year-old Canadian named Cassidy Megan in 2008, and is held every year on 26 March.
Research
Seizure prediction and modeling
Seizure prediction refers to attempts to forecast epileptic seizures based on the EEG before they occur. To date, no effective mechanism to predict seizures has been developed. Although no effective device that can predict seizures is available, the science behind seizure prediction and the ability to deliver such a tool has made progress.
Kindling, in which repeated exposure to events that could cause seizures eventually causes seizures to occur more easily, has been used to create animal models of epilepsy.
Different animal models of epilepsy have been characterized in rodents that recapitulate the EEG and behavioral concomitants of different forms of epilepsy, in particular the occurrence of recurrent spontaneous seizures. Because epileptic seizures of different kinds are observed naturally in some of these animals, strains of mice and rats have been selected to be used as genetic models of epilepsy. In particular, several lines of mice and rats display spike-and-wave discharges on EEG recordings and have been studied to understand absence epilepsy. Among these models, the strain of GAERS (Genetic Absence Epilepsy Rats from Strasbourg) was characterized in the 1980s and has helped to understand the mechanisms underlying childhood absence epilepsy.
Rat brain slices serve as a valuable model for assessing the potential of compounds in reducing epileptiform activity. By evaluating the frequency of epileptiform bursting in hippocampal networks, researchers can identify promising candidates for novel anti-seizure drugs.
Reductionist views on the mechanisms of epileptiform discharges are often expressed through mathematical models. The simplest of these models are based on a few ordinary differential equations, such as the Epileptor model. The more physiologically explicit Epileptor-2 model replicates brief interictal discharges—observed as clusters of action potential spikes in the activity of individual neurons—and longer ictal discharges, represented as clusters of these shorter discharges. According to this model, brief interictal discharges are characterized as stochastic oscillations of the membrane potential and synaptic resources, while ictal discharges emerge as oscillations in the extracellular concentration of potassium ions and the intracellular concentration of sodium ions. These models demonstrate that ionic dynamics play a decisive role in the generation of pathological activity.
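The published Epileptor equations are not reproduced here. As a heavily hedged illustration of the general slow-fast idea, the sketch below integrates the classic Hindmarsh-Rose neuron model instead: a different, generic three-variable burster in which a slow variable drives the fast pair in and out of an oscillatory, seizure-like regime. It is a toy analogue under stated assumptions, not the Epileptor or Epileptor-2 model.

```python
# A minimal illustrative sketch of a slow-fast burster: the Hindmarsh-Rose
# neuron model (NOT the Epileptor; just a generic three-variable system in
# which the slow variable z pushes the fast pair (x, y) in and out of an
# oscillatory, "ictal-like" regime). Parameters are standard textbook values.
a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x_rest, I = 0.005, 4.0, -1.6, 3.25

x, y, z = -1.6, -12.0, 2.0
dt, steps = 0.01, 150_000
trace = []
for i in range(steps):                      # forward-Euler integration
    dx = y - a * x**3 + b * x**2 - z + I
    dy = c - d * x**2 - y
    dz = r * (s * (x - x_rest) - z)
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    if i % 1500 == 0:
        trace.append(x)

# Crude text summary: '#' marks depolarized samples (bursts), '.' quiet ones.
print("".join("#" if v > 0 else "." for v in trace))
```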
One of the hypotheses present in the literature is based on inflammatory pathways. Studies supporting this mechanism revealed that inflammatory, glycolipid, and oxidative factors are higher in people with epilepsy, especially those with generalized epilepsy.
Potential future therapies
Gene therapy is being studied in some types of epilepsy. Medications that alter immune function, such as intravenous immunoglobulins, may reduce the frequency of seizures when included in normal care as an add-on therapy; however, further research is required to determine whether these medications are well tolerated in children and in adults with epilepsy. Noninvasive stereotactic radiosurgery is being compared to standard surgery for certain types of epilepsy.
Other animals
Epilepsy occurs in a number of other animals including dogs and cats; it is in fact the most common brain disorder in dogs. It is typically treated with anticonvulsants such as levetiracetam, phenobarbital, or bromide in dogs and phenobarbital in cats. Imepitoin is also used in dogs. While generalized seizures in horses are fairly easy to diagnose, it may be more difficult in non-generalized seizures and EEGs may be useful. Juvenile idiopathic epilepsy (JIE) in foals is a condition with varying outcomes, depending on the severity and management of the condition. Some foals eventually outgrow the condition without significant long-term effects, while others may face severe consequences, including death or lifelong complications, if left untreated. This variability highlights the importance of timely intervention and care. Earlier research has pointed to a significant genetic influence in the development of JIE, suggesting that the condition may follow the inheritance pattern of a single-gene trait. These findings underscore the need for further genetic studies to confirm this hypothesis and explore potential breeding strategies to reduce the prevalence of JIE.
| Biology and health sciences | Non-infectious disease | null |
10603 | https://en.wikipedia.org/wiki/Field%20%28mathematics%29 | Field (mathematics) | In mathematics, a field is a set on which addition, subtraction, multiplication, and division are defined and behave as the corresponding operations on rational and real numbers. A field is thus a fundamental algebraic structure which is widely used in algebra, number theory, and many other areas of mathematics.
The best known fields are the field of rational numbers, the field of real numbers and the field of complex numbers. Many other fields, such as fields of rational functions, algebraic function fields, algebraic number fields, and p-adic fields are commonly used and studied in mathematics, particularly in number theory and algebraic geometry. Most cryptographic protocols rely on finite fields, i.e., fields with finitely many elements.
The theory of fields proves that angle trisection and squaring the circle cannot be done with a compass and straightedge. Galois theory, devoted to understanding the symmetries of field extensions, provides an elegant proof of the Abel–Ruffini theorem that general quintic equations cannot be solved in radicals.
Fields serve as foundational notions in several mathematical domains. This includes different branches of mathematical analysis, which are based on fields with additional structure. Basic theorems in analysis hinge on the structural properties of the field of real numbers. Most importantly for algebraic purposes, any field may be used as the scalars for a vector space, which is the standard general context for linear algebra. Number fields, the siblings of the field of rational numbers, are studied in depth in number theory. Function fields can help describe properties of geometric objects.
Definition
Informally, a field is a set, along with two operations defined on that set: an addition operation written as a + b, and a multiplication operation written as a ⋅ b, both of which behave similarly as they behave for rational numbers and real numbers, including the existence of an additive inverse −a for all elements a, and of a multiplicative inverse b⁻¹ for every nonzero element b. This allows one to also consider the so-called inverse operations of subtraction, a − b, and division, a / b, by defining:
a − b := a + (−b),
a / b := a ⋅ b⁻¹.
Classic definition
Formally, a field is a set F together with two binary operations on F called addition and multiplication. A binary operation on F is a mapping F × F → F, that is, a correspondence that associates with each ordered pair of elements of F a uniquely determined element of F. The result of the addition of a and b is called the sum of a and b, and is denoted a + b. Similarly, the result of the multiplication of a and b is called the product of a and b, and is denoted a ⋅ b or ab. These operations are required to satisfy the following properties, referred to as field axioms.
These axioms are required to hold for all elements a, b, c of the field F:
Associativity of addition and multiplication: a + (b + c) = (a + b) + c, and a ⋅ (b ⋅ c) = (a ⋅ b) ⋅ c.
Commutativity of addition and multiplication: a + b = b + a, and a ⋅ b = b ⋅ a.
Additive and multiplicative identity: there exist two distinct elements 0 and 1 in F such that a + 0 = a and a ⋅ 1 = a.
Additive inverses: for every a in F, there exists an element in F, denoted −a, called the additive inverse of a, such that a + (−a) = 0.
Multiplicative inverses: for every a ≠ 0 in F, there exists an element in F, denoted by a⁻¹ or 1/a, called the multiplicative inverse of a, such that a ⋅ a⁻¹ = 1.
Distributivity of multiplication over addition: a ⋅ (b + c) = (a ⋅ b) + (a ⋅ c).
An equivalent, and more succinct, definition is: a field has two commutative operations, called addition and multiplication; it is a group under addition with 0 as the additive identity; the nonzero elements form a group under multiplication with 1 as the multiplicative identity; and multiplication distributes over addition.
Even more succinctly: a field is a commutative ring where 0 ≠ 1 and all nonzero elements are invertible under multiplication.
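These axioms lend themselves to a direct machine check on a finite example. The following is a minimal sketch that verifies them by brute force for the integers modulo 5 (a field of the kind discussed further below; any prime modulus would do):

```python
from itertools import product

# Brute-force verification of the field axioms for the integers modulo 5
# (a sketch; composite moduli fail the multiplicative-inverse axiom).
p = 5
F = range(p)
add = lambda a, b: (a + b) % p
mul = lambda a, b: (a * b) % p

assert all(add(a, b) == add(b, a) and mul(a, b) == mul(b, a)
           for a, b in product(F, F))                    # commutativity
assert all(add(add(a, b), c) == add(a, add(b, c)) and
           mul(mul(a, b), c) == mul(a, mul(b, c))
           for a, b, c in product(F, F, F))              # associativity
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
           for a, b, c in product(F, F, F))              # distributivity
assert all(any(add(a, x) == 0 for x in F) for a in F)    # additive inverses
assert all(any(mul(a, x) == 1 for x in F)
           for a in F if a != 0)                         # multiplicative inverses
print("Z/5Z satisfies the field axioms")
```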
Alternative definition
Fields can also be defined in different, but equivalent ways. One can alternatively define a field by four binary operations (addition, subtraction, multiplication, and division) and their required properties. Division by zero is, by definition, excluded. In order to avoid existential quantifiers, fields can be defined by two binary operations (addition and multiplication), two unary operations (yielding the additive and multiplicative inverses respectively), and two nullary operations (the constants 0 and 1). These operations are then subject to the conditions above. Avoiding existential quantifiers is important in constructive mathematics and computing. One may equivalently define a field by the same two binary operations, one unary operation (the multiplicative inverse), and two (not necessarily distinct) constants 1 and −1, since 0 = 1 + (−1) and −a = (−1) ⋅ a.
Examples
Rational numbers
Rational numbers had been widely used long before the elaboration of the concept of field. They are numbers that can be written as fractions
a/b, where a and b are integers, and b ≠ 0. The additive inverse of such a fraction is −a/b, and the multiplicative inverse (provided that a ≠ 0) is b/a, which can be seen as follows:
b/a ⋅ a/b = ba/ab = 1.
The abstractly required field axioms reduce to standard properties of rational numbers. For example, the law of distributivity can be proven as follows:
a/b ⋅ (c/d + e/f) = a/b ⋅ (cf/df + ed/df) = a/b ⋅ (cf + ed)/df = a(cf + ed)/(bdf) = (acf + aed)/(bdf) = acf/(bdf) + aed/(bdf) = a/b ⋅ c/d + a/b ⋅ e/f.
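These properties can also be spot-checked mechanically with exact rational arithmetic; the sketch below uses Python's fractions module on a few sample values (the particular fractions are arbitrary choices):

```python
from fractions import Fraction

# Spot checks of the field operations on exact rational numbers (a sketch;
# the sample fractions are arbitrary).
a, b, c = Fraction(2, 3), Fraction(-5, 7), Fraction(1, 4)

assert a + (-a) == 0                 # additive inverse
assert a * (1 / a) == 1              # multiplicative inverse: 1/a == 3/2
assert a * (b + c) == a * b + a * c  # distributivity
print(a * (b + c))                   # -13/42, computed exactly
```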
Real and complex numbers
The real numbers ℝ, with the usual operations of addition and multiplication, also form a field. The complex numbers ℂ consist of expressions
a + bi, with a, b real,
where i is the imaginary unit, i.e., a (non-real) number satisfying i² = −1.
Addition and multiplication of complex numbers are defined in such a way that expressions of this type satisfy all field axioms and thus hold for ℂ. For example, the distributive law enforces
(a + bi)(c + di) = ac + bci + adi + bdi² = (ac − bd) + (bc + ad)i.
It is immediate that this is again an expression of the above type, and so the complex numbers form a field. Complex numbers can be geometrically represented as points in the plane, with Cartesian coordinates given by the real numbers of their describing expression, or as the arrows from the origin to these points, specified by their length and an angle enclosed with some distinct direction. Addition then corresponds to combining the arrows to the intuitive parallelogram (adding the Cartesian coordinates), and the multiplication is – less intuitively – combining rotating and scaling of the arrows (adding the angles and multiplying the lengths). The fields of real and complex numbers are used throughout mathematics, physics, engineering, statistics, and many other scientific disciplines.
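The geometric description of multiplication (lengths multiply, angles add) is easy to confirm numerically; a small sketch using Python's built-in complex type and the cmath module, with arbitrarily chosen sample values:

```python
import cmath

# Numerical spot check of the geometric picture above: multiplying complex
# numbers multiplies their lengths and adds their angles (sample values are
# arbitrary).
z, w = 1 + 1j, 2j
zw = z * w                       # (1+1j)*(2j) = -2+2j

r_z, phi_z = cmath.polar(z)      # length sqrt(2), angle pi/4
r_w, phi_w = cmath.polar(w)      # length 2, angle pi/2
r_zw, phi_zw = cmath.polar(zw)   # length 2*sqrt(2), angle 3*pi/4

assert abs(r_zw - r_z * r_w) < 1e-12
assert abs(phi_zw - (phi_z + phi_w)) < 1e-12
print(zw, r_zw, phi_zw)
```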
Constructible numbers
In antiquity, several geometric problems concerned the (in)feasibility of constructing certain numbers with compass and straightedge. For example, it was unknown to the Greeks that it is, in general, impossible to trisect a given angle in this way. These problems can be settled using the field of constructible numbers. Real constructible numbers are, by definition, lengths of line segments that can be constructed from the points 0 and 1 in finitely many steps using only compass and straightedge. These numbers, endowed with the field operations of real numbers, restricted to the constructible numbers, form a field, which properly includes the field of rational numbers. The illustration shows the construction of square roots of constructible numbers, not necessarily contained within the rationals: lay a segment of length p and a segment of length 1 end to end and draw a semicircle over the combined segment (centered at its midpoint); this semicircle intersects the perpendicular line through the junction of the two segments in a point at a distance of exactly √p from the junction.
Not all real numbers are constructible. It can be shown that the cube root of 2, ∛2, is not a constructible number, which implies that it is impossible to construct with compass and straightedge the length of the side of a cube with volume 2, another problem posed by the ancient Greeks.
A field with four elements
In addition to familiar number systems such as the rationals, there are other, less immediate examples of fields. The following example is a field consisting of four elements called O, I, A, and B. The notation is chosen such that O plays the role of the additive identity element (denoted 0 in the axioms above), and I is the multiplicative identity (denoted 1 in the axioms above). The field axioms can be verified by using some more field theory, or by direct computation. For example,
A · (B + A) = A · I = A, which equals A · B + A · A = I + B = A, as required by the distributivity.
This field is called a finite field or Galois field with four elements, and is denoted F_4 or GF(4). The subset consisting of O and I is also a field, known as the binary field F_2 or GF(2).
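As an illustration, the arithmetic of GF(4) can be checked by brute force. The following minimal sketch in Python encodes O, I, A, B as the integers 0–3 (bit patterns of polynomials over F_2, reduced modulo the irreducible polynomial x² + x + 1); this encoding and the helper names are our illustrative choices, not notation from the text:

def add(a, b):
    # Addition is coefficient-wise mod 2, i.e. bitwise XOR.
    return a ^ b

def mul(a, b):
    # Carry-less polynomial multiplication, then reduction of x^2 to x + 1.
    prod = 0
    for i in range(2):
        if (b >> i) & 1:
            prod ^= a << i
    if prod & 0b100:
        prod ^= 0b111
    return prod

elems = [0, 1, 2, 3]
# Verify distributivity and the existence of multiplicative inverses.
for a in elems:
    for b in elems:
        for c in elems:
            assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
assert all(any(mul(a, b) == 1 for b in elems) for a in elems if a != 0)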
Elementary notions
In this section, F denotes an arbitrary field, and a and b are arbitrary elements of F.
Consequences of the definition
One has a · 0 = 0 and −a = (−1) · a. In particular, one may deduce the additive inverse of every element as soon as one knows −1.
If ab = 0 then a or b must be 0, since, if a ≠ 0, then
b = (a⁻¹a)b = a⁻¹(ab) = a⁻¹ · 0 = 0. This means that every field is an integral domain.
In addition, the following properties are true for any elements a and b:
−0 = 0
1⁻¹ = 1
(−(−a)) = a
(−a) · b = a · (−b) = −(a · b)
(a⁻¹)⁻¹ = a if a ≠ 0
Additive and multiplicative groups of a field
The axioms of a field F imply that it is an abelian group under addition. This group is called the additive group of the field, and is sometimes denoted by (F, +) when denoting it simply as F could be confusing.
Similarly, the nonzero elements of F form an abelian group under multiplication, called the multiplicative group, and denoted by (F \ {0}, ·) or just F \ {0}, or F*.
A field may thus be defined as a set F equipped with two operations denoted as an addition and a multiplication such that F is an abelian group under addition, F \ {0} is an abelian group under multiplication (where 0 is the identity element of the addition), and multiplication is distributive over addition. Some elementary statements about fields can therefore be obtained by applying general facts of groups. For example, the additive and multiplicative inverses −a and a⁻¹ are uniquely determined by a.
The requirement 1 ≠ 0 is imposed by convention to exclude the trivial ring, which consists of a single element; this guides any choice of the axioms that define fields.
Every finite subgroup of the multiplicative group of a field is cyclic (see Root of unity § Cyclic groups).
Characteristic
In addition to the multiplication of two elements of F, it is possible to define the product n · a of an arbitrary element a of F by a positive integer n to be the n-fold sum
a + a + ⋯ + a (which is an element of F).
If there is no positive integer n such that
n · 1 = 0,
then F is said to have characteristic 0. For example, the field of rational numbers Q has characteristic 0 since no positive integer is zero. Otherwise, if there is a positive integer n satisfying this equation, the smallest such positive integer can be shown to be a prime number. It is usually denoted by p and the field is said to have characteristic p then.
For example, the field F_4 has characteristic 2 since (in the notation of the four-element field above) I + I = O.
If F has characteristic p, then p · a = 0 for all a in F. This implies that
(a + b)^p = a^p + b^p,
since all other binomial coefficients appearing in the binomial formula are divisible by p. Here, a^p := a · a · ⋯ · a (p factors) is the p-th power, i.e., the p-fold product of the element a. Therefore, the Frobenius map
Fr : F → F, x ↦ x^p
is compatible with the addition in F (and also with the multiplication), and is therefore a field homomorphism. The existence of this homomorphism makes fields in characteristic p quite different from fields of characteristic 0.
Subfields and prime fields
A subfield E of a field F is a subset of F that is a field with respect to the field operations of F. Equivalently, E is a subset of F that contains 1, and is closed under addition, multiplication, additive inverse and multiplicative inverse of a nonzero element. This means that 1 ∈ E, that for all a, b ∈ E both a + b and a · b are in E, and that for all a ≠ 0 in E, both −a and 1/a are in E.
Field homomorphisms are maps φ : E → F between two fields such that φ(e₁ + e₂) = φ(e₁) + φ(e₂), φ(e₁ · e₂) = φ(e₁) · φ(e₂), and φ(1) = 1, where e₁ and e₂ are arbitrary elements of E. All field homomorphisms are injective. If φ is also surjective, it is called an isomorphism (or the fields E and F are called isomorphic).
A field is called a prime field if it has no proper (i.e., strictly smaller) subfields. Any field F contains a prime field. If the characteristic of F is p (a prime number), the prime field is isomorphic to the finite field F_p introduced below. Otherwise the prime field is isomorphic to Q.
Finite fields
Finite fields (also called Galois fields) are fields with finitely many elements, whose number is also referred to as the order of the field. The above introductory example F_4 is a field with four elements. Its subfield F_2 is the smallest field, because by definition a field has at least two distinct elements, 0 and 1.
The simplest finite fields, with prime order, are most directly accessible using modular arithmetic. For a fixed positive integer n, arithmetic "modulo n" means to work with the numbers
Z/nZ = {0, 1, ..., n − 1}.
The addition and multiplication on this set are done by performing the operation in question in the set Z of integers, dividing by n and taking the remainder as result. This construction yields a field precisely if n is a prime number. For example, taking the prime n = 2 results in the above-mentioned field F_2. For n = 4 and, more generally, for any composite number n (i.e., any number n which can be expressed as a product n = r · s of two strictly smaller natural numbers), Z/nZ is not a field: the product of two non-zero elements is zero since r · s = 0 in Z/nZ, which, as was explained above, prevents Z/nZ from being a field. The field Z/pZ with p elements (p being prime) constructed in this way is usually denoted by F_p.
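A minimal sketch of this modular-arithmetic construction in Python (the helper name inverse_mod is our illustrative choice):

def inverse_mod(a, p):
    # Multiplicative inverse via Fermat's little theorem: a^(p-2) = a^(-1) mod p.
    assert a % p != 0
    return pow(a, p - 2, p)

p = 7
a, b = 3, 5
print((a + b) % p)        # 1 -- add in Z, then take the remainder
print((a * b) % p)        # 1 -- multiply in Z, then take the remainder
print(inverse_mod(a, p))  # 5, since 3 * 5 = 15 = 1 (mod 7)

# For a composite modulus the construction fails: 2 * 2 = 0 in Z/4Z,
# so Z/4Z has zero divisors and cannot be a field.
print((2 * 2) % 4)        # 0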
Every finite field F has q = p^n elements, where p is prime and n ≥ 1. This statement holds since F may be viewed as a vector space over its prime field. The dimension of this vector space is necessarily finite, say n, which implies the asserted statement.
A field with q = p^n elements can be constructed as the splitting field of the polynomial
f(x) = x^q − x.
Such a splitting field is an extension of F_p in which the polynomial f has q zeros. This means f has as many zeros as possible since the degree of f is q. For q = 4, it can be checked case by case that all four elements of F_4 satisfy the equation x^4 = x, so they are zeros of f. By contrast, in F_2, f has only two zeros (namely 0 and 1), so f does not split into linear factors in this smaller field. Elaborating further on basic field-theoretic notions, it can be shown that two finite fields with the same order are isomorphic. It is thus customary to speak of the finite field with q elements, denoted by F_q or GF(q).
History
Historically, three algebraic disciplines led to the concept of a field: the question of solving polynomial equations, algebraic number theory, and algebraic geometry. A first step towards the notion of a field was made in 1770 by Joseph-Louis Lagrange, who observed that permuting the zeros x₁, x₂, x₃ of a cubic polynomial in the expression
(x₁ + ωx₂ + ω²x₃)³
(with ω being a third root of unity) only yields two values. This way, Lagrange conceptually explained the classical solution method of Scipione del Ferro and François Viète, which proceeds by reducing a cubic equation for an unknown x to a quadratic equation for x³. Together with a similar observation for equations of degree 4, Lagrange thus linked what eventually became the concept of fields and the concept of groups. Vandermonde, also in 1770, and to a fuller extent, Carl Friedrich Gauss, in his Disquisitiones Arithmeticae (1801), studied the equation
x^p = 1
for a prime p and, again using modern language, the resulting cyclic Galois group. Gauss deduced that a regular p-gon can be constructed if p = 2^(2^k) + 1. Building on Lagrange's work, Paolo Ruffini claimed (1799) that quintic equations (polynomial equations of degree 5) cannot be solved algebraically; however, his arguments were flawed. These gaps were filled by Niels Henrik Abel in 1824. Évariste Galois, in 1832, devised necessary and sufficient criteria for a polynomial equation to be algebraically solvable, thus establishing in effect what is known as Galois theory today. Both Abel and Galois worked with what is today called an algebraic number field, but conceived neither an explicit notion of a field, nor of a group.
In 1871 Richard Dedekind introduced, for a set of real or complex numbers that is closed under the four arithmetic operations, the German word Körper, which means "body" or "corpus" (to suggest an organically closed entity). The English term "field" was introduced by Eliakim Hastings Moore in 1893.
In 1881 Leopold Kronecker defined what he called a domain of rationality, which is a field of rational fractions in modern terms. Kronecker's notion did not cover the field of all algebraic numbers (which is a field in Dedekind's sense), but on the other hand was more abstract than Dedekind's in that it made no specific assumption on the nature of the elements of a field. Kronecker interpreted a field such as Q(π) abstractly as the rational function field Q(X). Prior to this, examples of transcendental numbers had been known since Joseph Liouville's work in 1844; Charles Hermite (1873) and Ferdinand von Lindemann (1882) proved the transcendence of e and π, respectively.
The first clear definition of an abstract field is due to Heinrich Martin Weber (1893). In particular, Weber's notion included the field F_p. Giuseppe Veronese (1891) studied the field of formal power series, which led Kurt Hensel (1904) to introduce the field of p-adic numbers. Ernst Steinitz (1910) synthesized the knowledge of abstract field theory accumulated so far. He axiomatically studied the properties of fields and defined many important field-theoretic concepts. The majority of the theorems mentioned in the sections Galois theory, Constructing fields and Elementary notions can be found in Steinitz's work. Emil Artin and Otto Schreier (1926) linked the notion of orderings in a field, and thus the area of analysis, to purely algebraic properties. Emil Artin redeveloped Galois theory from 1928 through 1942, eliminating the dependency on the primitive element theorem.
Constructing fields
Constructing fields from rings
A commutative ring is a set that is equipped with an addition and multiplication operation and satisfies all the axioms of a field, except for the existence of multiplicative inverses a⁻¹. For example, the integers Z form a commutative ring, but not a field: the reciprocal of an integer n is not itself an integer, unless n = ±1.
In the hierarchy of algebraic structures fields can be characterized as the commutative rings R in which every nonzero element is a unit (which means every element is invertible). Similarly, fields are the commutative rings with precisely two distinct ideals, (0) and R. Fields are also precisely the commutative rings in which (0) is the only prime ideal.
Given a commutative ring R, there are two ways to construct a field related to R, i.e., two ways of modifying R such that all nonzero elements become invertible: forming the field of fractions, and forming residue fields. The field of fractions of Z is Q, the rationals, while the residue fields of Z are the finite fields F_p.
Field of fractions
Given an integral domain R, its field of fractions Q(R) is built with the fractions of two elements of R exactly as Q is constructed from the integers. More precisely, the elements of Q(R) are the fractions a/b where a and b are in R, and b ≠ 0. Two fractions a/b and c/d are equal if and only if ad = bc. The operations on the fractions work exactly as for rational numbers. For example,
a/b + c/d = (ad + bc)/(bd).
It is straightforward to show that, if the ring is an integral domain, the set of the fractions forms a field.
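Python's standard fractions module implements exactly this construction for R = Z, including the equality rule ad = bc (realized via automatic reduction to lowest terms); a small sketch:

from fractions import Fraction

a = Fraction(1, 2)
b = Fraction(2, 4)          # equal to a, since 1 * 4 == 2 * 2
assert a == b

print(a + Fraction(1, 3))   # 5/6 -- computed as (ad + bc)/(bd), then reduced
print(a / Fraction(3, 7))   # 7/6 -- division by a nonzero fraction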
The field F(X) of the rational fractions over a field (or an integral domain) F is the field of fractions of the polynomial ring F[X]. The field of Laurent series
∑_{i = k}^{∞} a_i x^i (k an integer, a_i in F)
over a field F is the field of fractions of the ring F[[x]] of formal power series (in which k ≥ 0). Since any Laurent series is a fraction of a power series divided by a power of x (as opposed to an arbitrary power series), the representation of fractions is less important in this situation, though.
Residue fields
In addition to the field of fractions, which embeds R injectively into a field, a field can be obtained from a commutative ring R by means of a surjective map onto a field F. Any field obtained in this way is a quotient R/m, where m is a maximal ideal of R. If R has only one maximal ideal m, this field is called the residue field of R.
The ideal generated by a single polynomial f in the polynomial ring R = E[X] (over a field E) is maximal if and only if f is irreducible in E, i.e., if f cannot be expressed as the product of two polynomials in E[X] of smaller degree. This yields a field
F = E[X] / (f(X)).
This field F contains an element x (namely the residue class of X) which satisfies the equation
f(x) = 0.
For example, C is obtained from R by adjoining the imaginary unit symbol i, which satisfies f(i) = 0, where f(X) = X² + 1. Moreover, f is irreducible over R, which implies that the map that sends a polynomial f(X) ∈ R[X] to f(i) yields an isomorphism
R[X] / (X² + 1) ≅ C.
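A minimal sketch of this quotient construction for R[X]/(X² + 1), representing each residue class a + bX by the pair (a, b); the helper name mul is our illustrative choice, and the result reproduces complex multiplication:

def mul(u, v):
    (a, b), (c, d) = u, v
    # (a + bX)(c + dX) = ac + (ad + bc)X + bd X^2, with X^2 reduced to -1
    return (a * c - b * d, a * d + b * c)

i = (0.0, 1.0)                 # the residue class of X
print(mul(i, i))               # (-1.0, 0.0): the element satisfies x^2 = -1
print(mul((1, 2), (3, 4)))     # (-5, 10), matching (1 + 2j) * (3 + 4j)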
Constructing fields within a bigger field
Fields can be constructed inside a given bigger container field. Suppose given a field E, and a field F containing E as a subfield. For any element x of F, there is a smallest subfield of F containing E and x, called the subfield of F generated by x and denoted E(x). The passage from E to E(x) is referred to as adjoining an element to E. More generally, for a subset S ⊂ F, there is a minimal subfield of F containing E and S, denoted by E(S).
The compositum of two subfields E and E′ of some field F is the smallest subfield of F containing both E and E′. The compositum can be used to construct the biggest subfield of F satisfying a certain property, for example the biggest subfield of F which is, in the language introduced below, algebraic over E.
Field extensions
The notion of a subfield E ⊂ F can also be regarded from the opposite point of view, by referring to F being a field extension (or just extension) of E, denoted by
F / E,
and read "F over E".
A basic datum of a field extension is its degree [F : E], i.e., the dimension of F as an E-vector space. It satisfies the formula
[G : E] = [G : F] [F : E].
Extensions whose degree is finite are referred to as finite extensions. The extensions C / R and F_4 / F_2 are of degree 2, whereas R / Q is an infinite extension.
Algebraic extensions
A pivotal notion in the study of field extensions F / E are algebraic elements. An element x ∈ F is algebraic over E if it is a root of a polynomial with coefficients in E, that is, if it satisfies a polynomial equation
e_n x^n + e_{n−1} x^{n−1} + ⋯ + e_1 x + e_0 = 0,
with e_n, ..., e_0 in E, and e_n ≠ 0.
For example, the imaginary unit i in C is algebraic over R, and even over Q, since it satisfies the equation
i² + 1 = 0.
A field extension in which every element of F is algebraic over E is called an algebraic extension. Any finite extension is necessarily algebraic, as can be deduced from the above multiplicativity formula.
The subfield E(x) generated by an element x, as above, is an algebraic extension of E if and only if x is an algebraic element. That is to say, if x is algebraic, all other elements of E(x) are necessarily algebraic as well. Moreover, the degree of the extension E(x) / E, i.e., the dimension of E(x) as an E-vector space, equals the minimal degree n such that there is a polynomial equation involving x, as above. If this degree is n, then the elements of E(x) have the form
e_{n−1} x^{n−1} + ⋯ + e_1 x + e_0, with e_i in E.
For example, the field Q(i) of Gaussian rationals is the subfield of C consisting of all numbers of the form a + bi where both a and b are rational numbers: summands of the form i² (and similarly for higher exponents) do not have to be considered here, since a + bi + ci² can be simplified to (a − c) + bi.
Transcendence bases
The above-mentioned field of rational fractions E(X), where X is an indeterminate, is not an algebraic extension of E since there is no polynomial equation with coefficients in E whose zero is X. Elements, such as X, which are not algebraic are called transcendental. Informally speaking, the indeterminate X and its powers do not interact with elements of E. A similar construction can be carried out with a set of indeterminates, instead of just one.
Once again, the field extension E(x) / E discussed above is a key example: if x is not algebraic (i.e., x is not a root of a polynomial with coefficients in E), then E(x) is isomorphic to E(X). This isomorphism is obtained by substituting x for X in rational fractions.
A subset S of a field F is a transcendence basis if it is algebraically independent (its elements do not satisfy any polynomial relations) over E and if F is an algebraic extension of E(S). Any field extension F / E has a transcendence basis. Thus, field extensions can be split into ones of the form E(S) / E (purely transcendental extensions) and algebraic extensions.
Closure operations
A field is algebraically closed if it does not have any strictly bigger algebraic extensions or, equivalently, if any polynomial equation
f_n x^n + f_{n−1} x^{n−1} + ⋯ + f_1 x + f_0 = 0, with coefficients f_n, ..., f_0 ∈ F, n > 0,
has a solution x ∈ F. By the fundamental theorem of algebra, C is algebraically closed, i.e., any polynomial equation with complex coefficients has a complex solution. The rational and the real numbers are not algebraically closed since the equation
x² + 1 = 0
does not have any rational or real solution. A field containing F is called an algebraic closure of F if it is algebraic over F (roughly speaking, not too big compared to F) and is algebraically closed (big enough to contain solutions of all polynomial equations).
By the above, C is an algebraic closure of R. The situation that the algebraic closure is a finite extension of the field F is quite special: by the Artin–Schreier theorem, the degree of this extension is necessarily 2, and F is elementarily equivalent to R. Such fields are also known as real closed fields.
Any field F has an algebraic closure, which is moreover unique up to (non-unique) isomorphism. It is commonly referred to as the algebraic closure and denoted F̄. For example, the algebraic closure Q̄ of Q is called the field of algebraic numbers. The field F̄ is usually rather implicit since its construction requires the ultrafilter lemma, a set-theoretic axiom that is weaker than the axiom of choice. In this regard, the algebraic closure of F_q is exceptionally simple. It is the union of the finite fields containing F_q (the ones of order q^n). For any algebraically closed field F of characteristic 0, the algebraic closure of the field F((t)) of Laurent series is the field of Puiseux series, obtained by adjoining roots of t.
Fields with additional structure
Since fields are ubiquitous in mathematics and beyond, several refinements of the concept have been adapted to the needs of particular mathematical areas.
Ordered fields
A field F is called an ordered field if any two elements can be compared, so that x + y ≥ 0 and xy ≥ 0 whenever x ≥ 0 and y ≥ 0. For example, the real numbers form an ordered field, with the usual ordering ≥. The Artin–Schreier theorem states that a field can be ordered if and only if it is a formally real field, which means that any quadratic equation
x_1² + x_2² + ⋯ + x_n² = 0
only has the solution x_1 = x_2 = ⋯ = x_n = 0. The set of all possible orders on a fixed field F is isomorphic to the set of ring homomorphisms from the Witt ring W(F) of quadratic forms over F to Z.
An Archimedean field is an ordered field such that for each element there exists a finite expression
1 + 1 + ⋯ + 1
whose value is greater than that element, that is, there are no infinite elements. Equivalently, the field contains no infinitesimals (elements smaller than all rational numbers); or, equivalently, the field is isomorphic to a subfield of R.
An ordered field is Dedekind-complete if all upper bounds, lower bounds (see Dedekind cut) and limits, which should exist, do exist. More formally, each bounded subset of F is required to have a least upper bound. Any complete field is necessarily Archimedean, since in any non-Archimedean field there is neither a greatest infinitesimal nor a least positive rational, whence the sequence 1/2, 1/3, 1/4, ..., every element of which is greater than every infinitesimal, has no limit.
Since every proper subfield of the reals also contains such gaps, R is the unique complete ordered field, up to isomorphism. Several foundational results in calculus follow directly from this characterization of the reals.
The hyperreals form an ordered field that is not Archimedean. It is an extension of the reals obtained by including infinite and infinitesimal numbers. These are larger, respectively smaller, than any real number. The hyperreals form the foundational basis of non-standard analysis.
Topological fields
Another refinement of the notion of a field is a topological field, in which the set F is a topological space, such that all operations of the field (addition, multiplication, the maps a ↦ −a and a ↦ a⁻¹) are continuous maps with respect to the topology of the space.
The topology of all the fields discussed below is induced from a metric, i.e., a function
d : F × F → R
that measures a distance between any two elements of F.
The completion of F is another field in which, informally speaking, the "gaps" in the original field are filled, if there are any. For example, any irrational number x, such as x = √2, is a "gap" in the rationals Q in the sense that it is a real number that can be approximated arbitrarily closely by rational numbers p/q, in the sense that the distance of x and p/q given by the absolute value |x − p/q| is as small as desired.
The following table lists some examples of this construction. The fourth column shows an example of a zero sequence, i.e., a sequence whose limit (for n → ∞) is zero.
The field Q_p is used in number theory and p-adic analysis. The algebraic closure Q̄_p carries a unique norm extending the one on Q_p, but is not complete. The completion of this algebraic closure, however, is algebraically closed. Because of its rough analogy to the complex numbers, it is sometimes called the field of complex p-adic numbers and is denoted by C_p.
Local fields
The following topological fields are called local fields:
finite extensions of Q_p (local fields of characteristic zero)
finite extensions of F_p((t)), the field of Laurent series over F_p (local fields of characteristic p).
These two types of local fields share some fundamental similarities. In this relation, the elements p ∈ Q_p and t ∈ F_p((t)) (referred to as uniformizer) correspond to each other. The first manifestation of this is at an elementary level: the elements of both fields can be expressed as power series in the uniformizer, with coefficients in F_p. (However, since the addition in Q_p is done using carrying, which is not the case in F_p((t)), these fields are not isomorphic.) The following facts show that this superficial similarity goes much deeper:
Any first-order statement that is true for almost all Q_p is also true for almost all F_p((t)). An application of this is the Ax–Kochen theorem describing zeros of homogeneous polynomials in Q_p.
Tamely ramified extensions of both fields are in bijection to one another.
Adjoining arbitrary p-power roots of p (in Q_p), respectively of t (in F_p((t))), yields (infinite) extensions of these fields known as perfectoid fields. Strikingly, the Galois groups of these two fields are isomorphic, which is the first glimpse of a remarkable parallel between these two fields.
Differential fields
Differential fields are fields equipped with a derivation, i.e., a way of taking derivatives of elements in the field. For example, the field R(X), together with the standard derivative of polynomials, forms a differential field. These fields are central to differential Galois theory, a variant of Galois theory dealing with linear differential equations.
Galois theory
Galois theory studies algebraic extensions of a field by studying the symmetry in the arithmetic operations of addition and multiplication. An important notion in this area is that of finite Galois extensions F / E, which are, by definition, those that are separable and normal. The primitive element theorem shows that finite separable extensions are necessarily simple, i.e., of the form
F = E[X] / (f(X)),
where f is an irreducible polynomial (as above). For such an extension, being normal and separable means that all zeros of f are contained in F and that f has only simple zeros. The latter condition is always satisfied if E has characteristic 0.
For a finite Galois extension, the Galois group Gal(F/E) is the group of field automorphisms of F that are trivial on E (i.e., the bijections σ : F → F that preserve addition and multiplication and that send elements of E to themselves). The importance of this group stems from the fundamental theorem of Galois theory, which constructs an explicit one-to-one correspondence between the set of subgroups of Gal(F/E) and the set of intermediate extensions of the extension F / E. By means of this correspondence, group-theoretic properties translate into facts about fields. For example, if the Galois group of a Galois extension as above is not solvable (cannot be built from abelian groups), then the zeros of f cannot be expressed in terms of addition, multiplication, and radicals, i.e., expressions involving n-th roots. For example, the symmetric group S_n is not solvable for n ≥ 5. Consequently, as can be shown, the zeros of the following polynomials are not expressible by sums, products, and radicals. For the latter polynomial, this fact is known as the Abel–Ruffini theorem:
f(X) = X^5 − 4X + 2 (and E = Q),
f(X) = X^n + a_{n−1} X^{n−1} + ⋯ + a_0 (where f is regarded as a polynomial in E(a_0, ..., a_{n−1})[X], for some indeterminates a_i, E is any field, and n ≥ 5).
The tensor product of fields is not usually a field. For example, a finite extension F / E of degree n is a Galois extension if and only if there is an isomorphism of F-algebras
F ⊗_E F ≅ F^n.
This fact is the beginning of Grothendieck's Galois theory, a far-reaching extension of Galois theory applicable to algebro-geometric objects.
Invariants of fields
Basic invariants of a field F include the characteristic and the transcendence degree of F over its prime field. The latter is defined as the maximal number of elements in F that are algebraically independent over the prime field. Two algebraically closed fields E and F are isomorphic precisely if these two data agree. This implies that any two uncountable algebraically closed fields of the same cardinality and the same characteristic are isomorphic. For example, Q̄_p, C_p and C are isomorphic (but not isomorphic as topological fields).
Model theory of fields
In model theory, a branch of mathematical logic, two fields E and F are called elementarily equivalent if every mathematical statement that is true for E is also true for F and conversely. The mathematical statements in question are required to be first-order sentences (involving 0, 1, the addition and multiplication). A typical example, for n > 0, n an integer, is
φ(E) = "any polynomial of degree n in E has a zero in E".
The set of such formulas for all n expresses that E is algebraically closed.
The Lefschetz principle states that C is elementarily equivalent to any algebraically closed field F of characteristic zero. Moreover, any fixed statement φ holds in C if and only if it holds in any algebraically closed field of sufficiently high characteristic.
If U is an ultrafilter on a set I, and F_i is a field for every i in I, the ultraproduct of the F_i with respect to U is a field. It is denoted by
ulim_{i→∞} F_i,
since it behaves in several ways as a limit of the fields F_i: Łoś's theorem states that any first-order statement that holds for all but finitely many F_i also holds for the ultraproduct. Applied to the above sentence φ, this shows that there is an isomorphism
ulim_{p→∞} F̄_p ≅ C.
The Ax–Kochen theorem mentioned above also follows from this and an isomorphism of the ultraproducts (in both cases over all primes p)
ulim_p Q_p ≅ ulim_p F_p((t)).
In addition, model theory also studies the logical properties of various other types of fields, such as real closed fields or exponential fields (which are equipped with an exponential function exp : F → F^×).
Absolute Galois group
For fields that are not algebraically closed (or not separably closed), the absolute Galois group Gal(F) is fundamentally important: extending the case of finite Galois extensions outlined above, this group governs all finite separable extensions of F. By elementary means, the group Gal(F_q) can be shown to be the Prüfer group, the profinite completion of Z. This statement subsumes the fact that the only algebraic extensions of F_q are the fields F_{q^n} for n > 0, and that the Galois groups of these finite extensions are given by
Gal(F_{q^n} / F_q) = Z/nZ.
A description in terms of generators and relations is also known for the Galois groups of p-adic number fields (finite extensions of Q_p).
Representations of Galois groups and of related groups such as the Weil group are fundamental in many branches of arithmetic, such as the Langlands program. The cohomological study of such representations is done using Galois cohomology. For example, the Brauer group, which is classically defined as the group of central simple F-algebras, can be reinterpreted as a Galois cohomology group, namely
Br(F) = H²(F, G_m).
K-theory
Milnor K-theory is defined as
K_n^M(F) = F^× ⊗ ⋯ ⊗ F^× / ⟨x ⊗ (1 − x) | x ∈ F \ {0, 1}⟩.
The norm residue isomorphism theorem, proved around 2000 by Vladimir Voevodsky, relates this to Galois cohomology by means of an isomorphism
K_n^M(F) / p ≅ H^n(F, μ_p^{⊗n}).
Algebraic K-theory is related to the group of invertible matrices with coefficients in the given field. For example, the process of taking the determinant of an invertible matrix leads to an isomorphism K_1(F) = F^×. Matsumoto's theorem shows that K_2(F) agrees with K_2^M(F). In higher degrees, K-theory diverges from Milnor K-theory and remains hard to compute in general.
Applications
Linear algebra and commutative algebra
If a ≠ 0, then the equation
ax = b
has a unique solution x in a field F, namely x = a⁻¹b. This immediate consequence of the definition of a field is fundamental in linear algebra. For example, it is an essential ingredient of Gaussian elimination and of the proof that any vector space has a basis.
The theory of modules (the analogue of vector spaces over rings instead of fields) is much more complicated, because the above equation may have several or no solutions. In particular, systems of linear equations over a ring are much more difficult to solve than in the case of fields, even in the especially simple case of the ring Z of the integers.
Finite fields: cryptography and coding theory
A widely applied cryptographic routine uses the fact that discrete exponentiation, i.e., computing
a^n = a · a · ⋯ · a (n factors, for an integer n ≥ 1)
in a (large) finite field F_q can be performed much more efficiently than the discrete logarithm, which is the inverse operation, i.e., determining the solution n to an equation
a^n = b.
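A small Python sketch of this asymmetry, using an intentionally tiny prime (real systems use groups of cryptographic size); the helper discrete_log is our naive illustration, not a production algorithm:

p = 1009               # a small prime, for illustration only
g, n = 11, 345
b = pow(g, n, p)       # fast even for huge p: square-and-multiply

def discrete_log(g, b, p):
    # Brute-force search for k with g^k = b (mod p); exponential in the
    # bit length of p, which is what the cryptographic routine relies on.
    x = 1
    for k in range(1, p):
        x = x * g % p
        if x == b:
            return k
    return None

assert pow(g, discrete_log(g, b, p), p) == b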
In elliptic curve cryptography, the multiplication in a finite field is replaced by the operation of adding points on an elliptic curve, i.e., the solutions of an equation of the form
y² = x³ + ax + b.
Finite fields are also used in coding theory and combinatorics.
Geometry: field of functions
Functions on a suitable topological space X into a field F can be added and multiplied pointwise, e.g., the product of two functions is defined by the product of their values within the domain:
(f · g)(x) = f(x) · g(x).
This makes these functions a commutative F-algebra.
To have a field of functions, one must consider algebras of functions that are integral domains. In this case the ratios of two functions, i.e., expressions of the form
f / g,
form a field, called the field of functions.
This occurs in two main cases. When X is a complex manifold, one considers the algebra of holomorphic functions, i.e., complex differentiable functions; their ratios form the field of meromorphic functions on X.
The function field of an algebraic variety X (a geometric object defined as the common zeros of polynomial equations) consists of ratios of regular functions, i.e., ratios of polynomial functions on the variety. The function field of the n-dimensional space over a field k is k(x_1, ..., x_n), i.e., the field consisting of ratios of polynomials in n indeterminates. The function field of X is the same as the one of any open dense subvariety. In other words, the function field is insensitive to replacing X by a (slightly) smaller subvariety.
The function field is invariant under isomorphism and birational equivalence of varieties. It is therefore an important tool for the study of abstract algebraic varieties and for the classification of algebraic varieties. For example, the dimension, which equals the transcendence degree of the function field, is invariant under birational equivalence. For curves (i.e., varieties of dimension one), the function field is very close to X: if X is smooth and proper (the analogue of being compact), X can be reconstructed, up to isomorphism, from its field of functions. In higher dimension the function field remembers less, but still decisive, information about X. The study of function fields and their geometric meaning in higher dimensions is referred to as birational geometry. The minimal model program attempts to identify the simplest (in a certain precise sense) algebraic varieties with a prescribed function field.
Number theory: global fields
Global fields are in the limelight in algebraic number theory and arithmetic geometry.
They are, by definition, number fields (finite extensions of Q) or function fields over F_q (finite extensions of F_q(t)). As for local fields, these two types of fields share several similar features, even though they are of characteristic 0 and positive characteristic, respectively. This function field analogy can help to shape mathematical expectations, often first by understanding questions about function fields, and later treating the number field case. The latter is often more difficult. For example, the Riemann hypothesis concerning the zeros of the Riemann zeta function (open as of 2017) can be regarded as being parallel to the Weil conjectures (proven in 1974 by Pierre Deligne).
Cyclotomic fields are among the most intensely studied number fields. They are of the form Q(ζ_n), where ζ_n is a primitive n-th root of unity, i.e., a complex number ζ that satisfies ζ^n = 1 and ζ^m ≠ 1 for all 0 < m < n. For n being a regular prime, Kummer used cyclotomic fields to prove Fermat's Last Theorem, which asserts the non-existence of rational nonzero solutions to the equation
x^n + y^n = z^n.
Local fields are completions of global fields. Ostrowski's theorem asserts that the only completions of Q, a global field, are the local fields Q_p and R. Studying arithmetic questions in global fields may sometimes be done by looking at the corresponding questions locally. This technique is called the local–global principle. For example, the Hasse–Minkowski theorem reduces the problem of finding rational solutions of quadratic equations to solving these equations in R and Q_p, whose solutions can easily be described.
Unlike for local fields, the Galois groups of global fields are not known. Inverse Galois theory studies the (unsolved) problem whether any finite group is the Galois group Gal(F/Q) for some number field F. Class field theory describes the abelian extensions, i.e., ones with abelian Galois group, or equivalently the abelianized Galois groups of global fields. A classical statement, the Kronecker–Weber theorem, describes the maximal abelian extension Q^ab of Q: it is the field
Q(ζ_n, n ≥ 2)
obtained by adjoining all primitive n-th roots of unity. Kronecker's Jugendtraum asks for a similarly explicit description of F^ab of general number fields F. For imaginary quadratic fields F = Q(√−d), d > 0, the theory of complex multiplication describes F^ab using elliptic curves. For general number fields, no such explicit description is known.
Related notions
In addition to the additional structure that fields may enjoy, fields admit various other related notions. Since 1 ≠ 0 holds in any field, any field has at least two elements. Nonetheless, there is a concept of field with one element, which is suggested to be a limit of the finite fields F_p, as p tends to 1. In addition to division rings, there are various other weaker algebraic structures related to fields such as quasifields, near-fields and semifields.
There are also proper classes with field structure, which are sometimes called Fields, with a capital 'F'. The surreal numbers form a Field containing the reals, and would be a field except for the fact that they are a proper class, not a set. The nimbers, a concept from game theory, form such a Field as well.
Division rings
Dropping one or several axioms in the definition of a field leads to other algebraic structures. As was mentioned above, commutative rings satisfy all field axioms except for the existence of multiplicative inverses. Dropping instead commutativity of multiplication leads to the concept of a division ring or skew field; sometimes associativity is weakened as well. The only division rings that are finite-dimensional R-vector spaces are R itself, C (which is a field), and the quaternions H (in which multiplication is non-commutative). This result is known as the Frobenius theorem. The octonions O, for which multiplication is neither commutative nor associative, form a normed alternative division algebra, but are not a division ring. This fact was proved using methods of algebraic topology in 1958 by Michel Kervaire, Raoul Bott, and John Milnor.
Wedderburn's little theorem states that all finite division rings are fields.
| Mathematics | Algebra | null |
10606 | https://en.wikipedia.org/wiki/Factorial | Factorial | In mathematics, the factorial of a non-negative integer n, denoted n!, is the product of all positive integers less than or equal to n. The factorial also equals the product of n with the next smaller factorial:
n! = n × (n − 1)!
For example, 5! = 5 × 4! = 5 × 24 = 120.
The value of 0! is 1, according to the convention for an empty product.
Factorials have been discovered in several ancient cultures, notably in Indian mathematics in the canonical works of Jain literature, and by Jewish mystics in the Talmudic book Sefer Yetzirah. The factorial operation is encountered in many areas of mathematics, notably in combinatorics, where its most basic use counts the possible distinct sequences – the permutations – of n distinct objects: there are n!. In mathematical analysis, factorials are used in power series for the exponential function and other functions, and they also have applications in algebra, number theory, probability theory, and computer science.
Much of the mathematics of the factorial function was developed beginning in the late 18th and early 19th centuries.
Stirling's approximation provides an accurate approximation to the factorial of large numbers, showing that it grows more quickly than exponential growth. Legendre's formula describes the exponents of the prime numbers in a prime factorization of the factorials, and can be used to count the trailing zeros of the factorials. Daniel Bernoulli and Leonhard Euler interpolated the factorial function to a continuous function of complex numbers, except at the negative integers: the (offset) gamma function.
Many other notable functions and number sequences are closely related to the factorials, including the binomial coefficients, double factorials, falling factorials, primorials, and subfactorials. Implementations of the factorial function are commonly used as an example of different computer programming styles, and are included in scientific calculators and scientific computing software libraries. Although directly computing large factorials using the product formula or recurrence is not efficient, faster algorithms are known, matching to within a constant factor the time for fast multiplication algorithms for numbers with the same number of digits.
History
The concept of factorials has arisen independently in many cultures:
In Indian mathematics, one of the earliest known descriptions of factorials comes from the Anuyogadvāra-sūtra, one of the canonical works of Jain literature, which has been assigned dates varying from 300 BCE to 400 CE. It separates out the sorted and reversed order of a set of items from the other ("mixed") orders, evaluating the number of mixed orders by subtracting two from the usual product formula for the factorial. The product rule for permutations was also described by 6th-century CE Jain monk Jinabhadra. Hindu scholars have been using factorial formulas since at least 1150, when Bhāskara II mentioned factorials in his work Līlāvatī, in connection with a problem of how many ways Vishnu could hold his four characteristic objects (a conch shell, discus, mace, and lotus flower) in his four hands, and a similar problem for a ten-handed god.
In the mathematics of the Middle East, the Hebrew mystic book of creation Sefer Yetzirah, from the Talmudic period (200 to 500 CE), lists factorials up to 7! as part of an investigation into the number of words that can be formed from the Hebrew alphabet. Factorials were also studied for similar reasons by 8th-century Arab grammarian Al-Khalil ibn Ahmad al-Farahidi. Arab mathematician Ibn al-Haytham (also known as Alhazen, c. 965 – c. 1040) was the first to formulate Wilson's theorem connecting the factorials with the prime numbers.
In Europe, although Greek mathematics included some combinatorics, and Plato famously used 5,040 (a factorial) as the population of an ideal community, in part because of its divisibility properties, there is no direct evidence of ancient Greek study of factorials. Instead, the first work on factorials in Europe was by Jewish scholars such as Shabbethai Donnolo, explicating the Sefer Yetzirah passage. In 1677, British author Fabian Stedman described the application of factorials to change ringing, a musical art involving the ringing of several tuned bells.
From the late 15th century onward, factorials became the subject of study by Western mathematicians. In a 1494 treatise, Italian mathematician Luca Pacioli calculated factorials up to 11!, in connection with a problem of dining table arrangements. Christopher Clavius discussed factorials in a 1603 commentary on the work of Johannes de Sacrobosco, and in the 1640s, French polymath Marin Mersenne published large (but not entirely correct) tables of factorials, up to 64!, based on the work of Clavius. The power series for the exponential function, with the reciprocals of factorials for its coefficients, was first formulated in 1676 by Isaac Newton in a letter to Gottfried Wilhelm Leibniz. Other important works of early European mathematics on factorials include extensive coverage in a 1685 treatise by John Wallis, a study of their approximate values for large values of n by Abraham de Moivre in 1721, a 1729 letter from James Stirling to de Moivre stating what became known as Stirling's approximation, and work at the same time by Daniel Bernoulli and Leonhard Euler formulating the continuous extension of the factorial function to the gamma function. Adrien-Marie Legendre included Legendre's formula, describing the exponents in the factorization of factorials into prime powers, in an 1808 text on number theory.
The notation n! for factorials was introduced by the French mathematician Christian Kramp in 1808. Many other notations have also been used. Another later notation, in which the argument of the factorial was half-enclosed by the left and bottom sides of a box, was popular for some time in Britain and America but fell out of use, perhaps because it is difficult to typeset. The word "factorial" (originally French: factorielle) was first used in 1800 by Louis François Antoine Arbogast, in the first work on Faà di Bruno's formula, but referring to a more general concept of products of arithmetic progressions. The "factors" that this name refers to are the terms of the product formula for the factorial.
Definition
The factorial function of a positive integer n is defined by the product of all positive integers not greater than n:
n! = 1 × 2 × 3 × ⋯ × (n − 2) × (n − 1) × n.
This may be written more concisely in product notation as
n! = ∏_{i=1}^{n} i.
If this product formula is changed to keep all but the last term, it would define a product of the same form, for a smaller factorial. This leads to a recurrence relation, according to which each value of the factorial function can be obtained by multiplying the previous value by n:
n! = n × (n − 1)!.
For example, 5! = 5 × 4! = 5 × 24 = 120.
Factorial of zero
The factorial of 0 is 1, or in symbols, 0! = 1. There are several motivations for this definition:
For n = 0, the definition of n! as a product involves the product of no numbers at all, and so is an example of the broader convention that the empty product, a product of no factors, is equal to the multiplicative identity.
There is exactly one permutation of zero objects: with nothing to permute, the only rearrangement is to do nothing.
This convention makes many identities in combinatorics valid for all valid choices of their parameters. For instance, the number of ways to choose all n elements from a set of n is (n choose n) = n!/(n! 0!) = 1, a binomial coefficient identity that would only be valid with 0! = 1.
With 0! = 1, the recurrence relation for the factorial remains valid at n = 1. Therefore, with this convention, a recursive computation of the factorial needs to have only the value for zero as a base case, simplifying the computation and avoiding the need for additional special cases.
Setting 0! = 1 allows for the compact expression of many formulae, such as the exponential function, as a power series:
e^x = ∑_{n=0}^{∞} x^n / n!.
This choice matches the gamma function, 0! = Γ(0 + 1) = 1, and the gamma function must have this value to be a continuous function.
Applications
The earliest uses of the factorial function involve counting permutations: there are n! different ways of arranging n distinct objects into a sequence. Factorials appear more broadly in many formulas in combinatorics, to account for different orderings of objects. For instance the binomial coefficients (n choose k) count the k-element combinations (subsets of k elements) from a set with n elements, and can be computed from factorials using the formula
(n choose k) = n! / (k! (n − k)!).
The Stirling numbers of the first kind sum to the factorials, and count the permutations of n grouped into subsets with the same numbers of cycles. Another combinatorial application is in counting derangements, permutations that do not leave any element in its original position; the number of derangements of n items is the nearest integer to n!/e.
In algebra, the factorials arise through the binomial theorem, which uses binomial coefficients to expand powers of sums. They also occur in the coefficients used to relate certain families of polynomials to each other, for instance in Newton's identities for symmetric polynomials. Their use in counting permutations can also be restated algebraically: the factorials are the orders of finite symmetric groups. In calculus, factorials occur in Faà di Bruno's formula for chaining higher derivatives. In mathematical analysis, factorials frequently appear in the denominators of power series, most notably in the series for the exponential function,
e^x = 1 + x/1! + x²/2! + x³/3! + ⋯,
and in the coefficients of other Taylor series (in particular those of the trigonometric and hyperbolic functions), where they cancel factors of n! coming from the nth derivative of x^n. This usage of factorials in power series connects back to analytic combinatorics through the exponential generating function, which for a combinatorial class with n_i elements of size i is defined as the power series
∑_{i=0}^{∞} x^i n_i / i!.
In number theory, the most salient property of factorials is the divisibility of n! by all positive integers up to n, described more precisely for prime factors by Legendre's formula. It follows that arbitrarily large prime numbers can be found as the prime factors of the numbers
n! ± 1,
leading to a proof of Euclid's theorem that the number of primes is infinite. When n! ± 1 is itself prime it is called a factorial prime; relatedly, Brocard's problem, also posed by Srinivasa Ramanujan, concerns the existence of square numbers of the form n! + 1. In contrast, the numbers n! + 2, n! + 3, ..., n! + n must all be composite, proving the existence of arbitrarily large prime gaps. An elementary proof of Bertrand's postulate on the existence of a prime in any interval of the form [n, 2n], one of the first results of Paul Erdős, was based on the divisibility properties of factorials. The factorial number system is a mixed radix notation for numbers in which the place values of each digit are factorials.
Factorials are used extensively in probability theory, for instance in the Poisson distribution and in the probabilities of random permutations. In computer science, beyond appearing in the analysis of brute-force searches over permutations, factorials arise in the lower bound of log₂ n! = n log₂ n − O(n) on the number of comparisons needed to comparison sort a set of n items, and in the analysis of chained hash tables, where the distribution of keys per cell can be accurately approximated by a Poisson distribution. Moreover, factorials naturally appear in formulae from quantum and statistical physics, where one often considers all the possible permutations of a set of particles. In statistical mechanics, calculations of entropy such as Boltzmann's entropy formula or the Sackur–Tetrode equation must correct the count of microstates by dividing by the factorials of the numbers of each type of indistinguishable particle to avoid the Gibbs paradox. Quantum physics provides the underlying reason for why these corrections are necessary.
Properties
Growth and approximation
As a function of n, the factorial has faster than exponential growth, but grows more slowly than a double exponential function. Its growth rate is similar to n^n, but slower by an exponential factor. One way of approaching this result is by taking the natural logarithm of the factorial, which turns its product formula into a sum, and then estimating the sum by an integral:
ln n! = ∑_{x=1}^{n} ln x ≈ ∫_1^n ln x dx = n ln n − n + 1.
Exponentiating the result (and ignoring the negligible +1 term) approximates n! as (n/e)^n.
More carefully bounding the sum both above and below by an integral, using the trapezoid rule, shows that this estimate needs a correction factor proportional to √n. The constant of proportionality for this correction can be found from the Wallis product, which expresses π as a limiting ratio of factorials and powers of two. The result of these corrections is Stirling's approximation:
n! ∼ √(2πn) (n/e)^n.
Here, the symbol ∼ means that, as n goes to infinity, the ratio between the left and right sides approaches one in the limit.
Stirling's formula provides the first term in an asymptotic series that becomes even more accurate when taken to greater numbers of terms:
n! ∼ √(2πn) (n/e)^n (1 + 1/(12n) + 1/(288n²) − 139/(51840n³) − ⋯).
An alternative version uses only odd exponents in the correction terms:
n! ∼ √(2πn) (n/e)^n exp(1/(12n) − 1/(360n³) + 1/(1260n⁵) − ⋯).
Many other variations of these formulas have also been developed, by Srinivasa Ramanujan, Bill Gosper, and others.
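A quick numerical sketch of Stirling's approximation, together with the effect of the first correction term 1/(12n) from the asymptotic series:

import math

def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (5, 10, 20):
    exact = math.factorial(n)
    approx = stirling(n)
    corrected = approx * (1 + 1 / (12 * n))
    # Both ratios approach 1; the corrected estimate converges much faster.
    print(n, exact / approx, exact / corrected)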
The binary logarithm of the factorial, used to analyze comparison sorting, can be very accurately estimated using Stirling's approximation:
log₂ n! = n log₂ n − (log₂ e) n + (1/2) log₂ n + O(1).
In this formula, the O(1) term invokes big O notation.
Divisibility and digits
The product formula for the factorial implies that n! is divisible by all prime numbers that are at most n, and by no larger prime numbers. More precise information about its divisibility is given by Legendre's formula, which gives the exponent of each prime p in the prime factorization of n! as
∑_{i=1}^{∞} ⌊n/p^i⌋ = (n − s_p(n)) / (p − 1).
Here s_p(n) denotes the sum of the base-p digits of n, and the exponent given by this formula can also be interpreted in advanced mathematics as the p-adic valuation of the factorial. Applying Legendre's formula to the product formula for binomial coefficients produces Kummer's theorem, a similar result on the exponent of each prime in the factorization of a binomial coefficient. Grouping the prime factors of the factorial into prime powers in different ways produces the multiplicative partitions of factorials.
The special case of Legendre's formula for p = 5 gives the number of trailing zeros in the decimal representation of the factorials. According to this formula, the number of zeros can be obtained by subtracting the base-5 digits of n from n, and dividing the result by four. Legendre's formula implies that the exponent of the prime 2 is always larger than the exponent for 5, so each factor of five can be paired with a factor of two to produce one of these trailing zeros. The leading digits of the factorials are distributed according to Benford's law. Every sequence of digits, in any base, is the sequence of initial digits of some factorial number in that base.
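In code, Legendre's formula for p = 5 gives the trailing zeros directly; a short sketch (the helper names are our illustrative choices):

import math

def prime_exponent_in_factorial(n, p):
    # Legendre's formula: sum of floor(n / p^i) over i >= 1
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

def trailing_zeros(n):
    # exponent of 5 in n!; factors of 2 are always at least as plentiful
    return prime_exponent_in_factorial(n, 5)

print(trailing_zeros(100))   # 24 = floor(100/5) + floor(100/25)
s = str(math.factorial(100))
assert s.endswith('0' * 24) and not s.endswith('0' * 25)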
Another result on divisibility of factorials, Wilson's theorem, states that (n − 1)! + 1 is divisible by n if and only if n is a prime number. For any given integer n, the Kempner function of n is given by the smallest k for which n divides k!. For almost all numbers (all but a subset of exceptions with asymptotic density zero), it coincides with the largest prime factor of n.
The product of two factorials, m! · n!, always evenly divides (m + n)!. There are infinitely many factorials that equal the product of other factorials: if n is itself any product of factorials, then n! equals that same product multiplied by one more factorial, (n − 1)!. The only known examples of factorials that are products of other factorials but are not of this "trivial" form are 9! = 7! · 3! · 3! · 2!, 10! = 7! · 6! = 7! · 5! · 3!, and 16! = 14! · 5! · 2!. It would follow from the abc conjecture that there are only finitely many nontrivial examples.
The greatest common divisor of the values of a primitive polynomial of degree d over the integers evenly divides d!.
Continuous interpolation and non-integer generalization
There are infinitely many ways to extend the factorials to a continuous function. The most widely used of these uses the gamma function, which can be defined for positive real numbers as the integral
Γ(z) = ∫_0^∞ x^(z−1) e^(−x) dx.
The resulting function is related to the factorial of a non-negative integer n by the equation
n! = Γ(n + 1),
which can be used as a definition of the factorial for non-integer arguments.
At all values x for which both Γ(x) and Γ(x − 1) are defined, the gamma function obeys the functional equation
Γ(x) = (x − 1) Γ(x − 1),
generalizing the recurrence relation for the factorials.
The same integral converges more generally for any complex number z whose real part is positive. It can be extended to the non-integer points in the rest of the complex plane by solving for Euler's reflection formula
Γ(z) Γ(1 − z) = π / sin(πz).
However, this formula cannot be used at integers because, for them, the sin(πz) term would produce a division by zero. The result of this extension process is an analytic function, the analytic continuation of the integral formula for the gamma function. It has a nonzero value at all complex numbers, except for the non-positive integers where it has simple poles. Correspondingly, this provides a definition for the factorial at all complex numbers other than the negative integers.
One property of the gamma function, distinguishing it from other continuous interpolations of the factorials, is given by the Bohr–Mollerup theorem, which states that the gamma function (offset by one) is the only log-convex function on the positive real numbers that interpolates the factorials and obeys the same functional equation. A related uniqueness theorem of Helmut Wielandt states that the complex gamma function and its scalar multiples are the only holomorphic functions on the positive complex half-plane that obey the functional equation and remain bounded for complex numbers with real part between 1 and 2.
Other complex functions that interpolate the factorial values include Hadamard's gamma function, which is an entire function over all the complex numbers, including the non-positive integers. In the p-adic numbers, it is not possible to continuously interpolate the factorial function directly, because the factorials of large integers (a dense subset of the p-adics) converge to zero according to Legendre's formula, forcing any continuous function that is close to their values to be zero everywhere. Instead, the p-adic gamma function provides a continuous interpolation of a modified form of the factorial, omitting the factors in the factorial that are divisible by p.
The digamma function is the logarithmic derivative of the gamma function. Just as the gamma function provides a continuous interpolation of the factorials, offset by one, the digamma function provides a continuous interpolation of the harmonic numbers, offset by the Euler–Mascheroni constant.
Computation
The factorial function is a common feature in scientific calculators. It is also included in scientific programming libraries such as the Python mathematical functions module and the Boost C++ library. If efficiency is not a concern, computing factorials is trivial: just successively multiply a variable initialized to 1 by the integers up to n. The simplicity of this computation makes it a common example in the use of different computer programming styles and methods.
The computation of n! can be expressed in pseudocode using iteration as
define factorial(n):
f := 1
for i := 1, 2, 3, ..., n:
f := f * i
return f
or using recursion based on its recurrence relation as
define factorial(n):
if (n = 0) return 1
return n * factorial(n − 1)
Other methods suitable for its computation include memoization, dynamic programming, and functional programming. The computational complexity of these algorithms may be analyzed using the unit-cost random-access machine model of computation, in which each arithmetic operation takes constant time and each number uses a constant amount of storage space. In this model, these methods can compute n! in time O(n), and the iterative version uses space O(1). Unless optimized for tail recursion, the recursive version takes linear space to store its call stack. However, this model of computation is only suitable when n is small enough to allow n! to fit into a machine word. The values 12! and 20! are the largest factorials that can be stored in, respectively, the 32-bit and 64-bit integers. Floating point can represent larger factorials, but approximately rather than exactly, and will still overflow for factorials larger than 170!.
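For instance, the memoized variant can be sketched in Python using functools.lru_cache, so that repeated calls reuse previously computed values:

from functools import lru_cache

@lru_cache(maxsize=None)
def factorial(n):
    # recursive definition with 0! = 1 as the only base case
    return 1 if n == 0 else n * factorial(n - 1)

print(factorial(20))   # 2432902008176640000, the largest factorial fitting in 64 bits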
The exact computation of larger factorials involves arbitrary-precision arithmetic, because of fast growth and integer overflow. Time of computation can be analyzed as a function of the number of digits or bits in the result. By Stirling's formula, n! has b = O(n log n) bits. The Schönhage–Strassen algorithm can produce a b-bit product in time O(b log b log log b), and faster multiplication algorithms taking time O(b log b) are known. However, computing the factorial involves repeated products, rather than a single multiplication, so these time bounds do not apply directly. In this setting, computing n! by multiplying the numbers from 1 to n in sequence is inefficient, because it involves n multiplications, a constant fraction of which take time O(n log² n) each, giving total time O(n² log² n). A better approach is to perform the multiplications as a divide-and-conquer algorithm that multiplies a sequence of i numbers by splitting it into two subsequences of i/2 numbers, multiplies each subsequence, and combines the results with one last multiplication. This approach to the factorial takes total time O(n log³ n): one logarithm comes from the number of bits in the factorial, a second comes from the multiplication algorithm, and a third comes from the divide and conquer.
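A sketch of the divide-and-conquer scheme just described, relying on Python's built-in arbitrary-precision integers (the helper name product_range is ours):

import math

def product_range(lo, hi):
    # product of the integers lo..hi inclusive, split recursively in half
    if lo > hi:
        return 1
    if lo == hi:
        return lo
    mid = (lo + hi) // 2
    return product_range(lo, mid) * product_range(mid + 1, hi)

def factorial(n):
    return product_range(1, n)

assert factorial(1000) == math.factorial(1000)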
Even better efficiency is obtained by computing n! from its prime factorization, based on the principle that exponentiation by squaring is faster than expanding an exponent into a product. An algorithm for this by Arnold Schönhage begins by finding the list of the primes up to n, for instance using the sieve of Eratosthenes, and uses Legendre's formula to compute the exponent for each prime. Then it computes the product of the prime powers with these exponents, using a recursive algorithm, as follows:
Use divide and conquer to compute the product of the primes whose exponents are odd
Divide all of the exponents by two (rounding down to an integer), recursively compute the product of the prime powers with these smaller exponents, and square the result
Multiply together the results of the two previous steps
The product of all primes up to n is an O(n)-bit number, by the prime number theorem, so the time for the first step is O(n log^2 n), with one logarithm coming from the divide and conquer and another coming from the multiplication algorithm. In the recursive calls to the algorithm, the prime number theorem can again be invoked to prove that the numbers of bits in the corresponding products decrease by a constant factor at each level of recursion, so the total time for these steps at all levels of recursion adds in a geometric series to O(n log^2 n). The time for the squaring in the second step and the multiplication in the third step are again O(n log^2 n), because each is a single multiplication of a number with O(n log n) bits. Again, at each level of recursion the numbers involved have a constant fraction as many bits (because otherwise repeatedly squaring them would produce too large a final result), so again the amounts of time for these steps in the recursive calls add in a geometric series to O(n log^2 n). Consequently, the whole algorithm takes time O(n log^2 n), proportional to a single multiplication with the same number of bits in its result.
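A Python sketch of this prime-factorization method follows (the function names are illustrative, not taken from Schönhage's paper):
def primes_up_to(n):
    # Sieve of Eratosthenes: all primes <= n.
    sieve = [True] * (n + 1)
    sieve[:2] = [False] * min(2, n + 1)
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]
def legendre(n, p):
    # Exponent of the prime p in n!, by Legendre's formula.
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e
def product(nums):
    # Divide-and-conquer product of a list of integers.
    if not nums:
        return 1
    if len(nums) == 1:
        return nums[0]
    mid = len(nums) // 2
    return product(nums[:mid]) * product(nums[mid:])
def factorial_by_primes(n):
    pairs = [(p, legendre(n, p)) for p in primes_up_to(n)]
    def power_product(pairs):
        if not pairs:
            return 1
        odd = product([p for p, e in pairs if e % 2 == 1])  # step 1
        half = power_product([(p, e // 2) for p, e in pairs if e >= 2])  # step 2
        return odd * half * half  # step 3
    return power_product(pairs)
assert factorial_by_primes(10) == 3628800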
Related sequences and functions
Several other integer sequences are similar to or related to the factorials:
Alternating factorial
The alternating factorial is the absolute value of the alternating sum of the first n factorials, n! − (n − 1)! + (n − 2)! − ⋯ ± 1!. These have mainly been studied in connection with their primality; only finitely many of them can be prime, but a complete list of primes of this form is not known.
Bhargava factorial
The Bhargava factorials are a family of integer sequences defined by Manjul Bhargava with similar number-theoretic properties to the factorials, including the factorials themselves as a special case.
Double factorial
The product of all the odd integers up to some odd positive integer n is called the double factorial of n, and denoted by n!!. That is, n!! = 1 × 3 × 5 × ⋯ × n. For example, 9!! = 1 × 3 × 5 × 7 × 9 = 945. Double factorials are used in trigonometric integrals, in expressions for the gamma function at half-integers and the volumes of hyperspheres, and in counting binary trees and perfect matchings.
Exponential factorial
Just as triangular numbers sum the numbers from 1 to n, and factorials take their product, the exponential factorial exponentiates. The exponential factorial is defined recursively by a(1) = 1 and a(n) = n^a(n−1). For example, the exponential factorial of 4 is 4^(3^(2^1)) = 4^9 = 262144. These numbers grow much more quickly than regular factorials.
Falling factorial
The notations (x)_n or x^(n) are sometimes used to represent the product of the n integers counting up to and including x, equal to x!/(x − n)!. This is also known as a falling factorial or backward factorial, and the (x)_n notation is a Pochhammer symbol. Falling factorials count the number of different sequences of n distinct items that can be drawn from a universe of x items. They occur as coefficients in the higher derivatives of polynomials, and in the factorial moments of random variables.
Hyperfactorials
The hyperfactorial of n is the product 1^1 · 2^2 ⋯ n^n. These numbers form the discriminants of Hermite polynomials. They can be continuously interpolated by the K-function, and obey analogues to Stirling's formula and Wilson's theorem.
Jordan–Pólya numbers
The Jordan–Pólya numbers are the products of factorials, allowing repetitions. Every tree has a symmetry group whose number of symmetries is a Jordan–Pólya number, and every Jordan–Pólya number counts the symmetries of some tree.
Primorial
The primorial n# is the product of the prime numbers less than or equal to n; this construction gives them some similar divisibility properties to factorials, but unlike factorials they are squarefree. As with the factorial primes n! ± 1, researchers have studied primorial primes n# ± 1.
Subfactorial
The subfactorial yields the number of derangements of a set of n objects. It is sometimes denoted !n, and equals the closest integer to n!/e.
Superfactorial
The superfactorial of n is the product of the first n factorials. The superfactorials are continuously interpolated by the Barnes G-function.
| Mathematics | Basics | null |
10628 | https://en.wikipedia.org/wiki/Fullerene | Fullerene | A fullerene is an allotrope of carbon whose molecules consist of carbon atoms connected by single and double bonds so as to form a closed or partially closed mesh, with fused rings of five to six atoms. The molecules may have hollow sphere- and ellipsoid-like forms, tubes, or other shapes.
Fullerenes with a closed mesh topology are informally denoted by their empirical formula Cn, where n is the number of carbon atoms. However, for some values of n there may be more than one isomer.
The family is named after buckminsterfullerene (C60), the most famous member, which in turn is named after Buckminster Fuller. The closed fullerenes, especially C60, are also informally called buckyballs for their resemblance to the standard ball of association football ("soccer"). Nested closed fullerenes have been named bucky onions. Cylindrical fullerenes are also called carbon nanotubes or buckytubes. The bulk solid form of pure or mixed fullerenes is called fullerite.
Fullerenes had been predicted for some time, but only after their accidental synthesis in 1985 were they detected in nature and outer space. The discovery of fullerenes greatly expanded the number of known allotropes of carbon, which had previously been limited to graphite, diamond, and amorphous carbon such as soot and charcoal. They have been the subject of intense research, both for their chemistry and for their technological applications, especially in materials science, electronics, and nanotechnology.
Definition
IUPAC defines fullerenes as "polyhedral closed cages made up entirely of n three-coordinate carbon atoms and having 12 pentagonal and (n/2-10) hexagonal faces, where n ≥ 20."
History
Predictions and limited observations
The icosahedral cage was mentioned in 1965 as a possible topological structure. Eiji Osawa predicted the existence of C60 in 1970. He noticed that the structure of a corannulene molecule was a subset of the shape of a football, and hypothesised that a full ball shape could also exist. Japanese scientific journals reported his idea, but neither it nor any translations of it reached Europe or the Americas.
Also in 1970, R. W. Henson (then of the UK Atomic Energy Research Establishment) proposed the structure of C60 and made a model of it. Unfortunately, the evidence for that new form of carbon was very weak at the time, so the proposal was met with skepticism, and was never published. It was acknowledged only in 1999.
In 1973, independently from Henson, D. A. Bochvar and E. G. Galpern made a quantum-chemical analysis of the stability of C60 and calculated its electronic structure. The paper was published in 1973, but the scientific community did not give much importance to this theoretical prediction.
Around 1980, Sumio Iijima identified the molecule of C60 from an electron microscope image of carbon black, where it formed the core of a particle with the structure of a "bucky onion".
Also in the 1980s at MIT, Mildred Dresselhaus and Morinobu Endo, collaborating with T. Venkatesan, directed studies blasting graphite with lasers, producing carbon clusters of atoms, which would be later identified as "fullerenes."
Discovery of C60
In 1985, Harold Kroto of the University of Sussex, working with James R. Heath, Sean O'Brien, Robert Curl and Richard Smalley from Rice University, discovered fullerenes in the sooty residue created by vaporising carbon in a helium atmosphere. In the mass spectrum of the product, discrete peaks appeared corresponding to molecules with the exact mass of sixty or seventy or more carbon atoms, namely C60 and C70. The team identified their structure as the now familiar "buckyballs".
The name "buckminsterfullerene" was eventually chosen for by the discoverers as an homage to American architect Buckminster Fuller for the vague similarity of the structure to the geodesic domes which he popularized; which, if they were extended to a full sphere, would also have the icosahedral symmetry group. The "ene" ending was chosen to indicate that the carbons are unsaturated, being connected to only three other atoms instead of the normal four. The shortened name "fullerene" eventually came to be applied to the whole family.
Kroto, Curl, and Smalley were awarded the 1996 Nobel Prize in Chemistry for their roles in the discovery of this class of molecules.
Further developments
Kroto and the Rice team already discovered other fullerenes besides C60, and the list was much expanded in the following years. Carbon nanotubes were first discovered and synthesized in 1991.
After their discovery, minute quantities of fullerenes were found to be produced in sooty flames, and by lightning discharges in the atmosphere. In 1992, fullerenes were found in a family of mineraloids known as shungites in Karelia, Russia.
The production techniques were improved by many scientists, including Donald Huffman, Wolfgang Krätschmer, Lowell D. Lamb, and Konstantinos Fostiropoulos. Thanks to their efforts, by 1990 it was relatively easy to produce gram-sized samples of fullerene powder. Fullerene purification remains a challenge to chemists and to a large extent determines fullerene prices.
In 2010, the spectral signatures of C60 and C70 were observed by NASA's Spitzer infrared telescope in a cloud of cosmic dust surrounding a star 6500 light years away. Kroto commented: "This most exciting breakthrough provides convincing evidence that the buckyball has, as I long suspected, existed since time immemorial in the dark recesses of our galaxy." According to astronomer Letizia Stanghellini, "It’s possible that buckyballs from outer space provided seeds for life on Earth." In 2019, ionized C60 molecules were detected with the Hubble Space Telescope in the space between those stars.
Types
There are two major families of fullerenes, with fairly distinct properties and applications: the closed buckyballs and the open-ended cylindrical carbon nanotubes. However, hybrid structures exist between those two classes, such as carbon nanobuds — nanotubes capped by hemispherical meshes or larger "buckybuds".
Buckyballs
Buckminsterfullerene
Buckminsterfullerene is the smallest fullerene molecule containing pentagonal and hexagonal rings in which no two pentagons share an edge (which can be destabilizing, as in pentalene). It is also most common in terms of natural occurrence, as it can often be found in soot.
The empirical formula of buckminsterfullerene is C60, and its structure is a truncated icosahedron, which resembles an association football ball of the type made of twenty hexagons and twelve pentagons, with a carbon atom at the vertices of each polygon and a bond along each polygon edge.
The van der Waals diameter of a buckminsterfullerene molecule is about 1.1 nanometers (nm). The nucleus to nucleus diameter of a buckminsterfullerene molecule is about 0.71 nm.
The buckminsterfullerene molecule has two bond lengths. The 6:6 ring bonds (between two hexagons) can be considered "double bonds" and are shorter (1.401 Å) than the 6:5 bonds (1.458 Å, between a hexagon and a pentagon). The weighted average bond length is 1.44 Å.
Other fullerenes
Another fairly common fullerene has empirical formula C70, but fullerenes with 72, 76, 84 and even up to 100 carbon atoms are commonly obtained.
The smallest possible fullerene is the dodecahedral C20. There are no fullerenes with 22 vertices. The number of different fullerenes C2n grows with increasing n = 12, 13, 14, ..., roughly in proportion to n^9. For instance, there are 1812 non-isomorphic fullerenes C60. Note that only one form of C60, buckminsterfullerene, has no pair of adjacent pentagons (the smallest such fullerene). To further illustrate the growth, there are 214,127,713 non-isomorphic fullerenes C200, 15,655,672 of which have no adjacent pentagons. Optimized structures of many fullerene isomers are published and listed on the web.
Heterofullerenes have heteroatoms substituting carbons in cage or tube-shaped structures. They were discovered in 1993 and greatly expand the overall fullerene class of compounds and can have dangling bonds on their surfaces. Notable examples include boron, nitrogen (azafullerene), oxygen, and phosphorus derivatives.
Carbon nanotubes
Carbon nanotubes are cylindrical fullerenes. These tubes of carbon are usually only a few nanometres wide, but they can range from less than a micrometer to several millimeters in length. They often have closed ends, but can be open-ended as well. There are also cases in which the tube reduces in diameter before closing off. Their unique molecular structure results in extraordinary macroscopic properties, including high tensile strength, high electrical conductivity, high ductility, high heat conductivity, and relative chemical inactivity (as it is cylindrical and "planar" — that is, it has no "exposed" atoms that can be easily displaced). One proposed use of carbon nanotubes is in paper batteries, developed in 2007 by researchers at Rensselaer Polytechnic Institute. Another highly speculative proposed use in the field of space technologies is to produce high-tensile carbon cables required by a space elevator.
Derivatives
Buckyballs and carbon nanotubes have been used as building blocks for a great variety of derivatives and larger structures, such as
Nested buckyballs ("carbon nano-onions" or "buckyonions") proposed for lubricants;
Nested carbon nanotubes ("carbon megatubes")
Linked "ball-and-chain" dimers (two buckyballs linked by a carbon chain)
Rings of buckyballs linked together.
Heterofullerenes and non-carbon fullerenes
After the discovery of C60, many fullerenes have been synthesized (or studied theoretically by molecular modeling methods) in which some or all the carbon atoms are replaced by other elements. Non-carbon nanotubes, in particular, have attracted much attention.
Boron
A type of buckyball which uses boron atoms, instead of the usual carbon, was predicted and described in 2007. The B80 structure, with each atom forming 5 or 6 bonds, was predicted to be more stable than the C60 buckyball. However, subsequent analysis found that the predicted Ih symmetric structure was vibrationally unstable and the resulting cage would undergo a spontaneous symmetry break, yielding a puckered cage with rare Th symmetry (symmetry of a volleyball). The number of six-member rings in this molecule is 20 and the number of five-member rings is 12. There is an additional atom in the center of each six-member ring, bonded to each atom surrounding it. By employing a systematic global search algorithm, it was later found that the previously proposed B80 fullerene is not a global minimum for 80-atom boron clusters and hence cannot be found in nature; the most stable configurations have complex geometries. The same paper concluded that boron's energy landscape, unlike others, has many disordered low-energy structures, hence pure boron fullerenes are unlikely to exist in nature.
However, an irregular complex dubbed borospherene (B40) was prepared in 2014. This complex has two hexagonal faces and four heptagonal faces in D2d symmetry, interleaved with a network of 48 triangles.
B80 was experimentally obtained in 2024, i.e. 17 years after its theoretical prediction by Gonzalez Szwacki et al.
Other elements
Inorganic (carbon-free) fullerene-type structures have been built with molybdenum(IV) sulfide (MoS2), long used as a graphite-like lubricant, and with the analogous sulfides of tungsten (WS2), titanium (TiS2) and niobium (NbS2). These materials were found to be stable up to at least 350 tons/cm2 (34.3 GPa).
Icosahedral or distorted-icosahedral fullerene-like complexes have also been prepared for germanium, tin, and lead; some of these complexes are spacious enough to hold most transition metal atoms.
Main fullerenes
Below is a table of main closed carbon fullerenes synthesized and characterized so far, with their CAS number when known. Fullerenes with fewer than 60 carbon atoms have been called "lower fullerenes", and those with more than 70 atoms "higher fullerenes".
In the table, "Num.Isom." is the number of possible isomers within the "isolated pentagon rule", which states that two pentagons in a fullerene should not share edges. "Mol.Symm." is the symmetry of the molecule, whereas "Cryst.Symm." is that of the crystalline framework in the solid state. Both are specified for the most experimentally abundant form(s). The asterisk * marks symmetries with more than one chiral form.
When C60 or C70 crystals are grown from toluene solution they have a monoclinic symmetry. The crystal structure contains toluene molecules packed between the spheres of the fullerene. However, evaporation of the solvent from C60 transforms it into a face-centered cubic form. Both monoclinic and face-centered cubic (fcc) phases are known for the better-characterized C60 and C70 fullerenes.
Properties
Topology
Schlegel diagrams are often used to clarify the 3D structure of closed-shell fullerenes, as 2D projections are often not ideal in this sense.
In mathematical terms, the combinatorial topology (that is, the carbon atoms and the bonds between them, ignoring their positions and distances) of a closed-shell fullerene with a simple sphere-like mean surface (orientable, genus zero) can be represented as a convex polyhedron; more precisely, its one-dimensional skeleton, consisting of its vertices and edges. The Schlegel diagram is a projection of that skeleton onto one of the faces of the polyhedron, through a point just outside that face; so that all other vertices project inside that face.
The Schlegel diagram of a closed fullerene is a graph that is planar and 3-regular (or "cubic"; meaning that all vertices have degree 3).
A closed fullerene with sphere-like shell must have at least some cycles that are pentagons or heptagons. More precisely, if all the faces have 5 or 6 sides, it follows from Euler's polyhedron formula, V−E+F=2 (where V, E, F are the numbers of vertices, edges, and faces), that V must be even, and that there must be exactly 12 pentagons and V/2−10 hexagons. Similar constraints exist if the fullerene has heptagonal (seven-atom) cycles.
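For example, these constraints determine the face counts of any such fullerene Cn completely, as the following small Python check (an illustration, not from the article's sources) verifies for C60:
def face_counts(n):
    # A closed fullerene C_n with only 5- and 6-sided rings has
    # exactly 12 pentagons and n/2 - 10 hexagons.
    return 12, n // 2 - 10
n = 60                       # buckminsterfullerene
edges = 3 * n // 2           # each 3-coordinate atom contributes 3/2 edges
faces = sum(face_counts(n))  # 12 pentagons + 20 hexagons = 32 faces
assert n - edges + faces == 2  # Euler's polyhedron formula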
Bonding
Since each carbon atom is connected to only three neighbors, instead of the usual four, it is customary to describe those bonds as being a mixture of single and double covalent bonds. The hybridization of carbon in C60 has been reported to be sp2.01. The bonding state can be analyzed by Raman spectroscopy, IR spectroscopy and X-ray photoelectron spectroscopy.
Encapsulation
Additional atoms, ions, clusters, or small molecules can be trapped inside fullerenes to form inclusion compounds known as endohedral fullerenes. An unusual example is the egg-shaped fullerene Tb3N@C84, which violates the isolated pentagon rule. Evidence for a meteor impact at the end of the Permian period was found by analyzing noble gases preserved by being trapped in fullerenes.
Research
In the field of nanotechnology, heat resistance and superconductivity are some of the more heavily studied properties.
Many calculations have been done using ab initio quantum methods applied to fullerenes. By DFT and TD-DFT methods one can obtain IR, Raman and UV spectra. Results of such calculations can be compared with experimental results.
Fullerene is an unusual reactant in many organic reactions such as the Bingel reaction discovered in 1993.
Aromaticity
Researchers have been able to increase the reactivity of fullerenes by attaching active groups to their surfaces. Buckminsterfullerene does not exhibit "superaromaticity": that is, the electrons in the hexagonal rings do not delocalize over the whole molecule.
A spherical fullerene of n carbon atoms has n pi-bonding electrons, free to delocalize. These should try to delocalize over the whole molecule. The quantum mechanics of such an arrangement should be like only one shell of the well-known quantum mechanical structure of a single atom, with a stable filled shell for n = 2, 8, 18, 32, 50, 72, 98, 128, etc. (i.e., twice a perfect square number), but this series does not include 60. This 2(N + 1)^2 rule (with N integer) for spherical aromaticity is the three-dimensional analogue of Hückel's rule. The C60^10+ cation, with 50 π electrons, would satisfy this rule and should be aromatic. This has been shown to be the case using quantum chemical modelling, which showed the existence of strong diamagnetic sphere currents in the cation.
As a result, C60 in water tends to pick up two more electrons and become an anion. The nC60 described below may be the result of C60 trying to form a loose metallic bond.
Reactions
Polymerization
Under high pressure and temperature, buckyballs collapse to form various one-, two-, or three-dimensional carbon frameworks. Single-strand polymers are formed using the Atom Transfer Radical Addition Polymerization (ATRAP) route.
"Ultrahard fullerite" is a coined term frequently used to describe material produced by high-pressure high-temperature (HPHT) processing of fullerite. Such treatment converts fullerite into a nanocrystalline form of diamond which has been reported to exhibit remarkable mechanical properties.
Chemistry
Fullerenes are stable, but not totally unreactive. The sp2-hybridized carbon atoms, which are at their energy minimum in planar graphite, must be bent to form the closed sphere or tube, which produces angle strain. The characteristic reaction of fullerenes is electrophilic addition at 6,6-double bonds, which reduces angle strain by changing sp2-hybridized carbons into sp3-hybridized ones. The change in hybridized orbitals causes the bond angles to decrease from about 120° in the sp2 orbitals to about 109.5° in the sp3 orbitals. This decrease in bond angles allows for the bonds to bend less when closing the sphere or tube, and thus, the molecule becomes more stable.
Solubility
Fullerenes are soluble in many organic solvents, such as toluene, chlorobenzene, and 1,2,3-trichloropropane. Solubilities are generally rather low, such as 8 g/L for C60 in carbon disulfide. Still, fullerenes are the only known allotrope of carbon that can be dissolved in common solvents at room temperature. Among the best solvents is 1-chloronaphthalene, which will dissolve 51 g/L of C60.
Solutions of pure buckminsterfullerene have a deep purple color. Solutions of C70 are a reddish brown. The higher fullerenes C76 to C84 have a variety of colors.
Millimeter-sized crystals of C60 and C70, both pure and solvated, can be grown from benzene solution. Crystallization of C60 from benzene solution below 30 °C (when solubility is maximum) yields a triclinic solid solvate C60·4C6H6. Above 30 °C one obtains solvate-free fcc C60.
Quantum mechanics
In 1999, researchers from the University of Vienna demonstrated that wave-particle duality applied to molecules such as fullerene.
Superconductivity
Fullerenes are normally electrical insulators, but when crystallized with alkali metals, the resultant compound can be conducting or even superconducting.
Chirality
Some fullerenes (e.g. C76, C78, C80, and C84) are inherently chiral because they are D2-symmetric, and have been successfully resolved. Research efforts are ongoing to develop specific sensors for their enantiomers.
Stability
Two theories have been proposed to describe the molecular mechanisms that make fullerenes. The older, "bottom-up" theory proposes that they are built atom-by-atom. The alternative "top-down" approach claims that fullerenes form when much larger structures break into constituent parts.
In 2013 researchers discovered that asymmetrical fullerenes formed from larger structures settle into stable fullerenes. The synthesized substance was a particular metallofullerene consisting of 84 carbon atoms with two additional carbon atoms and two yttrium atoms inside the cage. The process produced approximately 100 micrograms.
However, they found that the asymmetrical molecule could theoretically collapse to form nearly every known fullerene and metallofullerene. Minor perturbations involving the breaking of a few molecular bonds cause the cage to become highly symmetrical and stable. This insight supports the theory that fullerenes can be formed from graphene when the appropriate molecular bonds are severed.
Systematic naming
According to the IUPAC, to name a fullerene, one must cite the number of member atoms for the rings which comprise the fullerene, its symmetry point group in the Schoenflies notation, and the total number of atoms. For example, buckminsterfullerene C60 is systematically named (C60-Ih)[5,6]fullerene. The name of the point group should be retained in any derivative of said fullerene, even if that symmetry is lost by the derivation.
To indicate the position of substituted or attached elements, the fullerene atoms are usually numbered in a spiral path, usually starting with the ring on one of the main axes. If the structure of the fullerene does not allow such numbering, another starting atom is chosen to still achieve a spiral path sequence.
The latter is the case for C70, which is (C70-D5h(6))[5,6]fullerene in IUPAC notation. The symmetry D5h(6) means that this is the isomer where the C5 axis goes through a pentagon surrounded by hexagons rather than pentagons.
In IUPAC's nomenclature, fully saturated analogues of fullerenes are called fulleranes. If the mesh has other element(s) substituted for one or more carbons, the compound is named a heterofullerene. If a double bond is replaced by a methylene bridge (−CH2−), the resulting structure is a homofullerene. If an atom is fully deleted and missing valences saturated with hydrogen atoms, it is a norfullerene. When bonds are removed (both sigma and pi), the compound becomes a secofullerene; if some new bonds are added in an unconventional order, it is a cyclofullerene.
Production
Fullerene production generally starts by producing fullerene-rich soot. The original (and still current) method was to send a large electric current between two nearby graphite electrodes in an inert atmosphere. The resulting electric arc vaporizes the carbon into a plasma that then cools into sooty residue. Alternatively, soot is produced by laser ablation of graphite or pyrolysis of aromatic hydrocarbons. Combustion of benzene is the most efficient process, developed at MIT.
These processes yield a mixture of various fullerenes and other forms of carbon. The fullerenes are then extracted from the soot using appropriate organic solvents and separated by chromatography. One can obtain milligram quantities of fullerenes with 80 atoms or more. C76, C78 and C84 are available commercially.
Applications
Biomedical
Functionalized fullerenes have been researched extensively for several potential biomedical applications including high-performance MRI contrast agents, X-ray imaging contrast agents, photodynamic therapy for tumor treatment, and drug and gene delivery.
Safety and toxicity
In 2013, a comprehensive review on the toxicity of fullerene was published, surveying work from the early 1990s onward, and concluded that very little of the evidence gathered since the discovery of fullerenes indicates that C60 is toxic. The toxicity of these carbon nanoparticles is not only dose- and time-dependent, but also depends on a number of other factors such as:
type (e.g.: C60, C70, M@C60, M@C82)
functional groups used to water-solubilize these nanoparticles (e.g.: OH, COOH)
method of administration (e.g.: intravenous, intraperitoneal)
It was recommended to assess the pharmacology of every new fullerene- or metallofullerene-based complex individually as a different compound.
Popular culture
Examples of fullerenes appear frequently in popular culture. Fullerenes appeared in fiction well before scientists took serious interest in them. In a humorously speculative 1966 column for New Scientist, David Jones suggested the possibility of making giant hollow carbon molecules by distorting a plane hexagonal net with the addition of impurity atoms.
| Physical sciences | Group 14 | Chemistry |
10673 | https://en.wikipedia.org/wiki/Fagales | Fagales | The Fagales are an order of flowering plants in the rosid group of dicotyledons, including some of the best-known trees. Well-known members of Fagales include: beeches, chestnuts, oaks, walnut, pecan, hickory, birches, alders, hazels, hornbeams, she-oaks, and southern beeches. The order name is derived from genus Fagus (beeches).
Systematics
Fagales include the following seven families, according to the APG III system of classification:
Betulaceae – birch family (Alnus, Betula, Carpinus, Corylus, Ostrya, and Ostryopsis)
Casuarinaceae – she-oak family (Allocasuarina, Casuarina, Ceuthostoma, and Gymnostoma)
Fagaceae – beech family (Castanea, Castanopsis, Chrysolepis, Fagus, Lithocarpus, Notholithocarpus, Quercus, and Trigonobalanus)
Juglandaceae – walnut family (Alfaroa, Carya, Cyclocarya, Engelhardia, Juglans, Oreomunnea, Platycarya, Pterocarya, and Rhoiptelea)
Myricaceae – bayberry family (Canacomyrica, Comptonia, and Myrica)
Nothofagaceae – southern beech family (Nothofagus)
Ticodendraceae – ticodendron family (Ticodendron)
Modern molecular phylogenetics suggest the following relationships:
The older Cronquist system only included four families (Betulaceae, Corylaceae, Fagaceae, Ticodendraceae; Corylaceae now being included within Betulaceae); this arrangement is followed by, for example, the World Checklist of selected plant families. The other families were split into three different orders, placed among the Hamamelidae. The Casuarinales comprised the single family Casuarinaceae, the Juglandales comprised the Juglandaceae and Rhoipteleaceae, and the Myricales comprised the remaining forms (plus Balanops). The change is due to studies suggesting the Myricales, so defined, are paraphyletic to the other two groups.
Characteristics
Most Fagales are wind pollinated and are monoecious with unisexual flowers.
Evolutionary history
The oldest member of the order is the flower Soepadmoa cupulata, preserved in late Turonian–Coniacian New Jersey amber, which is a mosaic with characteristics of both Nothofagus and other Fagales, suggesting that the ancestor of all Fagales was Nothofagus-like.
| Biology and health sciences | Fagales | Plants |
10674 | https://en.wikipedia.org/wiki/Fabales | Fabales | Fabales is an order of flowering plants included in the rosid group of the eudicots in the Angiosperm Phylogeny Group II classification system. In the APG II circumscription, this order includes the families Fabaceae or legumes (including the subfamilies Caesalpinioideae, Mimosoideae, and Faboideae), Quillajaceae, Polygalaceae or milkworts (including the families Diclidantheraceae, Moutabeaceae, and Xanthophyllaceae), and Surianaceae. Under the Cronquist system and some other plant classification systems, the order Fabales contains only the family Fabaceae. In the classification system of Dahlgren the Fabales were in the superorder Fabiflorae (also called Fabanae) with three families corresponding to the subfamilies of Fabaceae in APG II. The other families treated in the Fabales by the APG II classification were placed in separate orders by Cronquist, the Polygalaceae within its own order, the Polygalales, and the Quillajaceae and Surianaceae within the Rosales.
The Fabaceae, as the third-largest plant family in the world, contain most of the diversity of the Fabales, the other families making up a comparatively small portion of the order's diversity. Research in the order is largely focused on the Fabaceae, due in part to its great biological diversity, and to its importance as food plants. The Polygalaceae are fairly well researched among plant families, in part due to the large diversity of the genus Polygala, and other members of the family being food plants for various Lepidoptera (butterfly and moth) species. While taxonomists using molecular phylogenetic techniques find strong support for the order, questions remain about the morphological relationships of the Quillajaceae and Surianaceae to the rest of the order, due in part to limited research on these families.
According to molecular clock calculations, the lineage that led to Fabales split from other plants about 101 million years ago.
Distribution
The Fabales are a cosmopolitan order of plants, although only the subfamily Papilionoideae (Faboideae) of the Fabaceae is well dispersed throughout the northern part of the North Temperate Zone.
Phylogeny
The phylogeny of the Fabales is shown below.
Gallery
| Biology and health sciences | Fabales | Plants |
10779 | https://en.wikipedia.org/wiki/Frequency | Frequency | Frequency (symbol f), most often measured in hertz (symbol: Hz), is the number of occurrences of a repeating event per unit of time. It is also occasionally referred to as temporal frequency for clarity and to distinguish it from spatial frequency. Ordinary frequency is related to angular frequency (symbol ω, with SI unit radian per second) by a factor of 2π. The period (symbol T) is the interval of time between events, so the period is the reciprocal of the frequency: T = 1/f.
Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals (sound), radio waves, and light.
For example, if a heart beats at a frequency of 120 times per minute (2 hertz), the period—the time interval between beats—is half a second (60 seconds divided by 120).
Definitions and units
For cyclical phenomena such as oscillations, waves, or for examples of simple harmonic motion, the term frequency is defined as the number of cycles or repetitions per unit of time. The conventional symbol for frequency is f; the Greek letter ν (nu) is also used. The period T is the time taken to complete one cycle of an oscillation or rotation. The frequency and the period are related by the equation f = 1/T.
The term temporal frequency is used to emphasise that the frequency is characterised by the number of occurrences of a repeating event per unit time.
The SI unit of frequency is the hertz (Hz), named after the German physicist Heinrich Hertz by the International Electrotechnical Commission in 1930. It was adopted by the CGPM (Conférence générale des poids et mesures) in 1960, officially replacing the previous name, cycle per second (cps). The SI unit for the period, as for all measurements of time, is the second. A traditional unit of frequency used with rotating mechanical devices, where it is termed rotational frequency, is revolution per minute, abbreviated r/min or rpm. Sixty rpm is equivalent to one hertz.
Period versus frequency
As a matter of convenience, longer and slower waves, such as ocean surface waves, are more typically described by wave period rather than frequency. Short and fast waves, like audio and radio, are usually described by their frequency. Conversions between the two descriptions follow directly from the reciprocal relation f = 1/T.
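Such conversions amount to taking reciprocals; a minimal Python illustration (units chosen only for this example):
def period_from_frequency(f_hz):
    # Period in seconds for a frequency in hertz: T = 1/f.
    return 1.0 / f_hz
def frequency_from_period(t_s):
    # Frequency in hertz for a period in seconds: f = 1/T.
    return 1.0 / t_s
print(period_from_frequency(1000.0))  # 1 kHz -> 0.001 s (1 ms)
print(frequency_from_period(0.5))     # 0.5 s period -> 2.0 Hz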
Related quantities
Rotational frequency, usually denoted by the Greek letter ν (nu), is defined as the instantaneous rate of change of the number of rotations, N, with respect to time: it is a type of frequency applied to rotational motion.
Angular frequency, usually denoted by the Greek letter ω (omega), is defined as the rate of change of angular displacement (during rotation), θ (theta), or the rate of change of the phase of a sinusoidal waveform (notably in oscillations and waves), or as the rate of change of the argument to the sine function: if y(t) = sin(θ(t)) = sin(ωt), then ω = dθ/dt.
The unit of angular frequency is the radian per second (rad/s) but, for discrete-time signals, can also be expressed as radians per sampling interval, which is a dimensionless quantity. Angular frequency is frequency multiplied by 2π: ω = 2πf.
Spatial frequency, denoted here by ξ (xi), is analogous to temporal frequency, but with a spatial measurement replacing time measurement, e.g. cycles per metre rather than cycles per second.
Spatial period or wavelength is the spatial analog to temporal period.
In wave propagation
For periodic waves in nondispersive media (that is, media in which the wave speed is independent of frequency), frequency has an inverse relationship to the wavelength, λ (lambda). Even in dispersive media, the frequency f of a sinusoidal wave is equal to the phase velocity v of the wave divided by the wavelength λ of the wave: f = v/λ.
In the special case of electromagnetic waves in vacuum, v = c, where c is the speed of light in vacuum, and this expression becomes f = c/λ.
When monochromatic waves travel from one medium to another, their frequency remains the same—only their wavelength and speed change.
Measurement
Measurement of frequency can be done in the following ways:
Counting
Calculating the frequency of a repeating event is accomplished by counting the number of times that event occurs within a specific time period, then dividing the count by the length of the time period. For example, if 71 events occur within 15 seconds, the frequency is f = 71 / (15 s) ≈ 4.73 Hz.
If the number of counts is not very large, it is more accurate to measure the time interval for a predetermined number of occurrences, rather than the number of occurrences within a specified time. The latter method introduces a random error into the count of between zero and one count, so on average half a count. This is called gating error and causes an average error in the calculated frequency of Δf = 1/(2 Tm), or a fractional error of Δf/f = 1/(2 f Tm), where Tm is the timing interval and f is the measured frequency. This error decreases with frequency, so it is generally a problem at low frequencies where the number of counts N is small.
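A small Python illustration of counting and its gating error, using the numbers from the example above (a sketch, not a measurement procedure):
def frequency_by_counting(count, gate_s):
    # Frequency estimate from an event count over a gate interval,
    # together with the average half-count gating error.
    f = count / gate_s
    gating_error = 0.5 / gate_s  # average error of half a count
    return f, gating_error
f, err = frequency_by_counting(71, 15.0)
print(f, err)  # about 4.73 Hz, with an average error of about 0.033 Hz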
Stroboscope
An old method of measuring the frequency of rotating or vibrating objects is to use a stroboscope. This is an intense repetitively flashing light (strobe light) whose frequency can be adjusted with a calibrated timing circuit. The strobe light is pointed at the rotating object and the frequency adjusted up and down. When the frequency of the strobe equals the frequency of the rotating or vibrating object, the object completes one cycle of oscillation and returns to its original position between the flashes of light, so when illuminated by the strobe the object appears stationary. Then the frequency can be read from the calibrated readout on the stroboscope. A downside of this method is that an object rotating at an integer multiple of the strobing frequency will also appear stationary.
Frequency counter
Higher frequencies are usually measured with a frequency counter. This is an electronic instrument which measures the frequency of an applied repetitive electronic signal and displays the result in hertz on a digital display. It uses digital logic to count the number of cycles during a time interval established by a precision quartz time base. Cyclic processes that are not electrical, such as the rotation rate of a shaft, mechanical vibrations, or sound waves, can be converted to a repetitive electronic signal by transducers and the signal applied to a frequency counter. As of 2018, frequency counters can cover the range up to about 100 GHz. This represents the limit of direct counting methods; frequencies above this must be measured by indirect methods.
Heterodyne methods
Above the range of frequency counters, frequencies of electromagnetic signals are often measured indirectly utilizing heterodyning (frequency conversion). A reference signal of a known frequency near the unknown frequency is mixed with the unknown frequency in a nonlinear mixing device such as a diode. This creates a heterodyne or "beat" signal at the difference between the two frequencies. If the two signals are close together in frequency the heterodyne is low enough to be measured by a frequency counter. This process only measures the difference between the unknown frequency and the reference frequency. To convert higher frequencies, several stages of heterodyning can be used. Current research is extending this method to infrared and light frequencies (optical heterodyne detection).
Examples
Light
Visible light is an electromagnetic wave, consisting of oscillating electric and magnetic fields traveling through space. The frequency of the wave determines its color: 400 THz (4 × 10^14 Hz) is red light, 800 THz (8 × 10^14 Hz) is violet light, and between these (in the range 400–800 THz) are all the other colors of the visible spectrum. An electromagnetic wave with a frequency less than 4 × 10^14 Hz will be invisible to the human eye; such waves are called infrared (IR) radiation. At even lower frequency, the wave is called a microwave, and at still lower frequencies it is called a radio wave. Likewise, an electromagnetic wave with a frequency higher than 8 × 10^14 Hz will also be invisible to the human eye; such waves are called ultraviolet (UV) radiation. Even higher-frequency waves are called X-rays, and higher still are gamma rays.
All of these waves, from the lowest-frequency radio waves to the highest-frequency gamma rays, are fundamentally the same, and they are all called electromagnetic radiation. They all travel through vacuum at the same speed (the speed of light), giving them wavelengths inversely proportional to their frequencies.
That is, f = c/λ, where c is the speed of light (its value in vacuum, or less in other media), f is the frequency and λ is the wavelength.
In dispersive media, such as glass, the speed depends somewhat on frequency, so the wavelength is not quite inversely proportional to frequency.
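As a rough illustration (the 400 and 800 THz endpoints above are themselves approximate), the corresponding vacuum wavelengths can be computed from f = c/λ:
C = 299792458.0  # speed of light in vacuum, m/s
def wavelength_nm(f_hz):
    # Vacuum wavelength in nanometres for a given frequency: lambda = c/f.
    return C / f_hz * 1e9
print(wavelength_nm(400e12))  # about 750 nm, the red edge of the visible range
print(wavelength_nm(800e12))  # about 375 nm, the violet edge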
Sound
Sound propagates as mechanical vibration waves of pressure and displacement, in air or other substances. In general, frequency components of a sound determine its "color", its timbre. When speaking about the frequency (in singular) of a sound, it means the property that most determines its pitch.
The frequencies an ear can hear are limited to a specific range of frequencies. The audible frequency range for humans is typically given as being between about 20 Hz and 20,000 Hz (20 kHz), though the high frequency limit usually reduces with age. Other species have different hearing ranges. For example, some dog breeds can perceive vibrations up to 60,000 Hz.
In many media, such as air, the speed of sound is approximately independent of frequency, so the wavelength of the sound waves (distance between repetitions) is approximately inversely proportional to frequency.
Line current
In Europe, Africa, Australia, southern South America, most of Asia, and Russia, the frequency of the alternating current in household electrical outlets is 50 Hz (close to the tone G), whereas in North America and northern South America, the frequency of the alternating current in household electrical outlets is 60 Hz (between the tones B♭ and B; that is, a minor third above the European frequency). The frequency of the 'hum' in an audio recording can show in which of these general regions the recording was made.
Aperiodic frequency
Aperiodic frequency is the rate of incidence or occurrence of non-cyclic phenomena, including random processes such as radioactive decay. It is expressed with the unit reciprocal second (s−1) or, in the case of radioactivity, with the unit becquerel.
It is defined as a rate, f = N/Δt, involving the number of entities counted or the number of events that happened (N) during a given time duration (Δt); it is a physical quantity of type temporal rate.
| Physical sciences | Waves | null |
10821 | https://en.wikipedia.org/wiki/Francium | Francium | Francium is a chemical element; it has symbol Fr and atomic number 87. It is extremely radioactive; its most stable isotope, francium-223 (originally called actinium K after the natural decay chain in which it appears), has a half-life of only 22 minutes. It is the second-most electropositive element, behind only caesium, and is the second rarest naturally occurring element (after astatine). Francium's isotopes decay quickly into astatine, radium, and radon. The electronic structure of a francium atom is [Rn] 7s1; thus, the element is classed as an alkali metal.
As a consequence of its extreme instability, bulk francium has never been seen. Because of the general appearance of the other elements in its periodic table column, it is presumed that francium would appear as a highly reactive metal if enough could be collected together to be viewed as a bulk solid or liquid. Obtaining such a sample is highly improbable since the extreme heat of decay resulting from its short half-life would immediately vaporize any viewable quantity of the element.
Francium was discovered by Marguerite Perey in France (from which the element takes its name) on January 7, 1939. Before its discovery, francium was referred to as eka-caesium or ekacaesium because of its conjectured existence below caesium in the periodic table. It was the last element first discovered in nature, rather than by synthesis. Outside the laboratory, francium is extremely rare, with trace amounts found in uranium ores, where the isotope francium-223 (in the family of uranium-235) continually forms and decays. Only about 30 grams (one ounce) exists at any given time throughout the Earth's crust; aside from francium-223 and francium-221, its other isotopes are entirely synthetic. The largest amount produced in the laboratory was a cluster of more than 300,000 atoms.
Characteristics
Francium is one of the most unstable of the naturally occurring elements: its longest-lived isotope, francium-223, has a half-life of only 22 minutes. The only comparable element is astatine, whose most stable natural isotope, astatine-219 (the alpha daughter of francium-223), has a half-life of 56 seconds, although synthetic astatine-210 is much longer-lived with a half-life of 8.1 hours. All isotopes of francium decay into astatine, radium, or radon. Francium-223 also has a shorter half-life than the longest-lived isotope of each synthetic element up to and including element 105, dubnium.
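As a rough illustration of what a 22-minute half-life means, the surviving fraction of a francium-223 sample follows the standard exponential-decay law (a sketch, not a measured dataset):
def remaining_fraction(t_min, half_life_min=22.0):
    # Fraction of the sample left after t_min minutes:
    # N(t) = N0 * 2**(-t / half_life).
    return 2.0 ** (-t_min / half_life_min)
print(remaining_fraction(22))   # 0.5: half the sample after one half-life
print(remaining_fraction(60))   # about 0.15 after an hour
print(remaining_fraction(220))  # about 0.001 after ten half-lives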
Francium is an alkali metal whose chemical properties mostly resemble those of caesium. A heavy element with a single valence electron, it has the highest equivalent weight of any element. Liquid francium—if created—should have a surface tension of 0.05092 N/m at its melting point. Francium's melting point was estimated to be around ; a value of is also often encountered. The melting point is uncertain because of the element's extreme rarity and radioactivity; a different extrapolation based on Dmitri Mendeleev's method gave . A calculation based on the melting temperatures of binary ionic crystals gives . The estimated boiling point of is also uncertain; the estimates and , as well as the extrapolation from Mendeleev's method of , have also been suggested. The density of francium is expected to be around 2.48 g/cm3 (Mendeleev's method extrapolates 2.4 g/cm3).
Linus Pauling estimated the electronegativity of francium at 0.7 on the Pauling scale, the same as caesium; the value for caesium has since been refined to 0.79, but there are no experimental data to allow a refinement of the value for francium. Francium has a slightly higher ionization energy than caesium, 392.811(4) kJ/mol as opposed to 375.7041(2) kJ/mol for caesium, as would be expected from relativistic effects, and this would imply that caesium is the less electronegative of the two. Francium should also have a higher electron affinity than caesium and the Fr− ion should be more polarizable than the Cs− ion.
Compounds
As a result of francium's instability, its salts are only known to a small extent. Francium coprecipitates with several caesium salts, such as caesium perchlorate, which results in small amounts of francium perchlorate. This coprecipitation can be used to isolate francium, by adapting the radiocaesium coprecipitation method of Lawrence E. Glendenin and C. M. Nelson. It will additionally coprecipitate with many other caesium salts, including the iodate, the picrate, the tartrate (also rubidium tartrate), the chloroplatinate, and the silicotungstate. It also coprecipitates with silicotungstic acid, and with perchloric acid, without another alkali metal as a carrier, which leads to other methods of separation.
Francium perchlorate
Francium perchlorate is produced by the reaction of francium chloride and sodium perchlorate. The francium perchlorate coprecipitates with caesium perchlorate. This coprecipitation can be used to isolate francium, by adapting the radiocaesium coprecipitation method of Lawrence E. Glendenin and C. M. Nelson. However, this method is unreliable in separating thallium, which also coprecipitates with caesium. Francium perchlorate's entropy is expected to be 42.7 e.u (178.7 J mol−1 K−1).
Francium halides
Francium halides are all soluble in water and are expected to be white solids. They are expected to be produced by the reaction of the corresponding halogens. For example, francium chloride would be produced by the reaction of francium and chlorine. Francium chloride has been studied as a pathway to separate francium from other elements, by using the high vapour pressure of the compound, although francium fluoride would have a higher vapour pressure.
Other compounds
Francium nitrate, sulfate, hydroxide, carbonate, acetate, and oxalate are all soluble in water, while the iodate, picrate, tartrate, chloroplatinate, and silicotungstate are insoluble. The insolubility of these compounds is used to extract francium from other radioactive products, such as zirconium, niobium, molybdenum, tin, and antimony, by the method mentioned in the section above. Francium oxide is believed to disproportionate to the peroxide and francium metal. The CsFr molecule is predicted to have francium at the negative end of the dipole, unlike all known heterodiatomic alkali metal molecules. Francium superoxide (FrO2) is expected to have a more covalent character than its lighter congeners; this is attributed to the 6p electrons in francium being more involved in the francium–oxygen bonding. The relativistic destabilisation of the 6p3/2 spinor may make francium compounds in oxidation states higher than +1 possible, such as [FrVF6]−; but this has not been experimentally confirmed.
Isotopes
There are 37 known isotopes of francium ranging in atomic mass from 197 to 233. Francium has seven metastable nuclear isomers. Francium-223 and francium-221 are the only isotopes that occur in nature, with the former being far more common.
Francium-223 is the most stable isotope, with a half-life of 21.8 minutes, and it is highly unlikely that an isotope of francium with a longer half-life will ever be discovered or synthesized. Francium-223 is the fifth product of the uranium-235 decay series as a daughter isotope of actinium-227; thorium-227 is the more common daughter. Francium-223 then decays into radium-223 by beta decay (1.149 MeV decay energy), with a minor (0.006%) alpha decay path to astatine-219 (5.4 MeV decay energy).
Francium-221 has a half-life of 4.8 minutes. It is the ninth product of the neptunium decay series, as a daughter isotope of actinium-225. Francium-221 then decays into astatine-217 by alpha decay (6.457 MeV decay energy). Although all primordial 237Np is extinct, the neptunium decay series continues to exist naturally in tiny traces due to (n,2n) knockout reactions in natural 238U. Francium-222, with a half-life of 14 minutes, may be produced as a result of the beta decay of natural radon-222; this process has nonetheless not yet been observed, and it is unknown whether it is energetically possible.
The least stable ground state isotope is francium-215, with a half-life of 90 ns: it undergoes a 9.54 MeV alpha decay to astatine-211.
Applications
Due to its instability and rarity, there are no commercial applications for francium. It has been used for research purposes in the fields of chemistry and of atomic structure. Its use as a potential diagnostic aid for various cancers has also been explored, but this application has been deemed impractical.
Francium's ability to be synthesized, trapped, and cooled, along with its relatively simple atomic structure, has made it the subject of specialized spectroscopy experiments. These experiments have led to more specific information regarding energy levels and the coupling constants between subatomic particles. Studies on the light emitted by laser-trapped francium-210 ions have provided accurate data on transitions between atomic energy levels which are fairly similar to those predicted by quantum theory. Francium is a prospective candidate for searching for CP violation.
History
As early as 1870, chemists thought that there should be an alkali metal beyond caesium, with an atomic number of 87. It was then referred to by the provisional name eka-caesium.
Erroneous and incomplete discoveries
In 1914, Stefan Meyer, Viktor F. Hess, and Friedrich Paneth (working in Vienna) made measurements of alpha radiation from various substances, including 227Ac. They observed the possibility of a minor alpha branch of this nuclide, though follow-up work could not be done due to the outbreak of World War I. Their observations were not precise and sure enough for them to announce the discovery of element 87, though it is likely that they did indeed observe the decay of 227Ac to 223Fr.
Soviet chemist Dmitry Dobroserdov was the first scientist to claim to have found eka-caesium, or francium. In 1925, he observed weak radioactivity in a sample of potassium, another alkali metal, and incorrectly concluded that eka-caesium was contaminating the sample (the radioactivity from the sample was from the naturally occurring potassium radioisotope, potassium-40). He then published a thesis on his predictions of the properties of eka-caesium, in which he named the element russium after his home country. Shortly thereafter, Dobroserdov began to focus on his teaching career at the Polytechnic Institute of Odesa, and he did not pursue the element further.
The following year, English chemists Gerald J. F. Druce and Frederick H. Loring analyzed X-ray photographs of manganese(II) sulfate. They observed spectral lines which they presumed to be of eka-caesium. They announced their discovery of element 87 and proposed the name alkalinium, as it would be the heaviest alkali metal.
In 1930, Fred Allison of the Alabama Polytechnic Institute claimed to have discovered element 87 (in addition to 85) when analyzing pollucite and lepidolite using his magneto-optical machine. Allison requested that it be named virginium after his home state of Virginia, along with the symbols Vi and Vm. In 1934, H.G. MacPherson of UC Berkeley disproved the effectiveness of Allison's device and the validity of his discovery.
In 1936, Romanian physicist Horia Hulubei and his French colleague Yvette Cauchois also analyzed pollucite, this time using their high-resolution X-ray apparatus. They observed several weak emission lines, which they presumed to be those of element 87. Hulubei and Cauchois reported their discovery and proposed the name moldavium, along with the symbol Ml, after Moldavia, the Romanian province where Hulubei was born. In 1937, Hulubei's work was criticized by American physicist F. H. Hirsh Jr., who rejected Hulubei's research methods. Hirsh was certain that eka-caesium would not be found in nature, and that Hulubei had instead observed mercury or bismuth X-ray lines. Hulubei insisted that his X-ray apparatus and methods were too accurate to make such a mistake. Because of this, Jean Baptiste Perrin, Nobel Prize winner and Hulubei's mentor, endorsed moldavium as the true eka-caesium over Marguerite Perey's recently discovered francium. Perey took pains to be accurate and detailed in her criticism of Hulubei's work, and finally she was credited as the sole discoverer of element 87. All other previous purported discoveries of element 87 were ruled out due to francium's very limited half-life.
Perey's analysis
Eka-caesium was discovered on January 7, 1939, by Marguerite Perey of the Curie Institute in Paris, when she purified a sample of actinium-227 which had been reported to have a decay energy of 220 keV. Perey noticed decay particles with an energy level below 80 keV. Perey thought this decay activity might have been caused by a previously unidentified decay product, one which was separated during purification, but emerged again out of the pure actinium-227. Various tests eliminated the possibility of the unknown element being thorium, radium, lead, bismuth, or thallium. The new product exhibited chemical properties of an alkali metal (such as coprecipitating with caesium salts), which led Perey to believe that it was element 87, produced by the alpha decay of actinium-227. Perey then attempted to determine the proportion of beta decay to alpha decay in actinium-227. Her first test put the alpha branching at 0.6%, a figure which she later revised to 1%.
Perey named the new isotope actinium-K (it is now referred to as francium-223) and in 1946, she proposed the name catium (Cm) for her newly discovered element, as she believed it to be the most electropositive cation of the elements. Irène Joliot-Curie, one of Perey's supervisors, opposed the name due to its connotation of cat rather than cation; furthermore, the symbol coincided with that which had since been assigned to curium. Perey then suggested francium, after France. This name was officially adopted by the International Union of Pure and Applied Chemistry (IUPAC) in 1949, becoming the second element after gallium to be named after France. It was assigned the symbol Fa, but it was revised to the current Fr shortly thereafter. Francium was the last element discovered in nature, rather than synthesized, following hafnium and rhenium. Further research into francium's structure was carried out by, among others, Sylvain Lieberman and his team at CERN in the 1970s and 1980s.
Occurrence
223Fr is the result of the alpha decay of 227Ac and can be found in trace amounts in uranium minerals. In a given sample of uranium, there is estimated to be only one francium atom for every 1 × 10¹⁸ uranium atoms. Only about 30 grams of francium is present naturally in the earth's crust at any given time.
Production
Francium can be synthesized by a fusion reaction when a gold-197 target is bombarded with a beam of oxygen-18 atoms from a linear accelerator in a process originally developed at the physics department of the State University of New York at Stony Brook in 1995. Depending on the energy of the oxygen beam, the reaction can yield francium isotopes with masses of 209, 210, and 211.
197Au + 18O → 209Fr + 6 n
197Au + 18O → 210Fr + 5 n
197Au + 18O → 211Fr + 4 n
The francium atoms leave the gold target as ions, which are neutralized by collision with yttrium and then isolated in a magneto-optical trap (MOT) in a gaseous unconsolidated state. Although the atoms only remain in the trap for about 30 seconds before escaping or undergoing nuclear decay, the process supplies a continual stream of fresh atoms. The result is a steady state containing a fairly constant number of atoms for a much longer time. The original apparatus could trap up to a few thousand atoms, while a later improved design could trap over 300,000 at a time. Sensitive measurements of the light emitted and absorbed by the trapped atoms provided the first experimental results on various transitions between atomic energy levels in francium. Initial measurements show very good agreement between experimental values and calculations based on quantum theory. The research project using this production method relocated to TRIUMF in 2012, where over 10⁶ francium atoms have been held at a time, including large amounts of 209Fr in addition to 207Fr and 221Fr.
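The steady state described here follows from a simple rate balance: atoms load at a roughly constant rate R and are lost with a characteristic lifetime τ, so the population settles at R·τ. A minimal sketch of that balance in Python; the 30-second lifetime comes from the text above, while the loading rate is an assumed illustrative figure:

```python
import math

def trapped_atoms(load_rate, lifetime, t):
    """Solve dN/dt = R - N/tau with N(0) = 0.

    load_rate -- atoms loaded per second (R), an assumed figure
    lifetime  -- mean time an atom stays trapped (tau), ~30 s per the text
    t         -- elapsed time in seconds
    """
    return load_rate * lifetime * (1.0 - math.exp(-t / lifetime))

# Illustrative numbers only: a loading rate of ~10,000 atoms/s with a 30 s
# lifetime levels off at R*tau = 300,000 atoms, matching the figure quoted
# for the improved trap design.
for t in (10, 30, 60, 120, 300):
    print(f"t = {t:4d} s: {trapped_atoms(1e4, 30.0, t):9.0f} atoms")
```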
Other synthesis methods include bombarding radium with neutrons, and bombarding thorium with protons, deuterons, or helium ions.
223Fr can also be isolated from samples of its parent 227Ac, the francium being milked via elution with NH4Cl–CrO3 from an actinium-containing cation exchanger and purified by passing the solution through a silicon dioxide compound loaded with barium sulfate.
In 1996, the Stony Brook group trapped 3000 atoms in their MOT, which was enough for a video camera to capture the light given off by the atoms as they fluoresce. Francium has not been synthesized in amounts large enough to weigh.
| Physical sciences | Chemical elements_2 | null |
10822 | https://en.wikipedia.org/wiki/Fermium | Fermium | Fermium is a synthetic chemical element; it has symbol Fm and atomic number 100. It is an actinide and the heaviest element that can be formed by neutron bombardment of lighter elements, and hence the last element that can be prepared in macroscopic quantities, although pure fermium metal has not been prepared yet. A total of 20 isotopes are known, with 257Fm being the longest-lived with a half-life of 100.5 days.
Fermium was discovered in the debris of the first hydrogen bomb explosion in 1952, and named after Enrico Fermi, one of the pioneers of nuclear physics. Its chemistry is typical for the late actinides, with a preponderance of the +3 oxidation state but also an accessible +2 oxidation state. Owing to the small amounts of produced fermium and all of its isotopes having relatively short half-lives, there are currently no uses for it outside basic scientific research.
Discovery
Fermium was first discovered in the fallout from the 'Ivy Mike' nuclear test (1 November 1952), the first successful test of a hydrogen bomb. Initial examination of the debris from the explosion had shown the production of a new isotope of plutonium, 244Pu: this could only have formed by the absorption of six neutrons by a uranium-238 nucleus followed by two β− decays. At the time, the absorption of neutrons by a heavy nucleus was thought to be a rare process, but the identification of 244Pu raised the possibility that still more neutrons could have been absorbed by the uranium nuclei, leading to new elements.
Element 99 (einsteinium) was quickly discovered on filter papers which had been flown through clouds from the explosion (the same sampling technique that had been used to discover 244Pu). It was then identified in December 1952 by Albert Ghiorso and co-workers at the University of California at Berkeley. They discovered the isotope 253Es (half-life 20.5 days) that was made by the capture of 15 neutrons by uranium-238 nuclei – which then underwent seven successive beta decays:

238U + 15 n → 253U → (seven successive β− decays) → 253Es
Some 238U atoms, however, could capture a larger number of neutrons (most likely 16 or 17).
The discovery of fermium (Z = 100) required more material, as the yield was expected to be at least an order of magnitude lower than that of element 99, and so contaminated coral from the Enewetak atoll (where the test had taken place) was shipped to the University of California Radiation Laboratory in Berkeley, California, for processing and analysis. About two months after the test, a new component was isolated emitting high-energy α-particles (7.1 MeV) with a half-life of about a day. With such a short half-life, it could only arise from the β− decay of an isotope of einsteinium, and so had to be an isotope of the new element 100: it was quickly identified as 255Fm (t1/2 = 20.07(7) hours).
The discovery of the new elements, and the new data on neutron capture, was initially kept secret on the orders of the U.S. military until 1955 due to Cold War tensions. Nevertheless, the Berkeley team was able to prepare elements 99 and 100 by civilian means, through the neutron bombardment of plutonium-239, and published this work in 1954 with the disclaimer that these were not the first studies to have been carried out on the elements. The "Ivy Mike" studies were declassified and published in 1955.
The Berkeley team had been worried that another group might discover lighter isotopes of element 100 through ion-bombardment techniques before they could publish their classified research, and this proved to be the case. A group at the Nobel Institute for Physics in Stockholm independently discovered the element, producing an isotope later confirmed to be 250Fm (t1/2 = 30 minutes) by bombarding a uranium-238 target with oxygen-16 ions, and published their work in May 1954. Nevertheless, the priority of the Berkeley team was generally recognized, and with it the prerogative to name the new element in honour of Enrico Fermi, the developer of the first artificial self-sustained nuclear reactor. Fermi was still alive when the name was proposed, but had died by the time it became official.
Isotopes
There are 20 isotopes of fermium listed in NUBASE 2016, with atomic weights of 241 to 260, of which 257Fm is the longest-lived with a half-life of 100.5 days. 253Fm has a half-life of 3 days, while 251Fm has one of 5.3 h, 252Fm of 25.4 h, 254Fm of 3.2 h, 255Fm of 20.1 h, and 256Fm of 2.6 hours. All the remaining ones have half-lives ranging from 30 minutes to less than a millisecond.
The neutron capture product of fermium-257, 258Fm, undergoes spontaneous fission with a half-life of just 370(14) microseconds; 259Fm and 260Fm also undergo spontaneous fission (t1/2 = 1.5(3) s and 4 ms respectively). This means that neutron capture cannot be used to create nuclides with a mass number greater than 257, unless carried out in a nuclear explosion. As 257Fm alpha decays to 253Cf, and no known fermium isotopes undergo beta minus decay to the next element, mendelevium, fermium is also the last element that can be synthesized by neutron capture. Because of this impediment in forming heavier isotopes, the short-lived isotopes 258–260Fm constitute the "fermium gap."
Occurrence
Production
Fermium is produced by the bombardment of lighter actinides with neutrons in a nuclear reactor. Fermium-257 is the heaviest isotope that is obtained via neutron capture, and can only be produced in picogram quantities. The major source is the 85 MW High Flux Isotope Reactor (HFIR) at the Oak Ridge National Laboratory in Tennessee, USA, which is dedicated to the production of transcurium (Z > 96) elements. Lower mass fermium isotopes are available in greater quantities, though these isotopes (254Fm and 255Fm) are comparatively short-lived. In a "typical processing campaign" at Oak Ridge, tens of grams of curium are irradiated to produce decigram quantities of californium, milligram quantities of berkelium and einsteinium, and picogram quantities of fermium. However, nanogram quantities of fermium can be prepared for specific experiments. The quantities of fermium produced in 20–200 kiloton thermonuclear explosions are believed to be of the order of milligrams, although it is mixed in with a huge quantity of debris; 4.0 picograms of 257Fm was recovered from 10 kilograms of debris from the "Hutch" test (16 July 1969). The Hutch experiment produced an estimated total of 250 micrograms of 257Fm.
After production, the fermium must be separated from other actinides and from lanthanide fission products. This is usually achieved by ion-exchange chromatography, with the standard process using a cation exchanger such as Dowex 50 or TEVA eluted with a solution of ammonium α-hydroxyisobutyrate. Smaller cations form more stable complexes with the α-hydroxyisobutyrate anion, and so are preferentially eluted from the column. A rapid fractional crystallization method has also been described.
Although the most stable isotope of fermium is 257Fm, with a half-life of 100.5 days, most studies are conducted on 255Fm (t1/2 = 20.07(7) hours), since this isotope can be easily isolated as required as the decay product of 255Es (t1/2 = 39.8(12) days).
Synthesis in nuclear explosions
The analysis of the debris at the 10-megaton Ivy Mike nuclear test was a part of a long-term project, one of the goals of which was studying the efficiency of production of transuranium elements in high-power nuclear explosions. The motivation for these experiments was as follows: synthesis of such elements from uranium requires multiple neutron capture. The probability of such events increases with the neutron flux, and nuclear explosions are the most powerful neutron sources, providing densities on the order of 10²³ neutrons/cm² within a microsecond, i.e. about 10²⁹ neutrons/(cm²·s). For comparison, the flux of the HFIR reactor is 5 × 10¹⁵ neutrons/(cm²·s). A dedicated laboratory was set up right at Enewetak Atoll for preliminary analysis of debris, as some isotopes could have decayed by the time the debris samples reached the U.S. The laboratory received samples for analysis, as soon as possible, from airplanes equipped with paper filters which flew over the atoll after the tests. Whereas it was hoped to discover new chemical elements heavier than fermium, none were found after a series of megaton explosions conducted between 1954 and 1956 at the atoll.
The atmospheric results were supplemented by the underground test data accumulated in the 1960s at the Nevada Test Site, as it was hoped that powerful explosions conducted in confined space might result in improved yields and heavier isotopes. Apart from traditional uranium charges, combinations of uranium with americium and thorium were tried, as well as a mixed plutonium-neptunium charge. They were less successful in terms of yield, which was attributed to stronger losses of heavy isotopes due to enhanced fission rates in heavy-element charges. Isolation of the products was found to be rather problematic, as the explosions spread debris by melting and vaporizing rock at depths of 300–600 meters, and drilling to such depths in order to extract the products was both slow and inefficient in terms of collected volumes.
Among the nine underground tests that were carried out between 1962 and 1969 and codenamed Anacostia (5.2 kilotons, 1962), Kennebec (<5 kilotons, 1963), Par (38 kilotons, 1964), Barbel (<20 kilotons, 1964), Tweed (<20 kilotons, 1965), Cyclamen (13 kilotons, 1966), Kankakee (20–200 kilotons, 1966), Vulcan (25 kilotons, 1966) and Hutch (20–200 kilotons, 1969), the last one was the most powerful and had the highest yield of transuranium elements. As a function of atomic mass number, the yield showed saw-tooth behavior, with lower values for odd isotopes due to their higher fission rates. The major practical problem of the entire proposal, however, was collecting the radioactive debris dispersed by the powerful blast. Aircraft filters adsorbed only about 4 × 10⁻¹⁴ of the total amount, and collection of tons of corals at Enewetak Atoll increased this fraction by only two orders of magnitude. Extraction of about 500 kilograms of underground rocks 60 days after the Hutch explosion recovered only about 10⁻⁷ of the total charge. The amount of transuranium elements in this 500-kg batch was only 30 times higher than in a 0.4 kg rock picked up 7 days after the test. This observation demonstrated the highly nonlinear dependence of the transuranium element yield on the amount of retrieved radioactive rock. In order to accelerate sample collection after the explosion, shafts were drilled at the site not after but before the test, so that the explosion would expel radioactive material from the epicenter, through the shafts, to collecting volumes near the surface. This method was tried in the Anacostia and Kennebec tests and instantly provided hundreds of kilograms of material, but with actinide concentrations 3 times lower than in samples obtained after drilling; whereas such a method could have been efficient in scientific studies of short-lived isotopes, it could not improve the overall collection efficiency of the produced actinides.
Though no new elements (apart from einsteinium and fermium) could be detected in the nuclear test debris, and the total yields of transuranium elements were disappointingly low, these tests did provide significantly higher amounts of rare heavy isotopes than previously available in laboratories. For example, 6 × 10⁹ atoms of 257Fm could be recovered after the Hutch detonation. They were then used in studies of the thermal-neutron induced fission of 257Fm and in the discovery of a new fermium isotope, 258Fm. The rare isotope 250Cm was also synthesized in large quantities; it is very difficult to produce in nuclear reactors from its progenitor 249Cm, because the half-life of 249Cm (64 minutes) is much too short for months-long reactor irradiations, but is very "long" on the explosion timescale.
Natural occurrence
Because of the short half-life of all known isotopes of fermium, any primordial fermium, that is fermium present on Earth during its formation, has decayed by now. Synthesis of fermium from naturally occurring uranium and thorium in the Earth's crust requires multiple neutron captures, which is extremely unlikely. Therefore, most fermium is produced on Earth in laboratories, high-power nuclear reactors, or in nuclear tests, and is present for only a few months afterward. The transuranic elements americium to fermium did occur naturally in the natural nuclear fission reactor at Oklo, but no longer do so.
Chemistry
The chemistry of fermium has only been studied in solution using tracer techniques, and no solid compounds have been prepared. Under normal conditions, fermium exists in solution as the Fm3+ ion, which has a hydration number of 16.9 and an acid dissociation constant of 1.6 × 10⁻⁴ (pKa = 3.8). Fm3+ forms complexes with a wide variety of organic ligands with hard donor atoms such as oxygen, and these complexes are usually more stable than those of the preceding actinides. It also forms anionic complexes with ligands such as chloride or nitrate and, again, these complexes appear to be more stable than those formed by einsteinium or californium. It is believed that the bonding in the complexes of the later actinides is mostly ionic in character: the Fm3+ ion is expected to be smaller than the preceding An3+ ions because of the higher effective nuclear charge of fermium, and hence fermium would be expected to form shorter and stronger metal–ligand bonds.
Fermium(III) can be fairly easily reduced to fermium(II), for example with samarium(II) chloride, with which fermium(II) coprecipitates. In the precipitate, the compound fermium(II) chloride (FmCl2) was produced, though it was not purified or studied in isolation. The electrode potential has been estimated to be similar to that of the ytterbium(III)/(II) couple, or about −1.15 V with respect to the standard hydrogen electrode, a value which agrees with theoretical calculations. The Fm2+/Fm0 couple has an electrode potential of −2.37(10) V based on polarographic measurements.
Toxicity
Though few people come in contact with fermium, the International Commission on Radiological Protection has set annual exposure limits for the two most stable isotopes. For fermium-253, the ingestion limit was set at 10⁷ becquerels (1 Bq equals one decay per second) and the inhalation limit at 10⁵ Bq; for fermium-257, at 10⁵ Bq and 4,000 Bq respectively.
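For a sense of scale, an activity limit can be converted to a mass of material via A = λN. A back-of-the-envelope sketch in Python, using the ingestion limit and the 3-day half-life of fermium-253 quoted above (the conversion itself is standard; the printed figure is only as good as those inputs):

```python
import math

AVOGADRO = 6.022e23

def mass_for_activity(activity_bq, half_life_s, molar_mass):
    """Mass (grams) of a pure radionuclide with the given activity.

    A = lambda * N  =>  N = A * t_half / ln(2);  m = N * M / N_A
    """
    atoms = activity_bq * half_life_s / math.log(2)
    return atoms * molar_mass / AVOGADRO

# Fermium-253: half-life 3 days, ingestion limit 1e7 Bq (per the text above)
m = mass_for_activity(1e7, 3 * 24 * 3600, 253)
print(f"1e7 Bq of Fm-253 ~ {m * 1e9:.1f} ng")  # on the order of a nanogram
```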
| Physical sciences | Actinides | Chemistry |
10826 | https://en.wikipedia.org/wiki/Fax | Fax | Fax (short for facsimile), sometimes called telecopying or telefax (short for telefacsimile), is the telephonic transmission of scanned printed material (both text and images), normally to a telephone number connected to a printer or other output device. The original document is scanned with a fax machine (or a telecopier), which processes the contents (text or images) as a single fixed graphic image, converting it into a bitmap, and then transmitting it through the telephone system in the form of audio-frequency tones. The receiving fax machine interprets the tones and reconstructs the image, printing a paper copy. Early systems used direct conversions of image darkness to audio tone in a continuous or analog manner. Since the 1980s, most machines transmit an audio-encoded digital representation of the page, using data compression to transmit areas that are all-white or all-black, more quickly.
Initially a niche product, fax machines became ubiquitous in offices in the 1980s and 1990s. However, they have largely been rendered obsolete by Internet-based technologies such as email and the World Wide Web, but are still used in some medical administration and law enforcement settings.
History
Wire transmission
Scottish inventor Alexander Bain worked on chemical-mechanical fax-type devices and in 1846 Bain was able to reproduce graphic signs in laboratory experiments. He received British patent 9745 on May 27, 1843, for his "Electric Printing Telegraph". Frederick Bakewell made several improvements on Bain's design and demonstrated a telefax machine. The Pantelegraph was invented by the Italian physicist Giovanni Caselli. He introduced the first commercial telefax service between Paris and Lyon in 1865, some 11 years before the invention of the telephone.
In 1880, English inventor Shelford Bidwell constructed the scanning phototelegraph that was the first telefax machine to scan any two-dimensional original, not requiring manual plotting or drawing. An account of Henry Sutton's "telephane" was published in 1896. Around 1900, German physicist Arthur Korn invented the Bildtelegraph, widespread in continental Europe especially following a widely noticed transmission of a wanted-person photograph from Paris to London in 1908, used until the wider distribution of the radiofax. Its main competitors were the Bélinographe by Édouard Belin first, then since the 1930s the Hellschreiber, invented in 1929 by German inventor Rudolf Hell, a pioneer in mechanical image scanning and transmission.
The 1888 invention of the telautograph by Elisha Gray marked a further development in fax technology, allowing users to send signatures over long distances, thus allowing the verification of identification or ownership over long distances.
On May 19, 1924, scientists of the AT&T Corporation "by a new process of transmitting pictures by electricity" sent 15 photographs by telephone from Cleveland to New York City, such photos being suitable for newspaper reproduction. Previously, photographs had been sent over the radio using this process.
The Western Union "Deskfax" fax machine, announced in 1948, was a compact machine that fit comfortably on a desktop, using special spark printer paper.
Wireless transmission
As a designer for the Radio Corporation of America (RCA), in 1924, Richard H. Ranger invented the wireless photoradiogram, or transoceanic radio facsimile, the forerunner of today's "fax" machines. A photograph of President Calvin Coolidge sent from New York to London on November 29, 1924, became the first photo picture reproduced by transoceanic radio facsimile. Commercial use of Ranger's product began two years later. Also in 1924, Herbert E. Ives of AT&T transmitted and reconstructed the first color facsimile, a natural-color photograph of silent film star Rudolph Valentino in period costume, using red, green and blue color separations.
Beginning in the late 1930s, the Finch Facsimile system was used to transmit a "radio newspaper" to private homes via commercial AM radio stations and ordinary radio receivers equipped with Finch's printer, which used thermal paper. Sensing a new and potentially golden opportunity, competitors soon entered the field, but the printer and special paper were expensive luxuries, AM radio transmission was very slow and vulnerable to static, and the newspaper was too small. After more than ten years of repeated attempts by Finch and others to establish such a service as a viable business, the public, apparently quite content with its cheaper and much more substantial home-delivered daily newspapers, and with conventional spoken radio bulletins to provide any "hot" news, still showed only a passing curiosity about the new medium.
By the late 1940s, radiofax receivers were sufficiently miniaturized to be fitted beneath the dashboard of Western Union's "Telecar" telegram delivery vehicles.
In the 1960s, the United States Army transmitted the first photograph via satellite facsimile to Puerto Rico from the Deal Test Site using the Courier satellite.
Radio fax is still in limited use today for transmitting weather charts and information to ships at sea. The closely related technology of slow-scan television is still used by amateur radio operators.
Telephone transmission
In 1964, Xerox Corporation introduced (and patented) what many consider to be the first commercialized version of the modern fax machine, under the name LDX (Long Distance Xerography). This model was superseded two years later with a unit that would set the standard for fax machines for years to come. Up until this point, facsimile machines were very expensive and hard to operate. In 1966, Xerox released the Magnafax Telecopier, a smaller facsimile machine. This unit was far easier to operate and could be connected to any standard telephone line. This machine was capable of transmitting a letter-sized document in about six minutes. The first sub-minute digital fax machine was developed by Dacom, which built on digital data compression technology originally developed at Lockheed for satellite communication.
By the late 1970s, many companies around the world (especially Japanese firms) had entered the fax market. Very shortly after this, a new wave of more compact, faster and efficient fax machines would hit the market. Xerox continued to refine the fax machine for years after their ground-breaking first machine. In later years it would be combined with copier equipment to create the hybrid machines we have today that copy, scan and fax. Some of the lesser known capabilities of the Xerox fax technologies included their Ethernet enabled Fax Services on their 8000 workstations in the early 1980s.
Prior to the introduction of the ubiquitous fax machine, one of the first being the Exxon Qwip in the mid-1970s, facsimile machines worked by optical scanning of a document or drawing spinning on a drum. The reflected light, varying in intensity according to the light and dark areas of the document, was focused on a photocell so that the current in a circuit varied with the amount of light. This current was used to control a tone generator (a modulator), the current determining the frequency of the tone produced. This audio tone was then transmitted using an acoustic coupler (a speaker, in this case) attached to the microphone of a common telephone handset. At the receiving end, a handset's speaker was attached to an acoustic coupler (a microphone), and a demodulator converted the varying tone into a variable current that controlled the mechanical movement of a pen or pencil to reproduce the image on a blank sheet of paper on an identical drum rotating at the same rate.
Computer facsimile interface
In 1985, Hank Magnuski, founder of GammaLink, produced the first computer fax board, called GammaFax. Such boards could provide voice telephony via Analog Expansion Bus.
In the 21st century
Although businesses usually maintain some kind of fax capability, the technology has faced increasing competition from Internet-based alternatives. In some countries, because electronic signatures on contracts are not yet recognized by law, while faxed contracts with copies of signatures are, fax machines enjoy continuing support in business. In Japan, faxes are still used extensively as of September 2020 for cultural and graphological reasons. They are available for sending to both domestic and international recipients from over 81% of all convenience stores nationwide. Convenience-store fax machines commonly print the slightly re-sized content of the sent fax in the electronic confirmation-slip, in A4 paper size. Use of fax machines for reporting cases during the COVID-19 pandemic has been criticised in Japan for introducing data errors and delays in reporting, slowing response efforts to contain the spread of infections and hindering the transition to remote work.
In many corporate environments, freestanding fax machines have been replaced by fax servers and other computerized systems capable of receiving and storing incoming faxes electronically, and then routing them to users on paper or via an email (which may be secured). Such systems have the advantage of reducing costs by eliminating unnecessary printouts and reducing the number of inbound analog phone lines needed by an office.
The once ubiquitous fax machine has also begun to disappear from the small office and home office environments. Remotely hosted fax-server services are widely available from VoIP and e-mail providers allowing users to send and receive faxes using their existing e-mail accounts without the need for any hardware or dedicated fax lines. Personal computers have also long been able to handle incoming and outgoing faxes using analog modems or ISDN, eliminating the need for a stand-alone fax machine. These solutions are often ideally suited for users who only very occasionally need to use fax services. In July 2017 the United Kingdom's National Health Service was said to be the world's largest purchaser of fax machines because the digital revolution had largely bypassed it. In June 2018 the Labour Party said that the NHS had at least 11,620 fax machines in operation, and in December 2018 the Department of Health and Social Care said that no more fax machines could be bought from 2019 and that the existing ones must be replaced by secure email by March 31, 2020.
Leeds Teaching Hospitals NHS Trust, generally viewed as digitally advanced in the NHS, was engaged in a process of removing its fax machines in early 2019. This involved quite a lot of e-fax solutions because of the need to communicate with pharmacies and nursing homes which may not have access to the NHS email system and may need something in their paper records.
In 2018 two-thirds of Canadian doctors reported that they primarily used fax machines to communicate with other doctors. Faxes are still seen as safer and more secure and electronic systems are often unable to communicate with each other.
Hospitals are the leading users for fax machines in the United States where some doctors prefer fax machines over emails, often due to concerns about accidentally violating HIPAA.
Capabilities
There are several indicators of fax capabilities: group, class, data transmission rate, and conformance with ITU-T (formerly CCITT) recommendations. Since the 1968 Carterfone decision, most fax machines have been designed to connect to standard PSTN lines and telephone numbers.
Group
Analog
Group 1 and 2 faxes are sent in the same manner as a frame of analog television, with each scanned line transmitted as a continuous analog signal. Horizontal resolution depended upon the quality of the scanner, transmission line, and the printer. Analog fax machines are obsolete and no longer manufactured. ITU-T Recommendations T.2 and T.3 were withdrawn as obsolete in July 1996.
Group 1 faxes conform to the ITU-T Recommendation T.2. Group 1 faxes take six minutes to transmit a single page, with a vertical resolution of 96 scan lines per inch. Group 1 fax machines are obsolete and no longer manufactured.
Group 2 faxes conform to the ITU-T Recommendations T.3 and T.30. Group 2 faxes take three minutes to transmit a single page, with a vertical resolution of 96 scan lines per inch. Group 2 fax machines are almost obsolete, and are no longer manufactured. Group 2 fax machines can interoperate with Group 3 fax machines.
Digital
A major breakthrough in the development of the modern facsimile system was the result of digital technology, where the analog signal from scanners was digitized and then compressed, resulting in the ability to transmit high rates of data across standard phone lines. The first digital fax machine was the Dacom Rapidfax, first sold in the late 1960s, which incorporated digital data compression technology developed by Lockheed for transmission of images from satellites.
Group 3 and 4 faxes are digital formats and take advantage of digital compression methods to greatly reduce transmission times.
Group 3 faxes conform to the ITU-T Recommendations T.30 and T.4. Group 3 faxes take between 6 and 15 seconds to transmit a single page (not including the initial time for the fax machines to handshake and synchronize). The horizontal and vertical resolutions are allowed by the T.4 standard to vary among a set of fixed resolutions:
Horizontal: 100 scan lines per inch
Vertical: 100 scan lines per inch ("Basic")
Horizontal: 200 or 204 scan lines per inch
Vertical: 100 or 98 scan lines per inch ("Standard")
Vertical: 200 or 196 scan lines per inch ("Fine")
Vertical: 400 or 391 (note not 392) scan lines per inch ("Superfine")
Horizontal: 300 scan lines per inch
Vertical: 300 scan lines per inch
Horizontal: 400 or 408 scan lines per inch
Vertical: 400 or 391 scan lines per inch ("Ultrafine")
Group 4 faxes are designed to operate over 64 kbit/s digital ISDN circuits. They conform to the ITU-T Recommendations
T.563 (Terminal characteristics for Group 4 facsimile apparatus),
T.503 (Document application profile for the interchange of Group 4 facsimile documents),
T.521 (Communication application profile BT0 for document bulk transfer based on the session service),
T.6 (Facsimile coding schemes and coding control functions for Group 4 facsimile apparatus) specifying resolutions, a superset of the resolutions from T.4,
T.62 (Control procedures for teletex and Group 4 facsimile services),
T.70 (Network-independent basic transport service for the telematic services), and
T.411 to T.417 (concerned with aspects of the Open Document Architecture).
Fax Over IP (FoIP) can transmit and receive pre-digitized documents at near-realtime speeds using ITU-T recommendation T.38 to send digitised images over an IP network using JPEG compression. T.38 is designed to work with VoIP services and is often supported by analog telephone adapters used by legacy fax machines that need to connect through a VoIP service. Scanned documents are limited to the amount of time the user takes to load the document in a scanner and for the device to process a digital file. The resolution can vary from as little as 150 DPI to 9600 DPI or more. This type of faxing is not related to the e-mail–to–fax service that still uses fax modems in at least one direction.
Class
Computer modems are often designated by a particular fax class, which indicates how much processing is offloaded from the computer's CPU to the fax modem.
Class 1 (also known as Class 1.0) fax devices do fax data transfer, while the T.4/T.6 data compression and T.30 session management are performed by software on a controlling computer. This is described in ITU-T recommendation T.31.
What is commonly known as "Class 2" is an unofficial class of fax devices that perform T.30 session management themselves, but the T.4/T.6 data compression is performed by software on a controlling computer. Implementations of this "class" are based on draft versions of the standard that eventually significantly evolved to become Class 2.0. All implementations of "Class 2" are manufacturer-specific.
Class 2.0 is the official ITU-T version of Class 2 and is commonly known as Class 2.0 to differentiate it from many manufacturer-specific implementations of what is commonly known as "Class 2". It uses a different but standardized command set than the various manufacturer-specific implementations of "Class 2". The relevant ITU-T recommendation is T.32.
Class 2.1 is an improvement of Class 2.0 that implements faxing over V.34 (33.6 kbit/s), which boosts faxing speed from fax classes "2" and 2.0, which are limited to 14.4 kbit/s. The relevant ITU-T recommendation is T.32 Amendment 1. Class 2.1 fax devices are referred to as "super G3".
Data transmission rate
Several different telephone-line modulation techniques are used by fax machines. They are negotiated during the fax-modem handshake, and the fax devices will use the highest data rate that both fax devices support, usually a minimum of 14.4 kbit/s for Group 3 fax.
{| class="wikitable"
!ITU standard
!Released date
!Data rates (bit/s)
!Modulation method
|-
|V.27
|1988
|4800, 2400
|PSK
|-
|V.29
|1988
|9600, 7200, 4800
|QAM
|-
|V.17
|1991
|14400, 12000, 9600, 7200
|TCM
|-
|V.34
|1994
|28800
|QAM
|-
|V.34bis
|1998
|33600
|QAM
|-
|ISDN
|1986
|64000
|4B3T / 2B1Q (line coding)
|}
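The negotiation rule stated before the table (both devices settle on the highest data rate they share) can be sketched as a simple set intersection. This is illustrative only; a real T.30 handshake negotiates modulation schemes and capabilities, not bare numbers:

```python
def negotiate_rate(caller_rates, answerer_rates):
    """Pick the highest data rate (bit/s) supported by both fax devices.

    Returns None when the devices share no common rate.
    """
    common = set(caller_rates) & set(answerer_rates)
    return max(common) if common else None

# A V.17 machine (up to 14.4 kbit/s) meeting a "super G3" V.34 machine:
v17 = [14400, 12000, 9600, 7200, 4800, 2400]
v34 = [33600, 28800, 14400, 9600, 4800, 2400]
print(negotiate_rate(v17, v34))  # 14400 -- limited by the slower device
```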
"Super Group 3" faxes use V.34bis modulation that allows a data rate of up to 33.6 kbit/s.
Compression
As well as specifying the resolution (and allowable physical size) of the image being faxed, the ITU-T T.4 recommendation specifies two compression methods for decreasing the amount of data that needs to be transmitted between the fax machines to transfer the image. The two methods defined in T.4 are:
Modified Huffman (MH).
Modified READ (MR) (Relative Element Address Designate), optional.
An additional method is specified in T.6:
Modified Modified READ (MMR).
Later, other compression techniques were added as options to ITU-T recommendation T.30, such as the more efficient JBIG (T.82, T.85) for bi-level content, and JPEG (T.81), T.43, MRC (T.44), and T.45 for grayscale, palette, and colour content. Fax machines can negotiate at the start of the T.30 session to use the best technique implemented on both sides.
Modified Huffman
Modified Huffman (MH), specified in T.4 as the one-dimensional coding scheme, is a codebook-based run-length encoding scheme optimised to efficiently compress whitespace. As most faxes consist mostly of white space, this minimises the transmission time of most faxes. Each line scanned is compressed independently of its predecessor and successor.
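The run-length stage underlying MH is easy to illustrate. This sketch computes only the alternating white/black run lengths of a single scan line; the actual T.4 standard then maps each run length to a variable-length codeword from its published Huffman tables, which are omitted here:

```python
def run_lengths(line):
    """Return alternating run lengths for one bitonal scan line.

    line -- sequence of pixels, 0 = white, 1 = black.
    T.4 assumes each line starts with a (possibly zero-length) white run.
    """
    runs, current, count = [], 0, 0   # start in the "white" state
    for pixel in line:
        if pixel == current:
            count += 1
        else:
            runs.append(count)
            current, count = pixel, 1
    runs.append(count)
    return runs

# A mostly-white line compresses to a handful of runs:
line = [0] * 100 + [1] * 5 + [0] * 200
print(run_lengths(line))  # [100, 5, 200]
```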
Modified READ
Modified READ, specified as an optional two-dimensional coding scheme in T.4, encodes the first scanned line using MH. The next line is compared to the first, the differences determined, and then the differences are encoded and transmitted. This is effective, as most lines differ little from their predecessor. This is not continued to the end of the fax transmission, but only for a limited number of lines until the process is reset, and a new "first line" encoded with MH is produced. This limited number of lines is to prevent errors propagating throughout the whole fax, as the standard does not provide for error correction. This is an optional facility, and some fax machines do not use MR in order to minimise the amount of computation required by the machine. The limited number of lines is 2 for "Standard"-resolution faxes, and 4 for "Fine"-resolution faxes.
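The every-K-lines reset structure of MR can be sketched as follows. The two-dimensional step is reduced here to "positions where this line differs from the previous one", a deliberately simplified stand-in for the real vertical/horizontal/pass coding modes of T.4:

```python
def encode_mr(lines, k=4):
    """Encode scan lines MR-style: a 1-D line every k lines, diffs otherwise.

    Stand-in encodings: ('1D', line) for the reference lines (MH-coded in
    real T.4) and ('2D', diff_positions) for lines coded against their
    predecessor; the diff list is usually short, hence the compression.
    """
    encoded, reference = [], None
    for i, line in enumerate(lines):
        if i % k == 0 or reference is None:
            encoded.append(('1D', list(line)))
        else:
            diffs = [j for j, (a, b) in enumerate(zip(reference, line)) if a != b]
            encoded.append(('2D', diffs))
        reference = line
    return encoded

page = [[0] * 8, [0] * 8, [0, 0, 1, 1, 0, 0, 0, 0], [0, 0, 1, 1, 0, 0, 0, 0]]
for kind, payload in encode_mr(page, k=4):
    print(kind, payload)
```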
Modified Modified READ
The ITU-T T.6 recommendation adds a further compression type of Modified Modified READ (MMR), which simply allows a greater number of lines to be coded by MR than in T.4. This is because T.6 makes the assumption that the transmission is over a circuit with a low number of line errors, such as digital ISDN. In this case, the number of lines for which the differences are encoded is not limited.
JBIG
In 1999, ITU-T recommendation T.30 added JBIG (ITU-T T.82) as another lossless bi-level compression algorithm, or more precisely a "fax profile" subset of JBIG (ITU-T T.85). JBIG-compressed pages result in 20% to 50% faster transmission than MMR-compressed pages, and up to 30 times faster transmission if the page includes halftone images.
JBIG performs adaptive compression, that is, both the encoder and decoder collect statistical information about the transmitted image from the pixels transmitted so far, in order to predict the probability of each next pixel being either black or white. For each new pixel, JBIG looks at ten nearby, previously transmitted pixels. It counts how often the next pixel has been black or white in the same neighborhood in the past, and from that estimates the probability distribution of the next pixel. This is fed into an arithmetic coder, which adds only a small fraction of a bit to the output sequence if the more probable pixel is then encountered.
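The context-modelling idea can be demonstrated with a reduced neighborhood. This toy model uses a three-pixel context (the real JBIG template uses ten nearby pixels, as described above) and omits the arithmetic coder entirely; it only shows how per-context counts become a probability estimate for the next pixel:

```python
from collections import defaultdict

class ContextModel:
    """Adaptive bi-level predictor: per-context black/white counts."""

    def __init__(self):
        # counts[context] = [white_count, black_count], Laplace-smoothed
        self.counts = defaultdict(lambda: [1, 1])

    def p_black(self, context):
        white, black = self.counts[context]
        return black / (white + black)

    def update(self, context, pixel):
        self.counts[context][pixel] += 1

model = ContextModel()
# Context = (pixel to the left, two pixels from the row above). After many
# updates the model learns that an all-white neighborhood almost always
# precedes a white pixel, so an arithmetic coder would spend only a tiny
# fraction of a bit on that pixel.
for _ in range(1000):
    model.update((0, 0, 0), 0)
model.update((0, 0, 0), 1)
print(f"P(black | all-white context) = {model.p_black((0, 0, 0)):.4f}")
```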
The ITU-T T.85 "fax profile" constrains some optional features of the full JBIG standard, such that codecs do not have to keep data about more than the last three pixel rows of an image in memory at any time. This allows the streaming of "endless" images, where the height of the image may not be known until the last row is transmitted.
ITU-T T.30 allows fax machines to negotiate one of two options of the T.85 "fax profile":
In "basic mode", the JBIG encoder must split the image into horizontal stripes of 128 lines (parameter L0 = 128) and restart the arithmetic encoder for each stripe.
In "option mode", there is no such constraint.
Matsushita Whiteline Skip
A proprietary compression scheme employed on Panasonic fax machines is Matsushita Whiteline Skip (MWS). It can be overlaid on the other compression schemes, but is operative only when two Panasonic machines are communicating with one another. This system detects the blank scanned areas between lines of text, and then compresses several blank scan lines into the data space of a single character. (JBIG implements a similar technique called "typical prediction", if header flag TPBON is set to 1.)
Typical characteristics
Group 3 fax machines transfer one or a few printed or handwritten pages per minute in black-and-white (bitonal) at a resolution of 204×98 (normal) or 204×196 (fine) dots per square inch. The transfer rate is 14.4 kbit/s or higher for modems and some fax machines, but fax machines support speeds beginning with 2400 bit/s and typically operate at 9600 bit/s. The transferred image formats are called ITU-T (formerly CCITT) fax group 3 or 4. Group 3 faxes have the suffix .g3 and the MIME type image/g3fax.
The most basic fax mode transfers in black and white only. The original page is scanned in a resolution of 1728 pixels/line and 1145 lines/page (for A4). The resulting raw data is compressed using a modified Huffman code optimized for written text, achieving average compression factors of around 20. Typically a page needs 10 s for transmission, instead of about three minutes for the same uncompressed raw data of 1728×1145 bits at a speed of 9600 bit/s. The compression method uses a Huffman codebook for run lengths of black and white runs in a single scanned line, and it can also use the fact that two adjacent scanlines are usually quite similar, saving bandwidth by encoding only the differences.
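The timing figures in this paragraph can be reproduced from the quoted numbers alone:

```python
# Numbers from the text: 1728 pixels/line, 1145 lines/page (A4),
# 9600 bit/s line rate, average MH compression factor of ~20.
bits_per_page = 1728 * 1145
rate = 9600        # bit/s
compression = 20

uncompressed_s = bits_per_page / rate
compressed_s = uncompressed_s / compression

print(f"raw page: {bits_per_page:,} bits")               # ~2 Mbit
print(f"uncompressed: {uncompressed_s / 60:.1f} minutes")  # ~3.4 minutes
print(f"with MH (~20x): {compressed_s:.0f} seconds")       # ~10 s
```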
Fax classes denote the way fax programs interact with fax hardware. Available classes include Class 1, Class 2, Class 2.0 and 2.1, and Intel CAS. Many modems support at least class 1 and often either Class 2 or Class 2.0. Which is preferable to use depends on factors such as hardware, software, modem firmware, and expected use.
Printing process
Fax machines from the 1970s to the 1990s often used direct thermal printers with rolls of thermal paper as their printing technology, but since the mid-1990s there has been a transition towards plain-paper faxes: thermal transfer printers, inkjet printers and laser printers.
One of the advantages of inkjet printing is that inkjets can affordably print in color; therefore, many of the inkjet-based fax machines claim to have color fax capability. There is a standard called ITU-T30e (formally ITU-T Recommendation T.30 Annex E) for faxing in color; however, it is not widely supported, so many of the color fax machines can only fax in color to machines from the same manufacturer.
Stroke speed
Stroke speed in facsimile systems is the rate at which a fixed line perpendicular to the direction of scanning is crossed in one direction by a scanning or recording spot. Stroke speed is usually expressed as a number of strokes per minute. When the fax system scans in both directions, the stroke speed is twice this number. In most conventional 20th century mechanical systems, the stroke speed is equivalent to drum speed.
Fax paper
As a precaution, thermal fax paper is typically not accepted in archives or as documentary evidence in some courts of law unless photocopied. This is because the image-forming coating is eradicable and brittle, and it tends to detach from the medium after a long time in storage.
Fax tone
A CNG tone is an 1100 Hz tone transmitted by a fax machine when it calls another fax machine. Fax tones can cause complications when implementing fax over IP.
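Synthesizing the calling tone itself is straightforward. A minimal sketch that generates 1100 Hz tone bursts as raw samples; the 0.5 s on / 3 s off cadence used here is the conventional CNG pattern and should be read as an assumption, since the text above states only the frequency:

```python
import math

def cng_tone(sample_rate=8000, on_s=0.5, off_s=3.0, cycles=2):
    """Return float samples of a 1100 Hz CNG calling tone with on/off cadence."""
    samples = []
    for _ in range(cycles):
        n_on = int(sample_rate * on_s)
        samples += [math.sin(2 * math.pi * 1100 * i / sample_rate)
                    for i in range(n_on)]
        samples += [0.0] * int(sample_rate * off_s)   # silent gap
    return samples

tone = cng_tone()
print(len(tone), "samples")  # 2 cycles of (0.5 s on + 3.0 s off) at 8 kHz
```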
Internet fax
One popular alternative is to subscribe to an Internet fax service, allowing users to send and receive faxes from their personal computers using an existing email account. No software, fax server or fax machine is needed. Faxes are received as attached TIFF or PDF files, or in proprietary formats that require the use of the service provider's software. Faxes can be sent or retrieved from anywhere at any time that a user can get Internet access. Some services offer secure faxing to comply with stringent HIPAA and Gramm–Leach–Bliley Act requirements to keep medical information and financial information private and secure. Utilizing a fax service provider does not require paper, a dedicated fax line, or consumable resources.
Another alternative to a physical fax machine is to make use of computer software which allows people to send and receive faxes using their own computers, utilizing fax servers and unified messaging. A virtual (email) fax can be printed out and then signed and scanned back to computer before being emailed. Also the sender can attach a digital signature to the document file.
With the surging popularity of mobile phones, virtual fax machines can now be downloaded as applications for Android and iOS. These applications make use of the phone's internal camera to scan fax documents for upload or they can import from various cloud services.
Related standards
T.4 is the umbrella specification for fax. It specifies the standard image sizes, two forms of image-data compression (encoding), the image-data format, and references, T.30 and the various modem standards.
T.6 specifies a compression scheme that reduces the time required to transmit an image by roughly 50 percent.
T.30 specifies the procedures that a sending and receiving terminal use to set up a fax call, determine the image size, encoding, and transfer speed, the demarcation between pages, and the termination of the call. T.30 also references the various modem standards.
V.21, V.27ter, V.29, V.17, V.34: ITU modem standards used in facsimile. The first three were ratified prior to 1980, and were specified in the original T.4 and T.30 standards. V.34 was published for fax in 1994.
T.37 The ITU standard for sending a fax-image file via e-mail to the intended recipient of a fax.
T.38 The ITU standard for sending Fax over IP (FoIP).
G.711 pass through - this is where the T.30 fax call is carried in a VoIP call encoded as audio. This is sensitive to network packet loss, jitter and clock synchronization. When using voice high-compression encoding techniques such as, but not limited to, G.729, some fax tonal signals may not be correctly transported across the packet network.
image/t38 MIME-type
SSL Fax An emerging standard that allows a telephone based fax session to negotiate a fax transfer over the internet, but only if both sides support the standard. The standard is partially based on T.30 and is being developed by Hylafax+ developers.
| Technology | Telecommunications | null |
10835 | https://en.wikipedia.org/wiki/Frequency%20modulation | Frequency modulation | Frequency modulation (FM) is the encoding of information in a carrier wave by varying the instantaneous frequency of the wave. The technology is used in telecommunications, radio broadcasting, signal processing, and computing.
In analog frequency modulation, such as radio broadcasting of an audio signal representing voice or music, the instantaneous frequency deviation, i.e. the difference between the frequency of the carrier and its center frequency, has a functional relation to the modulating signal amplitude.
Digital data can be encoded and transmitted with a type of frequency modulation known as frequency-shift keying (FSK), in which the instantaneous frequency of the carrier is shifted among a set of frequencies. The frequencies may represent digits, such as '0' and '1'. FSK is widely used in computer modems such as fax modems, telephone caller ID systems, garage door openers, and other low-frequency transmissions. Radioteletype also uses FSK.
Frequency modulation is widely used for FM radio broadcasting. It is also used in telemetry, radar, seismic prospecting, and monitoring newborns for seizures via EEG, two-way radio systems, sound synthesis, magnetic tape-recording systems and some video-transmission systems. In radio transmission, an advantage of frequency modulation is that it has a larger signal-to-noise ratio and therefore rejects radio frequency interference better than an equal power amplitude modulation (AM) signal. For this reason, most music is broadcast over FM radio.
However, under severe enough multipath conditions it performs much more poorly than AM, with distinct high frequency noise artifacts that are audible with lower volumes and less complex tones. With high enough volume and carrier deviation audio distortion starts to occur that otherwise wouldn't be present without multipath or with an AM signal.
Frequency modulation and phase modulation are the two complementary principal methods of angle modulation; phase modulation is often used as an intermediate step to achieve frequency modulation. These methods contrast with amplitude modulation, in which the amplitude of the carrier wave varies, while the frequency and phase remain constant.
Theory
If the information to be transmitted (i.e., the baseband signal) is $x_m(t)$ and the sinusoidal carrier is $x_c(t) = A_c \cos(2\pi f_c t)$, where $f_c$ is the carrier's base frequency and $A_c$ is the carrier's amplitude, the modulator combines the carrier with the baseband data signal to get the transmitted signal:

$$y(t) = A_c \cos\left(2\pi \int_0^t f(\tau)\, d\tau\right) = A_c \cos\left(2\pi f_c t + 2\pi f_\Delta \int_0^t x_m(\tau)\, d\tau\right)$$

where $f(\tau) = f_c + f_\Delta x_m(\tau)$ and $f_\Delta = K_f A_m$, $K_f$ being the sensitivity of the frequency modulator and $A_m$ being the amplitude of the modulating signal or baseband signal.

In this equation, $f(\tau)$ is the instantaneous frequency of the oscillator and $f_\Delta$ is the frequency deviation, which represents the maximum shift away from $f_c$ in one direction, assuming $x_m(t)$ is limited to the range ±1.
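A direct numerical rendering of the modulator equation above makes the integrate-then-modulate structure concrete. The following is a minimal sketch in Python with NumPy, in which every parameter value (sample rate, carrier, deviation, tone frequency) is an arbitrary illustration rather than anything prescribed by the theory:

```python
import numpy as np

fs = 48_000                       # sample rate (Hz), illustrative
t = np.arange(0, 0.05, 1 / fs)    # 50 ms of signal

fc = 5_000        # carrier frequency f_c (Hz)
f_delta = 1_000   # peak frequency deviation f_Delta (Hz)
Ac = 1.0          # carrier amplitude A_c

xm = np.cos(2 * np.pi * 200 * t)  # baseband signal, normalized to +/-1

# y(t) = A_c * cos(2*pi*f_c*t + 2*pi*f_Delta * integral of x_m)
phase = 2 * np.pi * f_delta * np.cumsum(xm) / fs  # running integral of x_m
y = Ac * np.cos(2 * np.pi * fc * t + phase)
print(y[:5])
```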
It is important to realize that this process of integrating the instantaneous frequency to create an instantaneous phase is quite different from what the term "frequency modulation" naively implies, namely directly adding the modulating signal to the carrier frequency:

$$y(t) = A_c \cos\bigl(2\pi \left[f_c + f_\Delta x_m(t)\right] t\bigr)$$

which would result in a modulated signal that has spurious local minima and maxima that do not correspond to those of the carrier.
While most of the energy of the signal is contained within fc ± fΔ, it can be shown by Fourier analysis that a wider range of frequencies is required to precisely represent an FM signal. The frequency spectrum of an actual FM signal has components extending infinitely, although their amplitude decreases and higher-order components are often neglected in practical design problems.
Sinusoidal baseband signal
Mathematically, a baseband modulating signal may be approximated by a sinusoidal continuous wave signal with a frequency $f_m$, i.e. $x_m(t) = \cos(2\pi f_m t)$. This method is also named single-tone modulation. The integral of such a signal is:

$$\int_0^t x_m(\tau)\, d\tau = \frac{\sin(2\pi f_m t)}{2\pi f_m}$$

In this case, the expression for $y(t)$ above simplifies to:

$$y(t) = A_c \cos\left(2\pi f_c t + \frac{f_\Delta}{f_m} \sin(2\pi f_m t)\right)$$

where the amplitude of the modulating sinusoid is represented in the peak deviation $f_\Delta = K_f A_m$ (see frequency deviation).
The harmonic distribution of a sine wave carrier modulated by such a sinusoidal signal can be represented with Bessel functions; this provides the basis for a mathematical understanding of frequency modulation in the frequency domain.
Modulation index
As in other modulation systems, the modulation index indicates by how much the modulated variable varies around its unmodulated level. It relates to variations in the carrier frequency:

$$h = \frac{\Delta f}{f_m} = \frac{f_\Delta}{f_m}$$

where $f_m$ is the highest frequency component present in the modulating signal $x_m(t)$, and $\Delta f$ is the peak frequency deviation, i.e. the maximum deviation of the instantaneous frequency from the carrier frequency. For a sine wave modulation, the modulation index is seen to be the ratio of the peak frequency deviation of the carrier wave to the frequency of the modulating sine wave.
If $h \ll 1$, the modulation is called narrowband FM (NFM), and its bandwidth is approximately $2 f_m$. Sometimes modulation index $h < 0.3$ is considered NFM and other modulation indices are considered wideband FM (WFM or FM).
For digital modulation systems, for example, binary frequency shift keying (BFSK), where a binary signal modulates the carrier, the modulation index is given by:

$$h = \frac{\Delta f}{f_m} = \frac{\Delta f}{\frac{1}{2T_s}} = 2\, \Delta f\, T_s$$

where $T_s$ is the symbol period, and $f_m = \frac{1}{2T_s}$ is used as the highest frequency of the modulating binary waveform by convention, even though it would be more accurate to say it is the highest fundamental of the modulating binary waveform. In the case of digital modulation, the carrier is never transmitted. Rather, one of two frequencies is transmitted, either $f_c + \Delta f$ or $f_c - \Delta f$, depending on the binary state 0 or 1 of the modulation signal.
If $h \gg 1$, the modulation is called wideband FM and its bandwidth is approximately $2 f_\Delta$. While wideband FM uses more bandwidth, it can improve the signal-to-noise ratio significantly; for example, doubling the value of $\Delta f$, while keeping $f_m$ constant, results in an eight-fold improvement in the signal-to-noise ratio. (Compare this with chirp spread spectrum, which uses extremely wide frequency deviations to achieve processing gains comparable to traditional, better-known spread-spectrum modes).
With a tone-modulated FM wave, if the modulation frequency is held constant and the modulation index is increased, the (non-negligible) bandwidth of the FM signal increases but the spacing between spectra remains the same; some spectral components decrease in strength as others increase. If the frequency deviation is held constant and the modulation frequency increased, the spacing between spectra increases.
Frequency modulation can be classified as narrowband if the change in the carrier frequency is about the same as the signal frequency, or as wideband if the change in the carrier frequency is much higher (modulation index > 1) than the signal frequency. For example, narrowband FM (NFM) is used for two-way radio systems such as Family Radio Service, in which the carrier is allowed to deviate only 2.5 kHz above and below the center frequency with speech signals of no more than 3.5 kHz bandwidth. Wideband FM is used for FM broadcasting, in which music and speech are transmitted with up to 75 kHz deviation from the center frequency and carry audio with up to a 20 kHz bandwidth and subcarriers up to 92 kHz.
Bessel functions
For the case of a carrier modulated by a single sine wave, the resulting frequency spectrum can be calculated using Bessel functions of the first kind, as a function of the sideband number and the modulation index. The carrier and sideband amplitudes are illustrated for different modulation indices of FM signals. For particular values of the modulation index, the carrier amplitude becomes zero and all the signal power is in the sidebands.
Since the sidebands are on both sides of the carrier, their count is doubled, and then multiplied by the modulating frequency to find the bandwidth. For example, 3 kHz deviation modulated by a 2.2 kHz audio tone produces a modulation index of 1.36. Suppose that we limit ourselves to only those sidebands that have a relative amplitude of at least 0.01. Then, examining the chart shows this modulation index will produce three sidebands. These three sidebands, when doubled, give us (6 × 2.2 kHz) or a 13.2 kHz required bandwidth.
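The sideband count in this example can be checked directly against the Bessel functions. A small sketch using SciPy's jv (Bessel function of the first kind) and the same 0.01 relative-amplitude cutoff as in the text:

```python
from scipy.special import jv

beta = 3.0 / 2.2   # modulation index: 3 kHz deviation, 2.2 kHz tone
fm = 2.2           # modulating frequency in kHz

# Count sideband pairs whose relative amplitude |J_n(beta)| >= 0.01
n = 1
while abs(jv(n, beta)) >= 0.01:
    n += 1
significant = n - 1

print(f"modulation index: {beta:.2f}")               # 1.36
print(f"significant sideband pairs: {significant}")  # 3
print(f"bandwidth: {2 * significant * fm:.1f} kHz")  # 13.2 kHz
```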
Carson's rule
A rule of thumb, Carson's rule states that nearly all (≈98 percent) of the power of a frequency-modulated signal lies within a bandwidth $B_T$ of:

$$B_T = 2(\Delta f + f_m) = 2 f_m (1 + h)$$

where $\Delta f$, as defined above, is the peak deviation of the instantaneous frequency from the center carrier frequency $f_c$, $h$ is the modulation index, which is the ratio of frequency deviation to highest frequency in the modulating signal, and $f_m$ is the highest frequency in the modulating signal.
Carson's rule in this form applies only to sinusoidal modulating signals. For non-sinusoidal signals:

$$B_T = 2(D + 1)W$$

where $W$ is the highest frequency in the non-sinusoidal modulating signal and $D$ is the deviation ratio, i.e. the ratio of the peak frequency deviation to the highest frequency of the modulating non-sinusoidal signal.
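Carson's rule reduces to one line of arithmetic. A small helper applying it to the deviation and audio-bandwidth figures quoted earlier in this section (2.5 kHz/3.5 kHz for narrowband two-way radio, 75 kHz/20 kHz for FM broadcasting):

```python
def carson_bandwidth(peak_deviation_hz, f_max_hz):
    """Approximate 98%-power bandwidth: B_T = 2 * (delta_f + f_m)."""
    return 2 * (peak_deviation_hz + f_max_hz)

# Figures quoted in the narrowband/wideband discussion above:
print(carson_bandwidth(2_500, 3_500))    # NFM two-way radio: 12,000 Hz
print(carson_bandwidth(75_000, 20_000))  # FM broadcast audio: 190,000 Hz
```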
Noise reduction
FM provides improved signal-to-noise ratio (SNR), as compared for example with AM. Compared with an optimum AM scheme, FM typically has poorer SNR below a certain signal level called the noise threshold, but above a higher level – the full improvement or full quieting threshold – the SNR is much improved over AM. The improvement depends on modulation level and deviation. For typical voice communications channels, improvements are typically 5–15 dB. FM broadcasting using wider deviation can achieve even greater improvements. Additional techniques, such as pre-emphasis of higher audio frequencies with corresponding de-emphasis in the receiver, are generally used to improve overall SNR in FM circuits. Since FM signals have constant amplitude, FM receivers normally have limiters that remove AM noise, further improving SNR.
Implementation
Modulation
FM signals can be generated using either direct or indirect frequency modulation:
Direct FM modulation can be achieved by directly feeding the message into the input of a voltage-controlled oscillator.
For indirect FM modulation, the message signal is integrated to generate a phase-modulated signal. This is used to modulate a crystal-controlled oscillator, and the result is passed through a frequency multiplier to produce an FM signal. In this approach, narrowband FM is generated first and later converted to wideband FM, hence the name indirect FM modulation.
Demodulation
Many FM detector circuits exist. A common method for recovering the information signal is through a Foster–Seeley discriminator or ratio detector. A phase-locked loop can be used as an FM demodulator. Slope detection demodulates an FM signal by using a tuned circuit which has its resonant frequency slightly offset from the carrier. As the frequency rises and falls the tuned circuit provides a changing amplitude of response, converting FM to AM. AM receivers may detect some FM transmissions by this means, although it does not provide an efficient means of detection for FM broadcasts. In Software-Defined Radio implementations the demodulation may be carried out by using the Hilbert transform (implemented as a filter) to recover the instantaneous phase, and thereafter differentiating this phase (using another filter) to recover the instantaneous frequency. Alternatively, a complex mixer followed by a bandpass filter may be used to translate the signal to baseband, and then proceeding as before.
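The Hilbert-transform approach mentioned above for software-defined radio can be sketched directly with SciPy: form the analytic signal, unwrap its instantaneous phase, and differentiate. A minimal sketch, with all signal parameters chosen purely for illustration:

```python
import numpy as np
from scipy.signal import hilbert

def fm_demodulate(y, fs):
    """Recover instantaneous frequency (Hz) from a real FM signal."""
    analytic = hilbert(y)                      # y + j*H{y}, the analytic signal
    phase = np.unwrap(np.angle(analytic))      # instantaneous phase
    return np.diff(phase) * fs / (2 * np.pi)   # derivative of phase -> Hz

# Round trip against a simple modulator (5 kHz carrier, 1 kHz deviation):
fs = 48_000
t = np.arange(0, 0.05, 1 / fs)
xm = np.cos(2 * np.pi * 200 * t)                         # 200 Hz baseband tone
phase = 2 * np.pi * (5_000 * t + 1_000 * np.cumsum(xm) / fs)
y = np.cos(phase)

inst_freq = fm_demodulate(y, fs)[100:-100]  # trim Hilbert edge artifacts
print(inst_freq.min(), inst_freq.max())     # ~4 kHz to ~6 kHz, tracking x_m
```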
Applications
Doppler effect
When an echolocating bat approaches a target, its outgoing sounds return as echoes, which are Doppler-shifted upward in frequency. In certain species of bats, which produce constant frequency (CF) echolocation calls, the bats compensate for the Doppler shift by lowering their call frequency as they approach a target. This keeps the returning echo in the same frequency range of the normal echolocation call. This dynamic frequency modulation is called the Doppler Shift Compensation (DSC), and was discovered by Hans Schnitzler in 1968.
Magnetic tape storage
FM is also used at intermediate frequencies by analog VCR systems (including VHS) to record the luminance (black and white) portions of the video signal. Commonly, the chrominance component is recorded as a conventional AM signal, using the higher-frequency FM signal as bias. FM is the only feasible method of recording the luminance ("black-and-white") component of video to (and retrieving video from) magnetic tape without distortion; video signals have a large range of frequency components – from a few hertz to several megahertz, too wide for equalizers to work with due to electronic noise below −60 dB. FM also keeps the tape at saturation level, acting as a form of noise reduction; a limiter can mask variations in playback output, and the FM capture effect removes print-through and pre-echo. A continuous pilot-tone, if added to the signal – as was done on V2000 and many Hi-band formats – can keep mechanical jitter under control and assist timebase correction.
These FM systems are unusual, in that they have a ratio of carrier to maximum modulation frequency of less than two; contrast this with FM audio broadcasting, where the ratio is around 10,000. Consider, for example, a 6-MHz carrier modulated at a 3.5-MHz rate; by Bessel analysis, the first sidebands are on 9.5 and 2.5 MHz and the second sidebands are on 13 MHz and −1 MHz. The result is a reversed-phase sideband on +1 MHz; on demodulation, this results in unwanted output at 6 – 1 = 5 MHz. The system must be designed so that this unwanted output is reduced to an acceptable level.
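The sideband positions quoted above follow from adding and subtracting multiples of the modulating frequency to the carrier; a short check in Python using the figures from the text:

# Sideband positions for a 6 MHz carrier modulated at 3.5 MHz (values from the text).
f_c, f_m = 6.0, 3.5   # MHz
for n in (1, 2):
    print(n, f_c + n * f_m, f_c - n * f_m)
# n=1 -> 9.5 and 2.5 MHz; n=2 -> 13.0 and -1.0 MHz.
# The -1 MHz component folds back as a reversed-phase sideband at +1 MHz,
# which beats against the carrier on demodulation: 6 - 1 = 5 MHz of unwanted output.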
Sound
FM is also used at audio frequencies to synthesize sound. This technique, known as FM synthesis, was popularized by early digital synthesizers and became a standard feature in several generations of personal computer sound cards.
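FM synthesis in its simplest two-operator form modulates the phase of one sine oscillator with another. A minimal Python sketch (the 440/220 Hz operator pair and the index of 3 are assumed for illustration, not taken from the text):

import numpy as np

fs = 44_100
t = np.arange(fs) / fs                          # one second of samples
f_carrier, f_mod, index = 440.0, 220.0, 3.0     # Hz, Hz, modulation index

# Two-operator FM: y(t) = sin(2*pi*f_c*t + I*sin(2*pi*f_m*t))
tone = np.sin(2 * np.pi * f_carrier * t
              + index * np.sin(2 * np.pi * f_mod * t))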
Radio
Edwin Howard Armstrong (1890–1954) was an American electrical engineer who invented wideband frequency modulation (FM) radio.
He patented the regenerative circuit in 1914, the superheterodyne receiver in 1918 and the super-regenerative circuit in 1922. Armstrong presented his paper, "A Method of Reducing Disturbances in Radio Signaling by a System of Frequency Modulation", (which first described FM radio) before the New York section of the Institute of Radio Engineers on November 6, 1935. The paper was published in 1936.
As the name implies, wideband FM (WFM) requires a wider signal bandwidth than amplitude modulation by an equivalent modulating signal; this also makes the signal more robust against noise and interference. Frequency modulation is also more robust against signal-amplitude-fading phenomena. As a result, FM was chosen as the modulation standard for high frequency, high fidelity radio transmission, hence the term "FM radio" (although for many years the BBC called it "VHF radio" because commercial FM broadcasting uses part of the VHF band – the FM broadcast band). FM receivers employ a special detector for FM signals and exhibit a phenomenon known as the capture effect, in which the tuner "captures" the stronger of two stations on the same frequency while rejecting the other (compare this with a similar situation on an AM receiver, where both stations can be heard simultaneously). Frequency drift or a lack of selectivity may cause one station to be overtaken by another on an adjacent channel. Frequency drift was a problem in early (or inexpensive) receivers; inadequate selectivity may affect any tuner.
A wideband FM signal can also be used to carry a stereo signal; this is done with multiplexing and demultiplexing before and after the FM process. The FM modulation and demodulation process is identical in stereo and monaural processes.
FM is commonly used at VHF radio frequencies for high-fidelity broadcasts of music and speech. In broadcast services, where audio fidelity is important, wideband FM is generally used. Analog TV sound is also broadcast using FM. Narrowband FM is used for voice communications in commercial and amateur radio settings. In two-way radio, narrowband FM (NBFM) is used to conserve bandwidth for land mobile, marine mobile and other radio services.
A high-efficiency radio-frequency switching amplifier can be used to transmit FM signals (and other constant-amplitude signals). For a given signal strength (measured at the receiver antenna), switching amplifiers use less battery power and typically cost less than a linear amplifier. This gives FM another advantage over other modulation methods requiring linear amplifiers, such as AM and QAM.
There are reports that on October 5, 1924, Professor Mikhail A. Bonch-Bruevich, during a scientific and technical conversation at the Nizhny Novgorod Radio Laboratory, described his new method of telephony, based on a change in the period of oscillations, and demonstrated frequency modulation on a laboratory model.
Hearing assistive technology
Frequency modulated systems are a widespread and commercially available assistive technology that make speech more understandable by improving the signal-to-noise ratio in the user's ear. They are also called auditory trainers, a term which refers to any sound amplification system not classified as a hearing aid. They intensify signal levels from the source by 15 to 20 decibels. FM systems are used by hearing-impaired people as well as children whose listening is affected by disorders such as auditory processing disorder or ADHD. For people with sensorineural hearing loss, FM systems result in better speech perception than hearing aids. They can be coupled with behind-the-ear hearing aids to allow the user to alternate the setting. FM systems are more convenient and cost-effective than alternatives such as cochlear implants, but many users use FM systems infrequently due to their conspicuousness and need for recharging.
| Technology | Telecommunications | null |
10843 | https://en.wikipedia.org/wiki/Fruit | Fruit | In botany, a fruit is the seed-bearing structure in flowering plants (angiosperms) that is formed from the ovary after flowering (see Fruit anatomy).
Fruits are the means by which angiosperms disseminate their seeds. Edible fruits in particular have long propagated using the movements of humans and other animals in a symbiotic relationship that is the means for seed dispersal for the one group and nutrition for the other; humans, and many other animals, have become dependent on fruits as a source of food. Consequently, fruits account for a substantial fraction of the world's agricultural output, and some (such as the apple and the pomegranate) have acquired extensive cultural and symbolic meanings.
In common language and culinary usage, fruit normally means the seed-associated fleshy structures (or produce) of plants that typically are sweet (or sour) and edible in the raw state, such as apples, bananas, grapes, lemons, oranges, and strawberries. In botanical usage, the term fruit also includes many structures that are not commonly called as such in everyday language, such as nuts, bean pods, corn kernels, tomatoes, and wheat grains.
Botanical vs. culinary
Many common language terms used for fruit and seeds differ from botanical classifications. For example, in botany, a fruit is a ripened ovary or carpel that contains seeds, e.g., an orange, pomegranate, tomato or a pumpkin. A nut is a type of fruit (and not a seed), and a seed is a ripened ovule.
In culinary language, a fruit is the sweet-, non-sweet-, or even sour-tasting produce of a specific plant (e.g., a peach, pear or lemon); nuts are hard, oily, non-sweet plant produce in shells (hazelnut, acorn). So-called vegetables typically are savory or non-sweet produce (zucchini, lettuce, broccoli, and tomato), but some may be sweet-tasting (sweet potato).
Examples of botanically classified fruit that are typically called vegetables include cucumber, pumpkin, and squash (all are cucurbits); beans, peanuts, and peas (all legumes); and corn, eggplant, bell pepper (or sweet pepper), and tomato. Many spices are fruits, botanically speaking, including black pepper, chili pepper, cumin and allspice. In contrast, rhubarb is often called a fruit when used in making pies, but the edible produce of rhubarb is actually the leaf stalk or petiole of the plant. Edible gymnosperm seeds are often given fruit names, e.g., ginkgo nuts and pine nuts.
Botanically, a cereal grain, such as corn, rice, or wheat is a kind of fruit (termed a caryopsis). However, the fruit wall is thin and fused to the seed coat, so almost all the edible grain-fruit is actually a seed.
Structure
The outer layer, often edible, of most fruits is called the pericarp. Typically formed from the ovary, it surrounds the seeds; in some species, however, other structural tissues contribute to or form the edible portion. The pericarp may be described in three layers from outer to inner, i.e., the epicarp, mesocarp and endocarp.
Fruit that bears a prominent pointed terminal projection is said to be beaked.
Development
A fruit results from the fertilization and maturation of one or more flowers. The gynoecium, which contains the stigma-style-ovary system, is centered in the flower-head, and it forms all or part of the fruit. Inside the ovary (or ovaries) are one or more ovules. Here begins a complex sequence called double fertilization: a female gametophyte produces an egg cell for the purpose of fertilization. (A female gametophyte is called a megagametophyte, and is also called the embryo sac.) After double fertilization, the ovules will become seeds.
Ovules are fertilized in a process that starts with pollination, which is the movement of pollen from the stamens to the stigma-style-ovary system within the flower-head. After pollination, a pollen tube grows from the (deposited) pollen through the stigma down the style into the ovary to the ovule. Two sperm are transferred from the pollen to a megagametophyte. Within the megagametophyte, one sperm unites with the egg, forming a zygote, while the second sperm enters the central cell forming the endosperm mother cell, which completes the double fertilization process. Later, the zygote will give rise to the embryo of the seed, and the endosperm mother cell will give rise to endosperm, a nutritive tissue used by the embryo.
Fruit formation is associated with meiosis, a central aspect of sexual reproduction in flowering plants. During meiosis homologous chromosomes replicate, recombine and randomly segregate, and then undergo segregation of sister chromatids to produce haploid cells. Union of haploid nuclei from pollen and ovule (fertilisation), occurring either by self- or cross-pollination, leads to the formation of a diploid zygote that can then develop into an embryo within the emerging seed. Repeated fertilisations within the ovary are accompanied by maturation of the ovary to form the fruit.
As the ovules develop into seeds, the ovary begins to ripen and the ovary wall, the pericarp, may become fleshy (as in berries or drupes), or it may form a hard outer covering (as in nuts). In some multi-seeded fruits, the extent to which a fleshy structure develops is proportional to the number of fertilized ovules. The pericarp typically is differentiated into two or three distinct layers; these are called the exocarp (outer layer, also called epicarp), mesocarp (middle layer), and endocarp (inner layer).
In some fruits, the sepals, petals, stamens or the style of the flower fall away as the fleshy fruit ripens. However, for simple fruits derived from an inferior ovary – i.e., one that lies below the attachment of the other floral parts – some parts (including petals, sepals, and stamens) fuse with the ovary and ripen with it. In such cases, when floral parts other than the ovary form a significant part of the fruit that develops, it is called an accessory fruit. Examples of accessory fruits include apple, rose hip, strawberry, and pineapple.
Because several parts of the flower besides the ovary may contribute to the structure of a fruit, it is important to understand how a particular fruit forms. There are three general modes of fruit development:
Apocarpous fruits develop from a single flower (while having one or more separate, unfused, carpels); they are the simple fruits.
Syncarpous fruits develop from a single gynoecium (having two or more carpels fused together).
Multiple fruits form from many flowers – i.e., an inflorescence of flowers.
Classification of fruits
Consistent with the three modes of fruit development, plant scientists have classified fruits into three main groups: simple fruits, aggregate fruits, and multiple (or composite) fruits. The groupings reflect how the ovary and other flower organs are arranged and how the fruits develop, but they are not evolutionarily relevant as diverse plant taxa may be in the same group.
While the section of a fungus that produces spores is called a fruiting body, fungi are members of the kingdom Fungi and not of the plant kingdom.
Simple fruits
Simple fruits are the result of the ripening-to-fruit of a simple or compound ovary in a single flower with a single pistil. In contrast, a single flower with numerous pistils typically produces an aggregate fruit; and the merging of several flowers, or a 'multiple' of flowers, results in a 'multiple' fruit. A simple fruit is further classified as either dry or fleshy.
To distribute their seeds, dry fruits may split open and discharge their seeds to the winds, which is called dehiscence. Or the distribution process may rely upon the decay and degradation of the fruit to expose the seeds; or it may rely upon the eating of fruit and excreting of seeds by frugivores – both are called indehiscence. Fleshy fruits do not split open, but they also are indehiscent and they may also rely on frugivores for distribution of their seeds. Typically, the entire outer layer of the ovary wall ripens into a potentially edible pericarp.
Types of dry simple fruits, (with examples) include:
Achene – most commonly seen in aggregate fruits (e.g., strawberry, see below).
Capsule – (Brazil nut: botanically, it is not a nut).
Caryopsis – (cereal grains, including wheat, rice, oats, barley).
Cypsela – an achene-like fruit derived from the individual florets in a capitulum: (dandelion).
Fibrous drupe – (coconut, walnut: botanically, neither is a true nut.).
Follicle – follicles are formed from a single carpel and open by one suture: (milkweed); also commonly seen in aggregate fruits: (magnolia, peony).
Legume – (bean, pea, peanut: botanically, the peanut is the seed of a legume, not a nut).
Loment – a type of indehiscent legume: (sweet vetch or wild potato).
Nut – (beechnut, hazelnut, acorn (of the oak): botanically, these are true nuts).
Samara – (ash, elm, maple key).
Schizocarp, see below – (carrot seed).
Silique – (radish seed).
Silicle – (shepherd's purse).
Utricle – (beet, Rumex).
Fruits in which part or all of the pericarp (fruit wall) is fleshy at maturity are termed fleshy simple fruits.
Types of fleshy simple fruits, (with examples) include:
Berry – the berry is the most common type of fleshy fruit. The entire outer layer of the ovary wall ripens into a potentially edible "pericarp", (see below).
Stone fruit or drupe – the definitive characteristic of a drupe is the hard, "lignified" stone (sometimes called the "pit"). It is derived from the ovary wall of the flower: apricot, cherry, olive, peach, plum, mango.
Pome – the pome fruits: apples, pears, rosehips, saskatoon berry, etc., are a syncarpous (fused) fleshy fruit, a simple fruit, developing from a half-inferior ovary. Pomes are of the family Rosaceae.
Berries
Berries are a type of simple fleshy fruit that issue from a single ovary. (The ovary itself may be compound, with several carpels.) The botanical term true berry includes grapes, currants, cucumbers, eggplants (aubergines), tomatoes, chili peppers, and bananas, but excludes certain fruits that are called "-berry" by culinary custom or by common usage of the term – such as strawberries and raspberries. Berries may be formed from one or more carpels (i.e., from the simple or compound ovary) from the same, single flower. Seeds typically are embedded in the fleshy interior of the ovary.
Examples include:
Tomato – in culinary terms, the tomato is regarded as a vegetable, but it is botanically classified as a fruit and a berry.
Banana – the fruit has been described as a "leathery berry". In cultivated varieties, the seeds are diminished nearly to non-existence.
Pepo – berries with skin that is hardened: cucurbits, including gourds, squash, melons.
Hesperidium – berries with a rind and a juicy interior: most citrus fruit.
Cranberry, gooseberry, redcurrant, grape.
The strawberry, regardless of its appearance, is classified as a dry, not a fleshy fruit. Botanically, it is not a berry; it is an aggregate-accessory fruit, the latter term meaning the fleshy part is derived not from the plant's ovaries but from the receptacle that holds the ovaries. Numerous dry achenes are attached to the outside of the fruit-flesh; they appear to be seeds but each is actually an ovary of a flower, with a seed inside.
Schizocarps are dry fruits, though some appear to be fleshy. They originate from syncarpous ovaries but do not actually dehisce; rather, they split into segments with one or more seeds. They include a number of different forms from a wide range of families, including carrot, parsnip, parsley, cumin.
Aggregate fruits
An aggregate fruit is also called an aggregation, or etaerio; it develops from a single flower that presents numerous simple pistils. Each pistil contains one carpel, and each develops into a fruitlet. The ultimate (fruiting) development of the aggregation of pistils is called an aggregate fruit, etaerio fruit, or simply an etaerio.
Different types of aggregate fruits can produce different etaerios, such as achenes, drupelets, follicles, and berries.
For example, the Ranunculaceae species, including Clematis and Ranunculus, produces an etaerio of achenes;
Rubus species, including raspberry: an etaerio of drupelets;
Calotropis species: an etaerio of follicles fruit;
Annona species: an etaerio of berries.
Some other broadly recognized species and their etaerios (or aggregations) are:
Teasel; fruit is an aggregation of cypselas.
Tuliptree; fruit is an aggregation of samaras.
Magnolia and peony; fruit is an aggregation of follicles.
American sweet gum; fruit is an aggregation of capsules.
Sycamore; fruit is an aggregation of achenes.
The pistils of the raspberry are called drupelets because each pistil is like a small drupe attached to the receptacle. In some bramble fruits, such as blackberry, the receptacle, an accessory part, elongates and then develops as part of the fruit, making the blackberry an aggregate-accessory fruit. The strawberry is also an aggregate-accessory fruit, of which the seeds are contained in the achenes. Notably in all these examples, the fruit develops from a single flower, with numerous pistils.
Multiple fruits
A multiple fruit is formed from a cluster of flowers (a 'multiple' of flowers), also called an inflorescence. Each (smallish) flower produces a single fruitlet; as they all develop, they merge into one mass of fruit. Examples include pineapple, fig, mulberry, Osage orange, and breadfruit. An inflorescence (a cluster) of white flowers, called a head, is produced first. After fertilization, each flower in the cluster develops into a drupe; as the drupes expand, they develop as a connate organ, merging into a multiple fleshy fruit called a syncarp.
Progressive stages of multiple flowering and fruit development can be observed on a single branch of the Indian mulberry, or noni. During the sequence of development, a progression of second, third, and more inflorescences are initiated in turn at the head of the branch or stem.
Accessory fruit forms
Fruits may incorporate tissues derived from other floral parts besides the ovary, including the receptacle, hypanthium, petals, or sepals. Accessory fruits occur in all three classes of fruit development – simple, aggregate, and multiple. Accessory fruits are frequently designated by the hyphenated term showing both characters. For example, a pineapple is a multiple-accessory fruit, a blackberry is an aggregate-accessory fruit, and an apple is a simple-accessory fruit.
Table of fleshy fruit examples
Seedless fruits
Seedlessness is an important feature of some fruits of commerce. Commercial cultivars of bananas and pineapples are examples of seedless fruits. Some cultivars of citrus fruits (especially grapefruit, mandarin oranges, navel oranges, satsumas), table grapes, and of watermelons are valued for their seedlessness. In some species, seedlessness is the result of parthenocarpy, where fruits set without fertilization. Parthenocarpic fruit-set may (or may not) require pollination, but most seedless citrus fruits require a stimulus from pollination to produce fruit. Seedless bananas and grapes are triploids, and seedlessness results from the abortion of the embryonic plant that is produced by fertilization, a phenomenon known as stenospermocarpy, which requires normal pollination and fertilization.
Seed dissemination
Variations in fruit structures largely depend on the modes of dispersal applied to their seeds. Dispersal is achieved by wind or water, by explosive dehiscence, and by interactions with animals.
Some fruits present their outer skins or shells coated with spikes or hooked burrs; these evolved either to deter would-be foragers from feeding on them or to serve to attach themselves to the hair, feathers, legs, or clothing of animals, thereby using them as dispersal agents. These plants are termed zoochorous; common examples include cocklebur, unicorn plant, and beggarticks (or Spanish needle).
By developments of mutual evolution, the fleshy produce of fruits typically appeals to hungry animals, such that the seeds contained within are taken in, carried away, and later deposited (i.e., defecated) at a distance from the parent plant. Likewise, the nutritious, oily kernels of nuts typically motivate birds and squirrels to hoard them, burying them in soil to retrieve later during the winter of scarcity; thereby, uneaten seeds are sown effectively under natural conditions to germinate and grow a new plant some distance away from the parent.
Other fruits have evolved flattened and elongated wings or helicopter-like blades, e.g., elm, maple, and tuliptree. This mechanism increases dispersal distance away from the parent via wind. Other wind-dispersed fruit have tiny "parachutes", e.g., dandelion, milkweed, salsify.
Coconut fruits can float thousands of miles in the ocean, thereby spreading their seeds. Other fruits that can disperse via water are nipa palm and screw pine.
Some fruits have evolved propulsive mechanisms that fling seeds substantial distances – as in the sandbox tree – via explosive dehiscence or other such mechanisms (see impatiens and squirting cucumber).
Food uses
A cornucopia of fruits – fleshy (simple) fruits from apples to berries to watermelon; dry (simple) fruits including beans and rice and coconuts; aggregate fruits including strawberries, raspberries, blackberries, pawpaw; and multiple fruits such as pineapple, fig, mulberries – are commercially valuable as human food. They are eaten both fresh and as jams, marmalade and other fruit preserves. They are used extensively in manufactured and processed foods (cakes, cookies, baked goods, flavorings, ice cream, yogurt, canned vegetables, frozen vegetables and meals) and beverages such as fruit juices and alcoholic beverages (brandy, fruit beer, wine). Spices like vanilla, black pepper, paprika, and allspice are derived from berries. Olive fruit is pressed for olive oil and similar processing is applied to other oil-bearing fruits and vegetables. Some fruits are available all year round, while others (such as blackberries and apricots in the UK) are subject to seasonal availability.
Fruits are also used for socializing and gift-giving in the form of fruit baskets and fruit bouquets.
Typically, many botanical fruits – "vegetables" in culinary parlance – (including tomato, green beans, leaf greens, bell pepper, cucumber, eggplant, okra, pumpkin, squash, zucchini) are bought and sold daily in fresh produce markets and greengroceries and carried back to kitchens, at home or restaurant, for preparation of meals.
Storage
All fruits benefit from proper post-harvest care, and in many fruits, the plant hormone ethylene causes ripening. Therefore, maintaining most fruits in an efficient cold chain is optimal for post-harvest storage, with the aim of extending and ensuring shelf life.
Nutritional value
A meta-analysis of 83 studies showed fruit or vegetable consumption is associated with reduced markers of inflammation (reduced tumor necrosis factor and C-reactive protein) and enhanced immune cell profile (increased gamma delta T cells).
Various culinary fruits provide significant amounts of fiber and water, and many are generally high in vitamin C. An overview of numerous studies showed that fruits (e.g., whole apples or whole oranges) are satisfying (filling) by simply eating and chewing them.
The dietary fiber consumed in eating fruit promotes satiety, and may help to control body weight and aid reduction of blood cholesterol, a risk factor for cardiovascular diseases. Fruit consumption is under preliminary research for the potential to improve nutrition and affect chronic diseases. Regular consumption of fruit is generally associated with reduced risks of several diseases and functional declines associated with aging.
Food safety
For food safety, the CDC recommends proper fruit handling and preparation to reduce the risk of food contamination and foodborne illness. Fresh fruits and vegetables should be carefully selected; at the store, they should not be damaged or bruised; and precut pieces should be refrigerated or surrounded by ice.
All fruits and vegetables should be rinsed before eating. This recommendation also applies to produce with rinds or skins that are not eaten. It should be done just before preparing or eating to avoid premature spoilage.
Fruits and vegetables should be kept separate from raw foods like meat, poultry, and seafood, as well as from utensils that have come in contact with raw foods. Fruits and vegetables that are not going to be cooked should be thrown away if they have touched raw meat, poultry, seafood, or eggs.
All cut, peeled, or cooked fruits and vegetables should be refrigerated within two hours. After a certain time, harmful bacteria may grow on them and increase the risk of foodborne illness.
Allergies
Fruit allergies make up about 10 percent of all food-related allergies.
Nonfood uses
Because fruits have been such a major part of the human diet, various cultures have developed many different uses for fruits they do not depend on for food. For example:
Bayberry fruits provide a wax often used to make candles;
Many dry fruits are used as decorations or in dried flower arrangements (e.g., annual honesty, cotoneaster, lotus, milkweed, unicorn plant, and wheat). Ornamental trees and shrubs are often cultivated for their colorful fruits, including beautyberry, cotoneaster, holly, pyracantha, skimmia, and viburnum.
Fruits of opium poppy are the source of opium, which contains the drugs codeine and morphine, as well as the biologically inactive chemical thebaine, from which the drug oxycodone is synthesized.
Osage orange fruits are used to repel cockroaches.
Many fruits provide natural dyes (e.g., cherry, mulberry, sumac, and walnut).
Dried gourds are used as bird houses, cups, decorations, dishes, musical instruments, and water jugs.
Pumpkins are carved into Jack-o'-lanterns for Halloween.
The fibrous core of the mature and dry Luffa fruit is used as a sponge.
The spiny fruit of burdock or cocklebur inspired the invention of Velcro.
Coir fiber from coconut shells is used for brushes, doormats, floor tiles, insulation, mattresses, sacking, and as a growing medium for container plants. The shell of the coconut fruit is used to make bird houses, bowls, cups, musical instruments, and souvenir heads.
The hard and colorful grain fruits of Job's tears are used as decorative beads for jewelry, garments, and ritual objects.
Fruit is often a subject of still life paintings.
| Biology and health sciences | Food and drink | null |
1023378 | https://en.wikipedia.org/wiki/Albite | Albite | Albite is a plagioclase feldspar mineral. It is the sodium endmember of the plagioclase solid solution series. It represents a plagioclase with less than 10% anorthite content. The pure albite endmember has the formula . It is a tectosilicate. Its color is usually pure white, hence its name from Latin, . It is a common constituent in felsic rocks.
Properties
Albite crystallizes with triclinic pinacoidal forms. Its specific gravity is about 2.62 and it has a Mohs hardness of 6 to 6.5. Albite almost always exhibits crystal twinning, often as minute parallel striations on the crystal face. Albite often occurs as fine parallel segregations alternating with pink microcline in perthite, as a result of exsolution on cooling.
There are two variants of albite, which are referred to as 'low albite' and 'high albite'; the latter is also known as 'analbite'. Although both variants are triclinic, they differ in the volume of their unit cell, which is slightly larger for the 'high' form. The 'high' form can be produced from the 'low' form by heating above about 750 °C. High albite can be found in meteor impact craters such as in Winslow, Arizona. Upon further heating to more than about 980 °C, the crystal symmetry changes from triclinic to monoclinic; this variant is also known as 'monalbite'. Albite melts at about 1100–1120 °C.
Potassium can often substitute for sodium in albite, in amounts of up to 10%. When this is exceeded, the mineral is considered to be anorthoclase.
Occurrence
It occurs in granitic and pegmatite masses (often as the variety cleavelandite), in some hydrothermal vein deposits, and forms part of the typical greenschist metamorphic facies for rocks of originally basaltic composition. Minerals commonly associated with albite include biotite, hornblende, orthoclase, muscovite and quartz.
Discovery
Albite was first reported in 1815 for an occurrence in Finnbo, Falun, Dalarna, Sweden.
Use
Albite is used as a gemstone, albeit a semiprecious one. It is also important to geologists as a major rock-forming mineral. There is some industrial use for the mineral, such as in the manufacture of glass and ceramics.
One of the iridescent varieties of albite, discovered in 1925 near the White Sea coast by academician Alexander Fersman, became widely known under the trade name belomorite.
| Physical sciences | Silicate minerals | Earth science |
1023388 | https://en.wikipedia.org/wiki/Eridanus%20%28constellation%29 | Eridanus (constellation) | Eridanus is a constellation which stretches along the southern celestial hemisphere. It is represented as a river. One of the 48 constellations listed by the 2nd century AD astronomer Ptolemy, it remains one of the 88 modern constellations. It is the sixth largest of the modern constellations. The same name was later taken as a Latin name for the real Po River and also for the name of a minor river in Athens.
Features
Stars
At its southern end is the magnitude 0.5 star Achernar, designated Alpha Eridani. It is a blue-white hued main sequence star 144 light-years from Earth, whose traditional name means "the river's end". Achernar is a very peculiar star because it is one of the flattest stars known. Observations indicate that its radius is about 50% larger at the equator than at the poles. This distortion occurs because the star is spinning extremely rapidly.
There are several other noteworthy stars in Eridanus, including some double stars. Beta Eridani, traditionally called Cursa, is a blue-white star of magnitude 2.8, 89 light-years from Earth. Its place to the south of Orion's foot gives it its name, which means "the footstool". Theta Eridani, called Acamar, is a binary star with blue-white components, distinguishable in small amateur telescopes and 161 light-years from Earth. The primary is of magnitude 3.2 and the secondary is of magnitude 4.3. 32 Eridani is a binary star 290 light-years from Earth. The primary is a yellow-hued star of magnitude 4.8 and the secondary is a blue-green star of magnitude 6.1. 32 Eridani is visible in small amateur telescopes. 39 Eridani is a binary star also divisible in small amateur telescopes, 206 light-years from Earth. The primary is an orange-hued giant star of magnitude 4.9 and the secondary is of magnitude 8. 40 Eridani is a triple star system consisting of an orange main-sequence star, a white dwarf, and a red dwarf. The orange main-sequence star is the primary of magnitude 4.4, and the white secondary of magnitude 9.5 is the most easily visible white dwarf. The red dwarf, of magnitude 11, orbits the white dwarf every 250 years. The 40 Eridani system is 16 light-years from Earth. p Eridani is a binary star with two orange components, 27 light-years from Earth. The magnitude 5.8 primary and 5.9 secondary have an orbital period of 500 years.
Epsilon Eridani (the proper name is Ran) is a star with one extrasolar planet similar to Jupiter. It is an orange-hued main-sequence star of magnitude 3.7, 10.5 light-years from Earth. Its one planet, with an approximate mass of one Jupiter mass, has a period of 7 years.
Supervoid
The Eridanus Supervoid is a large supervoid (an area of the universe devoid of galaxies) whose discovery was reported in 2007. At a diameter of about one billion light years, it is the second largest known void, superseded only by the Giant Void in Canes Venatici. It was discovered by linking a "cold spot" in the cosmic microwave background to an absence of radio galaxies in data of the United States National Radio Astronomy Observatory's Very Large Array Sky Survey. There is some speculation that the void may be due to quantum entanglement between our universe and another.
Deep-sky objects
NGC 1535 is a small blue-gray planetary nebula visible in small amateur telescopes, with a disk visible in large amateur instruments. 2000 light-years away, it is of the 9th magnitude.
A portion of the Orion Molecular Cloud Complex can be found in the far northeastern section of Eridanus. IC 2118 is a faint reflection nebula believed to be an ancient supernova remnant or gas cloud illuminated by nearby supergiant star Rigel in Orion.
Eridanus contains the galaxies NGC 1232, NGC 1234, NGC 1291 and NGC 1300, a grand design barred spiral galaxy.
NGC 1300 is a face-on barred spiral galaxy located 61 (plus or minus 8) million light-years away. The center of the bar shows an unusual structure: within the overall spiral structure, a grand design spiral that is 3,300 light-years in diameter exists. Its spiral arms are tightly wound.
Meteor showers
The Nu Eridanids, a recently discovered meteor shower, radiate from the constellation between August 30 and September 12 every year; the shower's parent body is an unidentified Oort cloud object. Another meteor shower in Eridanus is the Omicron Eridanids, which peak between November 1 and 10.
Visualizations
Eridanus is depicted in ancient sky charts as a flowing river, starting from Orion and meandering past Cetus and Fornax into the southern hemispheric stars. Johann Bayer's Uranometria likewise depicts the constellation as a flowing river.
History and mythology
According to one theory, the Greek constellation takes its name from the Babylonian constellation known as the Star of Eridu (MUL.NUN.KI). Eridu was an ancient city in the extreme south of Babylonia; situated in the marshy regions it was held sacred to the god Enki-Ea who ruled the cosmic domain of the Abyss - a mythical conception of the fresh-water reservoir below the Earth's surface.
Eridanus is connected to the myth of Phaethon, who took over the reins of his father Helios' sky chariot (i.e., the Sun), but didn't have the strength to control it and so veered wildly in different directions, scorching both Earth and heaven. Zeus intervened by striking Phaethon dead with a thunderbolt and casting him to Earth. The constellation was supposed to be the path Phaethon drove along; in later times, it was considered a path of souls. Since Eridanos was also a Greek name for the Po (Latin Padus), in which the burning body of Phaethon is said by Ovid to have been extinguished, the mythic geography of the celestial and earthly Eridanus is complex.
Another association with Eridanus is a series of rivers all around the world. First conflated with the Nile River in Egypt, the constellation was also identified with the Po River in Italy. The stars of the modern constellation Fornax were formerly a part of Eridanus.
Equivalents
The stars that correspond to Eridanus are also depicted as a river in Indian astronomy starting close to the head of Orion just below Auriga. Eridanus is called Srotaswini in Sanskrit, srótas meaning the course of a river or stream. Specifically, it is depicted as the Ganges on the head of Dakshinamoorthy or Nataraja, a Hindu incarnation of Shiva. Dakshinamoorthy himself is represented by the constellation Orion.
The stars that correspond to Eridanus cannot be fully seen from China. In Chinese astronomy, the northern part is located within the White Tiger of the West (西方白虎, Xī Fāng Bái Hǔ). The unseen southern part was classified among the Southern Asterisms (近南極星區, Jìnnánjíxīngōu) by Xu Guangqi, based on knowledge of western star charts.
Namesakes
USS Eridanus (AK-92) was a United States Navy Crater-class cargo ship named after the constellation.
was a French cargo liner named after the constellation.
| Physical sciences | Other | Astronomy |
1023548 | https://en.wikipedia.org/wiki/Diopside | Diopside | Diopside is a monoclinic pyroxene mineral with composition . It forms complete solid solution series with hedenbergite () and augite, and partial solid solutions with orthopyroxene and pigeonite. It forms variably colored, but typically dull green crystals in the monoclinic prismatic class. It has two distinct prismatic cleavages at 87 and 93° typical of the pyroxene series. It has a Mohs hardness of six, a Vickers hardness of 7.7 GPa at a load of 0.98 N, and a specific gravity of 3.25 to 3.55. It is transparent to translucent with indices of refraction of nα=1.663–1.699, nβ=1.671–1.705, and nγ=1.693–1.728. The optic angle is 58° to 63°.
Formation
Diopside is found in ultramafic (kimberlite and peridotite) igneous rocks, and diopside-rich augite is common in mafic rocks, such as olivine basalt and andesite. Diopside is also found in a variety of metamorphic rocks, such as in contact metamorphosed skarns developed from high silica dolomites. It is an important mineral in the Earth's mantle and is common in peridotite xenoliths erupted in kimberlite and alkali basalt.
Mineralogy and occurrence
Diopside is a precursor of chrysotile (white asbestos) by hydrothermal alteration and magmatic differentiation; it can react with hydrous solutions of magnesium and chlorine to yield chrysotile by heating at 600 °C for three days. Some vermiculite deposits, most notably those in Libby, Montana, are contaminated with chrysotile (as well as other forms of asbestos) that formed from diopside.
At relatively high temperatures, there is a miscibility gap between diopside and pigeonite, and at lower temperatures, between diopside and orthopyroxene. The calcium/(calcium+magnesium+iron) ratio in diopside that formed with one of these other two pyroxenes is particularly sensitive to temperature above 900 °C, and compositions of diopside in peridotite xenoliths have been important in reconstructions of temperatures in the Earth's mantle.
Chrome diopside is a common constituent of peridotite xenoliths, and dispersed grains are found near kimberlite pipes, and as such are a prospecting indicator for diamonds. Occurrences are reported in Canada, South Africa, Russia, Brazil, and a wide variety of other locations. In the US, chromian diopside localities are described in the serpentinite belt in northern California, in kimberlite in the Colorado-Wyoming State Line district, in kimberlite in the Iron Mountain district, Wyoming, in lamprophyre at Cedar Mountain in Wyoming, and in numerous anthills and outcrops of the Tertiary Bishop Conglomerate in the Green River Basin of Wyoming. Much of the chromian diopside from the Green River Basin localities and several of the State Line kimberlites has been of gem quality.
As a gem
Gemstone-quality diopside is found in two forms: black star diopside and chrome diopside (which includes chromium, giving it a rich green color). At 5.5–6.5 on the Mohs scale, chrome diopside is relatively easy to scratch. Due to the deep green color of the gem, it is sometimes referred to as a Siberian emerald, although the two are gemologically unrelated, emerald being a precious stone and diopside a semi-precious stone.
Green diopside crystals included within a white feldspar matrix are also sold as gemstones, usually as beads or cabochons. This stone is often marketed as 'green spot jasper' or 'green spot stone'.
Violane is a manganese-rich variety of diopside, violet to light blue in color.
Etymology and history
Diopside derives its name from the Greek dis, "twice", and òpsè, "face" in reference to the two ways of orienting the vertical prism.
Diopside was discovered and first described about 1800, by Brazilian naturalist Jose Bonifacio de Andrada e Silva.
Potential uses
Diopside-based ceramics and glass-ceramics have potential applications in various technological areas. A diopside-based glass-ceramic named 'silceram' was produced by scientists from Imperial College, UK, during the 1980s from blast furnace slag and other waste products; the resulting glass-ceramic is a potential structural material. Similarly, diopside-based ceramics and glass-ceramics have potential applications in the fields of biomaterials, nuclear waste immobilization, and sealing materials in solid oxide fuel cells.
| Physical sciences | Silicate minerals | Earth science |
1024033 | https://en.wikipedia.org/wiki/Potassium%20perchlorate | Potassium perchlorate | Potassium perchlorate is the inorganic salt with the chemical formula KClO4. Like other perchlorates, this salt is a strong oxidizer when the solid is heated at high temperature, although it usually reacts very slowly in solution with reducing agents or organic substances. This colorless crystalline solid is a common oxidizer used in fireworks, ammunition percussion caps, explosive primers, and is used variously in propellants, flash compositions, stars, and sparklers. It has been used as a solid rocket propellant, although in that application it has mostly been replaced by the higher-performing ammonium perchlorate.
KClO4 has a relatively low solubility in water (1.5 g in 100 mL of water at 25 °C).
Production
Potassium perchlorate is prepared industrially by treating an aqueous solution of sodium perchlorate with potassium chloride. This single precipitation reaction exploits the low solubility of KClO4, which is about 1/100 as much as the solubility of NaClO4 (209.6 g/100 mL at 25 °C).
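As a rough check (values from the text; the "about 1/100" is an order-of-magnitude statement), the solubility ratio driving the precipitation can be computed directly:

# Solubility ratio behind the precipitation route (values from the text).
kclo4, naclo4 = 1.5, 209.6   # g per 100 mL of water at 25 C
print(kclo4 / naclo4)        # ~0.0072, i.e. roughly 1/140 -- on the
                             # order of the "about 1/100" stated above.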
It can also be produced by bubbling chlorine gas through a solution of potassium chlorate and potassium hydroxide, and by the reaction of perchloric acid with potassium hydroxide; however, this is not used widely due to the dangers of perchloric acid.
Another preparation involves the electrolysis of a potassium chlorate solution, causing KClO4 to form and precipitate at the anode. This procedure is complicated by the low solubility of both potassium chlorate and potassium perchlorate, the latter of which may precipitate onto the electrodes and impede the current.
Oxidizing properties
KClO4 is an oxidizer in the sense that it exothermically "transfers oxygen" to combustible materials, greatly increasing their rate of combustion relative to that in air. Thus, it reacts with glucose to give carbon dioxide, water molecules and potassium chloride:
3 KClO4 + C6H12O6 → 6 CO2 + 6 H2O + 3 KCl
The conversion of solid glucose into hot gaseous products is the basis of the explosive force of this and other such mixtures. With sugar, KClO4 yields a low explosive, provided the necessary confinement. Otherwise, such mixtures simply deflagrate with an intense purple flame characteristic of potassium. Flash compositions used in firecrackers usually consist of a mixture of aluminium powder and potassium perchlorate. This mixture, sometimes called flash powder, is also used in ground and air fireworks.
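The equation above balances, as a quick atom count confirms; the following is a toy verification in Python, not a chemistry library:

# Atom-count check of: 3 KClO4 + C6H12O6 -> 6 CO2 + 6 H2O + 3 KCl
reactants = {"K": 3, "Cl": 3, "O": 3 * 4 + 6, "C": 6, "H": 12}
products  = {"K": 3, "Cl": 3, "O": 6 * 2 + 6, "C": 6, "H": 12}
assert reactants == products   # every element balances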
As an oxidizer, potassium perchlorate can be used safely in the presence of sulfur, whereas potassium chlorate cannot. The greater reactivity of chlorate is typical – perchlorates are kinetically poorer oxidants. Chlorate produces chloric acid (HClO3), which is highly unstable and can lead to premature ignition of the composition. Correspondingly, perchloric acid (HClO4) is quite stable.
For commercial use, potassium perchlorate is mixed 50/50 with potassium nitrate to fabricate Pyrodex, a black powder substitute. When not compressed within a muzzle-loading firearm or in a cartridge, it burns at a sufficiently slow rate that it is not categorized with black powder as a "low explosive", being classed instead merely as a "flammable" material.
Debated medical use
Potassium perchlorate can be used as an antithyroid agent to treat hyperthyroidism, usually in combination with one other medication. This application exploits the similar ionic radius and hydrophilicity of perchlorate and iodide.
The administration of known goitrogen substances can also be used preventively to reduce the biological uptake of iodine (whether it is the nutritional non-radioactive iodine-127 or radioactive iodine, most commonly iodine-131 (half-life = 8.02 days), as the body cannot discern between different iodine isotopes). Perchlorate ions, a common water contaminant in the USA due to the aerospace industry, have been shown to reduce iodine uptake and thus are classified as goitrogens. Perchlorate ion is a competitive inhibitor of the process by which iodide is actively accumulated into the thyroid follicular cells. Studies involving healthy adult volunteers determined that at levels above 7 micrograms per kilogram per day (μg/(kg·d)), perchlorate begins to temporarily inhibit the thyroid gland's ability to absorb iodine from the bloodstream ("iodide uptake inhibition"; thus perchlorate is a known goitrogen). The reduction of the iodide pool by perchlorate has a dual effect – reduction of excess hormone synthesis and hyperthyroidism, on the one hand, and reduction of thyroid-inhibitor synthesis and hypothyroidism on the other. Perchlorate remains very useful as a single-dose application in tests measuring the discharge of radioiodide accumulated in the thyroid as a result of many different disruptions in the further metabolism of iodide in the thyroid gland.
Treatment of thyrotoxicosis (including Graves' disease) with 600-2,000 mg potassium perchlorate (430-1,400 mg perchlorate) daily for periods of several months, or longer, was once a common practice, particularly in Europe, and perchlorate use at lower doses to treat thyroid problems continues to this day. Although 400 mg of potassium perchlorate divided into four or five daily doses was used initially and found effective, higher doses were introduced when 400 mg/d was discovered not to control thyrotoxicosis in all subjects.
Current regimens for treatment of thyrotoxicosis (including Graves' disease), when a patient is exposed to additional sources of iodine, commonly include 500 mg potassium perchlorate twice per day for 18–40 days.
Prophylaxis with perchlorate-containing water at concentrations of 17 ppm, corresponding to 0.5 mg/(kg·d) intake for a person of 70 kg consuming 2 litres of water per day, was found to reduce the baseline radioiodine uptake by 67%. This is equivalent to ingesting a total of just 35 mg of perchlorate ions per day. In another related study, where subjects drank just 1 litre of perchlorate-containing water per day at a concentration of 10 ppm, i.e. 10 mg of perchlorate ions ingested daily, an average 38% reduction in the uptake of iodine was observed.
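The dose figures above follow from simple arithmetic; a short Python check using the stated numbers (70 kg body mass, 2 litres per day):

# Arithmetic behind the dosage figures quoted above.
conc_mg_per_l = 17        # 17 ppm ~= 17 mg of perchlorate per litre
litres_per_day = 2
body_mass_kg = 70

daily_mg = conc_mg_per_l * litres_per_day    # 34 mg/day (~ the 35 mg stated)
print(daily_mg, daily_mg / body_mass_kg)     # ~0.49 mg/(kg*d), i.e. ~0.5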
However, when the average perchlorate absorption in perchlorate plant workers subjected to the highest exposure is estimated at approximately 0.5 mg/(kg·d), as in the above paragraph, a 67% reduction of iodine uptake would be expected. Studies of chronically exposed workers, though, have thus far failed to detect any abnormalities of thyroid function, including the uptake of iodine. This may well be attributable to sufficient daily exposure to, or intake of, stable iodine-127 among these workers, and to the short 8-hour biological half-life of perchlorate in the body.
Completely blocking the uptake of iodine-131 (half-life = 8.02 days) by the purposeful addition of perchlorate ions to a public water supply would therefore require far more than dosages of 0.5 mg/(kg·d), or a water concentration of 17 ppm. Perchlorate ion concentrations in a region's water supply would need to be much higher – at least 7.15 mg/kg of body weight per day, or a water concentration of 250 ppm, assuming people drink 2 litres of water per day – to be truly beneficial to the population in preventing bioaccumulation when exposed to iodine-131 contamination, independent of the availability of iodate or iodide compounds.
The distribution of perchlorate tablets, or the addition of perchlorate to the water supply, would need to continue for 80–90 days (~10 half-life of 8.02 days) after the release of iodine-131. After this time, the radioactive iodine-131 would have decayed to less than 1/1000 of its initial activity at which time the danger from the biological uptake of iodine-131 is essentially over.
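The 1/1000 figure is half-life arithmetic: after n half-lives, the remaining activity is 2^(-n), and 80–90 days is roughly 10–11 half-lives of iodine-131. A short check:

# Half-life arithmetic behind the ~1/1000 figure above.
half_life_days = 8.02
for days in (80, 90):
    n = days / half_life_days           # number of elapsed half-lives
    remaining = 0.5 ** n                # fraction of initial activity left
    print(days, round(1 / remaining))   # ~1000 after 80 days; ~2400 after 90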
Limitations and criticisms
Thus, perchlorate administration could represent a possible alternative to iodide tablet distribution in the case of a large-scale nuclear accident releasing large quantities of iodine-131 into the atmosphere. However, the advantages are not always clear and would depend on the extent of a hypothetical nuclear accident. As with the intake of stable iodide to rapidly saturate the thyroid gland before it accumulates radioactive iodine-131, a careful cost-benefit analysis must first be done by the nuclear safety authorities. Indeed, blocking the thyroid activity of a whole population for three months can also have negative consequences for human health, especially for young children.
The decision to administer perchlorate, or stable iodine, therefore cannot be left to individual initiative and falls under the authority of the government in the case of a major nuclear accident.
Injecting perchlorate or iodide directly into the public drinking water supply is also probably as restrictive as distributing tablets.
| Physical sciences | Halide oxyanions | Chemistry |
1024947 | https://en.wikipedia.org/wiki/J/psi%20meson | J/psi meson | The J/ψ (J/psi) meson is a subatomic particle, a flavor-neutral meson consisting of a charm quark and a charm antiquark. Mesons formed by a bound state of a charm quark and a charm anti-quark are generally known as "charmonium" or psions. The J/ψ is the most common form of charmonium, due to its spin of 1 and its low rest mass. The J/ψ has a rest mass of 3.0969 GeV/c², just above that of the η_c (2.9839 GeV/c²), and a mean lifetime of 7.2×10⁻²¹ s. This lifetime was about a thousand times longer than expected.
Its discovery was made independently by two research groups, one at the Stanford Linear Accelerator Center, headed by Burton Richter, and one at the Brookhaven National Laboratory, headed by Samuel Ting of MIT. They discovered that they had found the same particle, and both announced their discoveries on 11 November 1974. The importance of this discovery is highlighted by the fact that the subsequent, rapid changes in high-energy physics at the time have become collectively known as the "November Revolution". Richter and Ting were awarded the 1976 Nobel Prize in Physics.
Background to discovery
The background to the discovery of the was both theoretical and experimental. In the 1960s, the first quark models of elementary particle physics were proposed, which said that protons, neutrons, and all other baryons, and also all mesons, are made from fractionally charged particles, the "quarks", originally with three types or "flavors", called up, down, and strange. (Later the model was expanded to six quarks, adding the charm, top and bottom quarks.) Despite the ability of quark models to bring order to the "elementary particle zoo", they were considered something like mathematical fiction at the time, a simple artifact of deeper physical reasons.
Starting in 1969, deep inelastic scattering experiments at SLAC revealed surprising experimental evidence for particles inside of protons. Whether these were quarks or something else was not known at first. Many experiments were needed to fully identify the properties of the sub-protonic components. To a first approximation, they indeed were a match for the previously described quarks.
On the theoretical front, gauge theories with broken symmetry became the first fully viable contenders for explaining the weak interaction after Gerardus 't Hooft discovered in 1971 how to calculate with them beyond tree level. The first experimental evidence for these electroweak unification theories was the discovery of the weak neutral current in 1973. Gauge theories with quarks became a viable contender for the strong interaction in 1973, when the concept of asymptotic freedom was identified.
However, a naive mixture of electroweak theory and the quark model led to calculations about known decay modes that contradicted observation: In particular, it predicted Z boson-mediated flavor-changing decays of a strange quark into a down quark, which were not observed. A 1970 idea of Sheldon Glashow, John Iliopoulos, and Luciano Maiani, known as the GIM mechanism, showed that the flavor-changing decays would be strongly suppressed if there were a fourth quark (now called the charm quark) that was a complementary counterpart to the strange quark. By summer 1974 this work had led to theoretical predictions of what a charm + anticharm meson would be like.
The group at Brookhaven were the first to discern a peak at 3.1 GeV in plots of production rates; Ting named the particle the "J meson". Richter's group at SLAC, in its simultaneous discovery, named it the ψ meson.
Decay modes
Hadronic decay modes of the J/ψ are strongly suppressed because of the OZI rule. This effect strongly increases the lifetime of the particle and thereby gives it its very narrow decay width of just about 93 keV. Because of this strong suppression, electromagnetic decays begin to compete with hadronic decays. This is why the J/ψ has a significant branching fraction to leptons.
The primary decay modes are hadronic decays, proceeding via gluons or a virtual photon, and electromagnetic decays to lepton pairs (e⁺e⁻ and μ⁺μ⁻).
J/ψ melting
In a hot QCD medium, when the temperature is raised well beyond the Hagedorn temperature, the J/ψ and its excitations are expected to melt. This is one of the predicted signals of the formation of the quark–gluon plasma. Heavy-ion experiments at CERN's Super Proton Synchrotron and at BNL's Relativistic Heavy Ion Collider have studied this phenomenon without a conclusive outcome as of 2009. This is due to the requirement that the disappearance of J/ψ mesons is evaluated with respect to the baseline provided by the total production of all charm quark-containing subatomic particles, and because it is widely expected that some J/ψ are produced and/or destroyed at the time of QGP hadronization. Thus, there is uncertainty in the prevailing conditions at the initial collisions.
In fact, instead of suppression, enhanced production of J/ψ is expected in heavy-ion experiments at the LHC, where the quark-recombination production mechanism should be dominant given the large abundance of charm quarks in the QGP. Aside from the J/ψ, charmed B mesons (B_c) offer a signature indicating that quarks move freely and bind at will when combining to form hadrons.
Name
Because of the nearly simultaneous discovery, the J/ψ is the only particle to have a two-letter name. Richter named it "SP", after the SPEAR accelerator used at SLAC; however, none of his coworkers liked that name. After consulting with Greek-born Leo Resvanis to see which Greek letters were still available, and rejecting "iota" because its name implies insignificance, Richter chose "psi", a name which, as Gerson Goldhaber pointed out, contains the original name "SP" in reverse order. Coincidentally, later spark chamber pictures often resembled the psi shape. Ting assigned the name "J" to it, saying that the more stable particles, such as the W and Z bosons, had Roman names, as opposed to classical particles, which had Greek names. He also cited the symbol for the electromagnetic current, on which much of their previous work had concentrated, as one of the reasons.
Much of the scientific community considered it unjust to give one of the two discoverers priority, so most subsequent publications have referred to the particle as the "J/ψ".
The first excited state of the J/ψ was called the ψ′; it is now called the ψ(2S), indicating its quantum state. The next excited state was called the ψ″; it is now called the ψ(3770), indicating its mass in MeV/c². Other vector charm–anticharm states are denoted similarly with ψ and the quantum state (if known) or the mass. The "J" is not used, since Richter's group alone first found excited states.
The name charmonium is used for the J/ψ and other charm–anticharm bound states. This is by analogy with positronium, which also consists of a particle and its antiparticle (an electron and a positron in the case of positronium).
| Physical sciences | Bosons | Physics |
1025551 | https://en.wikipedia.org/wiki/Formica%20rufa | Formica rufa | Formica rufa, also known as the red wood ant, southern wood ant, or horse ant, is a boreal member of the Formica rufa group of ants, and is the type species for that group, having already been described by Linnaeus. It is native to Eurasia, with a recorded distribution stretching from central Scandinavia to northern Iberia and Anatolia, and from Great Britain to Lake Baikal, with unconfirmed reports of it also from the Russian Far East. There are claims that it can be found in North America, but this is not confirmed in the specialised literature: no recent publication listing North American wood ants mentions it as present, and records from North America are all listed as dubious or unconfirmed in a record compilation. The workers' heads and thoraces are colored red and the abdomen brownish-black, usually with dark patches on the head and promesonotum, although some individuals may be more uniformly reddish and even have some red on the part of the gaster facing the body. To separate them from closely related species, specimens need to be inspected under magnification, where differences in hairiness are among the telling characteristics, with Formica rufa being hairier than, for example, Formica polyctena but less hairy than Formica lugubris. Workers are polymorphic, measuring 4.5–9 mm in length. They have large mandibles, and like many other ant species, they are able to spray formic acid from their abdomens as a defence. Formic acid was first extracted in 1671 by the English naturalist John Ray by distilling a large number of crushed ants of this species. Adult wood ants primarily feed on honeydew from aphids. Some groups form large networks of connected nests with multiple queen colonies, while others have single-queen colonies.
Description
Nests of these ants are large, conspicuous, dome-shaped mounds of grass, twigs, or conifer needles, often built against a rotting stump, usually situated in woodland clearings where the sun's rays can reach them. Large colonies may have 100,000 to 400,000 workers and 100 queens. F. rufa is highly polygynous and often readopts postnuptial queens from its own mother colony, leading to old, multigallery nests that may contain well over 100 egg-producing females. These colonies may measure several metres in height and diameter. F. rufa is aggressively territorial, and often attacks and removes other ant species from the area. Nuptial flights take place during the springtime and are often marked by savage battles between neighbouring colonies as territorial boundaries are re-established. New nests are established by budding from existing nests in the spring, or by the mechanism of temporary social parasitism, the hosts being species of the F. fusca group, notably F. fusca and F. lemani, although incipient F. rufa colonies have also been recorded from nests of F. glebaria and F. cunicularia. An F. rufa queen ousts the nest's existing queen, lays eggs, and the existing workers care for her offspring until the nest is taken over.
Diet
These ants' primary diet is aphid honeydew, but they also prey on invertebrates such as insects and arachnids; they are voracious scavengers. Foraging trails may extend 100 m. Larger workers have been observed to forage farther away from the nest. F. rufa commonly is used in forestry and often is introduced into an area as a form of pest management.
Behavior
Nursing
Worker ants in F. rufa have been observed to practice parental care or perform cocoon nursing. A worker ant goes through a sensitive phase, during which it becomes accustomed to a chemical stimulus emitted by the cocoon; this sensitive phase occurs at an early and specific period. An experiment was conducted by Moli et al. to test how worker ants react to two types of cocoon: homospecific and heterospecific. If a worker ant is brought up in the absence of cocoons, it shows neither recognition nor nursing behaviour: both types of cocoon are opened up by the workers and devoured for nutrients. When accustomed only to homospecific cocoons, the workers collect both types, but place and protect only the homospecific cocoons; the heterospecific cocoons are neglected, abandoned in the nest, and eaten. Lastly, if heterospecific cocoons are injected with extract from homospecific cocoons, the workers tend to both types equally. This demonstrates that a chemical stimulus from the cocoons is of paramount importance in prompting adoption behaviour in worker ants, although the specific chemical stimulus has not been identified.
Foraging behaviour
The foraging behaviour of wood ants changes according to the environment. Wood ants have been shown to tend and harvest aphids, and to prey on, and compete with, other predators for food resources. They tend to prey on the most plentiful members of the community, whether these are in the canopies of trees or in the forest foliage. Wood ants seem to favour prey that lives in local canopies near their nest; however, when food resources dwindle, they seek other trees further from the nests and explore more trees instead of exploring the forest floor more thoroughly. This makes foraging for food significantly less efficient, but the rest of the nest does not help the foraging ants.
Kin behaviour
Wood ants have shown aggressive behaviour toward their own species in certain situations. Intraspecific competition usually occurs early in the spring between workers of competing nests. This aggression may be linked to the protection and maintenance of territory and trails. Observations of skirmishes and trail formation show that the territory surrounding each nest differs between seasons. Permanent foraging trails are reinforced each season, and if an ant from an alien species crosses one, hostile activity occurs. Most likely, these territorial changes reflect foraging patterns that are influenced by seasonal change.
Ants recognize their nestmates through chemical signals. Failure in recognition causes colony integrity to decay. Heavy metals accumulated from the environment alter aggression levels. This could be due to a variety of factors, such as changes in physiology or changes in resource levels. The ants in these territories tend to be less productive and efficient. Increased resource competition would be expected to increase the level of aggression, but this is not the case.
Raiding
Wood ants, particularly those in the genus Formica, perform organised and planned attacks on other ant colonies or insects. These planned attacks are motivated by territory expansion, resource acquisition, and brood capture. Raids are performed at certain times of the year, when resources may need restocking, and during the day when ants are most active. Organised and cooperative raiding strategies are more specific tactics used by Formica polyctena, but raiding is still an integral behaviour of the Formica rufa group. Scouts will investigate neighbouring nests to raid, marking their targets using pheromones. Wood ants are also capable of counterattack and defensive retaliation; strong defensive measures include guarding entrances to tunnels and routinely patrolling the area to watch neighbouring nests. Some wood ant species, such as Formica sanguinea, will raid brood, which is then integrated into their colony as workers. This behaviour enables the colony to bolster its workforce without expending energy on raising its own brood. The captured brood matures and functions within the raiding colony, helping with foraging and nest maintenance tasks.
Raiding has significant evolutionary and ecological implications. This behaviour can establish dominance hierarchies among colonies and influence the structure of ant communities. Raiding contributes to the success of dominant species by providing access to resources that might otherwise be difficult to obtain. This behaviour also reflects the ants’ ability to adapt their foraging strategies to varying environmental conditions. Wood ants can also alter the distribution of resources in the ecosystem by dominating key food sources.
Resin use
Wood ants actively collect resins from coniferous trees and incorporate them into their nests for various uses. Resin adds structural soundness and predator defense to their nests, and provides antimicrobial, antifungal, and pathogen defense when combined with formic acid from the ants' venom gland.
By leveraging the antimicrobial properties of the resin, wood ants help ensure and sustain the health of their colonies. Wood ant nests are vulnerable to rapidly spreading microbial loads because of the dense population and accumulation of organic debris within large, complex structures. Terpenes and phenolic acids found in coniferous tree resins provide antimicrobial defense and inhibit the growth of pathogens within the nests when mixed with the ants' formic acid. Nests that have been fortified by resin have significantly lower microbial diversity than nests without resin. By managing their environment in this way, wood ants protect the health of their colonies, with the direct advantage of reduced pathogen exposure for the queen and developing brood.
Besides antifungal and antimicrobial defense, resin provides valuable structural integrity to the nest and a protective barrier against potential intruders and predators. Wood ant nests are vulnerable to numerous external threats, as they are often large, complex, and above ground. Binding the resin to other organic materials gives the nest a cohesive building material, making it less prone to collapse. Incorporating resin also provides nests with waterproofing and weather resistance, another way to prevent fungal growth. The stickiness, and sometimes toxicity, of the resin helps provide a protective barrier against small arthropods and mites that may attack the nest. Chemically, the resin provides camouflage and deters intruders that may use chemical cues to locate nests.
Colony structure
Polygyny
Polygyny in wood ants (genus Formica) is a colony social structure in which there are multiple reproducing queens. This differs from the more commonly observed monogynous social structure, in which a colony has only one reproducing queen. This behaviour can lead to significant ecological, evolutionary, and colony-level consequences.
Polygyny may have evolved to enhance colony survival in unstable environments, as it allows wood ants to disperse across larger areas by establishing interconnected nests with several queens. This differs from a monogynous colony, in which a single queen's reproductive output limits the colony's growth. In a monogynous colony, a new queen will typically leave its nest by flight to find and establish a new nest away from the old one. In a polygynous colony, the new queen will establish its nest nearby, with worker ants helping to connect and create cooperative, large colonies. Polygyny allows for higher genetic diversity within the colony, making the colony less susceptible to pathogens and infections. These polygynous colonies have a more complex social hierarchy and can be more successful in certain ecological contexts because of the combined reproductive efforts of several queens.
Through polygyny, the wood ant colonies exhibit reduced levels of relatedness between workers, which can have negative and positive implications. A negative implication is that there can be reduced cooperation between the ants within a colony. However, this reduced level of cooperation is mitigated by the sheer scale of resources available to polygynous colonies. Besides higher genetic diversity, a positive implication is that the colony has faster growth in numbers due to multiple queens producing broods. With higher numbers, there are more ants to collect resources and carry out raids, but this also has drawbacks. Larger colonies put a lot of structural pressure on the above-ground nest that most wood ants have.
Nest splitting
Wood ants typically have multiple nests so they may relocate in case of drastic changes in the environment. This splitting of nests creates multiple daughter nests. Wood ants may move for several reasons, such as a change in the availability of food resources, attack by another colony, or a change in the state of the nest itself. During this time, workers, queens, and the brood are transferred between the original nest and the daughter nest in both directions: the goal is to move to the daughter nest, but the transporting ants may bring an individual back to the original nest. The splitting process may last from a week to over a month.
Population
The turnover rate of wood ant nests is high. Within a period of three years, Klimetzek counted 248 nests within a 1,640-hectare study area. No evidence of a correlation between nest age and mortality was found. Smaller nests had a lower life expectancy than larger nests, and nest size increased as the nest aged.
Bee paralysis virus
In 2008, the chronic bee paralysis virus was reported for the first time in this and another species of ants, Camponotus vagus. CBPV affects bees, ants, and mites.
| Biology and health sciences | Hymenoptera | Animals |
1025650 | https://en.wikipedia.org/wiki/Cooloola%20%28insect%29 | Cooloola (insect) | Cooloola is a genus of ensiferan orthopterans known as Cooloola monsters. It is the only genus in the subfamily Cooloolinae and family Cooloolidae of the superfamily Stenopelmatoidea.
Four species are known from this family, all endemic to Queensland, Australia. The name originated from the discovery of the best-known member of the family, the Cooloola monster (Cooloola propator), in the Cooloola National Park.
Little is known about their life histories as they lead an almost entirely subterranean existence, but they are believed to prey on other soil-dwelling invertebrates. Cooloola monsters are unusual in comparison with other members of the primitive superfamily Stenopelmatoidea in that the cooloolids' antennae are considerably shorter than their body lengths.
Classification
While often treated as a family, molecular evidence suggests that cooloolids are in fact aberrant members of the family Anostostomatidae, and the genus Cooloola might not be monophyletic.
Species include:
Cooloola dingo Rentz, 1986 – dingo monster
Cooloola pearsoni Rentz, 1999 – Pearson's monster
Cooloola propator Rentz, 1980 – Cooloola monster
Cooloola ziljan Rentz, 1986 – sugarcane monster
| Biology and health sciences | Orthoptera | Animals |
1025693 | https://en.wikipedia.org/wiki/Microraptor | Microraptor | Microraptor (Greek, μικρός, mīkros: "small"; Latin, raptor: "one who seizes") is a genus of small, four-winged dromaeosaurid dinosaurs. Numerous well-preserved fossil specimens have been recovered from Liaoning, China. They date from the early Cretaceous Jiufotang Formation (Aptian stage), 125 to 120 million years ago. Three species have been named (M. zhaoianus, M. gui, and M. hanqingi), though further study has suggested that all of them represent variation in a single species, which is properly called M. zhaoianus. Cryptovolans, initially described as another four-winged dinosaur, is usually considered to be a synonym of Microraptor.
Like Archaeopteryx, well-preserved fossils of Microraptor provide important evidence about the evolutionary relationship between birds and earlier dinosaurs. Microraptor had long pennaceous feathers that formed aerodynamic surfaces on the arms and tail but also on the legs. This led paleontologist Xu Xing in 2003 to describe the first specimen to preserve this feature as a "four-winged dinosaur" and to speculate that it may have glided using all four limbs for lift. Subsequent studies have suggested that Microraptor was capable of powered flight as well.
Microraptor was among the most abundant non-avialan dinosaurs in its ecosystem, and the genus is represented by more fossils than any other dromaeosaurid, with possibly over 300 fossil specimens represented across various museum collections. One specimen in particular shows evidence of active primary feather moulting, one of the few known pieces of fossil evidence of such behavior among pennaraptoran dinosaurs.
History
Naming controversy
The initial naming of Microraptor was controversial, because of the unusual circumstances of its first description. The first specimen to be described was part of a chimeric specimen—a patchwork of different feathered dinosaur species (Microraptor itself, Yanornis and an as-of-yet undescribed third species) assembled from multiple specimens in China and smuggled to the USA for sale. After the forgery was revealed by Xu Xing of Beijing's Institute of Vertebrate Paleontology and Paleoanthropology, Storrs L. Olson, curator of birds in the National Museum of Natural History of the Smithsonian Institution, published a description of the Microraptor's tail in an obscure journal, giving it the name Archaeoraptor liaoningensis in an attempt to remove the name from the paleornithological record by assigning it to the part least likely to be a bird. However, Xu had discovered the remains of the specimen from which the tail had been taken and published a description of it later that year, giving it the name Microraptor zhaoianus.
Since the two names designate the same individual as the type specimen, Microraptor zhaoianus would have been a junior objective synonym of Archaeoraptor liaoningensis and the latter, if valid, would have had priority under the International Code of Zoological Nomenclature. However, there is some doubt whether Olson in fact succeeded in meeting all the formal requirements for establishing a new taxon. Namely, Olson designated the specimen as a lectotype, before an actual type species was formally erected. A similar situation arose with Tyrannosaurus rex and Manospondylus gigas, in which the former became a nomen protectum and the latter a nomen oblitum due to revisions in the ICZN rules that took place on December 31, 1999. In addition, Xu's name for the type specimen (Microraptor) was subsequently used more frequently than the original name; as such, this and the chimeric nature of the specimen would render the name "Archaeoraptor" a nomen vanum (as it was improperly described) and the junior synonym Microraptor a nomen protectum (as it's been used in more published works than "Archaeoraptor" and was properly described).
Additional specimens
The first specimen referred to Microraptor represented a small individual and included faint feather remnants, but was otherwise not well preserved and lacked a skull.
In 2002 Mark Norell et al. described another specimen, BPM 1 3-13, which they did not name or refer to an existing species. Later that year Stephen Czerkas et al. named the specimen Cryptovolans pauli, and referred two additional specimens (the first to show well-preserved feathers) to this species. The generic name was derived from Greek kryptos, "hidden", and Latin volans, "flying". The specific name, pauli, honors paleontologist Gregory S. Paul, who had long proposed that dromaeosaurids evolved from flying ancestors.
The type specimens of C. pauli were collected from the Jiufotang Formation, dating from the early Albian and now belong to the collection of the Paleontology Museum of Beipiao, in Liaoning, China. They are referred to by the inventory numbers LPM 0200, the holotype; LPM 0201, its counterslab (slab and counterslab together represent the earlier BPM 1 3-13); and the paratype LPM 0159, a smaller skeleton. Both individuals are preserved as articulated compression fossils; they are reasonably complete but partially damaged.
Czerkas et al. (2002) diagnosed the genus on the basis of having primary feathers (which in the authors' opinion made it a bird), a co-ossified sternum, a tail consisting of 28 to 30 vertebrae and a third finger with a short phalanx III-3. Some of the feathers Czerkas described as primary were actually attached to the leg, rather than the arm. This, along with most of the other diagnostic characters, is also present in the genus Microraptor, which was first described earlier than Cryptovolans. However, BPM 1 3-13 has a longer tail, proportionately, than other Microraptor specimens that had been described by 2002, which have 24 to 26 tail vertebrae.
Subsequent studies (and more specimens of Microraptor) have shown that the features used to distinguish Cryptovolans are not unique, but are present to varying degrees across various specimens. In a review by Phil Senter and colleagues in 2004, the scientists suggested that all these features represented individual variation across various age groups of a single Microraptor species, making the name Cryptovolans pauli and Microraptor gui junior synonyms of Microraptor zhaoianus. Many other researchers, including Alan Feduccia and Tom Holtz, have since supported its synonymy. However, M. gui has been accepted as a distinct species by some authors, with the specimen reported in 2013 being distinguishable from the type specimen of M. zhaoianus.
A new specimen of Microraptor, BMNHC PH881, showed several features previously unknown in the animal, including the probably glossy-black iridescent plumage coloration. The new specimen also featured a bifurcated tailfan, similar in shape to previously known Microraptor tailfans except sporting a pair of long, narrow feathers at the center of the fan. The new specimen also showed no sign of the nuchal crest, indicating that the crest inferred from the holotype specimen may be an artifact of taphonomic distortion.
Numerous further specimens likely belonging to Microraptor have been uncovered, all from the Shangheshou Bed of the Jiufotang Formation in Liaoning, China. In fact, Microraptor is the most abundant non-avialan dinosaur fossil type found in this formation. In 2010, it was reported that there were over 300 undescribed specimens attributable to Microraptor or its close relatives among the collections of several Chinese museums, though many had been altered or composited by private fossil collectors.
Study and debate
Norell et al. (2002) described BPM 1 3-13 as the first dinosaur known to have flight feathers on its legs as well as on its arms.
Czerkas (2002) mistakenly described the fossil as having no long feathers on its legs, but only on its hands and arms, as he illustrated on the cover of his book Feathered Dinosaurs and the Origin of Flight. In his discussion of Cryptovolans in this book, Czerkas strongly denounces Norell's conclusions; "The misinterpretation of the primary wing feathers as being from the hind legs stems directly to [sic] seeing what one believes and wants to see". Czerkas also denounced Norell for failing to conclude that dromaeosaurs are birds, accusing him of succumbing to "...the blinding influences of preconceived ideas." The crown group definition of Aves, as a subset of Avialae, the explicit definition of the term "bird" that Norell employs, would definitely exclude BPM 1 3-13. However, he does not consider the specimen to belong to Avialae either.
Czerkas's interpretation of the hindleg feathers noted by Norell proved to be incorrect the following year when additional specimens of Microraptor were published by Xu and colleagues, showing a distinctive "hindwing" completely separate from the forelimb wing. The first of these specimens was discovered in 2001, and between 2001 and 2003 four more specimens were bought from private collectors by Xu's museum, the Institute of Vertebrate Paleontology and Paleoanthropology. Xu also considered these specimens, most of which had hindwings and proportional differences from the original Microraptor specimen, to be a new species, which he named Microraptor gui. However, Senter also questioned this classification, noting that as with Cryptovolans, most of the differences appeared to correspond with size, and likely age differences. Two further specimens, classified as M. zhaoianus in 2002 (M. gui had not yet been named), have also been described by Hwang and colleagues.
Czerkas also believed that the animal may have been able to fly better than Archaeopteryx, the animal usually referred to as the earliest known bird. He cited the fused sternum and asymmetrical feathers, and argued that Microraptor has modern bird features that make it more derived than Archaeopteryx. Czerkas cited the fact that this possibly volant animal is also very clearly a dromaeosaurid to suggest that the Dromaeosauridae might actually be a basal bird group, and that later, larger, species such as Deinonychus were secondarily flightless (Czerkas, 2002). The current consensus is that there is not enough evidence to conclude whether dromaeosaurs descended from an ancestor with some aerodynamic abilities. The work of Xu et al. (2003) suggested that basal dromaeosaurs were probably small, arboreal, and could glide. The work of Turner et al. (2007) suggested that the ancestral dromaeosaur could not glide or fly, but that there was good evidence that it was small-bodied (around 65 cm long and 600–700 g in mass).
Description
Microraptor was among the smallest-known non-avian dinosaurs, with the holotype of M. gui measuring in length, in wingspan and weighing . There are larger specimens which would have measured at least in length, more than in wingspan and weighed . Aside from their extremely small size, Microraptor were among the first non-avialan dinosaurs discovered with the impressions of feathers and wings. Seven specimens of M. zhaoianus have been described in detail, from which most feather impressions are known. Unusual even among early birds and feathered dinosaurs, Microraptor is one of the few known bird precursors to sport long flight feathers on the legs as well as the wings. Their bodies had a thick covering of feathers, with a diamond-shaped fan on the end of the tail (possibly for added stability during flight). Xu et al. (2003) compared the longer plumes on Microraptor's head to those of the Philippine eagle. Bands of dark and light present on some specimens may indicate color patterns present in life, though at least some individuals almost certainly possessed an iridescent black coloration.
Distinguishing anatomical features
A diagnosis is a statement of the anatomical features of an organism (or group) that collectively distinguish it from all other organisms. Some, but not all, of the features in a diagnosis are also autapomorphies. An autapomorphy is a distinctive anatomical feature that is unique to a given organism. Several anatomical features found in Microraptor, such as a combination of unserrated and partially serrated teeth with constricted 'waists', and unusually long upper arm bones, are shared with both primitive avians and primitive troodontids. Microraptor is particularly similar to the basal troodontid Sinovenator; in their 2002 description of two M. zhaoianus specimens, Hwang et al. note that this is not particularly surprising, given that both Microraptor and Sinovenator are very primitive members of two closely related groups, and both are close to the deinonychosaurian split between dromaeosaurids and troodontids.
Coloration
In March 2012, Quanguo Li et al. determined the plumage coloration of Microraptor based on the new specimen BMNHC PH881, which also showed several other features previously unknown in Microraptor. By analyzing the fossilized melanosomes (pigment-containing organelles) in the fossil with scanning electron microscope techniques, the researchers compared their arrangements to those of modern birds. In Microraptor, these organelles were shaped in a manner consistent with black, glossy coloration in modern birds. These rod-shaped, narrow melanosomes were arranged in stacked layers, much like those of a modern starling, and indicated iridescence in the plumage of Microraptor. Though the researchers state that the true function of the iridescence is yet unknown, it has been suggested that the tiny dromaeosaur was using its glossy coat as a form of communication or sexual display, much as in modern iridescent birds.
Classification
The cladogram below follows a 2012 analysis by paleontologists Phil Senter, James I. Kirkland, Donald D. DeBlieux, Scott Madsen and Natalie Toth.
In a 2024 paper which reported the smallest known juvenile specimen of Microraptor, Wang and Pei included microraptorians and eudromaeosaurians within a new clade Serraraptoria.
Paleobiology
Wings and flight
Microraptor had four wings, one on each of its forelimbs and hindlimbs, somewhat resembling one possible arrangement of the quartet of flight surfaces on a tandem wing aircraft of today. It had long pennaceous feathers on its arms and hands as well as on its legs and feet. The long feathers on the legs of Microraptor were true flight feathers as seen in modern birds, with asymmetrical vanes on the arm, leg, and tail feathers. As in modern bird wings, Microraptor had both primary (anchored to the hand) and secondary (anchored to the arm) flight feathers. This standard wing pattern was mirrored on the hindlegs, with flight feathers anchored to the upper foot bones as well as the upper and lower leg. Though not apparent in most fossils under natural light, due to obstruction from decayed soft tissue, the feather bases extended close to or in contact with the bones, as in modern birds, providing strong anchor points.
It was originally thought that Microraptor was a glider, and probably lived mainly in trees, because the hindwings anchored to the feet of Microraptor would have hindered their ability to run on the ground. Some paleontologists have suggested that feathered dinosaurs used their wings to parachute from trees, possibly to attack or ambush prey on the ground, as a precursor to gliding or true flight. In their 2007 study, Chatterjee and Templin tested this hypothesis as well, and found that the combined wing surface of Microraptor was too narrow to successfully parachute to the ground without injury from any significant height. However, the authors did leave open the possibility that Microraptor could have parachuted short distances, as between closely spaced tree branches. Wind tunnel experiments have demonstrated that sustaining a high-lift coefficient at the expense of high drag was likely the most efficient strategy for Microraptor when gliding between low elevations. Microraptor did not require a sophisticated, 'modern' wing morphology to be an effective glider. However, the idea that Microraptor was an arboreal glider relies on its having regularly climbed or even lived in trees, whereas studies of its anatomy have shown that its limb proportions fall in line with modern ground birds rather than climbers, and its skeleton shows none of the adaptations expected in animals specialized for climbing trees.
Describing specimens originally referenced as a distinctive species (Cryptovolans pauli), paleontologist Stephen Czerkas argued Microraptor may have been a powered flier, and indeed possibly a better flyer than Archaeopteryx. He noted that the Microraptor's fused sternum, asymmetrical feathers, and features of the shoulder girdle indicated that it could fly under its own power, rather than merely gliding. Today, most scientists agree that Microraptor had the anatomical features expected of a flying animal, though it would have been a less advanced form of flight compared to birds. For example, some studies suggest the shoulder joint was too primitive to allow a full flapping flight stroke. In the ancestral anatomy of theropod dinosaurs, the shoulder socket faced downward and slightly backward, making it impossible for the animals to raise their arms vertically, a prerequisite for the flapping flight stroke in birds. Studies of maniraptoran anatomy have suggested that the shoulder socket did not shift into the bird-like position of a high, upward orientation close to the vertebral column until relatively advanced avialans like the enantiornithes appeared. However, other scientists have argued that the shoulder girdle in some paravian theropods, including Microraptor, is curved in such a way that the shoulder joint could only have been positioned high on the back, allowing for a nearly vertical upstroke of the wing. This possibly advanced shoulder anatomy, combined with the presence of a propatagium linking the wrist to the shoulder (which fills the space in front of the flexed wing and may support the wing against drag in modern birds) and an alula, much like a "thumb-like" form of leading edge slot, may indicate that Microraptor was capable of true, powered flight.
Other studies have demonstrated that the wings of Microraptor were large enough to generate the lift necessary for powered launching into flight even without a fully vertical flight stroke. A 2016 study of incipient flight ability in paravians demonstrated that Microraptor was capable of wing-assisted incline running, as well as wing-assisted leaping and even ground-based launching.
Stephen Czerkas, Gregory S. Paul, and others have argued that the fact Microraptor could fly and yet is also very clearly a dromaeosaurid suggests that the Dromaeosauridae, including later and larger species such as Deinonychus, were secondarily flightless. The work of Xu and colleagues also suggested that the ancestors of dromaeosaurids were probably small, arboreal, and capable of gliding, although later discoveries of more primitive dromaeosaurids with short forelimbs unsuitable for gliding have cast doubt on this view. Work done on the question of flight ability in other paravians, however, showed that most of them probably would not have been able to achieve enough lift for powered flight, given their limited flight strokes and relatively smaller wings. These studies concluded that Microraptor probably evolved flight and its associated features (fused sternum, alula, etc.) independently of the ancestors of birds. In 2024, Kiat and O'Connor found that Mesozoic birds and Microraptor had remex morphologies consistent with those of modern volant birds, while anchiornithids and Caudipteryx were secondarily flightless.
Hindwing posture
Sankar Chatterjee suggested in 2005 that, in order for Microraptor to glide or fly, the forewings and hindwings must have been on different levels (as on a biplane) and not overlaid (as on a dragonfly), and that the latter posture would have been anatomically impossible. Using this biplane model, Chatterjee was able to calculate possible methods of gliding and determined that Microraptor most likely employed a phugoid style of gliding: launching itself from a perch, the animal would have swooped downward in a deep U-shaped curve and then lifted again to land on another tree. The feathers not directly employed in the biplane wing structure, like those on the tibia and the tail, could have been used to control drag and alter the flight path, trajectory, etc. The orientation of the hindwings would also have helped the animal control its gliding flight. Chatterjee also used computer algorithms that test animal flight capacity to test whether or not Microraptor was capable of true, powered flight, as opposed to or in addition to passive gliding. The resulting data showed that Microraptor did have the requirements to sustain level powered flight, so it is theoretically possible that the animal flew, as opposed to gliding.
Some paleontologists have doubted the biplane hypothesis, and have proposed other configurations. A 2010 study by Alexander et al. described the construction of a lightweight three-dimensional physical model used to perform glide tests. Using several hindleg configurations for the model, they found that the biplane model, while not unreasonable, was structurally deficient and needed a heavy-headed weight distribution for stable gliding, which they deemed unlikely. The study indicated that a laterally abducted hindwing structure represented the most biologically and aerodynamically consistent configuration for Microraptor. A further analysis by Brougham and Brusatte, however, concluded that Alexander's model reconstruction was not consistent with all of the available data on Microraptor and argued that the study was insufficient for determining a likely flight pattern for Microraptor. Brougham and Brusatte criticized the anatomy of the model used by Alexander and his team, noting that the hip anatomy was not consistent with other dromaeosaurs. In most dromaeosaurids, features of the hip bone prevent the legs from splaying horizontally; instead, they are locked in a vertical position below the body. Alexander's team used a specimen of Microraptor which was crushed flat to make their model, which Brougham and Brusatte argued did not reflect its actual anatomy. Later in 2010, Alexander's team responded to these criticisms, noting that the related dromaeosaur Hesperonychus, which is known from complete hip bones preserved in three dimensions, also shows hip sockets directed partially upward, possibly allowing the legs to splay more than in other dromaeosaurs. However, Hartman and colleagues suggested that Hesperonychus is not a dromaeosaur, but actually an avialan close to modern birds like Balaur bondoc based on phylogenetic analyses in 2019.
Ground movement
Due to the extent of the hindwings onto most of the animal's foot, many scientists have suggested that Microraptor would have been awkward during normal ground movement or running. The front wing feathers would also have hindered Microraptor when on the ground, due to the limited range of motion in the wrist and the extreme length of the wing feathers. A 2010 study by Corwin Sullivan and colleagues showed that, even with the wing folded as far as possible, the feathers would still have dragged along the ground if the arms were held in a neutral position, or extended forward as in a predatory strike. Only by keeping the wings elevated, or the upper arm extended fully backward, could Microraptor have avoided damaging the wing feathers. Therefore, it may have been anatomically impossible for Microraptor to have used its clawed forelimbs in capturing prey or manipulating objects.
Implications
The unique wing arrangement found in Microraptor raised the question of whether the evolution of flight in modern birds went through a four-winged stage, or whether four-winged gliders like Microraptor were an evolutionary side-branch that left no descendants. As early as 1915, naturalist William Beebe had argued that the evolution of bird flight may have gone through a four-winged (or tetrapteryx) stage. Chatterjee and Templin did not take a strong stance on this possibility, noting that both a conventional interpretation and a tetrapteryx stage are equally possible. However, based on the presence of unusually long leg feathers in various feathered dinosaurs, Archaeopteryx, and some modern birds such as raptors, as well as the discovery of further dinosaurs with long primary feathers on their feet (such as Pedopenna), the authors argued that the current body of evidence, both from morphology and phylogeny, suggests that bird flight did shift at some point from shared limb dominance to front-limb dominance and that all modern birds may have evolved from four-winged ancestors, or at least ancestors with unusually long leg feathers relative to the modern configuration.
Feeding
In 2010 researchers announced that further preparation of the type fossil of M. zhaoianus revealed preserved probable gut contents, and a full study on them was later published in 2022 by David Hone and colleagues. These consisted of the remains of a mammal, primarily a complete and articulated right foot (including all tarsals, metatarsals, and most of the phalanges) as well as the shafts of additional long bones and potentially other fragments. The foot skeleton is similar to those of Eomaia and Sinodelphys. It corresponds to an animal with an estimated snout to vent length of and a mass of . The unguals of the foot are less curved than in Eomaia or Sinodelphys, indicating that the mammal could climb but less effectively than in the two latter genera and so was likely not arboreal but potentially scansorial.
It is ambiguous whether the mammal had been predated upon or scavenged by the Microraptor, although the lack of other definitive body parts consumed may suggest the low-muscle mass foot may have been eaten during a late stage of carcass consumption, possibly through scavenging. The find is a rare example of a theropod definitively consuming a Mesozoic mammal, the only other being a specimen of the compsognathid Sinosauropteryx.
In the December 6, 2011 issue of Proceedings of the National Academy of Sciences, Jingmai O'Connor and coauthors described a specimen of Microraptor gui containing bones of an arboreal enantiornithean bird in its abdomen, specifically a partial wing and feet. Their position implies the bird was swallowed whole and head-first, which the authors interpreted as implying that the Microraptor had caught and consumed the bird in the trees, rather than scavenging it.
In 2013 researchers announced that they had found fish scales in the abdominal cavity of another M. gui specimen. The authors contradicted the prior suggestion that M. gui hunted only in an arboreal environment, proposing that it was also an adept hunter of fish. They further argued that the specimen showed a probable adaptation to a fish-eating diet, pointing to the first three teeth of the mandible being inclined anterodorsally, a characteristic often associated with piscivory. They concluded that Microraptor was an opportunistic feeder, hunting the most common prey in both arboreal and aquatic habitats.
Both of these studies regarded the respective gut contents as instances of predation. However, Hone and colleagues (2022) questioned the reliability of these interpretations and wrote that both could just as easily be attributed to scavenging. Further, they argued against Microraptor being a specialist in arboreal or aquatic hunting, citing the broad range of vertebrate gut contents (i.e. fish, mammals, lizards, birds) as evidence for a generalist hunting strategy, and noted that neither find required Microraptor to have been a specialist hunter in either habitat.
In 2019, a new genus of scleroglossan lizard (Indrasaurus) was described from a specimen found in the stomach of a Microraptor. The Microraptor apparently swallowed its prey head first, a behavior typical of modern carnivorous birds and lizards. The Indrasaurus bones lacked marked pitting and scarring, indicating that the Microraptor died shortly after eating the lizard and before significant digestion had occurred.
Unlike its fellow paravian Anchiornis, Microraptor has never been found with gastric pellets, despite the existence of four Microraptor specimens that preserve stomach contents. This suggests that Microraptor passed indigestible fur, feathers, and bits of bone in its droppings instead of producing pellets.
Based on the size of the scleral ring of the eye, it has been suggested Microraptor hunted at night. The discovery of iridescent plumage in Microraptor has led many to question this assumption on the grounds that no modern birds that have iridescent plumage are known to be nocturnal, but this argument is itself questionable as there in fact are modern nocturnal birds with iridescent plumage, such as the kākāpō as well as various night-feeding waterfowl; furthermore, as a forest-dwelling, presumably solitary carnivore, Microraptor was significantly different ecologically compared to extant corvids and icterids with dark, iridescent plumage, which are social omnivores of more open habitats.
| Biology and health sciences | Theropods | Animals |
1026522 | https://en.wikipedia.org/wiki/Boltzmann%20equation | Boltzmann equation | The Boltzmann equation or Boltzmann transport equation (BTE) describes the statistical behaviour of a thermodynamic system not in a state of equilibrium; it was devised by Ludwig Boltzmann in 1872.
The classic example of such a system is a fluid with temperature gradients in space causing heat to flow from hotter regions to colder ones, by the random but biased transport of the particles making up that fluid. In the modern literature the term Boltzmann equation is often used in a more general sense, referring to any kinetic equation that describes the change of a macroscopic quantity in a thermodynamic system, such as energy, charge or particle number.
The equation arises not by analyzing the individual positions and momenta of each particle in the fluid but rather by considering a probability distribution for the position and momentum of a typical particle—that is, the probability that the particle occupies a given very small region of space (mathematically the volume element ) centered at the position , and has momentum nearly equal to a given momentum vector (thus occupying a very small region of momentum space ), at an instant of time.
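As a compact reference for the picture just described, the evolution of the distribution function can be written in a standard textbook form; this is an illustrative sketch rather than a quotation from the article, with f(r, p, t) the one-particle distribution, m the particle mass, F an external force field, and the right-hand side the collision term:

\frac{\partial f}{\partial t} + \frac{\mathbf{p}}{m}\cdot\nabla_{\mathbf{r}} f + \mathbf{F}\cdot\nabla_{\mathbf{p}} f = \left(\frac{\partial f}{\partial t}\right)_{\mathrm{coll}}

Setting the collision term to zero gives collisionless (Vlasov-type) transport, while specific models of the collision term, such as Boltzmann's binary-collision integral, lead to the transport properties mentioned below, for example viscosity and thermal conductivity.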
The Boltzmann equation can be used to determine how physical quantities change, such as heat energy and momentum, when a fluid is in transport. One may also derive other properties characteristic to fluids such as viscosity, thermal conductivity, and electrical conductivity (by treating the charge carriers in a material as a gas). | Physical sciences | Thermodynamics | Physics |
11963035 | https://en.wikipedia.org/wiki/Dreadnought | Dreadnought | The dreadnought was the predominant type of battleship in the early 20th century. The first of its kind, the Royal Navy's HMS Dreadnought, had such an effect when launched in 1906 that similar battleships built after her were referred to as "dreadnoughts", and earlier battleships became known as pre-dreadnoughts. Her design had two revolutionary features: an "all-big-gun" armament scheme, with an unprecedented number of heavy-calibre guns, and steam turbine propulsion. As dreadnoughts became a crucial symbol of national power, the arrival of these new warships renewed the naval arms race between the United Kingdom and Germany. Dreadnought races sprang up around the world, including in South America, lasting up to the beginning of World War I. Successive designs increased rapidly in size and made use of improvements in armament, armour, and propulsion throughout the dreadnought era. Within five years, new battleships outclassed Dreadnought herself. These more powerful vessels were known as "super-dreadnoughts". Most of the original dreadnoughts were scrapped after the end of World War I under the terms of the Washington Naval Treaty, but many of the newer super-dreadnoughts continued serving throughout World War II.
Dreadnought-building consumed vast resources in the early 20th century, but there was only one battle between large dreadnought fleets. At the Battle of Jutland in 1916, the British and German navies clashed with no decisive result. The term dreadnought gradually dropped from use after World War I, especially after the Washington Naval Treaty, as virtually all remaining battleships shared dreadnought characteristics; it can also be used to describe battlecruisers, the other type of ship resulting from the dreadnought revolution.
Origins
The distinctive all-big-gun armament of the dreadnought was developed in the first years of the 20th century as navies sought to increase the range and power of the armament of their battleships. The typical battleship of the 1890s, now known as the "pre-dreadnought", had a main armament of four heavy guns of calibre, a secondary armament of six to eighteen quick-firing guns of between calibre, and other smaller weapons. This was in keeping with the prevailing theory of naval combat that battles would initially be fought at some distance, but the ships would then approach to close range for the final blows (as they did in the Battle of Manila Bay), when the shorter-range, faster-firing guns would prove most useful. Some designs had an intermediate battery of guns. Serious proposals for an all-big-gun armament were circulated in several countries by 1903.
All-big-gun designs commenced almost simultaneously in three navies. In 1904, the Imperial Japanese Navy authorized construction of Satsuma, originally designed with twelve guns. Work began on her construction in May 1905. The Royal Navy began the design of HMS Dreadnought in January 1905, and she was laid down in October of the same year. Finally, the US Navy gained authorization for the South Carolina class, carrying eight 12-inch guns, in March 1905, with construction commencing in December 1906.
The move to all-big-gun designs was accomplished because a uniform, heavy-calibre armament offered advantages in both firepower and fire control, and the Russo-Japanese War of 1904–1905 showed that future naval battles could, and likely would, be fought at long distances. The newest guns had longer range and fired heavier shells than a gun of calibre. Another possible advantage was fire control; at long ranges guns were aimed by observing the splashes caused by shells fired in salvoes, and it was difficult to interpret different splashes caused by different calibres of gun. There is still debate as to whether this feature was important.
Long-range gunnery
In naval battles of the 1890s the decisive weapon was the medium-calibre, typically , quick-firing gun firing at relatively short range; at the Battle of the Yalu River in 1894, the victorious Japanese did not commence firing until the range had closed to , and most of the fighting occurred at . At these ranges, lighter guns had good accuracy, and their high rate of fire delivered high volumes of ordnance on the target, known as the "hail of fire". Naval gunnery was too inaccurate to hit targets at a longer range.
By the early 20th century, British and American admirals expected future battleships would engage at longer distances. Newer models of torpedo had longer ranges. For instance, in 1903, the US Navy ordered a design of torpedo effective to . Both British and American admirals concluded that they needed to engage the enemy at longer ranges. In 1900, Admiral Fisher, commanding the Royal Navy Mediterranean Fleet, ordered gunnery practice with 6-inch guns at . By 1904 the US Naval War College was considering the effects on battleship tactics of torpedoes with a range of .
The range of light and medium-calibre guns was limited, and accuracy declined badly at longer range. At longer ranges the advantage of a high rate of fire decreased; accurate shooting depended on spotting the shell-splashes of the previous salvo, which limited the optimum rate of fire.
On 10 August 1904 the Imperial Russian Navy and the Imperial Japanese Navy had one of the longest-range gunnery duels to date—over during the Battle of the Yellow Sea. The Russian battleships were equipped with Lugeol range finders with an effective range of , and the Japanese ships had Barr & Stroud range finders that reached out to , but both sides still managed to hit each other with fire at . Naval architects and strategists around the world took notice.
All-big-gun mixed-calibre ships
An evolutionary step was to reduce the quick-firing secondary battery and substitute additional heavy guns, typically . Ships designed in this way have been described as 'all-big-gun mixed-calibre' or later 'semi-dreadnoughts'. Semi-dreadnought ships had many heavy secondary guns in wing turrets near the centre of the ship, instead of the small guns mounted in barbettes of earlier pre-dreadnought ships.
Semi-dreadnought classes included the British and ; Russian ; Japanese , , and ; American and ; French ; Italian ; and Austro-Hungarian classes.
The design process for these ships often included discussion of an 'all-big-gun one-calibre' alternative. The June 1902 issue of Proceedings of the US Naval Institute contained comments by the US Navy's leading gunnery expert, P. R. Alger, proposing a main battery of eight guns in twin turrets. In May 1902, the Bureau of Construction and Repair submitted a design for the battleship with twelve guns in twin turrets, two at the ends and four in the wings. Lt. Cdr. Homer C. Poundstone submitted a paper to President Theodore Roosevelt in December 1902 arguing the case for larger battleships. In an appendix to his paper, Poundstone suggested a greater number of guns was preferable to a smaller number of . The Naval War College and Bureau of Construction and Repair developed these ideas in studies between 1903 and 1905. War-game studies begun in July 1903 "showed that a battleship armed with twelve guns hexagonally arranged would be equal to three or more of the conventional type."
The Royal Navy was thinking along similar lines. A design had been circulated in 1902–1903 for "a powerful 'all big-gun' armament of two calibres, viz. four and twelve guns." The Admiralty decided to build three more King Edward VIIs (with a mixture of 12-inch, 9.2-inch and 6-inch) in the 1903–1904 naval construction programme instead. The all-big-gun concept was revived for the 1904–1905 programme, the Lord Nelson class. Restrictions on length and beam meant the midships 9.2-inch turrets became single instead of twin, thus giving an armament of four 12-inch, ten 9.2-inch and no 6-inch. The constructor for this design, J. H. Narbeth, submitted an alternative drawing showing an armament of twelve 12-inch guns, but the Admiralty was not prepared to accept this. Part of the rationale for the decision to retain mixed-calibre guns was the need to begin the building of the ships quickly because of the tense situation produced by the Russo-Japanese War.
Switch to all-big-gun designs
The replacement of the guns with weapons of calibre improved the striking power of a battleship, particularly at longer ranges. Uniform heavy-gun armament offered many other advantages. One advantage was logistical simplicity. When the US was considering whether to have a mixed-calibre main armament for the , for example, William Sims and Poundstone stressed the advantages of homogeneity in terms of ammunition supply and the transfer of crews from the disengaged guns to replace gunners wounded in action.
A uniform calibre of gun also helped streamline fire control. The designers of Dreadnought preferred an all-big-gun design because it would mean only one set of calculations about adjustments to the range of the guns. Some historians today hold that a uniform calibre was particularly important because the risk of confusion between shell-splashes of 12-inch and lighter guns made accurate ranging difficult. This viewpoint is controversial, as fire control in 1905 was not advanced enough to use the salvo-firing technique where this confusion might be important, and confusion of shell-splashes does not seem to have been a concern of those working on all-big-gun designs. Nevertheless, the likelihood of engagements at longer ranges was important in deciding that the heaviest possible guns should become standard, hence 12-inch rather than 10-inch.
The newer designs of 12-inch gun mounting had a considerably higher rate of fire, removing the advantage previously enjoyed by smaller calibres. In 1895, a 12-inch gun might have fired one round every four minutes; by 1902, two rounds per minute was usual. In October 1903, the Italian naval architect Vittorio Cuniberti published a paper in Jane's Fighting Ships entitled "An Ideal Battleship for the British Navy", which called for a 17,000-ton ship carrying a main armament of twelve 12-inch guns, protected by armour 12 inches thick, and having a speed of 24 knots. Cuniberti's idea—which he had already proposed to his own navy, the Regia Marina—was to make use of the high rate of fire of new 12-inch guns to produce devastating rapid fire from heavy guns to replace the 'hail of fire' from lighter weapons. Something similar lay behind the Japanese move towards heavier guns; at Tsushima, Japanese shells contained a higher than normal proportion of high explosive, and were fused to explode on contact, starting fires rather than piercing armour. The increased rate of fire laid the foundations for future advances in fire control.
Building the first dreadnoughts
In Japan, the two battleships of the 1903–1904 programme were the first in the world to be laid down as all-big-gun ships, with eight 12-inch guns. The armour of their design was considered too thin, demanding a substantial redesign. The financial pressures of the Russo-Japanese War and the short supply of 12-inch guns—which had to be imported from the United Kingdom—meant these ships were completed with a mixture of 12-inch and 10-inch armament. The 1903–1904 design retained traditional triple-expansion steam engines, unlike Dreadnought.
The dreadnought breakthrough occurred in the United Kingdom in October 1905. Fisher, now the First Sea Lord, had long been an advocate of new technology in the Royal Navy and had recently been convinced of the idea of an all-big-gun battleship. Fisher is often credited as the creator of the dreadnought and the father of the United Kingdom's great dreadnought battleship fleet, an impression he himself did much to reinforce. It has been suggested Fisher's main focus was on the arguably even more revolutionary battlecruiser and not the battleship.
Shortly after taking office, Fisher set up a Committee on Designs to consider future battleships and armoured cruisers. The committee's first task was to consider a new battleship. The specification for the new ship was a 12-inch main battery and anti-torpedo-boat guns but no intermediate calibres, and a speed of 21 knots, which was two or three knots faster than existing battleships. The initial designs intended twelve 12-inch guns, though difficulties in positioning these guns led the chief constructor at one stage to propose a return to four 12-inch guns with sixteen or eighteen of 9.2-inch. After a full evaluation of reports of the action at Tsushima compiled by an official observer, Captain Pakenham, the Committee settled on a main battery of ten 12-inch guns, along with twenty-two 12-pounders as secondary armament. The committee also gave Dreadnought steam turbine propulsion, which was unprecedented in a large warship. The greater power and lighter weight of turbines meant the 21-knot design speed could be achieved in a smaller and less costly ship than if reciprocating engines had been used. Construction took place quickly; the keel was laid on 2 October 1905, the ship was launched on 10 February 1906, and completed on 3 October 1906—an impressive demonstration of British industrial might.
The first US dreadnoughts were the two South Carolina-class ships. Detailed plans for these were worked out in July–November 1905, and approved by the Board of Construction on 23 November 1905. Building was slow; specifications for bidders were issued on 21 March 1906, the contracts awarded on 21 July 1906 and the two ships were laid down in December 1906, after the completion of the Dreadnought.
Design
The designers of dreadnoughts sought to provide as much protection, speed, and firepower as possible in a ship of a realistic size and cost. The hallmark of dreadnought battleships was an "all-big-gun" armament, but they also had heavy armour concentrated mainly in a thick belt at the waterline and in one or more armoured decks. Secondary armament, fire control, command equipment, and protection against torpedoes also had to be crammed into the hull.
Demands for ever greater speed, striking power, and endurance meant that the displacement, and hence the cost, of dreadnoughts tended to increase. The Washington Naval Treaty of 1922 imposed a limit of 35,000 tons on the displacement of capital ships, and in subsequent years 'treaty battleships' were commissioned up to this limit. Japan's decision to leave the Treaty in the 1930s, and the arrival of the Second World War, eventually made this limit irrelevant.
Armament
Dreadnoughts mounted a uniform main battery of heavy-calibre guns; the number, size, and arrangement differed between designs. Dreadnought mounted ten 12-inch guns. 12-inch guns had been standard for most navies in the pre-dreadnought era, and this continued in the first generation of dreadnought battleships. The Imperial German Navy was an exception, continuing to use 11-inch guns in its first class of dreadnoughts, the Nassau class.
Dreadnoughts also carried lighter weapons. Many early dreadnoughts carried a secondary armament of very light guns designed to fend off enemy torpedo boats. The calibre and weight of secondary armament tended to increase, as the range of torpedoes and the staying power of the torpedo boats and destroyers expected to carry them also increased. From the end of World War I onwards, battleships had to be equipped with many light guns as anti-aircraft armament.
Dreadnoughts frequently carried torpedo tubes themselves. In theory, a line of battleships so equipped could unleash a devastating volley of torpedoes on an enemy line steaming a parallel course. The practice was also a carry-over from the older tactical doctrine of continuously closing the range with the enemy, and from the idea that gunfire alone might be sufficient to cripple a battleship, but not sink it outright, so that a coup de grâce would be delivered with torpedoes. In practice, torpedoes fired from battleships scored very few hits, and a stored torpedo risked causing a dangerous explosion if hit by enemy fire. Indeed, the only documented instance of one battleship successfully torpedoing another came during the action of 27 May 1941, when the British battleship Rodney claimed to have torpedoed the crippled Bismarck at close range.
Position of main armament
The effectiveness of the guns depended in part on the layout of the turrets. Dreadnought, and the British ships which immediately followed it, carried five turrets: one forward, one aft and one amidships on the centreline of the ship, and two in the 'wings' next to the superstructure. This allowed three turrets to fire ahead and four on the broadside. The Nassau and Helgoland classes of German dreadnoughts adopted a 'hexagonal' layout, with one turret each fore and aft and four wing turrets; this meant more guns were mounted in total, but the same number could fire ahead or broadside as with Dreadnought.
Dreadnought designs experimented with different layouts. The British Neptune-class battleship staggered the wing turrets, so all ten guns could fire on the broadside, a feature also used by the German Kaiser class. This risked blast damage to parts of the ship over which the guns fired, and put great stress on the ship's frames.
If all turrets were on the centreline of the vessel, stresses on the ship's frames were relatively low. This layout meant the entire main battery could fire on the broadside, though fewer guns could fire end-on. It also meant the hull would be longer, which posed some challenges for the designers; a longer ship needed to devote more weight to armour to get equivalent protection, and the magazines which served each turret interfered with the distribution of boilers and engines. For these reasons, HMS Agincourt, which carried a record fourteen 12-inch guns in seven centreline turrets, was not considered a success.
A superfiring layout was eventually adopted as standard. This involved raising one or two turrets so they could fire over a turret immediately forward or astern of them. The US Navy adopted this feature with its first dreadnoughts in 1906, but others were slower to do so. As with other layouts, there were drawbacks. Initially, there were concerns about the impact of the blast of the raised guns on the lower turret. Superfiring turrets also raised the ship's centre of gravity and might reduce its stability. Nevertheless, this layout made the best of the firepower available from a fixed number of guns, and was eventually adopted generally. The US Navy used superfiring on the South Carolina class, and the layout was adopted in the Royal Navy with the Orion class of 1910. By World War II, superfiring was entirely standard.
Initially, all dreadnoughts had two guns to a turret. One solution to the problem of turret layout was to put three or even four guns in each turret. Fewer turrets meant the ship could be shorter, or could devote more space to machinery. On the other hand, it meant that if an enemy shell destroyed one turret, a higher proportion of the main armament would be out of action. The risk of the blast waves from each gun barrel interfering with others in the same turret also reduced the rate of fire somewhat. The first nation to adopt the triple turret was Italy, in the Dante Alighieri, soon followed by Russia with the Gangut class, the Austro-Hungarian Tegetthoff class, and the US Nevada class. British Royal Navy battleships did not adopt triple turrets until after the First World War, with the Nelson class, and Japanese battleships not until the late-1930s Yamato class. Several later designs used quadruple turrets, including the British King George V class and the French Dunkerque and Richelieu classes.
Main armament power and calibre
Rather than try to fit more guns onto a ship, it was possible to increase the power of each gun. This could be done either by increasing the calibre of the weapon, and hence the weight of shell, or by lengthening the barrel to increase muzzle velocity. Either approach offered the chance to increase range and armour penetration.
Both methods offered advantages and disadvantages, though in general greater muzzle velocity meant increased barrel wear. As guns fire, their barrels wear out, losing accuracy and eventually requiring replacement. At times, this became problematic; the US Navy seriously considered stopping practice firing of heavy guns in 1910 because of the wear on the barrels. The disadvantages of guns of larger calibre are that guns and turrets must be heavier; and heavier shells, which are fired at lower velocities, require turret designs that allow a larger angle of elevation for the same range. Heavier shells have the advantage of being slowed less by air resistance, retaining more penetrating power at longer ranges.
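The point about air resistance can be illustrated with a simple drag calculation. The Python sketch below is illustrative only: it assumes a flat trajectory, a constant drag coefficient shared by both shells, and round-number masses, diameters, and muzzle velocity rather than data for any historical gun.

import math

RHO_AIR = 1.225   # sea-level air density, kg/m^3
CD = 0.30         # assumed constant drag coefficient for both shells

def speed_after(distance_m, mass_kg, diameter_m, v0_ms, dt=0.001):
    """Integrate dv/dt = -(rho * Cd * A / (2 * m)) * v^2 along a flat path."""
    area = math.pi * (diameter_m / 2.0) ** 2
    k = RHO_AIR * CD * area / (2.0 * mass_kg)
    v, x = v0_ms, 0.0
    while x < distance_m:
        v -= k * v * v * dt
        x += v * dt
    return v

# Illustrative shells fired at the same muzzle velocity: ~390 kg for a
# 12-inch (305 mm) shell and ~570 kg for a 13.5-inch (343 mm) shell.
print(speed_after(10_000, 390, 0.305, 830))  # lighter shell: ~590 m/s at 10 km
print(speed_after(10_000, 570, 0.343, 830))  # heavier shell: ~615 m/s at 10 km

Because mass grows roughly with the cube of the calibre while the drag area grows only with its square, the heavier shell loses a smaller fraction of its speed, and hence of its penetrating power, over the same distance.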
Different navies approached the issue of calibre in different ways. The German navy, for instance, generally used a lighter calibre than the equivalent British ships, e.g. 12-inch calibre when the British standard was 13.5 inches. Because German metallurgy was superior, the German 12-inch gun had better shell weight and muzzle velocity than the British 12-inch; and German ships could afford more armour for the same vessel weight, because the German 12-inch guns were lighter than the 13.5-inch guns the British required for comparable effect.
Over time, the calibre of guns tended to increase. In the Royal Navy, the Orion class, launched in 1910, carried ten 13.5-inch guns, all on the centreline; the Queen Elizabeth class, launched in 1913, carried eight 15-inch guns. In all navies, fewer guns of larger calibre came to be used. The smaller number of guns simplified their distribution, and centreline turrets became the norm.
A further step change was planned for battleships designed and laid down at the end of World War I. The Japanese Nagato class, laid down from 1917, carried 16-inch guns, and was quickly matched by the US Navy's Colorado class. Both the United Kingdom and Japan were planning battleships with 18-inch armament, in the British case the N3 class. The Washington Naval Treaty, concluded on 6 February 1922 and ratified later, limited battleship guns to not more than 16-inch calibre, and these heavier guns were not produced.
The only battleships to break the limit were the Japanese Yamato class, begun in 1937 (after the treaty expired), which carried 18.1-inch (460 mm) main guns. By the middle of World War II, the United Kingdom was making use of 15-inch guns kept as spares for the Queen Elizabeth and Revenge classes to arm the last British battleship, HMS Vanguard.
Some World War II-era designs proposed another move towards gigantic armament. The German H-43 and H-44 designs proposed 20-inch (508 mm) guns, and there is evidence Hitler wanted calibres as high as 24 inches (609 mm); the Japanese 'Super Yamato' design also called for 20-inch guns. None of these proposals went further than very preliminary design work.
Secondary armament
The first dreadnoughts tended to have a very light secondary armament intended to protect them from torpedo boats. Dreadnought carried 12-pounder guns; each of her twenty-two 12-pounders could fire at least 15 rounds a minute at any torpedo boat making an attack. The South Carolinas and other early American dreadnoughts were similarly equipped. At this stage, torpedo boats were expected to attack separately from any fleet actions. Therefore, there was no need to armour the secondary gun armament, or to protect the crews from the blast effects of the main guns. In this context, the light guns tended to be mounted in unarmoured positions high on the ship to minimize weight and maximize field of fire.
Within a few years, the principal threat was from the destroyer—larger, more heavily armed, and harder to destroy than the torpedo boat. Since the risk from destroyers was very serious, it was considered that one shell from a battleship's secondary armament should sink (rather than merely damage) any attacking destroyer. Destroyers, in contrast to torpedo boats, were expected to attack as part of a general fleet engagement, so it was necessary for the secondary armament to be protected against shell splinters from heavy guns, and against the blast of the main armament. This philosophy of secondary armament was adopted by the German navy from the start; Nassau, for instance, carried twelve 5.9-inch (150 mm) and sixteen 3.45-inch (88 mm) guns, and subsequent German dreadnought classes followed this lead. These heavier guns tended to be mounted in armoured barbettes or casemates on the main deck. The Royal Navy increased its secondary armament from 12-pounder to first 4-inch and then 6-inch guns, which were standard at the start of World War I; the US standardized on 5-inch calibre for the war but planned 6-inch guns for the ships designed just afterwards.
The secondary battery served several other roles. It was hoped that a medium-calibre shell might be able to score a hit on an enemy dreadnought's sensitive fire control systems. It was also felt that the secondary armament could play an important role in driving off enemy cruisers from attacking a crippled battleship.
The secondary armament of dreadnoughts was, on the whole, unsatisfactory. A hit from a light gun could not be relied on to stop a destroyer, while heavier guns could not be relied on to hit one, as experience at the Battle of Jutland showed. The casemate mountings of heavier guns proved problematic; being low in the hull, they were liable to flooding, and on several classes some were removed and plated over. The only sure way to protect a dreadnought from destroyer or torpedo-boat attack was to provide a destroyer squadron as an escort. After World War I the secondary armament tended to be mounted in turrets on the upper deck and around the superstructure, which allowed a wide field of fire and good protection without the drawbacks of casemates. Through the 1920s and 1930s, the secondary guns came to be seen as a major part of the anti-aircraft battery, with high-angle, dual-purpose guns increasingly adopted.
Armour
Much of the displacement of a dreadnought was taken up by the steel plating of the armour. Designers spent much time and effort to provide the best possible protection for their ships against the various weapons with which they would be faced. Only so much weight could be devoted to protection, without compromising speed, firepower or seakeeping.
Central citadel
The bulk of a dreadnought's armour was concentrated around the "armoured citadel". This was a box, with four armoured walls and an armoured roof, around the most important parts of the ship. The sides of the citadel were the "armoured belt" of the ship, which started on the hull just in front of the forward turret and ran to just behind the aft turret. The ends of the citadel were two armoured bulkheads, fore and aft, which stretched between the ends of the armour belt. The "roof" of the citadel was an armoured deck. Within the citadel were the boilers, engines, and the magazines for the main armament. A hit to any of these systems could cripple or destroy the ship. The "floor" of the box was the bottom of the ship's hull, and was unarmoured, although it was, in fact, a "triple bottom".
The earliest dreadnoughts were intended to take part in a pitched battle against other battleships at ranges of up to about 10,000 yards (9,100 m). In such an encounter, shells would fly on a relatively flat trajectory, and a shell would have to hit at or just above the waterline to damage the vitals of the ship. For this reason, the early dreadnoughts' armour was concentrated in a thick belt around the waterline; this was 11 inches (280 mm) thick in Dreadnought. Behind this belt were arranged the ship's coal bunkers, to further protect the engineering spaces. In an engagement of this sort, there was also a lesser threat of indirect damage to the vital parts of the ship. A shell which struck above the belt armour and exploded could send fragments flying in all directions. These fragments were dangerous, but could be stopped by much thinner armour than would be necessary to stop an unexploded armour-piercing shell. To protect the innards of the ship from fragments of shells which detonated on the superstructure, much thinner steel armour was applied to the decks of the ship.
The thickest protection was reserved for the central citadel in all battleships. Some navies extended a thinner armoured belt and armoured deck to cover the ends of the ship, or extended a thinner armoured belt up the outside of the hull. This "tapered" armour was used by the major European navies—the United Kingdom, Germany, and France. This arrangement gave some armour to a larger part of the ship; for the first dreadnoughts, when high-explosive shellfire was still considered a significant threat, this was useful. It tended to result in the main belt being very short, only protecting a thin strip above the waterline; some navies found that when their dreadnoughts were heavily laden, the armoured belt was entirely submerged. The alternative was an "all or nothing" protection scheme, developed by the US Navy. The armour belt was tall and thick, but no side protection at all was provided to the ends of the ship or the upper decks. The armoured deck was also thickened. The "all-or-nothing" system provided more effective protection against the very-long-range engagements of dreadnought fleets and was adopted outside the US Navy after World War I.
The design of the dreadnought changed to meet new challenges. For example, armour schemes were changed to reflect the greater risk of plunging shells from long-range gunfire, and the increasing threat from armour-piercing bombs dropped by aircraft. Later designs carried a greater thickness of steel on the armoured deck; Yamato carried a 16-inch (410 mm) main belt, but a deck 9 inches (230 mm) thick.
Underwater protection and subdivision
The final element of the protection scheme of the first dreadnoughts was the subdivision of the ship below the waterline into several watertight compartments. If the hull were holed—by shellfire, mine, torpedo, or collision—then, in theory, only one area would flood and the ship could survive. To make this precaution even more effective, many dreadnoughts had no doors between different underwater sections, so that even a surprise hole below the waterline need not sink the ship. There were still several instances where flooding spread between underwater compartments.
The greatest evolution in dreadnought protection came with the development of the anti-torpedo bulge and torpedo belt, both attempts to protect against underwater damage by mines and torpedoes. The purpose of underwater protection was to absorb the force of a detonating mine or torpedo well away from the final watertight hull. This meant an inner bulkhead along the side of the hull, which was generally lightly armoured to capture splinters, separated from the outer hull by one or more compartments. The compartments in between were either left empty, or filled with coal, water or fuel oil.
Propulsion
Dreadnoughts were propelled by two to four screw propellers. Dreadnought herself, and all British dreadnoughts, had screw shafts driven by steam turbines. The first generation of dreadnoughts built in other nations used the slower triple-expansion steam engine which had been standard in pre-dreadnoughts.
Turbines offered more power than reciprocating engines for the same volume of machinery. This, along with a guarantee on the new machinery from the inventor, Charles Parsons, persuaded the Royal Navy to use turbines in Dreadnought. It is often said that turbines had the additional benefits of being cleaner and more reliable than reciprocating engines. By 1905, new designs of reciprocating engine were available which were cleaner and more reliable than previous models.
Turbines also had disadvantages. At cruising speeds much slower than maximum speed, turbines were markedly less fuel-efficient than reciprocating engines. This was particularly important for navies which required a long range at cruising speeds—and hence for the US Navy, which was planning in the event of war to cruise across the Pacific and engage the Japanese in the Philippines.
The US Navy experimented with turbine engines from 1908 in North Dakota, but was not fully committed to turbines until the Pennsylvania class in 1916. In the preceding Nevada class, one ship, Oklahoma, received reciprocating engines, while Nevada received geared turbines. The two New York-class ships of 1914 both received reciprocating engines, but all four ships of the Florida (1911) and Wyoming (1912) classes received turbines.
The disadvantages of the turbine were eventually overcome. The solution generally adopted was the geared turbine, in which gearing reduced the rotation rate of the propellers and so increased efficiency. This required technical precision in the gears and was therefore difficult to implement.
One alternative was the turbo-electric drive, in which the steam turbine generated electrical power which then drove the propellers. This was particularly favoured by the US Navy, which used it for all dreadnoughts from late 1915 to 1922. The advantages of this method were its low cost, the opportunity for very close underwater compartmentalization, and good astern performance. The disadvantages were that the machinery was heavy and vulnerable to battle damage, particularly the effects of flooding on the electrics.
Turbines were never replaced in battleship design. Diesel engines were eventually considered by some powers, as they offered very good endurance and an engineering space taking up less of the length of the ship. However, diesels were also heavier, took up more vertical space, offered less power, and were considered unreliable.
Fuel
The first generation of dreadnoughts used coal to fire the boilers which fed steam to the turbines. Coal had been in use since the first steam warships. One advantage of coal was that it is quite inert (in lump form) and thus could be used as part of the ship's protection scheme. Coal also had many disadvantages. It was labour-intensive to pack coal into the ship's bunkers and then feed it into the boilers, and the boilers became clogged with ash. Airborne coal dust and related vapours were highly explosive, a possibility evidenced by the explosion of USS Maine in 1898. Burning coal as fuel also produced thick black smoke which gave away the position of a fleet and interfered with visibility, signalling, and fire control. In addition, coal was very bulky and had comparatively low thermal efficiency.
Oil-fired propulsion had many advantages for naval architects and officers at sea alike. It reduced smoke, making ships less visible. It could be fed into boilers automatically, rather than needing a complement of stokers to do it by hand. Oil has roughly twice the thermal content of coal. This meant that the boilers themselves could be smaller; and for the same volume of fuel, an oil-fired ship would have much greater range.
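A rough back-of-envelope comparison shows the scale of the advantage per unit of bunker volume. The figures in the Python sketch below are assumed typical heating values and densities, not data for any particular ship or fuel grade.

# Assumed round-number figures for steam coal and heavy fuel oil.
coal_mj_per_kg, coal_kg_per_m3 = 24, 800     # bulk (stowed) density of coal
oil_mj_per_kg, oil_kg_per_m3 = 42, 950

coal_gj_per_m3 = coal_mj_per_kg * coal_kg_per_m3 / 1000   # ~19 GJ per m^3
oil_gj_per_m3 = oil_mj_per_kg * oil_kg_per_m3 / 1000      # ~40 GJ per m^3

# At equal bunker volume and boiler efficiency, range scales with stored energy:
print(f"range multiplier: {oil_gj_per_m3 / coal_gj_per_m3:.1f}x")  # ~2.1x

On these assumptions an oil-fired ship carries roughly twice the energy in the same bunker volume, which is the basis of the claim about greater range.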
These benefits meant that, as early as 1901, Fisher was pressing the advantages of oil fuel. There were technical problems with oil-firing, connected with the different distribution of the weight of oil fuel compared to coal, and the problems of pumping viscous oil. The main problem with using oil for the battle fleet was that, with the exception of the United States, every major navy would have to import its oil. As a result, some navies adopted 'dual-firing' boilers which could use coal sprayed with oil; British ships so equipped, which included dreadnoughts, could even use oil alone at up to 60% power.
The US had large reserves of oil, and the US Navy was the first to wholeheartedly adopt oil-firing, deciding to do so in 1910 and ordering oil-fired boilers for the Nevada class in 1911. The United Kingdom was not far behind, deciding in 1912 to use oil on its own in the Queen Elizabeth class; shorter British design and building times meant that Queen Elizabeth was commissioned before either of the Nevada-class vessels. The United Kingdom planned to revert to mixed firing with the subsequent Revenge class, at the cost of some speed—but Fisher, who returned to office in 1914, insisted that all the boilers should be oil-fired. Other major navies retained mixed coal-and-oil firing until the end of World War I.
Dreadnought building
Dreadnoughts developed as a move in an international battleship arms-race which had begun in the 1890s. The British Royal Navy had a big lead in the number of pre-dreadnought battleships, but a lead of only one dreadnought in 1906. This has led to criticism that the British, by launching HMS Dreadnought, threw away a strategic advantage. Most of the United Kingdom's naval rivals had already contemplated or even built warships that featured a uniform battery of heavy guns. Both the Japanese Navy and the US Navy ordered "all-big-gun" ships in 1904–1905, with Satsuma and South Carolina, respectively. Germany's Kaiser Wilhelm II had advocated a fast warship armed only with heavy guns since the 1890s. By securing a head start in dreadnought construction, the United Kingdom ensured its dominance of the seas continued.
The battleship race soon accelerated once more, placing a great burden on the finances of the governments which engaged in it. The first dreadnoughts were not much more expensive than the last pre-dreadnoughts, but the cost per ship continued to grow thereafter. Modern battleships were the crucial element of naval power in spite of their price. Each battleship signalled national power and prestige, in a manner similar to the nuclear weapons of today. Germany, France, Russia, Italy, Japan and Austria-Hungary all began dreadnought programmes, and second-rank powers—including the Ottoman Empire, Greece, Argentina, Brazil, and Chile—commissioned British, French, German, and American yards to build dreadnoughts for them.
Anglo-German arms race
The construction of Dreadnought coincided with increasing tension between the United Kingdom and Germany. Germany had begun building a large battlefleet in the 1890s, as part of a deliberate policy to challenge British naval supremacy. With the signing of the Entente Cordiale in April 1904, it became increasingly clear the United Kingdom's principal naval enemy would be Germany, which was building up a large, modern fleet under the "Tirpitz" laws. This rivalry gave rise to the two largest dreadnought fleets of the pre-1914 period.
The first German response to Dreadnought was the Nassau class, laid down in 1907, followed by the Helgoland class in 1909. Together with two battlecruisers—a type for which the Germans had less admiration than Fisher, but which could be built under the authorization for armoured cruisers, rather than for capital ships—these classes gave Germany a total of ten modern capital ships built or building in 1909. The British ships were faster and more powerful than their German equivalents, but a 12:10 ratio fell far short of the 2:1 superiority the Royal Navy wanted to maintain.
In 1909, the British Parliament authorized an additional four capital ships, holding out the hope that Germany would be willing to negotiate a treaty limiting battleship numbers. If no such solution could be found, an additional four ships would be laid down in 1910. Even this compromise meant, when taken together with some social reforms, raising taxes enough to prompt a constitutional crisis in the United Kingdom in 1909–1910. In 1910, the British eight-ship construction plan went ahead, including four Orion-class super-dreadnoughts, augmented by battlecruisers purchased by Australia and New Zealand. In the same period, Germany laid down only three ships, giving the United Kingdom a superiority of 22 ships to 13. The British resolve, as demonstrated by their construction programme, led the Germans to seek a negotiated end to the arms race. The Admiralty's new target of a 60% lead over Germany was near enough to Tirpitz's goal of cutting the British lead to 50%, but the talks foundered on the question of whether to include British colonial battlecruisers in the count, as well as on non-naval matters such as the German demands for recognition of ownership of Alsace-Lorraine.
The dreadnought race stepped up in 1910 and 1911, with Germany laying down four capital ships each year and the United Kingdom five. Tension came to a head following the German Naval Law of 1912. This proposed a fleet of 33 German battleships and battlecruisers, outnumbering the Royal Navy in home waters. To make matters worse for the United Kingdom, the Imperial Austro-Hungarian Navy was building four dreadnoughts, while Italy had four and was building two more. Against such threats, the Royal Navy could no longer guarantee vital British interests. The United Kingdom was faced with a choice between building more battleships, withdrawing from the Mediterranean, or seeking an alliance with France. Further naval construction was unacceptably expensive at a time when social welfare provision was making calls on the budget. Withdrawing from the Mediterranean would mean a huge loss of influence, weakening British diplomacy in the region and shaking the stability of the British Empire. The only acceptable option, and the one recommended by First Lord of the Admiralty Winston Churchill, was to break with the policies of the past and to make an arrangement with France. The French would assume responsibility for checking Italy and Austria-Hungary in the Mediterranean, while the British would protect the north coast of France. In spite of some opposition from British politicians, the Royal Navy organised itself on this basis in 1912.
In spite of these important strategic consequences, the 1912 Naval Law had little bearing on the battleship-force ratios. The United Kingdom responded by laying down ten new super-dreadnoughts in its 1912 and 1913 budgets—ships of the Queen Elizabeth and Revenge classes, which introduced a further step-change in armament, speed and protection—while Germany laid down only five, concentrating resources on its army.
United States
The American South Carolina-class battleships were the first all-big-gun ships completed by one of the United Kingdom's rivals. The planning for the type had begun before Dreadnought was launched. There is some speculation that informal contacts with sympathetic Royal Navy officials influenced the US Navy design, but the American ship was very different.
The US Congress authorized the Navy to build two battleships, but of no more than 16,000 tons displacement. As a result, the South Carolina class was built to much tighter limits than Dreadnought. To make the best use of the weight available for armament, all eight 12-inch guns were mounted along the centreline, in superfiring pairs fore and aft. This arrangement gave a broadside equal to Dreadnought's, but with fewer guns; it was the most efficient distribution of weapons and a precursor of standard practice in future generations of battleships. The principal economy of displacement compared to Dreadnought was in propulsion; South Carolina retained triple-expansion steam engines and could manage only 18.5 knots, compared to 21 knots for Dreadnought. For this reason, the later Delawares were described by some as the US Navy's first dreadnoughts; only a few years after their commissioning, the South Carolina class could not operate tactically with the newer dreadnoughts, due to their low speed, and were forced to operate with the older pre-dreadnoughts.
The two 10-gun, 20,500-ton ships of the Delaware class were the first US battleships to match the speed of British dreadnoughts, but their secondary battery was "wet" (suffering from spray) and their bow was low in the water. An alternative 12-gun 24,000-ton design had many disadvantages as well; the extra two guns and a lower casemate had "hidden costs"—the two wing turrets planned would weaken the upper deck, be almost impossible to adequately protect against underwater attack, and force magazines to be located too close to the sides of the ship.
The US Navy continued to expand its battlefleet, laying down two ships in most subsequent years until 1920. The US continued to use reciprocating engines as an alternative to turbines until Nevada, laid down in 1912. In part, this reflected a cautious approach to battleship-building, and in part a preference for long endurance over high maximum speed owing to the US Navy's need to operate in the Pacific Ocean.
Japan
With their victory in the Russo-Japanese War of 1904–1905, the Japanese became concerned about the potential for conflict with the US. The theorist Satō Tetsutarō developed the doctrine that Japan should have a battlefleet at least 70% the size of that of the US. This would enable the Japanese navy to win two decisive battles: the first early in a prospective war against the US Pacific Fleet, and the second against the US Atlantic Fleet which would inevitably be dispatched as reinforcements.
Japan's first priorities were to refit the pre-dreadnoughts captured from Russia and to complete Satsuma and Aki. The Satsumas were designed before Dreadnought, but financial shortages resulting from the Russo-Japanese War delayed completion and resulted in their carrying a mixed armament, so they were known as "semi-dreadnoughts". These were followed by a modified Aki type: Kawachi and Settsu of the Kawachi class. These two ships were laid down in 1909 and completed in 1912. They were armed with twelve 12-inch guns, but of two different models with differing barrel lengths, meaning that they would have had difficulty controlling their fire at long ranges.
In other countries
Compared to the other major naval powers, France was slow to start building dreadnoughts, instead finishing the planned Danton class of pre-dreadnoughts, laying down five in 1907 and 1908. In September 1910 the first of the Courbet class was laid down, making France the eleventh nation to enter the dreadnought race. In the Navy Estimates of 1911, Paul Bénazet asserted that from 1896 to 1911, France dropped from being the world's second-largest naval power to fourth; he attributed this to problems in maintenance routines and neglect. The closer alliance with the United Kingdom made these reduced forces more than adequate for French needs.
The Italian Regia Marina had received proposals for an all-big-gun battleship from Cuniberti well before Dreadnought was launched, but it took until 1909 for Italy to lay down one of its own. The construction of Dante Alighieri was prompted by rumours of Austro-Hungarian dreadnought-building. A further five dreadnoughts of the Conte di Cavour and Andrea Doria classes followed as Italy sought to maintain its lead over Austria-Hungary. These ships remained the core of Italian naval strength until World War II. The subsequent Francesco Caracciolo class was suspended (and later cancelled) on the outbreak of World War I.
In January 1909 Austro-Hungarian admirals circulated a document calling for a fleet of four dreadnoughts. A constitutional crisis in 1909–1910 meant no construction could be approved. In spite of this, shipyards laid down two dreadnoughts on a speculative basis—due especially to the energetic manipulations of Rudolf Montecuccoli, Chief of the Austro-Hungarian Navy—later approved along with an additional two. The resulting ships, all Tegetthoff class, were to be accompanied by a further four ships of the Ersatz Monarch class, but these were cancelled on the Austro-Hungarian entry into World War I.
In June 1909 the Imperial Russian Navy began construction of four Gangut dreadnoughts for the Baltic Fleet, and in October 1911, three more dreadnoughts for the Black Sea Fleet were laid down. Of seven ships, only one was completed within four years of being laid down, and the Gangut ships were "obsolescent and outclassed" upon commissioning. Taking lessons from Tsushima, and influenced by Cuniberti, they ended up more closely resembling slower versions of Fisher's battlecruisers than Dreadnought, and they proved badly flawed due to their smaller guns and thinner armour when compared with contemporary dreadnoughts.
Spain commissioned three ships of the España class, with the first laid down in 1909. The three ships, the smallest dreadnoughts ever constructed, were built in Spain with British assistance; construction of the third ship, Jaime I, took nine years from laying down to completion because of the non-delivery of critical material, especially armament, from the United Kingdom.
Brazil was the third country to begin construction on a dreadnought. It ordered three dreadnoughts from the United Kingdom which would mount a heavier main battery than any other battleship afloat at the time (twelve 12-inch/45 calibre guns). Two were completed for Brazil: Minas Geraes was laid down by Armstrong (Elswick) on 17 April 1907, and its sister, São Paulo, followed thirteen days later at Vickers (Barrow). Although many naval journals in Europe and the US speculated that Brazil was really acting as a proxy for one of the naval powers and would hand the ships over as soon as they were complete, both were commissioned into the Brazilian Navy in 1910. The third ship, Rio de Janeiro, was nearly complete when rubber prices collapsed and Brazil could not afford her; she was sold to the Ottoman Empire in 1913.
The Netherlands intended by 1912 to replace its fleet of pre-dreadnought armoured ships with a modern fleet composed of dreadnoughts. After a Royal Commission proposed the purchase of nine dreadnoughts in August 1913, there were extensive debates over the need for such ships and—if they were necessary—over the actual number needed. These lasted into August 1914, when a bill authorizing funding for four dreadnoughts was finalized, but the outbreak of World War I halted the ambitious plan.
The Ottoman Empire ordered two dreadnoughts from British yards, Reshadiye in 1911 and Fatih Sultan Mehmed in 1914. Reshadiye was completed, and in 1913 the Ottoman Empire also acquired a nearly-completed dreadnought from Brazil, which became Sultan Osman I. At the start of World War I, Britain seized the two completed ships for the Royal Navy; Reshadiye and Sultan Osman I became Erin and Agincourt respectively, while Fatih Sultan Mehmed was scrapped. This greatly offended the Ottoman Empire. When two German warships, the battlecruiser Goeben and the light cruiser Breslau, became trapped in Ottoman territory after the start of the war, Germany "gave" them to the Ottomans (they remained German-crewed and under German orders). The British seizure and the German gift proved important factors in the Ottoman Empire joining the Central Powers in October 1914.
Greece had ordered the dreadnought Salamis from Germany, but work stopped on the outbreak of war. The main armament for the Greek ship had been ordered in the United States, and the guns consequently equipped a class of British monitors. In 1914 Greece purchased two pre-dreadnoughts from the United States Navy, renaming them Kilkis and Lemnos in Royal Hellenic Navy service.
The Conservative Party-dominated House of Commons of Canada passed a bill to fund the purchase of three British dreadnoughts, at a cost of $35 million, for the Canadian Naval Service, but the measure was defeated in the Liberal Party-dominated Senate of Canada. As a result, the country's navy was unprepared for World War I.
Super-dreadnoughts
Within five years of the commissioning of Dreadnought, a new generation of more powerful "super-dreadnoughts" was being built. The British Orion class jumped an unprecedented 2,000 tons in displacement, introduced the heavier 13.5-inch (343 mm) gun, and placed all the main armament on the centreline (hence with some turrets superfiring over others). In the four years between Dreadnought and Orion, displacement had increased by 25%, and weight of broadside (the weight of ammunition that can be fired on a single bearing in one salvo) had doubled.
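The doubling of broadside weight can be checked with commonly cited approximate shell weights; the 850 lb and 1,250 lb figures in the Python sketch below are approximations used for illustration, not exact service values.

# Dreadnought: eight of her ten 12-inch guns could bear on the broadside.
dreadnought_broadside_lb = 8 * 850     # 6,800 lb per salvo
# Orion: all ten 13.5-inch guns, on the centreline, could bear.
orion_broadside_lb = 10 * 1250         # 12,500 lb per salvo

print(orion_broadside_lb / dreadnought_broadside_lb)  # ~1.8, roughly a doubling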
British super-dreadnoughts were joined by those built by other nations. The US Navy's New York class, laid down in 1911, carried 14-inch (356 mm) guns in response to the British move, and this calibre became standard. In Japan, two Fusō-class super-dreadnoughts were laid down in 1912, followed by the two Ise-class ships in 1914, with both classes carrying twelve 14-inch (356 mm) guns. In 1917, the Nagato class was ordered, the first super-dreadnoughts to mount 16-inch guns, making them arguably the most powerful warships in the world. All were increasingly built from Japanese rather than imported components. In France, the Courbets were followed by three super-dreadnoughts of the Bretagne class, carrying 13.4-inch (340 mm) guns; another five Normandie-class ships were cancelled on the outbreak of World War I. The aforementioned Brazilian dreadnoughts sparked a small-scale arms race in South America, as Argentina and Chile each ordered two super-dreadnoughts, from the US and the United Kingdom respectively. Argentina's Rivadavia and Moreno had a main armament equalling that of their Brazilian counterparts, but were much heavier and carried thicker armour. The British purchased both of Chile's battleships on the outbreak of the First World War; one, Almirante Latorre, was later repurchased by Chile.
Later British super-dreadnoughts, principally the Queen Elizabeth class, dispensed with the midships turret, freeing weight and volume for larger, oil-fired boilers. The new 15-inch (381 mm) gun gave greater firepower in spite of the loss of a turret, and the ships had a thicker armour belt and improved underwater protection. The class had a design speed of 24 knots, and they were considered the first fast battleships.
The design weakness of super-dreadnoughts, which distinguished them from post-1918 vessels, was armour disposition. Their design emphasized the vertical armour protection needed in short-range battles, where shells would strike the sides of the ship, and assumed that an outer plate of armour would detonate any incoming shells, so that crucial internal structures such as turret bases needed only light protection against splinters. This was in spite of the ability to engage the enemy at ranges where the shells would descend at angles of up to thirty degrees ("plunging fire") and so could pierce the deck behind the outer plate and strike the internal structures directly. Post-war designs typically had 5 to 6 inches (130 to 150 mm) of deck armour laid across the top of single, much thicker vertical plates to defend against this, and the concept of a zone of immunity became a major part of the thinking behind battleship design. Lack of underwater protection was also a weakness of these pre-World War I designs, which originated before the use of torpedoes became widespread.
The United States Navy designed its 'Standard-type battleships', beginning with the Nevada class, with long-range engagements and plunging fire in mind; the first of these was laid down in 1912, four years before the Battle of Jutland taught the dangers of long-range fire to European navies. Important features of the standard battleships were "all or nothing" armour and "raft" construction—based on a design philosophy which held that only those parts of the ship worth giving the thickest possible protection were worth armouring at all, and that the resulting armoured "raft" should contain enough reserve buoyancy to keep the entire ship afloat in the event the unarmoured bow and stern were thoroughly punctured and flooded. This design proved its worth in the 1942 Naval Battle of Guadalcanal, when an ill-timed turn by South Dakota silhouetted her to the Japanese guns. In spite of receiving 26 hits, her armoured raft remained untouched and she remained both afloat and operational at the end of the action.
In action
The First World War saw no decisive engagements between battlefleets to compare with Tsushima. The role of battleships was marginal to the land fighting in France and Russia; it was equally marginal to the German war on commerce (Handelskrieg) and the Allied blockade.
By virtue of geography, the Royal Navy could keep the German High Seas Fleet confined to the North Sea with relative ease, but was unable to break the German superiority in the Baltic Sea. Both sides were aware, because of the greater number of British dreadnoughts, that a full fleet engagement would likely result in a British victory. The German strategy was, therefore, to try to provoke an engagement on favourable terms: either inducing a part of the Grand Fleet to enter battle alone, or to fight a pitched battle near the German coast, where friendly minefields, torpedo boats, and submarines could even the odds.
The first two years of war saw conflict in the North Sea limited to skirmishes by battlecruisers at the Battle of Heligoland Bight and Battle of Dogger Bank, and raids on the English coast. In May 1916, a further attempt to draw British ships into battle on favourable terms resulted in a clash of the battlefleets on 31 May to 1 June in the indecisive Battle of Jutland.
In the other naval theatres, there were no decisive pitched battles. In the Black Sea, Russian and Turkish battleships skirmished, but nothing more. In the Baltic Sea, action was largely limited to convoy raiding and the laying of defensive minefields. The Adriatic was in a sense the mirror of the North Sea: the Austro-Hungarian dreadnought fleet was confined to the Adriatic Sea by the Italian, British and French blockade but bombarded the Italians on several occasions, notably at Ancona in 1915. And in the Mediterranean, the most important use of battleships was in support of the amphibious assault at Gallipoli.
The course of the war illustrated the vulnerability of battleships to cheaper weapons. In September 1914, the U-boat threat to capital ships was demonstrated by successful attacks on British cruisers, including the sinking of three elderly British armoured cruisers by the German submarine U-9 in less than an hour. Mines continued to prove a threat when, a month later, the recently commissioned British super-dreadnought Audacious struck one and sank. By the end of October, British strategy and tactics in the North Sea had changed to reduce the risk of U-boat attack. Jutland was the only major clash of dreadnought battleship fleets in history, and the German plan for the battle relied on U-boat attacks on the British fleet; the escape of the German fleet from the superior British firepower was effected by the German cruisers and destroyers closing on the British battleships, causing them to turn away to avoid the threat of torpedo attack. Further near-misses from submarine attacks on battleships led to growing concern in the Royal Navy about the vulnerability of battleships.
For its part, the German High Seas Fleet determined not to engage the British without the assistance of submarines; and since the submarines were needed more for commerce raiding, the fleet stayed in port for much of the remainder of the war. Other theatres showed the role of small craft in damaging or destroying dreadnoughts: the two Austro-Hungarian dreadnoughts lost in 1918 were casualties of Italian torpedo boats and frogmen.
Battleship building from 1914 onwards
World War I
The outbreak of World War I largely halted the dreadnought arms race as funds and technical resources were diverted to more pressing priorities. The foundries which produced battleship guns were dedicated instead to the production of land-based artillery, and shipyards were flooded with orders for small ships. The weaker naval powers engaged in the Great War—France, Austria-Hungary, Italy and Russia—suspended their battleship programmes entirely. The United Kingdom and Germany continued building battleships and battlecruisers but at a reduced pace.
In the United Kingdom, Fisher returned to his old post as First Sea Lord; he had been created 1st Baron Fisher in 1909, taking the motto Fear God and dread nought. This, combined with a government moratorium on battleship building, meant a renewed focus on the battlecruiser. Fisher resigned in 1915 following arguments about the Gallipoli Campaign with the First Lord of the Admiralty, Winston Churchill.
The final units of the Revenge and Queen Elizabeth classes were completed, though the last two battleships of the Revenge class were re-ordered as battlecruisers of the Renown class. Fisher followed these ships with the even more extreme Courageous class: very fast and heavily armed ships with minimal armour, called 'large light cruisers' to get around a Cabinet ruling against new capital ships. Fisher's mania for speed culminated in his suggestion for HMS Incomparable, a mammoth, lightly armoured battlecruiser.
In Germany, two units of the pre-war Bayern class were gradually completed, but the other two laid down were still unfinished by the end of the war. The battlecruiser Hindenburg, also laid down before the start of the war, was completed in 1917. The Mackensen class, designed in 1914–1915, were begun but never finished.
Post-war
In spite of the lull in battleship building during the war, the years 1919–1922 saw the threat of a renewed naval arms race between the United Kingdom, Japan, and the US. The Battle of Jutland exerted a huge influence over the designs produced in this period. The first ships which fit into this picture were the British Admiral class, designed in 1916. Jutland finally persuaded the Admiralty that lightly armoured battlecruisers were too vulnerable, and so the final design of the Admirals incorporated much-increased armour, raising displacement to 42,000 tons. The initiative in creating the new arms race lay with the Japanese and American navies. The United States Naval Appropriations Act of 1916 authorized the construction of 156 new ships, including ten battleships and six battlecruisers; for the first time, the United States Navy was threatening the British global lead. This programme was started slowly (in part because of a desire to learn lessons from Jutland), and was never fulfilled entirely. The new American ships (the Colorado-class battleships, South Dakota-class battleships and Lexington-class battlecruisers) took a qualitative step beyond the British Queen Elizabeth and Admiral classes by mounting 16-inch guns.
At the same time, the Imperial Japanese Navy was finally gaining authorization for its 'eight-eight battlefleet'. The Nagato class, authorized in 1916, carried eight 16-inch guns like their American counterparts. The next year's naval bill authorized two more battleships and two more battlecruisers. The battleships, which became the Tosa class, were to carry ten 16-inch guns. The battlecruisers, the Amagi class, also carried ten 16-inch guns and were designed to be capable of 30 knots, able to beat both the British Admiral class and the US Navy's Lexington-class battlecruisers.
Matters took a further turn for the worse in 1919 when Woodrow Wilson proposed a further expansion of the United States Navy, asking for funds for an additional ten battleships and six battlecruisers in addition to the completion of the 1916 programme (much of which had not yet been started). In response, the Diet of Japan finally agreed to the completion of the 'eight-eight fleet', incorporating a further four battleships. These ships, the Kii class, would displace 43,000 tons; the next design, the Number 13 class, would have carried 18-inch guns. Many in the Japanese navy were still dissatisfied, calling for an 'eight-eight-eight' fleet with 24 modern battleships and battlecruisers.
The British, impoverished by World War I, faced the prospect of slipping behind the US and Japan. No ships had been begun since the Admiral class, and of those only Hood had been completed. A June 1919 Admiralty plan outlined a post-war fleet of 33 battleships and eight battlecruisers, which would have cost £171 million a year to build and sustain; only £84 million was available. The Admiralty then demanded, as an absolute minimum, a further eight battleships. These would have been the G3 battlecruisers, with 16-inch guns and high speed, and the N3-class battleships, with 18-inch guns. Its navy severely limited by the Treaty of Versailles, Germany did not participate in this three-way naval building competition. Most of the German dreadnought fleet was scuttled at Scapa Flow by its crews in 1919; the remainder were handed over as war prizes.
The major naval powers avoided the cripplingly expensive expansion programmes by negotiating the Washington Naval Treaty in 1922. The Treaty laid out a list of ships, including most of the older dreadnoughts and almost all the newer ships under construction, which were to be scrapped or otherwise put out of use. It furthermore declared a 'building holiday' during which no new battleships or battlecruisers were to be laid down, save for the British Nelson class. The ships which survived the treaty, including the most modern super-dreadnoughts of all three navies, formed the bulk of international capital ship strength through the interwar period and, with some modernisation, into World War II. The ships built under the terms of the Washington Treaty (and subsequently the London Treaties in 1930 and 1936) to replace outdated vessels were known as treaty battleships.
From this point on, the term 'dreadnought' fell out of general use. Most pre-dreadnought battleships were scrapped or hulked after World War I, so the distinction from earlier battleships was no longer needed.
| Technology | Naval warfare | null |
7319263 | https://en.wikipedia.org/wiki/Entropy%20%28energy%20dispersal%29 | Entropy (energy dispersal) | In thermodynamics, the interpretation of entropy as a measure of energy dispersal has been developed against the background of the traditional view, introduced by Ludwig Boltzmann, of entropy as a quantitative measure of disorder. The energy dispersal approach avoids the ambiguous term 'disorder'. An early advocate of the energy dispersal conception was Edward A. Guggenheim, who in 1949 used the word 'spread'.
In this alternative approach, entropy is a measure of energy dispersal or spread at a specific temperature. Changes in entropy can be quantitatively related to the distribution or the spreading out of the energy of a thermodynamic system, divided by its temperature.
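Stated as an equation, this is the classical thermodynamic definition of entropy change; in the LaTeX below, $\delta q_{\mathrm{rev}}$ is the heat spread reversibly into the system at absolute temperature $T$:

\[
  \Delta S = \int \frac{\delta q_{\mathrm{rev}}}{T},
  \qquad\text{or simply}\qquad
  \Delta S = \frac{q_{\mathrm{rev}}}{T}
  \quad\text{at constant temperature.}
\]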
Some educators propose that the energy dispersal idea is easier to understand than the traditional approach. The concept has been used to facilitate teaching entropy to students beginning university chemistry and biology.
Comparisons with traditional approach
The term "entropy" has been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels.
Such descriptions have tended to be used together with commonly used terms such as disorder and randomness, which are ambiguous and whose everyday meaning is the opposite of what they are intended to mean in thermodynamics. Not only does this situation cause confusion, it also hampers the teaching of thermodynamics. Students were being asked to grasp meanings directly contradicting their normal usage, with equilibrium being equated to "perfect internal disorder" and the mixing of milk into coffee (a change from apparent chaos to uniformity) being described as a transition from an ordered state into a disordered state.
The description of entropy as the amount of "mixedupness" or "disorder," as well as the abstract nature of the statistical mechanics grounding this notion, can lead to confusion and considerable difficulty for those beginning the subject. Even though courses emphasised microstates and energy levels, most students could not get beyond simplistic notions of randomness or disorder. Many of those who learned by practising calculations did not understand well the intrinsic meanings of equations, and there was a need for qualitative explanations of thermodynamic relationships.
Arieh Ben-Naim recommends abandonment of the word entropy, rejecting both the 'dispersal' and the 'disorder' interpretations; instead he proposes the notion of "missing information" about microstates as considered in statistical mechanics, which he regards as commonsensical.
Description
Increase of entropy in a thermodynamic process can be described in terms of "energy dispersal" and the "spreading of energy," while avoiding mention of "disorder" except when explaining misconceptions. Explanations are recast in terms of where and how the energy of a system disperses or spreads, so as to emphasise the underlying qualitative meaning.
In this approach, the second law of thermodynamics is introduced as "Energy spontaneously disperses from being localized to becoming spread out if it is not hindered from doing so," often in the context of common experiences such as a rock falling, a hot frying pan cooling down, iron rusting, air leaving a punctured tyre and ice melting in a warm room. Entropy is then depicted as a sophisticated kind of "before and after" yardstick — measuring how much energy is spread out over time as a result of a process such as heating a system, or how widely spread out the energy is after something happens in comparison with its previous state, in a process such as gas expansion or fluids mixing (at a constant temperature). The equations are explored with reference to the common experiences, with emphasis that in chemistry the energy that entropy measures as dispersing is the internal energy of molecules.
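The "before and after" yardstick can be made concrete with the classical quotient of heat over absolute temperature; a minimal Python sketch of the hot-frying-pan example, with made-up numbers:

```python
def entropy_change_of_transfer(q_joules: float, t_hot: float, t_cold: float) -> float:
    """Net entropy change when heat q leaks from a hot body at t_hot (K)
    to cooler surroundings at t_cold (K), treating both as large enough
    that their temperatures stay roughly constant during the transfer."""
    ds_hot = -q_joules / t_hot    # the pan loses entropy
    ds_cold = q_joules / t_cold   # the room gains more entropy
    return ds_hot + ds_cold

# Hypothetical numbers: 10 kJ leaking from a 400 K pan into a 300 K room.
print(f"Net Delta S = {entropy_change_of_transfer(10_000, 400.0, 300.0):.1f} J/K")
# Positive (about 8.3 J/K): the same energy is more 'spread out' at the lower temperature.
```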
The statistical interpretation is related to quantum mechanics in describing the way that energy is distributed (quantized) amongst molecules on specific energy levels, with all the energy of the macrostate always in only one microstate at one instant. Entropy is described as measuring the energy dispersal for a system by the number of accessible microstates, the number of different arrangements of all its energy at the next instant. Thus, an increase in entropy means a greater number of microstates for the final state than for the initial state, and hence more possible arrangements of a system's total energy at any one instant. Here, the greater 'dispersal of the total energy of a system' means the existence of many possibilities.
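In code, this counting picture amounts to Boltzmann's logarithmic law; a minimal sketch, with invented microstate counts used purely for illustration:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def boltzmann_entropy(num_microstates: int) -> float:
    """Entropy S = k_B * ln(W) for a macrostate with W accessible microstates."""
    return K_B * math.log(num_microstates)

# Hypothetical counts: after a process (e.g. a gas expanding into a larger
# volume), the same total energy can be arranged in many more ways.
w_initial = 10**20
w_final = 10**24

delta_s = boltzmann_entropy(w_final) - boltzmann_entropy(w_initial)
print(f"Delta S = {delta_s:.3e} J/K")  # positive: the energy is more dispersed
```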
Continuous molecular movement and collisions, visualised as being like bouncing balls blown by air in a lottery machine, can then lead on to showing the possibilities of many Boltzmann distributions and a continually changing "distribution of the instant", and on to the idea that when the system changes, dynamic molecules will have a greater number of accessible microstates. In this approach, all everyday spontaneous physical happenings and chemical reactions are depicted as involving some type of energy flow from being localized or concentrated to becoming spread out to a larger space, always to a state with a greater number of microstates.
This approach provides a good basis for understanding the conventional approach, except in very complex cases where the qualitative relation of energy dispersal to entropy change can be so inextricably obscured that it is moot. Thus in situations such as the entropy of mixing, when the two or more different substances being mixed are at the same temperature and pressure so that there is no net exchange of heat or work, the entropy increase will be due to the literal spreading out of the motional energy of each substance in the larger combined final volume. Each component's energetic molecules become more separated from one another than in the pure state, where they collided only with identical adjacent molecules, leading to an increase in the number of accessible microstates.
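For this constant-temperature mixing case, the entropy increase can be computed directly from each gas spreading into the combined volume; a short sketch under ideal-gas assumptions, with hypothetical amounts and volumes:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def entropy_of_mixing(n1, v1, n2, v2):
    """Ideal-gas entropy increase when two different gases at the same
    temperature and pressure mix: each gas spreads its motional energy
    over the combined volume, Delta S = n1 R ln(V/V1) + n2 R ln(V/V2)."""
    v_total = v1 + v2
    return n1 * R * math.log(v_total / v1) + n2 * R * math.log(v_total / v2)

# Hypothetical example: one mole of each gas, equal initial volumes.
print(f"Delta S = {entropy_of_mixing(1.0, 1.0, 1.0, 1.0):.2f} J/K")
# 2 R ln 2, about 11.5 J/K: positive even though no heat is exchanged.
```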
Current adoption
Variants of the energy dispersal approach have been adopted in a number of undergraduate chemistry texts, mainly in the United States. One respected text states:
The concept of the number of microstates makes quantitative the ill-defined qualitative concepts of 'disorder' and the 'dispersal' of matter and energy that are used widely to introduce the concept of entropy: a more 'disorderly' distribution of energy and matter corresponds to a greater number of micro-states associated with the same total energy. — Atkins & de Paula (2006)
History
The concept of 'dissipation of energy' was used in Lord Kelvin's 1852 article "On a Universal Tendency in Nature to the Dissipation of Mechanical Energy." He distinguished between two types or "stores" of mechanical energy: "statical" and "dynamical." He discussed how these two types of energy can change from one form to the other during a thermodynamic transformation. When heat is created by any irreversible process (such as friction), or when heat is diffused by conduction, mechanical energy is dissipated, and it is impossible to restore the initial state.
Using the word 'spread', an early advocate of the energy dispersal concept was Edward Armand Guggenheim. In the mid-1950s, with the development of quantum theory, researchers began speaking about entropy changes in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels, such as by the reactants and products of a chemical reaction.
In 1984, the Oxford physical chemist Peter Atkins, in his book The Second Law, written for laypersons, presented a nonmathematical interpretation of what he called the "infinitely incomprehensible entropy" in simple terms, describing the Second Law of thermodynamics as "energy tends to disperse". His analogies included an imaginary intelligent being called "Boltzmann's Demon," who runs around reorganizing and dispersing energy, in order to show how the W in Boltzmann's entropy formula relates to energy dispersion. This dispersion is transmitted via atomic vibrations and collisions. Atkins wrote: "each atom carries kinetic energy, and the spreading of the atoms spreads the energy…the Boltzmann equation therefore captures the aspect of dispersal: the dispersal of the entities that are carrying the energy."
In 1997, John Wrigglesworth described spatial particle distributions as represented by distributions of energy states. According to the second law of thermodynamics, isolated systems will tend to redistribute the energy of the system into a more probable arrangement or a maximum probability energy distribution, i.e. from that of being concentrated to that of being spread out. By virtue of the First law of thermodynamics, the total energy does not change; instead, the energy tends to disperse over the space to which it has access. In his 1999 Statistical Thermodynamics, M.C. Gupta defined entropy as a function that measures how energy disperses when a system changes from one state to another. Other authors defining entropy in a way that embodies energy dispersal are Cecie Starr and Andrew Scott.
In a 1996 article, the physicist Harvey S. Leff set out what he called "the spreading and sharing of energy." Another physicist, Daniel F. Styer, published an article in 2000 showing that "entropy as disorder" was inadequate. In an article published in the Journal of Chemical Education in 2002, Frank L. Lambert argued that portraying entropy as "disorder" is confusing and should be abandoned. He has gone on to develop detailed resources for chemistry instructors, equating entropy increase with the spontaneous dispersal of energy, namely how much energy is spread out in a process, or how widely dispersed it becomes, at a specific temperature.
| Physical sciences | Statistical mechanics | Physics |
9503135 | https://en.wikipedia.org/wiki/Polylepis | Polylepis | Polylepis is a genus comprising 28 recognised shrub and tree species that are endemic to the mid- and high-elevation regions of the tropical Andes. This group is unique in the rose family in that it is predominantly wind-pollinated. They are usually gnarled in shape, but in certain areas some trees are 15–20 m tall and have 2 m-thick trunks. The foliage is evergreen, with dense small leaves, and often having large amounts of dead twigs hanging down from the underside of the canopy. The name Polylepis is, in fact, derived from the Greek words poly (many) plus lepis (layers), referring to the shredding, multi-layered bark that is common to all species of the genus. The bark is thick and rough and densely layered for protection against low temperatures.
Some species of Polylepis form woodlands growing well above the normal tree line within grass and scrub associations at elevations over 5000 m, which makes Polylepis appear to be the highest naturally occurring arboraceous angiosperm genus in the world.
Classification/taxonomy
The genus Polylepis contains about 28 species distributed across the Andes. It is in the rose family, Rosaceae. The genus belongs to the tribe Sanguisorbeae, which mainly comprises herbs and small shrubs. Although the relationship of Polylepis to other genera of Sanguisorbeae is largely unknown, the analysis of Torsten Eriksson et al. (2003) showed evidence of a close relationship between Polylepis and Acaena, which shows tendencies towards having fused stipular sheaths, reddish, flaking-off bark, and axillary, somewhat pendant inflorescences, features otherwise characteristic of Polylepis. Several characteristics are important taxonomically for distinguishing between species of Polylepis, for example: (1) the amount of leaf congestion, (2) the presence or absence of spurs and their size and vestiture, (3) the presence or absence and type of trichomes, and (4) the size, shape, thickness and vestiture of the leaflets. The most important taxonomic character, however, is the leaflets.
Studies suggest that repeated fragmentation and reconnection of páramo vegetation, caused by the Pleistocene climatic fluctuations, had a strong influence on the evolution and speed of speciation in the genus Polylepis as well as the páramo biota as a whole.
Species
Accepted species include:
Habitat and distribution
Tree species in the genus Polylepis are confined to the high tropical South American Andes Mountains, with the most abundant concentrations of Polylepis ranging from northern Venezuela to northern Chile and adjacent Argentina. One known group of extra-tropical populations of Polylepis is distributed in the mountains of northwestern Argentina. Most species of Polylepis grow best at high elevations between 3500 and 5000 meters. However, there are occurrences of species at altitudes as low as 1800 meters. These low-altitude species are mixed with montane forest, which indicates that components of the genus could have been present in western South America during the Miocene Epoch or even earlier. It is extremely rare for tree species to live at such altitudes, making Polylepis one of the highest naturally occurring trees along with the conifers of the Himalayan Mountains. Polylepis racemosa grows as shrubby trees on steep, rocky slopes above cloud forest. Polylepis tarapacana reaches 4,800 m, the highest elevation of tree growth in the world.
There is much debate on whether Polylepis was forced into such extreme high-elevation habitats by human habitat destruction. The physiological tolerances for growth at these elevations are subject to considerable debate among scientists, but evidence indicates that even before severe decimation by man, high-elevation trees were limited in their distribution by the presence of specialized microhabitats. Due to the harsh environment in which many species of Polylepis grow, the growth of the trees' stems and branches is generally contorted. This abnormal growth is often associated with windy, cold or arid habitats. The climate of the South American Andes changes drastically throughout the region, creating many microhabitats. Overall, the climate consists of short southern summers, when temperatures are warm and rainfall is high, and long winters, when temperatures are low and rainfall is limited. The temperature and amount of rainfall also depend on the side of the mountain (eastern or western), elevation and latitude.
Morphological characteristics
Bark: The bark of Polylepis consists of numerous layers of thin, dark red exfoliating sheets. In some cases, the layered bark can be more than an inch thick. A majority of the larger branches have similar shredding bark. It would seem that the bark serves as insulation from both the nightly frosts and the intense daytime irradiation. The thick bark of Polylepis also serves an important function as protection against fire. It is thought to originally have been a protection against epiphytic mosses, whose thick masses may damage trees by adding weight to the branches and providing a suitable environment for fungi which attack the trees.
Branching pattern and leaf arrangement: Polylepis trees tend to have twisted, crooked stems and branches with repeated sympodial branching. Contorted growth is often associated with windy, cold, or arid habitats. The leaves are generally congested along the branch tips often at the end of long, naked branch segments.
Stipule sheath: Each leaf has a pair of stipules fused around the branch forming a sheath. The crowding of the leaves results in a pattern of stacked, inverted cones due to the overlapping of the stipule sheaths. On the top of the sheaths on either side of the petiole there are often projections, or spurs. The presence or absence of these spurs and their size are important taxonomic characteristics.
Leaves and leaflets: All species of Polylepis have compound, imparipinnate leaves, but the number of pairs of leaflets varies within and between species. The arrangement of the leaflets and the position from the terminal leaflet of the largest pair of leaflets determine the shape of the leaf. The outline of the leaf is usually rhombic in species with one pair of leaflets. Depending on the position of the largest pair, the leaf can be trullate to obtrullate in taxa with more than one leaflet pair.
Leaf anatomy: The leaves of all species are built on a dorsiventral arrangement of cells, with the epidermis and palisade layer on the adaxial surface and the spongy tissue on the abaxial surface.
Reproduction
The pollen of Polylepis can be described as monads, isopolar, and more or less spheroidal to slightly oblate in shape. The grains have both an elongated and a rounded aperture, and the limits of the endoaperture (the inner openings of the compound aperture) are obscure. The elongated part of the aperture is completely covered by a pontoperculum.
The fruits of Polylepis are essentially achenes composed of the floral cup fused to the ovary. Fruits of all species are indehiscent (they do not open at maturity) and one-seeded. The surface of the fruit of different species has ridges, knobs, spines or wings. There are no definite sites for the placement of these different types of protrusions, which appear irregularly over the surface. The type of protrusion, wings versus spines, or knobs versus wings, is useful for distinguishing between species.
The flowers of all species of the genus are borne on inflorescences. In most cases the inflorescences are long enough to hang like a pendant, but in the westernmost populations of P. tomentella and in at least one population of P. pepei, the inflorescence is so reduced that it remains almost hidden in the leaf axil. In the species with pendant inflorescences, the flowers are borne regularly along the rachis or clustered toward the terminal end. The flowers themselves are reduced and have many features associated with wind pollination. These include: the absence of petals, green rather than colored sepals, an absence of scent or nectar, numerous anthers with long filaments, abundant, dry pollen, a large, spreading, finely fringed stigma, compound pinnate leaves and the growth of trees in stands.
Pollination and dispersal
Wind pollination was a key evolutionary adaptation to the highlands, where insects are much scarcer than in warmer climates. Because the genus relies on wind for pollination, its species distribution and phylogeny show different patterns from those of insect-pollinated genera. Wind pollination allows genetic information to travel large distances and overcome reproductive barriers.
The fruits of all species must be wind dispersed because members of the genus are trees and are thus too tall for animals (presumably mammals) to brush against on the ground. However, the elaboration of spines on the fruits of many taxa would argue for animal dispersal although wind dispersal undoubtedly predominates in P. australis. Numerous birds forage or live in Polylepis trees and it is possible that they disperse fruits caught in their feathers.
Ecology
Mountain forest ecosystems have drastically changed due to human disruption such as cutting, burning and grazing, which causes fragmentation of the forest landscape. Polylepis exhibits some unique autoecological (population ecology) and synecological relationships. Since the trees are located at high altitudes, they are equipped with specializations that help them withstand the harsh conditions. Their habitats are semiarid, with a mean annual rainfall between 200 and 500 mm. Tropical habitats found above 3600 m are subject to extreme diurnal changes. At midday, temperatures may reach around 10–12 °C (or higher), yet the soil below the top 30 cm maintains a nearly constant temperature of about 2–5 °C (or lower) all year. Thus plants must stay active throughout the year and do not become dormant. Given these harsh circumstances, the growth of trees in such areas would seem impossible. The reasons for Polylepis' ability to inhabit such conditions have been studied by many. Carl Troll, for example, considered Polylepis to be a distinct type of vegetation, and he claimed one of the reasons for its survival is the presence of microclimatic phenomena, such as the formation of cloud layers on slopes and along low drainage areas, which prevent nighttime freezes and produce what he called "lower elevation" conditions. Another study, by Hoch and Körner, showed that Polylepis grows slowly, making it a weak competitor. Therefore, if temperatures become warmer and more humid, Polylepis tends to lose out to more vigorous species.
Conservation issues
Polylepis forests exist primarily as small, widely isolated fragments, which are being rapidly depleted by rural communities. Remaining Polylepis forests are used for firewood and building material and provide protection against erosion and habitats for endangered animals. In some countries, conservation and reforestation measures are underway.
Human use
Since Polylepis inhabits extremely high elevations, it has played an important role in the culture of various Andean Indigenous groups by providing building material and firewood. The woodlands themselves constitute a distinctive habitat for other organisms and support endemic fauna. The trees are also used as decoration, planted in front of buildings and houses. As people have expanded their reach, Polylepis has been subjected to harvesting for firewood, the clearing of woodlands for pastureland, and the destruction of seedlings by domesticated animals. Few trees are now found growing on level ground; most are consequently located on "inaccessible" slopes.
| Biology and health sciences | Rosales | Plants |
606970 | https://en.wikipedia.org/wiki/Minimal%20Supersymmetric%20Standard%20Model | Minimal Supersymmetric Standard Model | The Minimal Supersymmetric Standard Model (MSSM) is an extension to the Standard Model that realizes supersymmetry. The MSSM is the minimal supersymmetric model in that it considers only "the [minimum] number of new particle states and new interactions consistent with reality". Supersymmetry pairs bosons with fermions, so every Standard Model particle has a (yet undiscovered) superpartner. If discovered, such superparticles could be candidates for dark matter, and could provide evidence for grand unification or the viability of string theory. The failure to find evidence for the MSSM at the Large Hadron Collider has strengthened an inclination to abandon it.
Background
The MSSM was originally proposed in 1981 to stabilize the weak scale, solving the hierarchy problem. The Higgs boson mass of the Standard Model is unstable to quantum corrections, which would naturally drive the weak scale far above its observed value. In the MSSM, the Higgs boson has a fermionic superpartner, the Higgsino, that has the same mass as it would if supersymmetry were an exact symmetry. Because fermion masses are radiatively stable, the Higgs mass inherits this stability. However, in the MSSM there is a need for more than one Higgs field, as described below.
The only unambiguous way to claim discovery of supersymmetry is to produce superparticles in the laboratory. Because superparticles are expected to be 100 to 1000 times heavier than the proton, producing them requires a huge amount of energy that can only be achieved at particle accelerators. The Tevatron was actively looking for evidence of the production of supersymmetric particles before it was shut down on 30 September 2011. Most physicists believe that supersymmetry must be discovered at the LHC if it is responsible for stabilizing the weak scale. The superpartners of the Standard Model particles fall into five classes: squarks, gluinos, charginos, neutralinos, and sleptons. These superparticles have their interactions and subsequent decays described by the MSSM, and each has characteristic signatures.
The MSSM imposes R-parity to explain the stability of the proton. It adds supersymmetry breaking by introducing explicit soft supersymmetry-breaking operators into the Lagrangian; this breaking is communicated to the MSSM by some unknown (and unspecified) dynamics. This means that there are 120 new parameters in the MSSM. Most of these parameters lead to unacceptable phenomenology, such as large flavor-changing neutral currents or large electric dipole moments for the neutron and electron. To avoid these problems, the MSSM takes all of the soft supersymmetry breaking to be diagonal in flavor space and all of the new CP-violating phases to vanish.
Theoretical motivations
There are three principal motivations for the MSSM over other theoretical extensions of the Standard Model, namely:
Naturalness
Gauge coupling unification
Dark Matter
These motivations follow without much effort, and they are the primary reasons why the MSSM is the leading candidate for a new theory to be discovered at collider experiments such as the Tevatron or the LHC.
Naturalness
The original motivation for proposing the MSSM was to stabilize the Higgs mass to radiative corrections that are quadratically divergent in the Standard Model (the hierarchy problem). In supersymmetric models, scalars are related to fermions and have the same mass. Since fermion masses are logarithmically divergent, scalar masses inherit the same radiative stability. The Higgs vacuum expectation value (VEV) is related to the negative scalar mass in the Lagrangian. In order for the radiative corrections to the Higgs mass to not be dramatically larger than the actual value, the mass of the superpartners of the Standard Model should not be significantly heavier than the Higgs VEV – roughly 100 GeV. In 2012, the Higgs particle was discovered at the LHC, and its mass was found to be 125–126 GeV.
Gauge-coupling unification
If the superpartners of the Standard Model are near the TeV scale, then the measured gauge couplings of the three gauge groups unify at high energies. The one-loop beta functions for the MSSM gauge couplings are given by

dg_a/dt = (b_a/16π²) g_a³, with t = ln μ and (b₁, b₂, b₃) = (33/5, 1, −3),

where g₁ is measured in SU(5) normalization—a factor of √(5/3) different than the Standard Model's normalization and the normalization predicted by Georgi–Glashow SU(5).
The condition for gauge coupling unification at one loop is that the following expression is satisfied:

(α₂⁻¹ − α₃⁻¹)/(α₁⁻¹ − α₂⁻¹) = (b₂ − b₃)/(b₁ − b₂) = 5/7.

Remarkably, this is precisely satisfied to within experimental errors in the measured values of the couplings at the Z mass. There are two-loop corrections and both TeV-scale and GUT-scale threshold corrections that alter this condition on gauge coupling unification, and the results of more extensive calculations reveal that gauge coupling unification occurs to an accuracy of 1%, though this is about 3 standard deviations from the theoretical expectations.
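As a rough numerical sketch of the one-loop statement, the running can be checked in a few lines of Python; the inputs α₁⁻¹ ≈ 59.0, α₂⁻¹ ≈ 29.6, α₃⁻¹ ≈ 8.5 at the Z mass are approximate values (α₁ in SU(5) normalization), and two-loop and threshold effects are ignored:

```python
import math

M_Z = 91.19  # GeV

# Approximate inverse couplings at M_Z (alpha_1 in SU(5) normalization).
alpha_inv_mz = {1: 59.0, 2: 29.6, 3: 8.5}
b_mssm = {1: 33.0 / 5.0, 2: 1.0, 3: -3.0}  # one-loop MSSM coefficients

def alpha_inv(a: int, mu: float) -> float:
    """One-loop running: alpha_a^-1(mu) = alpha_a^-1(M_Z) - (b_a/2pi) ln(mu/M_Z)."""
    return alpha_inv_mz[a] - b_mssm[a] / (2 * math.pi) * math.log(mu / M_Z)

# Scale where alpha_1 meets alpha_2, solved from the straight-line running.
ln_ratio = 2 * math.pi * (alpha_inv_mz[1] - alpha_inv_mz[2]) / (b_mssm[1] - b_mssm[2])
mu_gut = M_Z * math.exp(ln_ratio)
print(f"alpha_1 = alpha_2 at mu ~ {mu_gut:.2e} GeV")          # ~2e16 GeV
print(f"alpha_3^-1 there: {alpha_inv(3, mu_gut):.1f} "
      f"vs alpha_1^-1: {alpha_inv(1, mu_gut):.1f}")            # nearly equal
```

Running it shows all three inverse couplings meeting near 2 × 10¹⁶ GeV to within about a percent, which is the content of the 5/7 condition above.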
This prediction is generally considered as indirect evidence for both the MSSM and SUSY GUTs. Gauge coupling unification does not necessarily imply grand unification and there exist other mechanisms to reproduce gauge coupling unification. However, if superpartners are found in the near future, the apparent success of gauge coupling unification would suggest that a supersymmetric grand unified theory is a promising candidate for high scale physics.
Dark matter
If R-parity is preserved, then the lightest superparticle (LSP) of the MSSM is stable and is a Weakly interacting massive particle (WIMP) – i.e. it does not have electromagnetic or strong interactions. This makes the LSP a good dark matter candidate, and falls into the category of cold dark matter (CDM).
Predictions of the MSSM regarding hadron colliders
The Tevatron and the LHC have active experimental programs searching for supersymmetric particles. Since both of these machines are hadron colliders – proton–antiproton for the Tevatron and proton–proton for the LHC – they search best for strongly interacting particles. Therefore, most experimental signatures involve production of squarks or gluinos. Since the MSSM has R-parity, the lightest supersymmetric particle is stable, and after the squarks and gluinos decay, each decay chain will contain one LSP that will leave the detector unseen. This leads to the generic prediction that the MSSM will produce a 'missing energy' signal from these particles leaving the detector.
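Schematically, the missing-energy signal is inferred from momentum balance in the plane transverse to the beam; a minimal sketch (the event contents are invented for illustration):

```python
import math

def missing_transverse_momentum(visible):
    """Missing transverse momentum: minus the vector sum of the transverse
    momenta (px, py) of all reconstructed visible particles. A large value
    signals invisible particles, such as LSPs, escaping the detector."""
    sum_px = sum(px for px, py in visible)
    sum_py = sum(py for px, py in visible)
    return math.hypot(sum_px, sum_py)  # magnitude of the imbalance, GeV

# Hypothetical event: jets and leptons with transverse momenta in GeV.
event = [(120.0, 30.0), (-60.0, 10.0), (-20.0, -15.0)]
print(f"Missing transverse momentum = {missing_transverse_momentum(event):.1f} GeV")
```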
Neutralinos
There are four neutralinos that are fermions and are electrically neutral, the lightest of which is typically stable. They are typically labeled χ̃⁰₁, χ̃⁰₂, χ̃⁰₃ and χ̃⁰₄ (although sometimes the notation Ñ₁, ..., Ñ₄ is used instead). These four states are mixtures of the bino and the neutral wino (which are the neutral electroweak gauginos) and the neutral higgsinos. As the neutralinos are Majorana fermions, each of them is identical with its antiparticle. Because these particles only interact with the weak vector bosons, they are not directly produced at hadron colliders in copious numbers. They primarily appear as particles in cascade decays of heavier particles, usually originating from colored supersymmetric particles such as squarks or gluinos.
In R-parity conserving models, the lightest neutralino is stable, and all supersymmetric cascade decays end up decaying into this particle, which leaves the detector unseen; its existence can only be inferred by looking for unbalanced momentum in a detector.
The heavier neutralinos typically decay through a Z⁰ to a lighter neutralino or through a W± to a chargino. Thus a typical decay is

χ̃⁰₂ → χ̃⁰₁ + Z⁰ → Missing energy + ℓ⁺ + ℓ⁻

χ̃⁰₂ → χ̃±₁ + W∓ → χ̃⁰₁ + W± + W∓ → Missing energy + ℓ⁺ + ℓ⁻
Note that the "Missing energy" byproduct represents the mass-energy of the neutralino (χ̃⁰₁) and, in the second line, the mass-energy of a neutrino–antineutrino pair (ν + ν̄) produced with the lepton and antilepton in the final decay, all of which are undetectable in individual reactions with current technology.
The mass splittings between the different neutralinos will dictate which patterns of decays are allowed.
Charginos
There are two charginos that are fermions and are electrically charged. They are typically labeled χ̃±₁ and χ̃±₂ (although sometimes C̃₁ and C̃₂ are used instead). The heavier chargino can decay through a Z⁰ to the lighter chargino. Both can decay through a W± to a neutralino.
Squarks
The squarks are the scalar superpartners of the quarks and there is one version for each Standard Model quark. Due to phenomenological constraints from flavor changing neutral currents, typically the lighter two generations of squarks have to be nearly the same in mass and therefore are not given distinct names. The superpartners of the top and bottom quark can be split from the lighter squarks and are called stop and sbottom.
In the other direction, there may be remarkable left–right mixing of the stops and of the sbottoms, because of the high masses of the partner quarks top and bottom: the off-diagonal entry of the stop mass matrix is proportional to m_t(A_t − μ cot β). A similar story holds for the sbottom, with its own parameters A_b and μ tan β.
Squarks can be produced through strong interactions and therefore are easily produced at hadron colliders. They decay to quarks and neutralinos or charginos which further decay. In R-parity conserving scenarios, squarks are pair produced and therefore a typical signal is
2 jets + missing energy
2 jets + 2 leptons + missing energy
Gluinos
Gluinos are Majorana fermionic partners of the gluon which means that they are their own antiparticles. They interact strongly and therefore can be produced significantly at the LHC. They can only decay to a quark and a squark and thus a typical gluino signal is
4 jets + Missing energy
Because gluinos are Majorana, gluinos can decay to either a quark+anti-squark or an anti-quark+squark with equal probability. Therefore, pairs of gluinos can decay to
4 jets + ℓ⁺ℓ⁺ (or ℓ⁻ℓ⁻) + Missing energy
This is a distinctive signature because it has same-sign di-leptons and has very little background in the Standard Model.
Sleptons
Sleptons are the scalar partners of the leptons of the Standard Model. They are not strongly interacting and therefore are not produced very often at hadron colliders unless they are very light.
Because of the high mass of the tau lepton there will be left-right mixing of the stau similar to that of stop and sbottom (see above).
Sleptons will typically be found in the decays of charginos and neutralinos if they are light enough to be a decay product.
MSSM fields
Fermions have bosonic superpartners (called sfermions), and bosons have fermionic superpartners (called bosinos). For most of the Standard Model particles, doubling is very straightforward. However, for the Higgs boson, it is more complicated.
A single Higgsino (the fermionic superpartner of the Higgs boson) would lead to a gauge anomaly and would cause the theory to be inconsistent. However, if two Higgsinos are added, there is no gauge anomaly. The simplest theory is one with two Higgsinos and therefore two scalar Higgs doublets.
Another reason for having two scalar Higgs doublets rather than one is in order to have Yukawa couplings between the Higgs and both down-type quarks and up-type quarks; these are the terms responsible for the quarks' masses. In the Standard Model the down-type quarks couple to the Higgs field (which has Y = −1/2) and the up-type quarks to its complex conjugate (which has Y = +1/2). However, in a supersymmetric theory this is not allowed, so two types of Higgs fields are needed.
MSSM superfields
In supersymmetric theories, every field and its superpartner can be written together as a superfield. The superfield formulation of supersymmetry is very convenient to write down manifestly supersymmetric theories (i.e. one does not have to tediously check that the theory is supersymmetric term by term in the Lagrangian). The MSSM contains vector superfields associated with the Standard Model gauge groups which contain the vector bosons and associated gauginos. It also contains chiral superfields for the Standard Model fermions and Higgs bosons (and their respective superpartners).
MSSM Higgs mass
The MSSM Higgs mass is a prediction of the Minimal Supersymmetric Standard Model. The mass of the lightest Higgs boson is set by the Higgs quartic coupling. Quartic couplings are not soft supersymmetry-breaking parameters, since they lead to a quadratic divergence of the Higgs mass. Furthermore, there are no supersymmetric parameters to make the Higgs mass a free parameter in the MSSM (though not in non-minimal extensions). This means that the Higgs mass is a prediction of the MSSM. The LEP II experiments placed a lower limit on the Higgs mass of 114.4 GeV. This lower limit is significantly above where the MSSM would typically predict it to be, but it does not rule out the MSSM; the discovery of the Higgs with a mass of 125 GeV is within the maximal upper bound of approximately 130 GeV to which loop corrections within the MSSM can raise the Higgs mass. Proponents of the MSSM point out that a Higgs mass within the upper bound of the MSSM calculation is a successful prediction, albeit pointing to more fine-tuning than expected.
Formulas
The only SUSY-preserving operators that create a quartic coupling for the Higgs in the MSSM arise from the D-terms of the SU(2) and U(1) gauge sector, and the magnitude of the quartic coupling is set by the size of the gauge couplings.
This leads to the prediction that the Standard Model-like Higgs mass (the scalar that couples approximately to the VEV) is limited to be less than the Z mass:

m_h⁰ ≤ m_Z |cos 2β| ≤ m_Z.
Since supersymmetry is broken, there are radiative corrections to the quartic coupling that can increase the Higgs mass. These dominantly arise from the 'top sector':

Δm²_h⁰ = (3/4π²) (m_t⁴/v²) ln(m_t̃²/m_t²),

where m_t is the top mass, m_t̃ is the mass of the top squark, and v ≈ 174 GeV is the Higgs VEV. This result can be interpreted as the RG running of the Higgs quartic coupling from the scale of supersymmetry to the top mass—however, since the top squark mass should be relatively close to the top mass, this is usually a fairly modest contribution and increases the Higgs mass to roughly the LEP II bound of 114 GeV before the top squark becomes too heavy.
Finally there is a contribution from the top squark A-terms:

Δm²_h⁰ = (3/4π²) (m_t⁴/v²) a_t² (1 − a_t²/12),

where a_t = A_t/m_t̃ is a dimensionless number. This contributes an additional term to the Higgs mass at loop level, but it is not logarithmically enhanced; by pushing a_t → √6 (known as 'maximal mixing') it is possible to push the Higgs mass to 125 GeV without decoupling the top squark or adding new dynamics to the MSSM.
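For orientation, the tree-level bound and the loop corrections above can be combined in a few lines of Python; this is a leading-log one-loop sketch only (it is known to overestimate the Higgs mass by roughly 10–20 GeV relative to full two-loop calculations), and the input masses are illustrative:

```python
import math

M_Z = 91.19    # GeV
M_TOP = 173.0  # GeV
VEV = 174.0    # GeV; the formulas above assume the v ~ 174 GeV convention

def mssm_higgs_mass(tan_beta: float, m_stop: float, a_t: float = 0.0) -> float:
    """Leading-log one-loop estimate of the light CP-even Higgs mass.
    a_t = A_t / m_stop is the dimensionless stop mixing parameter;
    a_t ~ sqrt(6) corresponds to 'maximal mixing'."""
    cos2b = math.cos(2 * math.atan(tan_beta))
    tree = (M_Z * cos2b) ** 2  # tree-level bound: m_h <= m_Z |cos 2 beta|
    loop = (3 / (4 * math.pi**2)) * (M_TOP**4 / VEV**2) * (
        math.log(m_stop**2 / M_TOP**2) + a_t**2 * (1 - a_t**2 / 12)
    )
    return math.sqrt(tree + loop)

print(f"no mixing, 1 TeV stop: {mssm_higgs_mass(20, 1000.0):.0f} GeV")
print(f"maximal mixing:        {mssm_higgs_mass(20, 1000.0, math.sqrt(6)):.0f} GeV")
```

The qualitative lesson matches the text: without stop mixing the one-loop mass sits near the tree-level bound plus a modest logarithm, while maximal mixing lifts it well above 125 GeV even before two-loop effects pull it back down.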
As the Higgs was found at around 125 GeV (along with no other superparticles) at the LHC, this strongly hints at new dynamics beyond the MSSM, such as the 'Next to Minimal Supersymmetric Standard Model' (NMSSM); and suggests some correlation to the little hierarchy problem.
MSSM Lagrangian
The Lagrangian for the MSSM contains several pieces.
The first is the Kähler potential for the matter and Higgs fields which produces the kinetic terms for the fields.
The second piece is the gauge field superpotential that produces the kinetic terms for the gauge bosons and gauginos.
The next term is the superpotential for the matter and Higgs fields. These produce the Yukawa couplings for the Standard Model fermions and also the mass term for the Higgsinos. After imposing R-parity, the renormalizable, gauge-invariant operators in the superpotential are

W = μHuHd + yu Q Hu ū + yd Q Hd d̄ + ye L Hd ē.
The constant term is unphysical in global supersymmetry (as opposed to supergravity).
Soft SUSY breaking
The last piece of the MSSM Lagrangian is the soft supersymmetry-breaking Lagrangian. The vast majority of the parameters of the MSSM are in the SUSY-breaking Lagrangian. The soft SUSY-breaking terms are divided into roughly three pieces.
The first are the gaugino masses,

−(1/2) M_a λ̃_a λ̃_a + c.c.,

where the λ̃_a are the gauginos and M_a is different for the wino, bino and gluino.
The next are the soft masses for the scalar fields,

−m²_φ |φ|²,

where φ is any of the scalars in the MSSM and the m²_φ are Hermitian matrices for the squarks and sleptons of a given set of gauge quantum numbers. The eigenvalues of these matrices are actually the masses squared, rather than the masses.
There are also the B and A terms: a bilinear Higgs coupling BμHuHd, together with trilinear scalar couplings Au, Ad and Ae that mirror the Yukawa couplings of the superpotential. The A terms are complex 3 × 3 matrices in flavor space, much as the scalar masses are.
Although not often mentioned with regard to soft terms, to be consistent with observation one must also include gravitino and goldstino soft masses. The reason these soft terms are not often mentioned is that they arise through local rather than global supersymmetry, although they are required: if the goldstino were massless, it would contradict observation. The goldstino mode is eaten by the gravitino to become massive, through a gauge shift, which also absorbs the would-be "mass" term of the goldstino.
Problems
There are several problems with the MSSM—most of them related to understanding its parameters.
The mu problem: The Higgsino mass parameter μ appears as the following term in the superpotential: μHuHd. It should have the same order of magnitude as the electroweak scale, many orders of magnitude smaller than that of the Planck scale, which is the natural cutoff scale. The soft supersymmetry breaking terms should also be of the same order of magnitude as the electroweak scale. This brings about a problem of naturalness: why are these scales so much smaller than the cutoff scale yet happen to fall so close to each other?
Flavor universality of soft masses and A-terms: since no flavor mixing additional to that predicted by the standard model has been discovered so far, the coefficients of the additional terms in the MSSM Lagrangian must be, at least approximately, flavor invariant (i.e. the same for all flavors).
Smallness of CP violating phases: since no CP violation additional to that predicted by the standard model has been discovered so far, the additional terms in the MSSM Lagrangian must be, at least approximately, CP invariant, so that their CP violating phases are small.
Theories of supersymmetry breaking
A large amount of theoretical effort has been spent trying to understand the mechanism for soft supersymmetry breaking that produces the desired properties in the superpartner masses and interactions. The three most extensively studied mechanisms are:
Gravity-mediated supersymmetry breaking
Gravity-mediated supersymmetry breaking is a method of communicating supersymmetry breaking to the supersymmetric Standard Model through gravitational interactions. It was the first method proposed to communicate supersymmetry breaking. In gravity-mediated supersymmetry-breaking models, there is a part of the theory that only interacts with the MSSM through gravitational interaction. This hidden sector of the theory breaks supersymmetry. Through the supersymmetric version of the Higgs mechanism, the gravitino, the supersymmetric version of the graviton, acquires a mass. After the gravitino has a mass, gravitational radiative corrections to soft masses are incompletely cancelled beneath the gravitino's mass.
It is currently believed that it is not generic to have a sector completely decoupled from the MSSM, and there should be higher-dimension operators that couple different sectors together, suppressed by the Planck scale. These operators give as large a contribution to the soft supersymmetry-breaking masses as the gravitational loops; therefore, today people usually consider gravity mediation to be gravitational-sized direct interactions between the hidden sector and the MSSM.
mSUGRA stands for minimal supergravity. The construction of a realistic model of interactions within the supergravity framework, where supersymmetry breaking is communicated through the supergravity interactions, was carried out by Ali Chamseddine, Richard Arnowitt, and Pran Nath in 1982. mSUGRA is one of the most widely investigated models of particle physics due to its predictive power, requiring only 4 input parameters and a sign to determine the low-energy phenomenology from the scale of grand unification. The most widely used set of parameters is the common scalar mass m0, the common gaugino mass m1/2, the common trilinear coupling A0, the ratio of Higgs vacuum expectation values tan β, and the sign of μ.
Gravity-mediated supersymmetry breaking was assumed to be flavor universal because of the universality of gravity; however, in 1986 Hall, Kostelecky, and Raby showed that the Planck-scale physics necessary to generate the Standard Model Yukawa couplings spoils the universality of the supersymmetry breaking.
Gauge-mediated supersymmetry breaking (GMSB)
Gauge-mediated supersymmetry breaking is a method of communicating supersymmetry breaking to the supersymmetric Standard Model through the Standard Model's gauge interactions. Typically a hidden sector breaks supersymmetry and communicates it to massive messenger fields that are charged under the Standard Model. These messenger fields induce a gaugino mass at one loop, and this is then transmitted to the scalar superpartners at two loops. Requiring stop squarks below 2 TeV, the maximum Higgs boson mass predicted is just 121.5 GeV. With the Higgs having been discovered at 125 GeV, this model requires stops above 2 TeV.
Anomaly-mediated supersymmetry breaking (AMSB)
Anomaly-mediated supersymmetry breaking is a special type of gravity-mediated supersymmetry breaking in which supersymmetry breaking is communicated to the supersymmetric Standard Model through the conformal anomaly. Requiring stop squarks below 2 TeV, the maximum Higgs boson mass predicted is just 121.0 GeV. With the Higgs having been discovered at 125 GeV, this scenario requires stops heavier than 2 TeV.
Phenomenological MSSM (pMSSM)
The unconstrained MSSM has more than 100 parameters in addition to the Standard Model parameters.
This makes any phenomenological analysis (e.g. finding regions in parameter space consistent
with observed data) impractical. Under the following three assumptions:
no new source of CP-violation
no Flavour Changing Neutral Currents
first and second generation universality
one can reduce the number of additional parameters to the following 19 quantities of the phenomenological MSSM (pMSSM): 10 sfermion masses (five for the degenerate first and second generations, five for the third generation), 3 gaugino masses (M1, M2, M3), 3 third-generation trilinear couplings (At, Ab, Aτ), and tan β, mA and μ.
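For illustration, these 19 quantities can be collected into a simple data structure; a sketch in Python, with field names chosen for readability (the grouping follows the standard pMSSM convention, but the names themselves are illustrative):

```python
from dataclasses import dataclass

@dataclass
class PMSSMPoint:
    """A point in the 19-dimensional pMSSM parameter space (masses in GeV)."""
    tan_beta: float  # ratio of the two Higgs vacuum expectation values
    m_A: float       # pseudoscalar Higgs mass
    mu: float        # Higgsino mass parameter
    M1: float        # bino mass
    M2: float        # wino mass
    M3: float        # gluino mass
    # first/second-generation sfermion masses (taken degenerate):
    m_qL12: float
    m_uR12: float
    m_dR12: float
    m_lL12: float
    m_eR12: float
    # third-generation sfermion masses:
    m_qL3: float
    m_tR: float
    m_bR: float
    m_lL3: float
    m_tauR: float
    # third-generation trilinear couplings:
    A_t: float
    A_b: float
    A_tau: float
```

Even a coarse scan with ten trial values per parameter would require 10¹⁹ such points, which conveys why pMSSM analyses are challenging.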
The large parameter space of pMSSM makes searches in pMSSM extremely challenging and makes pMSSM difficult to exclude.
Experimental tests
Terrestrial detectors
XENON1T (a dark-matter WIMP detector commissioned in 2016) is expected to explore and test supersymmetry candidates such as the CMSSM.
| Physical sciences | Particle physics: General | Physics |
607457 | https://en.wikipedia.org/wiki/Crown-of-thorns%20starfish | Crown-of-thorns starfish | The crown-of-thorns starfish (frequently abbreviated to COTS), Acanthaster planci, is a large starfish that preys upon hard, or stony, coral polyps (Scleractinia). The crown-of-thorns starfish receives its name from venomous thornlike spines that cover its upper surface, resembling the biblical crown of thorns. It is one of the largest starfish in the world.
A. planci has a very wide Indo-Pacific distribution. It is perhaps most common around Australia, but can occur at tropical and subtropical latitudes from the Red Sea and the East African coast across the Indian Ocean, and across the Pacific Ocean to the west coast of Central America. It occurs where coral reefs or hard coral communities occur in the region.
Description
The body form of the crown-of-thorns starfish is fundamentally the same as that of a typical starfish, with a central disk and radiating arms. Its special traits, however, include being disc-shaped, multiple-armed, flexible, prehensile, heavily spined, and having a large ratio of stomach surface to body mass. Its prehensile ability arises from the two rows of numerous tube feet that extend to the tip of each arm. In being multiple-armed, it has lost the five-fold symmetry (pentamerism) typical of starfish, although it begins its lifecycle with this symmetry. The animal has true image-forming vision.
Adult crown-of-thorns starfish normally range in size from about 25 to 35 cm in diameter. They have up to 21 arms. Although the body of the crown of thorns has a stiff appearance, it is able to bend and twist to fit around the contours of the corals on which it feeds. The underside of each arm has a series of closely fitting plates, which form a groove and extend in rows to the mouth. Depending on diet or geographic region, individuals can be purple, purple-blue, reddish grey or brown with red spine tips, or green with yellow spine tips.
The long, sharp spines on the sides of the starfish's arms and upper (aboral) surface resemble thorns and create a crown-like shape, giving the creature its name. The spines can range from 4 to 5 cm long and are stiff, very sharp, and readily pierce through soft surfaces. Despite the battery of sharp spines on the aboral surface and blunt spines on the oral surface, the crown-of-thorns starfish's general body surface is membranous and soft. When the starfish is removed from the water, the body surface ruptures and the body fluid leaks out, so the body collapses and flattens. The spines bend over and flatten, as well. They recover their shape when reimmersed, if they are still alive.
Online Model Organism Database
Echinobase is the model organism database for A. planci and a number of other echinoderms.
Taxonomy
Family
The family Acanthasteridae is monogeneric; its position within the Asteroidea is unsettled. It is generally recognized as a distinctly isolated taxon. Recently, paleontologist Daniel Blake concluded from comparative morphology studies of A. planci that it has strong similarities with various members of the Oreasteridae. He transferred the Acanthasteridae from the Spinulosida to the Valvatida and assigned it a position near to the Oreasteridae, from which it appears to be derived. He suggested that Acanthaster morphology may have evolved in association with its locomotion over irregular coral surfaces in higher-energy environments. A complication exists, however, in that Acanthaster is not a monospecific genus, and any consideration of the genus must also take into account another species, Acanthaster brevispinus, which lives in a completely different environment. A. brevispinus lives on soft substrates, perhaps buried in the substrate at times like other soft substrate-inhabiting starfish, at moderate depths, where presumably the surface is regular and little wave action occurs.
Genus and species
A. planci has a long history in the scientific literature, with great confusion in the generic and specific names from the outset and a long list of complex synonyms. Georg Eberhard Rumphius first described it in 1705, naming it Stella marina quindecium radiotorum. Later, Carl Linnaeus described it as Asterias planci based on an illustration by Plancus and Gualtieri (1743), when he introduced his system of binomial nomenclature. No type specimens are known; the specimen described by Plancus and Gualtieri (1743) is no longer extant.
Subsequent generic names used for the crown-of-thorns starfish included Stellonia, Echinaster, and Echinites, before settling on Acanthaster (Gervais 1841). Specific names included echinites, solaris, mauritensis, ellisii, and ellisii pseudoplanci (with subspecies). Most of these names arose from confusion in the historical literature, but Acanthaster ellisii came to be used for the distinctive starfish in the eastern Pacific Gulf of California.
The eastern Pacific Acanthaster is very distinctive (see image to the right) with its rather 'plump' body, large disk to total diameter ratio, and short, blunt spines.
Genetic studies
Nishida and Lucas examined genetic variation at 14 allozyme loci in 10 population samples of A. planci using starch-gel electrophoresis. The samples were from localities across the Pacific: the Ryukyu archipelago (four locations), Micronesia (two locations), and one location each from the Great Barrier Reef, Fiji, Hawaii, and the Gulf of California. A sample of 10 specimens of A. brevispinus from the Great Barrier Reef region was included for comparison. Considerable genetic differentiation was seen between the A. brevispinus and A. planci populations (D = 0.20 ± 0.02, where D is genetic distance). The genetic differences between geographic populations of A. planci were, however, small (D = 0.03 ± 0.00; F_ST = 0.07 ± 0.02, where F_ST is the standardized genetic variance for each polymorphic locus) despite the great distances separating them. A positive correlation was observed between degree of genetic differentiation and geographic distance, suggesting the genetic homogeneity among A. planci populations is due to gene flow by planktonic larval dispersion. The distance effect on genetic differentiation most probably reflects decreasing levels of successful larval dispersal over long distances. Despite the level of macrogeographic homogeneity, significant allele-frequency differences were observed between adjacent populations separated by about 10 km. The Hawaiian population was the most differentiated from the other populations. Treating the morphologically distinctive eastern Pacific Acanthaster as a separate species, A. ellisii, is not supported by these data. The lack of unique alleles in the central (Hawaii) and eastern Pacific (Gulf of California) populations suggests they were derived from those in the western Pacific.
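Assuming the D statistic quoted is Nei's (1972) standard genetic distance, as is usual for allozyme surveys, it is computed as D = −ln I, where I is the normalized identity of genes between the two populations; a minimal Python sketch with made-up allele frequencies:

```python
import math

def nei_distance(pop_x, pop_y):
    """Nei's (1972) standard genetic distance between two populations.
    pop_x, pop_y: per-locus lists of allele frequencies for each population."""
    jx = jy = jxy = 0.0
    for fx, fy in zip(pop_x, pop_y):
        jx += sum(p * p for p in fx)                      # homozygosity in X
        jy += sum(q * q for q in fy)                      # homozygosity in Y
        jxy += sum(p * q for p, q in zip(fx, fy))         # shared identity
    n = len(pop_x)
    identity = (jxy / n) / math.sqrt((jx / n) * (jy / n))
    return -math.log(identity)

# Hypothetical two-locus example (frequencies sum to 1 at each locus).
x = [[0.9, 0.1], [0.5, 0.5]]
y = [[0.7, 0.3], [0.4, 0.6]]
print(f"D = {nei_distance(x, y):.3f}")  # 0 for genetically identical populations
```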
Further details of the genetic relationship between A. planci and A. brevispinus are presented in the entry for the latter species. These are clearly sibling species, and A. planci, the specialized, coral-feeding species, is suggested to have arisen from A. brevispinus, the less-specialized, soft-bottom inhabitant.
In a very comprehensive geographic study, Benzie examined allozyme loci variation in 20 populations of A. planci, throughout the Pacific and Indian Oceans. The most striking result was a very marked discontinuity between the Indian and Pacific Ocean populations. Those, however, off northern Western Australia had a strong Pacific affinity. With the exception of the very strong connection of southern Japanese populations to the Great Barrier Reef populations, the patterns of variation within regions were consistent with isolation by distance. Again, the pattern of decreasing levels of successful larval dispersal over long distances is apparent. Benzie suggests that the divergence between Indian Ocean and Pacific Ocean populations began at least 1.6 million years ago and is likely to reflect responses to changes in climate and sea level.
A more recent comprehensive geographic study of A. planci by Vogler et al., using DNA analyses (one mitochondrial gene), suggests it is actually a species complex consisting of four species or clades. The four cryptic species/clades are defined geographically: northern Indian Ocean, southern Indian Ocean, Red Sea, and Pacific Ocean. These molecular data suggest the species/clades diverged between 1.95 and 3.65 million years ago. (The divergence of A. planci and A. brevispinus is not included in this time scale.) The authors suggest the differences between the four putative species in behavior, diet, or habitat may be important for the design of appropriate reef-conservation strategies.
Problems exist, though, with this proposal of cryptic speciation (cryptic species). These data come from a single mitochondrial gene (mtDNA), which is only one source of information about the status of taxa, and the use of one mtDNA gene as a sole criterion for species identification is disputed. The allozyme data should also be taken into account. Three localities sampled by Vogler et al. are of particular interest: Palau Sebibu, the UAE, and Oman were found to have two clades/sibling species in sympatry. These are important for investigating the nature of the coexistence and the barriers to introgression of genetic material. A. planci as a taxon is a generalist, being amongst the most ubiquitous of large coral predators on coral reefs: it feeds on virtually all hard coral species, reproduces during summer without a fixed pattern of spawning, often participates in mass multiple-species spawnings, and releases vast amounts of gametes that trigger spawning in other individuals. Conceiving of two species/clades of A. planci in sympatry without habitat competition and introgression of genetic material, especially the latter, is very difficult.
Biology
Toxins
Starfish are characterized by having saponins, known as asterosaponins, in their tissues. They contain a mix of these saponins, and at least 15 chemical studies have been conducted seeking to characterize them. Saponins have detergent-like properties, and keeping starfish in limited water volumes with aeration results in large amounts of foam at the surface.
A. planci has no mechanism for injecting the toxin, but as the spines perforate tissue of a predator or unwary person, tissue containing the saponins is lost into the wound. In humans, this immediately causes a sharp, stinging pain that can last for several hours, persistent bleeding due to the haemolytic effect of saponins, and nausea and tissue swelling that may persist for a week or more. The spines, which are brittle, may also break off and become embedded in the tissue, where they must be surgically removed.
Saponins seem to occur throughout the lifecycle of the crown-of-thorns starfish. The saponins in the eggs are similar to those in the adult tissues, and presumably these carry over to the larvae. The way predators mouth and then reject juvenile starfish suggests the juveniles also contain saponins.
Behavior
The adult crown-of-thorns is a corallivorous predator that usually preys on reef scleractinian coral polyps, as well as encrusting sessile invertebrates and dead animals. It climbs onto a section of living coral colony using its large number of tube feet, which lie in distinct ambulacral grooves on the oral surface. It fits closely to the surface of the coral, even the complex surfaces of branching corals. It then extrudes its stomach out through its mouth over the surface to virtually its own diameter. The stomach surface secretes digestive enzymes that allow the starfish to absorb nutrients from the liquefied coral tissue. This leaves a white scar of coral skeleton that is rapidly infested with filamentous algae. An individual starfish can consume up to about 6 square metres of living coral reef per year. In a study of feeding rates on two coral reefs in the central Great Barrier Reef region, large starfish ( and greater diameter) killed about /day in winter and per day in summer. Smaller starfish, , killed per day in the equivalent seasons. The area killed by the large starfish is equivalent to about from these observations. Differences in feeding and locomotion rates between summer and winter reflect the fact that the crown-of-thorns, like all marine invertebrates, is a poikilotherm whose body temperature and metabolic rate are directly affected by the temperature of the surrounding water. In tropical coral reefs, crown-of-thorns specimens reach mean locomotion rates of , which explains how outbreaks can damage large reef areas in relatively short periods.
The starfish show preferences between the hard corals on which they feed. They tend to feed on branching corals and table-like corals, such as Acropora, Pavona, and Pocillopora species, rather than on more rounded corals with less exposed surface area, such as Porites species. Avoidance of Porites and some other corals may also be due to resident bivalve mollusks and polychaete worms in the surface of the coral, which discourage the starfish. Similarly, some symbionts, such as small crabs, living within the complex structures of branching corals, may ward off the starfish as it seeks to spread its stomach over the coral surface.
In reef areas of low densities of hard coral, reflecting the nature of the reef community or due to feeding by high density crown-of-thorns, the starfish may be found feeding on soft corals (Alcyonacea).
The starfish are cryptic in behavior during their first two years, emerging at night to feed. They usually remain so as adults when solitary. The only evidence of a hidden individual may be white feeding scars on adjacent coral. However, their behavior changes under two circumstances:
During the breeding season, which is typically early to midsummer, the starfish may gather together high on a reef and synchronously release gametes to achieve high levels of egg fertilization. This pattern of synchronized spawning is not at all unique; it is very common amongst marine invertebrates that do not copulate, since solitary spawning gives no opportunity for fertilization of eggs and wastes gametes. Evidence exists of a spawning pheromone that causes the starfish to aggregate and release gametes synchronously.
When the starfish are at high densities, they may move day and night, competing for living coral.
A. planci forages on corals at a high rate, especially when population outbreaks occur, resulting in the loss of reef habitat and, overall, a loss of species diversity. Research shows that when such populations of these starfish occur, they can destroy coral and alter the structure of reefs and the marine organisms they support (Brodie et al., 2005).
Predators
The elongated, sharp spines covering nearly the entire upper surface of the crown-of-thorns serve as a mechanical defense against large predators. It also has a chemical defense: saponins presumably serve as an irritant when the spines pierce a predator, in the same way as they do when they pierce the skin of humans. Saponins also have an unpleasant taste. A study testing the predation rate on juvenile Acanthaster spp. by appropriate fish species found that the starfish were often mouthed, tasted, and rejected. These defenses tend to make it an unattractive target for coral community predators. Nevertheless, Acanthaster populations typically include a proportion of individuals with regenerating arms.
About 11 species have been reported to prey occasionally on uninjured and healthy adults of A. planci. All are generalist feeders, and none seems to specifically prefer the starfish as a food source. The true number of predators is probably lower still, as some of these presumed predators have not been reliably witnessed in the field. Some of those witnessed are:
A species of pufferfish and two triggerfish have been observed to feed on crown-of-thorns starfish in the Red Sea, and although they may have some effect on the A. planci population, no evidence exists of systematic predation. In Indo-Pacific waters, white-spotted puffers and titan triggerfish have also been found to eat this starfish.
Triton's trumpet, a very large gastropod mollusk, is a known predator of Acanthaster in some parts of the starfish's range. The Triton has been described as tearing the starfish to pieces with its file-like radula.
The small painted shrimp Hymenocera picta, a general predator of starfish, has been found to prey on A. planci at some locations. A polychaete worm, Pherecardia striata, was observed feeding on the starfish together with the shrimp on an east Pacific coral reef. About 0.6% of the starfish in the reef population were being attacked by both the shrimp and the polychaete worm, which killed the starfish in about a week. Glynn suggested this resulted in a balance between mortality and recruitment in this population, leading to a relatively stable population of starfish.
Since P. striata can only attack a damaged A. planci and cause its death, it may be regarded as an "impatient scavenger" rather than a predator. As distinct from predators, dead and mutilated adults of A. planci attract a number of scavengers. Glynn lists two polychaete worms, a hermit crab, a sea urchin, and seven species of small reef fish. Apparently, they are able to tolerate the distasteful saponins for an easy meal.
A large, polyp-like creature of the cnidarian genus Pseudocorynactis was observed attacking and then wholly ingesting a crown-of-thorns starfish of similar size. Continued studies revealed this polyp is able to completely ingest a crown-of-thorns specimen up to in diameter.
Lifecycle
Gametes and embryos
Gonads increase in size as the animals become sexually mature, and at maturity, fill the arms and extend into the disk region. The ripe ovaries and testes are readily distinguishable, with the former being more yellow and having larger lobes. In section, they are very different, with the ovaries densely filled with nutrient-packed ova (see ovum and photograph) and the testes densely filled with sperm, which consist of little more than a nucleus and flagellum. Fecundity in female crown-of-thorns starfish is related to size, with large starfish committing proportionally more energy into ova production such that:
A 200-mm-diameter female produces 0.5–2.5 million eggs, representing 2–8% of her wet weight.
A 300-mm-diameter female produces 6.5–14 million eggs, representing 9–14% of her wet weight.
A 400-mm-diameter female produces 47–53 million eggs, representing 20–25% of her wet weight.
In coral reefs in the Philippines, female specimens were found with a gonadosomatic index (ratio of gonad mass to body mass) as high as 22%, which underlines the high fecundity of this starfish. Babcock et al. (1993) monitored changes in fecundity and fertility (fertilisation rate) over the spawning season of the crown-of-thorns starfish on Davies Reef, central Great Barrier Reef, from 1990 to 1992. The starfish were observed to spawn (photograph) from December to January (early to midsummer) in this region with most observations being in January. However, both gonadosomatic index and fertility peaked early and declined to low levels by late January, indicating that most successful reproductive events took place early in the spawning season. In Northern Hemisphere coral reefs, however, crown-of-thorns populations reproduce in April and May, and were also observed spawning in the Gulf of Thailand in September. High rates of egg fertilisation may be achieved through the behaviour of proximate and synchronised spawning (see above in Behaviour).
Embryonic development begins about 1.5 hours after fertilisation, with the early cell divisions (cleavage) (photograph). By 8–9 hours, it has reached the 64-cell stage.
Some molecular and histological evidence suggests the occurrence of hermaphroditism in Acanthaster cf. solaris.
Larval stages
By day 1, the embryo has hatched as a ciliated gastrula stage (photograph). By day 2, the gut is complete and the larva is now known as a bipinnaria. It has ciliated bands along the body and uses these to swim and filter feed on microscopic particles, particularly unicellular green flagellates (phytoplankton). The scanning electron micrograph (SEM) clearly shows the complex ciliated bands of the bipinnarial larva. By day 5, it is an early brachiolarial larva. The arms of the bipinnaria have further elongated, two stump-like projections have appeared at the anterior (not evident in the photograph), and structures are developing within the posterior of the larva. In the late brachiolarial larva (day 11), the larval arms are elongated and three distinctive arms occur at the anterior with small structures on their inner surfaces. Up to this stage, the larva has been virtually transparent, but the posterior section is now opaque with the initial development of a starfish. The late brachiolaria is 1.0–1.5 mm long. It tends to sink to the bottom and test the substrate with its brachiolar arms, including flexing the anterior body to orient the brachiolar arms against the substrate.
This description and assessment of optimum rate of development is based on early studies in the laboratory under attempted optimum conditions. However, not unexpectedly, there are large differences in growth rate and survival under various environmental conditions (see Causes of population outbreaks).
Metamorphosis, development, and growth
The late brachiolaria search substrates with their arms, and when offered a choice of substrates, tend to settle on coralline algae, on which they subsequently feed. In the classic pattern for echinoderms, the bilaterally symmetrical larva is replaced by a pentamerously symmetrical stage at metamorphosis, with the latter's body axis bearing no relationship to that of the larva. Thus, the newly metamorphosed starfish are five-armed and are 0.4–1.0 mm in diameter. (Note the size of the tube feet relative to the size of the animal.) They feed on the thin coating layers of hard, encrusting algae (coralline algae) on the undersides of dead coral rubble and other concealed surfaces. They extend their stomach over the surface of the encrusting algae and digest the tissue, as in the feeding by larger crown-of-thorns starfish on hard corals. The living tissue of the encrusting algae is pink to dark red, and feeding by these early juveniles results in white scars on the surface of the algae. Over the following months, the juveniles grow and add arms and associated madreporites in the pattern described by Yamaguchi until the adult numbers are attained 5–7 months after metamorphosis. Two hard corals with small polyps, Pocillopora damicornis and Acropora acuminata, were included in the aquaria with the encrusting algae, and at about the time the juvenile starfish achieved their full number of arms, they began feeding on the corals.
Juveniles of A. planci that had reached the stage of feeding on coral were then reared for some years in the same large closed-circuit seawater system that was used for the early juveniles. They were moved to larger tanks and kept supplied with coral so that food was not a limiting factor on growth rate. The growth curves of size versus age were sigmoidal, as in the majority of marine invertebrates. An initial period of relatively slow growth occurred while the starfish were feeding on coralline algae. This was followed by a phase of rapid growth, which led to sexual maturity at the end of the second year. The starfish were in the vicinity of 200 mm in diameter at this stage. They continued to grow rapidly, reaching around 300 mm in diameter, although growth tended to decline after 4 years. Gonad development was greater in the third and subsequent years than at 2 years, and a seasonal pattern of gametogenesis and spawning became apparent, with water temperature being the only notable cue in the indoor aquarium. Most specimens of A. planci died from "senility" during the period 5.0–7.5 years, i.e. they fed poorly and shrank.
Field observations of lifecycle
The data above are derived from laboratory studies of A. planci, which are much more readily obtained than equivalent data from the field. The laboratory observations, however, are in accord with the limited field observations of lifecycle.
As in laboratory studies where A. planci larvae were found to select coralline algae for settlement, early juveniles (<20 mm in diameter) were found on subtidal coralline algae (Porolithon onkodes) on the windward reef front of Suva Reef (Fiji). The juveniles were found in a variety of habitats where they were highly concealed: under coral blocks and rubble in the boulder zone of the exposed reef front, on dead bases of Acropora species in more sheltered areas, in narrow spaces within the reef crest, and on the fore-reef slope to depths of 8 m.
Growth rates on Suva Reef were found to be 2.6, 16.7 and 5.3 mm/month increase in diameter before coral feeding, in early coral feeding, and in adult phases, respectively. This is in accord with the sigmoidal pattern of size versus age observed in laboratory studies, i.e. slow initial growth, a phase of very rapid growth beginning at coral feeding and tapering off of growth after the starfish reaches sexual maturity. In reefs in the Philippines, female and male specimens matured at 13 and 16 cm, respectively.
Stump identified bands in the upper-surface spines of A. planci and interpreted these as annual growth bands. He did not report growth rates based on these age determinations or on mark-and-recapture data, but he reported that the growth bands revealed starfish aged 12+ years: much older than those that became 'senile' and died in the laboratory.
In a small number of field studies, mortality rates of juvenile A. planci have been found to be very high, e.g. 6.5% per day for month-old juveniles and 0.45% per day for 7-month-old juveniles. Most of the mortality comes from predators, such as small crabs, that occur in and on the substrate with the juveniles. It is possible, however, that these rates may not reflect mortality over the full range of habitats occupied by small juveniles.
Ecology
Ecological impact on reefs
A. planci is one of the most efficient predators on scleractinian corals (stony corals or hard corals). Most coral-feeding organisms only cause tissue loss or localized injuries, but adults of A. planci can kill entire coral colonies.
Popular anxiety over news of high densities of A. planci on the Great Barrier Reef was reflected in many newspaper reports and publications such as Requiem for the Reef, which also suggested that a cover-up of the extent of damage existed. A popular idea arose that the coral, and with it whole reefs, was being destroyed by the starfish. In fact, as described above, the starfish preys on coral by digesting the surface layer of living tissue from the coral skeletons. These skeletons persist, together with the mass of coralline algae that is essential for reef integrity. The initial change (first-order effect) is loss of the veneer of living coral tissue.
A. planci is a component of the fauna of most coral reefs, and the effects of A. planci populations on coral reefs depend strongly on population density. At low densities (1 to perhaps 30 per hectare), the rate at which coral is preyed upon by the starfish is less than the growth rate of the coral, i.e. the surface area of living coral is increasing. The starfish may, however, influence the coral community structure. Because the starfish do not feed indiscriminately, they may produce a distribution of coral species and colony sizes that differs from the pattern that would occur without them. This is evident from comparison of coral reefs where A. planci has not been found with the more typical reefs where it occurs.
Some ecologists suggest that the starfish has an important and active role in maintaining coral reef biodiversity, driving ecological succession. Before overpopulation became a significant issue, crown-of-thorns prevented fast-growing coral from overpowering the slower-growing coral varieties.
At high densities (outbreaks, or plagues), which may be defined as occurring when the starfish are too abundant for the coral food supply, coral cover goes into decline, and the starfish must broaden their diet beyond their preferred coral species, colony sizes, and shapes. The starfish often aggregate during feeding, even at low densities, but at high densities the cleared coral patches become almost or completely continuous.
Second-order effects exist for these large areas of preyed-upon coral:
The bare coral skeletons are rapidly colonised by filamentous algae.
Large stands of staghorn coral, Acropora species, may collapse and become rubble, reducing the topographical complexity of the reef.
Sometimes, the preyed surfaces are further invaded by macroalgae, soft coral, and sponges. These tend to take over reef surfaces for long periods, as alternatives to hard coral communities; once established, they limit recruitment by hard-coral larvae.
Aesthetically, in all the above cases, the reef surface is not as attractive as the living coral surface, but it is anything but dead.
A third-order effect can arise from the invasion by filamentous algae. Animals that depend directly or indirectly on hard corals, e.g. for shelter and food, should lose out, while herbivores and less specialised feeders gain. This would likely be most conspicuous in the fish fauna, and long-term studies of coral reef-fish communities confirm this expectation.
Population outbreaks
Large populations of crown-of-thorns starfish (sometimes emotively known as plagues) have been substantiated as occurring at 21 coral reef locations from the 1960s to the 1980s. These locations ranged from the Red Sea through the tropical Indo-Pacific region to French Polynesia. At 10 of these locations, at least two substantiated outbreaks occurred.
Starfish densities from 140 to 1,000 per hectare have been considered in various reports to be outbreak populations, while densities less than 100 per hectare have been considered low; however, even at densities below 100 per hectare, feeding by A. planci may exceed the growth of coral, producing a net loss of coral.
From the surveys of many reef locations throughout the starfish's distribution, large abundances of Acanthaster spp. can be categorised as:
Primary outbreaks, where abrupt population increases of at least two orders of magnitude cannot be explained by the presence of a previous outbreak
Secondary outbreaks, which can plausibly be related to previous outbreaks through the reproduction of a previous cohort of the starfish. These may appear as recruits to reefs down-current from an existing outbreak population.
Chronic situations where a persistent moderate to high density population exists at a reef location where the coral is sparse due to persistent feeding by the starfish.
The Great Barrier Reef (GBR) is the most outstanding coral reef system in the world because of its great length, number of individual reefs, and species diversity. When high densities of Acanthaster causing heavy mortality of coral were first seen around Green Island, off Cairns, in 1960–65, this caused considerable alarm. High-density populations were subsequently found on a number of reefs to the south of Green Island, in the central GBR region. Some popular publications suggested that the whole reef was in danger of dying, and they both influenced and reflected public alarm over the state and future of the GBR.
A number of studies have modeled the population outbreaks on the GBR as a means to understand the phenomenon.
The Australian and Queensland governments funded research and set up advisory committees during the period of great anxiety about the nature of the starfish outbreaks on the GBR. These bodies were regarded as failing to come to terms with the unprecedented nature and magnitude of the problem. Many scientists were criticised for being unable to give definitive answers; others were more definitive in their answers, but those answers were unsubstantiated. Scientists were criticised for their reticence and for disagreeing on the nature and causes of the outbreaks on the GBR, in disputes sometimes described as the "starfish wars".
Causes of population outbreaks
The causes of this phenomenon have attracted serious discussion and some strongly held views. Several hypotheses focus on changes in the survival of juvenile and adult starfish (the "predator removal hypothesis") or on conditions favouring larval survival:
Over-collecting of tritons, a predator of the starfish
Overfishing of predators of the starfish
Decline in predator populations through habitat destruction
Warmer sea temperatures enhancing larval development
Anthropogenic impacts, such as allochthonous nutrient input
Many of the reports of fish preying on Acanthaster are single observations or presumed predation inferred from the nature of the fish. For example, the humphead wrasse may prey on the starfish amongst its more usual diet. Individual pufferfish and triggerfish have been observed to feed on crown-of-thorns starfish in the Red Sea, but no evidence shows them to be a significant factor in population control. A study of the stomach contents of large carnivorous fish that are potential predators of the starfish, however, found no evidence of the starfish in the fish's guts. These carnivorous fish were caught commercially on coral reefs in the Gulf of Oman and examined at local fish markets.
One problem with the concept of predators of large juvenile and adult starfish causing total mortality is that the starfish have good regenerative powers and they would not keep still while being eaten. Also, they need to be consumed completely or almost completely to die; 17–60% of starfish in various populations had missing or regenerating arms. Clearly, the starfish experience various levels of sublethal predation. When the damage includes a major section of the disk together with arms, the number of arms regenerating on the disk may be less than the number lost.
Another hypothesis is the "aggregation hypothesis", whereby large aggregations of A. planci appear as apparent outbreaks because the starfish have consumed all the adjacent coral and crowded together. This implies that an apparently dense outbreak population arises when a more diffuse population, which was nevertheless dense enough to comprehensively prey on large areas of hard coral, aggregates on the coral that remains.
Female crown-of-thorns starfish are very fecund. Based on the eggs in ovaries, 200-, 300-, and 400-mm-diameter females potentially spawn around 4, 30, and 50 million eggs, respectively (see also Gametes and embryos). Lucas adopted a different approach, focusing on the survival of the larvae arising from the eggs. The rationale for this approach was that small changes in the survival of larvae and developmental stages would result in very large changes in the adult population, as the following two hypothetical situations show.
About 20 million eggs from a female spawning, having a survival rate around 0.00001% throughout development, would replace two adult starfish in a low-density population where the larvae recruit. If, however, the survival rate increases to 0.1% (one in a thousand) throughout development from one spawning of 20 million eggs, this would result in 20,000 adult starfish where the larvae have recruited. Since the larvae are the most abundant stages of development, changes in survival likely are of most importance during this phase of development.
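The sensitivity of the adult population to larval survival can be checked with a back-of-the-envelope calculation reproducing the two hypothetical scenarios above (a minimal sketch; the survival rates are the text's illustrative figures, not measured values):

```python
# Reproduce the two hypothetical survival scenarios for one spawning.
eggs = 20_000_000

for pct_survival in (0.00001, 0.1):  # percent surviving through development
    adults = eggs * pct_survival / 100
    print(f"{pct_survival:g}% survival of {eggs:,} eggs -> {adults:,.0f} adults")

# 0.00001% -> 2 adults (mere replacement of the parents);
# 0.1%     -> 20,000 adults (outbreak-scale recruitment).
```

A ten-thousand-fold change in larval survival thus converts simple replacement of the parents into an outbreak, which is why attention turned to the larval food supply.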
Temperature and salinity have little effect on the survival of crown-of-thorns larvae. However, the abundance and species composition of the phytoplankton (unicellular flagellates) on which the larvae feed have a profound effect on survival and growth rate. The abundance of phytoplankton cells is especially important. Because phytoplankton are autotrophs, their abundance is strongly influenced by the concentration of inorganic nutrients, such as nitrogenous compounds.
Birkeland observed a correlation between heavy rainfall and the later abundance of crown-of-thorns on reefs adjacent to land masses. These outbreaks occurred near mainland islands, as distinct from coral atolls, about three years after heavy rainfall that followed a period of drought. He suggested that runoff from such heavy rainfall may, through its input of nutrients, stimulate phytoplankton blooms of sufficient size to provide enough food for the larvae of A. planci.
Combining Birkeland's observations with the experimentally demonstrated influence of inorganic nutrients on the survival of starfish larvae supports a mechanism for starfish outbreaks:
increased terrestrial runoff → increased nutrients → denser phytoplankton → better larval survival → increased starfish populations
Some of these connections have been confirmed, but research by Olson (1987), Kaufmann (2002), and Byrne (2016) suggests terrestrial runoff has little or no impact on larval survival. The conflicting data describing the negligible role of terrestrial agricultural runoff have been described as "an inconvenient study".
A flow-on effect is also seen: where large starfish populations produce large numbers of larvae, heavy recruitment is likely on the downstream reefs to which the larvae are carried and on which they settle.
Population control
Population numbers for the crown-of-thorns have been increasing since the 1970s. Historic records of distribution patterns and numbers, though, are hard to come by, as SCUBA technology, necessary to conduct population censuses, had only been developed in the previous few decades.
To prevent overpopulation of crown-of-thorns from causing widespread destruction of coral reef habitats, humans have implemented a variety of control measures. Manual removal has been successful but is relatively labour-intensive. Injecting sodium bisulfate into the starfish is the most efficient measure in practice; sodium bisulfate is deadly to crown-of-thorns but does not harm the surrounding reef and oceanic ecosystems. In areas of high infestation, teams of divers have achieved kill rates of up to 120 starfish per hour per diver. Dismembering the starfish was shown to have a kill rate of only 12 per hour per diver, and the diver performing this test was spiked three times; dismembering is therefore discouraged because it is slow and hazardous, not because of rumours that the pieces might regenerate.
An even more labour-intensive route, but less risky to the diver, is to bury them under rocks or debris. This route is only suitable for areas with low infestation and if materials are available to perform the procedure without damaging corals.
A 2015 study by James Cook University showed that common household vinegar is also effective, as the acidity causes the starfish to disintegrate within days. Vinegar is also harmless to the environment, and is not restricted by regulations regarding animal products such as bile. In 2019, divers were using a 10% vinegar solution to reduce starfish populations in the Raja Ampat Islands.
A newer successful method of population control is the injection of thiosulfate-citrate-bile salts-sucrose agar (TCBS). Only one injection is needed, leading to the starfish's death within 24 hours from a contagious disease marked by "discoloured and necrotic skin, ulcerations, loss of body turgor, accumulation of colourless mucus on many spines especially at their tip, and loss of spines. Blisters on the dorsal integument broke through the skin surface and resulted in large, open sores that exposed the internal organs."
An autonomous starfish-killing robot called COTSBot has been developed, and as of September 2015, was close to being ready for trials on the GBR. The COTSbot, which has a neural net-aided vision system, is designed to seek out crown-of-thorns starfish and give them a lethal injection of bile salts. After it eradicates the bulk of the starfish in a given area, human divers can move in and remove the survivors. Field trials of the robot have begun in Moreton Bay in Brisbane to refine its navigation system, according to Queensland University of Technology researcher Matthew Dunbabin. No crown-of-thorns starfish are in Moreton Bay, but when the navigation has been refined, the robot will be used on the reef.
There is research in Indonesia into the use of ground COTS remains as a fortifying additive to feed for whiteleg shrimp.
| Biology and health sciences | Echinoderms | Animals |
607495 | https://en.wikipedia.org/wiki/Freezing-point%20depression | Freezing-point depression | Freezing-point depression is a drop in the maximum temperature at which a substance freezes, caused when a smaller amount of another, non-volatile substance is added. Examples include adding salt into water (used in ice cream makers and for de-icing roads), alcohol in water, ethylene or propylene glycol in water (used in antifreeze in cars), adding copper to molten silver (used to make solder that flows at a lower temperature than the silver pieces being joined), or the mixing of two solids such as impurities into a finely powdered drug.
In all cases, the substance added/present in smaller amounts is considered the solute, while the original substance present in larger quantity is thought of as the solvent. The resulting liquid solution or solid-solid mixture has a lower freezing point than the pure solvent or solid because the chemical potential of the solvent in the mixture is lower than that of the pure solvent, the difference between the two being proportional to the natural logarithm of the mole fraction. In a similar manner, the chemical potential of the vapor above the solution is lower than that above a pure solvent, which results in boiling-point elevation. Freezing-point depression is what causes sea water (a mixture of salt and other compounds in water) to remain liquid at temperatures below 0 °C, the freezing point of pure water.
Explanation
Using vapour pressure
The freezing point is the temperature at which the liquid solvent and solid solvent are at equilibrium, so that their vapor pressures are equal. When a non-volatile solute is added to a volatile liquid solvent, the solution vapour pressure will be lower than that of the pure solvent. As a result, the solid will reach equilibrium with the solution at a lower temperature than with the pure solvent. This explanation in terms of vapor pressure is equivalent to the argument based on chemical potential, since the chemical potential of a vapor is logarithmically related to pressure. All of the colligative properties result from a lowering of the chemical potential of the solvent in the presence of a solute. This lowering is an entropy effect. The greater randomness of the solution (as compared to the pure solvent) acts in opposition to freezing, so that a lower temperature must be reached, over a broader range, before equilibrium between the liquid solution and solid solution phases is achieved. Melting point determinations are commonly exploited in organic chemistry to aid in identifying substances and to ascertain their purity.
Due to concentration and entropy
In the liquid solution, the solvent is diluted by the addition of a solute, so that fewer solvent molecules are available to freeze (a lower concentration of solvent exists in the solution than in the pure solvent). Equilibrium is re-established at a lower temperature, at which the rate of freezing again equals the rate of liquefying. The solute does not occlude or prevent the solvent from solidifying; it simply dilutes it, so that there is a reduced probability of a solvent molecule attempting to freeze at any given moment.
At the lower freezing point, the vapor pressure of the liquid is equal to the vapor pressure of the corresponding solid, and the chemical potentials of the two phases are equal as well.
Uses
The phenomenon of freezing-point depression has many practical uses. The radiator fluid in an automobile is a mixture of water and ethylene glycol; the freezing-point depression prevents radiators from freezing in winter. Road salting takes advantage of this effect to lower the freezing point of the ice it is placed on. Lowering the freezing point allows the street ice to melt at lower temperatures, preventing the accumulation of dangerous, slippery ice. Commonly used sodium chloride can depress the freezing point of water to about −21 °C. If the road surface temperature is lower, NaCl becomes ineffective and other salts are used, such as calcium chloride, magnesium chloride or a mixture of many. These salts are somewhat aggressive to metals, especially iron, so in airports safer media such as sodium formate, potassium formate, sodium acetate, and potassium acetate are used instead.
Freezing-point depression is used by some organisms that live in extreme cold. Such creatures have evolved means through which they can produce a high concentration of various compounds such as sorbitol and glycerol. This elevated concentration of solute decreases the freezing point of the water inside them, preventing the organism from freezing solid even as the water around them freezes, or as the air around them becomes very cold. Examples of organisms that produce antifreeze compounds include some species of arctic-living fish such as the rainbow smelt, which produces glycerol and other molecules to survive in frozen-over estuaries during the winter months. In other animals, such as the spring peeper frog (Pseudacris crucifer), the molality is increased temporarily as a reaction to cold temperatures. In the case of the peeper frog, freezing temperatures trigger a large-scale breakdown of glycogen in the frog's liver and subsequent release of massive amounts of glucose into the blood.
With the formula below, freezing-point depression can be used to measure the degree of dissociation or the molar mass of the solute. This kind of measurement is called cryoscopy (Greek cryo = cold, scopos = observe; "observe the cold") and relies on exact measurement of the freezing point. The degree of dissociation is measured by determining the van 't Hoff factor i by first determining m_B and then comparing it to m_solute; in this case, the molar mass of the solute must be known. The molar mass of a solute is determined by comparing m_B with the amount of solute dissolved; in this case, i must be known, and the procedure is primarily useful for organic compounds using a nonpolar solvent. Cryoscopy is no longer as common a measurement method as it once was, but it was included in textbooks at the turn of the 20th century. As an example, it was still taught as a useful analytic procedure in Cohen's Practical Organic Chemistry of 1910, in which the molar mass of naphthalene is determined using a Beckmann freezing apparatus.
Laboratory uses
Freezing-point depression can also be used as a purity-analysis tool when analysed by differential scanning calorimetry. The results obtained are in mol%, and the method has its place where other methods of analysis fail.
In the laboratory, lauric acid may be used to investigate the molar mass of an unknown substance via the freezing-point depression. The choice of lauric acid is convenient because the melting point of the pure compound is relatively high (43.8 °C). Its cryoscopic constant is 3.9 °C·kg/mol. By melting lauric acid with the unknown substance, allowing it to cool, and recording the temperature at which the mixture freezes, the molar mass of the unknown compound may be determined.
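The lauric-acid procedure reduces to a one-line calculation once the freezing-point depression has been measured. A minimal sketch, using the cryoscopic constant and melting point quoted above; the sample masses and the 40.68 °C observed freezing point are hypothetical values chosen for illustration, not data from the source:

```python
K_F = 3.9      # cryoscopic constant of lauric acid, °C·kg/mol (from the text)
T_PURE = 43.8  # melting point of pure lauric acid, °C (from the text)

def molar_mass(solute_g: float, solvent_kg: float, t_mix_c: float) -> float:
    """Molar mass (g/mol) of a non-dissociating solute (i = 1) from the
    observed freezing point of its mixture with lauric acid."""
    delta_t = T_PURE - t_mix_c     # freezing-point depression, °C
    molality = delta_t / K_F       # mol of solute per kg of solvent
    moles = molality * solvent_kg  # total moles of solute
    return solute_g / moles

# Hypothetical run: 1.0 g of unknown in 10.0 g (0.010 kg) of lauric acid
# freezes at 40.68 °C, i.e. ΔT = 3.12 °C -> about 125 g/mol.
print(molar_mass(1.0, 0.010, 40.68))
```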
This is also the same principle acting in the melting-point depression observed when the melting point of an impure solid mixture is measured with a melting-point apparatus since melting and freezing points both refer to the liquid-solid phase transition (albeit in different directions).
In principle, the boiling-point elevation and the freezing-point depression could be used interchangeably for this purpose. However, the cryoscopic constant is larger than the ebullioscopic constant, and the freezing point is often easier to measure with precision, which means measurements using the freezing-point depression are more precise.
FPD measurements are also used in the dairy industry to ensure that milk has not had extra water added: milk with an FPD of over 0.509 °C is considered to be unadulterated.
Formula
For dilute solution
If the solution is treated as an ideal solution, the extent of freezing-point depression depends only on the solute concentration and can be estimated by a simple linear relationship with the cryoscopic constant ("Blagden's Law"):

ΔT_F = i · K_F · b

where:
ΔT_F is the decrease in freezing point, defined as the freezing point of the pure solvent minus the freezing point of the solution; the formula gives a positive value because all factors are positive. From the ΔT_F calculated with the formula, the freezing point of the solution is obtained as the freezing point of the pure solvent minus ΔT_F.
K_F is the cryoscopic constant, which depends on the properties of the solvent, not the solute. (Note: when conducting experiments, a higher K_F value makes it easier to observe larger drops in the freezing point.)
b is the molality (moles of solute per kilogram of solvent).
i is the van 't Hoff factor (number of ion particles per formula unit of solute, e.g. i = 2 for NaCl, 3 for BaCl2).
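Applied to the de-icing example above, the relationship gives the freezing point of dilute brine directly. A minimal sketch, assuming the standard handbook value K_F = 1.86 °C·kg/mol for water; the 1 mol/kg concentration is chosen only for illustration, and the linear law is reliable only for dilute solutions:

```python
K_F_WATER = 1.86  # cryoscopic constant of water, °C·kg/mol (handbook value)

def freezing_point_aqueous(molality: float, i: float = 1.0) -> float:
    """Freezing point (°C) of a dilute aqueous solution via Blagden's Law:
    ΔT_F = i * K_F * b, subtracted from the 0 °C freezing point of water."""
    delta_t = i * K_F_WATER * molality
    return 0.0 - delta_t

# 1 mol/kg NaCl dissociates into two ions (i = 2):
print(freezing_point_aqueous(1.0, i=2))  # about -3.7 °C
```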
Some values of the cryoscopic constant Kf for selected solvents:
For concentrated solution
The simple relation above does not consider the nature of the solute, so it is only effective for a dilute solution. For a more accurate calculation at higher concentrations, for ionic solutes, Ge and Wang (2010) proposed a new equation:
In the above equation, T_F is the normal freezing point of the pure solvent (273 K for water, for example); a_liq is the activity of the solvent in the solution (water activity for an aqueous solution); ΔH_fus is the enthalpy change of fusion of the pure solvent at T_F, which is 333.6 J/g for water at 273 K; ΔC_p^fus is the difference between the heat capacities of the liquid and solid phases at T_F, which is 2.11 J/(g·K) for water.
The solvent activity can be calculated from the Pitzer model or modified TCPC model, which typically requires 3 adjustable parameters. For the TCPC model, these parameters are available for many single salts.
Ethanol example
The freezing point of an ethanol-water mixture is shown in the following graph.
| Physical sciences | Thermodynamics | Chemistry |
607530 | https://en.wikipedia.org/wiki/Reaction%20mechanism | Reaction mechanism | In chemistry, a reaction mechanism is the step-by-step sequence of elementary reactions by which an overall chemical reaction occurs.
A chemical mechanism is a theoretical conjecture that tries to describe in detail what takes place at each stage of an overall chemical reaction. The detailed steps of a reaction are not observable in most cases. The conjectured mechanism is chosen because it is thermodynamically feasible and has experimental support in isolated intermediates (see next section) or other quantitative and qualitative characteristics of the reaction. It also describes each reactive intermediate, activated complex, and transition state, which bonds are broken (and in what order), and which bonds are formed (and in what order). A complete mechanism must also explain the reason for the reactants and catalyst used, the stereochemistry observed in reactants and products, all products formed and the amount of each.
The electron or arrow pushing method is often used in illustrating a reaction mechanism; for example, see the illustration of the mechanism for benzoin condensation in the following examples section.
Mechanisms are also of interest in inorganic chemistry. An often-quoted mechanistic experiment involved the reaction of the labile hexaaqua chromium(II) reductant with the exchange-inert pentaamminecobalt(III) chloride.
Reaction intermediates
Reaction intermediates are chemical species, often unstable and short-lived. They can, however, sometimes be isolated. They are neither reactants nor products of the overall chemical reaction, but temporary products and/or reactants in the mechanism's reaction steps. Reaction intermediates are often confused with the transition state. The transition states are, in contrast, fleeting, high-energy species that cannot be isolated. The kinetics (relative rates of the reaction steps and the rate equation for the overall reaction) are discussed in terms of the energy required for the conversion of the reactants to the proposed transition states (molecular states that correspond to maxima on the reaction coordinates, and to saddle points on the potential energy surface for the reaction).
Chemical kinetics
Information about the mechanism of a reaction is often provided by analyzing chemical kinetics to determine the reaction order in each reactant.
Illustrative is the oxidation of carbon monoxide by nitrogen dioxide:
CO + NO2 → CO2 + NO
The rate law for this reaction is rate = k[NO2]^2.
This form shows that the rate-determining step does not involve CO. Instead, the slow step involves two molecules of NO2. A possible mechanism for the overall reaction that explains the rate law is:
2 NO2 → NO3 + NO (slow)
NO3 + CO → NO2 + CO2 (fast)
Each step is called an elementary step, and each has its own rate law and molecularity. The sum of the elementary steps gives the net reaction.
When determining the overall rate law for a reaction, the slowest step is the step that determines the reaction rate. Because the first step (in the above reaction) is the slowest step, it is the rate-determining step. Because it involves the collision of two NO2 molecules, it is a bimolecular reaction with a rate that obeys the rate law rate = k[NO2]^2.
Other reactions may have mechanisms of several consecutive steps. In organic chemistry, the reaction mechanism for the benzoin condensation, put forward in 1903 by A. J. Lapworth, was one of the first proposed reaction mechanisms.
A chain reaction is an example of a complex mechanism, in which the propagation steps form a closed cycle.
In a chain reaction, the intermediate produced in one step generates an intermediate in another step.
Intermediates are called chain carriers. The chain carriers are often radicals, but they can be ions as well. In nuclear fission, they are neutrons.
Chain reactions have several steps, which may include:
Chain initiation: this can be by thermolysis (heating the molecules) or photolysis (absorption of light) leading to the breakage of a bond.
Propagation: a chain carrier makes another carrier.
Branching: one carrier makes more than one carrier.
Retardation: a chain carrier may react with a product reducing the rate of formation of the product. It makes another chain carrier, but the product concentration is reduced.
Chain termination: radicals combine and the chain carriers are lost.
Inhibition: chain carriers are removed by processes other than termination, such as by forming radicals.
Even though all these steps can appear in one chain reaction, the minimum necessary ones are initiation, propagation, and termination.
An example of a simple chain reaction is the thermal decomposition of acetaldehyde (CH3CHO) to methane (CH4) and carbon monoxide (CO). The experimental reaction order is 3/2, which can be explained by a Rice-Herzfeld mechanism.
This reaction mechanism for acetaldehyde has 4 steps with rate equations for each step:
Initiation : CH3CHO → •CH3 + •CHO (Rate=k1 [CH3CHO])
Propagation: CH3CHO + •CH3 → CH4 + CH3CO• (Rate=k2 [CH3CHO][•CH3])
Propagation: CH3CO• → •CH3 + CO (Rate=k3 [CH3CO•])
Termination: •CH3 + •CH3 → CH3CH3 (Rate=k4 [•CH3]^2)
For the overall reaction, the rates of change of the concentration of the intermediates •CH3 and CH3CO• are zero, according to the steady-state approximation, which is used to account for the rate laws of chain reactions.
d[•CH3]/dt = k1[CH3CHO] − k2[•CH3][CH3CHO] + k3[CH3CO•] − 2k4[•CH3]^2 = 0
and d[CH3CO•]/dt = k2[•CH3][CH3CHO] – k3[CH3CO•] = 0
The sum of these two equations is k1[CH3CHO] − 2k4[•CH3]^2 = 0. This may be solved to find the steady-state concentration of •CH3 radicals as [•CH3] = (k1 / 2k4)^(1/2) [CH3CHO]^(1/2).
It follows that the rate of formation of CH4 is d[CH4]/dt = k2[•CH3][CH3CHO] = k2 (k1 / 2k4)^(1/2) [CH3CHO]^(3/2)
Thus the mechanism explains the observed 3/2-order rate expression for the principal products CH4 and CO. The exact rate law may be even more complicated; there are also minor products such as acetone (CH3COCH3) and propanal (CH3CH2CHO).
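The steady-state result can be verified numerically. The sketch below uses arbitrary, hypothetical rate constants chosen only for illustration and confirms that doubling [CH3CHO] multiplies the rate by 2^(3/2) ≈ 2.83, as a 3/2-order law requires:

```python
# Hypothetical rate constants (illustrative values, not measured data).
k1, k2, k4 = 1e-7, 1.0e3, 1.0e9  # k3 cancels out of the steady-state rate

def ch4_rate(acetaldehyde: float) -> float:
    """Steady-state rate of CH4 formation, d[CH4]/dt, for a given [CH3CHO]."""
    ch3 = (k1 / (2 * k4)) ** 0.5 * acetaldehyde ** 0.5  # steady-state [•CH3]
    return k2 * ch3 * acetaldehyde                      # k2 [•CH3][CH3CHO]

r1, r2 = ch4_rate(0.01), ch4_rate(0.02)
print(r2 / r1)  # ≈ 2.828, i.e. 2**1.5 -> overall reaction order 3/2
```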
Other experimental methods to determine mechanism
Many experiments that suggest the possible sequence of steps in a reaction mechanism have been designed, including:
measurement of the effect of temperature (Arrhenius equation) to determine the activation energy
spectroscopic observation of reaction intermediates
determination of the stereochemistry of products, for example in nucleophilic substitution reactions
measurement of the effect of isotopic substitution on the reaction rate
for reactions in solution, measurement of the effect of pressure on the reaction rate to determine the volume change on formation of the activated complex
for reactions of ions in solution, measurement of the effect of ionic strength on the reaction rate
direct observation of the activated complex by pump-probe spectroscopy
infrared chemiluminescence to detect vibrational excitation in the products
electrospray ionization mass spectrometry
crossover experiments
Theoretical modeling
A correct reaction mechanism is an important part of accurate predictive modeling. For many combustion and plasma systems, detailed mechanisms are not available or require development.
Even when information is available, identifying and assembling the relevant data from a variety of sources, reconciling discrepant values and extrapolating to different conditions can be a difficult process without expert help. Rate constants or thermochemical data are often not available in the literature, so computational chemistry techniques or group additivity methods must be used to obtain the required parameters.
Computational chemistry methods can also be used to calculate potential energy surfaces for reactions and determine probable mechanisms.
Molecularity
Molecularity in chemistry is the number of colliding molecular entities that are involved in a single reaction step.
A reaction step involving one molecular entity is called unimolecular.
A reaction step involving two molecular entities is called bimolecular.
A reaction step involving three molecular entities is called trimolecular or termolecular.
In general, reaction steps involving more than three molecular entities do not occur, because it is statistically improbable, in terms of the Maxwell–Boltzmann distribution, to find such a transition state.
| Physical sciences | Chemical reactions | null |
607577 | https://en.wikipedia.org/wiki/Activated%20complex | Activated complex | In chemistry, an activated complex represents a collection of intermediate structures in a chemical reaction when bonds are breaking and forming. The activated complex is an arrangement of atoms in an arbitrary region near the saddle point of a potential energy surface. The region represents not one defined state, but a range of unstable configurations that a collection of atoms pass through between the reactants and products of a reaction. Activated complexes have partial reactant and product character, which can significantly impact their behaviour in chemical reactions.
The terms activated complex and transition state are often used interchangeably, but they represent different concepts. Transition states only represent the highest potential energy configuration of the atoms during the reaction, while activated complex refers to a range of configurations near the transition state. In a reaction coordinate, the transition state is the configuration at the maximum of the diagram while the activated complex can refer to any point near the maximum.
Transition state theory (also known as activated complex theory) studies the kinetics of reactions that pass through a defined intermediate state with standard Gibbs energy of activation ΔG‡. The transition state, represented by the double dagger symbol ‡, is the exact configuration of atoms that has an equal probability of forming either the reactants or the products of the given reaction.
The activation energy is the minimum amount of energy required to initiate a chemical reaction and form the activated complex. It serves as a threshold that reactant molecules must surpass to overcome the energy barrier and transition into the activated complex. Endothermic reactions absorb energy from the surroundings, while exothermic reactions release energy. Some reactions occur spontaneously, while others necessitate an external energy input. The reaction can be visualized using a reaction coordinate diagram showing the activation energy and potential energy throughout the reaction.
Activated complexes were first discussed in transition state theory (also called activated complex theory), which was first developed by Eyring, Evans, and Polanyi in 1935.
Reaction rate
Transition state theory
Transition state theory explains the dynamics of reactions. The theory is based on the idea that there is an equilibrium between the activated complex and reactant molecules. The theory incorporates concepts from collision theory, which states that for a reaction to occur, reacting molecules must collide with a minimum energy and correct orientation. The reactants are first transformed into the activated complex before breaking into the products. From the properties of the activated complex and reactants, the reaction rate constant is

k = (k_B T / h) K

where K is the equilibrium constant, k_B is the Boltzmann constant, T is the thermodynamic temperature, and h is the Planck constant. Transition state theory is based on classical mechanics, as it assumes that as the reaction proceeds, the molecules will never return to the transition state.
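The prefactor k_B·T/h is a universal frequency of about 6.2 × 10^12 s^-1 at room temperature. A minimal sketch evaluating it, with a hypothetical equilibrium constant chosen purely for illustration:

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K
H = 6.62607015e-34  # Planck constant, J·s

def tst_rate_constant(temperature_k: float, k_equilibrium: float) -> float:
    """Transition-state rate constant k = (k_B * T / h) * K."""
    return (K_B * temperature_k / H) * k_equilibrium

print(K_B * 298.15 / H)                  # universal factor: ~6.21e12 s^-1
print(tst_rate_constant(298.15, 1e-10))  # hypothetical K -> ~621 s^-1
```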
Symmetry
An activated complex with high symmetry can decrease the accuracy of rate expressions. Error can arise from introducing symmetry numbers into the rotational partition functions for the reactants and activated complexes. To reduce such errors, symmetry numbers can be omitted and the rate expression multiplied instead by a statistical factor:
where the statistical factor is the number of equivalent activated complexes that can be formed, and the Q are the partition functions from which the symmetry numbers have been omitted.
The activated complex is a collection of molecules that forms and then breaks apart along a particular internal normal coordinate. Ordinary molecules have three translational degrees of freedom, and the properties of activated complexes are otherwise similar. However, activated complexes have an extra degree of translation associated with approaching the energy barrier, crossing it, and then dissociating.
| Physical sciences | Kinetics | Chemistry |
608264 | https://en.wikipedia.org/wiki/Beijing%20Subway | Beijing Subway | The Beijing Subway is the rapid transit system of Beijing Municipality that consists of 29 lines including 24 rapid transit lines, two airport rail links, one maglev line and two light rail tram lines, and 523 stations. The rail network extends across 12 urban and suburban districts of Beijing and into one district of Langfang in neighboring Hebei province. Between December 2023 and December 2024, the Beijing Subway became the world's longest metro system by route length, surpassing the Shanghai Metro. The system has since returned to being the world's second longest, with new lines being opened by the Shanghai Metro. With 3.8484 billion trips delivered in 2018 (10.544 million trips per day) and single-day ridership record of 13.7538 million set on July 12, 2019, the Beijing Subway was the world's busiest metro system in the years immediately prior to the outbreak of the COVID-19 pandemic.
The Beijing Subway opened in 1971 and is the oldest metro system in mainland China and on the mainland of East Asia. Before the system began its rapid expansion in 2002, the subway had only two lines. The existing network still cannot adequately meet the city's mass transit needs. The Beijing Subway's extensive expansion plans call for of lines serving a projected 18.5 million trips every day by the time the Phase 2 Construction Plan is finished (around 2025). The most recent expansion came into effect on December 15, 2024, with the openings of Line 3 and Line 12 and an extension of the Changping Line.
Fares
Fare schedules
Single-ride fare
The Beijing Subway charges single-ride fare according to trip distance for all lines except the two airport express lines.
For all lines except the two airport express lines, fares start at ¥3 for a trip of up to 6 km, with ¥1 added for the next 6 km, ¥1 for each additional 10 km until the trip distance reaches 32 km, and ¥1 for each additional 20 km beyond the first 32 km. A 40 km trip, for example, costs ¥7, as illustrated in the sketch below.
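The schedule above translates directly into a small step function of distance. A minimal sketch; the function name and the rounding-up of partial distance blocks are assumptions based on the schedule as worded:

```python
import math

def single_ride_fare(distance_km: float) -> int:
    """Distance-based single-ride fare in yuan (¥), excluding the two
    airport express lines, per the schedule described above."""
    if distance_km <= 6:
        return 3
    if distance_km <= 12:
        return 4
    if distance_km <= 32:
        # ¥1 for each started 10 km block between 12 and 32 km
        return 4 + math.ceil((distance_km - 12) / 10)
    # ¥1 for each started 20 km block beyond 32 km
    return 6 + math.ceil((distance_km - 32) / 20)

assert single_ride_fare(40) == 7  # matches the 40 km example in the text
```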
The Capital Airport Express has a fixed fare of ¥25 per ride.
The Daxing Airport Express is the only line to maintain class-based fares with ordinary class fare varying with distance from ¥10 to ¥35 and business class fare fixed at ¥50 per ride.
Same-station transfers are free on all subway lines except the two Airport Express lines, the Xijiao Line and the Yizhuang T1 Line, which require the purchase of a new fare when transferring to or from those lines.
Fare free riders
Children below in height ride for free when accompanied by a paying adult. Senior citizens over the age of 65, individuals with physical disabilities, retired revolutionary cadres, police and army veterans who had been wounded in action, military personnel and People's Armed Police can ride the subway for free.
Unlimited-rides fare
Since January 20, 2019, riders can purchase unlimited rides fare tickets using the Yitongxing (亿通行) APP on smartphones, which generates a QR code with effective periods of one to seven days.
Previous fare schedules
On December 28, 2014, the Beijing Subway switched from a fixed-fare schedule to the current distance-based fare schedule for all lines except the Capital Airport Express. Prior to the December 28, 2014, fare increase, passengers paid a flat fare of ¥2.00 (including unlimited fare-free transfers) on all lines except the Capital Airport Express, which cost ¥25. The flat fare was the lowest among metro systems in China. Before the flat-fare schedule was introduced on October 7, 2007, fares ranged from ¥3 to ¥7, depending on the line and the number of transfers.
Fare collection
Each station has two to fifteen ticket vending machines. Ticket vending machines on all lines can add credit to Yikatong cards. Single-ride tickets take the form of an RFID-enabled flexible plastic card.
Passengers must insert the ticket or scan the card at the gate both when entering and when exiting the station. The subway's fare-collection gates accept single-ride tickets and the Yikatong fare card. Passengers can purchase tickets and add credit to a Yikatong card at ticket counters or vending machines in every station. The Yikatong, also known as the Beijing Municipal Administration & Communication Card (BMAC), is an integrated circuit card that stores credit for the subway, urban and suburban buses, and e-money for other purchases. The Yikatong card itself must be purchased at the ticket counter. To enter a station, the Yikatong card must have a minimum balance of ¥3.00. Upon exiting the system, single-ride tickets are inserted into the turnstile, which collects them for reuse.
To prevent fraud, passengers are required to complete their journeys within four hours upon entering the subway. If the four-hour limit is exceeded, a surcharge of ¥3 is imposed. Each Yikatong card is allowed to be overdrawn once. The overdrawn amount is deducted when credits are added to the card.
Yikatong card users who spend more than ¥100 on subway fares in a calendar month receive credits to their card the following month. After reaching ¥100 of spending in one calendar month, 20% of any further spending is credited until total spending reaches ¥150. Beyond ¥150, 50% of any further spending is credited until total spending reaches ¥400. Once expenditures exceed ¥400, further spending earns no more credits. The credits are designed to ease the burden of the fare increases on commuters.
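The rebate bands can be expressed as a short function. A minimal sketch, assuming (as the thresholds above imply) that the 20% band covers total monthly spending between ¥100 and ¥150 and the 50% band covers ¥150 to ¥400:

```python
def monthly_credit(spend_yuan: float) -> float:
    """Yikatong credit earned on one calendar month's subway spending."""
    credit = 0.20 * max(0.0, min(spend_yuan, 150) - 100)   # 20% on ¥100-150
    credit += 0.50 * max(0.0, min(spend_yuan, 400) - 150)  # 50% on ¥150-400
    return credit  # spending above ¥400 earns nothing further

print(monthly_credit(200))  # 0.20*50 + 0.50*50 = ¥35.0
```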
Beginning in June 2017, single-journey tickets could be purchased via a phone app. A May 2018 upgrade allowed entrance via scanning a QR code from the same app.
Since the COVID-19 pandemic, a name and Chinese Resident Identity Card number must be entered when buying single-ride tickets for contact tracing purposes. This measure has been criticized for increasing the time spent buying tickets.
Lines in operation
Beijing Subway lines generally follow the checkerboard layout of the city. Most lines through the urban core (outlined by the Line 10 loop) run parallel or perpendicular to each other and intersect at right angles.
Lines through the urban core
The urban core of Beijing is roughly outlined by the Line 10 loop, which runs underneath or just beyond the 3rd Ring Road. Each of the following lines provides extensive service within the Line 10 loop. All have connections to seven or more lines. Lines 1, 4, 5, 6, 8, and 19 also run through the Line 2 loop, marking the old Ming-Qing era city of Beijing.
Line 1: straight east–west line underneath Chang'an Avenue, bisecting the city through Tiananmen Square. Line 1 connects major commercial centres, Xidan, Wangfujing, Dongdan and the Beijing CBD.
Line 2: the inner rectangular loop line that traces the Ming-era inner city wall which once surrounded the inner city, with stops at 11 of the wall's former gates (ending in men), now busy intersections on the 2nd Ring Road, as well as the Beijing railway station.
Line 3 runs from the eastern edge of the inner city to the northeast, through Sanlitun, Chaoyang Park and Chaoyang Station.
Line 4: mainly north–south line running to the west of city centre with stops at the Summer Palace, Old Summer Palace, Peking and Renmin Universities, Zhongguancun, National Library, Beijing Zoo, Xidan, Taoranting and Beijing South railway station.
Line 5: straight north–south line running to the east of the city centre. Line 5 passes the Temple of Earth, Yonghe Temple and the Temple of Heaven.
Line 6: east–west line running parallel and to the north of Line 1, passing through the city centre north of Beihai Park. At 53.4 km, Line 6 is the second longest Beijing Subway line after Line 10, and runs from Shijingshan District in the west to the Beijing City Sub-Center in Tongzhou District, terminating at Lucheng just beyond the eastern 6th Ring Road.
Line 7: east–west line running parallel and to the south of Line 1, from Beijing West railway station to . Line 7 serves the old neighborhoods of southern Beijing with stops at , Caishikou and .
Line 8: north–south line following the Beijing's central axis from Changping District through Huilongguan, the Olympic Green, Shichahai and Nanluoguxiang, where the line veers east of the Forbidden City and Tiananmen Square with stops at the National Art Museum and Wangfujing before returning to the central axis at Qianmen and continuing due south through Zhushikou and Yongdingmen to Heyi before turning southwest to Yinghai in Daxing District.
Line 9: north–south line running to the west of Line 4 from the National Library through the Military Museum and Beijing West railway station to Guogongzhuang in the southwestern suburbs.
Line 10, the outer loop line running beneath or just beyond the Third Ring Road. Apart from the Line 2 loop, which is entirely enclosed within the Line 10 loop, every other line through the urban core intersects with Line 10. In the north, Line 10 traces Beijing's Yuan-era city wall. In the east, Line 10 passes through the Beijing CBD.
Line 12: follows the northern section of the 3rd Ring Road and then continues further east into Chaoyang.
Line 13: arcs across the suburbs north of the city and carries commuters to Xizhimen and Dongzhimen, at the northwest and northeast corners of Line 2.
Line 14: inverted-L shaped line that connects the southwest, southeast and northeast parts of the city. From in the southwest, Line 14 runs due east, enters the Line 10 loop at Xiju, passes through Beijing South Railway Station, Yongdingmenwai, Puhuangyu and Fangzhuang, and leaves the Line 10 loop at Shilihe. It then turns north at Beijing University of Technology and runs south to north outside the Line 10 loop through the Beijing CBD, Chaoyang Park and Jiuxianqiao to Wangjing in the northeast.
Line 16: line from the northwest suburbs of Haidian District north of the Baiwang Mountain. It runs mostly north–south, entering the Line 10 loop at and , then continuing south through and , before reaching and . It then turns west through before ending at in Fengtai District.
Line 19: north–south line from to with stops inside the Line 2 loop at and near Beijing Financial Street.
Lines serving outlying suburbs
Each of the following lines provides service predominantly to one or more of the suburbs beyond the 5th Ring Road. Lines 15 and S1, along with the Changping, Daxing and Yanfang lines, extend beyond the 6th Ring Road.
Line 11 currently runs from to in Shijingshan District.
Line 15: east–west line which runs between the northern 4th and 5th Ring Roads from the east of Tsinghua University, through the Olympic Green and Wangjing, turning northeast to suburban Shunyi District.
Line 17 currently runs from to in its south section, mainly serving Tongzhou District, whilst the north section currently runs from to , mainly serving Changping District and northern Chaoyang District.
Batong line extends Line 1 eastward from Sihui to suburban Tongzhou District.
Changping line starts at in Haidian District, passing through and before intersecting with Line 13 at and , and then running north through suburban Changping District. The line then passes the , , and the .
Daxing line extends Line 4 south to suburban Daxing District.
Fangshan line goes from in Fengtai District to in Fangshan District in the southwestern suburbs.
Yanfang line extends the Fangshan line further into western Fangshan District.
Yizhuang line extends from Line 5's southern terminus to the Yizhuang Economic & Technological Development Zone in the southeastern suburbs.
Capital Airport Express connects the Beijing Capital International Airport, northeast of the city, with Line 5 at Beixinqiao, Line 10 at Sanyuanqiao and Lines 2 and 13 at Dongzhimen.
Daxing Airport Express connects the Beijing Daxing International Airport, south of the city, with Line 10 at Caoqiao.
Line S1, a low-speed maglev line connecting suburban Mentougou District with Line 6 in Shijingshan District.
Xijiao line, a light rail line that branches off Line 10 at Bagou and extends west to .
Yizhuang T1 line, a light rail line running from Quzhuang in Daxing District to Dinghaiyuan in Tongzhou District.
Future expansion
Phase II
According to the Phase 2 construction plan approved by the NDRC in 2015, the length of the Beijing Subway will reach when Phase 2 construction is finished. By then, public transit is expected to account for 60% of all trips, and the subway for 62% of those transit trips. An adjustment of the Phase 2 construction plan, approved by the NDRC on December 5, 2019, altered and expanded some projects in the plan, including adjusted alignments for Line 22 and Line 28 and additional projects such as the Daxing Airport Line north extension, the west section of Line 11, and the splitting of Line 13 into two lines, 13A and 13B.
Phase III (2022–2027)
According to the information released in July 2022, the "Beijing Rail Transit Phase III Construction Plan" includes 11 construction projects: Line 1 Branch, Line 7 Phase 3, Line 11 Phase 2, Line 15 Phase 2, Line 17 Phase 2 (Branch), Line 19 Phase 2, Line 20 Phase 1, Fangshan line (Line 25) Phase 3 (also known as Lijin Line), Line M101 Phase 1, Line S6 (New Town Link Line) Phase 1, and the connecting line between Yizhuang line, Line 5 and Line 10.
Owner and operators
The Beijing Subway is owned by the Beijing Municipal People's Government through the Beijing Infrastructure Investment Co., LTD, (北京市基础设施投资有限公司 or BIIC), a wholly owned subsidiary of the Beijing State-owned Assets Supervision and Administration Commission (北京市人民政府国有资产监督管理委员会 or Beijing SASAC), the municipal government's asset holding entity.
The Beijing Subway was originally developed and controlled by the Central Government. The subway's construction and planning was headed by a special committee of the State Council. In February 1970, Premier Zhou Enlai handed management of the subway to the People's Liberation Army, which formed the PLA Rail Engineering Corp Beijing Subway Management Bureau. In November 1975, by order of the State Council and Central Military Commission the bureau was placed under the authority of Beijing Municipal Transportation Department.
On April 20, 1981, the bureau became the Beijing Subway Company, which was a subsidiary of the Beijing Public Transportation Company.
In July 2001, the Beijing Municipal Government reorganized the subway company into the Beijing Subway Group Company Ltd., a wholly city-owned holding company, which assumed ownership of all of the subway's assets. In November 2003, the assets of the Beijing Subway Group Company were transferred to the newly created BIIC.
The Beijing Subway has five operators:
The main operator is the wholly state-owned Beijing Mass Transit Railway Operation Corp. (北京市地铁运营有限公司 or Beijing Subway OpCo), which was formed in the reorganization of the original Beijing Subway Group Company in 2001, and operates 15 lines: Lines 1, 2, 5–10, 13, 15, Batong line, Changping line, Fangshan line, Yizhuang line and S1 line.
The Beijing MTR Corp. (北京京港地铁有限公司 or Beijing MTR), a public–private joint venture formed in 2005 by Beijing Capital Group, a state company under Beijing SASAC (with 49% equity ownership), MTR Corporation of Hong Kong (49%), and BIIC (2%), operates five lines: Lines 4, 14, 16 and 17 and the Daxing line.
The (北京市轨道交通运营管理有限公司 or BJMOA), a subsidiary of Beijing Metro Construction Administration Corporation Ltd. (北京市轨道交通建设管理有限公司 or BJMCA) also under Beijing SASAC, became the third company to obtain operation rights for the Beijing Subway in 2015. The BJMOA operates the Yanfang line, Daxing Airport Express, and Line 19. Its corporate parent, BJMCA, is a general contractor for Beijing Subway construction.
The Beijing Public Transit Tramway Co., Ltd. (北京公交有轨电车有限公司), formed in 2017, is a wholly owned subsidiary of Beijing Public Transport Corporation (北京公共交通控股(集团)有限公司 or BPTC) that operates the Xijiao line. Its corporate parent, BPTC, is the city's main public bus operator.
The (北京京城地铁有限公司), also branded as "Capital Metro" (京城地铁) in their official logo, operates the Capital Airport Express. Beijing City Metro Ltd. is a joint venture established on February 15, 2016, between Beijing Subway OpCo (51%) and BII Railway Transportation Technology Holdings Company Limited (49%)(京投轨道交通科技控股有限公司), a Hong Kong listed company (1522.HK) controlled by BIIC. On March 27, 2017, Beijing City Metro Ltd. acquired a 30-year right to operate the Capital Airport Express and sections of the Dongzhimen subway station.
Rolling stock
All subway train sets run on standard gauge rail, except the maglev trains on Line S1, which run on a maglev track. The Beijing Subway operates Type B trains on most lines. However, due to increasing congestion on the network, higher-capacity Type A trains are increasingly being used. Additionally, Type D trains are used on express subway lines.
Until 2003 nearly all trains were manufactured by Changchun Railway Vehicles Co., Ltd., now a division of CRRC. The newest Line 1 trains and those on Lines 4, 8, Batong, Changping and Daxing are made by CRRC Qingdao Sifang Co., Ltd. Line S1's maglev trains were produced by CRRC Tangshan.
The Beijing Subway Rolling Stock Equipment Co. Ltd., a wholly owned subsidiary of the Beijing Mass Transit Railway Operation Corp. Ltd., provides local assemblage, maintenance and repair services.
Automated lines
There will be six fully automated (GoA4) lines, including four already in operation (the Yanfang line, Line 17, Line 19 and the Daxing Airport Express) and two under construction (Line 3 and Line 12), all using domestically developed communications-based train control systems.
History
1953–1965: origins
The subway was proposed in September 1953 by the city's planning committee and experts from the Soviet Union. After the end of the Korean War, Chinese leaders turned their attention to domestic reconstruction. They were keen to expand Beijing's mass transit capacity but also valued the subway as an asset for civil defense. They studied the use of the Moscow Metro to protect civilians, move troops and headquarter military command posts during the Battle of Moscow, and planned the Beijing Subway for both civilian and military use.
At that time, the Chinese lacked expertise in building subways and drew heavily on Soviet and East German technical assistance. In 1954, a delegation of Soviet engineers, including some who had built the Moscow Metro, was invited to plan the subway in Beijing. From 1953 to 1960, several thousand Chinese university students were sent to the Soviet Union to study subway construction. An early plan unveiled in 1957 called for one ring route and six other lines with 114 stations and of track. Two routes vied to be built first. One ran east–west from Wukesong to Hongmiao, underneath Chang'an Avenue. The other ran north–south from the Summer Palace to Zhongshan Park, via Xizhimen and Xisi. The former was chosen for its more favorable geology and the greater number of government bureaus it served. The second route would not be built until construction on Line 4 began forty years later.
The original proposal called for deep subway tunnels that could better serve military functions. Between Gongzhufen and Muxidi, shafts as deep as were being dug. At the time, the world's deepest subway station, in the Kyiv Metro, was only deep. But Beijing's high water table and the high pressure head of groundwater complicated construction and posed a risk of leakage; together with the inconvenience of transporting passengers long distances from the surface, this led the authorities to abandon the deep-tunnel plan in May 1960 in favor of cut-and-cover shallow tunnels some below the surface.
The deterioration of relations between China and the Soviet Union disrupted subway planning. Soviet experts began to leave in 1960 and were completely withdrawn by 1963. In 1961, the entire project was temporarily halted due to severe hardships caused by the Great Leap Forward. Eventually, planning work resumed. The route of the initial line was shifted westward to create an underground conduit to move personnel from the heart of the capital to the Western Hills. On February 4, 1965, Chairman Mao Zedong personally approved the project.
1965–1981: the slow beginning
Construction began on July 1, 1965, at a groundbreaking ceremony attended by several national leaders, including Zhu De, Deng Xiaoping, and Beijing mayor Peng Zhen. The most controversial outcome of the initial subway line was the demolition of Beijing's historic inner city wall to make way for the subway. Construction plans for the subway from Fuxingmen to the Beijing Railway Station called for the removal of the wall, as well as the gates and archery towers at Hepingmen, Qianmen, and Chongwenmen. Leading architect Liang Sicheng argued for protecting the wall as a landmark of the ancient capital. Chairman Mao favored demolishing the wall over demolishing homes. In the end, Premier Zhou Enlai managed to preserve several walls and gates, such as the Qianmen gate and its arrow tower, by slightly altering the course of the subway.
The initial line was completed and began trial operations in time to mark the 20th anniversary of the founding of the People's Republic on October 1, 1969. It ran from Gucheng to the Beijing Railway Station and had 16 stations. This line forms parts of present-day Lines 1 and 2. It was the first subway to be built in China, and predates the metros of Hong Kong, Seoul, Singapore, San Francisco, and Washington, D.C., but technical problems would plague the project for the next decade.
Initially, the subway hosted guest visits. On November 11, 1969, an electrical fire killed three people, injured over 100 and destroyed two cars. Premier Zhou Enlai placed the subway under the control of the People's Liberation Army in early 1970, but reliability problems persisted.
On January 15, 1971, the initial line began operation on a trial basis between the Beijing railway station and . Single ride fare was set at ¥0.10 and only members of the public with credential letters from their work units could purchase tickets. The line was in length, had 10 stations and operated more than 60 train trips per day with a minimum wait time of 14 minutes. On August 15, the initial line was extended to and had 13 stations over . On November 7, the line was extended again, to Gucheng Lu, and had 16 stations over . The number of trains per day rose to 100. Overall, the line delivered 8.28 million rides in 1971, averaging 28,000 riders per day.
From 1971 to 1975, the subway was shut down for 398 days for political reasons. On December 27, 1972, riders no longer needed to present credential letters to purchase tickets. In 1972, the subway delivered 15 million rides and averaged 41,000 riders per day. In 1973, the line was extended to and reached in length with 17 stations and 132 train trips per day. The line delivered 11 million rides in 1973, averaging 54,000 riders per day.
Despite its return to civilian control in 1976, the subway remained prone to closures due to fires, flooding, and accidents. Annual ridership grew from 22.2 million in 1976 to 28.4 million in 1977, 30.9 million in 1978, and 55.2 million in 1980.
1981–2000: two lines for two decades
On April 20, 1981, the Beijing Subway Company, then a subsidiary of the Beijing Public Transportation Company, was organized to take over subway operations. On September 15, 1981, the initial line passed its final inspections, and was handed over to the Beijing Subway Company, ending a decade of trial operations. It had 19 stations and ran from Fushouling in the Western Hills to the Beijing railway station. Investment in the project totaled ¥706 million. Annual ridership rose from 64.7 million in 1981 and 72.5 million in 1982 to 82 million in 1983.
On September 20, 1984, a second line was opened to the public. This horseshoe-shaped line was created from the eastern half of the initial line and corresponds to the southern half of the present-day Line 2. It ran from to with 16 stations. Ridership reached 105 million in 1985.
On December 28, 1987, the two existing lines were reconfigured into Line 1, which ran from Pingguoyuan to Fuxingmen, and Line 2, in its current loop tracing the Ming city wall. Fares doubled to ¥0.20 for single-line rides and ¥0.30 for rides with transfers. Ridership reached 307 million in 1988. The subway was closed from June 3–4, 1989, during the suppression of the Tiananmen Square demonstrations. In 1990, the subway carried more than one million riders per day for the first time, as total ridership reached 381 million. After a fare hike to ¥0.50 in 1991, annual ridership declined slightly to 371 million.
On January 26, 1991, planning began on the eastward extension of Line 1 under Chang'an Avenue from Fuxingmen. The project was funded by a 19.2 billion yen low-interest development assistance loan from Japan. Construction began on the eastern extension on June 24, 1992, and the Xidan station opened on December 12, 1992. The remaining extension to was completed on September 28, 1999. National leaders Wen Jiabao, Jia Qinglin, Yu Zhengsheng and mayor Liu Qi were on hand to mark the occasion. The full length of Line 1 became operational on June 28, 2000.
Despite little track expansion in the early 1990s, ridership grew rapidly to reach a record high of 558 million in 1995, but fell to 444 million the next year when fares rose from ¥0.50 to ¥2.00. After fares rose again to ¥3.00 in 2000, annual ridership fell to 434 million from 481 million in 1999.
2001–2008: planning for the Olympics
In the summer of 2001, the city won the bid to host the 2008 Summer Olympics and accelerated plans to expand the subway. From 2002 to 2008, the city planned to invest ¥63.8 billion (US$7.69 billion) in subway projects and build an ambitious subway network. The plan, termed "three ring, four horizontal, five vertical and seven radial" in 2007, consisted of 19 lines:
Three ring lines: 2, 10 and 13
Four horizontal lines: 1, 6, 7, 14 (West)
Five vertical lines: 4, 5, 8, 9, 14 (East)
Seven radial lines: Batong, Changping, Daxing, Fangshan, Shunyi (Line 15), Yizhuang, Line S1
Work on Line 5 had already begun on September 25, 2000. Land clearing for Lines 4 and 10 began in November 2003 and construction commenced by the end of the year. Most new subway construction projects were funded by loans from the Big Four state banks. Line 4 was funded by the Beijing MTR Corporation, a joint-venture with the Hong Kong MTR. To achieve plans for 19 lines and by 2015, the city planned to invest a total of ¥200 billion ($29.2 billion).
The next additions to the subway were surface commuter lines that linked to the north and east of the city. Line 13, a half loop that links the northern suburbs, first opened on the western half from Huilongguan to Xizhimen on September 28, 2002 and the entire line became operational on January 28, 2003. Batong line, built as an extension to Line 1 to Tongzhou District, was opened as a separate line on December 27, 2003. Work on these two lines had begun respectively in December 1999 and 2000. Ridership hit 607 million in 2004.
Line 5 came into operation on October 7, 2007. It was the city's first north–south line, extending from in the south to in the north. On the same day, subway fares were reduced from between ¥3 and ¥7 per trip, depending on the line and number of transfers, to a single flat fare of ¥2 with unlimited transfers. The lower fare policy caused the Beijing Subway to run a deficit of ¥600 million in 2007, which was expected to widen to ¥1 billion in 2008. The Beijing municipal government covered these deficits to encourage mass transit use and to reduce traffic congestion and air pollution. Across the 655 million rides delivered in 2007, the government subsidy averaged ¥0.92 per ride.
As part of the urban redevelopment for the 2008 Olympics, the subway system was significantly expanded. In the summer of 2008, in anticipation of the Summer Olympic Games, three new lines—Line 10 (Phase 1), Line 8 (Phase 1) and the Capital Airport Express—opened on July 19. The use of paper tickets, hand-checked by clerks for 38 years, was discontinued and replaced by electronic tickets scanned by automatic fare collection machines upon entry and exit of the subway. Stations were outfitted with touch-screen vending machines selling single-ride tickets and multiple-ride Yikatong fare cards. The subway operated throughout the night from August 8–9, 2008, to accommodate the Opening Ceremony of the Olympic Games, and extended evening operations on all lines by one to three hours (to 1–2 a.m.) for the duration of the Games. The subway set a daily ridership record of 4.92 million on August 22, 2008, shortly before the Games' closing ceremony. In 2008, total ridership rose by 75% to 1.2 billion.
2008–2015: rapid expansion
After the Chinese government announced a ¥4 trillion economic stimulus package in November 2008, the Beijing urban planning commission further expedited subway building plans, especially for elevated lines to suburban districts that are cheaper to build. In December 2008, the commission moved completion dates of the Yizhuang and Daxing Lines to 2010 from 2012, finalized the route of the Fangshan Line, and unveiled the Changping and Xijiao Lines.
Line 4 started operation on September 28, 2009, bringing subway service to much of western Beijing. It is managed by the MTR Corporation through a joint venture with the city. In 2009, the subway delivered 1.457 billion rides, 19.24% of mass transit trips in Beijing.
In 2010, Beijing's worsening traffic congestion prompted city planners to move the construction of several lines from the 13th Five Year Plan to the 12th Five Year Plan. This meant Lines 8 (Phase III), , , , the Yanfang line, as well as additional lines to Changping District and Tiantongyuan were to begin construction before 2015. Previously, Lines 3, 12 and 16 were being planned for the more distant future. On December 30, 2010, five suburban lines: Lines 15 (Phase I from to except Wangjing East station), Changping, Fangshan (except Guogongzhuang station), Yizhuang (except Yizhuang railway station), and Daxing, commenced operation. The addition of of track, a nearly 50% increase, made the subway the fourth longest metro in the world. One year later, on December 31, 2011, the subway surpassed the New York City Subway to become the third longest metro in revenue track length with the extension of Line 8 north from the to , the opening of Line 9 in southwest Beijing from Beijing West railway station to (except , which opened on October 12, 2012), the extension of the Fangshan Line to Guogongzhuang, and the extension of Line 15 from to in central Shunyi. In the same year, the Beijing government unveiled an ambitious expansion plan envisioning the subway network to reach a track density of 0.51 km per km2 (0.82 mi per sq. mi.) inside the Fifth Ring Road where residents would on average have to walk to the nearest subway station. Ridership reached 2.18 billion in 2011.
In February 2012, the city government confirmed that Lines , , , and were under planning as part of the Phase II expansion, retroactively implying that the original "three ring, four horizontal, five vertical and seven radial" plan constituted Phase I. Line 17 was planned to run north–south, parallel to and east of Line 5, from Future Science Park North to Yizhuang Zhanqianqu South. Line 19 was planned to run north–south from Mudanyuan to Xin'gong.
On December 30, 2012, Line 6 (Phase I from to ), the extension of Line 8 from south to (except ), the remainder of Line 9 (except Military Museum station) and the remainder of the Line 10 loop (except the - section and Jiaomen East station) entered service. The addition of of track increased the network length to and allowed the subway to overtake the Shanghai Metro, for several months, as the world's longest metro. The subway delivered 2.46 billion rides in 2012.
On May 5, 2013, the Line 10 loop was completed with the opening of the Xiju-Shoujingmao section and the Jiaomen East Station. The loop line became the longest underground subway loop in the world. On the same day, the first section of Line 14 from to Xiju also entered operation, ahead of the opening of the Ninth China International Garden Expo in Fengtai District. The subway's total length reached . On December 28, 2013, two sections were added to Line 8, which extended the line north to Zhuxinzhuang and south to Nanluoguxiang. In 2013, the subway delivered 3.209 billion rides, an increase of 30% from the year before.
On December 28, 2014, the subway network expanded by to 18 lines and with the opening of Line 7, the eastern extension of Line 6 (from to ), the eastern section of Line 14 (from to ), and the western extension of Line 15 (from to ). At the same time, the ¥2 flat-rate fare was replaced with a distance-based fare (minimum ¥3) to help cover operating costs. In 2014, the subway delivered 3.387 billion rides, an increase of 5.68% from the year before. Average daily and weekday ridership also set new highs of 9.2786 million and 10.0876 million, respectively.
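A minimal sketch of how such a distance-based fare with a ¥3 minimum can be computed follows. The tier boundaries and the `distance_fare` helper are illustrative assumptions, not figures stated in this article.

```python
import math

def distance_fare(km: float) -> int:
    """Distance-based fare with a ¥3 minimum.

    The tier boundaries are illustrative assumptions: ¥3 up to 6 km,
    ¥4 up to 12 km, ¥5 up to 22 km, ¥6 up to 32 km, then ¥1 for each
    additional 20 km (or part thereof).
    """
    if km <= 6:
        return 3
    if km <= 12:
        return 4
    if km <= 22:
        return 5
    if km <= 32:
        return 6
    return 6 + math.ceil((km - 32) / 20)

print(distance_fare(5))   # 3 (the minimum fare)
print(distance_fare(40))  # 7 (6 + one extra 20 km increment)
```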
From 2007 to 2014, the cost of subway construction in Beijing rose sharply from ¥0.571 billion per km to ¥1.007 billion per km. The cost includes land acquisition, compensation to relocate residents and firms, actual construction costs and equipment purchases. In 2014, the city budgeted ¥15.5 billion for subway construction, and the remainder of subway building costs was financed by the Beijing Infrastructure Investment Co. LTD, a city-owned investment firm.
In 2014, Beijing planning authorities assessed mass transit monorail lines for areas of the city in which subway construction or operation is difficult. Straddle-beam monorail trains have lower transport capacity and operating speed than conventional subways, but are quieter, have a smaller turning radius and better climbing capability, and cost only one-third to one-half as much as subways to build. According to the initial environmental assessment report by the Chinese Academy of Rail Sciences, the Yuquanlu Line was planned to have 21 stations over in western Beijing. The line was to begin construction in 2014 and would take two years to complete. The Dongsihuan Line (named for the Eastern Fourth Ring Road it was to follow) was planned to have 21 stations over .
In early 2015, plans for both monorail lines were shelved indefinitely, due to low capacity and resident opposition. The Yuquanlu Line remains on the city's future transportation plan, and it will be built as a conventional underground subway line. The Dongsihuan Line was replaced by the East extension of Line 7.
On December 26, 2015, the subway network expanded to with the opening of the section of Line 14 from Beijing South railway station to (11 stations; ), Phase II of the Changping line from to (5 stations; ), Andelibeijie station on Line 8, and Datunlu East station on Line 15. Ridership in 2015 fell by 4% to 3.25 billion due to the switch from a flat fare to a higher, distance-based fare.
2015–present: Phase II projects
With the near completion of the three ring, four horizontal, five vertical and seven radial subway network, work began on Phase II expansion projects. These new extensions and lines were expected to be operational in 2019–2021. The following lines were included in the approved Phase II construction plans:
Line 3
Line 12
Line 17
Line 19: Phase 1
Line 7: Phase 2 (eastern extension)
Line 8: Phase 4
Capital Airport Express: Phase 2 (western extension)
Fangshan line: Phase 2 (northern extension)
Changping line: Phase 2 (southern extension)
Batong line: Phase 2 (southern extension)
Line 22
CBD line
On December 9, 2016, construction started on of new line, with the southern extension of the Batong line, the southern extension of the Changping line, the Pinggu line, phase one of the New Airport line, and Line 3 Phase I breaking ground. The northern section of Line 16 opened on December 31, 2016. Ridership reached a new high of 3.66 billion. On December 30, 2017, a one-station extension of the Fangshan line (Suzhuang – Yancun East), the Yanfang line (Yancun East – Yanshan), the Xijiao line (Bagou – Fragrant Hills) and the S1 line (Shichang – Jin'anqiao) were opened. On December 30, 2018, the western extension of Line 6 (Jin'anqiao – Haidian Wuluju), the south section of Line 8 (Zhushikou – Yinghai), a one-station extension of Line 8's north section (Nanluoguxiang – National Art Museum), and a one-station extension of the Yizhuang line (Ciqu – Yizhuang Railway Station) were opened. On September 26, 2019, the Daxing Airport Express (Phase 1) (Caoqiao – Daxing Airport) was opened. On December 28, 2019, the eastern extension of Line 7 (Jiaohuachang – Huazhuang) and the southern extension of the Batong line (Tuqiao – Huazhuang) were opened. A 2019 revision to the Phase II plans added Line 11 (a branch line for the 2022 Winter Olympics) and a project to split Line 13 to the construction schedule.
On January 24, 2020, the day after a lockdown was declared in the city of Wuhan to contain the outbreak of COVID-19 in China, the Beijing Subway began testing passengers' body temperatures at 55 subway stations, including those serving the three main railway stations and the Capital Airport. Temperature checks expanded to all subway stations by January 27.
On April 4, 2020, at 10:00am, Beijing Subway trains joined in China's national mourning of lives lost in the COVID-19 pandemic, by stopping for three minutes and sounding their horns three times, as conductors and passengers stood in silence. To control the spread of COVID-19, certain Line 6 trains were outfitted with smart surveillance cameras that can detect passengers not wearing masks.
In May 2020, the Beijing Subway began to pilot a new style of wayfinding on Line 13 and the Airport Express. However, the new designs have not since been rolled out to other lines, or even to new lines that opened afterward.
On December 31, 2020, the middle section of Line 16 (Xiyuan – Ganjiakou), the northern section of the Fangshan line (Guogongzhuang – Dongguantou South), and the Yizhuang T1 tram line were opened.
On August 26, 2021, Line 7 and the Batong line were extended to station. On August 29, 2021, through operation of Line 1 and the Batong line started. On December 31, 2021, the initial sections of Line 11 (Jin'anqiao – Shougang Park), Line 17 (Shilihe – Jiahuihu) and Line 19 (Mudanyuan – Xingong); extensions of the Capital Airport Express (Dongzhimen – Beixinqiao), the Changping line (Xierqi – Qinghe Railway Station), Line S1 (Jin'anqiao – Pingguoyuan) and Line 16 (Ganjiakou – Yuyuantan Park East Gate); and the central sections of Line 8 (Zhushikou – National Art Museum) and Line 14 (Beijing South Railway Station – Xiju) were opened. The opening of the central sections of Lines 8 and 14, along with the final section of Line S1, completed the three ring, four horizontal, five vertical and seven radial subway network plan (retroactively named the Phase I expansion).
On July 30, 2022, stations Beitaipingzhuang, Ping'anli, Taipingqiao, Jingfengmen of Line 19 were opened. On December 31, 2022, the extension of Line 16 (Yuyuantan Park East Gate - Yushuzhuang) was opened.
On January 18, 2023, cross-line through operation between the Fangshan line and Line 9 began during weekday morning and evening peak hours. On February 4, 2023, the extension of the Changping line (Qinghe Railway Station – Xitucheng) was opened.
On December 15, 2024, Lines 3 and 12 were opened, together with the remainder of the Changping line's southern extension. By the end of 2024, all of Beijing's seven major railway stations and both international airports had been connected to the metro network.
Ridership
Facilities
Accessibility
Each station is equipped with ramps, lifts, or elevators to facilitate wheelchair access. Newer model train cars provide space to accommodate wheelchairs. Automated audio announcements for incoming trains are available on all lines, and station names are announced in Mandarin Chinese and English. Under subway regulations, riders with mobility limitations may obtain assistance from subway staff to enter and exit stations and trains, and visually impaired riders may bring assistance devices and guide dogs into the subway.
Cellular network coverage
Mobile phones can be used throughout the network. In 2014, the Beijing Subway started upgrading its cellular networks to 4G, and by 2016 the entire network had 4G coverage. Since 2019, 5G coverage has been rolled out across the network.
Commercial facilities
In the 1990s a number of fast food and convenience stores operated in the Beijing Subway. In 2002, fourteen Wumart convenience stores opened in various Line 2 stations.
After the Daegu subway fire in February 2003, the Beijing Subway gradually removed the 80 newsstands and fast food restaurants across 39 stations on Lines 1 and 2. The popular underground mall at Xidan station was closed. This was in contrast to other systems in China, which added more station commerce as they rapidly expanded their networks. Under this policy, new lines opened without any station commerce.
Passengers consistently complained that the lack of station commerce in the Beijing Subway was inconvenient. In the early 2010s, the Beijing Subway started reversing some of these policies. Vending machines selling drinks and snacks have gradually been introduced in stations since 2013. Later, machines selling common items such as flowers, earphones and masks were also introduced. In 2013, China Resources Vanguard and FamilyMart expressed interest in opening convenience stores in the Beijing Subway, but this never materialized.
Passenger satisfaction surveys conducted since 2018 have shown that more than 70% of passengers want convenience stores in subway stations, especially for hot and cold drinks, ready-to-eat food, and bento meals. In December 2020, "the deployment of 130 convenient service facilities at subway stations" was listed as a key project for the Beijing municipal government. On July 25, 2021, the Beijing Subway selected three stations, Hepingli Beijie on Line 5, Qingnian Lu on Line 6, and Caishikou on Line 7, for a pilot program of convenience store openings. Since December 2021, station commerce has been rolled out rapidly and at scale across the network, with a variety of commercial establishments such as bookstores, pharmacies, flower shops and specialty vendors being built inside stations.
Information hotline and app
The Beijing Subway telephone hotline was initiated on the eve of the 2008 Summer Olympic Games to provide traveler information, receive complaints and suggestions, and file lost and found reports. The hotline combined the nine public service telephones of various subway departments. On December 29, 2013, the hotline number was switched from (010)-6834-5678 to (010)-96165 for abbreviated dialing. In December 2014, the hotline began offering fare information, as the subway switched to distance-based fare. The hotline has staffed service from 5 am to midnight and has automated service during unstaffed hours.
The Beijing Subway has an official mobile application and a number of third-party apps.
English station names
According to rules released in 2006, all place names, common names and proper names of subway stations and bus stops should use uppercase Hanyu Pinyin. For example, Nanlishi Lu Station should be written as NANLISHILU Station. However, names of venues may use an English translation, such as Military Museum.
According to the translation standard released in December 2017, station names on rail transit and public transport must comply with this standard.
Since December 2018, the Beijing Subway has changed the format of new subway station names almost every year. On the December 2018 subway map, station names used Roman script chosen with English writing habits and pronunciation in mind. The format changed to a verbatim romanization in December 2019, in which the compass positions (East, South, West and North) were written in Hanyu Pinyin with an English abbreviation appended.
Since December 31, 2021, the Beijing Subway has used a new station-name format. The Pinyin "Zhan" is used instead of the English word "Station" on the light boxes at subway entrances. This caused strong disagreement; citizens criticized it with comments like "Chinese do not need to read and foreigners cannot read it". Some landmark-named stations use the Chinese name, Hanyu Pinyin and an English translation. Station names ending with compass positions no longer carry an English abbreviation. Some stations that used English translated names (such as Shahe Univ. Park, Life Science Park and Liangxiang Univ. Town) changed to Hanyu Pinyin only (the new names being Shahe Gaojiaoyuan, Shengming Kexueyuan and Liangxiang Daxuecheng).
System upgrades
Capacity
With new lines drawing more riders to the network, the subway has experienced severe overcrowding, especially during rush hour. Since 2015, significant sections of Lines 1, 4 – Daxing, 5, 10, 13, Batong and Changping have been officially over capacity during rush hour. By 2019, Lines 1, 2, 4, 5, 6 and 10 each had weekday riderships of over 1 million passengers a day. As a short-term response, the subway upgraded electrical, signal and yard equipment to increase the frequency of trains and add capacity. Peak headways have been reduced to 1 min. 43 sec. on Line 4; 1 min. 45 sec. on Lines 1/Batong, 5, 9, and 10; 2 min. on Lines 2, 6, 13 and Changping; 2 min. 35 sec. on Line 15; 3 min. 30 sec. on Line 8; and 15 min. on the Airport Express. The Beijing Subway is investigating the feasibility of reducing headways on Line 10 to 1 min. 40 sec.
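To see how headway reductions translate into capacity: trains per hour is simply 3600 divided by the headway in seconds, and hourly line capacity scales with train size. In the sketch below, only the headway comes from the figures above; the per-train passenger figure is an illustrative assumption.

```python
def trains_per_hour(headway_seconds: float) -> float:
    """Number of trains passing a point per hour at a given headway."""
    return 3600 / headway_seconds

def line_capacity(headway_seconds: float, passengers_per_train: int) -> float:
    # Hourly one-direction capacity = trains per hour x passengers per train.
    return trains_per_hour(headway_seconds) * passengers_per_train

# Line 4's peak headway of 1 min. 43 sec. (103 s) allows about 35 trains/hour.
print(round(trains_per_hour(103)))      # 35
# Assuming roughly 1,460 passengers per 6-car Type B train (an illustrative
# figure), that is about 51,000 passengers per hour per direction.
print(round(line_capacity(103, 1460)))  # 51029
```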
Lines 13 and Batong have converted from 4-car to 6-car trains. Lines 6 and 7 have longer platforms that can accommodate 8-car Type B trains, while Lines 14, 16, 17 and 19 use higher-capacity wide-body Type A trains (all of these except Line 14 use eight-car trains). New lines that cross the city center, such as Lines 3 and 12, now under construction, will also adopt high-capacity 8-car Type A trains, offering a 70 percent increase in capacity over older lines using 6-car Type B trains. When completed, these lines are expected to greatly relieve overcrowding in the existing network.
Despite these efforts, during the morning rush hour, conductors at line terminals and other busy stations must routinely restrict the number of passengers boarding each train to prevent it from becoming too crowded for passengers waiting at stations down the line. Some of these stations have built queuing lines outside to manage the flow of waiting passengers. As of August 31, 2011, 25 stations, mainly on Lines 1, 5, 13, and Batong, had imposed such restrictions. By January 7, 2013, 41 stations on Lines 1, 2, 5, 13, Batong, and Changping had instituted passenger flow restrictions during the morning rush hour. The number of stations with passenger flow restrictions reached 110 in January 2019, affecting all lines except Lines 15, 16, Fangshan, Yanfang and S1. Lines 4, 5, 10 and 13 strategically run several empty trains during rush hour, bound for specific stations, to help clear queues at busy stations. Counter-peak-flow express trains began operating on Lines 15, Changping and Batong to minimize line runtimes and allow the existing fleet to serve more passengers during peak periods. Additionally, investigations are being carried out on Lines 15 and Yizhuang into upgrading to 120 km/h operation.
Transfers
Interchange stations that permit transfers across two or more subway lines receive heavy passenger flow. The older interchange stations are known for lengthy transfer corridors and slow transfers during peak hours; the average transfer distance at older interchange stations is . The transfer between Lines 2 and 13 at Xizhimen once required 15 minutes to complete during rush hours. In 2011, this station was rebuilt to reduce the transfer distance to about . There are plans to rebuild other interchange stations such as Dongzhimen.
In newer interchange stations, which are designed to permit more efficient transfers, the average transfer distance is . Many of the newer interchange stations including Guogongzhuang (Lines 9 and Fangshan), Nanluoguxiang (Lines 8 and 6), Zhuxinzhuang (Changping and Line 8), Beijing West railway station (Lines 9 and 7), National Library (Lines 9 and 4), Yancun East (Fangshan Line and Yanfang Line) feature cross platform transfers. Nevertheless, longer transfer corridors must still be used when the alignment of the lines do not permit cross-platform transfer.
The transfer corridors between Lines 1 and 9 at the Military Museum, which opened on December 23, 2013, are in one direction and just under in the other.
Safety
Security check
To ensure public safety during the 2008 Summer Olympic and Paralympic Games, the subway initiated a three-month heightened security program from June 29 to September 20, 2008. Riders were subject to searches of their persons and belongings at all stations by security inspectors using metal detectors, X-ray machines and sniffer dogs. Items banned from public transportation such as "guns, ammunition, knives, explosives, flammable and radioactive materials, and toxic chemicals" were subject to confiscation. The security program was reinstituted during the 2009 New Year Holiday and has since been made permanent through regulations enacted in February 2009.
Accidents and incidents
The subway was plagued by numerous accidents in its early years, including a fire in 1969 that killed six people and injured over 200, but its operations have improved dramatically and there have been few reported accidents in recent years. Most of the reported fatalities on the subway are the result of suicides; authorities have responded by installing platform screen doors on newer lines.
On October 8, 2003, the collapse of steel beams at the construction site of Line 5's Chongwenmen station killed three workers and injured one.
On March 29, 2007, the construction site at the Suzhoujie station on Line 10 collapsed, burying six workers.
On June 6, 2008, prior to the opening of Line 10, a worker was crushed to death inside an escalator in Zhichunlu station when an intern turned on the moving staircase.
On July 14, 2010, two workers were killed and eight were injured at the construction site of Line 15's Shunyi station when the steel support structure collapsed on them.
On September 17, 2010, Line 9 tunnels under construction beneath Yuyuantan Lake were flooded, killing one worker. A city official who oversaw waterworks contracts at the site was convicted of corruption and given a death sentence with reprieve.
On June 1, 2011, one worker was killed when a section of Line 6 under construction in Xicheng District near Ping'anli collapsed.
On July 5, 2011, an escalator collapsed at Beijing Zoo station, killing a 13-year-old boy and injuring 28 others.
On July 19, 2012, a man was fatally shot at Hujialou station by a sniper from the Beijing Special Weapons and Tactics Unit after taking a subway worker hostage.
On May 4, 2013, a train derailed when it overran a section of track on Line 4. The section was not open to the public and was undergoing testing. There were no injuries.
On November 6, 2014, a woman was killed when she tried to board a train at Huixinxijie Nankou station on Line 5. She became trapped between the train door and the platform edge door and was crushed by the departing train. The accident happened on the second day of the APEC China 2014 meetings, during which the municipal government banned cars from the roads on alternate days to ease congestion and reduce pollution – measures the capital's transport authorities estimated would put an extra one million passengers on the subway every day.
On March 26, 2015, a Yizhuang line train derailed during testing around . No passengers were on board; the driver suffered leg injuries.
On January 1, 2018, a Xijiao line train derailed around Fragrant Hills station. There were no injuries. Fragrant Hills station was temporarily closed until March 1, 2018.
On December 14, 2023, two trains on the Changping line collided between Xi'erqi station and Life Science Park station, causing one of the carriages to break apart and injuring over 500 passengers on board.
Subway culture
Logo
The subway's logo, a capital letter "G" encircling a capital letter "D" with the letter "B" silhouetted inside the letter D, was designed by Zhang Lide, a subway employee, and officially designated in April 1984. The letters B, G, and D form the pinyin abbreviation for "" ().
Subway Culture Park
The Beijing Subway Culture Park, located near in Daxing District, opened in 2010 to commemorate the 40-year history of the Beijing Subway. The park was built using dirt and debris removed from the construction of the Daxing line and contains old rolling stock, sculpture, and informational displays. Admission to the park is free.
Beijing Suburban Railway
The Beijing Suburban Railway, a suburban commuter train service, is managed separately from the Beijing Subway. The two systems, although complementary, are not related to each other operationally. Beijing Suburban Railway is operated by the China Railway Beijing Group.
There are 4 suburban railway lines currently in operation: Line S2, Sub-Central line, Huairou–Miyun line and Tongmi line.
Network map
| Technology | China | null |
609079 | https://en.wikipedia.org/wiki/Paper%20towel | Paper towel | A paper towel is an absorbent, disposable towel made from paper. In Commonwealth English, paper towels for kitchen use are also known as kitchen rolls, kitchen paper, or kitchen towels. For home use, paper towels are usually sold in a roll of perforated sheets, but some are sold in stacks of pre-cut and pre-folded layers for use in paper-towel dispensers. Unlike cloth towels, paper towels are disposable and intended to be used only once. Paper towels absorb water because they are loosely woven, which enables water to travel between the fibers, even against gravity (capillary effect). They have similar purposes to conventional towels, such as drying hands, wiping windows and other surfaces, dusting, and cleaning up spills. Paper towel dispensers are commonly used in toilet facilities shared by many people (such as at schools or shopping malls), as they are often considered more hygienic than hot-air hand dryers or shared cloth towels.
History
In 1907, the Philadelphia-based Scott Paper Company developed the first restroom tissues. They started the paper towel industry when they began selling Sani-Towels and used advertising to convince the public that paper towels were essential for personal hygiene.
In 1919, William E. Corbin, Henry Chase, and Harold Titus began experimenting with paper towels in the Research and Development building of the Brown Company in Berlin, New Hampshire. By 1922, Corbin perfected their product and began mass-producing it at the Cascade Mill on the Berlin/Gorham line. This product was called Nibroc Paper Towels (Corbin spelled backwards). In 1931, the Scott Paper Company introduced their paper towel rolls for kitchens.
Paper towels are commonly used for drying hands in public bathrooms. In the 21st century, however, electric jet-air dryers have threatened their dominance. While there is no clear scientific consensus over which method is more hygienic, the paper towel industry and hand dryer manufacturers such as Dyson have each attempted to discredit each other by funding studies which spur sensationalist headlines and running advertisements. The public relations battle has also been fueled by animosity between both sides.
Production
Paper towels are made from either virgin or recycled paper pulp, which is extracted from wood or fiber crops. They are sometimes bleached during the production process to lighten coloration, and may also be decorated with colored images on each square (such as flowers or teddy bears). Resin size is used to improve the wet strength. Paper towels are packed individually and sold as stacks, or are held on a continuous roll, and come in two distinct classes: domestic and institutional. Many companies produce paper towels. Some common brand names are Bounty, Seventh Generation, Scott, Viva, and Kirkland brand among many others.
Market
Tissue products in North America, including paper towels, are divided into consumer and commercial markets, with household consumer usage accounting for approximately two thirds of total North American consumption. Commercial usage, or otherwise any use outside of the household, accounts for the remaining third of North American consumption. The growth in commercial use of paper towels can be attributed to the migration from folded towels (in public bathrooms, for example) to roll towel dispensers, which reduces the amount of paper towels used by each patron.
Within the forest products industry, paper towels are a major part of the "tissue market", second only to toilet paper.
Globally, Americans are the highest per capita users of paper towels in the home, at approximately yearly consumption per capita (combined consumption approximately per year). This is 50% higher than in Europe and nearly 500% higher than in Latin America. By contrast, people in the Middle East tend to prefer reusable cloth towels, and people in Europe tend to prefer reusable cleaning sponges.
Paper towels are popular primarily among people who have disposable income, so their use is higher in wealthy countries and low in developing countries.
Growing hygiene consciousness during the COVID-19 pandemic led to a boost in paper towel market growth.
Environmental issues
Paper towels are a global product with rising production and consumption. Being second in tissue consumption only to toilet paper (36% vs. 45% in the U.S.), the proliferation of paper towels, which are mostly non-recyclable, has globally adverse effects on the environment. However, paper towels made from recycled paper do exist, and are sold at many outlets. Some are manufactured from bamboo, which grows faster than trees.
Electric hand dryers are an alternative to using paper towels for hand drying. However, paper towels are quicker than hand dryers: after ten seconds, paper towels achieve 90% dryness, while hot air dryers require 40 seconds to achieve a similar dryness. Electric hand dryers may also spread bacteria to hands and clothing.
| Biology and health sciences | Hygiene products | Health |
609147 | https://en.wikipedia.org/wiki/Transmission%20%28mechanical%20device%29 | Transmission (mechanical device) | A transmission (also called a gearbox) is a mechanical device which uses a gear set—two or more gears working together—to change the speed, direction of rotation, or torque multiplication/reduction in a machine.
Transmissions can have a single fixed-gear ratio, multiple distinct gear ratios, or continuously variable ratios. Variable-ratio transmissions are used in all sorts of machinery, especially vehicles.
Applications
Early uses
Early transmissions included the right-angle drives and other gearing in windmills, horse-powered devices, and steam-powered devices. Applications of these devices included pumps, mills and hoists.
Bicycles
Bicycles traditionally have used hub gear or derailleur transmissions, but there are other more recent design innovations.
Automobiles
Since the torque and power output of an internal combustion engine (ICE) varies with its rpm, automobiles powered by ICEs require multiple gear ratios to keep the engine within its power band for optimal power, fuel efficiency, and smooth operation. Multiple gear ratios are also needed to provide sufficient acceleration and velocity for safe and reliable operation at modern highway speeds. ICEs typically operate over a range of approximately 600–7,000 rpm, while the vehicle's speed requires the wheels to rotate in the range of 0–1,800 rpm.
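A small sketch of why one fixed ratio cannot cover both ranges: wheel speed equals engine speed divided by the overall reduction (gearbox ratio times final drive). The specific ratios below are illustrative assumptions, not values from this article.

```python
def wheel_rpm(engine_rpm: float, gear_ratio: float, final_drive: float) -> float:
    """Wheel speed for a given engine speed and overall reduction."""
    return engine_rpm / (gear_ratio * final_drive)

# Illustrative ratios (assumed for this sketch): a 3.5:1 first gear,
# a 0.8:1 top gear, and a 3.9:1 final drive.
final_drive = 3.9
for gear, ratio in [("1st", 3.5), ("top", 0.8)]:
    low = wheel_rpm(1000, ratio, final_drive)   # engine near idle
    high = wheel_rpm(6000, ratio, final_drive)  # engine near redline
    print(f"{gear} gear: wheels turn {low:.0f}-{high:.0f} rpm")
# 1st gear: wheels turn 73-440 rpm   (good for pulling away, useless at speed)
# top gear: wheels turn 321-1923 rpm (good at speed, useless for pulling away)
```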
In early mass-produced automobiles, the standard transmission design was manual: the combination of gears was selected by the driver through a lever (the gear stick) that displaced gears and gear groups along their axes. Starting in 1939, cars using various types of automatic transmission became available in the US market. These vehicles used the engine's own power to change the effective gear ratio depending on the load, so as to keep the engine running close to its optimal rotation speed. Automatic transmissions are now used in more than two thirds of cars globally, and in almost all new cars in the US.
Most currently-produced passenger cars with gasoline or diesel engines use transmissions with 4–10 forward gear ratios (also called speeds) and one reverse gear ratio. Electric vehicles typically use a fixed-gear or two-speed transmission with no reverse gear ratio.
Motorcycles
Fixed-ratio
The simplest transmissions used a fixed ratio to provide either a gear reduction or increase in speed, sometimes in conjunction with a change in the orientation of the output shaft. Examples of such transmissions are used in helicopters and wind turbines. In the case of a wind turbine, the first stage of the gearbox is usually a planetary gear, to minimize the size while withstanding the high torque inputs from the turbine.
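For a single planetary stage, the kinematics follow from the fundamental relation w_sun*S + w_ring*R = w_carrier*(S + R), where S and R are the sun and ring gear tooth counts. Holding the ring fixed and driving the carrier (as at a wind-turbine input stage) speeds the sun gear up by a factor of 1 + R/S. The tooth counts in the sketch below are illustrative assumptions.

```python
def planetary_speedup(sun_teeth: int, ring_teeth: int) -> float:
    """Speed-up ratio of a planetary stage with the ring gear held fixed,
    the carrier as input and the sun gear as output.

    From w_sun*S + w_ring*R = w_carrier*(S + R), setting w_ring = 0
    gives w_sun / w_carrier = 1 + R/S.
    """
    return 1 + ring_teeth / sun_teeth

# Illustrative tooth counts (assumed, not from this article): a 21-tooth sun
# and a 99-tooth ring give roughly a 5.7:1 speed-up from the slow rotor shaft.
print(planetary_speedup(21, 99))  # 5.714...
```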
Multi-ratio
Many transmissions – especially for transportation applications – have multiple gears that are used to change the ratio of input speed (e.g. engine rpm) to the output speed (e.g. the speed of a car) as required for a given situation. Gear (ratio) selection can be manual, semi-automatic, or automatic.
Manual
A manual transmission requires the driver to manually select the gears by operating a gear stick and clutch (which is usually a foot pedal for cars or a hand lever for motorcycles).
Most transmissions in modern cars use synchromesh to synchronise the speeds of the input and output shafts. However, prior to the 1950s, most cars used non-synchronous transmissions.
Sequential manual
A sequential manual transmission is a type of non-synchronous transmission used mostly for motorcycles and racing cars. It produces faster shift times than synchronized manual transmissions, through the use of dog clutches rather than synchromesh. Sequential manual transmissions also restrict the driver to selecting either the next or previous gear, in a successive order.
Semi-automatic
A semi-automatic transmission is where some of the operation is automated (often the actuation of the clutch), but the driver's input is required to move off from a standstill or to change gears.
Automated manual / clutchless manual
An automated manual transmission (AMT) is essentially a conventional manual transmission that uses automatic actuation to operate the clutch and/or shift between gears.
Many early versions of these transmissions were semi-automatic in operation, such as Autostick, which automatically control only the clutch, but still require the driver's input to initiate gear changes. Some of these systems are also referred to as clutchless manual systems. Modern versions of these systems that are fully automatic in operation, such as Selespeed and Easytronic, can control both the clutch operation and the gear shifts automatically, without any input from the driver.
Automatic
An automatic transmission does not require any input from the driver to change forward gears under normal driving conditions.
Hydraulic automatic
The most common design of automatic transmissions is the hydraulic automatic, which typically uses planetary gearsets that are operated using hydraulics. The transmission is connected to the engine via a torque converter (or a fluid coupling prior to the 1960s), instead of the friction clutch used by most manual transmissions and dual-clutch transmissions.
Dual-clutch (DCT)
A dual-clutch transmission (DCT) uses two separate clutches for odd and even gear sets. The design is often similar to two separate manual transmissions with their respective clutches contained within one housing, and working as one unit. In car and truck applications, the DCT functions as an automatic transmission, requiring no driver input to change gears.
Continuously variable ratio
A continuously variable transmission (CVT) can change seamlessly through a continuous range of gear ratios. This contrasts with other transmissions that provide a limited number of gear ratios in fixed steps. The flexibility of a CVT with suitable control may allow the engine to operate at a constant RPM while the vehicle moves at varying speeds.
CVTs are used in cars, tractors, side-by-sides, motor scooters, snowmobiles, bicycles, and earthmoving equipment.
The most common type of CVT uses two pulleys connected by a belt or chain; however, several other designs have also been used at times.
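To illustrate the constant-RPM behaviour described above: for a fixed engine speed, the CVT must continuously sweep its pulley ratio as wheel speed changes. All numbers in this sketch are illustrative assumptions.

```python
def required_cvt_ratio(engine_rpm: float, wheel_rpm: float, final_drive: float) -> float:
    """Pulley ratio needed so a fixed engine speed yields the given wheel speed."""
    return engine_rpm / (wheel_rpm * final_drive)

# Holding the engine at 2500 rpm (an illustrative operating point) with an
# assumed 4.0:1 final drive, the CVT sweeps its ratio as the wheels speed up:
for wheels in (200, 500, 1000):
    print(f"wheels at {wheels} rpm -> CVT ratio "
          f"{required_cvt_ratio(2500, wheels, 4.0):.2f}:1")
# wheels at 200 rpm -> CVT ratio 3.12:1
# wheels at 500 rpm -> CVT ratio 1.25:1
# wheels at 1000 rpm -> CVT ratio 0.62:1
```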
Noise and vibration
Gearboxes are often a major source of noise and vibration in vehicles and stationary machinery. Higher sound levels are generally emitted when the vehicle is engaged in lower gears. The design life of the lower-ratio gears is shorter, so cheaper gears may be used; these tend to generate more noise because of their smaller overlap ratio and lower mesh stiffness compared with the helical gears used for the high ratios. This fact has been used to analyze vehicle-generated sound since the late 1960s, and has been incorporated into the simulation of urban roadway noise and the corresponding design of urban noise barriers along roadways.
| Technology | Mechanisms | null |
610102 | https://en.wikipedia.org/wiki/Whiteleg%20shrimp | Whiteleg shrimp | Whiteleg shrimp (Litopenaeus vannamei, synonym Penaeus vannamei), also known as Pacific white shrimp or King prawn, is a species of prawn of the eastern Pacific Ocean commonly caught or farmed for food.
Description
Litopenaeus vannamei grows to a maximum length of , with a carapace length of . Adults live in the ocean, at depths to , while juveniles live in estuaries. The rostrum is moderately long, with 7–10 teeth on the dorsal side and two to four teeth on the ventral side.
Distribution and habitat
Whiteleg shrimp are native to the eastern Pacific Ocean, from the Mexican state of Sonora to as far south as northern Peru. It is restricted to areas where the water temperatures remain above throughout the year.
Fishery and aquaculture
During the 20th century, L. vannamei was an important species for Mexican inshore fishermen, as well as for trawlers further offshore.
In the late 20th century, the wild fishery was overtaken by the development of aquaculture production; this began in 1973 in Florida using prawns captured in Panama, that were used in hatcheries for larvae production.
In Latin America, the culture of L. vannamei developed as hatchery-reared larvae became available, along with formulated feeds, improved grow-out techniques, freezing facilities and market channels, among other factors.
From Mexico to Peru, most countries developed large production areas in the 1970s and 1980s.
Ecuador became one of the world's leading producers of this type of shrimp.
Around the beginning of the millennium, Asia introduced this species in their aquaculture operations (changing from Penaeus monodon).
China, Vietnam, India and others have become major packers as well.
The quantity of farmed shrimp packed has surpassed that of ocean-caught wild shrimp in recent years.
Both sources, ocean-caught and farmed, are affected by weather and disease.
By 2004, global production of L. vannamei approached 1,116,000 t, and exceeded that of Penaeus monodon.
Litopenaeus vannamei has been cultivated indoors in a recirculating aquaculture system at TransparentSea Farm, a startup in Downey, California.
Weather effect
Normally, there are peaks of production during the warm El Niño years, and reduced production during the cooler La Niña years. The effect is on ocean caught as well as on aquaculture origin.
Diseases
There are several known diseases. Production of L. vannamei is limited by its susceptibility to white spot syndrome, Taura syndrome, infectious hypodermal and haematopoietic necrosis, baculoviral midgut gland necrosis, and Vibrio infections.
Impact on nature
In 2010, Greenpeace International added the whiteleg shrimp to its seafood red list.
The red list includes seafood species that are commonly sold in supermarkets around the world and that have a very high risk of being sourced from unsustainable fisheries. The reasons given by Greenpeace were "destruction of vast areas of mangroves in several countries, overfishing of juvenile shrimp from the wild to supply shrimp farms, and significant human rights abuses". In 2016, L. vannamei accounted for 53% of the total production of farmed crustaceans globally.
| Biology and health sciences | Shrimps and prawns | Animals |
610191 | https://en.wikipedia.org/wiki/Ragdoll | Ragdoll | The Ragdoll is a breed of cat with a distinct colorpoint coat and blue eyes. It is a large, heavily built cat with a semi-long, silky soft coat. American breeder Ann Baker developed Ragdolls in the 1960s. They are best known for their docile, placid temperament and affectionate nature. The name Ragdoll is derived from the tendency of individuals from the original breeding stock to go limp and relaxed when picked up. The breed is particularly popular in both the United Kingdom and the United States.
Ragdolls are often known as "dog-like cats" or "puppy-like cats", due to their tendency to follow people around, their receptiveness to handling, and their relative lack of aggression towards other pets.
Ragdolls are distinguishable by their pointed coloration (where the body is lighter than the face, ears, legs, and tail), large round blue eyes, soft, thick coats, thick limbs, long tails, and soft bodies. Their coats commonly show tricolor or bicolor patterning.
History
The breed was developed in Riverside, California, by breeder Ann Baker. In 1963, a regular, non-pedigreed, white domestic longhaired cat named Josephine produced several litters of typical cats. Josephine was not of any particular breed, nor were the males who sired the original litters. Ann Baker herself said that the original cats of the Ragdoll breed were "alley cats". Josephine later produced kittens with a docile, placid temperament, affectionate nature, and a tendency to go limp and relaxed when picked up.
Out of those early litters came Blackie, an all-black male, and Daddy Warbucks, a seal point with white feet. Daddy Warbucks sired the founding bi-color female Fugianna, and Blackie sired Buckwheat, a dark brown-black Burmese-like female. Both Fugianna and Buckwheat were Josephine's daughters. All Ragdolls are descended from Baker's cats through matings of Daddy Warbucks to Fugianna and Buckwheat.
Baker, in an unusual move, spurned traditional cat breeding associations. She trademarked the name Ragdoll, set up her own registry—the International Ragdoll Cat Association (IRCA)—around 1971, and enforced stringent standards on anyone who wanted to breed or sell cats under that name. The Ragdolls were also not allowed to be registered by other breed associations. The IRCA is still in existence today but is quite small, particularly since Baker's death in 1997.
In 1975, a group led by a husband-and-wife team, Denny and Laura Dayton, broke ranks with the IRCA to gain mainstream recognition for the Ragdoll. Beginning with a breeding pair of IRCA cats, this group eventually developed the Ragdoll standard currently accepted by major cat registries such as the CFA and the FIFe. Around the time of the spread of the Ragdoll breed in America during the early 1980s, a breeding pair of Ragdolls was exported to the UK. Eight more cats followed this pair to fully establish the breed in the UK, where the Governing Council of the Cat Fancy recognizes it.
Breed description
Temperament
The Ragdoll has been known to have a very floppy and calm nature, with claims that these characteristics have been passed down from the Persian and Birman breeds. Opinions vary as to whether this trait might be the result of genetic mutation or merely an instinctive reaction from being picked up as kittens by their mother. The extreme docility of some individuals has led to the myth that Ragdolls are pain resistant. Some breeders in Britain have tried to breed away from the limpness owing to concerns that extreme docility "might not be in the best interests of the cat".
Breed standard marketing and publicity material describe the Ragdoll as affectionate, intelligent, relaxed in temperament, gentle, and an easy-to-handle lap cat. The animals are often known as "puppy cats", "dog-like cats", "cat-dogs", etc., because of their placid nature and affectionate behavior, with the cats often following owners from room to room as well as seeking physical affection akin to certain dog breeds. Ragdolls can be trained to retrieve toys and enjoy doing so. They have a very playful nature that often lasts well into their senior years. Unlike many other breeds, Ragdolls prefer staying low to the ground rather than the highest point in the household.
Physical characteristics
The Ragdoll is one of the largest domesticated cat breeds. Fully grown females weigh from . Males are substantially larger, ranging from or more. It can take up to four years for a Ragdoll to reach mature size. They have a sturdy body, bulky frame, and proportionate legs. Their heads are broad with a flat top and wide space between the ears. They have long, muscular bodies with broad chests and short necks. Their tails are long and bushy, their paws are large, round, and tufted, and their coats are silky, dense, and medium to long in length. Because their coats tend to be long, they usually require brushing at least twice a week. Adults develop knickerbockers on their hind legs and a ruff around their necks.
The breed is often known for its large, round, deep-blue eyes, though other cats may have that feature as well. The genes for point coloration are also responsible for these distinctive blue eyes. Deeper shades of blue are favored in cat shows.
Although the breed has a plush coat, it consists mainly of long guard hairs, while the lack of a dense undercoat results, according to the Cat Fanciers' Association, in "reduced shedding and matting". There may be a noticeable increase of shedding in the spring.
Ragdolls come in six distinct colors: seal, chocolate, red, and the corresponding dilutes: blue, lilac, and cream. There also are the lynx and tortoiseshell variations in all colors and the three patterns. Ragdoll kittens are born white; they have good color at 8–10 weeks and full color and coat at 3–4 years.
Patterns
Colorpoint: one color darkening at the extremities (nose, ears, tail, and paws)
Mitted: same as pointed but with white paws and abdomen. With or without a blaze (a white line or spot on the face), they must have a belly stripe (white stripe that runs from the chin to the genitals) and a white chin. Mitted Ragdolls, which weren't allowed titling in CFA until the 2008–2009 show season, are often confused with Birmans. The easiest way to tell the difference is by size (the Ragdoll being larger) and chin color (Mitted Ragdolls have white chins, while Birmans have colored chins), although breeders recognize the two by head shape and boning.
Bicolor: white legs, white inverted V on the face, white abdomen, and sometimes white patches on the back (excessive amounts of white, or high white, on a bicolor are known as the Van pattern, although this does not occur as often as the other patterns).
Variations
Lynx: a variant of the colorpoint type having tabby markings. This variation always comes with white ear lines, no matter the pattern.
Tortoiseshell or tortie: a variant noted for mottled or parti-colored markings in the above patterns. Despite the mostly white coat, tortie points are not calico, as the calico gene is separate and not present in colorpoints.
Health
A UK study utilizing veterinary records found a life expectancy of 10.31 years compared to 11.74 overall. One study, utilizing Swedish insurance data, showed that of the common cat breeds, the Ragdoll and Siamese have the lowest survival rate, with a 78% chance of survival to 10 years. An English study of patient records found a life expectancy of 10.1 years. In a review of over 5,000 cases of urate urolithiasis, the Ragdoll was over-represented, with an odds ratio of 5.14. An English study reviewing over 190,000 patient records found the Ragdoll to be less likely to acquire diabetes mellitus than mixed breed cats. The prevalence in Ragdolls was 0.24% compared to 0.58% overall.
Hypertrophic cardiomyopathy
The Ragdoll is one of the breeds most commonly affected by hypertrophic cardiomyopathy. An autosomal dominant mutation in the MYBPC3 gene is responsible for the condition in the breed.
The allelic frequencies of the R820W mutation were 0.17 in cats from Italy and 0.23 in cats from the US in 2013, and an R820W prevalence of 30% has been reported in the UK. In one study, the HCM prevalence was found to be 2.9% (95% CI = 2.7–8.6%).
| Biology and health sciences | Cats | Animals |
610202 | https://en.wikipedia.org/wiki/Fine%20structure | Fine structure | In atomic physics, the fine structure describes the splitting of the spectral lines of atoms due to electron spin and relativistic corrections to the non-relativistic Schrödinger equation. It was first measured precisely for the hydrogen atom by Albert A. Michelson and Edward W. Morley in 1887, laying the basis for the theoretical treatment by Arnold Sommerfeld, introducing the fine-structure constant.
Background
Gross structure
The gross structure of line spectra is the structure predicted by the quantum mechanics of non-relativistic electrons with no spin. For a hydrogenic atom, the gross structure energy levels only depend on the principal quantum number n. However, a more accurate model takes into account relativistic and spin effects, which break the degeneracy of the energy levels and split the spectral lines. The scale of the fine structure splitting relative to the gross structure energies is on the order of (Zα)2, where Z is the atomic number and α is the fine-structure constant, a dimensionless number equal to approximately 1/137.
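For orientation, the gross-structure levels and the fine-structure constant referred to above can be written in standard textbook (LaTeX) notation with the usual hydrogenic symbols; this is the familiar Bohr result rather than anything specific to this article:

E_n = -\frac{m_e e^4}{2(4\pi\varepsilon_0)^2\hbar^2}\,\frac{Z^2}{n^2} \approx -13.6\ \text{eV}\,\frac{Z^2}{n^2},
\qquad
\alpha = \frac{e^2}{4\pi\varepsilon_0\hbar c} \approx \frac{1}{137}.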
Relativistic corrections
The fine structure energy corrections can be obtained by using perturbation theory. To perform this calculation one must add three corrective terms to the Hamiltonian: the leading order relativistic correction to the kinetic energy, the correction due to the spin–orbit coupling, and the Darwin term coming from the quantum fluctuating motion or zitterbewegung of the electron.
These corrections can also be obtained from the non-relativistic limit of the Dirac equation, since Dirac's theory naturally incorporates relativity and spin interactions.
Hydrogen atom
This section discusses the analytical solutions for the hydrogen atom as the problem is analytically solvable and is the base model for energy level calculations in more complex atoms.
Kinetic energy relativistic correction
The gross structure assumes the kinetic energy term of the Hamiltonian takes the same form as in classical mechanics, which for a single electron means
where V is the potential energy, p is the momentum, and m_e is the electron rest mass.
However, when considering a more accurate theory of nature via special relativity, we must use a relativistic form of the kinetic energy,
where the first term is the total relativistic energy, and the second term is the rest energy of the electron (c is the speed of light). Expanding the square root in powers of p/(m_e c), we find
Although there are an infinite number of terms in this series, the later terms are much smaller than earlier terms, and so we can ignore all but the first two. Since the first term above is already part of the classical Hamiltonian, the first order correction to the Hamiltonian is
Using this as a perturbation, we can calculate the first order energy corrections due to relativistic effects.
where is the unperturbed wave function. Recalling the unperturbed Hamiltonian, we see
We can use this result to further calculate the relativistic correction:
For the hydrogen atom,
and
where e is the elementary charge, ε0 is the vacuum permittivity, a0 is the Bohr radius, n is the principal quantum number, l is the azimuthal quantum number, and r is the distance of the electron from the nucleus. Therefore, the first order relativistic correction for the hydrogen atom is
where we have used:
On final calculation, the order of magnitude for the relativistic correction to the ground state is .
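For reference, the leading relativistic correction discussed above is commonly written, in standard hydrogenic notation (E_n the Bohr energy, l the azimuthal quantum number), as

\sqrt{p^{2}c^{2} + m_e^{2}c^{4}} - m_ec^{2} \approx \frac{p^{2}}{2m_e} - \frac{p^{4}}{8m_e^{3}c^{2}},
\qquad
\mathcal{H}' = -\frac{p^{4}}{8m_e^{3}c^{2}},

with the first-order energy shift

E^{(1)}_{\text{rel}} = -\frac{E_n^{2}}{2m_ec^{2}}\left(\frac{4n}{l+\tfrac12}-3\right).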
Spin–orbit coupling
For a hydrogen-like atom with Z protons (Z = 1 for hydrogen), orbital angular momentum L and electron spin S, the spin–orbit term is given by:
where g_s is the spin g-factor.
The spin–orbit correction can be understood by shifting from the standard frame of reference (where the electron orbits the nucleus) into one where the electron is stationary and the nucleus instead orbits it. In this case the orbiting nucleus functions as an effective current loop, which in turn will generate a magnetic field. However, the electron itself has a magnetic moment due to its intrinsic angular momentum. The two magnetic vectors, and couple together so that there is a certain energy cost depending on their relative orientation. This gives rise to the energy correction of the form
Notice that an important factor of 2 has to be added to the calculation, called the Thomas precession, which comes from the relativistic calculation that changes back to the electron's frame from the nucleus frame.
Since
by Kramers–Pasternack relations and
the expectation value for the Hamiltonian is:
Thus the order of magnitude for the spin–orbital coupling is:
When weak external magnetic fields are applied, the spin–orbit coupling contributes to the Zeeman effect.
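In the same textbook notation, the spin–orbit Hamiltonian (with the Thomas factor included and g_s ≈ 2) and its first-order energy shift for l ≠ 0 are commonly written as

\mathcal{H}_{\mathrm{SO}} = \frac{Ze^{2}}{8\pi\varepsilon_0}\,\frac{1}{m_e^{2}c^{2}r^{3}}\,\mathbf{L}\cdot\mathbf{S},
\qquad
E^{(1)}_{\mathrm{SO}} = \frac{E_n^{2}}{m_ec^{2}}\,\frac{n\left[j(j+1)-l(l+1)-\tfrac34\right]}{l\left(l+\tfrac12\right)(l+1)},

where j is the total angular momentum quantum number.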
Darwin term
There is one last term in the non-relativistic expansion of the Dirac equation. It is referred to as the Darwin term, as it was first derived by Charles Galton Darwin, and is given by:
The Darwin term affects only the s orbitals. This is because the wave function of an electron with l > 0 vanishes at the origin, hence the delta function has no effect. For example, it gives the 2s orbital the same energy as the 2p orbital by raising the 2s state by .
The Darwin term changes potential energy of the electron. It can be interpreted as a smearing out of the electrostatic interaction between the electron and nucleus due to zitterbewegung, or rapid quantum oscillations, of the electron. This can be demonstrated by a short calculation.
Quantum fluctuations allow for the creation of virtual electron-positron pairs with a lifetime estimated by the uncertainty principle . The distance the particles can move during this time is , the Compton wavelength. The electrons of the atom interact with those pairs. This yields a fluctuating electron position . Using a Taylor expansion, the effect on the potential can be estimated:
Averaging over the fluctuations
gives the average potential
Approximating , this yields the perturbation of the potential due to fluctuations:
To compare with the expression above, plug in the Coulomb potential:
This is only slightly different.
Another mechanism that affects only the s-state is the Lamb shift, a further, smaller correction that arises in quantum electrodynamics that should not be confused with the Darwin term. The Darwin term gives the s-state and p-state the same energy, but the Lamb shift makes the s-state higher in energy than the p-state.
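The Darwin term itself is commonly quoted in the form

\mathcal{H}_{\mathrm{Darwin}} = \frac{\hbar^{2}}{8m_e^{2}c^{2}}\,\nabla^{2}V
= \frac{\hbar^{2}}{8m_e^{2}c^{2}}\,\frac{Ze^{2}}{4\pi\varepsilon_0}\,4\pi\,\delta^{3}(\mathbf{r}),
\qquad
E^{(1)}_{\mathrm{Darwin}} = \frac{\pi\hbar^{2}}{2m_e^{2}c^{2}}\,\frac{Ze^{2}}{4\pi\varepsilon_0}\,|\psi(0)|^{2},

which is non-zero only for s states (l = 0), since only their wave functions are finite at the origin.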
Total effect
The full Hamiltonian is given by
where H0 is the unperturbed Hamiltonian from the Coulomb interaction.
The total effect, obtained by summing the three components up, is given by the following expression:
where j is the total angular momentum quantum number (j = 1/2 if l = 0, and j = l ± 1/2 otherwise). It is worth noting that this expression was first obtained by Sommerfeld based on the old Bohr theory; i.e., before modern quantum mechanics was formulated.
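Using the same notation, the combined first-order fine-structure shift is usually written as

E_{nj} = E_n\left[1 + \frac{(Z\alpha)^{2}}{n^{2}}\left(\frac{n}{j+\tfrac12} - \frac34\right)\right],

so that levels with the same n and j remain degenerate at this order.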
Exact relativistic energies
The total effect can also be obtained by using the Dirac equation. The exact energies are given by
This expression, which contains all higher order terms that were left out in the other calculations, expands to first order to give the energy corrections derived from perturbation theory. However, this equation does not contain the hyperfine structure corrections, which are due to interactions with the nuclear spin. Other corrections from quantum field theory such as the Lamb shift and the anomalous magnetic dipole moment of the electron are not included.
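The exact Dirac result referred to here is usually quoted (including the electron rest energy m_ec^2) as

E_{nj} = m_ec^{2}\left[1 + \left(\frac{Z\alpha}{\,n - j - \tfrac12 + \sqrt{\left(j+\tfrac12\right)^{2} - (Z\alpha)^{2}}\,}\right)^{2}\right]^{-1/2},

which reproduces the first-order fine-structure formula above when expanded in powers of (Zα)^2.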
| Physical sciences | Atomic physics | Physics |
610527 | https://en.wikipedia.org/wiki/Whiteout%20%28weather%29 | Whiteout (weather) | Whiteout or white-out is a weather condition in which visibility and contrast are severely reduced by snow, fog, or sand. The horizon disappears from view while the sky and landscape appear featureless, leaving no points of visual reference by which to navigate.
A whiteout may be due simply to extremely heavy snowfall rates as seen in lake effect conditions, or to other factors such as diffuse lighting from overcast clouds, mist or fog, or a background of snow. A person traveling in a true whiteout is at significant risk of becoming completely disoriented and losing their way, even in familiar surroundings. Motorists typically have to stop their cars where they are, as the road is impossible to see. Normal snowfalls and blizzards, where snow is falling at /h, or where relief visibility is reduced but a clear field of view extends for over , are often incorrectly called whiteouts.
Types
There are three different forms of a whiteout:
In blizzard conditions, snow already on the ground can become windblown, reducing visibility to near zero.
In snowfall conditions, the volume of snow falling may obscure objects reducing visibility to near zero. An example of this is during lake-effect snow or mountain-effect snow, where the volume of snow can be many times greater than normal snows or blizzards.
Where ground-level thick fog exists in a snow-covered environment, especially on open areas devoid of features.
Variations
A whiteout should not be confused with flat-light. Whilst there are similarities, both the causes and effects are different.
A whiteout is a reduction and scattering of sunlight.
Cause: Sunlight is blocked, reduced and scattered by ice crystals in falling snow, wind-blown spin-drift, water droplets in low-lying clouds or localised fog, etc. The remaining scattered light is merged and blended.
Result: Due to a reduction in reflected light, visual references (e.g. the horizon, terrain features, slope aspect) are significantly reduced or completely blocked. This leads to an inability to position oneself relative to the surroundings. In severe conditions an individual may experience a loss of kinesthesia (the ability to discern position and movement), confusion, loss of balance, and an overall reduction in the ability to operate.
Flat-light is a diffusion of sunlight.
Cause: Sunlight is both scattered and diffused by atmospheric particles (e.g. water molecules, ice crystals) and by snow lying on the ground; this causes light to be received from multiple directions. Commonly, the effect is increased during a whiteout and/or later in the day when the sun drops towards the horizon, due to sunlight passing through the atmosphere for a greater distance.
Result: Light is received from multiple directions, with each light source producing overlapping shadows that cancel each other out. This dulls the area and removes indicators such as tone and contrast, making it difficult to discern similarly coloured slope features. The loss of visual indicators of shape and edge detail results in objects and features seeming to blend into each other, producing a flat, featureless vista. An effect of this visual blending may be a loss of depth perception, resulting in disorientation.
Hazards
Whiteout conditions pose threats to mountain climbers, skiers, aviation, and ground traffic. Motorists, especially those on large high-speed routes, are particularly at risk. There have been many major multiple-vehicle collisions associated with whiteout conditions: a motorist ahead may come to a complete stop upon losing sight of the road while the motorist behind is still moving.
Local, short-duration whiteout conditions can be created artificially in the vicinity of airports and helipads due to aircraft operations. Snow on the ground can be stirred up by helicopter rotor down-wash or airplane jet blast, presenting hazards to both aircraft and bystanders on the ground.
| Physical sciences | Storms | Earth science |
610582 | https://en.wikipedia.org/wiki/Escapement | Escapement | An escapement is a mechanical linkage in mechanical watches and clocks that gives impulses to the timekeeping element and periodically releases the gear train to move forward, advancing the clock's hands. The impulse action transfers energy to the clock's timekeeping element (usually a pendulum or balance wheel) to replace the energy lost to friction during its cycle and keep the timekeeper oscillating. The escapement is driven by force from a coiled spring or a suspended weight, transmitted through the timepiece's gear train. Each swing of the pendulum or balance wheel releases a tooth of the escapement's escape wheel, allowing the clock's gear train to advance or "escape" by a fixed amount. This regular periodic advancement moves the clock's hands forward at a steady rate. At the same time, the tooth gives the timekeeping element a push, before another tooth catches on the escapement's pallet, returning the escapement to its "locked" state. The sudden stopping of the escapement's tooth is what generates the characteristic "ticking" sound heard in operating mechanical clocks and watches.
The first mechanical escapement, the verge escapement, was invented in medieval Europe during the 13th century and was the crucial innovation that led to the development of the mechanical clock. The design of the escapement has a large effect on a timepiece's accuracy, and improvements in escapement design drove improvements in time measurement during the era of mechanical timekeeping from the 13th through the 19th century.
Escapements are also used in other mechanisms besides timepieces. Manual typewriters used escapements to step the carriage as each letter (or space) was typed.
History
The invention of the escapement was an important step in the history of technology, as it made the all-mechanical clock possible. The first all-mechanical escapement, the verge escapement, was invented in 13th-century Europe. It allowed timekeeping methods to move from continuous processes such as the flow of water in water clocks, to repetitive oscillatory processes such as the swing of pendulums, enabling more accurate timekeeping. Oscillating timekeepers are the controlling devices in all modern clocks.
Liquid-driven escapements
The earliest liquid-driven escapement was described by the Greek engineer Philo of Byzantium in the 3rd century BC in chapter 31 of his technical treatise Pneumatics, as part of a washstand. A counterweighted spoon, supplied by a water tank, tips over in a basin when full, releasing a spherical piece of pumice in the process. Once the spoon has emptied, it is pulled up again by the counterweight, closing the door on the pumice by the tightening string. Remarkably, Philo's comment that "its construction is similar to that of clocks" indicates that such escapement mechanisms were already integrated in ancient water clocks.
In China, the Tang dynasty Buddhist monk Yi Xing, along with government official Liang Lingzan, made the escapement in 723 (or 725) AD for the workings of a water-powered armillary sphere and clock drive, which was the world's first clockwork escapement. Song dynasty horologists Zhang Sixun and Su Song duly applied escapement devices for their astronomical clock towers in the 10th century, where water flowed into a container on a pivot. However, the technology later stagnated and retrogressed. According to historian Derek J. de Solla Price, the Chinese escapement spread west and was the source of Western escapement technology.
According to Ahmad Y. Hassan, a mercury escapement in a Spanish work for Alfonso X in 1277 can be traced back to earlier Arabic sources. Knowledge of these mercury escapements may have spread through Europe with translations of Arabic and Spanish texts.
However, none of these were true mechanical escapements, since they still depended on the flow of liquid through a hole to measure time. In these designs, a container tipped over each time it filled up, thus advancing the clock's wheels each time an equal quantity of water was measured out. The time between releases depended on the rate of flow, as in all liquid clocks. The rate of flow of a liquid through a hole varies with temperature and viscosity changes and decreases with pressure as the level of liquid in the source container drops. The development of mechanical clocks depended on the invention of an escapement that would allow a clock's movement to be controlled by an oscillating weight whose rate of oscillation stayed constant.
Mechanical escapements
The first mechanical escapement, the verge escapement, was used in a bell-ringing apparatus called an alarum for several centuries before it was adapted to clocks. Some sources claim that French architect Villard de Honnecourt invented the first escapement in 1237, citing a drawing of a rope linkage to turn a statue of an angel to follow the sun, found in his notebooks; however, the consensus is that this was not an escapement.
Astronomer Robertus Anglicus wrote in 1271 that clockmakers were trying to invent an escapement, but had not yet been successful. Records in financial transactions for the construction of clocks point to the late 13th century as the most likely date for when tower clock mechanisms transitioned from water clocks to mechanical escapements. Most sources agree that mechanical escapement clocks existed by 1300.
However, the earliest available description of an escapement was not a verge escapement, but a variation called a strob escapement. Described in Richard of Wallingford's 1327 manuscript on the clock that he built at the Abbey of St. Albans, this consisted of a pair of escape wheels on the same axle, with alternating radial teeth. The verge rod was suspended between them, with a short crosspiece that rotated first in one direction and then the other as the staggered teeth pushed past. Although no other example is known, it is possible that this was the first clock escapement design.
The verge became the standard escapement used in all other early clocks and watches, and remained the only known escapement for 400 years. Its performance was limited by friction and recoil, but most importantly, the early balance wheels used in verge escapements, known as the foliot, lacked a balance spring and thus had no natural "beat", severely limiting their timekeeping accuracy.
A great leap in the accuracy of escapements happened after 1657, due to the invention of the pendulum and the addition of the balance spring to the balance wheel, which made the timekeepers in both clocks and watches harmonic oscillators. The resulting improvement in timekeeping accuracy enabled greater focus on the accuracy of the escapement. The next two centuries, the "golden age" of mechanical horology, saw the invention of over 300 escapement designs, although only about ten of these were ever widely used in clocks and watches.
The invention of the crystal oscillator and the quartz clock in the 1920s, which became the most accurate clock by the 1930s, shifted technological research in timekeeping to electronic methods, and escapement design ceased to play a role in advancing timekeeping precision.
Reliability
The reliability of an escapement depends on the quality of workmanship and the level of maintenance given. A poorly constructed or poorly maintained escapement will cause problems. The escapement must accurately convert the oscillations of the pendulum or balance wheel into rotation of the clock or watch gear train, and it must deliver enough energy to the pendulum or balance wheel to maintain its oscillation.
In many escapements, the unlocking of the escapement involves sliding motion; for example, in the animation shown above, the pallets of the anchor slide against the escapement wheel teeth as the pendulum swings. The pallets are often made of very hard materials such as polished stone (for example, artificial ruby), but even so, they normally require lubrication. Since lubricating oil degrades over time due to evaporation, dust, oxidation, etc., periodic re-lubrication is needed. If this is not done, the timepiece may work unreliably or stop altogether, and the escapement components may be subjected to rapid wear. The increased reliability of modern watches is due primarily to the higher-quality oils used for lubrication. Lubricant lifetimes can be greater than five years in a high-quality watch.
Some escapements avoid sliding friction; examples include the grasshopper escapement designed by John Harrison in the 18th century. This may avoid the need for lubrication in the escapement (though it does not obviate the requirement for lubrication of other parts of the gear train).
Accuracy
The accuracy of a mechanical clock is dependent on the accuracy of the timing device. If this is a pendulum, then the period of swing of the pendulum determines the accuracy. If the pendulum rod is made of metal it will expand and contract with heat, lengthening or shortening the pendulum; this changes the time taken for a swing. Special alloys are used in expensive pendulum-based clocks to minimize this distortion. The degrees of arc in which a pendulum may swing varies; highly accurate pendulum-based clocks have very small arcs in order to minimize the circular error.
Pendulum-based clocks can achieve outstanding accuracy. Even into the 20th century, pendulum-based clocks were reference timepieces in laboratories.
Escapements play a big part in accuracy as well. The precise point in the pendulum's travel at which impulse is supplied will affect how closely to time the pendulum will swing. Ideally, the impulse should be evenly distributed on either side of the lowest point of the pendulum's swing. This is called "being in beat." This is because pushing a pendulum when it is moving towards mid-swing makes it gain, whereas pushing it while it is moving away from mid-swing makes it lose. If the impulse is evenly distributed then it gives energy to the pendulum without changing the time of its swing.
The pendulum's period depends slightly on the size of the swing. If the amplitude changes from 4° to 3°, the period of the pendulum will decrease by about 0.013 percent, which translates into a gain of about 12 seconds per day. This is caused by the restoring force on the pendulum being circular rather than linear; thus, the period is only approximately independent of amplitude, in the regime of the small-angle approximation. To be truly amplitude-independent, the pendulum's path would have to be cycloidal. To minimize this effect, pendulum swings are kept as small as possible.
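A quick numerical check of the figures quoted above, using the standard small-amplitude (circular-error) expansion T ≈ T0(1 + θ0²/16) with the amplitude θ0 in radians; this is a generic estimate, not a calculation taken from any particular clock:

import math

def period_factor(amplitude_deg):
    # leading-order circular-error correction to the pendulum period
    theta = math.radians(amplitude_deg)
    return 1 + theta ** 2 / 16

change = period_factor(4) - period_factor(3)
print(f"fractional period change: {change:.6f}")       # about 0.000133, i.e. roughly 0.013 %
print(f"rate change per day: {change * 86400:.1f} s")  # about 11.5 s, consistent with ~12 s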
As a rule, whatever the method of impulse the action of the escapement should have the smallest effect on the oscillator which can be achieved, whether a pendulum or the balance in a watch. This effect, which all escapements have to a larger or smaller degree is known as the escapement error.
Any escapement with sliding friction will need lubrication, but as this deteriorates the friction will increase, and, perhaps, insufficient power will be transferred to the timing device. If the timing device is a pendulum, the increased frictional forces will decrease the Q factor, increasing the resonance band, and decreasing its precision. For spring-driven clocks, the impulse force applied by the spring changes as the spring is unwound, following Hooke's law. For gravity-driven clocks, the impulse force also increases as the driving weight falls and more chain suspends the weight from the gear train; in practice, however, this effect is only seen in large public clocks, and it can be avoided by a closed-loop chain.
Watches and smaller clocks do not use pendulums as the timing device. Instead, they use a balance spring: a fine spring connected to a metal balance wheel that oscillates (rotates back and forth). Most modern mechanical watches have a working frequency of 3–4 Hz (oscillations per second) or 6–8 beats per second (21,600–28,800 beats per hour; bph). Faster or slower speeds are used in some watches (33,600 bph or 19,800 bph). The working frequency depends on the balance spring's stiffness (spring constant); to keep time, the stiffness should not vary with temperature. Consequently, balance springs use sophisticated alloys; in this area, watchmaking is still advancing. As with the pendulum, the escapement must provide a small kick each cycle to keep the balance wheel oscillating. Also, the same lubrication problem occurs over time; the watch will lose accuracy (typically it will speed up) when the escapement lubrication starts failing.
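The beat-rate figures above follow directly from the oscillation frequency, since each full oscillation of the balance produces two beats. A minimal sketch of the conversion (the listed frequencies are simply illustrative):

def beats_per_hour(frequency_hz):
    # two beats (ticks) per oscillation, 3600 seconds per hour
    return int(frequency_hz * 2 * 3600)

for hz in (2.75, 3.0, 4.0):
    print(hz, "Hz ->", beats_per_hour(hz), "bph")   # 19,800 / 21,600 / 28,800 bph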
Pocket watches were the predecessor of modern wristwatches. Pocket watches, being in the pocket, were usually in a vertical orientation. Gravity causes some loss of accuracy as it magnifies over time any lack of symmetry in the weight of the balance. The tourbillon was invented to minimize this: the balance and spring are put in a cage that rotates (typically but not necessarily, once a minute), smoothing gravitational distortions. This very clever and sophisticated clockwork is a prized complication in wristwatches, even though the natural movement of the wearer tends to smooth gravitational influences anyway.
The most accurate commercially produced mechanical clock was the electromechanical Shortt-Synchronome free pendulum clock invented by W. H. Shortt in 1921, which had an uncertainty of about 1 second per year. The most accurate mechanical clock to date is probably the electromechanical Littlemore Clock, built by noted archaeologist E. T. Hall in the 1990s. In Hall's paper, he reports an uncertainty of 3 parts in 10^9 measured over 100 days (an uncertainty of about 0.02 seconds over that period). Both of these clocks are electromechanical clocks: they use a pendulum as the timekeeping element, but electrical power rather than a mechanical gear train to supply energy to the pendulum.
Mechanical escapements
Since 1658 when the introduction of the pendulum and balance spring made accurate timepieces possible, it has been estimated that more than three hundred different mechanical escapements have been devised, but only about 10 have seen widespread use. These are described below. In the 20th century, electric timekeeping methods replaced mechanical clocks and watches, so escapement design became a little-known curiosity.
Verge escapement
The earliest mechanical escapement, from the late 1200s was the verge escapement, also known as the crown-wheel escapement. It was used in the first mechanical clocks and was originally controlled by a foliot, a horizontal bar with weights at either end. The escapement consists of an escape wheel shaped somewhat like a crown, with pointed teeth sticking axially out of the side, oriented horizontally. In front of the crown wheel is a vertical shaft, attached to the foliot at the top, which carries two metal plates (pallets) sticking out like flags from a flag pole, oriented about ninety degrees apart, so only one engages the crown wheel teeth at a time. As the wheel turns, one tooth pushes against the upper pallet, rotating the shaft and the attached foliot. As the tooth pushes past the upper pallet, the lower pallet swings into the path of the teeth on the other side of the wheel. A tooth catches on the lower pallet, rotating the shaft back the other way, and the cycle repeats. A disadvantage of the escapement was that each time a tooth landed on a pallet, the momentum of the foliot pushed the crown wheel backward a short distance before the force of the wheel reversed the motion. This is called "recoil" and was a source of wear and inaccuracy.
The verge was the only escapement used in clocks and watches for 350 years. In spring-driven clocks and watches, it required a fusee to even out the force of the mainspring. It was used in the first pendulum clocks for about 50 years after the pendulum clock was invented in 1656. In a pendulum clock, the crown wheel and staff were oriented so they were horizontal, and the pendulum was hung from the staff. However, the verge is the most inaccurate of the common escapements, and after the pendulum was introduced in the 1650s, the verge began to be replaced by other escapements, being abandoned only by the late 1800s. By this time, the fashion for thin watches had required that the escape wheel be made very small, amplifying the effects of wear, and when a watch of this period is wound up today, it will often be found to run very fast, gaining many hours per day.
Cross-beat escapement
Jost Bürgi invented the cross-beat escapement in 1584, a variation of the verge escapement which had two foliots that rotated in opposite directions. According to contemporary accounts, his clocks achieved remarkable accuracy of within a minute per day, two orders of magnitude better than other clocks of the time. However, this improvement was probably not due to the escapement itself, but rather to better workmanship and his invention of the remontoire, a device that isolated the escapement from changes in drive force. Without a balance spring, the crossbeat would have been no more isochronous than the verge.
Galileo's escapement
Galileo's escapement is a design for a clock escapement, invented around 1637 by Italian scientist Galileo Galilei (1564–1642). It was the earliest known design for a pendulum clock. Since he was by then blind, Galileo described the device to his son, who drew a sketch of it. The son began construction of a prototype, but both he and Galileo died before it was completed.
Anchor escapement
Invented around 1657 by Robert Hooke, the anchor quickly superseded the verge to become the standard escapement used in pendulum clocks through to the 19th century. Its advantage was that it reduced the wide pendulum swing angles of the verge to 3–6°, making the pendulum nearly isochronous, and allowing the use of longer, slower-moving pendulums, which used less energy. The anchor is responsible for the long narrow shape of most pendulum clocks, and for the development of the grandfather clock, the first anchor clock to be sold commercially, which was invented around 1680 by William Clement, who disputed credit for the escapement with Hooke.
The anchor consists of an escape wheel with pointed, backward slanted teeth, and an "anchor"-shaped piece pivoted above it which rocks from side to side, linked to the pendulum. The anchor has slanted pallets on the arms which alternately catch on the teeth of the escape wheel, receiving impulses. Operation is mechanically similar to the verge escapement, and it has two of the verge's disadvantages: (1) The pendulum is constantly being pushed by an escape wheel tooth throughout its cycle, and is never allowed to swing freely, which disturbs its isochronism, and (2) it is a recoil escapement; the anchor pushes the escape wheel backward during part of its cycle. This causes backlash, increased wear in the clock's gears, and inaccuracy. These problems were eliminated in the deadbeat escapement, which slowly replaced the anchor in precision clocks.
Deadbeat escapement
The Graham or deadbeat escapement was an improvement of the anchor escapement, first made by Thomas Tompion to a design by Richard Towneley in 1675, although it is often credited to Tompion's successor George Graham, who popularized it in 1715. In the anchor escapement, the swing of the pendulum pushes the escape wheel backward during part of its cycle. This 'recoil' disturbs the motion of the pendulum, causing inaccuracy, and reverses the direction of the gear train, causing backlash and introducing high loads into the system, leading to friction and wear. The main advantage of the deadbeat is that it eliminates recoil.
In the deadbeat, the pallets have a second curved "locking" face on them, concentric about the pivot on which the anchor turns. During the extremities of the pendulum's swing, the escape wheel tooth rests against this locking face, providing no impulse to the pendulum, which prevents recoil. Near the bottom of the pendulum's swing, the tooth slides off the locking face onto the angled "impulse" face, giving the pendulum a push, before the pallet releases the tooth. The deadbeat was first used in precision regulator clocks, but because of its greater accuracy superseded the anchor in the 19th century. It is used in almost all modern pendulum clocks except for tower clocks which often use gravity escapements.
Pin wheel escapement
Invented around 1741 by Louis Amant, this version of a deadbeat escapement can be made quite rugged. Instead of using teeth, the escape wheel has round pins that are stopped and released by a scissors-like anchor. This escapement, which is also called Amant escapement or (in Germany) Mannhardt escapement, is used quite often in tower clocks.
Detent escapement
The detent or chronometer escapement was used in marine chronometers, although some precision watches during the 18th and 19th centuries also used it. It was considered the most accurate of the balance wheel escapements before the beginning of the 20th century, when lever escapement chronometers began to outperform them in competition. The early form was invented by Pierre Le Roy in 1748, who created a pivoted detent type of escapement, though this was theoretically deficient. The first effective design of detent escapement was invented by John Arnold around 1775, but with the detent pivoted. This escapement was modified by Thomas Earnshaw in 1780 and patented by Wright (for whom he worked) in 1783; however, as depicted in the patent it was unworkable. Arnold also designed a spring detent escapement but, with improved design, Earnshaw's version eventually prevailed as the basic idea underwent several minor modifications during the last decade of the 18th century. The final form appeared around 1800, and this design was used until mechanical chronometers became obsolete in the 1970s.
The detent is a detached escapement; it allows the balance wheel to swing undisturbed during most of its cycle, except the brief impulse period, which is only given once per cycle (every other swing). Because the driving escape wheel tooth moves almost parallel to the pallet, the escapement has little friction and does not need oiling. For these reasons among others, the detent was considered the most accurate escapement for balance wheel timepieces. John Arnold was the first to use the detent escapement with an overcoil balance spring (patented 1782), and with this improvement his watches were the first truly accurate pocket timekeepers, keeping time to within 1 or 2 seconds per day. These were produced from 1783 onwards.
However, the escapement had disadvantages that limited its use in watches: it was fragile and required skilled maintenance; it was not self-starting, so if the watch was jarred in use so the balance wheel stopped, it would not start up again; and it was harder to manufacture in volume. Therefore, the self-starting lever escapement became dominant in watches.
Cylinder escapement
The horizontal or cylinder escapement, invented by Thomas Tompion in 1695 and perfected by George Graham in 1726, was one of the escapements which replaced the verge escapement in pocketwatches after 1700. A major attraction was that it was much thinner than the verge, allowing watches to be made fashionably slim. Clockmakers found it suffered from excessive wear, so it was not much used during the 18th century, except in a few high-end watches with cylinders made from ruby. The French solved this problem by making the cylinder and escape wheel of hardened steel, and the escapement was used in large numbers in inexpensive French and Swiss pocketwatches and small clocks from the mid-19th to the 20th century.
Rather than pallets, the escapement uses a cutaway cylinder on the balance wheel shaft, which the escape teeth enter one by one. Each wedge-shaped tooth impulses the balance wheel by pressure on the cylinder edge as it enters, is held inside the cylinder as it turns, and impulses the wheel again as it leaves out the other side. The wheel usually had 15 teeth and impulsed the balance over an angle of 20° to 40° in each direction. It is a frictional rest escapement, with the teeth in contact with the cylinder over the whole balance wheel cycle, and so was not as accurate as "detached" escapements like the lever, and the high friction forces caused excessive wear and necessitated more frequent cleaning.
Duplex escapement
The duplex watch escapement was invented by Robert Hooke around 1700, improved by Jean Baptiste Dutertre and Pierre Le Roy, and put in final form by Thomas Tyrer, who patented it in 1782.
The early forms had two escape wheels. The duplex escapement was difficult to make but achieved much higher accuracy than the cylinder escapement, and could equal that of the (early) lever escapement and when carefully made was almost as good as a detent escapement.
It was used in quality English pocketwatches from about 1790 to 1860,
and in the Waterbury, a cheap American 'everyman's' watch, during 1880–1898.
In the duplex, as in the chronometer escapement to which it has similarities, the balance wheel only receives an impulse during one of the two swings in its cycle.
The escape wheel has two sets of teeth (hence the name 'duplex'); long locking teeth project from the side of the wheel, and short impulse teeth stick up axially from the top. The cycle starts with a locking tooth resting against the ruby disk. As the balance wheel swings counterclockwise through its center position, the notch in the ruby disk releases the tooth. As the escape wheel turns, the pallet is in just the right position to receive a push from an impulse tooth. Then the next locking tooth drops onto the ruby roller and stays there while the balance wheel completes its cycle and swings back clockwise (CW), and the process repeats. During the CW swing, the impulse tooth falls momentarily into the ruby roller notch again but is not released.
The duplex is technically a frictional rest escapement; the tooth resting against the roller adds some friction to the balance wheel during its swing but this is very minimal. As in the chronometer, there is little sliding friction during impulse since pallet and impulse tooth are moving almost parallel, so little lubrication is needed.
However, it lost favor to the lever; its tight tolerances and sensitivity to shock made duplex watches unsuitable for active people. Like the chronometer, it is not self-starting and is vulnerable to "setting;" if a sudden jar stops the balance during its CW swing, it cannot get started again.
Lever escapement
The lever escapement, invented by Thomas Mudge in 1750, has been used in the vast majority of watches since the 19th century. Its advantages are (1) it is a "detached" escapement; unlike the cylinder or duplex escapements the balance wheel is only in contact with the lever during the short impulse period when it swings through its centre position and swings freely the rest of its cycle, increasing accuracy, and (2) it is a self-starting escapement, so if the watch is shaken so that the balance wheel stops, it will automatically start again. The original form was the rack lever escapement, in which the lever and the balance wheel were always in contact via a gear rack on the lever. Later, it was realized that all the teeth from the gears could be removed except one, and this created the detached lever escapement. British watchmakers used the English detached lever, in which the lever was at right angles to the balance wheel. Later Swiss and American manufacturers used the inline lever, in which the lever is inline between the balance wheel and the escape wheel; this is the form used in modern watches. In 1798, Louis Perron invented an inexpensive, less accurate form called the pin-pallet escapement, which was used in cheap "dollar watches" in the early 20th century and is still used in cheap alarm clocks and kitchen timers.
Grasshopper escapement
A rare but interesting mechanical escapement is John Harrison's grasshopper escapement invented in 1722. In this escapement, the pendulum is driven by two hinged arms (pallets). As the pendulum swings, the end of one arm catches on the escape wheel and drives it slightly backwards; this releases the other arm which moves out of the way to allow the escape wheel to pass. When the pendulum swings back again, the other arm catches the wheel, pushes it back and releases the first arm, and so on. The grasshopper escapement has been used in very few clocks since Harrison's time. Grasshopper escapements made by Harrison in the 18th century are still operating. Most escapements wear far more quickly, and waste far more energy. However, like other early escapements, the grasshopper impulses the pendulum throughout its cycle; it is never allowed to swing freely, causing error due to variations in drive force, and 19th-century clockmakers found it uncompetitive with more detached escapements like the deadbeat. Nevertheless, with enough care in construction it is capable of accuracy. A modern experimental grasshopper clock, the Burgess Clock B, had a measured error of only of a second during 100 running days. After two years of operation, it had an error of only ±0.5 sec, after barometric correction.
Gravity escapement
A gravity escapement uses a small weight or a weak spring to give an impulse directly to the pendulum. The earliest form consisted of two arms which were pivoted very close to the suspension spring of the pendulum with one arm on each side of the pendulum. Each arm carried a small deadbeat pallet with an angled plane leading to it. When the pendulum lifted one arm far enough, its pallet would release the escape wheel. Almost immediately, another tooth on the escape wheel would start to slide up the angle face on the other arm thereby lifting the arm. It would reach the pallet and stop. The other arm meanwhile was still in contact with the pendulum and coming down again to a point lower than it had started from. This lowering of the arm provides the impulse to the pendulum. The design was developed steadily from the middle of the 18th century to the middle of the 19th century. It eventually became the escapement of choice for turret clocks, because their wheel trains are subjected to large variations in drive force caused by the large exterior hands, with their varying wind, snow, and ice loads. Since in a gravity escapement, the drive force from the wheel train does not itself impel the pendulum but merely resets the weights that provide the impulse, the escapement is not affected by variations in drive force.
The 'Double Three-legged Gravity Escapement' shown here is a form of escapement first devised by a barrister named Bloxam and later improved by Lord Grimthorpe. It is the standard for all accurate 'Tower' clocks.
In the animation shown here, the two "gravity arms" are coloured blue and red. The two three-legged escape wheels are also coloured blue and red. They work in two parallel planes so that the blue wheel only impacts the locking block on the blue arm and the red wheel only impacts the red arm. In a real escapement, these impacts give rise to loud audible "ticks" and these are indicated by the appearance of a * beside the locking blocks. The three black lifting pins are key to the operation of the escapement. They cause the weighted gravity arms to be raised by an amount indicated by the pair of parallel lines on each side of the escapement. This gain in potential energy is the energy given to the pendulum on each cycle. For the Trinity College Cambridge Clock, a mass of around 50 grams is lifted through 3 mm every 1.5 seconds, which works out to about 1 mW of power. The driving power from the falling weight is about 12 mW, so there is a substantial excess of power used to drive the escapement. Much of this energy is dissipated in the acceleration and deceleration of the frictional "fly" attached to the escape wheels.
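The roughly 1 mW figure quoted for the Trinity College Cambridge clock can be checked directly from the numbers given (50 g lifted through 3 mm every 1.5 s); the calculation below is only that arithmetic, with g taken as 9.81 m/s².

mass_kg, g, lift_m, interval_s = 0.050, 9.81, 0.003, 1.5
power_w = mass_kg * g * lift_m / interval_s   # energy per lift divided by the lift interval
print(f"{power_w * 1000:.2f} mW")             # about 0.98 mW, i.e. roughly 1 mW as stated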
The great clock in Elizabeth Tower at Westminster that rings London's Big Ben uses a double three-legged gravity escapement.
Coaxial escapement
Invented around 1974 and patented 1980 by British watchmaker George Daniels, the coaxial escapement is one of the few new watch escapements adopted commercially in modern times.
It could be regarded as having its distant origins in the escapement invented by Robert Robin, c. 1792, which gives a single impulse in one direction; with the locking achieved by passive lever pallets, the design of the coaxial escapement is more akin to that of another Robin variant, the Fasoldt escapement, which was invented and patented by the American Charles Fasoldt in 1859.
Both Robin and Fasoldt escapements give impulse in one direction only.
The latter escapement has a lever with unequal drops; this engages with two escape wheels of differing diameters. The smaller impulse wheel acts on the single pallet at the end of the lever, whilst the pointed lever pallets lock on the larger wheel.
The balance engages with and is impelled by the lever through a roller pin and lever fork. The lever 'anchor' pallet locks the larger wheel and, on this being unlocked, a pallet on the end of the lever is given an impulse by the smaller wheel through the lever fork. The return stroke is 'dead', with the 'anchor' pallets serving only to lock and unlock, impulse being given in one direction through the single lever pallet.
As with the duplex, the locking wheel is larger in order to reduce pressure and thus friction.
The Daniels escapement, however, achieves a double impulse with passive lever pallets serving only to lock and unlock the larger wheel. On one side, impulse is given by means of the smaller wheel acting on the lever pallet through the roller and impulse pin. On the return, the lever again unlocks the larger wheel, which gives an impulse directly onto an impulse roller on the balance staff.
The main advantage is that this enables both impulses to occur on or around the centre line, with disengaging friction in both directions.
This mode of impulse is in theory superior to that of the lever escapement, which has engaging friction on the entry pallet; this has long been recognized as a disturbing influence on the isochronism of the balance.
Purchasers no longer buy mechanical watches primarily for their accuracy, so manufacturers had little incentive to invest in the tooling required; Omega finally adopted the escapement in 1999.
Other modern watch escapements
Since low-cost quartz watches achieve accuracy far greater than any mechanical watch, improved escapement designs are no longer motivated by practical timekeeping needs; they serve instead as novelties in the high-end watch market. In an effort to attract publicity, some high-end mechanical watchmakers have introduced new escapements in recent decades. None of these has been adopted by any watchmaker beyond its original creator.
Based on patents initially submitted by Rolex on behalf of inventor Nicolas Déhon, the constant escapement was developed by Girard-Perregaux as working prototypes in 2008 (Nicolas Déhon was then head of Girard-Perregaux R&D department) and in watches by 2013.
The key component of this escapement is a silicon buckled-blade which stores elastic energy. This blade is flexed to a point close to its unstable state and is released with a snap each swing of the balance wheel to give the wheel an impulse, after which it is cocked again by the wheel train. The advantage claimed is that since the blade imparts the same amount of energy to the wheel each release, the balance wheel is isolated from variations in impulse force due to the wheel train and mainspring which cause inaccuracies in conventional escapements.
Parmigiani Fleurier with its Genequand escapement and Ulysse Nardin with its Ulysse Anchor escapement have taken advantage of the properties of silicon flat springs. The independent watchmaker, De Bethune, has developed a concept where a magnet makes a resonator vibrate at high frequency, replacing the traditional balance spring.
Electromechanical escapements
In the late 19th century, electromechanical escapements were developed for pendulum clocks. In these, a switch or phototube energised an electromagnet for a brief section of the pendulum's swing. On some clocks, the pulse of electricity that drove the pendulum also drove a plunger to move the gear train.
Hipp clock
In 1843, Matthäus Hipp first described a purely mechanical clock driven by a switch, which he called an "echappement à palette". A modified version of that escapement, the so-called "Hipp toggle", has been used since the 1860s in electrically driven pendulum clocks. In an improved version used from the 1870s, the pendulum drove a ratchet wheel via a pawl on the pendulum rod, and the ratchet wheel drove the rest of the clock train to indicate the time. The pendulum was not impelled on every swing or even at a set interval of time; it was impelled only when its arc of swing had decayed below a certain level. As well as the counting pawl, the pendulum carried a small vane, known as Hipp's toggle, pivoted at the top and completely free to swing. It was placed so that it dragged across a triangular polished block with a vee-groove in its top. When the arc of swing of the pendulum was large enough, the vane crossed the groove and swung free on the other side. If the arc was too small, the vane never left the far side of the groove, and when the pendulum swung back it pushed the block strongly downwards. The block carried a contact which completed the circuit to the electromagnet that impelled the pendulum. The pendulum was thus impelled only as required.
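The "hit and miss" behaviour described above, in which an impulse is given only when the swing has decayed below a threshold rather than at fixed intervals, can be illustrated with a toy simulation; the sketch below is purely conceptual, and its amplitude, decay and impulse values are illustrative, not measurements of any actual Hipp clock.

```python
# Conceptual "hit and miss" sketch of the Hipp toggle: the pendulum receives
# an impulse only when its swing has decayed below a threshold, not on a
# fixed schedule. All numbers are illustrative, not measured values.
THRESHOLD = 0.90  # amplitude below which the vane catches the block
DECAY = 0.98      # fraction of amplitude kept per swing (frictional losses)
IMPULSE = 0.12    # amplitude restored by one electromagnetic push

amplitude = 1.0
for swing in range(1, 21):
    amplitude *= DECAY            # each swing loses a little energy
    if amplitude < THRESHOLD:     # vane pushes the block down, contact closes
        amplitude += IMPULSE      # electromagnet impels the pendulum
        print(f"swing {swing:2d}: impulse, amplitude -> {amplitude:.3f}")
    else:                         # vane swings freely across the groove
        print(f"swing {swing:2d}: miss,    amplitude -> {amplitude:.3f}")
```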
This type of clock was widely used as a master clock in large buildings to control numerous slave clocks. Most telephone exchanges used such a clock to control timed events, such as those needed for the setup and charging of telephone calls, by issuing pulses of varying durations, for example every second or every six seconds.
Synchronome switch
Designed in 1895 by Frank Hope-Jones, the Synchronome switch and gravity escapement were the basis for the majority of the Synchronome Company's clocks in the 20th century, and also for the slave pendulum in the Shortt-Synchronome free pendulum clock. A gathering arm attached to the pendulum moves a 15-tooth count wheel in one direction, with a pawl preventing movement in the reverse direction. The wheel has a vane attached which, once per 30-second turn, releases the gravity arm. When the gravity arm falls it pushes against a pallet attached directly to the pendulum, giving it a push. Once the arm has fallen, it makes an electrical contact that energises an electromagnet to reset the gravity arm and also acts as the half-minute impulse for the slave clocks.
Free pendulum clock
In the 20th century, the English horologist William Hamilton Shortt invented a free pendulum clock, patented in September 1921 and manufactured by the Synchronome Company, with an accuracy of one hundredth of a second per day. In this system the timekeeping "master" pendulum, whose rod is made from Invar, a special steel alloy with 36% nickel whose length changes very little with temperature, swings as free of external influence as possible, sealed in a vacuum chamber, and does no work. It is in mechanical contact with its escapement for only a fraction of a second every 30 seconds. A secondary "slave" pendulum turns a ratchet, which triggers an electromagnet slightly less than every thirty seconds. This electromagnet releases a gravity lever onto the escapement above the master pendulum. A fraction of a second later (but exactly every 30 seconds), the motion of the master pendulum releases the gravity lever to fall farther. In the process, the gravity lever gives a tiny impulse to the master pendulum, which keeps that pendulum swinging. The gravity lever falls onto a pair of contacts, completing a circuit that does several things:
energizes a second electromagnet to raise the gravity lever above the master pendulum to its top position,
sends a pulse to activate one or more clock dials, and
sends a pulse to a synchronizing mechanism that keeps the slave pendulum in step with the master pendulum.
Since it is the slave pendulum that releases the gravity lever, this synchronization is vital to the functioning of the clock. The synchronizing mechanism used a small spring attached to the shaft of the slave pendulum and an electromagnetic armature that would catch the spring if the slave pendulum was running slightly late, thus shortening the period of the slave pendulum for one swing. The slave pendulum was adjusted to run slightly slow, such that on approximately every other synchronization pulse the spring would be caught by the armature.
This form of clock became a standard for use in observatories (roughly 100 such clocks were manufactured), and was the first clock capable of detecting small variations in the speed of Earth's rotation.
Position angle
In astronomy, position angle (usually abbreviated PA) is the convention for measuring angles on the sky. The International Astronomical Union defines it as the angle measured relative to the north celestial pole (NCP), turning positive into the direction of the right ascension. In the standard (non-flipped) images, this is a counterclockwise measure relative to the axis into the direction of positive declination.
In the case of observed visual binary stars, it is defined as the angular offset of the secondary star from the primary relative to the north celestial pole.
As the example illustrates, if one were observing a hypothetical binary star with a PA of 30°, an imaginary line in the eyepiece drawn from the primary (P) toward the north celestial pole would make an angle of 30° with the line drawn from the primary to the secondary (S).
When graphing visual binaries, the NCP line is, as in the illustration, normally drawn from the center point (origin), which is the primary, downward, that is, with north at the bottom, and PA is measured counterclockwise. The direction of the proper motion can also, for example, be given by its position angle.
The definition of position angle is also applied to extended objects like galaxies, where it refers to the angle made by the major axis of the object with the NCP line.
Nautics
The concept of the position angle is inherited from nautical navigation on the oceans, where the optimum compass course is the course from a known position to a target position with minimum effort. Setting aside the influence of winds and ocean currents, the optimum course is the course of smallest distance between the two positions on the ocean surface. Computing the compass course is known as the inverse geodetic problem.
This article considers only the abstraction of the problem, minimizing the travelled distance between the two positions on the surface of a sphere of given radius: in which direction angle relative to North should the ship steer to reach the target position?
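A minimal sketch of this calculation, assuming a spherical Earth and the standard great-circle (forward azimuth) formula; the function and variable names are illustrative and not taken from the article:

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial course, in degrees clockwise from North, from position 1 to
    position 2 along a great circle on a sphere. Inputs are in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

# Example: initial course from 50°N 0°E to 55°N 10°E (roughly north-east).
print(round(initial_bearing(50.0, 0.0, 55.0, 10.0), 1))
```

Note that the sphere's radius cancels out of the bearing; it is needed only to compute the distance itself.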
Oil
An oil is any nonpolar chemical substance that is composed primarily of hydrocarbons and is hydrophobic (does not mix with water) and lipophilic (mixes with other oils). Oils are usually flammable and surface active. Most oils are unsaturated lipids that are liquid at room temperature.
The general definition of oil includes classes of chemical compounds that may be otherwise unrelated in structure, properties, and uses. Oils may be animal, vegetable, or petrochemical in origin, and may be volatile or non-volatile. They are used for food (e.g., olive oil), fuel (e.g., heating oil), medical purposes (e.g., mineral oil), lubrication (e.g. motor oil), and the manufacture of many types of paints, plastics, and other materials. Specially prepared oils are used in some religious ceremonies and rituals as purifying agents.
Etymology
First attested in English in 1176, the word oil comes from Old French oile, from Latin oleum, which in turn comes from the Greek elaion, "olive oil, oil", and that from elaia, "olive tree", "olive fruit". The earliest attested forms of the word are the Mycenaean Greek e-ra-wo and e-rai-wo, written in the Linear B syllabic script.
Types
Organic oils
Organic oils are produced in remarkable diversity by plants, animals, and other organisms through natural metabolic processes. Lipid is the scientific term for the fatty acids, steroids and similar chemicals often found in the oils produced by living things, while oil refers to an overall mixture of chemicals. Organic oils may also contain chemicals other than lipids, including proteins, waxes (class of compounds with oil-like properties that are solid at common temperatures) and alkaloids.
Lipids can be classified by the way that they are made by an organism, their chemical structure and their limited solubility in water compared to oils. They have a high carbon and hydrogen content and are considerably lacking in oxygen compared to other organic compounds and minerals; they tend to be relatively nonpolar molecules, but may include both polar and nonpolar regions as in the case of phospholipids and steroids.
Mineral oils
Crude oil, or petroleum, and its refined components, collectively termed petrochemicals, are crucial resources in the modern economy. Crude oil originates from ancient fossilized organic materials, such as zooplankton and algae, which geochemical processes convert into oil. The name "mineral oil" is a misnomer, in that minerals are not the source of the oil—ancient plants and animals are. Mineral oil is organic. However, it is classified as "mineral oil" instead of as "organic oil" because its organic origin is remote (and was unknown at the time of its discovery), and because it is obtained in the vicinity of rocks, underground traps, and sands. Mineral oil also refers to several specific distillates of crude oil.
Applications
Cooking
Several edible vegetable and animal oils, and also fats, are used for various purposes in cooking and food preparation. In particular, many foods are fried in oil much hotter than boiling water. Oils are also used for flavoring and for modifying the texture of foods (e.g. stir fry). Cooking oils are derived either from animal fat, as butter, lard and other types, or plant oils from olive, maize, sunflower and many other species.
Cosmetics
Oils are applied to hair to give it a lustrous look, to prevent tangles and roughness and to stabilize the hair to promote growth. See hair conditioner.
Religion
Oil has been used throughout history as a religious medium. It is often considered a spiritually purifying agent and is used for anointing purposes. As a particular example, holy anointing oil has been an important ritual liquid for Judaism and Christianity.
Health
Oils have been consumed since ancient times and have both nutritional and medicinal value. Olive oil is a good example: it is rich in fats, which is also why it was used for lighting in ancient Greece and Rome, and people used it to bulk out food so they would have more energy to burn through the day. Olive oil was also used at this time to clean the body, serving as an early, unsophisticated form of soap: it traps moisture in the skin while drawing grime to the surface. It was applied to the skin and then scrubbed off with a wooden stick, pulling off the excess grime and leaving a layer on which new grime could form but be easily washed off in water, since oil is hydrophobic. Fish oils contain omega-3 fatty acids, which help with inflammation and reduce fat in the bloodstream.
Painting
Color pigments are easily suspended in oil, making it suitable as a supporting medium for paints. The oldest known extant oil paintings date from 650 AD.
Heat transfer
Oils are used as coolants in oil cooling, for instance in electric transformers. Heat transfer oils are used as coolants (see oil cooling), for heating (e.g. in oil heaters), and in other applications of heat transfer.
Lubrication
Given that they are non-polar, oils do not easily adhere to other substances. This makes them useful as lubricants for various engineering purposes. Mineral oils are more commonly used as machine lubricants than biological oils are. Whale oil was preferred for lubricating clocks because it does not evaporate and leave dust behind, although its use was banned in the US in 1980.
It is a long-running myth that spermaceti from whales has still been used in NASA projects such as the Hubble Space Telescope and the Voyager probe because of its extremely low freezing temperature. Spermaceti is not actually an oil, but a mixture mostly of wax esters, and there is no evidence that NASA has used whale oil.
Fuel
Some oils burn in liquid or aerosol form, generating light and heat which can be used directly or converted into other forms of energy such as electricity or mechanical work. To obtain many fuel oils, crude oil is pumped from the ground and shipped via oil tanker or a pipeline to an oil refinery. There, it is converted from crude oil into diesel fuel (petrodiesel), ethane (and other short-chain alkanes), fuel oils (the heaviest of commercial fuels, used in ships and furnaces), gasoline (petrol), jet fuel, kerosene, benzene (historically), and liquefied petroleum gas. A barrel of crude oil yields a mix of diesel, jet fuel, gasoline, heating oil, and other products split between heavy fuel oil and liquefied petroleum gases; refining a barrel of crude into these products yields a total volume slightly greater than the original barrel.
In the 18th and 19th centuries, whale oil was commonly used for lamps, which was replaced with natural gas and then electricity.
Chemical feedstock
Crude oil can be refined into a wide variety of component hydrocarbons. Petrochemicals are the refined components of crude oil and the chemical products made from them. They are used as detergents, fertilizers, medicines, paints, plastics, synthetic fibers, and synthetic rubber.
Organic oils are another important chemical feedstock, especially in green chemistry.
Bacterial cell structure
A bacterium, despite its simplicity, contains a well-developed cell structure which is responsible for some of its unique biological structures and pathogenicity. Many structural features are unique to bacteria and are not found among archaea or eukaryotes. Because of the simplicity of bacteria relative to larger organisms and the ease with which they can be manipulated experimentally, the cell structure of bacteria has been well studied, revealing many biochemical principles that have been subsequently applied to other organisms.
Cell morphology
Perhaps the most elemental structural property of bacteria is their morphology (shape). Typical examples include:
coccus (circle or spherical)
bacillus (rod-like)
coccobacillus (between a sphere and a rod)
spiral (corkscrew-like)
filamentous (elongated)
Cell shape is generally characteristic of a given bacterial species, but can vary depending on growth conditions. Some bacteria have complex life cycles involving the production of stalks and appendages (e.g. Caulobacter) and some produce elaborate structures bearing reproductive spores (e.g. Myxococcus, Streptomyces). Bacteria generally form distinctive cell morphologies when examined by light microscopy and distinct colony morphologies when grown on Petri plates.
Perhaps the most obvious structural characteristic of bacteria is (with some exceptions) their small size. For example, Escherichia coli cells, an "average" sized bacterium, are about 2 μm (micrometres) long and 0.5 μm in diameter, with a cell volume of 0.6–0.7 μm3. This corresponds to a wet mass of about 1 picogram (pg), assuming that the cell consists mostly of water. The dry mass of a single cell can be estimated as 23% of the wet mass, amounting to 0.2 pg. About half of the dry mass of a bacterial cell consists of carbon, and also about half of it can be attributed to proteins. Therefore, a typical fully grown 1-liter culture of Escherichia coli (at an optical density of 1.0, corresponding to c. 109 cells/ml) yields about 1 g wet cell mass. Small size is extremely important because it allows for a large surface area-to-volume ratio which allows for rapid uptake and intracellular distribution of nutrients and excretion of wastes. At low surface area-to-volume ratios the diffusion of nutrients and waste products across the bacterial cell membrane limits the rate at which microbial metabolism can occur, making the cell less evolutionarily fit. The reason for the existence of large cells is unknown, although it is speculated that the increased cell volume is used primarily for storage of excess nutrients.
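The 1 g figure for a 1-litre culture follows directly from the per-cell values quoted above; the sketch below is a quick back-of-the-envelope check using those approximate numbers.

```python
# Quick check of the culture-mass estimate quoted above for E. coli.
cells_per_ml = 1e9          # at an optical density of about 1.0
culture_volume_ml = 1000    # a 1-litre culture
wet_mass_per_cell_pg = 1.0  # about 1 picogram per cell
dry_fraction = 0.23         # dry mass is roughly 23% of wet mass

total_cells = cells_per_ml * culture_volume_ml           # about 1e12 cells
wet_mass_g = total_cells * wet_mass_per_cell_pg * 1e-12  # picograms -> grams
dry_mass_g = wet_mass_g * dry_fraction

print(f"total cells: {total_cells:.0e}")  # ~1e12
print(f"wet mass: {wet_mass_g:.1f} g")    # ~1 g
print(f"dry mass: {dry_mass_g:.2f} g")    # ~0.23 g
```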
Comparison of a typical bacterial cell and a typical human cell (assuming both cells are spheres):
Cell wall
The cell envelope is composed of the cell membrane and the cell wall. As in other organisms, the bacterial cell wall provides structural integrity to the cell. In prokaryotes, the primary function of the cell wall is to protect the cell from internal turgor pressure caused by the much higher concentrations of proteins and other molecules inside the cell compared to its external environment. The bacterial cell wall differs from that of all other organisms by the presence of peptidoglycan which is located immediately outside of the cell membrane. Peptidoglycan is made up of a polysaccharide backbone consisting of alternating N-Acetylmuramic acid (NAM) and N-acetylglucosamine (NAG) residues in equal amounts. Peptidoglycan is responsible for the rigidity of the bacterial cell wall, and for the determination of cell shape. It is relatively porous and is not considered to be a permeability barrier for small substrates.
While all bacterial cell walls (with a few exceptions such as extracellular parasites such as Mycoplasma) contain peptidoglycan, not all cell walls have the same overall structures. Since the cell wall is required for bacterial survival, but is absent in some eukaryotes, several antibiotics (notably the penicillins and cephalosporins) stop bacterial infections by interfering with cell wall synthesis, while having no effects on human cells which have no cell wall, only a cell membrane.
There are two main types of bacterial cell walls, those of Gram-positive bacteria and those of Gram-negative bacteria, which are differentiated by their Gram staining characteristics. For both these types of bacteria, particles of approximately 2 nm can pass through the peptidoglycan. If the bacterial cell wall is entirely removed, it is called a protoplast while if it's partially removed, it is called a spheroplast. Beta-lactam antibiotics such as penicillin inhibit the formation of peptidoglycan cross-links in the bacterial cell wall. The enzyme lysozyme, found in human tears, also digests the cell wall of bacteria and is the body's main defense against eye infections.
Gram-positive cell wall
Gram-positive cell walls are thick and the peptidoglycan (also known as murein) layer constitutes almost 95% of the cell wall in some Gram-positive bacteria and as little as 5-10% of the cell wall in Gram-negative bacteria. The peptidoglycan layer takes up the crystal violet dye and stains purple in the Gram stain. Bacteria within the Deinococcota group may also exhibit Gram-positive staining but contain some cell wall structures typical of Gram-negative bacteria.
The cell wall of some Gram-positive bacteria can be completely dissolved by lysozymes which attack the bonds between N-acetylmuramic acid and N-acetylglucosamine. In other Gram-positive bacteria, such as Staphylococcus aureus, the walls are resistant to the action of lysozymes. They have O-acetyl groups on carbon-6 of some muramic acid residues.
The matrix substances in the walls of Gram-positive bacteria may be polysaccharides or teichoic acids. The latter are very widespread, but have been found only in Gram-positive bacteria. There are two main types of teichoic acid: ribitol teichoic acids and glycerol teichoic acids. The latter one is more widespread. These acids are polymers of ribitol phosphate and glycerol phosphate, respectively, and are located only on the surface of many Gram-positive bacteria. However, the exact function of teichoic acid is debated and not fully understood. Some are lipid-linked to form lipoteichoic acids. Because lipoteichoic acids are covalently linked to lipids within the cytoplasmic membrane they are responsible for linking and anchoring the peptidoglycan to the cytoplasmic membrane. Lipoteichoic acid is a major component of the gram-positive cell wall. One of its purposes is providing an antigenic function. The lipid element is to be found in the membrane where its adhesive properties assist in its anchoring to the membrane. Teichoic acids give the gram-positive cell wall an overall negative charge due to the presence of phosphodiester bonds between teichoic acid monomers.
Outside the cell wall, many gram-positive bacteria have an S-layer of "tiled" proteins. The S-layer assists attachment and biofilm formation. Outside the S-layer, there is often a capsule of polysaccharides. The capsule helps the bacterium evade host phagocytosis. In laboratory culture, the S-layer and capsule are often lost by reductive evolution (the loss of a trait in absence of positive selection).
Gram-negative cell wall
Gram-negative cell walls are much thinner than Gram-positive cell walls, and they contain a second plasma membrane superficial to their thin peptidoglycan layer, which in turn is adjacent to the cytoplasmic membrane. Gram-negative bacteria stain pink in the Gram stain. The chemical structure of the outer membrane's lipopolysaccharide is often unique to specific bacterial sub-species and is responsible for many of the antigenic properties of these strains.
In addition to the peptidoglycan layer, the Gram-negative cell wall also contains an additional outer membrane composed of phospholipids and lipopolysaccharides which face the external environment. The highly charged nature of lipopolysaccharides confers an overall negative charge on the Gram-negative cell wall. The chemical structure of the outer membrane lipopolysaccharides is often unique to specific bacterial strains, and is responsible for many of their antigenic properties.
As a phospholipid bilayer, the lipid portion of the outer membrane is largely impermeable to all charged molecules. However, channels called porins are present in the outer membrane that allow for passive transport of many ions, sugars and amino acids across the outer membrane. These molecules are therefore present in the periplasm, the region between the plasma membrane and outer membrane. The periplasm contains the peptidoglycan layer and many proteins responsible for substrate binding or hydrolysis and reception of extracellular signals. The periplasm is thought to exist as a gel-like state rather than a liquid due to the high concentration of proteins and peptidoglycan found within it. Because of its location between the cytoplasmic and outer membranes, signals received and substrates bound are available to be transported across the cytoplasmic membrane using transport and signaling proteins imbedded there.
Many uncultivated Gram-negative bacteria also have an S-layer and a capsule. These structures are often lost during laboratory cultivation.
Plasma membrane
The plasma membrane or bacterial cytoplasmic membrane is composed of a phospholipid bilayer and thus has all of the general functions of a cell membrane such as acting as a permeability barrier for most molecules and serving as the location for the transport of molecules into the cell. In addition to these functions, prokaryotic membranes also function in energy conservation as the location about which a proton motive force is generated. Unlike eukaryotes, bacterial membranes (with some exceptions e.g. Mycoplasma and methanotrophs) generally do not contain sterols. However, many microbes do contain structurally related compounds called hopanoids which likely fulfill the same function. Unlike eukaryotes, bacteria can have a wide variety of fatty acids within their membranes. Along with typical saturated and unsaturated fatty acids, bacteria can contain fatty acids with additional methyl, hydroxy or even cyclic groups. The relative proportions of these fatty acids can be modulated by the bacterium to maintain the optimum fluidity of the membrane (e.g. following temperature change).
Gram-negative and mycobacteria have an inner and outer bacteria membrane. As a phospholipid bilayer, the lipid portion of the bacterial outer membrane is impermeable to charged molecules. However, channels called porins are present in the outer membrane that allow for passive transport of many ions, sugars and amino acids across the outer membrane. These molecules are therefore present in the periplasm, the region between the cytoplasmic and outer membranes. The periplasm contains the peptidoglycan layer and many proteins responsible for substrate binding or hydrolysis and reception of extracellular signals. The periplasm is thought to exist in a gel-like state rather than a liquid due to the high concentration of proteins and peptidoglycan found within it. Because of its location between the cytoplasmic and outer membranes, signals received and substrates bound are available to be transported across the cytoplasmic membrane using transport and signaling proteins imbedded there.
Extracellular (external) structures
Fimbriae and pili
Fimbriae (sometimes called "attachment pili") are protein tubes that extend out from the outer membrane in many members of the Pseudomonadota. They are generally short in length and present in high numbers about the entire bacterial cell surface. Fimbriae usually function to facilitate the attachment of a bacterium to a surface (e.g. to form a biofilm) or to other cells (e.g. animal cells during pathogenesis). A few organisms (e.g. Myxococcus) use fimbriae for motility to facilitate the assembly of multicellular structures such as fruiting bodies. Pili are similar in structure to fimbriae but are much longer and present on the bacterial cell in low numbers. Pili are involved in the process of bacterial conjugation where they are called conjugation pili or "sex pili". Type IV pili (non-sex pili) also aid bacteria in gripping surfaces.
S-layers
An S-layer (surface layer) is a cell surface protein layer found in many different bacteria and in some archaea, where it serves as the cell wall. All S-layers are made up of a two-dimensional array of proteins and have a crystalline appearance, the symmetry of which differs between species. The exact function of S-layers is unknown, but it has been suggested that they act as a partial permeability barrier for large substrates. For example, an S-layer could conceivably keep extracellular proteins near the cell membrane by preventing their diffusion away from the cell. In some pathogenic species, an S-layer may help to facilitate survival within the host by conferring protection against host defence mechanisms.
Glycocalyx
Many bacteria secrete extracellular polymers outside of their cell walls called glycocalyx. These polymers are usually composed of polysaccharides and sometimes protein. Capsules are relatively impermeable structures that cannot be stained with dyes such as India ink. They are structures that help protect bacteria from phagocytosis and desiccation. The slime layer is involved in attachment of bacteria to other cells or inanimate surfaces to form biofilms. Slime layers can also be used as a food reserve for the cell.
Flagella
Perhaps the most recognizable extracellular bacterial cell structures are flagella. Flagella are whip-like structures protruding from the bacterial cell wall and are responsible for bacterial motility (movement). The arrangement of flagella about the bacterial cell is unique to the species observed. Common forms include:
Monotrichous – Single flagellum
Lophotrichous – A tuft of flagella found at one of the cell poles
Amphitrichous – Single flagellum found at each of two opposite poles
Peritrichous – Multiple flagella found at several locations about the cell
The bacterial flagellum consists of three basic components: a whip-like filament, a motor complex, and a hook that connects them. The filament is approximately 20 nm in diameter and consists of several protofilaments, each made up of thousands of flagellin subunits. The bundle is held together by a cap and may or may not be encapsulated. The motor complex consists of a series of rings anchoring the flagellum in the inner and outer membranes, followed by a proton-driven motor that drives rotational movement in the filament.
Intracellular (internal) structures
In comparison to eukaryotes, the intracellular features of the bacterial cell are extremely simple. Bacteria do not contain organelles in the same sense as eukaryotes. Instead, the chromosome and perhaps ribosomes are the only easily observable intracellular structures found in all bacteria. There do exist, however, specialized groups of bacteria that contain more complex intracellular structures, some of which are discussed below.
The bacterial DNA and plasmids
Unlike eukaryotes, the bacterial DNA is not enclosed inside of a membrane-bound nucleus but instead resides inside the cytoplasm. The processes concerning the transfer of genetic information — translation, transcription, and DNA replication — therefore all occur within the same compartment and can interact with other cytoplasmic structures, most notably ribosomes. Bacterial DNA can be located in two places:
Bacterial chromosome, located in the irregularly shaped region known as the nucleoid
Extrachromosomal DNA, located outside of the nucleoid region as circular or linear plasmids
The bacterial DNA is not packaged using histones to form chromatin as in eukaryotes but instead exists as a highly compact supercoiled structure, the precise nature of which remains unclear. Most bacterial chromosomes are circular, although some examples of linear chromosomes exist (e.g. Borrelia burgdorferi). Usually, a single bacterial chromosome is present, although some species with multiple chromosomes have been described.
Along with chromosomal DNA, most bacteria also contain small independent pieces of DNA called plasmids that often encode advantageous traits but are not essential to their bacterial host. Plasmids can be easily gained or lost by a bacterium and can be transferred between bacteria as a form of horizontal gene transfer.
Ribosomes and other multiprotein complexes
In most bacteria the most numerous intracellular structure is the ribosome, the site of protein synthesis in all living organisms. All prokaryotes have 70S (where S = Svedberg units) ribosomes, while eukaryotes contain larger 80S ribosomes in their cytosol. The 70S ribosome is made up of 50S and 30S subunits. The 50S subunit contains the 23S and 5S rRNA, while the 30S subunit contains the 16S rRNA. These rRNA molecules differ in size in eukaryotes and are complexed with a large number of ribosomal proteins, the number and type of which can vary slightly between organisms. While the ribosome is the most commonly observed intracellular multiprotein complex in bacteria, other large complexes do occur and can sometimes be seen using microscopy.
Intracellular membranes
While not typical of all bacteria some microbes contain intracellular membranes in addition to (or as extensions of) their cytoplasmic membranes. An early idea was that bacteria might contain membrane folds termed mesosomes, but these were later shown to be artifacts produced by the chemicals used to prepare the cells for electron microscopy. Examples of bacteria containing intracellular membranes are phototrophs, nitrifying bacteria and methane-oxidising bacteria. Intracellular membranes are also found in bacteria belonging to the poorly studied Planctomycetota group, although these membranes more closely resemble organellar membranes in eukaryotes and are currently of unknown function. Chromatophores are intracellular membranes found in phototrophic bacteria. Used primarily for photosynthesis, they contain bacteriochlorophyll pigments and carotenoids.
Cytoskeleton
The prokaryotic cytoskeleton is the collective name for all structural filaments in prokaryotes. It was once thought that prokaryotic cells did not possess cytoskeletons, but advances in imaging technology and structure determination have shown the presence of filaments in these cells. Homologues for all major cytoskeletal proteins in eukaryotes have been found in prokaryotes. Cytoskeletal elements play essential roles in cell division, protection, shape determination, and polarity determination in various prokaryotes.
Nutrient storage structures
Most bacteria do not live in environments that contain large amounts of nutrients at all times. To accommodate these transient levels of nutrients bacteria contain several different methods of nutrient storage in times of plenty for use in times of want. For example, many bacteria store excess carbon in the form of polyhydroxyalkanoates or glycogen. Some microbes store soluble nutrients such as nitrate in vacuoles. Sulfur is most often stored as elemental (S0) granules which can be deposited either intra- or extracellularly. Sulfur granules are especially common in bacteria that use hydrogen sulfide as an electron source. Most of the above-mentioned examples can be viewed using a microscope and are surrounded by a thin nonunit membrane to separate them from the cytoplasm.
Inclusions
Inclusions are considered to be nonliving components of the cell that do not possess metabolic activity and are not bounded by membranes. The most common inclusions are glycogen, lipid droplets, crystals, and pigments. Volutin granules are cytoplasmic inclusions of complexed inorganic polyphosphate. These granules are called metachromatic granules due to their displaying the metachromatic effect; they appear red or blue when stained with the blue dyes methylene blue or toluidine blue.
Gas vacuoles
Gas vacuoles are membrane-bound, spindle-shaped vesicles found in some planktonic bacteria and Cyanobacteria that provide buoyancy to these cells by decreasing their overall cell density. Positive buoyancy is needed to keep the cells in the upper reaches of the water column, so that they can continue to perform photosynthesis. They are made up of a shell of protein that has a highly hydrophobic inner surface, making it impermeable to water (and stopping water vapour from condensing inside) but permeable to most gases. Because the gas vesicle is a hollow cylinder, it is liable to collapse when the surrounding pressure increases. Natural selection has fine-tuned the structure of the gas vesicle to maximise its resistance to buckling, including an external strengthening protein, GvpC, rather like the green thread in a braided hosepipe. There is a simple relationship between the diameter of the gas vesicle and the pressure at which it will collapse: the wider the gas vesicle, the weaker it becomes. However, wider gas vesicles are more efficient, providing more buoyancy per unit of protein than narrow gas vesicles. Different species produce gas vesicles of different diameters, allowing them to colonise different depths of the water column (fast-growing, highly competitive species with wide gas vesicles in the topmost layers; slow-growing, dark-adapted species with strong narrow gas vesicles in the deeper layers). The diameter of the gas vesicle also helps determine which species survive in different bodies of water. Deep lakes that experience winter mixing expose the cells to the hydrostatic pressure generated by the full water column; this selects for species with narrower, stronger gas vesicles.
The cell regulates its height in the water column by synthesising gas vesicles. As the cell rises, it is able to increase its carbohydrate load through increased photosynthesis. If it rises too high, the cell will suffer photobleaching and possible death; however, the carbohydrate produced during photosynthesis increases the cell's density, causing it to sink. The daily cycle of carbohydrate build-up from photosynthesis and carbohydrate catabolism during dark hours is enough to fine-tune the cell's position in the water column, bringing it up toward the surface when its carbohydrate levels are low and it needs to photosynthesise, and allowing it to sink away from harmful UV radiation when its carbohydrate levels have been replenished. An extreme excess of carbohydrate causes a significant change in the internal pressure of the cell, which causes the gas vesicles to buckle and collapse and the cell to sink out.
Microcompartments
Bacterial microcompartments are widespread, organelle-like structures made of a protein shell that surrounds and encloses various enzymes. They provide a further level of organization: they are compartments within bacteria that are surrounded by polyhedral protein shells, rather than by lipid membranes. These "polyhedral organelles" localize and compartmentalize bacterial metabolism, a function performed by the membrane-bound organelles in eukaryotes.
Carboxysomes
Carboxysomes are bacterial microcompartments found in many autotrophic bacteria such as Cyanobacteria, Knallgasbacteria, Nitroso- and Nitrobacteria. They are proteinaceous structures resembling phage heads in their morphology and contain the enzymes of carbon dioxide fixation in these organisms (especially ribulose bisphosphate carboxylase/oxygenase, RuBisCO, and carbonic anhydrase). It is thought that the high local concentration of the enzymes along with the fast conversion of bicarbonate to carbon dioxide by carbonic anhydrase allows faster and more efficient carbon dioxide fixation than possible inside the cytoplasm. Similar structures are known to harbor the coenzyme B12-containing glycerol dehydratase, the key enzyme of glycerol fermentation to 1,3-propanediol, in some Enterobacteriaceae (e. g. Salmonella).
Magnetosomes
Magnetosomes are bacterial microcompartments found in magnetotactic bacteria that allow them to sense and align themselves along a magnetic field (magnetotaxis). The ecological role of magnetotaxis is unknown but is thought to be involved in the determination of optimal oxygen concentrations. Magnetosomes are composed of the mineral magnetite or greigite and are surrounded by a lipid bilayer membrane. The morphology of magnetosomes is species-specific.
Endospores
Perhaps the best known bacterial adaptation to stress is the formation of endospores. Endospores are bacterial survival structures that are highly resistant to many different types of chemical and environmental stresses and therefore enable the survival of bacteria in environments that would be lethal for these cells in their normal vegetative form. It has been proposed that endospore formation has allowed for the survival of some bacteria for hundreds of millions of years (e.g. in salt crystals) although these publications have been questioned. Endospore formation is limited to several genera of gram-positive bacteria such as Bacillus and Clostridium. It differs from reproductive spores in that only one spore is formed per cell resulting in no net gain in cell number upon endospore germination. The location of an endospore within a cell is species-specific and can be used to determine the identity of a bacterium. Dipicolinic acid is a chemical compound which composes 5% to 15% of the dry weight of bacterial spores and is implicated in being responsible for the heat resistance of endospores. Archaeologists have found viable endospores taken from the intestines of Egyptian mummies as well as from lake sediments in Northern Sweden estimated to be many thousands of years old.
Aldrin
Aldrin is an organochlorine insecticide that was widely used until the 1990s, when it was banned in most countries. Aldrin is a member of the so-called "classic organochlorines" (COC) group of pesticides. COCs enjoyed a very sharp rise in popularity during and after World War II. Other noteworthy examples of COCs include dieldrin and DDT. After research showed that organochlorines can be highly toxic to the ecosystem through bioaccumulation, most were banned from use. Before the ban, it was heavily used as a pesticide to treat seed and soil. Aldrin and related "cyclodiene" pesticides (a term for pesticides derived from Hexachlorocyclopentadiene) became notorious as persistent organic pollutants.
Structure and reactivity
Pure aldrin takes the form of a white crystalline powder. Though it is barely soluble in water (0.003% solubility), aldrin dissolves very well in organic solvents, such as ketones and paraffins. Aldrin decays very slowly once released into the environment. Though it is rapidly converted to dieldrin by plants and bacteria, dieldrin has the same toxic effects and slow decay as aldrin. Aldrin is easily transported through the air on dust particles. Aldrin does not react with mild acids or bases and is stable in an environment with a pH between 4 and 8. It is highly flammable when exposed to temperatures above 200 °C. In the presence of oxidizing agents, aldrin reacts with concentrated acids and phenols.
Synthesis
Aldrin is not formed in nature. It is synthesized by combining hexachlorocyclopentadiene with norbornadiene in a Diels-Alder reaction to give the adduct; it is named after the German chemist Kurt Alder, one of the coinventors of this kind of reaction. In 1967, the composition of technical-grade aldrin was reported to consist of 90.5% hexachlorohexahydrodimethanonaphthalene (HHDN).
Similarly, an isomer of aldrin known as isodrin is produced by the reaction of hexachloronorbornadiene with cyclopentadiene. Isodrin is also produced as a byproduct of aldrin synthesis, with technical-grade aldrin containing about 3.5% isodrin.
An estimated 270 million kilograms of aldrin and related cyclodiene pesticides were produced between 1946 and 1976. The estimated production volume of aldrin in the US peaked in the mid-1960s at about 18 million pounds a year and then declined.
Available forms
There are multiple available forms of aldrin. One of these is the isomer isodrin, which cannot be found in nature, but needs to be synthesized like aldrin. When aldrin enters the human body or the environment it is rapidly converted to dieldrin. Degradation by ultraviolet radiation or microbes can convert dieldrin to photodieldrin and aldrin to photoaldrin.
Mechanism of action
Even though many toxic effects of aldrin have been discovered, the exact mechanisms underlying the toxicity are yet to be determined. The only toxic aldrin-induced process that is largely understood is neurotoxicity.
Neurotoxicity
One of the effects that intoxication with aldrin gives rise to is neurotoxicity. Studies have shown that aldrin stimulates the central nervous system (CNS), which may cause hyperexcitation and seizures. This phenomenon exerts its effect through two different mechanisms.
One of the mechanisms relies on the ability of aldrin to inhibit brain calcium ATPases. These ion pumps relieve the nerve terminal of calcium by actively pumping it out. When aldrin inhibits these pumps, intracellular calcium levels rise, resulting in enhanced neurotransmitter release.
The second mechanism relies on aldrin's ability to block gamma-aminobutyric acid (GABA) activity. GABA is a major inhibitory neurotransmitter in the central nervous system. Aldrin induces neurotoxic effects by blocking the GABAA receptor-chloride channel complex. When this receptor is blocked, chloride is unable to move into the neuron, which prevents hyperpolarization of neuronal synapses. The synapses are therefore more likely to generate action potentials.
Metabolism
The metabolism of oral aldrin exposure has not been studied in humans. However, animal studies provide an extensive overview of the metabolism of aldrin, and these data can be extrapolated to humans.
Biotransformation of aldrin starts with epoxidation of aldrin by mixed-function oxidases (CYP-450), which forms dieldrin. This conversion happens mainly in the liver. Tissues with low CYP-450 expression use the prostaglandin endoperoxide synthase (PES) instead. This oxidative pathway bisdioxygenises the arachidonic acid to prostaglandin G2 (PGG2). Subsequently, PGG2 is reduced to prostaglandin H2 (PGH2) by hydroperoxidase.
Dieldrin can then be directly oxidized by cytochrome oxidases, which forms 9-hydroxydieldrin. An alternative for oxidation involves the opening of the epoxide ring by epoxide hydrases, which forms the product 6,7-trans-dihydroxydihydroaldrin. Both products can be conjugated to form 6,7-trans-dihydroxydihydroaldrin glucuronide and 9-hydroxydieldrin glucuronide, respectively. 6,7-trans-dihydroxydihydroaldrin can also be oxidized to form aldrin dicarboxylic acid.
Efficacy and side effects
Considering the toxicokinetics of aldrin in the environment, the efficacy of the compound has been assessed. In addition, the adverse effects observed after exposure to aldrin are described below, indicating the risks associated with the compound.
Efficacy
The efficacy of aldrin, used for example in the control of termites, has been examined to determine the maximum response when applied. In 1953, US researchers tested aldrin and dieldrin on terrain where rats were known to carry chiggers. Rats on dieldrin-treated terrain carried about 75 times fewer chiggers, and rats on aldrin-treated terrain about 25 times fewer. These results indicate high effectiveness, especially in comparison with other insecticides in use at the time, such as DDT, sulfur or lindane.
Adverse effects
Exposure of the environment to aldrin leads to the presence of the chemical compound in air, soil, and water. Aldrin is changed quickly to dieldrin, and that compound degrades slowly, which accounts for concentrations of the chemicals in the environment around the point of primary exposure and in plants. These concentrations can also be found in animals that eat contaminated plants or that live in the contaminated water. This biomagnification can lead to high concentrations in their fat.
There are some reported cases of workers who developed anemia after multiple dieldrin exposures. However, the main adverse effects of aldrin and dieldrin concern the central nervous system. Accumulated levels of dieldrin in the body were believed to lead to convulsions. Other reported symptoms included headaches, nausea and vomiting, anorexia, muscle twitching, myoclonic jerking, and EEG distortions. In all these cases, removal of the source of exposure to aldrin or dieldrin led to a rapid recovery.
Toxicity
The toxicity of aldrin and dieldrin has been determined from the results of several animal studies. No significant increase in deaths among workers exposed to aldrin has been reported, although death from anemia was reported in some cases after multiple exposures. In those cases, immunological tests linked an antigenic response to erythrocytes coated with dieldrin. Direct dose-response relations as a cause of death are yet to be examined.
From rat studies, the following minimal risk levels have been derived:
The minimal risk level at acute oral exposure to aldrin is 0.002 mg/kg/day.
The minimal risk level at intermediate exposure to dieldrin is 0.0001 mg/kg/day.
The minimal risk level at chronic exposure to aldrin is 0.00003 mg/kg/day.
The minimal risk level at chronic exposure to dieldrin is 0.00005 mg/kg/day.
In addition to these studies, breast cancer risk studies were performed, demonstrating a significantly increased breast cancer risk. Comparing blood concentrations with the number of lymph nodes involved and with tumor size, a 5-fold higher risk of death was determined for the highest quartile range in the research compared to the lower quartile range.
Effects on animals
Most of the animal studies on aldrin and dieldrin used rats. High doses of aldrin and dieldrin demonstrated neurotoxicity, while multiple studies also showed a unique sensitivity of the mouse liver to dieldrin-induced hepatocarcinogenicity. Furthermore, aldrin-treated rats demonstrated increased post-natal mortality, with adult rats showing greater susceptibility to the compounds than young rats.
Environmental impact and regulation
Like related polychlorinated pesticides, aldrin is highly lipophilic. Its solubility in water is only 0.027 mg/L, which exacerbates its persistence in the environment. It was banned by the Stockholm Convention on Persistent Organic Pollutants. In the U.S., aldrin was cancelled in 1974. The substance is banned from use for plant protection by the EU.
Safety and environmental aspects
Aldrin has an oral LD50 of 39 to 60 mg/kg in rats. For fish, however, it is extremely toxic, with an LC50 of 0.006–0.01 for trout and bluegill.
In the US, aldrin is considered a potential occupational carcinogen by the Occupational Safety and Health Administration and the National Institute for Occupational Safety and Health; these agencies have set an occupational exposure limit for dermal exposures at 0.25 mg/m3 over an eight-hour time-weighted average.
Further, an IDLH limit has been set at 25 mg/m3, based on acute toxicity data in humans to which subjects reacted with convulsions within 20 minutes of exposure.
It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities.
Rail freight transport
Rail freight transport is the use of railways and trains to transport cargo as opposed to human passengers.
A freight train, cargo train, or goods train is a group of freight cars (US) or goods wagons (International Union of Railways) hauled by one or more locomotives on a railway, transporting cargo all or some of the way between the shipper and the intended destination as part of the logistics chain. Trains may haul bulk material, intermodal containers, general freight or specialized freight in purpose-designed cars. Rail freight practices and economics vary by country and region.
When considered in terms of ton-miles or tonne-kilometers hauled, energy efficiency can be greater with rail transportation than with other means. Maximum economies are typically realized with bulk commodities (e.g., coal), especially when hauled over long distances. Moving goods by rail often involves transshipment costs, particularly when the shipper or receiver lacks direct rail access. These costs may exceed the cost of operating the train itself, a factor that practices such as containerization, trailer-on-flatcar or rolling highway aim to minimize.
Overview
Traditionally, large shippers built factories and warehouses near rail lines and had a section of track on their property called a siding where goods were loaded onto or unloaded from rail cars. Other shippers had their goods hauled (drayed) by wagon or truck to or from a goods station (freight station in US). Smaller locomotives transferred the rail cars from the sidings and goods stations to a classification yard, where each car was coupled to one of several long-distance trains being assembled there, depending on that car's destination. When long enough, or based on a schedule, each long-distance train was then dispatched to another classification yard. At the next classification yard, cars were re-sorted. Those destined for stations served by that yard were assigned to local trains for delivery. Others were reassembled into trains heading to classification yards closer to their final destination. A single car might be reclassified or switched in several yards before reaching its final destination, a process that made rail freight slow and increased costs. Because of this, freight rail operators have continually tried to reduce these costs by reducing or eliminating switching in classification yards through techniques such as unit trains and containerization, and in some countries these have completely replaced mixed freight trains. In many countries, railroads have been built to haul one commodity, such as coal or ore, from an inland point to a port.
Rail freight uses many types of goods wagon (UIC) or freight car (US). These include box cars (US) or covered wagons (UIC) for general merchandise, flat cars (US) or flat wagons (UIC) for heavy or bulky loads, well wagons or "low loader" wagons for transporting road vehicles; there are refrigerator vans for transporting food, simple types of open-topped wagons for transporting bulk material, such as minerals and coal, and tankers for transporting liquids and gases. Most coal and aggregates are moved in hopper wagons or gondolas (US) or open wagons (UIC) that can be filled and discharged rapidly, to enable efficient handling of the materials.
Rail transport is very energy-efficient and much more environmentally friendly than road transport. Compared to road transport, which employs trucks (lorries), rail transport allows goods that would otherwise require a number of trucks to be carried in a single shipment, saving a great deal in transportation costs. Rail freight transport also has very low external costs. Many governments have therefore been encouraging the switch of freight from trucks onto trains because of the environmental benefits it would bring. Railway transport and inland navigation (also known as 'inland waterway transport' (IWT) or 'inland shipping') are similarly environmentally friendly modes of transportation, and both form major parts of the 2019 European Green Deal.
In Europe (particularly Britain), many manufacturing towns developed before the railway. Many factories did not have direct rail access. This meant that freight had to be shipped through a goods station, sent by train and unloaded at another goods station for onward delivery to another factory. When lorries (trucks) replaced horses it was often economical and faster to make one movement by road. In the United States, particularly in the West and Midwest, towns developed with the railway and factories often had a direct rail connection. Despite the closure of many minor lines, carload shipping from one company to another by rail remains common.
Railroads were early users of automatic data processing equipment, starting at the turn of the twentieth century with punched cards and unit record equipment. Many rail systems have turned to computerized scheduling and optimization for trains which has reduced costs and helped add more train traffic to the rails.
Freight railroads' relationship with other modes of transportation varies widely. There is almost no interaction with airfreight, close cooperation with ocean-going freight and a mostly competitive relationship with long distance trucking and barge transport. Many businesses ship their products by rail if they are shipped long distance because it can be cheaper to ship in large quantities by rail than by truck; however barge shipping remains a viable competitor where water transport is available.
Freight trains are sometimes illegally boarded by individuals who do not have the money or the desire to travel legally, a practice referred to as "hopping". Most hoppers sneak into train yards and stow away in boxcars. Bolder hoppers will catch a train "on the fly", that is, as it is moving, leading to occasional fatalities, some of which go unrecorded. The act of leaving a town or area by hopping a freight train is sometimes referred to as "catching-out", as in catching a train out of town.
Bulk
Bulk cargo constitutes the majority of tonnage carried by most freight railroads. Bulk cargo is commodity cargo that is transported unpackaged in large quantities. This cargo is usually dropped or poured, with a spout or shovel bucket, as a liquid or solid, into a railroad car. Liquids, such as petroleum and chemicals, and compressed gases are carried by rail in tank cars.
Hopper cars are freight cars used to transport dry bulk commodities such as coal, ore, grain, track ballast, and the like. This type of car is distinguished from a gondola car (US) or open wagon (UIC) in that it has opening doors on the underside or on the sides to discharge its cargo. The development of the hopper car went along with the development of automated handling of such commodities, with automated loading and unloading facilities. There are two main types of hopper car: open and covered. Covered hopper cars are used for cargo that must be protected from the elements (chiefly rain), such as grain, sugar, and fertilizer. Open cars are used for commodities such as coal, which can get wet and dry out with less harmful effect. Hopper cars have been used by railways worldwide whenever automated cargo handling has been desired. Rotary car dumpers simply invert the car to unload it, and have become the preferred unloading technology, especially in North America; they permit the use of simpler, tougher, and more compact (because sloping ends are not required) gondola cars instead of hoppers.
Heavy-duty ore traffic
The heaviest trains in the world carry bulk traffic such as iron ore and coal. Loads can be 130 tonnes per wagon and tens of thousands of tonnes per train. The Daqin Railway transports more than 1 million tonnes of coal to the eastern seashore of China every day and, as of 2009, was the busiest freight line in the world. Such economies of scale drive down operating costs. Some freight trains can be over 7 km long.
Containerization
Containerization is a system of intermodal freight transport using standard shipping containers (also known as 'ISO containers' or 'isotainers') that can be loaded with cargo, sealed and placed onto container ships, railroad cars, and trucks. Containerization has revolutionized cargo shipping: approximately 90% of non-bulk cargo worldwide is moved by containers stacked on transport ships, and 26% of all container transshipment is carried out in China. Some 18 million total containers make over 200 million trips per year.
Use of the same basic sizes of containers across the globe has lessened the problems caused by incompatible rail gauge sizes in different countries by making transshipment between different gauge trains easier.
While containers typically travel many hundreds or even thousands of kilometers by rail, Swiss experience shows that, with properly coordinated logistics, it is possible to operate a viable intermodal (truck + rail) cargo transportation system even within a country as small as Switzerland.
Double-stack containerization
Most flatcars (flat wagons) cannot carry more than one standard container on top of another because of limited vertical clearance, even though they usually can carry the weight of two. Carrying half the possible weight is inefficient. However, if the rail line has been built with sufficient vertical clearance, a double-stack car can accept a container and still leave enough clearance for another container on top. Both China and India run electrified double-stack trains with overhead wiring.
In the United States, the Southern Pacific Railroad (SP), together with Malcom McLean, came up with the idea of the first double-stack intermodal car in 1977. SP then designed the first car with ACF Industries that same year. At first it was slow to become an industry standard; then in 1984 American President Lines started working with the SP, and that same year the first all "double stack" train left Los Angeles, California for South Kearny, New Jersey, under the name of "Stacktrain" rail service. Along the way the train transferred from the SP to Conrail. It saved shippers money and now accounts for almost 70 percent of intermodal freight transport shipments in the United States, in part due to the generous vertical clearances used by U.S. railroads. These lines are diesel-operated with no overhead wiring.
Double stacking is also used in Australia between Adelaide, Parkes, Perth and Darwin. These are diesel-only lines with no overhead wiring. Saudi Arabian Railways use double-stack in its Riyadh-Dammam corridor. Double stacking is used in India for selected freight-only lines.
Rolling highways and piggyback service
In some countries rolling highway, or rolling road, trains are used; trucks can drive straight onto the train and drive off again when the end destination is reached. A system like this is used on the Channel Tunnel between the United Kingdom and France, as well as on the Konkan Railway in India. In other countries, the tractor unit of each truck is not carried on the train, only the trailer. Piggyback trains are common in the United States, where they are also known as trailer on flat car or TOFC trains, but they have lost market share to containers (COFC), with longer, 53-foot containers frequently used for domestic shipments. There are also roadrailer vehicles, which have two sets of wheels, for use in a train, or as the trailer of a road vehicle.
Special cargo
Several types of cargo are not suited for containerization or bulk; these are transported in special cars custom designed for the cargo.
Automobiles are stacked in open or closed autoracks, the vehicles being driven on or off the carriers.
Coils of steel strip are transported in modified gondolas called coil cars.
Goods that require certain temperatures during transportation can be transported in refrigerator cars (reefers, US), or refrigerated vans (UIC), but refrigerated containers are becoming more dominant.
Center beam flat cars are used to carry lumber and other building supplies.
Extra heavy and oversized loads are carried in Schnabel cars.
Less than carload freight
Less-than-carload freight is any shipment that is less than a full boxcar or box motor load.
Historically in North America, trains might be classified as either way freight or through freight. A way freight generally carried less-than-carload shipments to/from a location, whose origin/destination was a rail terminal yard. This product sometimes arrived at/departed from that yard by means of a through freight.
At a minimum, a way freight comprised a locomotive and caboose, to which cars called pickups and setouts were added or dropped off along the route. For convenience, smaller consignments might be carried in the caboose, which prompted some railroads to define their cabooses as way cars, although the term equally applied to boxcars used for that purpose. Way stops might be industrial sidings, stations/flag stops, settlements, or even individual residences.
With the difficulty of maintaining an exact schedule, way freights yielded to scheduled passenger and through trains. They were often mixed trains that served isolated communities. Like passenger service generally, way freights and their smaller consignments became uneconomical. In North America, the latter ceased, and the public sector took over passenger transportation.
Regional differences
Railroads are subject to the network effect: the more points they connect to, the greater the value of the system as a whole. Early railroads were built to bring resources, such as coal, ores and agricultural products from inland locations to ports for export. In many parts of the world, particularly the southern hemisphere, that is still the main use of freight railroads. Greater connectivity opens the rail network to other freight uses including non-export traffic. Rail network connectivity is limited by a number of factors, including geographical barriers, such as oceans and mountains, technical incompatibilities, particularly different track gauges and railway couplers, and political conflicts. The largest rail networks are located in North America and Eurasia. Long distance freight trains are generally longer than passenger trains, with greater length improving efficiency. Maximum length varies widely by system. (See longest trains for train lengths in different countries.)
Many countries are moving to increase speed and volume of rail freight in an attempt to win markets over or to relieve overburdened roads and/or speed up shipping in the age of online shopping. In Japan, trends towards adding rail freight shipping are more due to availability of workers rather than other concerns.
Rail freight tonnage as a percent of total moved by country:
Russia: about 12% in 2016, up from 11%
Japan: 5% in 2017
Rail freight ton-mileage as a percent of total moved by country:
USA: 27.4% in 2020
China: 15.9% in 2022
EU28: more than 20% of all "inland traffic" in 2021
Eurasia
There are four major interconnecting rail networks on the Eurasian land mass, along with other smaller national networks.
Most countries in the European Union participate in an interconnected standard-gauge network. The United Kingdom is linked to this network via the Channel Tunnel. The Marmaray project connects Europe with eastern Turkey, Iran, and the Middle East via a rail tunnel under the Bosphorus. The 57-km Gotthard Base Tunnel improved north–south rail connections when it opened in 2016. Spain and Portugal are mostly broad gauge, though Spain has built some standard gauge lines that connect with the European high-speed passenger network. A variety of electrification and signaling systems are in use, though this is less of an issue for freight; however, clearances prevent double-stack service on most lines. Buffer-and-screw couplings are generally used between freight vehicles, although there are plans to develop an automatic coupler compatible with the Russian SA3. See Railway coupling conversion.
The countries of the former Soviet Union, along with Finland and Mongolia, participate in a Russian gauge-compatible network, using SA3 couplers. Major lines are electrified. Russia's Trans-Siberian Railroad connects Europe with Asia, but does not have the clearances needed to carry double-stack containers. Numerous connections are available between Russian-gauge countries with their standard-gauge neighbors in the west (throughout Europe) and south (to China, North Korea, and Iran via Turkmenistan). While the USSR had important railway connections to Turkey (from Armenia) and to Iran (from Azerbaijan's Nakhchivan enclave), these have been out of service since the early 1990s, since a number of frozen conflicts in the Caucasus region have forced the closing of the rail connections between Russia and Georgia via Abkhazia, between Armenia and Azerbaijan, and between Armenia and Turkey.
China has an extensive standard-gauge network. Its freight trains use Janney couplers. China's railways connect with the standard-gauge network of North Korea in the east, with the Russian-gauge network of Russia, Mongolia, and Kazakhstan in the north, and with the meter-gauge network of Vietnam in the south.
India and Pakistan operate entirely on broad gauge networks. Indo-Pakistani wars and conflicts currently restrict rail traffic between the two countries to two passenger lines. There are also links from India to Bangladesh and Nepal, and from Pakistan to Iran, where a new, but little-used, connection to the standard-gauge network is available at Zahedan.
The four major Eurasian networks link to neighboring countries and to each other at several break of gauge points. Containerization has facilitated greater movement between networks, including a Eurasian Land Bridge.
North America
Canada, Mexico and the United States are connected by an extensive, unified standard gauge rail network. The one notable exception is the isolated Alaska Railroad, which is connected to the main network by rail barge.
Due primarily to external factors such as geography and the commodity mix favoring commodities such as coal, the modal share of freight rail in North America is one of the highest worldwide.
Rail freight is well standardized in North America, with Janney couplers and compatible air brakes. The main variations are in loading gauge and maximum car weight. Most trackage is owned by private companies that also operate freight trains on those tracks. Since the Staggers Rail Act of 1980, the freight rail industry in the U.S. has been largely deregulated. Freight cars are routinely interchanged between carriers, as needed, and are identified by company reporting marks and serial numbers. Most have computer readable automatic equipment identification transponders. With isolated exceptions, freight trains in North America are hauled by diesel locomotives, even on the electrified Northeast Corridor.
Ongoing freight-oriented development includes upgrading more lines to carry heavier and taller loads, particularly for double-stack service, and building more efficient intermodal terminals and transload facilities for bulk cargo. Many railroads interchange in Chicago, and a number of improvements are underway or proposed to eliminate bottlenecks there. The U.S. Rail Safety Improvement Act of 2008 mandates eventual conversion to Positive Train Control signaling. In the 2010s, most North American Class I railroads have adopted some form of precision railroading.
Central America
The Guatemala railroad is currently inactive, preventing rail shipment south of Mexico. Panama has freight rail service, recently converted to standard gauge, that parallels the Panama Canal. A few other rail systems in Central America are still in operation, but most have closed. There has never been a rail line through Central America to South America.
South America
Brazil has a large rail network, mostly metre gauge, with some broad gauge. It runs some of the heaviest iron ore trains in the world on its metre gauge network.
Argentina has Indian gauge networks in the south, standard gauge in the east and metre gauge networks in the north. The metre gauge networks are connected at one point, but there has never been a broad gauge connection. (A metre-gauge connection between the two broad gauge networks, the Transandine Railway, was constructed but is not currently in service.)
| Technology | Rail and cable transport | null |
2234192 | https://en.wikipedia.org/wiki/Reduction%20potential | Reduction potential | Redox potential (also known as oxidation/reduction potential, ORP, pe, or Eh) is a measure of the tendency of a chemical species to acquire electrons from or lose electrons to an electrode and thereby be reduced or oxidised respectively. Redox potential is expressed in volts (V). Each species has its own intrinsic redox potential; for example, the more positive the reduction potential (reduction potential is more often used due to general formalism in electrochemistry), the greater the species' affinity for electrons and tendency to be reduced.
Measurement and interpretation
In aqueous solutions, redox potential is a measure of the tendency of the solution to either gain or lose electrons in a reaction. A solution with a higher (more positive) reduction potential than some other molecule will have a tendency to gain electrons from this molecule (i.e. to be reduced by oxidizing this other molecule) and a solution with a lower (more negative) reduction potential will have a tendency to lose electrons to other substances (i.e. to be oxidized by reducing the other substance). Because the absolute potentials are next to impossible to accurately measure, reduction potentials are defined relative to a reference electrode. Reduction potentials of aqueous solutions are determined by measuring the potential difference between an inert sensing electrode in contact with the solution and a stable reference electrode connected to the solution by a salt bridge.
The sensing electrode acts as a platform for electron transfer to or from the reference half cell; it is typically made of platinum, although gold and graphite can be used as well. The reference half cell consists of a redox standard of known potential. The standard hydrogen electrode (SHE) is the reference from which all standard redox potentials are determined, and has been assigned an arbitrary half cell potential of 0.0 V. However, it is fragile and impractical for routine laboratory use. Therefore, other more stable reference electrodes such as silver chloride and saturated calomel (SCE) are commonly used because of their more reliable performance.
Although measurement of the redox potential in aqueous solutions is relatively straightforward, many factors limit its interpretation, such as effects of solution temperature and pH, irreversible reactions, slow electrode kinetics, non-equilibrium, presence of multiple redox couples, electrode poisoning, small exchange currents, and inert redox couples. Consequently, practical measurements seldom correlate with calculated values. Nevertheless, reduction potential measurement has proven useful as an analytical tool in monitoring changes in a system rather than determining their absolute value (e.g. process control and titrations).
Explanation
Similar to how the concentration of hydrogen ions determines the acidity or pH of an aqueous solution, the tendency of electron transfer between a chemical species and an electrode determines the redox potential of an electrode couple. Like pH, redox potential represents how easily electrons are transferred to or from species in solution. Redox potential characterises the ability under the specific condition of a chemical species to lose or gain electrons instead of the amount of electrons available for oxidation or reduction.
The notion of pe is used with Pourbaix diagrams. pe is a dimensionless number and can easily be related to Eh by the following relationship:

    pe = Eh / (λ VT) = Eh / 0.05916 V   (at 25 °C)

where VT = RT/F is the thermal voltage, with R the gas constant (8.314 J K−1 mol−1), T the absolute temperature in kelvin (298.15 K = 25 °C = 77 °F), F the Faraday constant (96 485 coulomb/mol of electrons), and λ = ln(10) ≈ 2.3026.
In fact, pe is defined as the negative logarithm of the free electron concentration in solution, and is directly proportional to the redox potential. Sometimes pe is used as a unit of reduction potential instead of Eh, for example, in environmental chemistry. If one normalizes the pe of hydrogen to zero, one obtains the relation pe = 16.9 Eh at room temperature. This notion is useful for understanding redox potential, although the transfer of electrons, rather than the absolute concentration of free electrons in thermal equilibrium, is how one usually thinks of redox potential. Theoretically, however, the two approaches are equivalent.
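A minimal Python sketch (not part of the source article) of the pe–Eh conversion just described, using the constants defined above:

    import math

    R = 8.314462618      # gas constant, J/(mol K)
    F = 96485.33212      # Faraday constant, C/mol
    T = 298.15           # absolute temperature, K (25 degrees C)

    def pe_from_eh(eh_volts: float) -> float:
        """pe = F * Eh / (R * T * ln 10), i.e. Eh / 0.05916 V at 25 degrees C."""
        return F * eh_volts / (R * T * math.log(10))

    def eh_from_pe(pe: float) -> float:
        return pe * R * T * math.log(10) / F

    print(round(pe_from_eh(0.05916), 3))   # ~1.0; in general pe ~ 16.9 * Eh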
Conversely, one could define a potential corresponding to pH as a potential difference between a solute and pH neutral water, separated by porous membrane (that is permeable to hydrogen ions). Such potential differences actually do occur from differences in acidity on biological membranes. This potential (where pH neutral water is set to 0 V) is analogous with redox potential (where standardized hydrogen solution is set to 0 V), but instead of hydrogen ions, electrons are transferred across in the redox case. Both pH and redox potentials are properties of solutions, not of elements or chemical compounds themselves, and depend on concentrations, temperature etc.
The table below shows a few reduction potentials, which can be changed to oxidation potentials by reversing the sign. Reducers donate electrons to (or "reduce") oxidizing agents, which are said to "be reduced by" the reducer. The reducer is stronger when it has a more negative reduction potential and weaker when it has a more positive reduction potential. The more positive the reduction potential the greater the species' affinity for electrons and tendency to be reduced. The following table provides the reduction potentials of the indicated reducing agent at 25 °C. For example, among sodium (Na) metal, chromium (Cr) metal, cuprous (Cu+) ion and chloride (Cl−) ion, it is Na metal that is the strongest reducing agent while Cl− ion is the weakest; said differently, Na+ ion is the weakest oxidizing agent in this list while the Cl2 molecule is the strongest.
Some elements and compounds can be both reducing and oxidizing agents. Hydrogen gas is a reducing agent when it reacts with non-metals and an oxidizing agent when it reacts with metals.
Hydrogen (whose reduction potential is 0.0) acts as an oxidizing agent because it accepts an electron donation from the reducing agent lithium (whose reduction potential is −3.04), which causes Li to be oxidized and hydrogen to be reduced.
Hydrogen acts as a reducing agent because it donates its electrons to fluorine, which allows fluorine to be reduced.
Standard reduction potential
The standard reduction potential is measured under standard conditions: T = 298.15 K (25 °C, or 77 °F), a unity activity (a = 1) for each ion participating in the reaction, a partial pressure of 1 atm (1.013 bar) for each gas taking part in the reaction, and metals in their pure state. The standard reduction potential is defined relative to the standard hydrogen electrode (SHE) used as reference electrode, which is arbitrarily given a potential of 0.00 V. However, because these can also be referred to as "redox potentials", the terms "reduction potentials" and "oxidation potentials" are preferred by the IUPAC. The two may be explicitly distinguished by the symbols E°red and E°ox, with E°ox = −E°red.
Half cells
The relative reactivities of different half cells can be compared to predict the direction of electron flow. A higher reduction potential means there is a greater tendency for reduction to occur, while a lower one means there is a greater tendency for oxidation to occur.
Any system or environment that accepts electrons from a normal hydrogen electrode is a half cell that is defined as having a positive redox potential; any system donating electrons to the hydrogen electrode is defined as having a negative redox potential. Eh is usually expressed in volts (V) or millivolts (mV). A high positive Eh indicates an environment that favors oxidation reactions, such as one with free oxygen. A low negative Eh indicates a strongly reducing environment, such as one with free metals.
Sometimes when electrolysis is carried out in an aqueous solution, water, rather than the solute, is oxidized or reduced. For example, if an aqueous solution of NaCl is electrolyzed, water may be reduced at the cathode to produce H2(g) and OH− ions, instead of Na+ being reduced to Na(s), as occurs in the absence of water. It is the reduction potential of each species present that will determine which species will be oxidized or reduced.
Absolute reduction potentials can be determined if one knows the actual potential between electrode and electrolyte for any one reaction. Surface polarization interferes with measurements, but various sources give an estimated potential for the standard hydrogen electrode of 4.4 V to 4.6 V (the electrolyte being positive).
Half-cell equations can be combined if the one corresponding to oxidation is reversed so that each electron given by the reductant is accepted by the oxidant. In this way, the global combined equation no longer contains electrons.
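A classic worked example of this cancellation (the Daniell cell; standard textbook values, not taken from this article) can be written as a few lines of arithmetic:

    # Combining half-cells: reverse the oxidation half-reaction so that the
    # electrons given by the reductant equal those accepted by the oxidant.
    #   Zn2+ + 2 e- -> Zn   E0_red = -0.76 V   (reversed: Zn -> Zn2+ + 2 e-)
    #   Cu2+ + 2 e- -> Cu   E0_red = +0.34 V
    e_cell = 0.34 - (-0.76)
    print(e_cell)   # 1.10 V; overall: Zn + Cu2+ -> Zn2+ + Cu, no free electrons remain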
Nernst equation
The Eh and pH of a solution are related by the Nernst equation, as commonly represented by a Pourbaix diagram (Eh–pH plot). For a half cell equation, conventionally written as a reduction reaction (i.e., electrons accepted by an oxidant on the left side):

    a A + b B + h H+ + z e− ⇌ c C + d D

The half-cell standard reduction potential E°red is given by

    E°red (volt) = −ΔG° / (zF)

where ΔG° is the standard Gibbs free energy change, z is the number of electrons involved, and F is Faraday's constant. The Nernst equation relates pH and Eh:

    Eh = E°red − (0.05916 / z) log({C}c {D}d / ({A}a {B}b)) − 0.05916 (h/z) pH

where curly brackets indicate activities, and exponents are shown in the conventional manner. This equation is the equation of a straight line for Eh as a function of pH with a slope of −0.05916 (h/z) volt (pH has no units).
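As a small numerical illustration (not from the source; unit activities are assumed for everything except H+, so only the pH term survives), the line can be evaluated directly:

    # Nernst line Eh(pH) for a reduction consuming h protons and z electrons,
    # with all other activities set to 1 (a sketch, not a general solver).
    def eh_vs_ph(e0: float, h: int, z: int, ph: float) -> float:
        return e0 - 0.05916 * (h / z) * ph

    # Reduction of O2 (O2 + 4 H+ + 4 e- -> 2 H2O, E0 = +1.229 V):
    # slope is -0.05916 V per pH unit since h/z = 1.
    print(round(eh_vs_ph(1.229, h=4, z=4, ph=7.0), 3))   # 0.815 V at pH 7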
This equation predicts a lower Eh at higher pH values. This is observed for the reduction of O2 into H2O, or OH−, and for the reduction of H+ into H2:

    O2 + 4 H+ + 4 e− ⇌ 2 H2O
    2 H+ + 2 e− ⇌ H2
In most (if not all) of the reduction reactions involving oxyanions with a central redox-active atom, oxide anions (O2−) in excess are freed up when the central atom is reduced. The acid–base neutralization of each oxide ion consumes 2 H+ or one H2O molecule as follows:

    O2− + 2 H+ ⇌ H2O
    O2− + H2O ⇌ 2 OH−
This is why protons are always engaged as reagents on the left side of reduction reactions, as can generally be observed in the table of standard reduction potentials (data page).
If, in very rare instances of reduction reactions, H+ ions were products formed by the reduction reaction and thus appeared on the right side of the equation, the slope of the line would be inverted and thus positive (higher Eh at higher pH).
An example of that would be the reductive dissolution of magnetite (Fe3O4 ≈ Fe2O3·FeO, with 2 Fe(III) and 1 Fe(II)) to form 3 HFeO2− (in which dissolved iron, Fe(II), is divalent and much more soluble than Fe(III)), while releasing one H+:

    Fe3O4 + 2 H2O + 2 e− ⇌ 3 HFeO2− + H+

where:

    Eh = E°red − 0.0885 log[HFeO2−] + 0.0296 pH
Note that the slope 0.0296 of the line is −1/2 of the −0.05916 value above, since h/z = −1/2. Note also that the value −0.0885 corresponds to −0.05916 × 3/2.
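These two coefficients can be checked with a line of arithmetic (an illustrative sketch, not from the source):

    # Verify the quoted coefficients for the magnetite reduction:
    # h = -1 (one H+ is produced), z = 2 electrons, 3 HFeO2- formed.
    v = 0.05916
    print(round(-v * (-1) / 2, 4))   # 0.0296: positive pH slope (H+ is a product)
    print(round(v * 3 / 2, 4))       # 0.0887, matching the quoted 0.0885 magnitude up to rounding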
Biochemistry
Many enzymatic reactions are oxidation–reduction reactions, in which one compound is oxidized and another compound is reduced. The ability of an organism to carry out oxidation–reduction reactions depends on the oxidation–reduction state of the environment, or its reduction potential ().
Strictly aerobic microorganisms are generally active at positive values, whereas strict anaerobes are generally active at negative values. Redox affects the solubility of nutrients, especially metal ions.
There are organisms that can adjust their metabolism to their environment, such as facultative anaerobes. Facultative anaerobes can be active at positive Eh values, and at negative Eh values in the presence of oxygen-bearing inorganic compounds, such as nitrates and sulfates.
In biochemistry, apparent standard reduction potentials, or formal potentials (E°′, noted with a prime mark in superscript), calculated at pH 7, closer to the pH of biological and intra-cellular fluids, are used to more easily assess whether a given biochemical redox reaction is possible. They must not be confused with the common standard reduction potentials determined under standard conditions (T = 298.15 K; P = 1 atm) with the concentration of each dissolved species being taken as 1 M, and thus pH = 0.
Environmental chemistry
In the field of environmental chemistry, the reduction potential is used to determine if oxidizing or reducing conditions are prevalent in water or soil, and to predict the states of different chemical species in the water, such as dissolved metals. pe values in water range from −12 to 25, the levels at which the water itself becomes reduced or oxidized, respectively.
The reduction potentials in natural systems often lie comparatively near one of the boundaries of the stability region of water. Aerated surface water, rivers, lakes, oceans, rainwater and acid mine water, usually have oxidizing conditions (positive potentials). In places with limitations in air supply, such as submerged soils, swamps and marine sediments, reducing conditions (negative potentials) are the norm. Intermediate values are rare and usually a temporary condition found in systems moving to higher or lower pe values.
In environmental situations, it is common to have complex non-equilibrium conditions between a large number of species, meaning that it is often not possible to make accurate and precise measurements of the reduction potential. However, it is usually possible to obtain an approximate value and define the conditions as being in the oxidizing or reducing regime.
In the soil there are two main redox constituents: 1) inorganic redox systems (mainly oxidized/reduced compounds of Fe and Mn), with measurement in water extracts; 2) natural soil samples with all microbial and root components, with measurement by a direct method.
Water quality
The oxidation-reduction potential (ORP) can be used in systems monitoring water quality, with the advantage of a single-value measure for the disinfection potential, showing the effective activity of the disinfectant rather than the applied dose. For example, E. coli, Salmonella, Listeria and other pathogens have survival times of less than 30 seconds when the ORP is above 665 mV, compared to more than 300 seconds when the ORP is below 485 mV.
A study was conducted comparing traditional parts per million (ppm) chlorination readings and ORP in Hennepin County, Minnesota. The results of this study present arguments in favor of including ORP above 650 mV in local health regulation codes.
Geochemistry and mineralogy
Eh–pH (Pourbaix) diagrams are commonly used in mining and geology for assessment of the stability fields of minerals and dissolved species. Under the conditions where a mineral (solid) phase is predicted to be the most stable form of an element, these diagrams show that mineral. As the predicted results are all from thermodynamic (at equilibrium state) evaluations, these diagrams should be used with caution. Although the formation of a mineral or its dissolution may be predicted to occur under a set of conditions, the process may practically be negligible because its rate is too slow. Consequently, kinetic evaluations at the same time are necessary. Nevertheless, the equilibrium conditions can be used to evaluate the direction of spontaneous changes and the magnitude of the driving force behind them.
| Physical sciences | Electrochemistry | Chemistry |
2234333 | https://en.wikipedia.org/wiki/Data%20%28computer%20science%29 | Data (computer science) | In computer science, data (treated as singular, plural, or as a mass noun) is any sequence of one or more symbols; datum is a single symbol of data. Data requires interpretation to become information. Digital data is data that is represented using the binary number system of ones (1) and zeros (0), instead of analog representation. In modern (post-1960) computer systems, all data is digital.
Data exists in three states: data at rest, data in transit and data in use. Data within a computer, in most cases, moves as parallel data. Data moving to or from a computer, in most cases, moves as serial data. Data sourced from an analog device, such as a temperature sensor, may be converted to digital using an analog-to-digital converter. Data representing quantities, characters, or symbols on which operations are performed by a computer are stored and recorded on magnetic, optical, electronic, or mechanical recording media, and transmitted in the form of digital electrical or optical signals. Data pass in and out of computers via peripheral devices.
Physical computer memory elements consist of an address and a byte/word of data storage. Digital data are often stored in relational databases, like tables or SQL databases, and can generally be represented as abstract key/value pairs. Data can be organized in many different types of data structures, including arrays, graphs, and objects. Data structures can store data of many different types, including numbers, strings and even other data structures.
Characteristics
Metadata helps translate data to information. Metadata is data about the data. Metadata may be implied, specified or given.
Data relating to physical events or processes will have a temporal component. This temporal component may be implied. This is the case when a device such as a temperature logger receives data from a temperature sensor. When the temperature is received it is assumed that the data has a temporal reference of now. So the device records the date, time and temperature together. When the data logger communicates temperatures, it must also report the date and time as metadata for each temperature reading.
Fundamentally, computers follow a sequence of instructions they are given in the form of data. A set of instructions to perform a given task (or tasks) is called a program. A program is data in the form of coded instructions to control the operation of a computer or other machine. In the nominal case, the program, as executed by the computer, will consist of machine code. The elements of storage manipulated by the program, but not actually executed by the central processing unit (CPU), are also data. At its most essential, a single datum is a value stored at a specific location. Therefore, it is possible for computer programs to operate on other computer programs, by manipulating their programmatic data.
To store data bytes in a file, they have to be serialized in a file format. Typically, programs are stored in special file types, different from those used for other data. Executable files contain programs; all other files are data files. However, executable files may also contain data which is built into the program. In particular, some executable files have a data segment, which nominally contains constants and initial values for variables, both of which can be considered data.
The line between program and data can become blurry. An interpreter, for example, is a program. The input data to an interpreter is itself a program, just not one expressed in native machine language. In many cases, the interpreted program will be a human-readable text file, which is manipulated with a text editor program. Metaprogramming similarly involves programs manipulating other programs as data. Programs like compilers, linkers, debuggers, program updaters, virus scanners and such use other programs as their data.
For example, a user might first instruct the operating system to load a word processor program from one file, and then use the running program to open and edit a document stored in another file. In this example, the document would be considered data. If the word processor also features a spell checker, then the dictionary (word list) for the spell checker would also be considered data. The algorithms used by the spell checker to suggest corrections would be either machine code data or text in some interpretable programming language.
In an alternate usage, binary files (which are not human-readable) are sometimes called data as distinguished from human-readable text.
The total amount of digital data in 2007 was estimated to be 281 billion gigabytes (281 exabytes).
Data keys and values, structures and persistence
Keys in data provide the context for values. Regardless of the structure of data, there is always a key component present. Keys in data and data-structures are essential for giving meaning to data values. Without a key that is directly or indirectly associated with a value, or collection of values in a structure, the values become meaningless and cease to be data. That is to say, there has to be a key component linked to a value component in order for it to be considered data.
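A minimal illustration in Python (the field names are invented for the example): the bare value 21.5 means nothing until keys give it context.

    # Keys turn a naked value into data by supplying its context.
    reading = {"sensor": "T-104", "unit": "celsius", "value": 21.5}
    print(reading["unit"], reading["value"])   # celsius 21.5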
Data can be represented in computers in multiple ways, as per the following examples:
RAM
Random access memory (RAM) holds data that the CPU has direct access to. A CPU may only manipulate data within its processor registers or memory. This is as opposed to data storage, where the CPU must direct the transfer of data between the storage device (disk, tape...) and memory. RAM is an array of linear contiguous locations that a processor may read or write by providing an address for the read or write operation. The processor may operate on any location in memory at any time in any order. In RAM the smallest element of data is the binary bit. The capabilities and limitations of accessing RAM are processor specific. In general main memory is arranged as an array of locations beginning at address 0 (hexadecimal 0). Each location can store usually 8 or 32 bits depending on the computer architecture.
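A toy model of this addressing scheme, sketched in Python (a bytearray standing in for a byte-addressable memory):

    ram = bytearray(16)   # addresses 0..15, one byte per location
    ram[0x0A] = 0xFF      # write the byte 255 at address 10
    print(ram[0x0A])      # read it back by address: 255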
Keys
Data keys need not be a direct hardware address in memory. Indirect, abstract and logical key codes can be stored in association with values to form a data structure. Data structures have predetermined offsets (or links or paths) from the start of the structure, in which data values are stored. Therefore, the data key consists of the key to the structure plus the offset (or links or paths) into the structure. When such a structure is repeated, storing variations of the data values and the data keys within the same repeating structure, the result can be considered to resemble a table, in which each element of the repeating structure is considered to be a column and each repetition of the structure is considered as a row of the table. In such an organization of data, the data key is usually a value in one (or a composite of the values in several) of the columns.
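The structure-key-plus-offset idea can be sketched as follows (the types and field names are hypothetical):

    from dataclasses import dataclass

    @dataclass
    class Reading:            # the repeating structure: one "row"
        sensor_id: str        # the key column
        celsius: float        # a value at a fixed offset/field within the row

    table = [Reading("T-104", 21.5), Reading("T-105", 19.8)]
    # data key = key column value + field name (the "offset" into the structure)
    by_key = {row.sensor_id: row.celsius for row in table}
    print(by_key["T-105"])    # 19.8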
Organised recurring data structures
The tabular view of repeating data structures is only one of many possibilities. Repeating data structures can be organised hierarchically, such that nodes are linked to each other in a cascade of parent-child relationships. Values and potentially more complex data-structures are linked to the nodes. Thus the nodal hierarchy provides the key for addressing the data structures associated with the nodes. This representation can be thought of as an inverted tree. Modern computer operating system file systems are a common example; and XML is another.
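A nested-dictionary sketch of such a nodal hierarchy (the paths are invented for the example); the chain of keys plays the role of a file-system path or an XML element path:

    tree = {"invoices": {"2024": {"q1.csv": b"..."}}, "logs": {}}
    print(tree["invoices"]["2024"]["q1.csv"])   # value addressed by its key path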
Sorted or ordered data
Data has some inherent features when it is sorted on a key. All the values for subsets of the key appear together. When passing sequentially through groups of the data with the same key, or a subset of the key changes, this is referred to in data processing circles as a break, or a control break. It particularly facilitates the aggregation of data values on subsets of a key.
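A control break is easy to sketch in Python: with the rows already sorted on the key, a subtotal is emitted each time the key changes.

    from itertools import groupby

    rows = [("A", 1), ("A", 2), ("B", 5)]             # already sorted on the key
    for key, group in groupby(rows, key=lambda r: r[0]):
        print(key, sum(value for _, value in group))  # A 3, then B 5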
Peripheral storage
Until the advent of bulk non-volatile memory like flash, persistent data storage was traditionally achieved by writing the data to external block devices like magnetic tape and disk drives. These devices typically seek to a location on the magnetic media and then read or write blocks of data of a predetermined size. In this case, the seek location on the media is the data key and the blocks are the data values. Early file systems that used raw disk data, and disc operating systems, reserved contiguous blocks on the disc drive for data files. In those systems, the files could be filled up, running out of data space before all the data had been written to them. Thus much unused data space was reserved unproductively to ensure adequate free space for each file. Later file systems introduced partitions. They reserved blocks of disc data space for partitions and used the allocated blocks more economically, by dynamically assigning blocks of a partition to a file as needed. To achieve this, the file system had to keep track of which blocks were used or unused by data files in a catalog or file allocation table. Though this made better use of the disc data space, it resulted in fragmentation of files across the disc, and a concomitant performance overhead due to additional seek time to read the data. Modern file systems reorganize fragmented files dynamically to optimize file access times. Further developments in file systems resulted in virtualization of disc drives, i.e. where a logical drive can be defined as partitions from a number of physical drives.
Indexed data
Retrieving a small subset of data from a much larger set may imply inefficiently searching through the data sequentially. Indexes are a way to copy out keys and location addresses from data structures in files, tables and data sets, then organize them using inverted tree structures to reduce the time taken to retrieve a subset of the original data. In order to do this, the key of the subset of data to be retrieved must be known before retrieval begins. The most popular indexes are the B-tree and the dynamic hash key indexing methods. Indexing is overhead for filing and retrieving data. There are other ways of organizing indexes, e.g. sorting the keys and using a binary search algorithm.
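A sorted-key index searched with binary search, sketched in Python (keys and addresses are invented):

    import bisect

    # (key, location) pairs copied out of the data set, then sorted on key.
    index = sorted([("cherry", 7), ("apple", 0), ("banana", 3)])
    keys = [k for k, _ in index]

    def locate(key):
        i = bisect.bisect_left(keys, key)   # O(log n) binary search
        if i < len(keys) and keys[i] == key:
            return index[i][1]              # block/record address for the key
        return None

    print(locate("banana"))   # 3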
Abstraction and indirection
Object-oriented programming uses two basic concepts for understanding data and software:
The taxonomic rank-structure of classes, which is an example of a hierarchical data structure; and
at run time, the creation of references to in-memory data-structures of objects that have been instantiated from a class library.
It is only after instantiation that an object of a specified class exists. After an object's reference is cleared, the object also ceases to exist. The memory locations where the object's data was stored are garbage and are reclassified as unused memory available for reuse.
Database data
The advent of databases introduced a further layer of abstraction for persistent data storage. Databases use metadata, and a structured query language protocol between client and server systems, communicating over a computer network, using a two phase commit logging system to ensure transactional completeness, when saving data.
Parallel distributed data processing
Modern scalable and high-performance data persistence technologies, such as Apache Hadoop, rely on massively parallel distributed data processing across many commodity computers on a high bandwidth network. In such systems, the data is distributed across multiple computers and therefore any particular computer in the system must be represented in the key of the data, either directly, or indirectly. This enables the differentiation between two identical sets of data, each being processed on a different computer at the same time.
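The idea that the owning computer is derivable from the key itself can be sketched with a stable hash (the node names are hypothetical; real systems such as Hadoop use their own partitioners):

    import zlib

    NODES = ["node-0", "node-1", "node-2"]

    def owner(key: str) -> str:
        # A stable hash (unlike Python's per-process str hash) ties each
        # key to exactly one node, so the node is implicit in the key.
        return NODES[zlib.crc32(key.encode()) % len(NODES)]

    print(owner("customer:42"))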
| Technology | Basics_3 | null |
2235522 | https://en.wikipedia.org/wiki/Octahedral%20molecular%20geometry | Octahedral molecular geometry | In chemistry, octahedral molecular geometry, also called square bipyramidal, describes the shape of compounds with six atoms or groups of atoms or ligands symmetrically arranged around a central atom, defining the vertices of an octahedron. The octahedron has eight faces, hence the prefix octa. The octahedron is one of the Platonic solids, although octahedral molecules typically have an atom in their centre and no bonds between the ligand atoms. A perfect octahedron belongs to the point group Oh. Examples of octahedral compounds are sulfur hexafluoride SF6 and molybdenum hexacarbonyl Mo(CO)6. The term "octahedral" is used somewhat loosely by chemists, focusing on the geometry of the bonds to the central atom and not considering differences among the ligands themselves. For example, [Co(NH3)6]3+, which is not octahedral in the mathematical sense due to the orientation of the bonds, is referred to as octahedral.
The concept of octahedral coordination geometry was developed by Alfred Werner to explain the stoichiometries and isomerism in coordination compounds. His insight allowed chemists to rationalize the number of isomers of coordination compounds. Octahedral transition-metal complexes containing amines and simple anions are often referred to as Werner-type complexes.
Isomerism in octahedral complexes
When two or more types of ligands (La, Lb, ...) are coordinated to an octahedral metal centre (M), the complex can exist as isomers. The naming system for these isomers depends upon the number and arrangement of different ligands.
cis and trans
For MLa4Lb2, two isomers exist. These isomers of MLa4Lb2 are cis, if the Lb ligands are mutually adjacent, and trans, if the Lb groups are situated 180° to each other. It was the analysis of such complexes that led Alfred Werner to the 1913 Nobel Prize–winning postulation of octahedral complexes.
Facial and meridional isomers
For MLa3Lb3, two isomers are possible - a facial isomer (fac) in which each set of three identical ligands occupies one face of the octahedron surrounding the metal atom, so that any two of these three ligands are mutually cis, and a meridional isomer (mer) in which each set of three identical ligands occupies a plane passing through the metal atom.
Δ vs Λ isomers
Complexes with three bidentate ligands or two cis bidentate ligands can exist as enantiomeric pairs. Examples are shown below.
Other
For MLa2Lb2Lc2, a total of five geometric isomers and six stereoisomers are possible.
One isomer in which all three pairs of identical ligands are trans
Three isomers in which one pair of identical ligands (La or Lb or Lc) is trans while the other two pairs of ligands are mutually cis.
Two enantiomers, forming one enantiomeric pair, in which all three pairs of identical ligands are cis. These are equivalent to the Δ vs Λ isomers mentioned above.
The number of possible isomers can reach 30 for an octahedral complex with six different ligands (in contrast, only two stereoisomers are possible for a tetrahedral complex with four different ligands). The following table lists all possible combinations for monodentate ligands:
Thus, all 15 diastereomers of MLaLbLcLdLeLf are chiral, whereas for MLa2LbLcLdLe, six diastereomers are chiral and three are not (the ones where the La ligands are trans). One can see that octahedral coordination allows much greater complexity than the tetrahedron that dominates organic chemistry. The tetrahedron MLaLbLcLd exists as a single enantiomeric pair. To generate two diastereomers in an organic compound, at least two carbon centers are required.
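These counts can be verified by brute force. The Python sketch below (not from the source) labels the six octahedral vertices, generates the 24 proper rotations from two 90° turns, and counts ligand arrangements up to rotation; it reproduces 30 for six different ligands, as well as the cis/trans and fac/mer pairs.

    from itertools import permutations

    # Vertices 0..5 = +x, -x, +y, -y, +z, -z; a rotation is a permutation g
    # sending vertex i to vertex g[i].
    R_Z = (2, 3, 1, 0, 4, 5)   # 90 deg about z: +x -> +y -> -x -> -y -> +x
    R_X = (0, 1, 4, 5, 3, 2)   # 90 deg about x: +y -> +z -> -y -> -z -> +y

    def compose(p, q):          # apply p, then q
        return tuple(q[p[i]] for i in range(6))

    group = {tuple(range(6))}   # close the generators into the full group
    frontier = [tuple(range(6))]
    while frontier:
        g = frontier.pop()
        for gen in (R_Z, R_X):
            h = compose(g, gen)
            if h not in group:
                group.add(h)
                frontier.append(h)
    assert len(group) == 24     # the chiral octahedral rotation group

    def distinct_isomers(ligands):
        seen, count = set(), 0
        for arrangement in set(permutations(ligands)):
            if arrangement in seen:
                continue
            count += 1
            for g in group:     # mark the whole rotation orbit as seen;
                # the label at vertex v moves to vertex g[v] under g
                seen.add(tuple(arrangement[g.index(w)] for w in range(6)))
        return count

    print(distinct_isomers("abcdef"))  # 30 stereoisomers, six different ligands
    print(distinct_isomers("aabbbb"))  # 2: cis and trans
    print(distinct_isomers("aaabbb"))  # 2: fac and mer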
Deviations from ideal symmetry
Jahn–Teller effect
The term can also refer to octahedral complexes distorted by the Jahn–Teller effect, a common phenomenon encountered in coordination chemistry. This reduces the symmetry of the molecule from Oh to D4h and is known as a tetragonal distortion.
Distorted octahedral geometry
Some molecules, such as XeF6 or IF6−, have a lone pair that distorts the symmetry of the molecule from Oh to C3v. The specific geometry is known as a monocapped octahedron, since it is derived from the octahedron by placing the lone pair over the centre of one triangular face of the octahedron as a "cap" (and shifting the positions of the other six atoms to accommodate it). These both represent a divergence from the geometry predicted by VSEPR, which for AX6E1 predicts a pentagonal pyramidal shape.
Bioctahedral structures
Pairs of octahedra can be fused in a way that preserves the octahedral coordination geometry by replacing terminal ligands with bridging ligands. Two motifs for fusing octahedra are common: edge-sharing and face-sharing. Edge- and face-shared bioctahedra have the formulas M2L8(μ-L)2 and M2L6(μ-L)3, respectively. Polymeric versions of the same linking pattern give the stoichiometries [ML2(μ-L)2]∞ and [M(μ-L)3]∞, respectively.
The sharing of an edge or a face of an octahedron gives a structure called bioctahedral. Many metal pentahalide and pentaalkoxide compounds exist in solution and the solid with bioctahedral structures. One example is niobium pentachloride. Metal tetrahalides often exist as polymers with edge-sharing octahedra. Zirconium tetrachloride is an example. Compounds with face-sharing octahedral chains include MoBr3, RuBr3, and TlBr3.
Trigonal prismatic geometry
For compounds with the formula MX6, the chief alternative to octahedral geometry is a trigonal prismatic geometry, which has symmetry D3h. In this geometry, the six ligands are also equivalent. There are also distorted trigonal prisms, with C3v symmetry; a prominent example is W(CH3)6. The interconversion of Δ- and Λ-complexes, which is usually slow, is proposed to proceed via a trigonal prismatic intermediate, a process called the "Bailar twist". An alternative pathway for the racemization of these same complexes is the Ray–Dutt twist.
Splitting of d-orbital energies
For a free ion, e.g. gaseous Ni2+ or Mo0, the d-orbitals are equal in energy; that is, they are "degenerate". In an octahedral complex, this degeneracy is lifted. The dz2 and dx2−y2 orbitals, the so-called eg set, which are aimed directly at the ligands, are destabilized. On the other hand, the dxz, dxy, and dyz orbitals, the so-called t2g set, are stabilized. The labels t2g and eg refer to irreducible representations, which describe the symmetry properties of these orbitals. The energy gap separating these two sets is the basis of crystal field theory and the more comprehensive ligand field theory. The loss of degeneracy upon the formation of an octahedral complex from a free ion is called crystal field splitting or ligand field splitting. The energy gap is labeled Δo, which varies according to the number and nature of the ligands. If the symmetry of the complex is lower than octahedral, the eg and t2g levels can split further. For example, the t2g and eg sets split further in trans-MLa4Lb2.
Ligand strength has the following order for these electron donors:
weak: iodine < bromine < fluorine < acetate < oxalate < water < pyridine < cyanide :strong
So called "weak field ligands" give rise to small Δo and absorb light at longer wavelengths.
Reactions
Given that a virtually uncountable variety of octahedral complexes exist, it is not surprising that a wide variety of reactions have been described. These reactions can be classified as follows:
Ligand substitution reactions (via a variety of mechanisms)
Ligand addition reactions, including among many, protonation
Redox reactions (where electrons are gained or lost)
Rearrangements where the relative stereochemistry of the ligand changes within the coordination sphere.
Many reactions of octahedral transition metal complexes occur in water. When an anionic ligand replaces a coordinated water molecule the reaction is called an anation. The reverse reaction, water replacing an anionic ligand, is called aquation. For example, [CoCl(NH3)5]2+ slowly yields [Co(NH3)5(H2O)]3+ in water, especially in the presence of acid or base. Addition of concentrated HCl converts the aquo complex back to the chloride, via an anation process.
| Physical sciences | Bond structure | Chemistry |
2236780 | https://en.wikipedia.org/wiki/Aedes%20aegypti | Aedes aegypti | Aedes aegypti (from Greek 'hateful' and Latin 'of Egypt'), the yellow fever mosquito, is a mosquito that can spread dengue fever, chikungunya, Zika fever, Mayaro and yellow fever viruses, and other disease agents. The mosquito can be recognized by black and white markings on its legs and a marking in the form of a lyre on the upper surface of its thorax. This mosquito originated in Africa, but is now found in tropical, subtropical and temperate regions throughout the world.
Biology
Aedes aegypti is a dark mosquito which can be recognized by white markings on its legs and a marking in the form of a lyre on the upper surface of its thorax. Females are larger than males. Microscopically, females possess small palps tipped with silver or white scales, and their antennae have sparse short hairs, whereas those of males are feathery. Aedes aegypti can be confused with Aedes albopictus without a magnifying glass: the latter has a white stripe on the top of the mid thorax.
Males live off fruit and only the female bites for blood, which she needs to mature her eggs. To find a host, she is attracted to chemical compounds emitted by mammals, including ammonia, carbon dioxide, lactic acid, and octenol. Scientists at The United States Department of Agriculture (USDA) Agricultural Research Service studied the specific chemical structure of octenol to better understand why this chemical attracts the mosquito to its host and found the mosquito has a preference for "right-handed" (dextrorotatory) octenol molecules. The preference for biting humans is dependent on expression of the odorant receptor AaegOr4.
The white eggs are laid separately into water and not together, unlike most other mosquitoes, and soon turn black. The larvae feed on bacteria, growing over a period of weeks until they reach the pupa stage.
The lifespan of an adult Ae. aegypti is two to four weeks depending on conditions, but the eggs can be viable for over a year in a dry state, which allows the mosquito to re-emerge after a cold winter or dry spell.
Hosts
Mammalian hosts include domesticated, feral and wild horses, and equids more generally.
As of 2009 birds were found to be the best food supply for Ae. aegypti among all taxa.
Distribution
Aedes aegypti originated in Africa and was spread to the New World through the slave trade, but is now found in tropical, subtropical and temperate regions throughout the world.
Ae. aegypti distribution has increased in the past two to three decades worldwide, and it is considered to be among the most widespread mosquito species.
In 2016, Zika virus-capable mosquito populations were found adapting for persistence in warm temperate climates. Such a population has been identified in parts of Washington, DC, and genetic evidence suggests they survived at least the last four winters in the region. One of the study researchers noted, "...some mosquito species are finding ways to survive in normally restrictive environments by taking advantage of underground refugia".
As the world's climate becomes warmer, the ranges of Aedes aegypti and of a hardier species originating in Asia, the tiger mosquito Aedes albopictus, which can expand into relatively cooler climates, will inexorably spread north and south. Sadie Ryan of the University of Florida was the lead author of a 2019 study that estimated the vulnerability of naïve populations in geographic regions that currently do not harbor the vectors, e.g., for Zika in the Old World. Ryan's co-author, Georgetown University's Colin Carlson, remarked, "Plain and simple, climate change is going to kill a lot of people." As of 2020, the Northern Territory Government of Australia and the Darwin City Council have recommended that tropical cities initiate rectification programs to rid their cities of potential mosquito-breeding stormwater sumps. A 2019 study found that accelerating urbanization and human movement would also contribute to the spread of Aedes mosquitoes.
In continental Europe, Aedes aegypti is not established but it has been found in localities close to Europe such as the Asian part of Turkey. However, a single adult female specimen was found in Marseille (Southern France) in 2018. On the basis of a genetic study and an analysis of the movements of commercial ships, the origin of the specimen could be traced as coming from Cameroon, in Central Africa.
Genomics
In 2007, the genome of Aedes aegypti was published, after it had been sequenced and analyzed by a consortium including scientists at The Institute for Genomic Research (now part of the J. Craig Venter Institute), the European Bioinformatics Institute, the Broad Institute, and the University of Notre Dame. The effort in sequencing its DNA was intended to provide new avenues for research into insecticides and possible genetic modification to prevent the spread of virus. This was the second mosquito species to have its genome sequenced in full (the first was Anopheles gambiae). The published data included the 1.38 billion base pairs containing the insect's estimated 15,419 protein-encoding genes. The sequence indicates the species diverged from Drosophila melanogaster (the common fruit fly) about 250 million years ago, and that Anopheles gambiae and this species diverged about 150 million years ago. Matthews et al. (2018) find A. aegypti to carry a large and diverse number of transposable elements. Their analysis suggests this is common to all mosquitoes.
Vector of disease
Aedes aegypti is a vector for transmitting numerous pathogens. According to the Walter Reed Biosystematics Unit, as of 2022 it is associated with the following 54 viruses and 2 species of Plasmodium:
Aino virus (AINOV), African horse sickness virus (AHSV), Bozo virus (BOZOV), Bussuquara virus (BSQV), Bunyamwera virus (BUNV), Catu virus (CATUV), Chikungunya virus (CHIKV), Chandipura vesiculovirus (CHPV), Cypovirus (unnamed), Cache Valley virus (CVV), Dengue virus (DENV), Eastern Equine Encephalitis virus (EEEV), Epizootic hemorrhagic disease virus (EHDV), Guaroa virus (GROV), Hart Park virus (HPV), Ilheus virus (ILHV), Irituia virus (IRIV), Israel Turkey Meningoencephalitis virus (ITV), Japanaut virus (JAPV), Joinjakaka (JOIV), Japanese encephalitis virus (JBEV), Ketapang virus (KETV), Kunjin virus (KUNV), La Crosse virus (LACV), Mayaro virus (MAYV), Marburg virus (MBGV), Marco virus (MCOV), Melao virus (MELV), Marituba virus (MTBV), Mount Elgon bat virus (MEBV), Mucambo virus (MUCV), Murray Valley Encephalitis virus (MVEV), Navarro virus (NAVV), Nepuyo virus (NEPV), Nola virus (NOLV), Ntaya virus (NTAV), Oriboca virus (ORIV), Orungo virus (ORUV), Restan virus (RESV), Rift Valley fever virus (RVFV), Semliki Forest virus (SFV), Sindbis virus (SINV), Tahyna virus(TAHV), Tsuruse virus (TSUV), Tyuleniy virus (TYUV), Venezuelan equine encephalitis virus (VEEV), Vesicular stomatitis virus (Indiana serotype), Warrego virus (WARV), West Nile virus (WNV), Wesselsbron virus (WSLV), Yaounde virus (YAOV), Yellow fever virus (YFV), Zegla virus (ZEGV), Zika virus, as well as Plasmodium gallinaceum and Plasmodium lophurae.
This mosquito also mechanically transmits some veterinary diseases. In 1952, Fenner et al. found it transmitting the myxoma virus between rabbits, and in 2001 Chihota et al. found it transmitting the lumpy skin disease virus between cattle.
The yellow fever mosquito can contribute to the spread of reticular cell sarcoma among Syrian hamsters.
Bite prevention methods
The Centers for Disease Control and Prevention traveler's page on preventing dengue fever suggests using mosquito repellents that contain DEET (N,N-diethyl-meta-toluamide, 20% to 30%). It also suggests:
Although Aedes aegypti mosquitoes most commonly feed at dusk and dawn, indoors, in shady areas, or when the weather is cloudy, "they can bite and spread infection all year long and at any time of day."
Once a week, scrub off eggs sticking to wet containers, and seal or discard the containers. The mosquitoes prefer to breed in areas of stagnant water, such as flower vases, uncovered barrels, buckets, and discarded tires, but the most dangerous areas are wet shower floors and toilet tanks, as they allow the mosquitoes to breed in the residence. Research has shown that certain chemicals emanating from bacteria in water containers stimulate the female mosquitoes to lay their eggs. They are particularly motivated to lay eggs in water containers that have the correct amounts of specific fatty acids associated with bacteria involved in the degradation of leaves and other organic matter in water. The chemicals associated with the microbial stew are far more stimulating to discerning female mosquitoes than plain or filtered water in which the bacteria once lived.
Wear long-sleeved clothing and long pants when outdoors during the day and evening.
Use mosquito netting over the bed if the bedroom is not air conditioned or screened, and for additional protection, treat the mosquito netting with the insecticide permethrin.
In one study, insect repellents containing DEET (particularly concentrated products) or p-menthane-3,8-diol (from lemon eucalyptus) were effective in repelling Ae. aegypti mosquitoes, while others were less effective or ineffective. The Centers for Disease Control and Prevention article on "Protection against Mosquitoes, Ticks, & Other Arthropods" notes that "Studies suggest that concentrations of DEET above approximately 50% do not offer a marked increase in protection time against mosquitoes; DEET efficacy tends to plateau at a concentration of approximately 50%". Other insect repellents recommended by the CDC include Picaridin (KBR 3023/icaridin), IR3535, and 2-undecanone.
Population control efforts
Insecticides
Pyrethroids are commonly used. This widespread use of pyrethroids and DDT has caused knockdown resistance (kdr) mutations. Almost no research has been done on the fitness implications: studies by Kumar et al. (2009) on deltamethrin in India, by Plernsub et al. (2013) on permethrin in Thailand, by Jaramillo-O et al. (2014) on λ-cyhalothrin in Colombia, and by Alvarez-Gonzalez et al. (2017) on deltamethrin in Venezuela are all substantially confounded. As of 2019, understanding of selective pressure under withdrawal of insecticide is therefore limited.
Genetic modification
Ae. aegypti has been genetically modified to suppress its own species in an approach similar to the sterile insect technique, thereby reducing the risk of disease. The mosquitoes, known as OX513A, were developed by Oxitec, a spinout of Oxford University. Field trials in the Cayman Islands, in Juazeiro, Brazil, by Carvalho et al. (2015), and in Panama by Neira et al. (2014) have shown that the OX513A mosquitoes reduced the target mosquito populations by more than 90%. This mosquito suppression effect is achieved by a self-limiting gene that prevents the offspring from surviving. Male modified mosquitoes, which do not bite or spread disease, are released to mate with the pest females. Their offspring inherit the self-limiting gene and die before reaching adulthood—before they can reproduce or spread disease. The OX513A mosquitoes and their offspring also carry a fluorescent marker for simple monitoring. To produce more OX513A mosquitoes for control projects, the self-limiting gene is switched off (using the Tet-Off system) in the mosquito production facility using an antidote (the antibiotic tetracycline), allowing the mosquitoes to reproduce naturally. In the environment, the antidote is unavailable to rescue mosquito reproduction, so the pest population is suppressed.
The mosquito control effect is nontoxic and species-specific, as the OX513A mosquitoes are Ae. aegypti and only breed with Ae. aegypti. The result of the self-limiting approach is that the released insects and their offspring die and do not persist in the environment.
In Brazil, the modified mosquitoes were approved by the National Biosecurity Technical Commission for releases throughout the country. Insects were released into the wild populations of Brazil, Malaysia, and the Cayman Islands in 2012. In July 2015, the city of Piracicaba, São Paulo, started releasing the OX513A mosquitoes. In 2015, the UK House of Lords called on the government to support more work on genetically modified insects in the interest of global health. In 2016, the United States Food and Drug Administration granted preliminary approval for the use of modified mosquitoes to prevent the spread of the Zika virus.
Another proposed method consists in using radiation to sterilize male larvae so that when they mate, they produce no progeny. Male mosquitoes do not bite or spread disease.
Using CRISPR/Cas9-based genome editing, genes such as ECFP (enhanced cyan fluorescent protein), Nix (male-determining factor gene), Aaeg-wtrw (Ae. aegypti water witch locus), Kmo (kynurenine 3-monoxygenase), loqs (loquacious), r2d2 (r2d2 protein), ku70 (ku heterodimer protein gene) and lig4 (ligase4) have been targeted to modify the genome of Aedes aegypti. The resulting mutants may become incapable of pathogen transmission or serve as a means of population control.
Infection with Wolbachia
In 2016, research into the use of a bacterium called Wolbachia as a method of biocontrol was published, showing that invasion of Ae. aegypti by the endosymbiotic bacteria makes the mosquitoes resistant to certain arboviruses, such as the dengue fever and Zika virus strains currently circulating. In 2017, Alphabet, Inc. started the Debug Project to infect males of this species with Wolbachia bacteria, interrupting the reproductive cycle of these animals.
Fungus infection
The fungal species Erynia conica (from the family Entomophthoraceae) infects (and kills) two types of mosquitoes: Aedes aegypti and Culex restuans. Studies on the fungus have been carried out on its potential use as a biological control of these mosquitoes.
Taxonomy
The species was first named (as Culex aegypti) in 1757 by Fredric Hasselquist in his treatise . Hasselquist was provided with the names and descriptions by his mentor, Carl Linnaeus. This work was later translated into German and published in 1762 as .
To stabilise the nomenclature, a petition to the International Commission on Zoological Nomenclature was made by P. F. Mattingly, Alan Stone, and Kenneth L. Knight in 1962. It also transpired that, although the name Aedes aegypti was universally used for the yellow fever mosquito, Linnaeus had actually described a species now known as Aedes (Ochlerotatus) caspius. In 1964, the commission ruled in favour of the proposal, validating Linnaeus' name, and transferring it to the species for which it was in general use.
The yellow fever mosquito belongs to the tribe Aedini of the dipteran family Culicidae and to the genus Aedes and subgenus Stegomyia. According to one recent analysis, the subgenus Stegomyia of the genus Aedes should be raised to the level of genus. The proposed name change has been ignored by most scientists; at least one scientific journal, the Journal of Medical Entomology, has officially encouraged authors dealing with aedine mosquitoes to continue to use the traditional names, unless they have particular reasons for not doing so. The generic name comes from the Ancient Greek , , meaning "unpleasant" or "odious".
Subspecies
Two subspecies are commonly recognized: Aedes aegypti aegypti and Aedes aegypti formosus.
This classification is complicated by the results of Gloria-Soria et al., 2016. Although confirming the existence of these two major subspecies, Gloria-Soria et al. found greater worldwide diversity than previously recognized and a large number of distinct populations separated by various geographic factors. Aedes aegypti formosus is found in natural habitats such as forests, while Aedes aegypti aegypti has adapted to urban domestic habitats.
| Biology and health sciences | Flies (Diptera) | Animals |
10989135 | https://en.wikipedia.org/wiki/Electroanalytical%20methods | Electroanalytical methods | Electroanalytical methods are a class of techniques in analytical chemistry which study an analyte by measuring the potential (volts) and/or current (amperes) in an electrochemical cell containing the analyte. These methods can be broken down into several categories depending on which aspects of the cell are controlled and which are measured. The three main categories are potentiometry (the difference in electrode potentials is measured), amperometry (electric current is the analytical signal), and coulometry (the charge passed during a certain time is recorded).
Potentiometry
Potentiometry passively measures the potential of a solution between two electrodes, affecting the solution very little in the process. One electrode is called the reference electrode and has a constant potential, while the other is an indicator electrode whose potential changes with the sample's composition. The difference in potential between the two electrodes therefore gives an assessment of the sample's composition. Since the potentiometric measurement is non-destructive and assumes that the electrode is in equilibrium with the solution, what is effectively measured is the potential of the solution.
Potentiometry usually uses indicator electrodes made selectively sensitive to the ion of interest, such as fluoride in fluoride-selective electrodes, so that the potential depends solely on the activity of this ion of interest.
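For an ideal ion-selective electrode, the measured potential is related to the ion activity by the Nernst equation, E = E0 + (RT/zF) ln a, which is what makes the potential a direct probe of activity. Below is a minimal sketch assuming ideal Nernstian behaviour at 25 °C; the function names and example numbers are illustrative assumptions, not values taken from this article.
import math

R = 8.314462618   # gas constant, J/(mol*K)
F = 96485.33212   # Faraday constant, C/mol
T = 298.15        # temperature, K (25 deg C)

def nernst_potential(activity, z, e0=0.0):
    """Ideal electrode potential (V) for an ion of charge z at the given activity."""
    return e0 + (R * T) / (z * F) * math.log(activity)

def activity_from_potential(e_meas, z, e0=0.0):
    """Invert the Nernst equation to estimate an ion activity from a measured potential."""
    return math.exp((e_meas - e0) * z * F / (R * T))

# Example: a fluoride-selective electrode (z = -1) reading 59.2 mV below its standard value
# corresponds to roughly a tenfold increase in fluoride activity relative to unit activity.
print(activity_from_potential(-0.0592, z=-1))  # ~10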
The time it takes the electrode to establish equilibrium with the solution will affect the sensitivity or accuracy of the measurement. In aquatic environments, platinum is often used due to its high electron-transfer kinetics, although an electrode made from several metals can be used to enhance the electron-transfer kinetics. The most common potentiometric electrode is by far the glass-membrane electrode used in a pH meter.
A variant of potentiometry is chronopotentiometry, which consists of applying a constant current and measuring the potential as a function of time. The technique was initiated by Weber.
Amperometry
Amperometry is the collective term for electrochemical techniques in which a current is measured as a function of an independent variable, typically time (in chronoamperometry) or electrode potential (in voltammetry). Chronoamperometry is the technique in which the current is measured, at a fixed potential, at different times since the start of polarisation. Chronoamperometry is typically carried out in an unstirred solution and at a fixed electrode, i.e., under experimental conditions that avoid convection as a mode of mass transfer to the electrode. Voltammetry, on the other hand, is a subclass of amperometry in which the current is measured while the potential applied to the electrode is varied. The different voltammetric techniques are defined by the waveform that describes how the potential is varied as a function of time.
Chronoamperometry
In chronoamperometry, a sudden step in potential is applied at the working electrode and the current is measured as a function of time. Since this is not an exhaustive method, microelectrodes are used and the duration of the experiment is usually very short, typically 20 ms to 1 s, so as not to consume the analyte.
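For an idealized planar electrode under diffusion control, the current that follows the potential step is often approximated by the Cottrell equation, i(t) = n F A c √(D/(π t)). The sketch below evaluates it for assumed, textbook-style values; the electrode area, concentration and diffusion coefficient are illustrative, not taken from this article.
import math

F = 96485.33212  # Faraday constant, C/mol

def cottrell_current(t_s, n, area_cm2, conc_mol_per_cm3, d_cm2_per_s):
    """Diffusion-limited current (A) at a planar electrode, t_s seconds after the potential step."""
    return n * F * area_cm2 * conc_mol_per_cm3 * math.sqrt(d_cm2_per_s / (math.pi * t_s))

# Assumed values: 1-electron process, 0.01 cm^2 electrode, 1 mmol/L analyte, D = 1e-5 cm^2/s;
# the current 0.1 s after the step is on the order of a few microamperes.
print(cottrell_current(0.1, n=1, area_cm2=0.01, conc_mol_per_cm3=1e-6, d_cm2_per_s=1e-5))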
Voltammetry
Voltammetry consists of applying a constant and/or varying potential at an electrode's surface and measuring the resulting current with a three-electrode system. This method can reveal the reduction potential of an analyte and its electrochemical reactivity. In practical terms it is non-destructive, since only a very small amount of the analyte is consumed at the two-dimensional surface of the working and auxiliary electrodes. In practice, the analyte solution is usually disposed of, since it is difficult to separate the analyte from the bulk electrolyte and the experiment requires only a small amount of analyte. A typical experiment may involve 1–10 mL of solution with an analyte concentration between 1 and 10 mmol/L. More advanced voltammetric techniques can work with microliter volumes and down to nanomolar concentrations. Chemically modified electrodes are employed for the analysis of organic and inorganic samples.
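For a rough feel of the currents involved, the peak current of a reversible couple in cyclic voltammetry at a planar macroelectrode is commonly estimated with the Randles–Ševčík equation at 25 °C. The sketch below uses assumed, textbook-style numbers (a 3 mm diameter disc electrode and a 1 mmol/L analyte), not values from this article.
import math

def randles_sevcik_peak_current(n, area_cm2, conc_mol_per_cm3, d_cm2_per_s, scan_rate_v_per_s):
    """Approximate peak current (A) for a reversible couple at 25 deg C (Randles-Sevcik equation)."""
    return 2.69e5 * n**1.5 * area_cm2 * conc_mol_per_cm3 * math.sqrt(d_cm2_per_s * scan_rate_v_per_s)

# Assumed values: 1-electron couple, 3 mm diameter disc (radius 0.15 cm), 1 mmol/L analyte,
# D = 1e-5 cm^2/s, scan rate 100 mV/s -> peak current on the order of 20 microamperes.
area = math.pi * 0.15**2
print(randles_sevcik_peak_current(1, area, 1e-6, 1e-5, 0.1))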
Polarography
Polarography is a subclass of voltammetry that uses a dropping mercury electrode as the working electrode.
Coulometry
Coulometry uses an applied current or potential to completely convert an analyte from one oxidation state to another. In these experiments, the total current passed is measured directly or indirectly to determine the number of electrons passed. Knowing the number of electrons passed can indicate the concentration of the analyte or, when the concentration is known, the number of electrons transferred in the redox reaction. Typical forms of coulometry include bulk electrolysis, also known as potentiostatic coulometry or controlled-potential coulometry, as well as a variety of coulometric titrations.
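The quantitative link is Faraday's law of electrolysis, Q = n F N, which relates the total charge Q passed to the moles N of analyte converted and the number of electrons n transferred per molecule. A minimal sketch with an assumed, illustrative charge value:
F = 96485.33212  # Faraday constant, C/mol

def moles_from_charge(charge_coulombs, n_electrons):
    """Moles of analyte converted for a given total charge passed (Faraday's law)."""
    return charge_coulombs / (n_electrons * F)

# Assumed example: 19.3 C passed in a 2-electron reduction corresponds to about 1e-4 mol of analyte.
print(moles_from_charge(19.3, n_electrons=2))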
| Physical sciences | Basics_2 | Chemistry |
5613476 | https://en.wikipedia.org/wiki/Amphicyon | Amphicyon | Amphicyon is an extinct genus of large carnivorans belonging to the family Amphicyonidae (known colloquially as "bear-dogs"), subfamily Amphicyoninae, from the Miocene epoch. Members of this family received their vernacular name for possessing bear-like and dog-like features. They ranged over North America, Europe, Asia, and Africa.
Taxonomy
In a note dated May 16, 1836, French geologist Alexandre Leymerie wrote of a letter he had requested in April from French palaeontologist Édouard Lartet, which provided details of Lartet's exploits in palaeontological sites in the French department of Gers, in particular the commune Sansan. Lartet described his finds of fossil taxa that he found within the sites, including "Mastodonte" (species assigned to it were later reclassified to another mammutid Zygolophodon and the gomphothere Gomphotherium), "Dinotherium" (its species eventually reclassified as either Deinotherium or Prodeinotherium), "Rhinoceros" (reclassified as an aceratherine rhinocerotid Hoploaceratherium), and "Palaeotherium" (the referred equid species now known as belonging to an anchitherine Anchitherium). He also recalled finding fossil "deer" species, of which he said that the largest ones were the size of extant deer in France while the smallest ones were the size of small antelope. The palaeontologist noted that the "peaceful ruminants" coexisted with a "formidable" large carnivore he provisionally named Amphicyon based on two half-jaws and bones that he sent to a museum. He described it as having unilobed incisors and compressed canines similar to raccoons, but also a carnivorous molar whose first two tubercles conform to those of dogs. Lartet then stated that the genus's most distinct trait was the existence of a third tubercle at the upper jaw, which was not known in any other carnivore. The genus name appears to be derived from the Ancient Greek terms ἀμφί ("on both sides") and κύων ("dog"), but Lartet did not define the genus's etymology.
Despite the initial status of the genus name Amphicyon as nonpermanent, French anatomist Henri Marie Ducrotay de Blainville, a peer with whom Lartet had regularly discussed his fossil findings, sketched mammal skeletons and fossils in 1841, in which he recognized the two species "Amphicyon major" and "Amphicyon? minor." In 1851, Lartet reviewed the fossil carnivoran genera from Sansan. Among them was Amphicyon, which was reconfirmed as a carnivorous mammal the size of extant bears that was discovered in Sansan in 1835. He recalled that its single-lobed incisors and its canines with serrated ridges were similar to those of the raccoon, while the molars were similar to those of a dog. He confirmed the fossil specimens, along with the third tubercle in the upper jaw (of which he said that it only exists in the extant bat-eared fox (then known as "Canis megalotis")), as belonging to the species Amphicyon major. The palaeontologist described it as also having an anatomy of plantigrade locomotion similar to extant bears, with few differences in form. Blainville was mentioned as speculating that it must have had a long and very strong tail. The species "Amphicyon minor" was reclassified as a separate genus Hemicyon, which he described as a carnivore larger than a European wolf that was closer in form to a dog than Amphicyon and had dentition similar to mustelids. He also described a newer genus Pseudocyon, which he misidentified as being digitigrade and described as being smaller than Amphicyon and coming closest to canids based on its dentition and bones. All three genera, Lartet said, had canines that retained finely serrated edges, implying that they were some of the top coexisting predators of the Miocene in modern-day France.
Species
European species
Amphicyon astrei
The oldest known species of the genus, A. astrei is known from the Early Miocene sites Gardouch and Paulhiac in France, which date to MN1 (or "Mammal Neogene 1", as part of the Mammal Neogene zones). The species was originally described by Kuss in 1962; however, he also noted that its features do not completely match any known genus, and he later moved it to the genus Pseudocyon, as a subspecies of P. sansaniensis, and considered it to be ancestral to P. s. intermedius (which has since been moved to the separate genus Crassidia). Ginsburg and Antunes later reassigned it to Amphicyon, which was followed by other authors, and suggested that it was ancestral to later species of the genus. Unlike later members of the genus, it did not possess enlarged posterior molars.
Amphicyon lactorensis
This species was originally described by Astre on the basis of a single molar, from the French locality Le Mas d’Auvignon, which dates to MN4/5. Ginsburg referred more material from MN4-5 of France to this species, and assigned it to the subgenus Euroamphicyon. Its M2 is peculiar, as it is anteroposteriorly shortened but transversely elongated. Kuss synonymized it with A. depereti, which has since been moved to Ysengrinia, although later authors generally consider it to be a valid species of Amphicyon.
Amphicyon major
A. major, which was named by De Blainville in 1841, is both the type species of the genus but also the best known, as various cranial and even postcranial remains have been discovered across Western and Central Europe as well as Turkey. It first appeared in MN4 and lasted until at least MN6. Amphicyonid remains from La Grive Saint-Alban, dating back to MN7/8, have also been assigned to this species. Others point out the differences between these fossils and the type material of A. major, suggesting that they may belong to a separate species. It is likely closely related to the geologically younger A. eppelsheimensis, A. gutmanni and A. pannonicus, the first two of which had previously been assigned to A. major as subspecies.
Amphicyon eppelsheimensis
A mandible and a mandibular fragment belong to A. eppelsheimensis were originally discovered at the locality Eppelsheim in Germany, and described by Weitzel in 1930. Other remains have since been found at Gau-Weinheim, which is located in close proximity to Eppelsheim, and the Spanish Valles de Fuentidueña. All these localities date to MN9-10. The taxonomic status of this species is controversial, with Kuss and several other authors considering this taxon to be a subspecies or synonym of A. major. Later authors however suggest that the two species are distinct, with A. eppelsheimensis possibly being the last representative of the A. major lineage. Notably, the p4 is more strongly reduced than in A. major, and it is also slightly larger.
Amphicyon gutmanni
A. gutmanni was described by Kittl in 1891 on the basis of a single, robust and low-crowned lower carnassial. Kuss considered it to be a subspecies of A. major, but Kretzoi argued for its validity, based on the contour of its talonid, and even erected the separate genus Hubacyon, with H. gutmanni as type species. Viranta followed his arguments for the distinction of this species, but did not consider Hubacyon to be a valid genus. The highest point of its hypoconid is located more posterior than in other members of this genus, and a line drawn from the posterolingual corner to the posterobuccal corner possesses a greater angle on the buccal side, due to the extended posterobuccal corner. Both of these features are similar to those seen in thaumastocyonines. Its type locality Mannersdorf, in Austria, is of uncertain age, but the presence of hipparionine horses shows that it is no older than MN9. Viranta also tentatively assigns molars from Kohfidisch, previously referred to cf. A. giganteus, to this species. As this locality dates to MN11, this would make it one of the youngest members of the family. This species is likely closely related to A. major.
Amphicyon pannonicus
The molar of this species was discovered in the Danitzpuszta sandpit in Pécs, southern Hungary, and originally described by Kretzoi in 1985 as Hubacyon (Kanicyon) pannonicus. Some authors state that locality of where it was found has been considered to date to MN11-12, which would make it one of the youngest known amphicyonids, although its exact dating is unclear. However, the terrestrial assemblage of the sandpit generally points towards an Early Pannonian (Vallesian) age, as which is in agreement with Kretzoi's original description. This species is potentially hypercarnivorous, and only known from a single, fragmentary tooth, which is smaller, more slender and gracile than that of A. gutmanni, as well as considerably more brachydont. Just like A. gutmanni, it is considered to be closely related to A. major.
Amphicyon carnutense
A. carnutense, known from the MN3 of France and possibly Czechia, is a large species with a confusing taxonomic history. The type material from Chilleurs-aux-Bois was originally described as a subspecies of A. giganteus, A. g. carnutense, and considered ancestral to the nominal subspecies A. g. giganteus. The subspecies was discarded later on, but other authors considered A. carnutense distinct enough for it to be classified as a separate species. Adding to the confusion is the status of Megamphicyon, to which A. carnutense is referred, which is variously considered to be synonymous with Amphicyon, a subgenus of the former, or a separate genus altogether. Furthermore, Amphicyon lathanicus, originally described in 2000 on the basis of isolated teeth from Beilleaux à Hommes, France, which date to MN3, with further remains reported across France, is likely synonymous with A. carnutense.
Amphicyon giganteus
A. giganteus was originally described by Schinz in 1825, and in 1965 Kuss erected the genus Megamphicyon for this species, based on differences in dentition and size between it and A. major. Subsequent authors generally disregarded this assignment, with Ginsburg considering Megamphicyon a subgenus of Amphicyon. Siliceo et al. revived the genus in 2020, a classification that was followed by some authors. Others, however, reject the reclassification in favour of the older classification A. giganteus. A. giganteus was a widespread European species that lived during the late Burdigalian to late Serravallian, corresponding to MN4-MN7/8. Most remains were found in Western Europe, although the youngest known record of the species is from Turkey, possibly suggesting the species survived in Anatolia after it had already gone extinct in Europe. Fossils from this species are also known from Bosnia and Herzegovina as well as the locality Arrisdrift in Namibia. Fossil specimens from Moghra in Egypt have also been referred to this species, but the referral of these fossils remains controversial. It has furthermore been reported from Pakistan's lower Vihowa Formation. It differs from A. major through its larger size, bigger premolars, shorter diastemata, a P4 that possesses a larger and lingually extended protocone, and the presence, on its elongated M1, of a paracone that is very large and high in comparison with its metacone. A. eibiswaldensis is generally considered to be a junior synonym of this species.
Amphicyon laugnacensis
Originally described as a subspecies of A. giganteus, A. laugnacensis was elevated to species level by Ginsburg in 1999. It is the oldest known member of the A. giganteus lineage, with both its type locality Laugnac and possible remains from Gérand-le-Puy and Grépiac dating to MN2. Its holotype, a maxilla previously referred to A. astrei, possesses a parastyle and a more posteriorly located protocone.
Amphicyon olisiponensis
A. olisiponensis was described by Antunes and Ginsburg in 1977 on the basis of a mandible discovered near Lisbon. Isolated teeth belonging to this species have also been reported from Buñol in Spain. Both these localities date to MN4, although there is a possible report from La Retama, which dates to MN5, but the remains from there are as of yet undescribed. Differences in dentition, most notably the reduction of its premolars, led Viranta to erect the separate genus Euroamphicyon for this species. This proposal of a separate genus is followed by some authors. Others, however, do not recognize "Euroamphicyon" as a distinct genus and instead still use A. olisiponensis.
Asian species
Amphicyon ulungurensis
A. ulungurensis is known from the early Langhian of the Halamagai Formation, near the Ulungur River from which it derives its name. Due to the lack of observations of the characteristics of the upper molars, there is neither evidence for including it in nor for excluding it from the genus, in which it is placed mostly on the basis of its very large size. The holotype of this species is a fragmentary right hemimandible, but postcranial remains belonging to this species have also been described, including a comparatively small calcaneum and cuboid, possibly indicating sexual dimorphism.
Amphicyon zhanxiangi
The only Asian amphicyonid which definitely belongs to the genus Amphicyon, A. zhanxiangi was described in 2018 based on a maxillary fragment from the Zhang’enbao Formation in Ningxia, China. The Yinziling subfauna to which it belongs dates to the late Shanwangian, roughly corresponding to MN5. It has also been reported from the slightly younger locality Lagou, part of the Hujialiang Formation, in the Linxia Basin, dating to the Tunggurian, which is equivalent to MN6. A. zhanxiangi is medium-sized, comparable to A. major, and closely related to A. giganteus. Over time, the diet of the species adapted towards omnivory as it moved towards more southern and humid areas, where greater amounts of plant material were available. The Lagou specimen shows greater adaptations to omnivory than the older one from Ningxia, which lived farther to the north, in more arid terrain. This trend likely continued, with A. zhanxiangi being the probable ancestor of Arctamphicyon.
Amphicyon lydekkeri
A. lydekkeri is known from the Dhok Pathan horizon in Pakistan and was described by Pilgrim in 1910, who later attributed it to its own genus, Arctamphicyon. However, Pilgrim identified the holotype first as an m1 and then as an M1, despite it actually being an M2, making the diagnosis invalid. It has furthermore been argued that the differences between “Arctamphicyon” and Amphicyon are negligible, with the former being a junior synonym of the latter. Other authors consider the differences distinct enough for the separation of the two genera. Fossils from Yuanmou in Yunnan, and the Lower Irrawaddy Formation in Myanmar, show affinities to this species, and have been assigned to Arctamphicyon. As the locality Hasnot, where A. lydekkeri was found, has been dated to the latest Miocene (7-5 Ma), this species is one of the youngest amphicyonids known.
Amphicyon cooperi
This species is only definitely known from its holotype, a single m1, discovered in rocks of the Bugti Hills probably dating to the early Miocene, although possible remains have been reported from zones 4 and 6 of the Dera Bugti synclinal. It was described by Pilgrim in 1932. He noted that the tooth is very similar to that of A. shahbazi, although A. cooperi lacks an external cingulum, and that it may actually belong to that species.
Amphicyon palaeindicus
A. palaeindicus was described by Richard Lydekker in 1876 on the basis of an isolated M2 collected at Kushalgarh in the Potwar Plateau. Later authors referred a fragmentary mandible from Chinji, and isolated teeth from the Chinji and Nagri zones and the Dang Valley, to this species. The exact age of the Chinji specimens cannot be defined, as the fossil-bearing localities in this region stretch from ca. 15 to 9 Ma, although the correlation of the Dang Valley fauna suggests that they are of late middle Miocene age, whereas the Nagri fauna dates to the Vallesian. It has been suggested that none of the Siwalik species truly belong to Amphicyon, although others suggest that A. palaeindicus should be referred to this genus.
Amphicyon pithecophilus
Pilgrim erected this species in 1932 on basis of an isolated m2 from Chinji. He furthermore assigned two fragmentary mandibles, from Chinji and Nurpur, previously referred to A. palaeindicus to this species. Colbert considered it a synonym of that species, although later authors considered it distinct due to its larger metacone and stronger buccal cingulum on the M2.
Amphicyon sindiensis
A. sindiensis is one of the most poorly known species assigned to the genus, being only known from a fragmentary right mandible and an isolated molar from the basal beds of the Manchar Formation in Pakistan, dating to the early Middle Miocene. The dimensions of its m2 are similar to those of Maemohcyon.
Amphicyon shahbazi
A. shahbazi was described by Pilgrim in 1910 on the basis of two poorly preserved mandibular fragments from the Bugti Hills. The exact age of these fossils is not known, but other fragmentary remains assigned to this species, discovered in the upper Chitarwata Formation and lower Vihowa Formation, which correlate with MN2-3, suggest that they date to the Early Miocene.
Amphicyon confucianus
It is only known from a single, fragmentary right hemimandible, which includes p3 and m1. A. confucianus is part of the Shanwang Local Fauna, which dates to ca. 16 Ma. It is a large species, comparable to A. ulungurensis in size. The attribution of this species to Amphicyon remains unclear, although it probably does not belong to this genus.
Amphicyon tairumensis
"Amphicyon" tairumensis was described by Edwin Harris Colbert in 1939, on the basis of a left hemimandible with heavily worn teeth discovered in the Inner Mongolian Tunggur Formation. It is a wolf-sized predator, considerably smaller than A. major. The m1 is swollen at the lingual point between the talonid and the trigonid, a feature not seen in European members of the genus. A similar, but currently unpublished, form from Laogou has upper dental characteristics quite unlike Amphicyon, and it has been proposed that it is more closely related to Pseudocyon because of its size and the lingual convexity of its m1.
North American species
Amphicyon galushai
A. galushai represents the first occurrence of Amphicyon in North America, approximately 18.8–17.5 Mya during the early Hemingfordian. Described by Robert M. Hunt Jr. in 2003, it is mostly known from fossils found in the Runningwater Formation of western Nebraska and includes a complete adult skull, a partial juvenile skull, 3 mandibles, and teeth and postcranial elements representing at least 15 individuals. There is an additional skull fragment from the Troublesome Formation of Colorado. A. galushai is considered ancestral to the late Hemingfordian species, Amphicyon frendens.
Amphicyon frendens
A. frendens lived during the late Hemingfordian, 17.5–15.9 Mya. The species was originally described by W. Matthew in 1924 from specimens found in the middle member of the Sheep Creek Formation, Sioux County, Nebraska. A. frendens specimens have since been found at sites in Harney and Malheur Counties, Oregon. It was considerably bigger than the earlier A. galushai, and possessed a larger M2.
Amphicyon ingens
This huge species lived during the early to middle Barstovian, 15.8–14.0 Mya. It was originally described by W. Matthew in 1924 from specimens found in the Olcott Formation, Sioux County, Nebraska. Specimens attributed to this species have since been found in California, Colorado and New Mexico. A. ingens possessed the largest canines of any amphicyonine.
Amphicyon longiramus
"Amphicyon intermedius" is a name used to refer to a dubious species found at Thomas Farm of the Hawthorne Formation in Florida, which was described by White in 1940. However, the name was preoccupied by a different species described by von Meyer in 1849, which is a synonym of Crassidia intermedia, a thaumastocyonine found in the localities of Germany and France that is not closely related to the taxon found in Florida. The species as referred to White were attributed additionally to Amphicyon remains found in 1992 in the lower part of the Calvert Formation at the Pollack Farm Site in Delaware dating to the early Hemingfordian (or early Miocene) based on the past referral of the Hawthorne Formation fossils to A. intermedius. However, a 1960 source by Olsen refers to A. intermedius as a synonym to A. longiramus, which Heizmann and Kordikova acknowledged in 2000 as making A. longiramus the valid name over White's A. intermedius name. Additionally, a 2012 article on Amphicyon by the Florida Museum of Natural History also refers to A. intermedius as a synonym to A. longiramus based on the similarities of the two in the localities of Florida and Delaware. The species A. longiramus is said to have coexisted with the smaller amphicyonine Cynelos caroniavorus (White, 1942), which was also found in the Thomas Farm locality.
Description
Amphicyon was a large to very large predator, although the various species differ considerably in size, ranging from moderately sized species such as A. astrei to the huge A. ingens, which was one of the biggest carnivorans of all time. The estimated weight of male A. major is 212 kg, while females are smaller, at only 122 kg, indicating significant sexual dimorphism. The shoulder height of a young female, which has been estimated to have weighed 125 kg, has been reconstructed as 65 cm. As the largest Old World species of the genus, A. giganteus was considerably larger, with females weighing 157 kg and males 317 kg, although they may have grown considerably larger. The mass of several other European species has been estimated from craniodental measurements, which generally falls into the range of estimations derived from postcranial remains, although it may slightly overestimate their weight. A. astrei is the smallest species, estimated at 112 kg, while A. laugnacensis and A. lactorensis were somewhat larger, at ~130 kg and 132 kg, respectively. A. olisiponensis is estimated at 147 kg and A. carnutense as 182 kg, while A. eppelsheimensis and A. gutmanni are among the biggest members of the genus, with estimated weights of 225 and 246 kg. The North American species of the genus show a considerable size increase over the course of their evolution, with the earliest one, A. galushai, being estimated at 187 kg, whereas A. frendens was considerably larger, at 432 kg. Finally, the terminal North American species, A. ingens, was among the largest of all amphicyonids, with an estimated body mass of 550 kg.
Its skeleton showcases a variety of features resembling canids, ursids and felids. Amphicyon possessed a powerful skull, with a long snout and high sagittal crests. The canines are robust, and the posterior molars are enlarged, whereas the anterior premolars are reduced. Its neck is wide, similar to that of a bear. Its postcranial skeleton is stout and robust, with massive, powerful limbs, and mobile shoulder joints as well as flexible wrists. The upper limb bones are comparatively long in comparison to the lower ones, and it did not possess any adaptation towards cursoriality. Its posture was more similar to plantigrade taxa such as ursids than to digitigrade ones like felids, and its claws were not retractable. Amphicyon also had a rather flexible back, and a heavy tail, which has been estimated to have possessed as many as 28 caudal vertebrae, and may have been as long as the rest of the spine.
Palaeobiology
Diet and predatory behaviour
The diet of Amphicyon has proven difficult to reconstruct, as its dentition possesses both crushing and shearing functions. It has been proposed, on the basis of dental wear patterns and morphology, that European species of this genus were bone-crushing mesocarnivores. One study argued that A. longiramus was hypercarnivorous, as the relative grinding area of its lower molars is similar to that of carnivorous canids, whereas another suggested that the North American species of the genus were omnivores. A dental microwear analysis of A. major recovers the diet of this species as mesocarnivorous, similar to red foxes, consuming meat as well as plants and hard items, which presumably included bone. Another dental microwear analysis also supports an omnivorous diet for A. giganteus, whose dentition possesses a high number of large pits and several small pits, and notes that it clearly differs from bone-crushing taxa such as hyaenas. As both its anterior premolars and posterior molars are reduced, A. olisiponensis may have been more hypercarnivorous than other European species.
As it lacked the adaptations for rapid acceleration, Amphicyon seems to have hunted quite unlike lions and tigers, which approach their prey very closely, before overtaking it after a quick burst of speed. However, as even modern pursuit predators such as wolves stalk and ambush their prey, it is likely that Amphicyon did the same. It has been proposed that it pursued its prey for longer distances, and at a speed notably slower than modern wolves. After catching up to its victim, it was likely able to immobilize it with its powerful forelimbs. Its postscapular fossa indicates a well-developed subscapularis minor muscle, which fixes the shoulder joint, and prevents the head of the humerus from being dislocated by the struggles of a prey animal trying to break free. The anatomy of its humerus also supports this, as it showcases the presence of a strong pronator teres muscle, and thereby pronation of the forearm, and powerful flexors of digits and wrists, which are integral to the prey-grasping ability of both extant bears and big cats. Indeed, the trochlea of its humeral condyle is shallower than that of a tiger, suggesting that the pronation/supination of its forearms might have been even greater than in large felids, although it likely lacked the ability of cats to retract their claws. Its small infraorbital foramina indicate that it lacked the well-developed vibrissae of cats, which provide them with the sensory information needed to place a precise killing bite. Therefore, it may have killed its prey by tearing open the prey's ribcage, as thylacines did, or by biting into its neck to sever major blood vessels. Just like modern predators, it likely did not target its prey's abdomen, as wounds in that area do not kill quickly. As the elongation of its distal limb segments was more similar to that of the solitary tiger than to the social lion, Amphicyon was likely solitary as well. Due to its comparatively slow maximum speed and lack of rapid acceleration, it is unlikely that Amphicyon preyed on cursorial ungulates. However, it has been proposed that its pursuit capabilities were suited to chase mediportal ungulates, such as merycoidodontids and rhinoceroses. A specimen of the rhinoceros Prosantorhinus douvillei was discovered with bitemarks corresponding to those of A. giganteus, although it remains unclear if this was the result of active predation or merely scavenging of remains. Other bitemarks referred to the species A. olisiponensis were found on a metapodial belonging to the large anthracothere Brachyodus onoideus. Bite traces on various mammalian long bones from the Early Miocene of Czechia have also been attributed to Amphicyon. As patterned bones have no immediate benefit for feeding, they likely represent evidence of active predation.
Sexual dimorphism
Strong sexual dimorphism is present in a variety of species, known from both Europe and North America, with the males being considerably larger than the females. Although this size difference is present in many amphicyonids, it is more strongly developed in Amphicyon than in Cynelos lemanensis. The males furthermore possess slightly longer and more robust snouts, larger canines and immense sagittal crests. Comparison with other strongly sexually dimorphic carnivorans suggests that Amphicyon was polygynous, with territorial males competing with each other for females during the mating season. This may have contributed to the size increase observed within the genus.
Possible footprints
Footprints assigned to the ichnotaxon Hirpexipes alfi were discovered in the Californian Barstow Formation, and match the feet of A. ingens. They showcase that the animal was semidigitigrade to semiplantigrade, and possessed long and sharp claws. Hirpex means "rake", and references the long, flexible digits of the foot, which reminded the authors of the prongs of leaf rakes.
Another ichnotaxon associated with Amphicyon is Platykopus maxima from the Hungarian Early Miocene locality Ipolytarnóc. The footprints were attributed to A. major on the basis of their size and short phalanges.
Fossil distribution
Fossil remains of Amphicyon are most common in Western and Central Europe, where they were discovered in various countries, including France, Germany, Spain and Hungary, but were also found in Bosnia-Herzegovina and Turkey. A. astrei is the oldest known species, and may have been the ancestor of the later members of the genus, and is known from the earliest Miocene of France. Species belonging to the A. giganteus lineage appeared shortly afterwards, and are common in Europe until MN6, which corresponds to 13.7 to 12.75 Ma. However, this species is also known from Turkey, where it was found in the Karacalar locality, which dates to 11.6 ± 0.25 Ma, indicating that it survived in Anatolia after it had already disappeared in Europe. Throughout the Middle Miocene of Europe, it was sympatric with the considerably smaller A. major, although the two species were likely ecologically or environmentally separated. While common throughout the continent during the Middle Miocene, amphicyonid diversity decreased following the Vallesian Turnover, with the last known European species of the genus surviving in Central Europe until MN11, which dates from 8.7 to 7.75 Ma.
While various remains and species of Amphicyon have been reported from South and East Asia, their referral is often problematic, as they're usually known from fragmentary material and all large sized amphicyonids found on the continent are generally placed in this genus. The only species definitely belonging to this genus is A. zhanxiangi from the middle Miocene of China. Other, tentatively assigned, species of this genus are known from China throughout the early Middle Miocene, but disappear by the late Miocene. It has been suggested that there were at least three dispersal events from European Amphicyon into Eastern Asia, with the first one being the ancestors of the North American species, the second one dating to the Early Miocene or earliest Middle Miocene, leading to A. zhanxiangi, and the last one, that of the A. ulungurensis lineage, which occurred slightly later. There was generally no closer affinity between the Chinese amphicyonids and those of the Indian Subcontinent during the middle Miocene. However, it has been proposed that the late Miocene A. lydekkeri from Pakistan, which is sometimes attributed to the separate genus Arctamphicyon, is a descendant of A. zhanxiangi, with the lineage immigrating from Northern China to Southern Asia. Further remains showcasing affinities with these species are also known from Yunnan, and their dispersal might be linked to the uplifting of the Tibetan Plateau and the strengthening of the Asian Monsoon. The attribution of the various Amphicyon species described from the South Asian Siwaliks is similarly questionable. They are found throughout the whole Miocene epoch, with A. shahbazi being known from the earliest Miocene, whereas remains of A. lydekkeri date to the latest Miocene (~7-5 Ma), making it one of the youngest amphicyonids known. A very large humerus from the Manchar formation indicates that a gigantic species was present in the Siwaliks during the early parts of the Middle Miocene. South East Asian reports include a large incisor from the Aquitanian (~23-21 Ma) of Vietnam, and a species from the Lower Irrawaddy Formation of Myanmar, which is likely closely related to Arctamphicyon. Scarce dental remains have also been reported from the Saudi Arabian Dam Formation, which dates to ca 17-15 Ma, in 1982. These remains show morphological differences to A. major, and several of the species to which it had been compared, mostly because of their similar, small size, including A. bohemicus, A. styriacus and A. steinheimensis (which also shares the apomorphic features present in the Arabian taxon), have since been moved to other genera.
The only definitive African remains of Amphicyon are from Arrisdrift in Namibia, which has variously been dated to 17.5 Ma or 16 Ma, and belong to the species A. giganteus. Further remains from this species have also been reported from the slightly older locality Moghra in Egypt, and it has been suggested that a mandible from Gebel Zelten, which is of similar age, in Libya indicates the presence of another, smaller species of the genus in the early Miocene of Africa. However, other authors assign these fossils to Afrocyon and Mogharacyon, respectively. Much younger remains of large, African amphicyonids have previously been referred to Amphicyon. Most notable among these are a molar and fragmentary postcranial remains from the Lower Nawatwa Formation, dating to 7.4 ± 0.1 – 6.5 ± 0.1 Ma, which represents one of the youngest amphicyonids known. Others tentatively refer this taxon to the genus Myacyon.
The migration of Amphicyon from Eurasia to North America was part of a trans-Beringian faunal exchange between the two continents during the Early Miocene. The oldest North American member of the genus is A. galushai, which first appeared between 18.8 and 18.2 Ma. It likely gave rise to the larger A. frendens, which itself was ancestral to the huge A. ingens, which was also the last North American member of the genus, disappearing around 14.2 Ma. This lineage was probably endemic to North America, and is mostly known from the Great Plains, although remains of A. ingens were also discovered in California and New Mexico. Another species, A. longiramus, is known from the Thomas Farm Site of Florida, which dates to ca. 18 Ma, and possibly the Pollack Farm Local Fauna of Delaware, as well as the Texan Garvin Gully fauna, which are of similar age. The relationship of this species to the Great Plains lineage is unclear.
| Biology and health sciences | Other carnivora | Animals |
5614784 | https://en.wikipedia.org/wiki/Moving%20magnet%20and%20conductor%20problem | Moving magnet and conductor problem | The moving magnet and conductor problem is a famous thought experiment, originating in the 19th century, concerning the intersection of classical electromagnetism and special relativity. In it, the current in a conductor moving with constant velocity, v, with respect to a magnet is calculated in the frame of reference of the magnet and in the frame of reference of the conductor. The observable quantity in the experiment, the current, is the same in either case, in accordance with the basic principle of relativity, which states: "Only relative motion is observable; there is no absolute standard of rest". However, according to Maxwell's equations, the charges in the conductor experience a magnetic force in the frame of the magnet and an electric force in the frame of the conductor. The same phenomenon would seem to have two different descriptions depending on the frame of reference of the observer.
This problem, along with the Fizeau experiment, the aberration of light, and more indirectly the negative aether drift tests such as the Michelson–Morley experiment, formed the basis of Einstein's development of the theory of relativity.
Introduction
Einstein's 1905 paper that introduced the world to relativity opens with a description of the magnet/conductor problem:
An overriding requirement on the descriptions in different frameworks is that they be consistent. Consistency is an issue because Newtonian mechanics predicts one transformation (so-called Galilean invariance) for the forces that drive the charges and cause the current, while electrodynamics as expressed by Maxwell's equations predicts that the fields that give rise to these forces transform differently (according to Lorentz invariance). Observations of the aberration of light, culminating in the Michelson–Morley experiment, established the validity of Lorentz invariance, and the development of special relativity resolved the resulting disagreement with Newtonian mechanics. Special relativity revised the transformation of forces in moving reference frames to be consistent with Lorentz invariance. The details of these transformations are discussed below.
In addition to consistency, it would be nice to consolidate the descriptions so they appear to be frame-independent. A clue to a framework-independent description is the observation that magnetic fields in one reference frame become electric fields in another frame. Likewise, the solenoidal portion of electric fields (the portion that is not originated by electric charges) becomes a magnetic field in another frame: that is, the solenoidal electric fields and magnetic fields are aspects of the same thing. That means the paradox of different descriptions may be only semantic. A description that uses scalar and vector potentials φ and A instead of B and E avoids the semantic trap. A Lorentz-invariant four-vector Aα = (φ / c, A) replaces E and B and provides a frame-independent description (albeit less visceral than the E–B description). An alternative unification of descriptions is to think of the physical entity as the electromagnetic field tensor, as described later on. This tensor contains both E and B fields as components, and has the same form in all frames of reference.
Background
Electromagnetic fields are not directly observable. The existence of classical electromagnetic fields can be inferred from the motion of charged particles, whose trajectories are observable. Electromagnetic fields do explain the observed motions of classical charged particles.
A strong requirement in physics is that all observers of the motion of a particle agree on the trajectory of the particle. For instance, if one observer notes that a particle collides with the center of a bullseye, then all observers must reach the same conclusion. This requirement places constraints on the nature of electromagnetic fields and on their transformation from one reference frame to another. It also places constraints on the manner in which fields affect the acceleration and, hence, the trajectories of charged particles.
Perhaps the simplest example, and one that Einstein referenced in his 1905 paper introducing special relativity, is the problem of a conductor moving in the field of a magnet. In the frame of the magnet, a conductor experiences a magnetic force. In the frame of a conductor moving relative to the magnet, the conductor experiences a force due to an electric field. The magnetic field in the magnet frame and the electric field in the conductor frame must generate consistent results in the conductor. At the time of Einstein in 1905, the field equations as represented by Maxwell's equations were properly consistent. Newton's law of motion, however, had to be modified to provide consistent particle trajectories.
Transformation of fields, assuming Galilean transformations
Assuming that the magnet frame and the conductor frame are related by a Galilean transformation, it is straightforward to compute the fields and forces in both frames. This will demonstrate that the induced current is indeed the same in both frames. As a byproduct, this argument will also yield a general formula for the electric and magnetic fields in one frame in terms of the fields in another frame.
In reality, the frames are not related by a Galilean transformation, but by a Lorentz transformation. Nevertheless, it will be a Galilean transformation to a very good approximation, at velocities much less than the speed of light.
Unprimed quantities correspond to the rest frame of the magnet, while primed quantities correspond to the rest frame of the conductor. Let v be the velocity of the conductor, as seen from the magnet frame.
Magnet frame
In the rest frame of the magnet, the magnetic field is some fixed field B(r), determined by the structure and shape of the magnet. The electric field is zero.
In general, the force exerted upon a particle of charge q in the conductor by the electric field and magnetic field is given by (SI units):
F = q(E + v × B)
where q is the charge on the particle, v is the particle velocity and F is the Lorentz force. Here, however, the electric field is zero, so the force on the particle is
F = q v × B
Conductor frame
In the conductor frame, there is a time-varying magnetic field B′ related to the magnetic field B in the magnet frame according to:
B′(r′, t) = B(r)
where
r′ = r − v t
In this frame, there is an electric field, and its curl is given by the Maxwell-Faraday equation:
∇ × E′ = −∂B′/∂t
This yields:
E′ = v × B′
To make this explicable: if a conductor moves through a B-field with a gradient ∂B/∂z along the z-axis with constant velocity v, it follows that in the frame of the conductor ∂B′/∂t = v ∂B/∂z. It can be seen that this is consistent with E′ = v × B′ by taking the curl of that expression and substituting it into the Maxwell-Faraday equation above, using that ∇ · B′ = 0 and that v is constant. Even in the limit of infinitesimally small gradients these relations hold, and therefore the Lorentz force equation is also valid if the magnetic field in the conductor frame is not varying in time. At relativistic velocities a correction factor is needed; see below and Classical electromagnetism and special relativity and Lorentz transformation.
A charge q in the conductor will be at rest in the conductor frame. Therefore, the magnetic force term of the Lorentz force has no effect, and the force on the charge is given by
F′ = q E′ = q v × B
This demonstrates that the force is the same in both frames (as would be expected), and therefore any observable consequences of this force, such as the induced current, would also be the same in both frames. This is despite the fact that the force is seen to be an electric force in the conductor frame, but a magnetic force in the magnet's frame.
Galilean transformation formula for fields
A similar sort of argument can be made if the magnet's frame also contains electric fields. (The Ampere-Maxwell equation also comes into play, explaining how, in the conductor's frame, this moving electric field will contribute to the magnetic field.) The result is that, in general,
E′ = E + v × B
B′ = B − (v × E) / c²
with c the speed of light in free space.
By plugging these transformation rules into the full Maxwell's equations, it can be seen that if Maxwell's equations are true in one frame, then they are almost true in the other, but contain incorrect terms proportional to the quantity v/c raised to the second or higher power. Accordingly, these are not the exact transformation rules, but are a close approximation at low velocities. At large velocities approaching the speed of light, the Galilean transformation must be replaced by the Lorentz transformation, and the field transformation equations also must be changed, according to the expressions given below.
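As a quick numerical check of these approximate rules, the sketch below picks an arbitrary low velocity and field configuration (all numbers are illustrative assumptions), applies E′ = E + v × B and B′ = B − (v × E)/c², and confirms that the magnetic force q v × B computed in the magnet frame equals the electric force q E′ computed in the conductor frame.
import numpy as np

c = 299_792_458.0               # speed of light, m/s
q = 1.602176634e-19             # charge of the particle, C
v = np.array([1.0e3, 0.0, 0.0]) # conductor velocity (1 km/s), well below c
E = np.array([0.0, 0.0, 0.0])   # no electric field in the magnet frame
B = np.array([0.0, 0.0, 0.5])   # 0.5 T magnetic field along z in the magnet frame

# Galilean (low-velocity) transformation of the fields into the conductor frame
E_prime = E + np.cross(v, B)
B_prime = B - np.cross(v, E) / c**2

F_magnet_frame = q * (E + np.cross(v, B))  # charge moves with velocity v in this frame
F_conductor_frame = q * E_prime            # charge is at rest in this frame

print(F_magnet_frame, F_conductor_frame)   # identical vectors, pointing in the negative y-direction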
Transformation of fields as predicted by Maxwell's equations
In a frame moving at velocity v, when there is no E-field in the stationary magnet frame, Maxwell's equations predict that the fields transform so that the E-field in the moving frame is:

E′ = γ v × B,

where

γ = 1/√(1 − v²/c²)

is called the Lorentz factor and c is the speed of light in free space. This result is a consequence of requiring that observers in all inertial frames arrive at the same form for Maxwell's equations. In particular, all observers must see the same speed of light c. That requirement leads to the Lorentz transformation for space and time. Assuming a Lorentz transformation, invariance of Maxwell's equations then leads to the above transformation of the fields for this example.
Consequently, the force on the charge in the conductor frame is

F′ = q E′ = q γ v × B.

This expression differs by the factor γ from the nonrelativistic expression obtained above using Newton's law of motion. Special relativity modifies space and time in a manner such that the forces and fields transform consistently.
Modification of dynamics for consistency with Maxwell's equations
The Lorentz force has the same form in both frames, though the fields differ, namely:

F = q(E + v × B),   F′ = q(E′ + v′ × B′).
See Figure 1. To simplify, let the magnetic field point in the z-direction and vary with location x, and let the conductor translate in the positive x-direction with velocity v. Consequently, in the magnet frame where the conductor is moving, the Lorentz force points in the negative y-direction, perpendicular to both the velocity and the B-field. The force on a charge, here due only to the B-field, is

F_y = −q v B,

while in the conductor frame where the magnet is moving, the force is also in the negative y-direction, and now due only to the E-field, with a value:

F′_y = q E′_y = −q γ v B.
The two forces differ by the Lorentz factor γ. This difference is expected in a relativistic theory, however, due to the change in space-time between frames, as discussed next.
Relativity takes the Lorentz transformation of space-time suggested by invariance of Maxwell's equations and imposes it upon dynamics as well (a revision of Newton's laws of motion). In this example, the Lorentz transformation affects the x-direction only (the relative motion of the two frames is along the x-direction). The relations connecting time and space are (primes denote the moving conductor frame):

x′ = γ(x − v t),   y′ = y,   z′ = z,   t′ = γ(t − v x / c²).
These transformations lead to a change in the y-component of a force:

F′_y = γ F_y.

That is, within Lorentz invariance, force is not the same in all frames of reference, unlike Galilean invariance. But, from the earlier analysis based upon the Lorentz force law:

F_y = −q v B,   F′_y = −γ q v B,

which agrees completely. So the force on the charge is not the same in both frames, but it transforms as expected according to relativity.
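A minimal numerical check of this consistency is sketched below; the charge, field strength and speed are arbitrary, and the speed is taken to be a substantial fraction of c purely so that the Lorentz factor is visibly different from 1.

```python
import math

c = 299_792_458.0
q = 1.0e-6                 # test charge, C (illustrative)
v = 0.6 * c                # conductor speed along x, chosen large so gamma is noticeable
B = 0.5                    # magnet-frame B field along z, tesla

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

F_y_magnet = -q * v * B            # magnet frame: purely magnetic force on the moving charge
E_y_conductor = -gamma * v * B     # conductor frame: purely electric field
F_y_conductor = q * E_y_conductor  # force on the charge at rest in the conductor frame

# Relativistic force transformation for a particle at rest in the primed frame: F_y = F'_y / gamma
assert math.isclose(F_y_magnet, F_y_conductor / gamma)
print(f"gamma = {gamma:.3f}, F_y = {F_y_magnet:.3e} N, F'_y = {F_y_conductor:.3e} N")
```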
| Physical sciences | Theory of relativity | Physics |
11965603 | https://en.wikipedia.org/wiki/Stellar%20rotation | Stellar rotation | Stellar rotation is the angular motion of a star about its axis. The rate of rotation can be measured from the spectrum of the star, or by timing the movements of active features on the surface.
The rotation of a star produces an equatorial bulge due to centrifugal force. As stars are not solid bodies, they can also undergo differential rotation. Thus the equator of the star can rotate at a different angular velocity than the higher latitudes. These differences in the rate of rotation within a star may have a significant role in the generation of a stellar magnetic field.
In its turn, the magnetic field of a star interacts with the stellar wind. As the wind moves away from the star its angular speed decreases. The magnetic field of the star interacts with the wind, which applies a drag to the stellar rotation. As a result, angular momentum is transferred from the star to the wind, and over time this gradually slows the star's rate of rotation.
Measurement
Unless a star is being observed from the direction of its pole, sections of the surface have some amount of movement toward or away from the observer. The component of movement that is in the direction of the observer is called the radial velocity. For the portion of the surface with a radial velocity component toward the observer, the radiation is shifted to a higher frequency because of Doppler shift. Likewise the region that has a component moving away from the observer is shifted to a lower frequency. When the absorption lines of a star are observed, this shift at each end of the spectrum causes the line to broaden. However, this broadening must be carefully separated from other effects that can increase the line width.
The component of the radial velocity observed through line broadening depends on the inclination of the star's pole to the line of sight. The derived value is given as v_e sin i, where v_e is the rotational velocity at the equator and i is the inclination. However, i is not always known, so the result gives a minimum value for the star's rotational velocity. That is, if i is not a right angle, then the actual velocity is greater than v_e sin i. This is sometimes referred to as the projected rotational velocity. In fast-rotating stars, polarimetry offers a method of recovering the actual equatorial velocity rather than just the projected rotational velocity; this technique has so far been applied only to Regulus.
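As an illustration of how the projected rotational velocity follows from the Doppler broadening described above, the short sketch below converts a line's rotational half-width into v_e sin i; the 500 nm line and 0.05 nm broadening are invented values for the example.

```python
# The Doppler shift of light from a surface element moving at radial speed u is
# d_lambda ~= lambda_0 * u / c, so the half-width of a rotationally broadened
# line gives the projected equatorial velocity v_e * sin(i).

C_KM_S = 299_792.458      # speed of light, km/s

def v_sin_i(lambda0_nm: float, half_width_nm: float) -> float:
    """Projected rotational velocity (km/s) from the rotational half-width of a line."""
    return C_KM_S * half_width_nm / lambda0_nm

# Hypothetical example: a line at 500 nm broadened by +/- 0.05 nm
print(round(v_sin_i(500.0, 0.05), 1))    # ~30 km/s, a modest rotator
```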
For giant stars, atmospheric microturbulence can result in line broadening that is much larger than the effects of rotation, effectively drowning out the signal. However, an alternative approach can be employed that makes use of gravitational microlensing events. These occur when a massive object passes in front of the more distant star and functions like a lens, briefly magnifying the image. The more detailed information gathered by this means allows the effects of microturbulence to be distinguished from rotation.
If a star displays magnetic surface activity such as starspots, then these features can be tracked to estimate the rotation rate. However, such features can form at locations other than equator and can migrate across latitudes over the course of their life span, so differential rotation of a star can produce varying measurements. Stellar magnetic activity is often associated with rapid rotation, so this technique can be used for measurement of such stars. Observation of starspots has shown that these features can actually vary the rotation rate of a star, as the magnetic fields modify the flow of gases in the star.
Physical effects
Equatorial bulge
Gravity tends to contract celestial bodies into a perfect sphere, the shape where all the mass is as close to the center of gravity as possible. But a rotating star is not spherical in shape; it has an equatorial bulge.
As a rotating proto-stellar disk contracts to form a star its shape becomes more and more spherical, but the contraction doesn't proceed all the way to a perfect sphere. At the poles all of the gravity acts to increase the contraction, but at the equator the effective gravity is diminished by the centrifugal force. The final shape of the star after star formation is an equilibrium shape, in the sense that the effective gravity in the equatorial region (being diminished) cannot pull the star to a more spherical shape. The rotation also gives rise to gravity darkening at the equator, as described by the von Zeipel theorem.
An extreme example of an equatorial bulge is found on the star Regulus A (α Leonis A). The equator of this star has a measured rotational velocity of 317 ± 3 km/s, corresponding to a rotation period of 15.9 hours; this is 86% of the velocity at which the star would break apart. The equatorial radius of this star is 32% larger than its polar radius. Other rapidly rotating stars include Alpha Arae, Pleione, Vega and Achernar.
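The quoted velocity and period for Regulus A can be cross-checked with a simple circumference calculation; the sketch below recovers the equatorial radius implied by those two figures, which comes out to roughly four solar radii.

```python
import math

SOLAR_RADIUS_KM = 695_700.0

v_eq_km_s = 317.0             # measured equatorial velocity quoted above
period_s = 15.9 * 3600.0      # quoted rotation period

# one rotation covers the equatorial circumference: 2 * pi * R = v * P
r_eq_km = v_eq_km_s * period_s / (2.0 * math.pi)
print(round(r_eq_km / SOLAR_RADIUS_KM, 1))    # ~4.2 solar radii at the equator
```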
The break-up velocity of a star is an expression that is used to describe the case where the centrifugal force at the equator is equal to the gravitational force. For a star to be stable the rotational velocity must be below this value.
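A rough estimate of the break-up velocity comes from balancing the centrifugal and gravitational accelerations at the equator, v ≈ √(GM/R). The sketch below uses assumed, Regulus-like values for mass and equatorial radius; it is only an order-of-magnitude estimate, since a strongly oblate star requires a more careful treatment.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m

def breakup_velocity_km_s(mass_solar: float, r_eq_solar: float) -> float:
    """Equatorial speed at which centrifugal force balances gravity (spherical approximation)."""
    return math.sqrt(G * mass_solar * M_SUN / (r_eq_solar * R_SUN)) / 1000.0

# Assumed Regulus-like parameters: ~3.8 solar masses, ~4.2 solar-radius equator
print(round(breakup_velocity_km_s(3.8, 4.2)))   # ~415 km/s under these crude assumptions
```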
Differential rotation
Surface differential rotation is observed on stars such as the Sun when the angular velocity varies with latitude. Typically the angular velocity decreases with increasing latitude. However the reverse has also been observed, such as on the star designated HD 31993. The first such star, other than the Sun, to have its differential rotation mapped in detail is AB Doradus.
The underlying mechanism that causes differential rotation is turbulent convection inside a star. Convective motion carries energy toward the surface through the mass movement of plasma. This mass of plasma carries a portion of the angular velocity of the star. When turbulence occurs through shear and rotation, the angular momentum can become redistributed to different latitudes through meridional flow.
The interfaces between regions with sharp differences in rotation are believed to be efficient sites for the dynamo processes that generate the stellar magnetic field. There is also a complex interaction between a star's rotation distribution and its magnetic field, with the conversion of magnetic energy into kinetic energy modifying the velocity distribution.
Rotation braking
During formation
Stars are believed to form as the result of a collapse of a low-temperature cloud of gas and dust. As the cloud collapses, conservation of angular momentum causes any small net rotation of the cloud to increase, forcing the material into a rotating disk. At the dense center of this disk a protostar forms, which gains heat from the gravitational energy of the collapse.
As the collapse continues, the rotation rate can increase to the point where the accreting protostar can break up due to centrifugal force at the equator. Thus the rotation rate must be braked during the first 100,000 years to avoid this scenario. One possible explanation for the braking is the interaction of the protostar's magnetic field with the stellar wind in magnetic braking. The expanding wind carries away the angular momentum and slows down the rotation rate of the collapsing protostar.
Most main-sequence stars with a spectral class between O5 and F5 have been found to rotate rapidly. For stars in this range, the measured rotation velocity increases with mass. This increase in rotation peaks among young, massive B-class stars. "As the expected life span of a star decreases with increasing mass, this can be explained as a decline in rotational velocity with age."
After formation
For main-sequence stars, the decline in rotation can be approximated by a mathematical relation:

Ω_e ∝ t^(−1/2),

where Ω_e is the angular velocity at the equator and t is the star's age. This relation is named Skumanich's law after Andrew P. Skumanich who discovered it in 1972.
Gyrochronology is the determination of a star's age based on the rotation rate, calibrated using the Sun.
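A toy gyrochronology-style estimate under the Skumanich scaling (rotation period growing as the square root of age, calibrated to the Sun) is sketched below; the 12-day example period is invented, and real calibrations also depend on stellar mass and colour.

```python
SUN_AGE_GYR = 4.6            # approximate age of the Sun
SUN_PERIOD_DAYS = 25.4       # approximate solar equatorial rotation period

def skumanich_age_gyr(period_days: float) -> float:
    """Crude age estimate assuming angular velocity ~ t**-0.5, i.e. period ~ sqrt(t)."""
    return SUN_AGE_GYR * (period_days / SUN_PERIOD_DAYS) ** 2

# Hypothetical solar-type star rotating once every 12 days
print(round(skumanich_age_gyr(12.0), 1))    # ~1.0 Gyr
```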
Stars slowly lose mass by the emission of a stellar wind from the photosphere. The star's magnetic field exerts a torque on the ejected matter, resulting in a steady transfer of angular momentum away from the star. Stars with a rate of rotation greater than 15 km/s also exhibit more rapid mass loss, and consequently a faster rate of rotation decay. Thus as the rotation of a star is slowed because of braking, there is a decrease in rate of loss of angular momentum. Under these conditions, stars gradually approach, but never quite reach, a condition of zero rotation.
At the end of the main sequence
Ultracool dwarfs and brown dwarfs experience faster rotation as they age, due to gravitational contraction. These objects also have magnetic fields similar to the coolest stars. However, the discovery of rapidly rotating brown dwarfs such as the T6 brown dwarf WISEPC J112254.73+255021.5 lends support to theoretical models that show that rotational braking by stellar winds is over 1000 times less effective at the end of the main sequence.
Close binary systems
A close binary star system occurs when two stars orbit each other with an average separation that is of the same order of magnitude as their diameters. At these distances, more complex interactions can occur, such as tidal effects, transfer of mass and even collisions. Tidal interactions in a close binary system can result in modification of the orbital and rotational parameters. The total angular momentum of the system is conserved, but the angular momentum can be transferred between the orbital periods and the rotation rates.
Each of the members of a close binary system raises tides on the other through gravitational interaction. However the bulges can be slightly misaligned with respect to the direction of gravitational attraction. Thus the force of gravity produces a torque component on the bulge, resulting in the transfer of angular momentum (tidal acceleration). This causes the system to steadily evolve, although it can approach a stable equilibrium. The effect can be more complex in cases where the axis of rotation is not perpendicular to the orbital plane.
For contact or semi-detached binaries, the transfer of mass from a star to its companion can also result in a significant transfer of angular momentum. The accreting companion can spin up to the point where it reaches its critical rotation rate and begins losing mass along the equator.
Degenerate stars
After a star has finished generating energy through thermonuclear fusion, it evolves into a more compact, degenerate state. During this process the dimensions of the star are significantly reduced, which can result in a corresponding increase in angular velocity.
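The spin-up follows from conservation of angular momentum: if the moment of inertia scales roughly as MR², the angular velocity grows as 1/R² when the radius shrinks. The sketch below uses idealised, assumed radii and ignores mass and angular-momentum losses during the collapse.

```python
def spin_up_factor(r_initial_km: float, r_final_km: float) -> float:
    """Angular-velocity increase for a uniform sphere conserving angular momentum (I ~ M R^2)."""
    return (r_initial_km / r_final_km) ** 2

# Toy example: a 7,000 km white-dwarf-sized core collapsing to a 12 km neutron star
print(f"{spin_up_factor(7_000.0, 12.0):,.0f}x faster rotation")   # ~340,000x
```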
White dwarf
A white dwarf is a star that consists of material that is the by-product of thermonuclear fusion during the earlier part of its life, but lacks the mass to burn those more massive elements. It is a compact body that is supported by a quantum mechanical effect known as electron degeneracy pressure that will not allow the star to collapse any further. Generally most white dwarfs have a low rate of rotation, most likely as the result of rotational braking or by shedding angular momentum when the progenitor star lost its outer envelope. (See planetary nebula.)
A slow-rotating white dwarf star can not exceed the Chandrasekhar limit of 1.44 solar masses without collapsing to form a neutron star or exploding as a Type Ia supernova. Once the white dwarf reaches this mass, such as by accretion or collision, the gravitational force would exceed the pressure exerted by the electrons. If the white dwarf is rotating rapidly, however, the effective gravity is diminished in the equatorial region, thus allowing the white dwarf to exceed the Chandrasekhar limit. Such rapid rotation can occur, for example, as a result of mass accretion that results in a transfer of angular momentum.
Neutron star
A neutron star is a highly dense remnant of a star that is primarily composed of neutrons—a particle that is found in most atomic nuclei and has no net electrical charge. The mass of a neutron star is in the range of 1.2 to 2.1 times the mass of the Sun. As a result of the collapse, a newly formed neutron star can have a very rapid rate of rotation; on the order of a hundred rotations per second.
Pulsars are rotating neutron stars that have a magnetic field. A narrow beam of electromagnetic radiation is emitted from the poles of rotating pulsars. If the beam sweeps past the direction of the Solar System then the pulsar will produce a periodic pulse that can be detected from the Earth. The energy radiated by the magnetic field gradually slows down the rotation rate, so that older pulsars can require as long as several seconds between each pulse.
Black hole
A black hole is an object with a gravitational field that is sufficiently powerful that it can prevent light from escaping. When they are formed from the collapse of a rotating mass, they retain all of the angular momentum that is not shed in the form of ejected gas. This rotation causes the space within an oblate spheroid-shaped volume, called the "ergosphere", to be dragged around with the black hole. Mass falling into this volume gains energy by this process and some portion of the mass can then be ejected without falling into the black hole. When the mass is ejected, the black hole loses angular momentum (the "Penrose process").
| Physical sciences | Stellar astronomy | Astronomy |
11968870 | https://en.wikipedia.org/wiki/Port%20of%20Kobe | Port of Kobe | The Port of Kobe is a Japanese maritime port in Kobe, Hyōgo in the Keihanshin area, backgrounded by the Hanshin Industrial Region.
Located at the foot of the Mount Rokkō range, where flat land is limited, the port has been expanded through the construction of artificial islands, including Port Island, Rokkō Island and the island of Kobe Airport.
History
In the 12th century, Taira no Kiyomori renovated the existing anchorage of Ōwada no Tomari and moved the seat of government to Fukuhara-kyō, the short-lived capital neighbouring the port.
Throughout the medieval era, the port was known as Hyōgo-tsu.
In 1858 the Treaty of Amity and Commerce opened the Hyōgo Port to foreigners.
In 1865, the Hyōgo Port Opening Demand Incident occurred, in which nine warships from Britain, France, the Netherlands, and the United States invaded the Hyōgo Port demanding its opening.
In 1868, a new port of Kobe was built east of the Hyōgo Port and opened.
After World War II the piers were occupied by the Allied Forces, and later by United States Forces Japan; the last one was returned in 1973.
In the 1970s the port boasted it handled the most containers in the world. It was the world's busiest container port from 1973 to 1978.
The 1995 Great Hanshin earthquake diminished much of the port city's prominence when it destroyed and halted much of the facilities and services there, causing approximately ten trillion yen or $102.5 billion in damage, 2.5% of Japan's GDP at the time. Most of the losses were uninsured, as only 3% of property in the Kobe area was covered by earthquake insurance, compared to 16% in Tokyo. Kobe was one of the world's busiest ports prior to the earthquake, but despite the repair and rebuilding, it has never regained its former status as Japan's principal shipping port. It remains Japan's fourth busiest container port.
Facilities
Container berths: 34
Area: 3.89 km²
Max draft: 18 m
Amusement facility for public
Meriken Park
Kobe Port Tower
Harborland
Passenger services
Busan, South Korea: twice a week
Shanghai, China: once a week
Tianjin, China: once a week
Cruise port
Kobe is also a home port for certain cruise ships. Cruise lines that call at the port include Holland America Line and Princess Cruises. In the summer of 2014 Princess expanded its market in Kobe when its ship began sailing eight-day roundtrip Asia cruises from the port. These cruises on the Sun Princess are part of Princess Cruises' $11 billion of contributions to the country of Japan as a whole; the ship also sails from Otaru, Hokkaido, and is currently based in Yokohama, near Tokyo.
Sister ports
Rotterdam port, Netherlands - 1967
Seattle port, United States - 1967
Tianjin port, China - 1980
Kolkata port, India - 1951
Vancouver port, Canada - 1991
| Technology | Specific piers and ports | null |
7331618 | https://en.wikipedia.org/wiki/Polycotylidae | Polycotylidae | Polycotylidae is a family of plesiosaurs from the Cretaceous, a sister group to Leptocleididae. They are known as false pliosaurs. Polycotylids first appeared during the Albian stage of the Early Cretaceous, before becoming abundant and widespread during the early Late Cretaceous. Several species survived into the final stage of the Cretaceous, the early Maastrichtian around . The possible latest surviving member Rarosaurus from the late Maastrichtian is more likely a crocodylomorph.
With their short necks and large elongated heads, they resemble the pliosaurs, but closer phylogenetic studies indicate that they share many common features with the Leptocleididae and Elasmosauridae. They have been found worldwide, with specimens reported from New Zealand, Australia, Japan, Morocco, the US, Canada, Eastern Europe, and South America.
Phylogeny
Cladogram after Albright, Gillette and Titus (2007).
Cladogram after Ketchum and Benson (2010).
Below is a cladogram of polycotylid relationships from Ketchum & Benson, 2011.
| Biology and health sciences | Prehistoric marine reptiles | Animals |
7333849 | https://en.wikipedia.org/wiki/Bratsk%20Hydroelectric%20Power%20Station | Bratsk Hydroelectric Power Station | The Bratsk Hydroelectric Power Station (also referred to as The 50 years of Great October Dam) is a concrete gravity dam on the Angara River and adjacent hydroelectric power station. It is the second level of the Angara River hydroelectric station cascade in Irkutsk Oblast, Russia. From its commissioning in 1966, the station was the world's single biggest power producer until Krasnoyarsk Hydroelectric Power Station reached 5,000 MW (at 10 turbines) in 1971. Annually the station produces 22.6 TWh. Currently, the Bratsk Power Station operates 18 hydro-turbines, each with capacity of 250 MW, produced by the Leningrad Metal Works ("LMZ") in the 1960s.
Design and specifications
Dam
Components:
concrete gravity dam 924 m long and 124.5 m high at its maximum (powerhouse section 515 m long, spillway section 242 m long, non-overflow section 167 m)
powerhouse adjoining the dam, 516 m long
riverbank concrete walls, 506 m long in total
right-bank earth-fill embankment 2,987 m long, left-bank 723 m long.
On the top of the dam are the track of the Taishet-Lena railway line and a vehicle road.
There are no navigational channels, because the Angara has no through ship routes. Nevertheless, the construction project includes the possibility to build a ship elevator.
Bratsk dam is often referred to as the second largest in the world by reservoir storage capacity.
Power plant
The Turbine Hall contains 18 Francis hydroturbine units, ca. 250 MW each, with 106 m of operating head. The dam, about 5,140 m long in total including its earth-fill and riverbank sections, impounds the Bratsk Reservoir. With a 4,500 MW capacity, and 22.6 TWh of annual output, it is Russia's second largest single producer of hydroelectricity. Output is distributed into five 500 kV power lines and twenty 220 kV lines.
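The plant figures quoted above can be tied together with the standard hydropower relation P = ηρgQH. The sketch below infers the approximate flow through one 250 MW unit at the stated 106 m head (the 90% turbine efficiency is an assumption) and the capacity factor implied by 22.6 TWh per year from 4,500 MW of installed capacity.

```python
RHO = 1000.0     # density of water, kg/m^3
G = 9.81         # gravitational acceleration, m/s^2

def flow_for_power(power_w: float, head_m: float, efficiency: float = 0.9) -> float:
    """Volumetric flow (m^3/s) needed for a given output, from P = eta * rho * g * Q * H."""
    return power_w / (efficiency * RHO * G * head_m)

# One Bratsk unit: 250 MW at 106 m head, assuming ~90% overall efficiency
print(round(flow_for_power(250e6, 106.0)))          # roughly 270 m^3/s per turbine

# Capacity factor implied by 22.6 TWh/year from 4,500 MW installed
capacity_factor = 22.6e12 / (4500e6 * 8760)
print(round(capacity_factor, 2))                    # ~0.57
```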
The plant was designed by the Moscow-based Hydroproject institute, and is operated by the joint-stock company Irkutskenergo, although all the buildings themselves belong to Russia's federal government. A reconstruction project includes increasing the output towards 5,000 MW. At present, Irkutskenergo together with JSC Silovii Mashini is modernizing the aging turbines.
Economics
The plant powers hundreds of factories.
It became a part of the Bratsk territorial-production complex.
About 75% of the output is consumed by the Bratsk Aluminium Plant.
History
The plan to build the hydroelectric plant was approved in September 1954 and later that year the first workers and machines arrived at Bratsk. On December 21, 1954, preparation works were initiated by the Nizhneangargasstroy department, later renamed to Bratskgasstroy. Concurrently, the city of Bratsk was founded. On December 12, 1955, Bratsk was officially converted from a workers settlement into a city by the decree of the Presidium of the Supreme Soviet of the RSFSR.
Construction was declared as the Komsomol's high-tempo priority goal and was in the center of public attention. Eventually, a lot of the workers were awarded state prizes and the plant became a symbol of the industrial development of Siberia.
From July 1955 to October 1957 the 220 kV power line to Irkutsk was constructed. On November 6, 1957, the Bratsk substation received the first current from the newly constructed plant and later that year this current was transmitted to Irkutsk for the first time via the newly created power line. In 1961 the second 500 kV power line was added.
On July 18, 1961, the Bratsk Reservoir started filling (level raised up to 100 m so that it became the largest artificial lake of that time). First stationary 225 MW generator (No. 18) became operational on November 28, 1961, at 10:15 local time. After 7 days on December 5 the second unit No. 17 started to operate and on December 12, 1963, units No. 16 and No. 15 were included into the Unified Energy System of Siberia. On May 9, 1964, operators began to control the plant as the central control post was put into service. On September 30, 1964, the last cubic meter of concrete was poured into the dam wall.
Construction of a railway track over the dam began on March 3, 1965, and it started to operate on June 16. A vehicular road opened on July 28.
On December 14, 1966, the last unit, No. 1 was operational and on September 8, 1967, the State Commission accepted the inclusion of Bratsk into constant use.
| Technology | Dams | null |
1586531 | https://en.wikipedia.org/wiki/Quadrans%20Muralis | Quadrans Muralis | Quadrans Muralis (Latin for mural quadrant) was a constellation created by the French astronomer Jérôme Lalande in 1795. It depicted a wall-mounted quadrant with which he and his nephew Michel Lefrançois de Lalande had charted the celestial sphere, and was named Le Mural in the French atlas. It was between the constellations of Boötes and Draco, near the tail of Ursa Major, containing stars between β Bootis (Nekkar) and η Ursae Majoris (Alkaid).
Johann Elert Bode converted its name to Latin as Quadrans Muralis and shrank the constellation a little in his 1801 Uranographia star atlas, to avoid it clashing with neighboring constellations.
In 1922, Quadrans Muralis was omitted when the International Astronomical Union (IAU) formalised its list of officially recognized constellations.
Notable features
The variable star BP Boötis was a member of the constellation.
39 Boötis is a double star that was transferred by Lalande into Quadrans.
The Quadrantid meteor shower is still named after the obsolete constellation.
| Physical sciences | Asterism | Astronomy |
1586721 | https://en.wikipedia.org/wiki/ABO%20blood%20group%20system | ABO blood group system | The ABO blood group system is used to denote the presence of one, both, or neither of the A and B antigens on erythrocytes (red blood cells). For human blood transfusions, it is the most important of the 44 different blood type (or group) classification systems currently recognized by the International Society of Blood Transfusion (ISBT) as of
December 2022. A mismatch in this serotype (or in various others) can cause a potentially fatal adverse reaction after a transfusion, or an unwanted immune response to an organ transplant. Such mismatches are rare in modern medicine. The associated anti-A and anti-B antibodies are usually IgM antibodies, produced in the first years of life by sensitization to environmental substances such as food, bacteria, and viruses.
The ABO blood types were discovered by Karl Landsteiner in 1901; he received the Nobel Prize in Physiology or Medicine in 1930 for this discovery. ABO blood types are also present in other primates such as apes and Old World monkeys.
History
Discovery
The ABO blood types were first discovered by an Austrian physician, Karl Landsteiner, working at the Pathological-Anatomical Institute of the University of Vienna (now Medical University of Vienna). In 1900, he found that red blood cells would clump together (agglutinate) when mixed in test tubes with sera from different persons, and that some human blood also agglutinated with animal blood. He wrote a two-sentence footnote:
This was the first evidence that blood variations exist in humans — it was believed that all humans have similar blood. The next year, in 1901, he made a definitive observation that blood serum of an individual would agglutinate with only those of certain individuals. Based on this he classified human blood into three groups, namely group A, group B, and group C. He defined that group A blood agglutinates with group B, but never with its own type. Similarly, group B blood agglutinates with group A. Group C blood is different in that it agglutinates with both A and B.
This was the discovery of blood groups for which Landsteiner was awarded the Nobel Prize in Physiology or Medicine in 1930. In his paper, he referred to the specific blood group interactions as isoagglutination, and also introduced the concept of agglutinins (antibodies), which is the actual basis of antigen-antibody reaction in the ABO system. He asserted:
Thus, he discovered two antigens (agglutinogens A and B) and two antibodies (agglutinins — anti-A and anti-B). His third group (C) indicated absence of both A and B antigens, but contains anti-A and anti-B. The following year, his students Adriano Sturli and Alfred von Decastello discovered the fourth type (but not naming it, and simply referred to it as "no particular type").
In 1910, Ludwik Hirszfeld and Emil Freiherr von Dungern introduced the term 0 (null) for the group Landsteiner designated as C, and AB for the type discovered by Adriano Sturli and Alfred von Decastello (https://www.rockefeller.edu/our-scientists/karl-landsteiner/2554-nobel-prize/). They were also the first to explain the genetic inheritance of the blood groups.
Classification systems
Czech serologist Jan Janský independently introduced blood type classification in 1907 in a local journal. He used the Roman numerals I, II, III, and IV (corresponding to modern O, A, B, and AB). Unknown to Janský, an American physician William L. Moss devised a slightly different classification using the same numerals; his I, II, III, and IV correspond to modern AB, A, B, and O.
These two systems created confusion and potential danger in medical practice. Moss's system was adopted in Britain, France, and the US, while Janský's was preferred in most European countries and some parts of the US. To resolve the chaos, the American Association of Immunologists, the Society of American Bacteriologists, and the Association of Pathologists and Bacteriologists made a joint recommendation in 1921 that the Janský classification be adopted based on priority. But it was not followed, particularly where Moss's system had been used.
In 1927, Landsteiner had moved to the Rockefeller Institute for Medical Research in New York. As a member of a committee of the National Research Council concerned with blood grouping, he suggested to substitute Janský's and Moss's systems with the letters O, A, B, and AB. (There was another confusion on the use of figure 0 for German null as introduced by Hirszfeld and von Dungern, because others used the letter O for ohne, meaning without or zero; Landsteiner chose the latter.) This classification was adopted by the National Research Council and became variously known as the National Research Council classification, the International classification, and most popularly the "new" Landsteiner classification. The new system was gradually accepted and by the early 1950s, it was universally followed.
Other developments
The first practical use of blood typing in transfusion was by an American physician Reuben Ottenberg in 1907. Large-scale application began during the First World War (1914–1915) when citric acid began to be used for blood clot prevention. Felix Bernstein demonstrated the correct blood group inheritance pattern of multiple alleles at one locus in 1924. Watkins and Morgan, in England, discovered that the ABO epitopes were conferred by sugars, to be specific, N-acetylgalactosamine for the A-type and galactose for the B-type. After much published literature claiming that the ABH substances were all attached to glycosphingolipids, Finne et al. (1978) found that the human erythrocyte glycoproteins contain polylactosamine chains that contains ABH substances attached and represent the majority of the antigens. The main glycoproteins carrying the ABH antigens were identified to be the Band 3 and Band 4.5 proteins and glycophorin. Later, Yamamoto's group showed the precise glycosyl transferase set that confers the A, B and O epitopes.
Genetics
Blood groups are inherited from both parents. The ABO blood type is controlled by a single gene (the ABO gene) with three types of alleles inferred from classical genetics: i, IA, and IB. The I designation stands for isoagglutinogen, another term for antigen. The gene encodes a glycosyltransferase—that is, an enzyme that modifies the carbohydrate content of the red blood cell antigens. The gene is located on the long arm of the ninth chromosome (9q34).
The IA allele gives type A, IB gives type B, and i gives type O. As both IA and IB are dominant over i, only ii people have type O blood. Individuals with IAIA or IAi have type A blood, and individuals with IBIB or IBi have type B. IAIB people have both phenotypes, because A and B express a special dominance relationship: codominance, which means that type A and B parents can have an AB child. A couple with type A and type B can also have a type O child if they are both heterozygous (IBi and IAi). The cis-AB phenotype has a single enzyme that creates both A and B antigens. The resulting red blood cells do not usually express A or B antigen at the same level that would be expected on common group A1 or B red blood cells, which can help solve the problem of an apparently genetically impossible blood group.
Individuals with the rare Bombay phenotype (hh) produce antibodies against the A, B, and O groups and can only receive transfusions from other hh individuals. The table above summarizes the various blood groups that children may inherit from their parents. Genotypes are shown in the second column and in small print for the offspring: AO and AA both test as type A; BO and BB test as type B. The four possibilities represent the combinations obtained when one allele is taken from each parent; each has a 25% chance, but some occur more than once. The text above them summarizes the outcomes.
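A minimal sketch of the inheritance logic described above, written in Python for illustration: one allele is drawn from each parent, IA and IB are codominant, and i is recessive. The heterozygous A × B cross reproduces the text's point that such a couple can have a type O child.

```python
from itertools import product
from collections import Counter

def phenotype(allele1: str, allele2: str) -> str:
    """ABO phenotype from a genotype: IA and IB are codominant, i is recessive."""
    alleles = {allele1, allele2}
    if "IA" in alleles and "IB" in alleles:
        return "AB"
    if "IA" in alleles:
        return "A"
    if "IB" in alleles:
        return "B"
    return "O"

def offspring_types(parent1: tuple, parent2: tuple) -> Counter:
    """Phenotype probabilities for a child, one allele inherited from each parent."""
    counts = Counter()
    for a1, a2 in product(parent1, parent2):
        counts[phenotype(a1, a2)] += 0.25
    return counts

# Heterozygous type A (IA i) crossed with heterozygous type B (IB i)
print(offspring_types(("IA", "i"), ("IB", "i")))
# Counter({'AB': 0.25, 'A': 0.25, 'B': 0.25, 'O': 0.25}) -- a type O child is possible
```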
Historically, ABO blood tests were used in paternity testing, but in 1957 only 50% of American men falsely accused were able to use them as evidence against paternity. Occasionally, the blood types of children are not consistent with expectations—for example, a type O child can be born to an AB parent—due to rare situations, such as Bombay phenotype and cis AB.
Subgroups
The A blood type contains about 20 subgroups, of which A1 and A2 are the most common (over 99%). A1 makes up about 80% of all A-type blood, with A2 making up almost all of the rest. These two subgroups are not always interchangeable as far as transfusion is concerned, as some A2 individuals produce antibodies against the A1 antigen. Complications can sometimes arise in rare cases when typing the blood.
With the development of DNA sequencing, it has been possible to identify a much larger number of alleles at the ABO locus, each of which can be categorized as A, B, or O in terms of the reaction to transfusion, but which can be distinguished by variations in the DNA sequence. There are six common alleles of the ABO gene in white individuals that produce one's blood type:
The same study also identified 18 rare alleles, which generally have a weaker glycosylation activity. People with weak alleles of A can sometimes express anti-A antibodies, though these are usually not clinically significant as they do not stably interact with the antigen at body temperature.
Cis AB is another rare variant, in which A and B genes are transmitted together from a single parent.
Distribution and evolutionary history
The distribution of the blood groups A, B, O and AB varies across the world according to the population. There are also variations in blood type distribution within human subpopulations.
In the UK, the distribution of blood type frequencies through the population still shows some correlation to the distribution of placenames and to the successive invasions and migrations including Celts, Norsemen, Danes, Anglo-Saxons, and Normans who contributed the morphemes to the placenames and the genes to the population. The native Celts tended to have more type O blood, while the other populations tended to have more type A.
The two common O alleles, O01 and O02, share their first 261 nucleotides with the group A allele A01. However, unlike the group A allele, a guanosine base is subsequently deleted. A premature stop codon results from this frame-shift mutation. This variant is found worldwide, and likely predates human migration from Africa. The O01 allele is considered to predate the O02 allele.
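The consequence of such a single-base deletion can be illustrated with a toy reading-frame example; the sequence below is invented for illustration and is not the real ABO coding sequence. Deleting one guanosine shifts every downstream codon, and a premature stop codon appears in the new frame.

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def translate_codons(seq: str) -> list:
    """Split a coding sequence into codons, stopping at the first stop codon."""
    codons = []
    for i in range(0, len(seq) - 2, 3):
        codon = seq[i:i + 3]
        codons.append(codon)
        if codon in STOP_CODONS:
            break
    return codons

# Invented toy sequence (NOT the real ABO gene); its natural stop is the final TAA
normal = "ATGGCAGTAACCGGATAA"
mutant = normal[:6] + normal[7:]     # delete one G (the 7th base), shifting the frame

print(translate_codons(normal))   # ['ATG', 'GCA', 'GTA', 'ACC', 'GGA', 'TAA']
print(translate_codons(mutant))   # ['ATG', 'GCA', 'TAA'] -- premature stop after the frameshift
```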
Some evolutionary biologists theorize that there are four main lineages of the ABO gene and that mutations creating type O have occurred at least three times in humans. From oldest to youngest, these lineages comprise the following alleles: A101/A201/O09, B101, O02 and O01. The continued presence of the O alleles is hypothesized to be the result of balancing selection. Both theories contradict the previously held theory that type O blood evolved first.
Origin theories
It is possible that food and environmental antigens (bacterial, viral, or plant antigens) have epitopes similar enough to A and B glycoprotein antigens. The antibodies created against these environmental antigens in the first years of life can cross-react with ABO-incompatible red blood cells that it comes in contact with during blood transfusion later in life. Anti-A antibodies are hypothesized to originate from immune response towards influenza virus, whose epitopes are similar enough to the α-D-N-galactosamine on the A glycoprotein to be able to elicit a cross-reaction. Anti-B antibodies are hypothesized to originate from antibodies produced against Gram-negative bacteria, such as E. coli, cross-reacting with the α-D-galactose on the B glycoprotein.
However, it is more likely that the force driving evolution of allele diversity is simply negative frequency-dependent selection; cells with rare variants of membrane antigens are more easily distinguished by the immune system from pathogens carrying antigens from other hosts. Thus, individuals possessing rare types are better equipped to detect pathogens. The high within-population diversity observed in human populations would, then, be a consequence of natural selection on individuals.
Clinical relevance
The carbohydrate molecules on the surfaces of red blood cells have roles in cell membrane integrity, cell adhesion and membrane transport of molecules, and act as receptors for extracellular ligands and enzymes. ABO antigens have similar roles on epithelial cells as well as red blood cells.
Bleeding and thrombosis (von Willebrand factor)
The ABO antigen is also expressed on the von Willebrand factor (vWF) glycoprotein, which participates in hemostasis (control of bleeding). In fact, having type O blood predisposes to bleeding, as 30% of the total genetic variation observed in plasma vWF is explained by the effect of the ABO blood group, and individuals with group O blood normally have significantly lower plasma levels of vWF (and Factor VIII) than do non-O individuals. In addition, vWF is degraded more rapidly due to the higher prevalence of blood group O with the Cys1584 variant of vWF (an amino acid polymorphism in VWF): the gene for ADAMTS13 (vWF-cleaving protease) maps to human chromosome 9 band q34.2, the same locus as ABO blood type. Higher levels of vWF are more common amongst people who have had ischemic stroke (from blood clotting) for the first time. The results of this study found that the occurrence was not affected by ADAMTS13 polymorphism, and the only significant genetic factor was the person's blood group.
ABO(H) blood group antigens are also carried by other hemostatically relevant glycoproteins, such as platelet glycoprotein Ibα, which is a ligand for vWF on platelets. The significance of ABO(H) antigen expression on these other hemostatic glycoproteins is not fully defined, but may also be relevant for bleeding and thrombosis.
ABO hemolytic disease of the newborn
ABO blood group incompatibilities between the mother and child do not usually cause hemolytic disease of the newborn (HDN) because antibodies to the ABO blood groups are usually of the IgM type, which do not cross the placenta. However, in an O-type mother, IgG ABO antibodies are produced and the baby can potentially develop ABO hemolytic disease of the newborn.
Clinical applications
In human cells, the ABO alleles and their encoded glycosyltransferases have been described in several oncologic conditions. Using anti-GTA/GTB monoclonal antibodies, it was demonstrated that a loss of these enzymes was correlated to malignant bladder and oral epithelia. Furthermore, the expression of ABO blood group antigens in normal human tissues is dependent on the type of differentiation of the epithelium. In most human carcinomas, including oral carcinoma, a significant event as part of the underlying mechanism is decreased expression of the A and B antigens. Several studies have observed that a relative down-regulation of GTA and GTB occurs in oral carcinomas in association with tumor development. More recently, a genome-wide association study (GWAS) has identified variants in the ABO locus associated with susceptibility to pancreatic cancer.
In addition, another large GWAS study has associated ABO-histo blood groups as well as FUT2 secretor status with the presence in the intestinal microbiome of specific bacterial species. In this case the association was with Bacteroides and Faecalibacterium spp. Bacteroides of the same OTU (operational taxonomic unit) have been shown to be associated with inflammatory bowel disease, thus the study suggests an important role for the ABO histo-blood group antigens as candidates for direct modulation of the human microbiome in health and disease.
Clinical marker
A multi-locus genetic risk score study based on a combination of 27 loci, including the ABO gene, identified individuals at increased risk for both incident and recurrent coronary artery disease events, as well as an enhanced clinical benefit from statin therapy. The study was based on a community cohort study (the Malmo Diet and Cancer study) and four additional randomized controlled trials of primary prevention cohorts (JUPITER and ASCOT) and secondary prevention cohorts (CARE and PROVE IT-TIMI 22).
Alteration of ABO antigens for transfusion
In April 2007, an international team of researchers announced in the journal Nature Biotechnology an inexpensive and efficient way to convert types A, B, and AB blood into type O. This is done by using glycosidase enzymes from specific bacteria to strip the blood group antigens from red blood cells. The removal of A and B antigens still does not address the problem of the Rh blood group antigen on the blood cells of Rh positive individuals, and so blood from Rh negative donors must be used. The modified blood is named "enzyme converted to O" (ECO blood) but despite the early success of converting B- to O-type RBCs and clinical trials without adverse effects transfusing into A- and O-type patients, the technology has not yet become clinical practice.
Another approach to the blood antigen problem is the manufacture of artificial blood, which could act as a substitute in emergencies.
Pseudoscience
In Japan and other parts of East Asia, there is a popular belief in Blood type personality theory, which claims that blood types predict or influence personality. This claim is not scientifically based, and there is scientific consensus that no such link exists; the scientific community considers it a pseudoscience and a superstition.
The belief originated in the 1930s, when it was introduced as part of Japan's eugenics program. Its popularity faded following Japan's defeat in World War II, as Japanese support for eugenics faltered, but it was revived in the 1970s by a journalist named Masahiko Nomi. Despite its status as a pseudoscience, it remains widely popular throughout East Asia.
Other popular ideas are blood type-specific dietary needs, that group A causes severe hangovers, that group O is associated with better teeth, and that those with group A2 have the highest IQ scores. As with blood type personality theory, these and other popular ideas lack scientific evidence, and many are discredited or pseudoscientific.
| Biology and health sciences | Human anatomy | Health |
1588158 | https://en.wikipedia.org/wiki/Taenia%20solium | Taenia solium | Taenia solium, the pork tapeworm, belongs to the cyclophyllid cestode family Taeniidae. It is found throughout the world and is most common in countries where pork is eaten. It is a tapeworm that uses humans (Homo sapiens) as its definitive host and pigs (family Suidae) as the intermediate or secondary hosts. It is transmitted to pigs through human feces that contain the parasite eggs and contaminate their fodder. Pigs ingest the eggs, which develop into larvae, then into oncospheres, and ultimately into infective tapeworm cysts, called cysticerci. Humans acquire the cysts through consumption of uncooked or under-cooked pork and the cysts grow into adult worms in the small intestine.
There are two forms of human infection. One is "primary hosting", called taeniasis, and is due to eating under-cooked pork that contains the cysts, resulting in adult worms in the intestines. This form generally is without symptoms; the infected person does not know they have tapeworms. This form is easily treated with anthelmintic medications which eliminate the tapeworm. The other form, "secondary hosting", called cysticercosis, is due to eating food, or drinking water, contaminated with faeces from someone infected by the adult worms, thus ingesting the tapeworm eggs, instead of the cysts. The eggs go on to develop cysts primarily in the muscles, and usually with no symptoms. However some people have obvious symptoms, the most harmful and chronic form of which is when the cysts form in the brain. Treatment of this form is more difficult but possible.
The adult worm has a flat, ribbon-like body which is white and measures 2 to 3 metres long, or more. Its tiny attachment organ, the scolex, bears suckers and a rostellum as organs of attachment that fix to the wall of the small intestine. The main body consists of a chain of segments known as proglottids. Each proglottid is little more than a self-sustaining, very lightly ingestive, self-contained reproductive unit, since tapeworms are hermaphrodites.
Human primary hosting is best diagnosed by microscopy of eggs in faeces, often triggered by spotting shed segments. In secondary hosting, imaging techniques such as computed tomography and nuclear magnetic resonance are often employed. Blood samples can also be tested using antibody reaction of enzyme-linked immunosorbent assay.
T. solium deeply affects developing countries, especially in rural settings where pigs roam free, as clinical manifestations are highly dependent on the number, size, and location of the parasites as well as the host's immune and inflammatory response.
Description
Adult T. solium is a triploblastic acoelomate, having no body cavity. It is normally 2 to 3 metres (6' to 10') in length, but can become much larger, sometimes over 8 metres (30') long. It is white in colour and flattened into a ribbon-like body. The anterior end is a knob-like attachment organ (sometimes mistakenly referred to as a "head") called a scolex, 1 mm in diameter. The scolex bears four radially arranged suckers that surround the rostellum. These are the organs of adhesive attachment to the intestinal wall of the host. The rostellum is armed with two rows of proteinaceous spiny hooks. Its 22 to 32 rostellar hooks can be differentiated into short (130 μm) and long (180 μm) types.
After a short neck is the elongated body, the strobila. The entire body is covered by a tegument, which is an absorptive layer consisting of a mat of minute specialised microvilli called microtriches. The strobila is divided into segments called proglottids, 800 to 900 in number. Body growth starts from the neck region, so the oldest proglottids are at the posterior end. Thus, the three distinct proglottids are immature proglottids towards the neck, mature proglottids in the middle, and gravid proglottids at the posterior end. A hermaphroditic species, each mature proglottid contains a set of male and female reproductive systems with numerous testes and an ovary with three lobes. The uterus is branched with a characteristic 7 to 13 branches per side. The cirrus, a sex organ at the terminus of the vas deferens, and vagina open into a common genital pore or atrium. The oldest gravid proglottids are full of fertilised eggs. Each fertilised egg is spherical and measures 35 to 42 μm in diameter.
If released early enough in the digestive tract and not passed, fertilised eggs can mature using upper tract digestive enzymes. The tiny oncosphere larvae, activated by exposure to host enzymes and bile salts, penetrate the intestinal wall and migrate in the blood stream or lymphatics to reach sites where they can develop into cysticerci. These have three morphologically distinct types. The common one is the ordinary "cellulose" cysticercus, which has a fluid-filled bladder 0.5 to 1.5 cm (¼" to ½") in length and an invaginated scolex. The intermediate form has a scolex. The "racemose" has no evident scolex, but is believed to be larger. They can be 20 cm (8") in length and have 60 ml (2 fl. oz.) of fluid, and 13% of patients with neurocysticercosis can have all three types in the brain.
Life cycle
The life cycle of T. solium is indirect as it passes through pigs, commonly Sus domesticus due to their association with people, as intermediate hosts, into humans, as definitive hosts. In humans the infection can be relatively short or long lasting, and in the latter case if reaching the brain can last for life. From humans, the eggs are released in the environment where they await ingestion by another host. In the secondary host, the eggs develop into oncospheres which bore through the intestinal wall and migrate to other parts of the body where the cysticerci form. The cysticerci can survive for several years in the animal.
Definitive host
Humans are colonised by the larval stage, the cysticercus, from undercooked pork or other meat. Each microscopic cysticercus is oval in shape, containing an inverted scolex (specifically "protoscolex"), which everts once the organism is inside the small intestine. This process of evagination is stimulated by bile juice and digestive enzymes (of the host). Then, the protoscolex lodges in the host's upper intestine by using its crowned hooks and four suckers to enter the intestinal mucosa. Then, the scolex is fixed into the intestine by having the suckers attached to the villi and hooks extended. It grows in size using nutrients from the surroundings. Its strobila lengthens as new proglottids are formed at the foot of the neck. In 10–12 weeks after initial colonisation, it is an adult worm. The exact life span of an adult worm is not determined; however, evidence from an outbreak among British military personnel in the 1930s indicates that they can survive for 2 to 5 years in humans.
As a hermaphrodite, it reproduces by self-fertilisation, or cross-fertilisation if gametes are exchanged between two different proglottids. Spermatozoa fuse with the ova in the fertilisation duct, where the zygotes are produced. The zygote undergoes holoblastic and unequal cleavage resulting in three cell types, small, medium and large (micromeres, mesomeres, megameres). Megameres develop into a syncytial layer, the outer embryonic membrane; mesomeres into the radially striated inner embryonic membrane or embryophore; micromeres become the morula. The morula transforms into a six-hooked embryo known as an oncosphere, or hexacanth ("six hooked") larva. A gravid proglottid can contain more than 50,000 embryonated eggs. Gravid proglottids often rupture in the intestine, liberating the oncospheres in faeces. Intact gravid proglottids are shed off in groups of four or five. The free eggs and detached proglottids are spread through the host's defecation (peristalsis). Oncospheres can survive in the environment for up to two months.
Intermediate host
Pigs are the principal intermediate hosts that ingest the eggs in traces of human faeces, mainly from vegetation contaminated with it such as from water bearing traces of it. The embryonated eggs enter intestine where they hatch into motile oncospheres. The embryonic and basement membranes are removed by the host's digestive enzymes (particularly pepsin). Then the free oncospheres attach on the intestinal wall using their hooks. With the help of digestive enzymes from the penetration glands, they penetrate the intestinal mucosa to enter blood and lymphatic vessels. They move along the general circulatory system to various organs, and large numbers are cleared in the liver. The surviving oncospheres preferentially migrate to striated muscles, as well as the brain, liver, and other tissues, where they settle to form cysts — cysticerci. A single cysticercus is spherical, measuring 1–2 cm (about ½") in diameter, and contains an invaginated protoscolex. The central space is filled with fluid like a bladder, hence it is also called bladder worm. Cysticerci are usually formed within 70 days and may continue to grow for a year.
Humans are also accidental secondary hosts when they are colonised by embryonated eggs, either by auto-colonisation or ingestion of contaminated food. As in pigs, the oncospheres hatch and enter blood circulation. When they settle to form cysts, clinical symptoms of cysticercosis appear. The cysticercus is often called the metacestode.
Diseases
Signs and symptoms
Taeniasis
Taeniasis is infection of the intestines by the adult T. solium. It generally has mild or non-specific symptoms, which may include abdominal pain, nausea, diarrhoea and constipation. Such symptoms arise once the tapeworm has fully developed in the intestine, around eight weeks after the infection is contracted (ingestion of meat containing cysticerci).
These symptoms continue until the tapeworm dies following treatment; otherwise they can persist for many years, as long as the worm lives. If untreated, infections with T. solium commonly last for approximately 2–3 years. It is possible that infected people may show no symptoms for years.
Cysticercosis
Ingestion of T. solium eggs or egg-containing proglottids which rupture within the host intestines results in the development and subsequent migration of larvae into host tissue to cause cysticercosis. In pigs, there are not normally pathological lesions as they easily develop immunity. But in humans, infection with the eggs causes serious medical conditions. This is because T. solium cysticerci have a predilection for the brain. In symptomatic cases, a wide spectrum of symptoms may be expressed, including headaches, dizziness, and seizures. Brain infection by the cysticerci is called neurocysticercosis and is the leading cause of seizures worldwide.
In more severe cases, dementia or hypertension can occur due to perturbation of the normal circulation of cerebrospinal fluid. (Any increase in intracranial pressure will result in a corresponding increase in arterial blood pressure, as the body seeks to maintain circulation to the brain.) The severity of cysticercosis depends on location, size and number of parasite larvae in tissues, as well as the host immune response. Other symptoms include sensory deficits, involuntary movements, and brain system dysfunction. In children, ocular cysts are more common than in other parts of the body.
In many cases, cysticercosis in the brain can lead to epilepsy, seizures, lesions in the brain, blindness, tumour-like growths, and low eosinophil levels. It is the cause of major neurological problems, such as hydrocephalus, paraplegy, meningitis, convulsions, and even death.
Diagnosis
Stool tests commonly include microbiology testing – the microscopic examination of stools after concentration aims to determine the number of eggs. Specificity is extremely high for someone with training, but sensitivity is quite low because of the high variation in the number of eggs in small amounts of sample.
Stool tapeworm antigen detection: Using ELISA increases the sensitivity of the diagnosis. The downside of this tool is its high cost; an ELISA reader and reagents are required and trained operators are needed. Studies using coproantigen (CoAg) ELISA methods are considered very sensitive but currently only genus-specific. A 2020 study of an Ag-ELISA test for Taenia solium cysticercosis in infected pigs showed 82.7% sensitivity and 86.3% specificity. The study concluded that the test is more reliable for ruling out T. solium cysticercosis than for confirming it.
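To illustrate why a test with those characteristics is better at ruling out infection than confirming it, the sketch below converts sensitivity and specificity into predictive values; the 10% prevalence is an assumed figure for the example, not from the study.

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Positive and negative predictive values for a test applied at a given prevalence."""
    tp = sensitivity * prevalence                     # true positives
    fn = (1.0 - sensitivity) * prevalence             # false negatives
    tn = specificity * (1.0 - prevalence)             # true negatives
    fp = (1.0 - specificity) * (1.0 - prevalence)     # false positives
    return tp / (tp + fp), tn / (tn + fn)

# Ag-ELISA figures quoted above (82.7% sensitivity, 86.3% specificity), assumed 10% prevalence
ppv, npv = predictive_values(0.827, 0.863, 0.10)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")   # PPV ~0.40, NPV ~0.98: good for ruling out
```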
Stool PCR: This method can provide a species-specific diagnosis when proglottid material is taken from the stool. This method requires specific facilities, equipment and trained individuals to run the tests. This method has not yet been tested in controlled field trials.
Serum antibody tests: using immunoblot and ELISA, tapeworm-specific circulating antibodies have been detected. The assays for these tests have both a high sensitivity and specificity. A 2018 study of two commercially available kits showed low sensitivity with patients diagnosed with NCC (neurocysticercosis), especially with calcified NCC versus patients with cystic hydatid disease. The current standard for serologic diagnosis of NCC is the lentil lectin-bound glycoproteins/enzyme-linked immunoelectrotransfer blot (LLGP-EITB).
Guidelines for diagnosis and treatment remain difficult to implement in endemic countries, most of which are developing countries with limited resources; in many of them, diagnosis is made clinically with the support of imaging.
Prevention
The best way to avoid getting tapeworms is to not eat undercooked pork or vegetables contaminated with faeces. A high level of sanitation and the prevention of faecal contamination of pig feed also play a major role in prevention. Infection can be prevented by the proper disposal of human faeces around pigs, by cooking meat thoroughly, or by freezing the meat at −10°C (14°F) for 5 days. For human cysticercosis, dirty hands are considered the primary cause; this is especially common among food handlers.
Treatment
Treatment of cysticercosis must be carefully monitored for inflammatory reactions to the dying worms, especially if they are located in the brain. Albendazole is commonly given (along with glucocorticoids to reduce the inflammation). In selected cases, surgery may be required to remove the cysts.
In neurocysticercosis, most patients under cysticidal therapy have significant improvement in seizure control. A combination of praziquantel and albendazole is more effective in treating neurocysticercosis; a 2014 double-blind randomised controlled trial showed an increased parasiticidal effect with albendazole plus praziquantel.
A vaccine to prevent cysticercosis in pigs has been studied. The life-cycle of the parasite can be terminated in their intermediate host, pigs, thereby preventing further human infection. The large scale use of this vaccine, however, is still under consideration.
During the 1940s, the preferred treatment was oleoresin of aspidium, which would be introduced into the duodenum via a Rehfuss tube.
Epidemiology
T. solium is found worldwide, but its two distinctive forms depend on eating undercooked pork and on ingesting faeces-contaminated water or food, respectively. Because pig meat is the intermediate source of the intestinal parasite, the full life cycle is completed in regions where humans live in close contact with pigs and eat undercooked pork. However, humans can also act as secondary hosts, a more pathological, harmful stage triggered by oral contamination. High prevalences are reported in many places with poorer than average water hygiene, or even mildly contaminated water, especially those with a pork-eating heritage, such as Latin America, West Africa, Russia, India, Manchuria, and Southeast Asia. In Europe it is most common in pockets of Slavic countries and among travellers who take inadequate precautions, especially when eating pork.
The secondary host form, human cysticercosis, predominates in areas where poor hygiene allows for mild fecal contamination of food, soil, or water supplies. Rates in the United States have shown immigrants from Mexico, Central and South America, and Southeast Asia bear the brunt of cases of cysticercosis caused by the ingestion of microscopic, long-lasting and hardy tapeworm eggs. For example, in 1990 and 1991 four unrelated members of an Orthodox Jewish community in New York City developed recurrent seizures and brain lesions, which were found to have been caused by T. solium. All had housekeepers from Mexico, some of whom were suspected to be the source of the infections. Rates of T. solium cysticercosis in West Africa are not affected by any religion.
Neurocysticercosis accounts for around one-third of all epilepsy cases in many developing countries. Neurological morbidity and mortality remain high in lower-income countries, and are also high in developed countries with high rates of migration. Global prevalence rates remain largely unknown, as the necessary screening tools (immunological and molecular tests, and neuroimaging) are not usually available in many endemic areas.
| Biology and health sciences | Platyzoa | Animals |
1588279 | https://en.wikipedia.org/wiki/Combinatorial%20principles | Combinatorial principles | In proving results in combinatorics several useful combinatorial rules or combinatorial principles are commonly recognized and used.
The rule of sum, rule of product, and inclusion–exclusion principle are often used for enumerative purposes. Bijective proofs are utilized to demonstrate that two sets have the same number of elements. The pigeonhole principle often ascertains the existence of something or is used to determine the minimum or maximum number of something in a discrete context.
Many combinatorial identities arise from double counting methods or the method of distinguished element. Generating functions and recurrence relations are powerful tools that can be used to manipulate sequences, and can describe if not resolve many combinatorial situations.
Rule of sum
The rule of sum is an intuitive principle stating that if there are a possible outcomes for an event (or ways to do something) and b possible outcomes for another event (or ways to do another thing), and the two events cannot both occur (or the two things can't both be done), then there are a + b total possible outcomes for the events (or total possible ways to do one of the things). More formally, the sum of the sizes of two disjoint sets is equal to the size of their union.
Rule of product
The rule of product is another intuitive principle stating that if there are a ways to do something and b ways to do another thing, then there are a · b ways to do both things.
Inclusion–exclusion principle
The inclusion–exclusion principle relates the size of the union of multiple sets, the size of each set, and the size of each possible intersection of the sets. The smallest example is when there are two sets: the number of elements in the union of A and B is equal to the sum of the number of elements in A and B, minus the number of elements in their intersection.
Generally, according to this principle, if A1, …, An are finite sets, then
$$\left|\bigcup_{i=1}^{n} A_i\right| = \sum_{i} |A_i| - \sum_{i<j} |A_i \cap A_j| + \sum_{i<j<k} |A_i \cap A_j \cap A_k| - \cdots + (-1)^{n+1}\,|A_1 \cap \cdots \cap A_n|.$$
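As a quick numerical check of the identity, the sketch below computes the size of a union from intersection sizes alone and compares it with a direct count; the three example sets are arbitrary choices, not taken from the article.

```python
from itertools import combinations

def union_size_by_inclusion_exclusion(sets):
    """Compute |A1 ∪ ... ∪ An| using only the sizes of intersections."""
    total = 0
    for k in range(1, len(sets) + 1):
        sign = (-1) ** (k + 1)                      # alternate +, -, +, ...
        for combo in combinations(sets, k):
            total += sign * len(set.intersection(*combo))
    return total

A, B, C = {1, 2, 3, 4}, {3, 4, 5}, {4, 5, 6, 7}
assert union_size_by_inclusion_exclusion([A, B, C]) == len(A | B | C)  # both give 7
```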
Rule of division
The rule of division states that there are n/d ways to do a task if it can be done using a procedure that can be carried out in n ways, and for every way w, exactly d of the n ways correspond to way w.
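A standard illustration, added here as an example rather than taken from the text, is counting seatings around a round table: there are n! ways to seat n people in a row, and each circular seating corresponds to exactly n of those rows (one per choice of starting seat), giving n!/n = (n - 1)! distinct circular seatings. A minimal sketch:

```python
from itertools import permutations

def count_circular_seatings(people):
    """Rule of division: n! linear seatings, each circular seating counted n times."""
    n = len(people)
    linear_orders = list(permutations(people))   # the n! linear procedures
    d = n                                        # rotations that give the same circular seating
    return len(linear_orders) // d

assert count_circular_seatings("abcd") == 6      # (4 - 1)! = 6
```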
Bijective proof
Bijective proofs prove that two sets have the same number of elements by finding a bijective function (one-to-one correspondence) from one set to the other.
Double counting
Double counting is a technique that equates two expressions that count the size of a set in two ways.
Pigeonhole principle
The pigeonhole principle states that if a items are each put into one of b boxes, where a > b, then one of the boxes contains more than one item. Using this one can, for example, demonstrate the existence of some element in a set with some specific properties.
Method of distinguished element
The method of distinguished element singles out a "distinguished element" of a set to prove some result.
Generating function
Generating functions can be thought of as polynomials with infinitely many terms whose coefficients correspond to terms of a sequence. This new representation of the sequence opens up new methods for finding identities and closed forms pertaining to certain sequences. The (ordinary) generating function of a sequence an is
$$G(a_n; x) = \sum_{n=0}^{\infty} a_n x^n.$$
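For example, the constant sequence 1, 1, 1, … (a standard textbook illustration rather than one drawn from this article) has the closed form

```latex
% Ordinary generating function of the all-ones sequence, as a formal power series
G(1; x) = \sum_{n=0}^{\infty} x^{n} = \frac{1}{1 - x}.
```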
Recurrence relation
A recurrence relation defines each term of a sequence in terms of the preceding terms. Recurrence relations may lead to previously unknown properties of a sequence, but generally closed-form expressions for the terms of a sequence are preferred.
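As a concrete illustration (the choice of the Fibonacci numbers here is arbitrary and not from the article), the sketch below computes a sequence directly from its recurrence and checks it against the corresponding closed-form expression, Binet's formula:

```python
import math

def fibonacci_by_recurrence(n):
    """F(0) = 0, F(1) = 1, F(k) = F(k-1) + F(k-2): each term from the two preceding terms."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fibonacci_closed_form(n):
    """Binet's formula, the closed-form expression for the same sequence."""
    phi = (1 + math.sqrt(5)) / 2
    psi = (1 - math.sqrt(5)) / 2
    return round((phi ** n - psi ** n) / math.sqrt(5))

assert all(fibonacci_by_recurrence(n) == fibonacci_closed_form(n) for n in range(30))
```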
| Mathematics | Combinatorics | null |
1589341 | https://en.wikipedia.org/wiki/Large%20igneous%20province | Large igneous province | A large igneous province (LIP) is an extremely large accumulation of igneous rocks, including intrusive (sills, dikes) and extrusive (lava flows, tephra deposits), arising when magma travels through the crust towards the surface. The formation of LIPs is variously attributed to mantle plumes or to processes associated with divergent plate tectonics. The formation of some of the LIPs in the past 500 million years coincide in time with mass extinctions and rapid climatic changes, which has led to numerous hypotheses about causal relationships. LIPs are fundamentally different from any other currently active volcanoes or volcanic systems.
Overview
Definition
In 1992, Coffin and Eldholm initially defined the term "large igneous province" as representing a variety of mafic igneous provinces with areal extent greater than 100,000 km2 that represented "massive crustal emplacements of predominantly mafic (magnesium- and iron-rich) extrusive and intrusive rock, and originated via processes other than 'normal' seafloor spreading." That original definition included continental flood basalts, oceanic plateaus, large dike swarms (the eroded roots of a volcanic province), and volcanic rifted margins. Mafic basalt sea floors and other geological products of 'normal' plate tectonics were not included in the definition. Most of these LIPs consist of basalt, but some contain large volumes of associated rhyolite (e.g. the Columbia River Basalt Group in the western United States); the rhyolite is typically very dry compared to island arc rhyolites, with much higher eruption temperatures (850 °C to 1000 °C) than normal rhyolites. Some LIPs are geographically intact, such as the basaltic Deccan Traps in India, while others have been fragmented and separated by plate movements, like the Central Atlantic magmatic province—parts of which are found in Brazil, eastern North America, and northwestern Africa.
In 2008, Bryan and Ernst refined the definition to narrow it somewhat: "Large Igneous Provinces are magmatic provinces with areal extents >0.1 Mkm2, igneous volumes >0.1 Mkm3 and maximum lifespans of ~50 Myr that have intraplate tectonic settings or geochemical affinities, and are characterised by igneous pulse(s) of short duration (~1–5 Myr), during which a large proportion (>75%) of the total igneous volume has been emplaced. They are dominantly mafic, but also can have significant ultramafic and silicic components, and some are dominated by silicic magmatism." This definition places emphasis on the high magma emplacement rate characteristics of the LIP event and excludes seamounts, seamount groups, submarine ridges and anomalous seafloor crust.
The definition has since been expanded and refined, and remains a work in progress. Some new definitions of LIP include large granitic provinces such as those found in the Andes Mountains of South America and in western North America. Comprehensive taxonomies have been developed to focus technical discussions. Sub-categorization of LIPs into large volcanic provinces (LVP) and large plutonic provinces (LPP), and including rocks produced by normal plate tectonic processes, have been proposed, but these modifications are not generally accepted. LIP is now frequently used to also describe voluminous areas of, not just mafic, but all types of igneous rocks. Further, the minimum threshold to be included as a LIP has been lowered to 50,000 km2. The working taxonomy, focused heavily on geochemistry, is:
Large volcanic province (LVP)
Large rhyolitic province (LRP)
Large andesitic province (LAP)
Large basaltic province (LBP): oceanic, or continental flood basalts
Large basaltic–rhyolitic province (LBRP)
Large plutonic province (LPP)
Large granitic province (LGP)
Large mafic plutonic province
Study
Because large igneous provinces are created during short-lived igneous events resulting in relatively rapid and high-volume accumulations of volcanic and intrusive igneous rock, they warrant study. LIPs present possible links to mass extinctions and global environmental and climatic changes. Michael Rampino and Richard Stothers cite 11 distinct flood basalt episodes—occurring in the past 250 million years—which created volcanic provinces and oceanic plateaus and coincided with mass extinctions. This theme has developed into a broad field of research, bridging geoscience disciplines such as biostratigraphy, volcanology, metamorphic petrology, and Earth System Modelling.
The study of LIPs has economic implications. Some workers associate them with trapped hydrocarbons. They are associated with economic concentrations of copper–nickel and iron. They are also associated with formation of major mineral provinces including platinum group element deposits and, in the silicic LIPs, silver and gold deposits. Titanium and vanadium deposits are also found in association with LIPs.
LIPs in the geological record have marked major changes in the hydrosphere and atmosphere, leading to major climate shifts and maybe mass extinctions of species. Some of these changes were related to rapid release of greenhouse gases from the lithosphere to the atmosphere. Thus the LIP-triggered changes may be used as cases to understand current and future environmental changes.
Plate tectonic theory explains topography using interactions between the tectonic plates, as influenced by viscous stresses created by flow within the underlying mantle. Since the mantle is extremely viscous, the mantle flow rate varies in pulses which are reflected in the lithosphere by small amplitude, long wavelength undulations. Understanding how the interaction between mantle flow and lithosphere elevation influences formation of LIPs is important to gaining insights into past mantle dynamics. LIPs have played a major role in the cycles of continental breakup, continental formation, new crustal additions from the upper mantle, and supercontinent cycles.
Formation
Earth has an outer shell made of discrete, moving tectonic plates floating on a solid convective mantle above a liquid core. The mantle's flow is driven by the descent of cold tectonic plates during subduction and the complementary ascent of mantle plumes of hot material from lower levels. The surface of the Earth reflects stretching, thickening and bending of the tectonic plates as they interact.
Ocean-plate creation at upwellings, spreading and subduction are well accepted fundamentals of plate tectonics, with the upwelling of hot mantle materials and the sinking of the cooler ocean plates driving the mantle convection. In this model, tectonic plates diverge at mid-ocean ridges, where hot mantle rock flows upward to fill the space. Plate-tectonic processes account for the vast majority of Earth's volcanism.
Beyond the effects of convectively driven motion, deep processes have other influences on the surface topography. The convective circulation drives up-wellings and down-wellings in Earth's mantle that are reflected in local surface levels. Hot mantle materials rising up in a plume can spread out radially beneath the tectonic plate causing regions of uplift. These ascending plumes play an important role in LIP formation.
When created, LIPs often have an areal extent of a few million square kilometers and volumes on the order of 1 million cubic kilometers. In most cases, the majority of a basaltic LIP's volume is emplaced in less than 1 million years. One of the puzzles of such LIPs' origins is to understand how enormous volumes of basaltic magma are formed and erupted over such short time scales, with effusion rates up to an order of magnitude greater than those of mid-ocean ridge basalts. The sources of many or all LIPs are variously attributed to mantle plumes, to processes associated with plate tectonics, or to meteorite impacts.
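A back-of-the-envelope calculation using only the figures quoted above shows why these emplacement rates are considered extreme: a province that emplaces on the order of one million cubic kilometers in under one million years must sustain a long-term average output of roughly one cubic kilometer per year.

```latex
% Average emplacement rate implied by the volume and duration quoted above
\frac{\sim 10^{6}\ \mathrm{km^{3}}}{< 10^{6}\ \mathrm{yr}} \;\gtrsim\; 1\ \mathrm{km^{3}\,yr^{-1}}
```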
Hotspots
Although most volcanic activity on Earth is associated with subduction zones or mid-oceanic ridges, there are significant regions of long-lived, extensive volcanism, known as hotspots, which are only indirectly related to plate tectonics. The Hawaiian–Emperor seamount chain, located on the Pacific Plate, is one example, tracing millions of years of relative motion as the plate moves over the Hawaii hotspot. Numerous hotspots of varying size and age have been identified across the world. These hotspots move slowly with respect to one another but move an order of magnitude more quickly with respect to tectonic plates, providing evidence that they are not directly linked to tectonic plates.
The origin of hotspots remains controversial. Hotspots that reach the Earth's surface may have three distinct origins. The deepest probably originate from the boundary between the lower mantle and the core; roughly 15–20% have characteristics such as presence of a linear chain of sea mounts with increasing ages, LIPs at the point of origin of the track, low shear wave velocity indicating high temperatures below the current location of the track, and ratios of 3He to 4He which are judged consistent with a deep origin. Others such as the Pitcairn, Samoan and Tahitian hotspots appear to originate at the top of large, transient, hot lava domes (termed superswells) in the mantle. The remainder appear to originate in the upper mantle and have been suggested to result from the breakup of subducting lithosphere.
Recent imaging of the region below known hotspots (for example, Yellowstone and Hawaii) using seismic-wave tomography has produced mounting evidence that supports relatively narrow, deep-origin, convective plumes that are limited in region compared to the large-scale plate tectonic circulation in which they are embedded. Images reveal continuous but convoluted vertical paths with varying quantities of hotter material, even at depths where crystallographic transformations are predicted to occur.
Plate ruptures
A major alternative to the plume model is a model in which ruptures are caused by plate-related stresses that fractured the lithosphere, allowing melt to reach the surface from shallow heterogeneous sources. The high volumes of molten material that form the LIPs are postulated to be caused by convection in the upper mantle, which is secondary to the convection driving tectonic plate motion.
Early formed reservoir outpourings
It has been proposed that geochemical evidence supports an early-formed reservoir that survived in the Earth's mantle for about 4.5 billion years. Molten material is postulated to have originated from this reservoir, contributing to the Baffin Island flood basalt about 60 million years ago. Basalts from the Ontong Java Plateau show similar isotopic and trace element signatures proposed for the early-Earth reservoir.
Meteorites
Seven pairs of hotspots and LIPs located on opposite sides of the Earth have been noted; analyses indicate that this coincident antipodal location is highly unlikely to be random. The hotspot pairs include a large igneous province with continental volcanism opposite an oceanic hotspot. Oceanic impacts of large meteorites are expected to have high efficiency in converting energy into seismic waves. These waves would propagate around the world and reconverge close to the antipodal position; small variations are expected, as the seismic velocity varies depending on the characteristics of the routes along which the waves propagate. As the waves focus on the antipodal position, they place the crust at the focal point under significant stress and are proposed to rupture it, creating antipodal pairs. When a meteorite impacts a continent, the lower efficiency of kinetic-energy conversion into seismic energy is not expected to create an antipodal hotspot.
A second impact-related model of hotspot and LIP formation has been suggested in which minor hotspot volcanism was generated at large-body impact sites and flood basalt volcanism was triggered antipodally by focused seismic energy. This model has been challenged because impacts are generally considered seismically too inefficient, and the Deccan Traps of India were not antipodal to (and began erupting several Myr before) the Chicxulub impact in Mexico. In addition, no clear example of impact-induced volcanism, unrelated to melt sheets, has been confirmed at any known terrestrial crater.
Correlations with LIP formation
Areally extensive dike swarms, sill provinces, and large layered ultramafic intrusions are indicators of LIPs, even when other evidence is no longer observable. The upper basalt layers of older LIPs may have been removed by erosion or deformed by tectonic plate collisions occurring after the layers formed. This is especially likely for earlier periods such as the Paleozoic and Proterozoic.
Dike swarms
Giant dyke swarms having lengths over 300 km are a common record of severely eroded LIPs. Both radial and linear dyke swarm configurations exist. Radial swarms with an areal extent over 2,000 km and linear swarms extending over 1,000 km are known. The linear dyke swarms often have a high proportion of dykes relative to country rocks, particularly when the width of the linear field is less than 100 km. The dykes have a typical width of 20–100 m, although ultramafic dykes with widths greater than 1 km have been reported.
Dykes are typically sub-vertical to vertical. When upward-flowing (dyke-forming) magma encounters horizontal boundaries or weaknesses, such as between layers in a sedimentary deposit, the magma can flow horizontally, creating a sill. Some sill provinces have areal extents >1,000 km2.
Sills
A series of related sills that formed essentially contemporaneously (within several million years) from related dikes constitutes a LIP if their combined area is sufficiently large. Examples include:
Winagami sill complex (northwestern Alberta, Canada)
Bushveld Igneous Complex (South Africa)
Volcanic rifted margins
Volcanic rifted margins are found on the boundary of large igneous provinces. Volcanic margins form when rifting is accompanied by significant mantle melting, with volcanism occurring before and/or during continental breakup. Volcanic rifted margins are characterized by: a transitional crust composed of basaltic igneous rocks, including lava flows, sills, dikes, and gabbros; high-volume basalt flows; seaward-dipping reflector sequences of basalt flows that were rotated during the early stages of breakup; limited passive-margin subsidence during and after breakup; and the presence of lower crustal bodies with anomalously high seismic P-wave velocities, indicative of lower-temperature, dense media.
Hotspots
The early volcanic activity of major hotspots, postulated to result from deep mantle plumes, is frequently accompanied by flood basalts. These flood basalt eruptions have resulted in large accumulations of basaltic lavas emplaced at a rate greatly exceeding that seen in contemporary volcanic processes. Continental rifting commonly follows flood basalt volcanism. Flood basalt provinces may also occur as a consequence of initial hot-spot activity in ocean basins as well as on continents. It is often possible to trace a hot spot's track back to the flood basalts of a large igneous province, and individual large igneous provinces can be correlated with the tracks of specific hot spots.
Relationship to extinction events
Eruptions or emplacements of LIPs appear to have, in some cases, occurred simultaneously with oceanic anoxic events and extinction events. The most important examples are the Deccan Traps (Cretaceous–Paleogene extinction event), the Karoo-Ferrar (Pliensbachian-Toarcian extinction), the Central Atlantic magmatic province (Triassic-Jurassic extinction event), and the Siberian Traps (Permian-Triassic extinction event).
Several mechanisms are proposed to explain the association of LIPs with extinction events. The eruption of basaltic LIPs onto the earth's surface releases large volumes of sulfate gas, which forms sulfuric acid in the atmosphere; this absorbs heat and causes substantial cooling (e.g., the Laki eruption in Iceland, 1783). Oceanic LIPs can reduce oxygen in seawater by either direct oxidation reactions with metals in hydrothermal fluids or by causing algal blooms that consume large amounts of oxygen.
Ore deposits
Large igneous provinces are associated with a handful of ore deposit types including:
Nickel–Copper platinum groups
Porphyries
Iron oxide copper gold
Kimberlite
Mercury anomalies
Enrichment in mercury relative to total organic carbon (Hg/TOC) is a common geochemical proxy used to detect massive volcanism in the geologic record, although its reliability has been called into question.
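As a rough sketch of how the proxy is applied in practice, the Python example below normalises mercury concentrations by total organic carbon and flags samples that stand well above a local baseline. The sample values, column names, and the threshold of three times the median background are all hypothetical choices for illustration, not drawn from any cited study.

```python
# Hypothetical illustration of the Hg/TOC proxy; all values are invented, not measured data.
samples = [
    {"depth_m": 10.0, "hg_ppb": 20.0, "toc_pct": 1.0},
    {"depth_m": 10.5, "hg_ppb": 22.0, "toc_pct": 1.1},
    {"depth_m": 11.0, "hg_ppb": 95.0, "toc_pct": 1.0},   # candidate volcanic anomaly
    {"depth_m": 11.5, "hg_ppb": 21.0, "toc_pct": 0.9},
]

ratios = [s["hg_ppb"] / s["toc_pct"] for s in samples]
background = sorted(ratios)[len(ratios) // 2]            # median ratio as a crude baseline

for s, r in zip(samples, ratios):
    flag = "anomaly?" if r > 3 * background else ""      # arbitrary 3x-background threshold
    print(f'{s["depth_m"]:5.1f} m  Hg/TOC = {r:6.1f} ppb/%  {flag}')
```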
Examples
Large rhyolitic provinces
These LIPs are composed dominantly of felsic materials. Examples include:
Whitsunday
Sierra Madre Occidental (Mexico)
Malani
Chon Aike (Argentina)
Gawler (Australia)
Large andesitic provinces
These LIPs are composed dominantly of andesitic materials. Examples include:
Island arcs such as Indonesia and Japan
Active continental margins such as the Andes and the Cascades
Continental collision zones such as the Anatolia-Iran zone
Large basaltic provinces
This subcategory includes most of the provinces included in the original LIP classifications. It is composed of continental flood basalts, oceanic flood basalts, and diffuse provinces.
Continental flood basalts
Ethiopia-Yemen Continental Flood Basalts
Columbia River Basalt Group
Deccan Traps (India)
Coppermine River Group (Canadian Shield)
Midcontinent Rift System, Great Lakes Region, North America
Paraná and Etendeka traps (Paraná, Brazil–NE Namibia)
Brazilian Highlands
Río de la Plata Craton (Uruguay)
Karoo-Ferrar (South Africa–Antarctica)
Siberian Traps (Russia)
Emeishan Traps (western China)
Central Atlantic Magmatic Province (eastern United States and Canada, northern South America, northwest Africa)
North Atlantic Igneous Province (includes basalts in Greenland, Iceland, Ireland, Scotland, and Faroes)
High Arctic Large Igneous Province (includes the Ellesmere Island Volcanics, Strand Fiord Formation, Alpha Ridge, Franz Josef Land, and Svalbard)
Oceanic flood basalts
Azores Plateau (Atlantic Ocean)
Wrangellia Terrane (Alaska and Canada)
Caribbean large igneous province (Caribbean Sea)
Kerguelen Plateau (Indian Ocean)
Iceland Plateau (Atlantic Ocean)
Ontong Java Plateau, Manihiki Plateau and Hikurangi Plateau (southwest Pacific Ocean)
Jameson Land
Large basaltic–rhyolitic provinces
Snake River Plain – Oregon High Lava Plains
Dongargarh, India
Large plutonic provinces
Equatorial Atlantic Magmatic Province
Large granitic provinces
Patagonia
Peru–Chile Batholith
Coast Range Batholith (northwestern US)
Silicic-dominated large igneous provinces
Gawler Range Volcanics
Hiltaba Suite
Gawler Ranges, South Australia
| Physical sciences | Volcanic landforms | Earth science |
1589445 | https://en.wikipedia.org/wiki/Flare%20star | Flare star | A flare star is a variable star that can undergo unpredictable dramatic increases in brightness for a few minutes. It is believed that the flares on flare stars are analogous to solar flares in that they are due to the magnetic energy stored in the stars' atmospheres. The brightness increase is across the spectrum, from X-rays to radio waves. Flare activity among late-type stars was first reported by A. van Maanen in 1945, for WX Ursae Majoris and YZ Canis Minoris. However, the best-known flare star is UV Ceti, first observed to flare in 1948. Today similar flare stars are classified as UV Ceti type variable stars (using the abbreviation UV) in variable star catalogs such as the General Catalogue of Variable Stars.
Most flare stars are dim red dwarfs, although recent research indicates that less massive brown dwarfs might also be capable of flaring. The more massive RS Canum Venaticorum variables (RS CVn) are also known to flare, but it is understood that these flares are induced by a companion star in a binary system which causes the magnetic field to become tangled. Additionally, nine stars similar to the Sun had been seen to undergo flare events prior to the flood of superflare data from the Kepler observatory. It has been proposed that the mechanism for this is similar to that of the RS CVn variables, in that the flares are induced by a companion, namely an unseen Jupiter-like planet in a close orbit.
Stellar flare model
The Sun is known to flare, and solar flares have been studied extensively across the spectrum. Even though the Sun on average shows less variability and weaker flares than other stars of similar spectral type, rotation period and age, it is generally thought that stellar flares and solar flares share the same or similar processes. The solar flare model has therefore been used as the framework for understanding other stellar flares.
The general idea is that flares are generated through the reconnection of magnetic field lines in the corona. A flare has several phases: the preflare phase, the impulsive phase, the flash phase and the decay phase. These phases have different timescales and different emissions across the spectrum. During the preflare phase, which usually lasts for a few minutes, the coronal plasma slowly heats up to temperatures of tens of millions of kelvins; this phase is mostly visible in soft X-rays and EUV. During the impulsive phase, which lasts for three to ten minutes, a large number of electrons, and sometimes also ions, are accelerated to extremely high energies ranging from keV to MeV. The radiation can be seen as gyrosynchrotron radiation at radio wavelengths and bremsstrahlung radiation at hard X-ray wavelengths; this is the phase in which most of the energy is released. The later flash phase is defined by a rapid increase in Hα emission. The free-streaming particles travel along the magnetic field lines, propagating energy from the corona to the lower chromosphere; the material in the chromosphere is then heated and expands into the corona. Emission in the flash phase is primarily due to thermal radiation from the heated stellar atmosphere. As the material reaches the corona, the intense release of energy slows down and cooling starts. During the decay phase, which lasts from one to several hours, the corona returns to its original state.
This is the model for how an isolated star generates flares, but it is not the only way flares arise. Interactions between a star and a companion, or sometimes its environment, can also produce flares. In binary systems such as RS Canum Venaticorum variables (RS CVn), flares can be produced through interactions between the magnetic fields of the two bodies in the system. For stars that have an accretion disk, usually protostars or pre-main-sequence stars, interactions of the magnetic field between the star and the disk can also cause flares.
Nearby flare stars
Flare stars are intrinsically faint, but have been found out to distances of 1,000 light years from Earth. On April 23, 2014, NASA's Swift satellite detected the strongest, hottest, and longest-lasting sequence of stellar flares ever seen from a nearby red dwarf, DG Canum Venaticorum. The initial blast from this record-setting series of explosions was as much as 10,000 times more powerful than the largest solar flare ever recorded.
Proxima Centauri
The Sun's nearest stellar neighbor Proxima Centauri is a flare star that undergoes occasional increases in brightness because of magnetic activity. The star's magnetic field is created by convection throughout the stellar body, and the resulting flare activity generates a total X-ray emission similar to that produced by the Sun.
Wolf 359
The flare star Wolf 359 is another near neighbor (2.39 ± 0.01 parsecs). This star, also known as Gliese 406 and CN Leo, is a red dwarf of spectral class M6.5 that emits X-rays. It is a UV Ceti flare star, and has a relatively high flare rate.
The mean magnetic field has a strength of about 2.2 kG (0.22 T), but this varies significantly on time scales as short as six hours. By comparison, the magnetic field of the Sun averages 1 G (100 μT), although it can rise as high as 3 kG (0.3 T) in active sunspot regions.
Barnard's Star
Barnard's Star is the fourth nearest star to the Sun. Given its age, at 7–12 billion years of age, Barnard's Star is considerably older than the Sun. It was long assumed to be quiescent in terms of stellar activity. However, in 1998, astronomers observed an intense stellar flare, showing that Barnard's Star is a flare star.
EV Lacertae
EV Lacertae is located 16.5 light-years away, and is the nearest star in its constellation. It is a young star, about 300 million years old, and has a strong magnetic field. In 2008, it produced a record-setting flare that was thousands of times more powerful than the largest observed solar flare.
TVLM513-46546
TVLM 513-46546 is a very low mass M9 flare star, at the boundary between red dwarfs and brown dwarfs. Data from Arecibo Observatory at radio wavelengths determined that the star flares every 7054 s with a precision of one one-hundredth of a second.
2MASS J18352154-3123385 A
The more massive member of the binary star 2MASS J1835, an M6.5 star, has strong X-ray activity indicative of a flare star, although it has never been directly observed to flare.
Record-setting flares
The most powerful stellar flare detected, as of December 2005, may have come from the active binary II Peg. Its observation by Swift suggested the presence of hard X-rays in the well-established Neupert effect as seen in solar flares.
| Physical sciences | Stellar astronomy | Astronomy |
9514190 | https://en.wikipedia.org/wiki/Precious%20coral | Precious coral | Precious coral, or red coral, is the common name given to a genus of marine corals, Corallium. The distinguishing characteristic of precious corals is their durable and intensely colored red or pink-orange skeleton, which is used for making jewelry.
Habitat
Red corals grow on rocky seabottom with low sedimentation, typically in dark environments—either in the depths or in dark caverns or crevices. The original species, C. rubrum (formerly Gorgonia nobilis), is found mainly in the Mediterranean Sea. It grows at depths from 10 to 300 meters below sea level, although the shallower of these habitats have been largely depleted by harvesting. In the underwater caves of Alghero, Sardinia (the "Coral Riviera"), it grows at depth from 4 to 35 meters. The same species is also found at Atlantic sites near the Strait of Gibraltar, at the Cape Verde Islands and off the coast of southern Portugal. Other Corallium species are native to the western Pacific, notably around Japan and Taiwan; these occur at depths of 350 to 1500 meters below sea level in areas with strong currents.
Anatomy
In common with other Alcyonacea, red corals have the shape of small leafless bushes and grow up to a meter in height. Their valuable skeleton is composed of intermeshed spicules of hard calcium carbonate, colored in shades of red by carotenoid pigments. In living specimens, the skeletal branches are overlaid with soft bright red integument, from which numerous retractable white polyps protrude. The polyps exhibit octameric radial symmetry.
Species
The following are known species in the genus:
As a gemstone
The hard skeleton of red coral branches is naturally matte, but can be polished to a glassy shine. It exhibits a range of warm reddish pink colors from pale pink to deep red; the word coral is also used to name such colors. Owing to its intense and permanent coloration and glossiness, precious coral skeletons have been harvested since antiquity for decorative use. Coral jewellery has been found in ancient Egyptian and prehistoric European burials, and continues to be made to the present day. It was especially popular during the Victorian age.
Precious coral has hardness 3.5 on the Mohs scale. Due to its softness and opacity, coral is usually cut en cabochon, or used to make beads.
History of trade
At the beginning of the 1st millennium, there was significant trade in coral between the Mediterranean and India, where it was highly prized as a substance believed to be endowed with mysterious sacred properties. Pliny the Elder remarks that, before the great demand from India, the Gauls used it for the ornamentation of their weapons and helmets; but by this period, so great was the Eastern demand that it was very rarely seen even in the regions which produced it. Among the Romans, branches of coral were hung around children's necks to preserve them from danger, and the substance had many medicinal virtues attributed to it. The belief in coral's potency as a charm continued throughout the Middle Ages, and in early 20th-century Italy it was worn as a protection from the evil eye, and by women as a cure for infertility.
From the Middle Ages onward, the securing of the right to the coral fisheries off the African coasts was the object of considerable rivalry among the Mediterranean communities of Europe.
The story of Torre del Greco is so interwoven with that of coral as to constitute an inseparable pair, and it is documented as early as the fifteenth century. In 1790 the Royal Society of Coral was established in the town of Torre del Greco, with the aim of working and selling harvested coral; this shows that coral fishing had flourished in the town for many years.
A coral code (prepared by the Neapolitan jurist Michael Florio) was also enacted on December 22, 1789, by Ferdinand IV of Bourbon, with the intent of regulating coral fishing, which in those years involved not only the sailors of Torre del Greco but also locals and fishermen from Trapani.
This regulation did not have the expected success. From 1805, when Paul Bartholomew Martin (of French-Genoese origin) founded the first factory for the manufacture of coral in Torre del Greco, a golden age of coral working began in the city on the slopes of Vesuvius, as coral manufacture and coral fishing came increasingly under the control of the fishermen of Torre del Greco. From 1875, Torre del Greco began working Sciacca coral, and a school for coral working was established in the city in 1878 (it closed in 1885 and reopened in 1887); a coral museum was added to it in 1933. Later came the processing of Japanese coral bought in the markets of Chennai and Kolkata.
Elsewhere, the Tunisian fisheries were for a short period secured by Charles V for Spain, but the monopoly soon fell into the hands of the French, who held the right until the Revolutionary government threw the trade open in 1793. For a short period (about 1806) the British government controlled the fisheries, but control later returned to the French authorities. Before the French Revolution much of the coral trade was centred in Marseille, but it then largely moved to Italy, where the procuring of the raw material and its working were centred in Naples, Rome and Genoa.
In culture
The origin of coral is explained in Greek mythology by the story of Perseus. Having petrified Cetus, the sea monster threatening Andromeda, Perseus placed Medusa's head on the riverbank while he washed his hands. When he recovered her head, he saw that her blood had turned the seaweed (in some variants the reeds) into red coral. Thus, the Greek word for coral is 'Gorgeia', as Medusa was one of the three Gorgons.
Poseidon resided in a palace made of coral and gems, and Hephaestus first crafted his work from coral.
The Romans believed coral could protect children from harm, as well as cure wounds made by snakes and scorpions and diagnose diseases by changing colour.
In Hindu astrology red coral is associated with the planet Mars or Graha-Mangala and used for pleasing Mars. It should be worn on the ring finger.
A branch of red coral figures prominently in the civic coat of arms of the town of Alghero, Italy.
Amongst the Yoruba and Bini peoples of West Africa, red precious coral jewellery (necklaces, wristlets and anklets most especially) are signifiers of high social rank, and are worn as a result by titled kings and chieftains.
In traditional Dutch culture, notably in fishing communities, red coral necklaces were worn by the female population as an indispensable part of the traditional costumes.
Conservation
Intensive fishing, particularly in shallow waters, has damaged this species along the Mediterranean coastline, where colonies at depths of less than 50 metres are much diminished. Fishing and climate change threaten their persistence. The three oldest Mediterranean marine protected areas—Banyuls, Carry-le-Rouet and Scandola, off the island of Corsica—all host substantial populations of C. rubrum. Since protection was established, colonies have grown in size and number at shallow and deeper depths.
| Biology and health sciences | Cnidarians | Animals |
1027282 | https://en.wikipedia.org/wiki/Moeritherium | Moeritherium | Moeritherium ("the beast from Lake Moeris") is an extinct genus of basal proboscideans from the Eocene of North and West Africa. The first specimen was discovered in strata from the Fayum fossil deposits of Egypt. It was named in 1901 by Charles William Andrews, who suggested that it was an early proboscidean, perhaps ancestral to mastodons, although subsequent workers considered it everything from a relative of manatees to a close relative of both clades' common ancestor. Currently, Moeritherium is seen as a proboscidean that, while fairly basal, predates the split between elephantiforms and deinotheres. Seven species have been named, though only three (M. lyonsi, M. gracile, and M. chehbeurameuri), are currently considered valid.
Moeritherium is unusual even by basal proboscidean standards. Like many later members of the group, it had two sets of tusks: those on the upper jaw pointed downwards, while those of the mandible (lower jaw) were flat and formed a spade shape. In addition to these tusks, it retained its upper canines, though it had lost the lower set. The morphology of the skull, particularly the nasal cavity (which was only slightly retracted), suggests that Moeritherium lacked a trunk. It may have instead possessed a small, tapir-like proboscis, formed from the fusion of the upper lip and the nose, an evolutionary precursor of trunks. Though poorly described in the literature, Moeritherium's torso is known to have been very long, and its limbs were short. These divergent traits have led to comparisons with desmostylians, a lineage of extinct mammals formerly believed to have been relatives of manatees.
Moeritherium has been suggested to have led a semi-aquatic lifestyle. While this originally stemmed from perceived similarities to sirenians (manatees and dugongs), morphological data and isotope analysis has since lent it a great deal of support. The elongated body of Moeritherium, and the high position of its eyes and ears, are likely a result of its lifestyle, and its unusual dentition is likely an adaptation for feeding on water plants.
Taxonomy
Early history
The type species of Moeritherium, M. lyonsi, was discovered in strata belonging to the Qasr el Sagha Formation in the Fayum fossil deposits of Egypt. The type specimen (CGM C.10000) consists of an almost complete mandible. It was described in 1901 by Charles William Andrews, who proposed two hypotheses for its phylogenetic position: either Moeritherium was part of the obsolete order Amblypoda, or it was an early proboscidean, perhaps "a generalised forerunner of the Mastodon type". In any case, he regarded it as an ungulate.
Additional species
In 1902, after conducting a more thorough examination of specimens collected by himself and his colleague Hugh John Llewellyn Beadnell, Andrews named a second species from the Qasr el Sagha, M. gracile; a third was recognised in the same paper, though he did not provide a name, referring to it simply as M. sp. The two species were distinguished from M. lyonsi by a more gracile build and a larger body size, respectively. The lack of overlapping material has made it difficult to determine how M. gracile actually relates to M. lyonsi, as their holotypes consist of different skull elements; the type specimen of the former (CGM C.10003) is a palate with no associated lower teeth. Regardless, they are treated as belonging to the same genus, and are likely separate species. Two years later, a fourth taxon, M. trigodon, was described, also by Andrews, based on remains recovered from the "fluvio-marine beds" (equivalent to the Jebel Qatrani Formation) around the lake Birket-el-Qurun. In 1955, over half a century after the genus' initial naming, the Sri Lankan artist and palaeontologist Paulus Edward Pieris Deraniyagala named two additional species, M. latidens and M. pharaonensis, based on isolated mandibular fragments.
In 1911, the German zoologist Max Schlosser divided M. lyonsi into two species: M. lyonsi, restricted to the Qasr el Sagha Formation, and M. andrewsi, restricted to the Jebel Qatrani. This classification, however, has been rejected. In 1971, the German zoologist Heinz Tobien opted to synonymise the entire genus with M. lyonsi, though he chose to altogether disregard Deraniyagala's species, likely because they were poorly diagnostic. In 2006, Cyrile Delmer et al. published a paper describing a new Moeritherium species, M. chehbeurameuri, from Bir El Ater, Algeria. Although the paper was not intended as a systematic revision, they treated most of the species named above (with the exception of M. latidens and M. pharaonensis) as valid, and recognised at least three: the type species M. lyonsi, M. gracile, and M. chehbeurameuri.
Classification
Henry Fairfield Osborn, in 1909, suggested that Moeritherium was more similar to sirenians (manatees and dugongs, and their extinct kin) than to any living or extinct proboscidean. In 1921, however, he rejected this view, and divided Proboscidea into four suborders or superfamilies: Moeritherioidea, Deinotherioidea, Mastodontoidea, and Elephantoidea. In a 1988 paper discussing the systematics of proboscideans, Pascal Tassy abandoned this system and neglected to provide any superfamily-rank clades. Erecting the suborder Elephantiformes, Tassy placed Moeritherium outside it, alongside Barytherium, Numidotherium, and the Deinotheriidae. He considered Moeritherium among the most basal proboscideans, with Numidotherium being the most basal and Barytherium being only slightly less basal than that. In a 2021 paper describing a new genus (Dagbatitherium tassyi), Lionel Hautier et al. ran a phylogenetic analysis which recovered Moeritherium as sister to a clade including deinotheres and elephantiforms.
A cladogram of Proboscidea based on the phylogenetic analysis of Hautier et al. 2021 is below:
Description
Moeritherium was a fairly small, very elongate taxon, smaller than most later proboscideans. Estimates of body length, shoulder height and body mass have been published for the species M. lyonsi, but Moeritherium exhibited strong size-based sexual dimorphism, so any single estimate should be considered a crude average.
Skull and dentition
The skull of Moeritherium was long, slender, and very low for the entirety of its length. The cranial region is nearly twice as long as the facial region. The orbit (eye socket) occupied a fairly anterodorsal position, meaning that it sat towards the front and top of the skull, and resembled that of sirenians. Unlike later proboscideans, the naris (nasal cavity) was fairly close to the front of the skull, which, in conjunction with the length of the mandible, suggests that a conventional trunk was absent in Moeritherium. It may have instead possessed a wide, mobile unit comprising the nose and upper lip, similar to the proboscis of modern tapirs. The external ear would have been high up on the skull, which may have been an adaptation for a semiaquatic lifestyle; the same, however, is observed in other proboscideans that are unlikely to have been aquatic, such as Gomphotherium and Palaeomastodon.
Moeritherium retained a fuller complement of teeth than later proboscideans. The first lower incisors sit close together, forming a spade shape, while the equivalent teeth on the upper jaw, actually the second incisors (as in later genera), were modified into short, curved tusks. Moeritherium still retained the first and third upper incisors, and the upper canines, though in a highly reduced form. The cheek teeth (the premolars and molars) were bunodont, bearing rounded cusps, though they were also lophodont, bearing large ridges called lophs between the cusps. The premolars are large and broad relative to the molars, a condition not seen in more derived proboscideans, though similar to that in manatees.
Postcranial skeleton
The postcranial anatomy of Moeritherium has been compared to that of desmostylians and to Pezosiren. Both have an extremely elongated, broad torso, possibly an adaptation for diving in both taxa. Little of the cervical (neck) series is known, save for the atlas and some of the middle cervical vertebrae. Most of the vertebral column, save for some cervical vertebrae and one of the thoracic (upper body) vertebrae, is known from a specimen that was at some point catalogued as C. 10005, probably belonging to M. lyonsi. A more complete specimen of Moeritherium is known, though it has not been described in detail. As in modern proboscideans, there are twenty-three presacral vertebrae (those preceding the sacrum). The lumbar (lower back) region was proportionally longer than in modern proboscideans, while the thoracic region was slightly shorter. Moeritherium's limbs were extremely short compared to those of later taxa, being roughly half as large, proportionally, as those of extant elephants.
Palaeobiology
Lifestyle
The notion of Moeritherium being semi-aquatic dates as far back as 1909, when Henry Fairfield Osborn suggested that it was not only related to sirenians but resembled them in habits. In his 1923 paper discussing the genus' morphology, the Japanese zoologist Matsumoto Hikoshichirō listed adaptations that indicated a semi-aquatic lifestyle (such as the high position of the eyes and ears), though he also listed several features as evidence against it (such as the dentition, which to him seemed better suited to a terrestrial forager); in his view, Moeritherium was unlikely to be semi-aquatic. However, similarities with desmostylians have been noted in the postcranial skeleton, and its unusual limb proportions have been cited as the product of a semi-aquatic lifestyle. In 2008, stable isotope analysis lent further credence to the semi-aquatic model: Moeritherium's oxygen isotope ratios more closely resembled those of semi-aquatic mammals than those of fully terrestrial ones, and it was suggested that Moeritherium likely consumed freshwater plants.
Palaeoenvironment
The environment of the Jebel Qatrani Formation, from which some specimens of Moeritherium are known, has been described as a subtropical to tropical lowland plain by Bown, who further suggests the presence of streams and ponds.
Based on the occurrence of birds that are associated with water (such as ospreys, early flamingos, jacanas, herons, storks, cormorants and shoebills), Rasmussen and colleagues similarly inferred that the environment featured slow-moving freshwater with a substantial amount of aquatic vegetation. Although the lithology suggests that most fossils were deposited on sandbanks after being transported by currents, the authors argue that swamps could easily have formed along the banks of the river that was present during the Oligocene and may account for the mudstone found in certain quarries. They furthermore suggest that the fossil birds of Fayum, due to their affinities with modern groups, should be considered a more valuable indicator of the environment than the fossil mammals, many of which belonged to families lacking modern examples. The absence of other birds typical of such an environment may be explained either by sampling bias or by the fact that those groups were simply not yet present in Oligocene Africa. Generally, Rasmussen and colleagues compare the environment of Jebel Qatrani to freshwater habitats in modern Central Africa. The discovery of snakehead fossils seems to support Rasmussen's interpretation, as Parachanna today prefers slow-moving backwaters with plenty of vegetation. Other fish present, notably Tylochromis, meanwhile suggest that deep, open water was likewise present. The river channels may have been overgrown with reeds and papyrus, and featured floating vegetation such as water lilies and Salvinia.
In a 2001 paper Rasmussen et al. argued that the sandstone and mudstone of the formation likely formed as sediments were aggraded by a system of river channels that emptied towards the west into the Tethys. Here they reconstructed the environment as a tropical lowland swamp forest intermingled with marshes. They furthermore suggest that the environment would have experienced monsoons.
Overall this indicates that this region was a part of an extensive belt of tropical forest that stretched across what is now northern Africa, which would gradually give rise to open woodland and even steppe the further one was to travel inland.
| Biology and health sciences | Proboscidea | Animals |