https://en.wikipedia.org/wiki/Chi-squared%20distribution
Chi-squared distribution
In probability theory and statistics, the chi-squared distribution (also chi-square or $\chi^2$-distribution) with $k$ degrees of freedom is the distribution of a sum of the squares of $k$ independent standard normal random variables. The chi-squared distribution is a special case of the gamma distribution and the univariate Wishart distribution. Specifically if $X \sim \chi^2_k$ then $X \sim \text{Gamma}(\alpha = k/2, \theta = 2)$ (where $\alpha$ is the shape parameter and $\theta$ the scale parameter of the gamma distribution) and $X \sim \text{W}_1(1, k)$. The scaled chi-squared distribution $s^2\chi^2_k$ is a reparametrization of the gamma distribution and the univariate Wishart distribution. Specifically if $X \sim s^2\chi^2_k$ then $X \sim \text{Gamma}(\alpha = k/2, \theta = 2s^2)$ and $X \sim \text{W}_1(s^2, k)$.

The chi-squared distribution is one of the most widely used probability distributions in inferential statistics, notably in hypothesis testing and in construction of confidence intervals. This distribution is sometimes called the central chi-squared distribution, a special case of the more general noncentral chi-squared distribution.

The chi-squared distribution is used in the common chi-squared tests for goodness of fit of an observed distribution to a theoretical one, the independence of two criteria of classification of qualitative data, and in finding the confidence interval for estimating the population standard deviation of a normal distribution from a sample standard deviation. Many other statistical tests also use this distribution, such as Friedman's analysis of variance by ranks.

Definitions

If $Z_1, \ldots, Z_k$ are independent, standard normal random variables, then the sum of their squares,

$$Q = \sum_{i=1}^k Z_i^2,$$

is distributed according to the chi-squared distribution with $k$ degrees of freedom. This is usually denoted as

$$Q \sim \chi^2(k) \quad \text{or} \quad Q \sim \chi^2_k.$$

The chi-squared distribution has one parameter: a positive integer $k$ that specifies the number of degrees of freedom (the number of random variables being summed, $Z_i$s).

Introduction

The chi-squared distribution is used primarily in hypothesis testing, and to a lesser extent for confidence intervals for population variance when the underlying distribution is normal. Unlike more widely known distributions such as the normal distribution and the exponential distribution, the chi-squared distribution is not as often applied in the direct modeling of natural phenomena. It arises in the following hypothesis tests, among others:

Chi-squared test of independence in contingency tables
Chi-squared test of goodness of fit of observed data to hypothetical distributions
Likelihood-ratio test for nested models
Log-rank test in survival analysis
Cochran–Mantel–Haenszel test for stratified contingency tables
Wald test
Score test

It is also a component of the definition of the t-distribution and the F-distribution used in t-tests, analysis of variance, and regression analysis.

The primary reason for which the chi-squared distribution is extensively used in hypothesis testing is its relationship to the normal distribution. Many hypothesis tests use a test statistic, such as the t-statistic in a t-test. For these hypothesis tests, as the sample size $n$ increases, the sampling distribution of the test statistic approaches the normal distribution (central limit theorem). Because the test statistic (such as $t$) is asymptotically normally distributed, provided the sample size is sufficiently large, the distribution used for hypothesis testing may be approximated by a normal distribution. Testing hypotheses using a normal distribution is well understood and relatively easy. The simplest chi-squared distribution is the square of a standard normal distribution. So wherever a normal distribution could be used for a hypothesis test, a chi-squared distribution could be used.
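The definition above is easy to check numerically. A minimal sketch, assuming NumPy is available, draws sums of $k$ squared standard normals and compares the empirical mean and variance with the theoretical values $k$ and $2k$:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_samples = 5, 100_000

# Each row holds k independent standard normal draws; sum the squares per row.
Z = rng.standard_normal((n_samples, k))
Q = (Z ** 2).sum(axis=1)

# A chi-squared variable with k degrees of freedom has mean k and variance 2k.
print(Q.mean())  # close to 5
print(Q.var())   # close to 10
```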
Suppose that $Z$ is a random variable sampled from the standard normal distribution, where the mean is $0$ and the variance is $1$: $Z \sim N(0, 1)$. Now, consider the random variable $Q = Z^2$. The distribution of the random variable $Q$ is an example of a chi-squared distribution: $Q \sim \chi^2_1$. The subscript 1 indicates that this particular chi-squared distribution is constructed from only 1 standard normal distribution. A chi-squared distribution constructed by squaring a single standard normal distribution is said to have 1 degree of freedom. Thus, as the sample size for a hypothesis test increases, the distribution of the test statistic approaches a normal distribution. Just as extreme values of the normal distribution have low probability (and give small p-values), extreme values of the chi-squared distribution have low probability.

An additional reason that the chi-squared distribution is widely used is that it turns up as the large sample distribution of generalized likelihood ratio tests (LRT). LRTs have several desirable properties; in particular, simple LRTs commonly provide the highest power to reject the null hypothesis (Neyman–Pearson lemma) and this leads also to optimality properties of generalised LRTs. However, the normal and chi-squared approximations are only valid asymptotically. For this reason, it is preferable to use the t distribution rather than the normal approximation or the chi-squared approximation for a small sample size. Similarly, in analyses of contingency tables, the chi-squared approximation will be poor for a small sample size, and it is preferable to use Fisher's exact test. Ramsey shows that the exact binomial test is always more powerful than the normal approximation.

Lancaster shows the connections among the binomial, normal, and chi-squared distributions, as follows. De Moivre and Laplace established that a binomial distribution could be approximated by a normal distribution. Specifically they showed the asymptotic normality of the random variable

$$\chi = \frac{m - Np}{\sqrt{Npq}},$$

where $m$ is the observed number of successes in $N$ trials, where the probability of success is $p$, and $q = 1 - p$. Squaring both sides of the equation gives

$$\chi^2 = \frac{(m - Np)^2}{Npq}.$$

Using $N = Np + Nq$, $N = m + (N - m)$, and $q = 1 - p$, this equation can be rewritten as

$$\chi^2 = \frac{(m - Np)^2}{Np} + \frac{(N - m - Nq)^2}{Nq}.$$

The expression on the right is of the form that Karl Pearson would generalize to the form

$$\chi^2 = \sum_{i=1}^n \frac{(O_i - E_i)^2}{E_i},$$

where $\chi^2$ = Pearson's cumulative test statistic, which asymptotically approaches a $\chi^2$ distribution; $O_i$ = the number of observations of type $i$; $E_i = Np_i$ = the expected (theoretical) frequency of type $i$, asserted by the null hypothesis that the fraction of type $i$ in the population is $p_i$; and $n$ = the number of cells in the table.

In the case of a binomial outcome (flipping a coin), the binomial distribution may be approximated by a normal distribution (for sufficiently large $N$). Because the square of a standard normal distribution is the chi-squared distribution with one degree of freedom, the probability of a result such as 1 heads in 10 trials can be approximated either by using the normal distribution directly, or the chi-squared distribution for the normalised, squared difference between observed and expected value. However, many problems involve more than the two possible outcomes of a binomial, and instead require 3 or more categories, which leads to the multinomial distribution.
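For the coin-flipping case just described, the squared normal approximation and Pearson's two-cell statistic agree exactly. A short sketch (NumPy assumed; the data are the illustrative 1-head-in-10-trials example from above):

```python
import numpy as np

# Illustrative data from the text: 1 head observed in N = 10 fair-coin trials.
N, m, p = 10, 1, 0.5
q = 1 - p

# Squared normal approximation to the binomial: (m - Np)^2 / (Npq).
chi_normal = (m - N * p) ** 2 / (N * p * q)

# Pearson's statistic summed over the two cells (heads, tails).
observed = np.array([m, N - m])
expected = np.array([N * p, N * q])
chi_pearson = ((observed - expected) ** 2 / expected).sum()

print(chi_normal, chi_pearson)  # both equal 6.4
```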
Just as de Moivre and Laplace sought for and found the normal approximation to the binomial, Pearson sought for and found a degenerate multivariate normal approximation to the multinomial distribution (the numbers in each category add up to the total sample size, which is considered fixed). Pearson showed that the chi-squared distribution arose from such a multivariate normal approximation to the multinomial distribution, taking careful account of the statistical dependence (negative correlations) between numbers of observations in different categories.

Probability density function

The probability density function (pdf) of the chi-squared distribution is

$$f(x; k) = \begin{cases} \dfrac{x^{k/2-1} e^{-x/2}}{2^{k/2}\, \Gamma(k/2)}, & x > 0; \\ 0, & \text{otherwise,} \end{cases}$$

where $\Gamma(k/2)$ denotes the gamma function, which has closed-form values for integer $k$. For derivations of the pdf in the cases of one, two and $k$ degrees of freedom, see Proofs related to chi-squared distribution.

Cumulative distribution function

Its cumulative distribution function is:

$$F(x; k) = \frac{\gamma(k/2,\, x/2)}{\Gamma(k/2)} = P\!\left(\frac{k}{2}, \frac{x}{2}\right),$$

where $\gamma(s, t)$ is the lower incomplete gamma function and $P(s, t)$ is the regularized gamma function. In a special case of $k = 2$ this function has the simple form:

$$F(x; 2) = 1 - e^{-x/2},$$

which can be easily derived by integrating the density $f(x; 2) = \tfrac{1}{2} e^{-x/2}$ directly. The integer recurrence of the gamma function makes it easy to compute $F(x; k)$ for other small, even $k$. Tables of the chi-squared cumulative distribution function are widely available and the function is included in many spreadsheets and all statistical packages.

Letting $z \equiv x/k$, Chernoff bounds on the lower and upper tails of the CDF may be obtained. For the cases when $0 < z < 1$ (which include all of the cases when this CDF is less than half):

$$F(zk; k) \leq (z e^{1-z})^{k/2}.$$

The tail bound for the cases when $z > 1$, similarly, is

$$1 - F(zk; k) \leq (z e^{1-z})^{k/2}.$$

For another approximation for the CDF modeled after the cube of a Gaussian, see under Noncentral chi-squared distribution.

Properties

Cochran's theorem

The following is a special case of Cochran's theorem.

Theorem. If $Z_1, \ldots, Z_n$ are independent identically distributed (i.i.d.), standard normal random variables, then

$$\sum_{i=1}^n (Z_i - \bar{Z})^2 \sim \chi^2_{n-1}, \qquad \text{where } \bar{Z} = \frac{1}{n} \sum_{i=1}^n Z_i.$$

Proof. Let $Z \sim N(0, I_n)$ be a vector of $n$ independent normally distributed random variables, and $\bar{Z}$ their average. Then

$$\sum_{i=1}^n (Z_i - \bar{Z})^2 = Z^\top \left(I - \tfrac{1}{n}\mathbf{1}\mathbf{1}^\top\right) Z,$$

where $I$ is the identity matrix and $\mathbf{1}$ the all ones vector. The matrix $I - \tfrac{1}{n}\mathbf{1}\mathbf{1}^\top$ has one eigenvector $q_1 = \tfrac{1}{\sqrt{n}}\mathbf{1}$ with eigenvalue $0$, and $n-1$ eigenvectors $q_2, \ldots, q_n$ (all orthogonal to $q_1$) with eigenvalue $1$, which can be chosen so that $Q = (q_1, \ldots, q_n)$ is an orthogonal matrix. Since also $X = Q^\top Z \sim N(0, I_n)$, we have

$$\sum_{i=1}^n (Z_i - \bar{Z})^2 = \sum_{i=2}^n X_i^2 \sim \chi^2_{n-1},$$

which proves the claim.

Additivity

It follows from the definition of the chi-squared distribution that the sum of independent chi-squared variables is also chi-squared distributed. Specifically, if $X_1, \ldots, X_n$ are independent chi-squared variables with $k_1, \ldots, k_n$ degrees of freedom, respectively, then $Y = X_1 + \cdots + X_n$ is chi-squared distributed with $k_1 + \cdots + k_n$ degrees of freedom.

Sample mean

The sample mean of $n$ i.i.d. chi-squared variables of degree $k$ is distributed according to a gamma distribution with shape $\alpha$ and scale $\theta$ parameters:

$$\bar{X} = \frac{1}{n} \sum_{i=1}^n X_i \sim \text{Gamma}\!\left(\alpha = \frac{nk}{2},\ \theta = \frac{2}{n}\right), \qquad X_i \sim \chi^2(k).$$

Asymptotically, given that for a shape parameter $\alpha$ going to infinity, a Gamma distribution converges towards a normal distribution with expectation $\mu = \alpha\theta$ and variance $\sigma^2 = \alpha\theta^2$, the sample mean converges towards:

$$\bar{X} \xrightarrow{n \to \infty} N\!\left(\mu = k,\ \sigma^2 = \frac{2k}{n}\right).$$

Note that we would have obtained the same result invoking instead the central limit theorem, noting that for each chi-squared variable of degree $k$ the expectation is $k$, and its variance $2k$ (and hence the variance of the sample mean $\bar{X}$ being $\tfrac{2k}{n}$).

Entropy

The differential entropy is given by

$$h = \frac{k}{2} + \ln\!\left(2\,\Gamma\!\left(\frac{k}{2}\right)\right) + \left(1 - \frac{k}{2}\right)\psi\!\left(\frac{k}{2}\right),$$

where $\psi(x)$ is the Digamma function. The chi-squared distribution is the maximum entropy probability distribution for a random variate $X$ for which $\operatorname{E}(X) = k$ and $\operatorname{E}(\ln X) = \psi(k/2) + \ln 2$ are fixed. Since the chi-squared is in the family of gamma distributions, this can be derived by substituting appropriate values in the Expectation of the log moment of gamma. For derivation from more basic principles, see the derivation in moment-generating function of the sufficient statistic.
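The density and the $k = 2$ closed form above are easy to sanity-check against a library implementation. A minimal sketch, assuming SciPy is installed:

```python
import numpy as np
from scipy.stats import chi2
from scipy.special import gamma

def chi2_pdf(x, k):
    """Density x^(k/2-1) e^(-x/2) / (2^(k/2) Gamma(k/2)) for x > 0."""
    return x ** (k / 2 - 1) * np.exp(-x / 2) / (2 ** (k / 2) * gamma(k / 2))

x = 3.0
print(chi2_pdf(x, 4), chi2.pdf(x, 4))      # the two densities agree
print(chi2.cdf(x, 2), 1 - np.exp(-x / 2))  # closed-form CDF for k = 2
```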
Noncentral moments

The noncentral moments (raw moments) of a chi-squared distribution with $k$ degrees of freedom are given by

$$\operatorname{E}(X^m) = k(k+2)(k+4)\cdots(k+2m-2) = 2^m \frac{\Gamma(m + k/2)}{\Gamma(k/2)}.$$

Cumulants

The cumulants are readily obtained by a power series expansion of the logarithm of the characteristic function:

$$\kappa_n = 2^{n-1}(n-1)!\,k,$$

with cumulant generating function $\ln \operatorname{E}\!\left[e^{tX}\right] = -\tfrac{k}{2}\ln(1 - 2t)$.

Concentration

The chi-squared distribution exhibits strong concentration around its mean. The standard Laurent-Massart bounds are:

$$\operatorname{P}\!\left(X - k \geq 2\sqrt{kx} + 2x\right) \leq e^{-x},$$
$$\operatorname{P}\!\left(k - X \geq 2\sqrt{kx}\right) \leq e^{-x}.$$

One consequence is that, if $X$ is a gaussian random vector in $\mathbb{R}^k$, then as the dimension $k$ grows, the squared length of the vector is concentrated tightly around $k$ with a width $k^{1/2+\alpha}$, where the exponent $\alpha$ can be chosen as any value in $(0, 1/2)$. Since the cumulant generating function for $\chi^2(k)$ is $K(t) = -\tfrac{k}{2}\ln(1 - 2t)$, and its convex dual is $K^*(x) = \tfrac{1}{2}\left(x - k - k\ln\tfrac{x}{k}\right)$, the standard Chernoff bound yields

$$\operatorname{P}(X \geq x) \leq e^{-K^*(x)} \ \text{for } x > k, \qquad \operatorname{P}(X \leq x) \leq e^{-K^*(x)} \ \text{for } x < k.$$

By the union bound, both tails can be controlled simultaneously. This result is used in proving the Johnson–Lindenstrauss lemma.

Asymptotic properties

By the central limit theorem, because the chi-squared distribution is the sum of $k$ independent random variables with finite mean and variance, it converges to a normal distribution for large $k$. For many practical purposes, for $k > 50$ the distribution is sufficiently close to a normal distribution, so the difference is ignorable. Specifically, if $X \sim \chi^2(k)$, then as $k$ tends to infinity, the distribution of $(X - k)/\sqrt{2k}$ tends to a standard normal distribution. However, convergence is slow as the skewness is $\sqrt{8/k}$ and the excess kurtosis is $12/k$.

The sampling distribution of $\ln(\chi^2)$ converges to normality much faster than the sampling distribution of $\chi^2$, as the logarithmic transform removes much of the asymmetry.

Other functions of the chi-squared distribution converge more rapidly to a normal distribution. Some examples are:

If $X \sim \chi^2(k)$ then $\sqrt{2X}$ is approximately normally distributed with mean $\sqrt{2k - 1}$ and unit variance (1922, by R. A. Fisher, see (18.23), p. 426 of Johnson).

If $X \sim \chi^2(k)$ then $\sqrt[3]{X/k}$ is approximately normally distributed with mean $1 - \tfrac{2}{9k}$ and variance $\tfrac{2}{9k}$. This is known as the Wilson–Hilferty transformation, see (18.24), p. 426 of Johnson. This normalizing transformation leads directly to the commonly used median approximation $k\left(1 - \tfrac{2}{9k}\right)^3$ by back-transforming from the mean, which is also the median, of the normal distribution.
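The Wilson–Hilferty median approximation quoted above can be checked directly against a numerical quantile. A small sketch, assuming SciPy is available:

```python
from scipy.stats import chi2

# Wilson–Hilferty back-transform: the median of chi2_k is roughly k(1 - 2/(9k))^3.
for k in (1, 5, 10, 50):
    approx = k * (1 - 2 / (9 * k)) ** 3
    exact = chi2.median(k)
    print(k, round(approx, 4), round(exact, 4))  # approximation improves as k grows
```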
Related distributions

As $k \to \infty$, $(\chi^2_k - k)/\sqrt{2k} \to N(0, 1)$ (normal distribution).

$\chi^2_k \sim \chi'^2_k(0)$ (noncentral chi-squared distribution with non-centrality parameter $\lambda = 0$).

If $Y \sim \mathrm{F}(\nu_1, \nu_2)$ then $X = \lim_{\nu_2 \to \infty} \nu_1 Y$ has the chi-squared distribution $\chi^2_{\nu_1}$. As a special case, if $Y \sim \mathrm{F}(1, \nu_2)$ then $X = \lim_{\nu_2 \to \infty} Y$ has the chi-squared distribution $\chi^2_1$.

$\|N_{i=1,\ldots,k}(0, 1)\|^2 \sim \chi^2_k$ (the squared norm of $k$ standard normally distributed variables is a chi-squared distribution with $k$ degrees of freedom).

If $X \sim \chi^2(\nu)$ and $c > 0$, then $cX \sim \text{Gamma}(\nu/2, 2c)$ (gamma distribution).

If $X \sim \chi^2_k$ then $\sqrt{X} \sim \chi_k$ (chi distribution).

If $X \sim \chi^2(2)$, then $X \sim \text{Exp}(1/2)$ is an exponential distribution. (See gamma distribution for more.)

If $X \sim \chi^2(2k)$, then $X \sim \text{Erlang}(k, 1/2)$ is an Erlang distribution. Conversely, if $X \sim \text{Erlang}(k, \lambda)$, then $2\lambda X \sim \chi^2(2k)$.

If $X \sim \text{Rayleigh}(1)$ (Rayleigh distribution) then $X^2 \sim \chi^2(2)$.

If $X \sim \text{Maxwell}(1)$ (Maxwell distribution) then $X^2 \sim \chi^2(3)$.

If $X \sim \chi^2(\nu)$ then $\tfrac{1}{X} \sim \text{Inv-}\chi^2(\nu)$ (Inverse-chi-squared distribution).

The chi-squared distribution is a special case of type III Pearson distribution.

If $X \sim \chi^2(\nu_1)$ and $Y \sim \chi^2(\nu_2)$ are independent then $\tfrac{X}{X + Y} \sim \text{Beta}\!\left(\tfrac{\nu_1}{2}, \tfrac{\nu_2}{2}\right)$ (beta distribution).

If $X \sim \mathrm{U}(0, 1)$ (uniform distribution) then $-2\ln(X) \sim \chi^2(2)$; by additivity, if $X_1, \ldots, X_n \sim \mathrm{U}(0, 1)$ are independent then $-2\sum_{i=1}^n \ln(X_i) \sim \chi^2(2n)$.

If $X$ follows the generalized normal distribution (version 1), a suitably normalized power of $|X - \mu|$ is likewise chi-squared distributed.

The chi-squared distribution is a transformation of the Pareto distribution.

Student's t-distribution is a transformation of the chi-squared distribution. Student's t-distribution can be obtained from the chi-squared distribution and the normal distribution.

The noncentral beta distribution can be obtained as a transformation of the chi-squared distribution and the noncentral chi-squared distribution. The noncentral t-distribution can be obtained from the normal distribution and the chi-squared distribution.

A chi-squared variable with $k$ degrees of freedom is defined as the sum of the squares of $k$ independent standard normal random variables. If $Y$ is a $k$-dimensional Gaussian random vector with mean vector $\mu$ and rank $k$ covariance matrix $C$, then $X = (Y - \mu)^\top C^{-1} (Y - \mu)$ is chi-squared distributed with $k$ degrees of freedom.

The sum of squares of statistically independent unit-variance Gaussian variables which do not have mean zero yields a generalization of the chi-squared distribution called the noncentral chi-squared distribution.

If $Z$ is a vector of $k$ i.i.d. standard normal random variables and $A$ is a $k \times k$ symmetric, idempotent matrix with rank $n$, then the quadratic form $Z^\top A Z$ is chi-square distributed with $n$ degrees of freedom.

If $\Sigma$ is a $p \times p$ positive-semidefinite covariance matrix with strictly positive diagonal entries, a related chi-squared bound holds for quadratic forms in $X \sim N(0, \Sigma)$ weighted by a random $p$-vector $w$, independent of $X$, whose nonnegative entries sum to one.

The chi-squared distribution is also naturally related to other distributions arising from the Gaussian. In particular, $Y$ is F-distributed, $Y \sim \mathrm{F}(k_1, k_2)$, if $Y = \frac{X_1/k_1}{X_2/k_2}$, where $X_1 \sim \chi^2(k_1)$ and $X_2 \sim \chi^2(k_2)$ are statistically independent. If $X_1 \sim \chi^2(k_1)$ and $X_2 \sim \chi^2(k_2)$ are statistically independent, then $X_1 + X_2 \sim \chi^2(k_1 + k_2)$. If $X_1$ and $X_2$ are not independent, then $X_1 + X_2$ is not chi-square distributed.

Generalizations

The chi-squared distribution is obtained as the sum of the squares of $k$ independent, zero-mean, unit-variance Gaussian random variables. Generalizations of this distribution can be obtained by summing the squares of other types of Gaussian random variables. Several such distributions are described below.

Linear combination

If $X_1, \ldots, X_n$ are chi square random variables and $a_1, \ldots, a_n \in \mathbb{R}_{>0}$, then the distribution of $X = \sum_{i=1}^n a_i X_i$ is a special case of a Generalized Chi-squared Distribution. A closed expression for this distribution is not known. It may be, however, approximated efficiently using the property of characteristic functions of chi-square random variables.

Chi-squared distributions

Noncentral chi-squared distribution

The noncentral chi-squared distribution is obtained from the sum of the squares of independent Gaussian random variables having unit variance and nonzero means.

Generalized chi-squared distribution

The generalized chi-squared distribution is obtained from the quadratic form $z^\top A z$ where $z$ is a zero-mean Gaussian vector having an arbitrary covariance matrix, and $A$ is an arbitrary matrix.

Gamma, exponential, and related distributions

The chi-squared distribution $X \sim \chi^2(\nu)$ is a special case of the gamma distribution, in that $X \sim \text{Gamma}\!\left(\tfrac{\nu}{2}, \tfrac{1}{2}\right)$ using the rate parameterization of the gamma distribution (or $X \sim \text{Gamma}\!\left(\tfrac{\nu}{2}, 2\right)$ using the scale parameterization of the gamma distribution) where $\nu$ is an integer. Because the exponential distribution is also a special case of the gamma distribution, we also have that if $X \sim \chi^2(2)$, then $X \sim \text{Exp}\!\left(\tfrac{1}{2}\right)$ is an exponential distribution.
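The exponential special case above is easy to verify: the $\chi^2(2)$ CDF, $1 - e^{-x/2}$, is exactly the exponential CDF with scale 2 (rate 1/2). A quick sketch, assuming SciPy:

```python
import numpy as np
from scipy.stats import chi2, expon

# chi-squared with 2 degrees of freedom coincides with Exp(rate 1/2), i.e. scale 2.
x = np.linspace(0.1, 10, 5)
print(chi2.cdf(x, 2))
print(expon.cdf(x, scale=2))  # identical values
```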
The Erlang distribution is also a special case of the gamma distribution and thus we also have that if $X \sim \chi^2(\nu)$ with even $\nu$, then $X$ is Erlang distributed with shape parameter $\nu/2$ and scale parameter $2$.

Occurrence and applications

The chi-squared distribution has numerous applications in inferential statistics, for instance in chi-squared tests and in estimating variances. It enters the problem of estimating the mean of a normally distributed population and the problem of estimating the slope of a regression line via its role in Student's t-distribution. It enters all analysis of variance problems via its role in the F-distribution, which is the distribution of the ratio of two independent chi-squared random variables, each divided by their respective degrees of freedom.

Following are some of the most common situations in which the chi-squared distribution arises from a Gaussian-distributed sample: if $X_1, \ldots, X_n$ are i.i.d. $N(\mu, \sigma^2)$ random variables, then $\sum_{i=1}^n (X_i - \bar{X})^2 \sim \sigma^2 \chi^2_{n-1}$, where $\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i$. The box below shows some statistics based on independent normally distributed random variables that have probability distributions related to the chi-squared distribution.

The chi-squared distribution is also often encountered in magnetic resonance imaging.

Computational methods

Table of $\chi^2$ values vs $p$-values

The $p$-value is the probability of observing a test statistic at least as extreme in a chi-squared distribution. Accordingly, since the cumulative distribution function (CDF) for the appropriate degrees of freedom (df) gives the probability of having obtained a value less extreme than this point, subtracting the CDF value from 1 gives the $p$-value. A low $p$-value, below the chosen significance level, indicates statistical significance, i.e., sufficient evidence to reject the null hypothesis. A significance level of 0.05 is often used as the cutoff between significant and non-significant results.

The table below gives a number of $p$-values matching to $\chi^2$ for the first 10 degrees of freedom. These values can be calculated evaluating the quantile function (also known as "inverse CDF" or "ICDF") of the chi-squared distribution; e.g., evaluating the ICDF at $1 - p = 0.95$ with df = 7 yields $\chi^2 \approx 14.07$, matching a $p$-value of 0.05 in the table.

History

This distribution was first described by the German geodesist and statistician Friedrich Robert Helmert in papers of 1875–6, where he computed the sampling distribution of the sample variance of a normal population. Thus in German this was traditionally known as the Helmert'sche ("Helmertian") or "Helmert distribution".

The distribution was independently rediscovered by the English mathematician Karl Pearson in the context of goodness of fit, for which he developed his Pearson's chi-squared test, published in 1900, with computed tables of values published soon afterwards. The name "chi-square" ultimately derives from Pearson's shorthand for the exponent in a multivariate normal distribution with the Greek letter Chi, writing $-\tfrac{1}{2}\chi^2$ for what would appear in modern notation as $-\tfrac{1}{2} x^\top \Sigma^{-1} x$ (Σ being the covariance matrix). The idea of a family of "chi-squared distributions", however, is not due to Pearson but arose as a further development due to Fisher in the 1920s.
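As a concrete check of the quantile computation described under Computational methods above, a short sketch using SciPy (assumed available):

```python
from scipy.stats import chi2

# Quantile (ICDF) and tail probability for 7 degrees of freedom.
crit = chi2.ppf(0.95, df=7)   # ~14.067: the critical value for p = 0.05
print(crit)
print(chi2.sf(crit, df=7))    # the survival function recovers the p-value, 0.05
```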
https://en.wikipedia.org/wiki/Joule%E2%80%93Thomson%20effect
Joule–Thomson effect
In thermodynamics, the Joule–Thomson effect (also known as the Joule–Kelvin effect or Kelvin–Joule effect) describes the temperature change of a real gas or liquid (as differentiated from an ideal gas) when it is expanding; typically caused by the pressure loss from flow through a valve or porous plug while keeping it insulated so that no heat is exchanged with the environment. This procedure is called a throttling process or Joule–Thomson process. The effect is entirely due to deviation from ideality, as any ideal gas has no Joule–Thomson effect.

At room temperature, all gases except hydrogen, helium, and neon cool upon expansion by the Joule–Thomson process when being throttled through an orifice; these three gases rise in temperature when forced through a porous plug at room temperature, but cool when already at lower temperatures. Most liquids, such as hydraulic oils, will be warmed by Joule–Thomson throttling. The temperature at which the effect switches sign is the inversion temperature.

The gas-cooling throttling process is commonly exploited in refrigeration processes such as liquefiers in the industrial air-separation process. In hydraulics, the warming effect from Joule–Thomson throttling can be used to find internally leaking valves, as these will produce heat which can be detected by a thermocouple or thermal-imaging camera. Throttling is a fundamentally irreversible process. The throttling due to the flow resistance in supply lines, heat exchangers, regenerators, and other components of (thermal) machines is a source of losses that limits their performance.

Since it is a constant-enthalpy process, throttling can be used to experimentally measure the lines of constant enthalpy (isenthalps) on the $(p, T)$ diagram of a gas. Combined with the specific heat capacity at constant pressure it allows the complete measurement of the thermodynamic potential for the gas.

History

The effect is named after James Prescott Joule and William Thomson, 1st Baron Kelvin, who discovered it in 1852. It followed upon earlier work by Joule on Joule expansion, in which a gas undergoes free expansion in a vacuum and the temperature is unchanged, if the gas is ideal.

Description

The adiabatic (no heat exchanged) expansion of a gas may be carried out in a number of ways. The change in temperature experienced by the gas during expansion depends not only on the initial and final pressure, but also on the manner in which the expansion is carried out.

If the expansion process is reversible, meaning that the gas is in thermodynamic equilibrium at all times, it is called an isentropic expansion. In this scenario, the gas does positive work during the expansion, and its temperature decreases.

In a free expansion, on the other hand, the gas does no work and absorbs no heat, so the internal energy is conserved. Expanded in this manner, the temperature of an ideal gas would remain constant, but the temperature of a real gas decreases, except at very high temperature.

The method of expansion discussed in this article, in which a gas or liquid at pressure P1 flows into a region of lower pressure P2 without significant change in kinetic energy, is called the Joule–Thomson expansion. The expansion is inherently irreversible. During this expansion, enthalpy remains unchanged (see proof below). Unlike a free expansion, work is done, causing a change in internal energy. Whether the internal energy increases or decreases is determined by whether work is done on or by the fluid; that is determined by the initial and final states of the expansion and the properties of the fluid.
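Because the expansion is isenthalpic, the downstream temperature can be found by solving for the state that shares the upstream specific enthalpy. A minimal sketch, assuming the CoolProp property library is installed (the 200 bar, 300 K starting point anticipates the worked T-s diagram example later in this article):

```python
from CoolProp.CoolProp import PropsSI

P1, T1 = 200e5, 300.0          # upstream: 200 bar, 300 K
P2 = 1e5                       # downstream: 1 bar

h1 = PropsSI('H', 'P', P1, 'T', T1, 'Nitrogen')   # upstream specific enthalpy, J/kg
T2 = PropsSI('T', 'P', P2, 'H', h1, 'Nitrogen')   # downstream state at the same enthalpy

print(T1 - T2)   # positive: nitrogen cools when throttled from these conditions
```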
The temperature change produced during a Joule–Thomson expansion is quantified by the Joule–Thomson coefficient, $\mu_{\mathrm{JT}}$. This coefficient may be either positive (corresponding to cooling) or negative (heating); the regions where each occurs for molecular nitrogen, N2, are shown in the figure. Note that most conditions in the figure correspond to N2 being a supercritical fluid, where it has some properties of a gas and some of a liquid, but cannot really be described as being either.

The coefficient is negative at both very high and very low temperatures; at very high pressure it is negative at all temperatures. The maximum inversion temperature (621 K for N2) occurs as zero pressure is approached. For N2 gas at low pressures, $\mu_{\mathrm{JT}}$ is negative at high temperatures and positive at low temperatures. At temperatures below the gas-liquid coexistence curve, N2 condenses to form a liquid and the coefficient again becomes negative. Thus, for N2 gas below 621 K, a Joule–Thomson expansion can be used to cool the gas until liquid N2 forms.

Physical mechanism

There are two factors that can change the temperature of a fluid during an adiabatic expansion: a change in internal energy or the conversion between potential and kinetic internal energy. Temperature is the measure of thermal kinetic energy (energy associated with molecular motion); so a change in temperature indicates a change in thermal kinetic energy. The internal energy is the sum of thermal kinetic energy and thermal potential energy. Thus, even if the internal energy does not change, the temperature can change due to conversion between kinetic and potential energy; this is what happens in a free expansion and typically produces a decrease in temperature as the fluid expands. If work is done on or by the fluid as it expands, then the total internal energy changes. This is what happens in a Joule–Thomson expansion and can produce larger heating or cooling than observed in a free expansion.

In a Joule–Thomson expansion the enthalpy remains constant. The enthalpy, $H$, is defined as

$$H = U + PV,$$

where $U$ is internal energy, $P$ is pressure, and $V$ is volume. Under the conditions of a Joule–Thomson expansion, the change in $PV$ represents the work done by the fluid (see the proof below). If $PV$ increases, with $H$ constant, then $U$ must decrease as a result of the fluid doing work on its surroundings. This produces a decrease in temperature and results in a positive Joule–Thomson coefficient. Conversely, a decrease in $PV$ means that work is done on the fluid and the internal energy increases. If the increase in kinetic energy exceeds the increase in potential energy, there will be an increase in the temperature of the fluid and the Joule–Thomson coefficient will be negative.

For an ideal gas, $PV$ does not change during a Joule–Thomson expansion. As a result, there is no change in internal energy; since there is also no change in thermal potential energy, there can be no change in thermal kinetic energy and, therefore, no change in temperature. In real gases, $PV$ does change. The ratio of the value of $PV$ to that expected for an ideal gas at the same temperature is called the compressibility factor, $Z$. For a gas, this is typically less than unity at low temperature and greater than unity at high temperature (see the discussion in compressibility factor).
At low pressure, the value of $Z$ always moves towards unity as a gas expands. Thus at low temperature, $Z$ and $PV$ will increase as the gas expands, resulting in a positive Joule–Thomson coefficient. At high temperature, $Z$ and $PV$ decrease as the gas expands; if the decrease is large enough, the Joule–Thomson coefficient will be negative.

For liquids, and for supercritical fluids under high pressure, $PV$ increases as pressure increases. This is due to molecules being forced together, so that the volume can barely decrease due to higher pressure. Under such conditions, the Joule–Thomson coefficient is negative, as seen in the figure above.

The physical mechanism associated with the Joule–Thomson effect is closely related to that of a shock wave, although a shock wave differs in that the change in bulk kinetic energy of the gas flow is not negligible.

The Joule–Thomson (Kelvin) coefficient

The rate of change of temperature $T$ with respect to pressure $P$ in a Joule–Thomson process (that is, at constant enthalpy $H$) is the Joule–Thomson (Kelvin) coefficient $\mu_{\mathrm{JT}}$. This coefficient can be expressed in terms of the gas's specific volume $V$, its heat capacity at constant pressure $C_p$, and its coefficient of thermal expansion $\alpha$ as:

$$\mu_{\mathrm{JT}} = \left(\frac{\partial T}{\partial P}\right)_H = \frac{V}{C_p}(\alpha T - 1).$$

See the derivation below for the proof of this relation. The value of $\mu_{\mathrm{JT}}$ is typically expressed in °C/bar (SI units: K/Pa) and depends on the type of gas and on the temperature and pressure of the gas before expansion. Its pressure dependence is usually only a few percent for pressures up to 100 bar.

All real gases have an inversion point at which the value of $\mu_{\mathrm{JT}}$ changes sign. The temperature of this point, the Joule–Thomson inversion temperature, depends on the pressure of the gas before expansion. In a gas expansion the pressure decreases, so the sign of $\partial P$ is negative by definition. With that in mind, the following table explains when the Joule–Thomson effect cools or warms a real gas:

If the gas temperature is below its inversion temperature: $\mu_{\mathrm{JT}}$ is positive, and since $\partial P$ is always negative, $\partial T$ must be negative, so the gas cools.
If the gas temperature is above its inversion temperature: $\mu_{\mathrm{JT}}$ is negative, and since $\partial P$ is always negative, $\partial T$ must be positive, so the gas warms.

Helium and hydrogen are two gases whose Joule–Thomson inversion temperatures at a pressure of one atmosphere are very low (e.g., about 40 K, −233 °C, for helium). Thus, helium and hydrogen warm when expanded at constant enthalpy at typical room temperatures. On the other hand, nitrogen and oxygen, the two most abundant gases in air, have inversion temperatures of 621 K (348 °C) and 764 K (491 °C) respectively: these gases can be cooled from room temperature by the Joule–Thomson effect. For an ideal gas, $\mu_{\mathrm{JT}}$ is always equal to zero: ideal gases neither warm nor cool upon being expanded at constant enthalpy.

Theoretical models

For a Van der Waals gas, the low-pressure coefficient is

$$\mu_{\mathrm{JT}} \approx \frac{1}{C_p}\left(\frac{2a}{RT} - b\right),$$

with inversion temperature $T_{\mathrm{inv}} = \tfrac{2a}{Rb}$. For the Dieterici gas, the reduced inversion temperature and the reduced pressure are related by a closed-form inversion curve, which is plotted on the right. The critical point falls inside the region where the gas cools on expansion. The outside region is where the gas warms on expansion.
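The van der Waals expression above gives a quick order-of-magnitude estimate. A sketch using textbook constants for nitrogen (assumed values; note that this crude low-pressure model overshoots the measured 621 K maximum inversion temperature):

```python
# Low-pressure van der Waals estimate: mu_JT ~ (2a/(R*T) - b) / Cp,
# which changes sign at the inversion temperature T_inv = 2a/(R*b).
R = 8.314                 # gas constant, J/(mol K)
a, b = 0.1370, 3.87e-5    # van der Waals constants for N2: Pa m^6/mol^2, m^3/mol
Cp = 29.1                 # molar heat capacity of N2, J/(mol K)

T_inv = 2 * a / (R * b)
print(T_inv)              # ~850 K from these constants (measured maximum: ~621 K)

for T in (300.0, 900.0):
    mu = (2 * a / (R * T) - b) / Cp   # K/Pa
    print(T, mu * 1e5)                # in K/bar; positive means cooling
```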
Applications

In practice, the Joule–Thomson effect is achieved by allowing the gas to expand through a throttling device (usually a valve) which must be very well insulated to prevent any heat transfer to or from the gas. No external work is extracted from the gas during the expansion (the gas must not be expanded through a turbine, for example).

The cooling produced in the Joule–Thomson expansion makes it a valuable tool in refrigeration. The effect is applied in the Linde technique as a standard process in the petrochemical industry, where the cooling effect is used to liquefy gases, and in many cryogenic applications (e.g. for the production of liquid oxygen, nitrogen, and argon). A gas must be below its inversion temperature to be liquefied by the Linde cycle. For this reason, simple Linde cycle liquefiers, starting from ambient temperature, cannot be used to liquefy helium, hydrogen, or neon. They must first be cooled to their inversion temperatures, which are −233 °C (helium), −71 °C (hydrogen), and −42 °C (neon).

Proof that the specific enthalpy remains constant

In thermodynamics so-called "specific" quantities are quantities per unit mass (kg) and are denoted by lower-case characters. So h, u, and v are the specific enthalpy, specific internal energy, and specific volume (volume per unit mass, or reciprocal density), respectively. In a Joule–Thomson process the specific enthalpy h remains constant.

To prove this, the first step is to compute the net work done when a mass m of the gas moves through the plug. This amount of gas has a volume of V1 = m v1 in the region at pressure P1 (region 1) and a volume V2 = m v2 when in the region at pressure P2 (region 2). Then in region 1, the "flow work" done on the amount of gas by the rest of the gas is: W1 = m P1v1. In region 2, the work done by the amount of gas on the rest of the gas is: W2 = m P2v2. So, the total work done on the mass m of gas is

$$W = W_1 - W_2 = m P_1 v_1 - m P_2 v_2.$$

The change in internal energy minus the total work done on the amount of gas is, by the first law of thermodynamics, the total heat supplied to the amount of gas. In the Joule–Thomson process, the gas is insulated, so no heat is absorbed. This means that

$$(m u_2 - m u_1) - (m P_1 v_1 - m P_2 v_2) = 0,$$

where u1 and u2 denote the specific internal energies of the gas in regions 1 and 2, respectively. Using the definition of the specific enthalpy h = u + Pv, the above equation implies that

$$h_1 = h_2,$$

where h1 and h2 denote the specific enthalpies of the amount of gas in regions 1 and 2, respectively.

Throttling in the T-s diagram

A convenient way to get a quantitative understanding of the throttling process is by using diagrams such as h-T diagrams, h-P diagrams, and others. Commonly used are the so-called T-s diagrams. Figure 2 shows the T-s diagram of nitrogen as an example. Various points are indicated as follows:

As shown before, throttling keeps h constant. E.g. throttling from 200 bar and 300 K (point a in fig. 2) follows the isenthalp (line of constant specific enthalpy) of 430 kJ/kg. At 1 bar it results in point b, which has a temperature of 270 K. So throttling from 200 bar to 1 bar gives a cooling from room temperature to below the freezing point of water. Throttling from 200 bar and an initial temperature of 133 K (point c in fig. 2) to 1 bar results in point d, which is in the two-phase region of nitrogen at a temperature of 77.2 K. Since the enthalpy is an extensive parameter, the enthalpy in d (hd) is equal to the enthalpy in e (he) multiplied with the mass fraction of the liquid in d (xd) plus the enthalpy in f (hf) multiplied with the mass fraction of the gas in d (1 − xd). So

$$h_d = x_d h_e + (1 - x_d) h_f.$$

With numbers: 150 = xd × 28 + (1 − xd) × 230, so xd is about 0.40. This means that the mass fraction of the liquid in the liquid–gas mixture leaving the throttling valve is 40%.
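The mass-fraction computation just shown is the lever rule applied to the two-phase point. A small sketch with the enthalpy values read from fig. 2:

```python
# Lever rule for point d on the 430 kJ/kg... no: the 150 kJ/kg isenthalp (fig. 2 values).
h_d, h_liquid, h_gas = 150.0, 28.0, 230.0   # kJ/kg: mixture, saturated liquid, saturated gas

# Solve h_d = x * h_liquid + (1 - x) * h_gas for the liquid mass fraction x.
x_liquid = (h_gas - h_d) / (h_gas - h_liquid)
print(x_liquid)   # ~0.40: mass fraction of liquid leaving the valve
```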
Derivation of the Joule–Thomson coefficient

It is difficult to think physically about what the Joule–Thomson coefficient, $\mu_{\mathrm{JT}}$, represents. Also, modern determinations of $\mu_{\mathrm{JT}}$ do not use the original method used by Joule and Thomson, but instead measure a different, closely related quantity. Thus, it is useful to derive relationships between $\mu_{\mathrm{JT}}$ and other, more conveniently measured quantities, as described below.

The first step in obtaining these results is to note that the Joule–Thomson coefficient involves the three variables T, P, and H. A useful result is immediately obtained by applying the cyclic rule; in terms of these three variables that rule may be written

$$\left(\frac{\partial T}{\partial P}\right)_H \left(\frac{\partial P}{\partial H}\right)_T \left(\frac{\partial H}{\partial T}\right)_P = -1.$$

Each of the three partial derivatives in this expression has a specific meaning. The first is $\mu_{\mathrm{JT}}$, the second is the constant pressure heat capacity, $C_p$, defined by

$$C_p = \left(\frac{\partial H}{\partial T}\right)_P,$$

and the third is the inverse of the isothermal Joule–Thomson coefficient, $\mu_T$, defined by

$$\mu_T = \left(\frac{\partial H}{\partial P}\right)_T.$$

This last quantity is more easily measured than $\mu_{\mathrm{JT}}$. Thus, the expression from the cyclic rule becomes

$$\mu_{\mathrm{JT}} = -\frac{\mu_T}{C_p}.$$

This equation can be used to obtain Joule–Thomson coefficients from the more easily measured isothermal Joule–Thomson coefficient. It is used in the following to obtain a mathematical expression for the Joule–Thomson coefficient in terms of the volumetric properties of a fluid.

To proceed further, the starting point is the fundamental equation of thermodynamics in terms of enthalpy; this is

$$dH = T\,dS + V\,dP.$$

Now "dividing through" by dP, while holding temperature constant, yields

$$\left(\frac{\partial H}{\partial P}\right)_T = T\left(\frac{\partial S}{\partial P}\right)_T + V.$$

The partial derivative on the left is the isothermal Joule–Thomson coefficient, $\mu_T$, and the one on the right can be expressed in terms of the coefficient of thermal expansion via a Maxwell relation. The appropriate relation is

$$\left(\frac{\partial S}{\partial P}\right)_T = -\left(\frac{\partial V}{\partial T}\right)_P = -V\alpha,$$

where α is the cubic coefficient of thermal expansion. Replacing these two partial derivatives yields

$$\mu_T = V(1 - \alpha T).$$

This expression can now replace $\mu_T$ in the earlier equation for $\mu_{\mathrm{JT}}$ to obtain:

$$\mu_{\mathrm{JT}} = \left(\frac{\partial T}{\partial P}\right)_H = \frac{V}{C_p}(\alpha T - 1).$$

This provides an expression for the Joule–Thomson coefficient in terms of the commonly available properties heat capacity, molar volume, and thermal expansion coefficient. It shows that the Joule–Thomson inversion temperature, at which $\mu_{\mathrm{JT}}$ is zero, occurs when the coefficient of thermal expansion is equal to the inverse of the temperature. Since this is true at all temperatures for ideal gases (see expansion in gases), the Joule–Thomson coefficient of an ideal gas is zero at all temperatures.

Joule's second law

It is easy to verify that for an ideal gas defined by suitable microscopic postulates that αT = 1, so the temperature change of such an ideal gas at a Joule–Thomson expansion is zero. For such an ideal gas, this theoretical result implies that: The internal energy of a fixed mass of an ideal gas depends only on its temperature (not pressure or volume). This rule was originally found by Joule experimentally for real gases and is known as Joule's second law. More refined experiments found important deviations from it.
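The ideal-gas result αT = 1 can be made concrete with the derived formula. A trivial numerical sketch (any heat capacity value gives the same zero):

```python
# For an ideal gas V = RT/P, so alpha = (1/V)(dV/dT)_P = 1/T and mu_JT = (V/Cp)(alpha*T - 1) = 0.
R = 8.314            # J/(mol K)
T, P = 300.0, 1e5    # arbitrary state
Cp = 29.1            # J/(mol K), value irrelevant to the conclusion

V = R * T / P        # molar volume of an ideal gas
alpha = 1 / T        # thermal expansion coefficient of an ideal gas
mu_JT = (V / Cp) * (alpha * T - 1)
print(mu_JT)         # exactly 0
```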
https://en.wikipedia.org/wiki/Space%20weather
Space weather
Space weather is a branch of space physics and aeronomy, or heliophysics, concerned with the varying conditions within the Solar System and its heliosphere. This includes the effects of the solar wind, especially on the Earth's magnetosphere, ionosphere, thermosphere, and exosphere. Though physically distinct, space weather is analogous to the terrestrial weather of Earth's atmosphere (troposphere and stratosphere). The term "space weather" was first used in the 1950s and popularized in the 1990s. Later, it prompted research into "space climate", the large-scale and long-term patterns of space weather.

History

For many centuries, the effects of space weather were noticed, but not understood. Displays of auroral light have long been observed at high latitudes.

Beginnings

In 1724, George Graham reported that the needle of a magnetic compass was regularly deflected from magnetic north over the course of each day. This effect was eventually attributed to overhead electric currents flowing in the ionosphere and magnetosphere by Balfour Stewart in 1882, and confirmed by Arthur Schuster in 1889 from analysis of magnetic observatory data.

In 1852, astronomer and British Major General Edward Sabine showed that the probability of the occurrence of geomagnetic storms on Earth was correlated with the number of sunspots, demonstrating a novel solar-terrestrial interaction. The solar storm of 1859 caused brilliant auroral displays and disrupted global telegraph operations. Richard Carrington correctly connected the storm with a solar flare that he had observed the day before near a large sunspot group, demonstrating that specific solar events could affect the Earth.

Kristian Birkeland explained the physics of aurorae by creating artificial ones in his laboratory, and predicted the solar wind. The introduction of radio revealed that solar weather could cause extreme static or noise. Radar jamming during a large solar event in 1942 led to the discovery of solar radio bursts, radio waves over a broad frequency range created by a solar flare.

The 20th century

In the 20th century, interest in space weather expanded as military and commercial systems came to depend on systems affected by space weather. Communications satellites are a vital part of global commerce. Weather satellite systems provide information about terrestrial weather. The signals from satellites of a global positioning system (GPS) are used in a wide variety of applications. Space weather phenomena can interfere with or damage these satellites or interfere with the radio signals with which they operate. Space weather phenomena can cause damaging surges in long-distance transmission lines and expose passengers and crew of aircraft travel to radiation, especially on polar routes.

The International Geophysical Year (IGY) increased research into space weather. Ground-based data obtained during the IGY demonstrated that the aurorae occurred in an auroral oval, a permanent region of luminescence 15° to 25° in latitude from the magnetic poles and 5° to 20° wide. In 1958, the Explorer 1 satellite discovered the Van Allen belts, regions of radiation particles trapped by the Earth's magnetic field. In January 1959, the Soviet satellite Luna 1 first directly observed the solar wind and measured its strength. A smaller International Heliophysical Year (IHY) occurred in 2007–2008. In 1969, INJUN-5 (also known as Explorer 40) made the first direct observation of the electric field impressed on the Earth's high-latitude ionosphere by the solar wind.
In the early 1970s, Triad data demonstrated that permanent electric currents flowed between the auroral oval and the magnetosphere.

The term "space weather" came into usage in the late 1950s as the space age began and satellites began to measure the space environment. The term regained popularity in the 1990s along with the belief that space's impact on human systems demanded a more coordinated research and application framework.

Programs

US National Space Weather Program

The purpose of the US National Space Weather Program is to focus research on the needs of the affected commercial and military communities, to connect the research and user communities, to create coordination between operational data centers, and to better define user community needs. NOAA operates the National Weather Service's Space Weather Prediction Center. The concept was turned into an action plan in 2000, an implementation plan in 2002, an assessment in 2006 and a revised strategic plan in 2010. A revised action plan was scheduled to be released in 2011, followed by a revised implementation plan in 2012.

ICAO Space Weather Advisory

The International Civil Aviation Organization (ICAO) implemented a Space Weather Advisory program in late 2019. Under this program, ICAO designated four global space weather service providers:

The United States, where the service is provided by the National Oceanic and Atmospheric Administration (NOAA) Space Weather Prediction Center.
The Australia, Canada, France, and Japan (ACFJ) consortium, comprising space weather agencies from Australia, Canada, France, and Japan.
The Pan-European Consortium for Aviation Space Weather User Services (PECASUS), comprising space weather agencies from Finland (lead), Belgium, the United Kingdom, Poland, Germany, the Netherlands, Italy, Austria, and Cyprus.
The China-Russian Federation Consortium (CRC), comprising space weather agencies from China and the Russian Federation.

Phenomena

Within the Solar System, space weather is influenced by the solar wind and the interplanetary magnetic field carried by the solar wind plasma. A variety of physical phenomena is associated with space weather, including geomagnetic storms and substorms, energization of the Van Allen radiation belts, ionospheric disturbances and scintillation of satellite-to-ground radio signals and long-range radar signals, aurorae, and geomagnetically induced currents at Earth's surface. Coronal mass ejections are also important drivers of space weather, as they can compress the magnetosphere and trigger geomagnetic storms. Solar energetic particles (SEP), accelerated by coronal mass ejections or solar flares, can trigger solar particle events, a critical driver of human-impact space weather, as they can damage electronics onboard spacecraft (e.g. the Galaxy 15 failure) and threaten the lives of astronauts, as well as increase radiation hazards to high-altitude, high-latitude aviation.

Effects

Spacecraft electronics

Some spacecraft failures can be directly attributed to space weather; many more are thought to have a space weather component. For example, 46 of the 70 failures reported in 2003 occurred during the October 2003 geomagnetic storm. The two most common adverse space weather effects on spacecraft are radiation damage and spacecraft charging. Radiation (high-energy particles) passes through the skin of the spacecraft and into the electronic components. In most cases, the radiation causes an erroneous signal or changes one bit in memory of a spacecraft's electronics (single event upsets).
In a few cases, the radiation destroys a section of the electronics (single-event latchup). Spacecraft charging is the accumulation of an electrostatic charge on a nonconducting material on the spacecraft's surface by low-energy particles. If enough charge is built up, a discharge (spark) occurs. This can cause an erroneous signal to be detected and acted on by the spacecraft computer. A recent study indicated that spacecraft charging is the predominant space weather effect on spacecraft in geosynchronous orbit.

Spacecraft orbit changes

The orbits of spacecraft in low Earth orbit (LEO) decay to lower and lower altitudes due to the resistance from the friction between the spacecraft's surface (i.e., drag) and the outer layer of the Earth's atmosphere (the thermosphere and exosphere). Eventually, a LEO spacecraft falls out of orbit and towards the Earth's surface. Many spacecraft launched in the past few decades have the ability to fire a small rocket to manage their orbits. The rocket can increase altitude to extend lifetime, direct the re-entry towards a particular (marine) site, or route the satellite to avoid collision with other spacecraft. Such maneuvers require precise information about the orbit. A geomagnetic storm can cause an orbit change over a few days that otherwise would occur over a year or more. The geomagnetic storm adds heat to the thermosphere, causing the thermosphere to expand and rise, increasing the drag on spacecraft. The 2009 satellite collision between Iridium 33 and Cosmos 2251 demonstrated the importance of having precise knowledge of all objects in orbit. Iridium 33 had the capability to maneuver out of the path of Cosmos 2251 and could have evaded the crash, if a credible collision prediction had been available.

Humans in space

The exposure of a human body to ionizing radiation has the same harmful effects whether the source of the radiation is a medical X-ray machine, a nuclear power plant, or radiation in space. The degree of the harmful effect depends on the length of exposure and the radiation's energy density. The ever-present radiation belts extend down to the altitude of crewed spacecraft such as the International Space Station (ISS) and the Space Shuttle, but the amount of exposure is within the acceptable lifetime exposure limit under normal conditions. During a major space weather event that includes an SEP burst, the flux can increase by orders of magnitude. Areas within the ISS provide shielding that can keep the total dose within safe limits. For the Space Shuttle, such an event would have required immediate mission termination.

Ground systems

Spacecraft signals

The ionosphere bends radio waves in the same manner that water in a pool bends visible light. When the medium through which such waves travel is disturbed, the light image or radio information is distorted and can become unrecognizable. The degree of distortion (scintillation) of a radio wave by the ionosphere depends on the signal frequency. Radio signals in the VHF band (30 to 300 MHz) can be distorted beyond recognition by a disturbed ionosphere. Radio signals in the UHF band (300 MHz to 3 GHz) transit a disturbed ionosphere, but a receiver may not be able to keep locked to the carrier frequency. GPS uses signals at 1575.42 MHz (L1) and 1227.6 MHz (L2) that can be distorted by a disturbed ionosphere. Space weather events that corrupt GPS signals can significantly impact society.
For example, the Wide Area Augmentation System (WAAS) operated by the US Federal Aviation Administration (FAA) is used as a navigation tool for North American commercial aviation. It is disabled by every major space weather event. Outages can range from minutes to days. Major space weather events can push the disturbed polar ionosphere 10° to 30° of latitude toward the equator and can cause large ionospheric gradients (changes in density over distances of hundreds of km) at mid and low latitude. Both of these factors can distort GPS signals.

Long-distance radio signals

Radio waves in the HF band (3 to 30 MHz), also known as the shortwave band, are reflected by the ionosphere. Since the ground also reflects HF waves, a signal can be transmitted around the curvature of the Earth beyond the line of sight. During the 20th century, HF communications were the only method for a ship or aircraft far from land or a base station to communicate. The advent of systems such as Iridium brought other methods of communications, but HF remains critical for vessels that do not carry the newer equipment and as a critical backup system for others. Space weather events can create irregularities in the ionosphere that scatter HF signals instead of reflecting them, preventing HF communications. At auroral and polar latitudes, small space weather events that occur frequently disrupt HF communications. At mid-latitudes, HF communications are disrupted by solar radio bursts, by X-rays from solar flares (which enhance and disturb the ionospheric D-layer), and by TEC enhancements and irregularities during major geomagnetic storms. Transpolar airline routes are particularly sensitive to space weather, in part because Federal Aviation Regulations require reliable communication over the entire flight. Diverting such a flight is estimated to cost about $100,000.

Humans in commercial aviation

The magnetosphere guides cosmic rays and solar energetic particles to polar latitudes, while high-energy charged particles enter the mesosphere, stratosphere, and troposphere. These energetic particles at the top of the atmosphere shatter atmospheric atoms and molecules, creating harmful lower-energy particles that penetrate deep into the atmosphere and create measurable radiation. All aircraft flying above 8 km (26,200 feet) altitude are exposed to these particles. The dose exposure is greater in polar regions than at midlatitude and equatorial regions. Many commercial aircraft fly over the polar region. When a space weather event causes radiation exposure to exceed the safe level set by aviation authorities, the aircraft's flight path is diverted.

Measurements of the radiation environment at commercial aircraft altitudes above 8 km (26,000 ft) have historically been done by instruments that record the data on board, where the data are then processed later on the ground. However, a system of real-time radiation measurements on-board aircraft has been developed through the NASA Automated Radiation Measurements for Aerospace Safety (ARMAS) program. ARMAS has flown hundreds of flights since 2013, mostly on research aircraft, and sent the data to the ground through Iridium satellite links. The eventual goal of these types of measurements is to assimilate the data into physics-based global radiation models, e.g., NASA's Nowcast of Atmospheric Ionizing Radiation System (NAIRAS), so as to provide the weather of the radiation environment rather than the climatology.
Ground-induced electric fields

Magnetic storm activity can induce geoelectric fields in the Earth's conducting lithosphere. Corresponding voltage differentials can find their way into electric power grids through ground connections, driving uncontrolled electric currents that interfere with grid operation, damage transformers, trip protective relays, and sometimes cause blackouts. This complicated chain of causes and effects was demonstrated during the magnetic storm of March 1989, which caused the complete collapse of the Hydro-Québec electric-power grid in Canada, temporarily leaving nine million people without electricity. The possible occurrence of an even more intense storm led to operational standards intended to mitigate induction-hazard risks, while reinsurance companies commissioned revised risk assessments.

Geophysical exploration

Air- and ship-borne magnetic surveys can be affected by rapid magnetic field variations during geomagnetic storms. Such storms cause data-interpretation problems because the space weather-related magnetic field changes are similar in magnitude to those of the subsurface crustal magnetic field in the survey area. Accurate geomagnetic storm warnings, including an assessment of storm magnitude and duration, allow for an economic use of survey equipment.

Geophysics and hydrocarbon production

For economic and other reasons, oil and gas production often involves horizontal drilling of well paths many kilometers from a single wellhead. Accuracy requirements are strict, due to target size – reservoirs may only be a few tens to hundreds of meters across – and safety, because of the proximity of other boreholes. The most accurate gyroscopic method is expensive, since it can stop drilling for hours. An alternative is to use a magnetic survey, which enables measurement while drilling (MWD). Near real-time magnetic data can be used to correct drilling direction. Magnetic data and space weather forecasts can help to clarify unknown sources of drilling error.

Terrestrial weather

The amount of energy entering the troposphere and stratosphere from space weather phenomena is trivial compared to the solar insolation in the visible and infrared portions of the solar electromagnetic spectrum. Although some linkage between the 11-year sunspot cycle and the Earth's climate has been claimed, this has never been verified. For example, the Maunder minimum, a 70-year period almost devoid of sunspots, has often been suggested to be correlated to a cooler climate, but these correlations have disappeared after deeper studies. The suggested link from changes in cosmic-ray flux causing changes in the amount of cloud formation did not survive scientific tests. Another suggestion, that variations in the extreme ultraviolet (EUV) flux subtly influence existing drivers of the climate and tip the balance between El Niño/La Niña events, collapsed when new research showed this was not possible. As such, a linkage between space weather and the climate has not been demonstrated.

In addition, a link has been suggested between high-energy charged particles (such as SEPs and cosmic rays) and cloud formation. This is because charged particles interact with the atmosphere to produce volatiles which then condense, creating cloud seeds. This is a topic of ongoing research at CERN, where experiments test the effect of high-energy charged particles on the atmosphere. If proven, this may suggest a link between space weather (in the form of solar particle events) and cloud formation.
Most recently, a statistical connection has been reported between the occurrence of heavy floods and the arrivals of high-speed solar wind streams (HSSs). The enhanced auroral energy deposition during HSSs is suggested as a mechanism for the generation of downward propagating atmospheric gravity waves (AGWs). As AGWs reach the lower atmosphere, they may excite conditional instability in the troposphere, thus leading to excessive rainfall.

Observation

Observation of space weather is done both for scientific research and applications. Scientific observation has evolved with the state of knowledge, while application-related observation expanded with the ability to exploit such data.

Ground-based

Space weather is monitored at ground level by observing changes in the Earth's magnetic field over periods of seconds to days, by observing the surface of the Sun, and by observing radio noise created in the Sun's atmosphere.

The Sunspot Number (SSN) is the number of sunspots on the Sun's photosphere in visible light on the side of the Sun visible to an Earth observer. The number and total area of sunspots are related to the brightness of the Sun in the EUV and X-ray portions of the solar spectrum and to solar activity such as solar flares and coronal mass ejections.

The 10.7 cm radio flux (F10.7) is a measurement of RF emissions from the Sun and is roughly correlated with the solar EUV flux. Since this RF emission is easily obtained from the ground and EUV flux is not, this value has been measured and disseminated continuously since 1947. The world standard measurements are made by the Dominion Radio Astrophysical Observatory at Penticton, BC, Canada, and reported once a day at local noon in solar flux units (10⁻²² W·m⁻²·Hz⁻¹). F10.7 is archived by the National Geophysical Data Center.

Fundamental space weather monitoring data are provided by ground-based magnetometers and magnetic observatories. Magnetic storms were first discovered by ground-based measurement of occasional magnetic disturbance. Ground magnetometer data provide real-time situational awareness for post-event analysis. Magnetic observatories have been in continuous operation for decades to centuries, providing data to inform studies of long-term changes in space climatology.

The disturbance storm time index (Dst index) is an estimate of the magnetic field change at the Earth's magnetic equator due to a ring of electric current at and just earthward of geosynchronous orbit. The index is based on data from four ground-based magnetic observatories between 21° and 33° magnetic latitude during a one-hour period. Stations closer to the magnetic equator are not used due to ionospheric effects. The Dst index is compiled and archived by the World Data Center for Geomagnetism, Kyoto.

Kp/ap index: 'a' is an index created from the geomagnetic disturbance at one midlatitude (40° to 50° latitude) geomagnetic observatory during a 3-hour period. 'K' is the quasi-logarithmic counterpart of the 'a' index. Kp and ap are the averages of K and a over 13 geomagnetic observatories to represent planetary-wide geomagnetic disturbances. The Kp/ap index indicates both geomagnetic storms and substorms (auroral disturbance). Kp/ap data are available from 1932 onward.

The AE index is compiled from geomagnetic disturbances at 12 geomagnetic observatories in and near the auroral zones and is recorded at 1-minute intervals. The public AE index is available with a lag of two to three days, which limits its utility for space weather applications.
The AE index is compiled from geomagnetic disturbances at 12 geomagnetic observatories in and near the auroral zones and is recorded at 1-minute intervals. The public AE index is available with a lag of two to three days, which limits its utility for space weather applications. The AE index indicates the intensity of geomagnetic substorms, except during a major geomagnetic storm, when the auroral zones expand equatorward from the observatories.

Radio noise bursts are reported by the Radio Solar Telescope Network to the U.S. Air Force and to NOAA. The radio bursts are associated with solar flare plasma that interacts with the ambient solar atmosphere.

The Sun's photosphere is observed continuously for activity that can be a precursor to solar flares and CMEs. The Global Oscillation Network Group (GONG) project monitors both the surface and the interior of the Sun by using helioseismology, the study of sound waves propagating through the Sun and observed as ripples on the solar surface. GONG can detect sunspot groups on the far side of the Sun. This ability has recently been verified by visual observations from the STEREO spacecraft.

Neutron monitors on the ground indirectly monitor cosmic rays from the Sun and galactic sources. When cosmic rays interact with the atmosphere, atomic interactions occur that cause a shower of lower-energy particles to descend into the atmosphere and to ground level. The presence of cosmic rays in the near-Earth space environment can be detected by monitoring high-energy neutrons at ground level. Small fluxes of cosmic rays are present continuously; large fluxes are produced by the Sun during events related to energetic solar flares.

Total Electron Content (TEC) is a measure of the ionosphere over a given location. TEC is the number of electrons in a column one meter square extending from the base of the ionosphere (around 90 km altitude) to the top of the ionosphere (around 1000 km altitude). Many TEC measurements are made by monitoring the two frequencies transmitted by GPS spacecraft. Presently, GPS TEC is monitored and distributed in real time from more than 360 stations maintained by agencies in many countries.

Geoeffectiveness is a measure of how strongly space weather magnetic fields, such as those of coronal mass ejections, couple with the Earth's magnetic field. This is determined by the direction of the magnetic field held within the plasma that originates from the Sun. New techniques measuring Faraday rotation in radio waves are in development to measure field direction.
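The dual-frequency GPS TEC technique described above works because the ionosphere delays the two GPS carrier signals by different amounts, in proportion to the electron content along the path. A minimal sketch of the standard first-order conversion follows; the carrier frequencies are the real GPS L1/L2 values, but the pseudorange difference is an illustrative number, not measured data:

```python
# GPS carrier frequencies (Hz)
F1 = 1575.42e6  # L1
F2 = 1227.60e6  # L2

def slant_tec(p1_m: float, p2_m: float) -> float:
    """First-order estimate of slant TEC (electrons/m^2) from the
    dual-frequency pseudorange difference P2 - P1 (meters):
        TEC = (P2 - P1) * f1^2 * f2^2 / (40.3 * (f1^2 - f2^2))
    40.3 is the standard ionospheric refraction constant."""
    return (p2_m - p1_m) * F1**2 * F2**2 / (40.3 * (F1**2 - F2**2))

# Illustrative 5 m differential group delay between the two frequencies:
tec = slant_tec(0.0, 5.0)
print(f"TEC ~ {tec:.2e} el/m^2 = {tec / 1e16:.1f} TECU")  # 1 TECU = 1e16 el/m^2
```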
Satellite-based

A host of research spacecraft have explored space weather. The Orbiting Geophysical Observatory series were among the first spacecraft with the mission of analyzing the space environment. Recent spacecraft include the NASA-ESA Solar-Terrestrial Relations Observatory (STEREO) pair of spacecraft, launched in 2006 into solar orbit, and the Van Allen Probes, launched in 2012 into a highly elliptical Earth orbit. The two STEREO spacecraft drift away from the Earth by about 22° per year, one leading and the other trailing the Earth in its orbit. Together they compile information about the solar surface and atmosphere in three dimensions. The Van Allen Probes record detailed information about the radiation belts, geomagnetic storms, and the relationship between the two.

Some spacecraft with other primary missions have carried auxiliary instruments for solar observation. Among the earliest such spacecraft were the Applications Technology Satellite (ATS) series at GEO, which were precursors to the modern Geostationary Operational Environmental Satellite (GOES) weather satellites and many communication satellites. The ATS spacecraft carried environmental particle sensors as auxiliary payloads and had their navigational magnetic field sensors used for sensing the environment.

Many early space weather monitors were research spacecraft that were re-purposed for space weather applications. One of the first of these was IMP-8 (Interplanetary Monitoring Platform). It orbited the Earth at 35 Earth radii and observed the solar wind for two-thirds of its 12-day orbits from 1973 to 2006. Since the solar wind carries disturbances that affect the magnetosphere and ionosphere, IMP-8 demonstrated the utility of continuous solar wind monitoring. IMP-8 was followed by ISEE-3, which was placed near the Sun-Earth L1 Lagrangian point, 235 Earth radii above the surface (about 1.5 million km, or 924,000 miles), and continuously monitored the solar wind from 1978 to 1982. The next spacecraft to monitor the solar wind at the L1 point was WIND, from 1994 to 1998. After April 1998, the WIND spacecraft orbit was changed to circle the Earth and occasionally pass the L1 point. The NASA Advanced Composition Explorer (ACE) has monitored the solar wind at the L1 point from 1997 to the present.

In addition to monitoring the solar wind, monitoring the Sun is important to space weather. Because the solar EUV cannot be monitored from the ground, the joint NASA-ESA Solar and Heliospheric Observatory (SOHO) spacecraft was launched and has provided solar EUV images since 1995. SOHO is a main source of near-real-time solar data for both research and space weather prediction, and it inspired the STEREO mission. The Yohkoh spacecraft at LEO observed the Sun from 1991 to 2001 in the X-ray portion of the solar spectrum and was useful for both research and space weather prediction. Data from Yohkoh inspired the Solar X-ray Imager on GOES.

Spacecraft with instruments whose primary purpose is to provide data for space weather predictions and applications include the Geostationary Operational Environmental Satellite (GOES) series of spacecraft, the POES series, the DMSP series, and the Meteosat series. The GOES spacecraft have carried an X-ray sensor (XRS), which measures the flux from the whole solar disk in two bands – 0.05 to 0.4 nm and 0.1 to 0.8 nm – since 1974; an X-ray imager (SXI) since 2004; a magnetometer, which measures the distortions of the Earth's magnetic field due to space weather; a whole-disk EUV sensor since 2004; and particle sensors (EPS/HEPAD), which measure ions and electrons in the energy range of 50 keV to 500 MeV. Starting sometime after 2015, the GOES-R generation of GOES spacecraft will replace the SXI with a solar EUV imager (SUVI) similar to those on SOHO and STEREO, and the particle sensors will be augmented with a component to extend the energy range down to 30 eV. The Deep Space Climate Observatory (DSCOVR) satellite is a NOAA Earth observation and space weather satellite that launched in February 2015. Among its features is advance warning of coronal mass ejections.

Models

Space weather models are simulations of the space weather environment. Models use sets of mathematical equations to describe physical processes. These models take a limited data set and attempt to describe all or part of the space weather environment, or to predict how the environment evolves over time. Early models were heuristic; i.e., they did not directly employ physics. These models take fewer resources than their more sophisticated descendants. Later models use physics to account for as many phenomena as possible. No model can yet reliably predict the environment from the surface of the Sun to the bottom of the Earth's ionosphere.
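As a toy example of the heuristic end of this spectrum, the warning lead time provided by an L1 solar wind monitor (such as ACE or DSCOVR, discussed above) can be estimated ballistically from the measured wind speed and the ~1.5 million km L1 distance quoted earlier. This sketch assumes constant radial propagation, which real solar wind streams do not strictly obey:

```python
L1_DISTANCE_KM = 1.5e6  # Sun-Earth L1 point, ~235 Earth radii upstream

def lead_time_minutes(solar_wind_speed_km_s: float) -> float:
    """Ballistic estimate of the warning time between a disturbance
    passing an L1 monitor and reaching Earth. Assumes constant speed."""
    return L1_DISTANCE_KM / solar_wind_speed_km_s / 60.0

for v in (400, 800):  # typical slow wind vs. fast disturbance (km/s)
    print(f"{v} km/s -> ~{lead_time_minutes(v):.0f} min of warning")
```

This kind of estimate explains why L1 monitors give only tens of minutes of warning: roughly an hour for slow wind, half that for a fast disturbance.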
Space weather models differ from meteorological models in that the amount of input data is vastly smaller. A significant portion of space weather model research and development in the past two decades has been done as part of the Geospace Environmental Model (GEM) program of the National Science Foundation. The two major modeling centers are the Center for Space Environment Modeling (CSEM) and the Center for Integrated Space Weather Modeling (CISM). The Community Coordinated Modeling Center (CCMC) at the NASA Goddard Space Flight Center is a facility for coordinating the development and testing of research models, and for improving and preparing models for use in space weather prediction and application.

Modeling techniques include (a) magnetohydrodynamics, in which the environment is treated as a fluid; (b) particle-in-cell, in which non-fluid interactions are handled within a cell and then cells are connected to describe the environment; (c) first principles, in which physical processes are in balance (or equilibrium) with one another; (d) semi-static modeling, in which a statistical or empirical relationship is described; or a combination of multiple methods.
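To make the semi-static category concrete, one classic empirical relationship of this kind treats the Dst index as a ring current injected by the solar wind and decaying exponentially, in the spirit of the Burton et al. (1975) formulation. The sketch below is illustrative only: the injection efficiency, threshold, and decay time are placeholder constants, not fitted values:

```python
def step_dst(dst_nt: float, vbs_mv_m: float, dt_h: float = 1.0,
             a=-4.0, e_c=0.5, tau_h=8.0) -> float:
    """One time step of an empirical ring-current model:
        dDst/dt = Q(t) - Dst/tau
    Q is driven by the solar wind electric field VBs (mV/m) above a
    threshold e_c; a, e_c, and tau_h are illustrative constants only."""
    q = a * (vbs_mv_m - e_c) if vbs_mv_m > e_c else 0.0  # injection (nT/h)
    return dst_nt + (q - dst_nt / tau_h) * dt_h

# Drive with 6 hours of strong southward IMF, then let the storm recover.
dst = 0.0
for hour in range(24):
    vbs = 3.0 if hour < 6 else 0.0
    dst = step_dst(dst, vbs)
    if hour % 6 == 5:
        print(f"t={hour + 1:2d} h  Dst ~ {dst:6.1f} nT")
```

The appeal of such models is exactly what the text notes: they need only a single upstream solar wind input and trivial computing resources, at the cost of representing no physics beyond the fitted relationship.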
Commercial space weather development

During the first decade of the 21st century, a commercial sector emerged that engaged in space weather, serving government agencies, academia, and commercial and consumer sectors. Space weather providers are typically smaller companies, or small divisions within a larger company, that provide space weather data, models, derivative products, and service distribution. The commercial sector includes scientific and engineering researchers as well as users. Activities are primarily directed toward the impacts of space weather upon technology. These include, for example:

Atmospheric drag on LEO satellites caused by energy inputs into the thermosphere from solar UV, FUV, Lyman-alpha, EUV, XUV, X-ray, and gamma-ray photons, as well as by charged particle precipitation and Joule heating at high latitudes;
Surface and internal charging from increased energetic particle fluxes, leading to effects such as discharges, single-event upsets, and latch-up, on LEO to GEO satellites;
Disrupted GPS signals caused by ionospheric scintillation, leading to increased uncertainty in navigation systems such as aviation's Wide Area Augmentation System (WAAS);
Lost HF, UHF, and L-band radio communications due to ionospheric scintillation, solar flares, and geomagnetic storms;
Increased radiation exposure of human tissue and avionics from galactic cosmic rays and SEPs, especially during large solar flares, and possibly bremsstrahlung gamma rays produced by precipitating radiation belt energetic electrons at altitudes above 8 km;
Increased inaccuracy in surveying and oil/gas exploration that uses the Earth's main magnetic field when it is disturbed by geomagnetic storms;
Loss of power transmission from GIC surges in the electrical power grid and transformer shutdowns during large geomagnetic storms.

Many of these disturbances result in societal impacts that account for a significant part of the national GDP.

The concept of incentivizing commercial space weather was first suggested by the idea of a Space Weather Economic Innovation Zone, discussed by the American Commercial Space Weather Association (ACSWA) in 2015. The establishment of this economic innovation zone would encourage expanded economic activity developing applications to manage the risks of space weather and would encourage broader space weather research activities by universities. It could encourage U.S. business investment in space weather services and products. Specifically, the proposal would:

support U.S. business innovation in space weather services and products by requiring U.S. government purchases of U.S.-built commercial hardware, software, and associated products and services where no suitable government capability pre-exists;
promote sales of U.S.-built commercial hardware, software, and associated products and services to international partners;
designate U.S.-built commercial hardware, services, and products as "Space Weather Economic Innovation Zone" activities;
recommend that U.S.-built commercial hardware, services, and products be tracked as Space Weather Economic Innovation Zone contributions within agency reports.

In 2015 the U.S. Congress bill HR1561 provided groundwork for a Space Weather Economic Innovation Zone whose social and environmental impacts could be far-reaching. In 2016, the Space Weather Research and Forecasting Act (S. 2817) was introduced to build on that legacy. Later, in 2017–2018, the HR3086 bill took these concepts, included the breadth of material from parallel agency studies as part of the OSTP-sponsored Space Weather Action Program (SWAP), and, with bicameral and bipartisan support, the 116th Congress (2019) is considering passage of the Space Weather Coordination Act (S141, 115th Congress).

American Commercial Space Weather Association

On April 29, 2010, the commercial space weather community created the American Commercial Space Weather Association (ACSWA), an industry association. ACSWA promotes space weather risk mitigation for national infrastructure, economic strength, and national security. It seeks to:

provide quality space weather data and services to help mitigate risks to technology;
provide advisory services to government agencies;
provide guidance on the best task division between commercial providers and government agencies;
represent the interests of commercial providers;
represent commercial capabilities in the national and international arena;
develop best practices.

A summary of the broad technical capabilities in space weather that are available from the association can be found on its web site, http://www.acswa.us.

Notable events

On December 21, 1806, Alexander von Humboldt observed that his compass had become erratic during a bright auroral event.
The solar storm of 1859 (Carrington Event) caused widespread disruption of telegraph service.
The aurora of November 17, 1882 disrupted telegraph service.
The May 1921 geomagnetic storm, one of the largest ever recorded, disrupted telegraph service and damaged electrical equipment worldwide.
During the solar storm of August 1972, a large SEP event occurred. If astronauts had been in space at the time, the dose could have been life-threatening.
The March 1989 geomagnetic storm included multiple space weather effects: SEPs, a CME, a Forbush decrease, a ground level enhancement, a geomagnetic storm, etc.
The 2000 Bastille Day event coincided with exceptionally bright aurorae.
On April 21, 2002, the Nozomi Mars Probe was hit by a large SEP event that caused a large-scale failure. The mission, which was already about three years behind schedule, was abandoned in December 2003.
The 2003 Halloween solar storms, a series of coronal mass ejections and solar flares in late October and early November 2003, had wide-ranging associated impacts.
Aestivation
Aestivation (from Latin aestas, "summer"; also spelled estivation in American English) is a state of animal dormancy, similar to hibernation, although taking place in the summer rather than the winter. Aestivation is characterized by inactivity and a lowered metabolic rate, entered in response to high temperatures and arid conditions. It takes place during times of heat and dryness, which are often the summer months. Invertebrate and vertebrate animals are known to enter this state to avoid damage from high temperatures and the risk of desiccation. Both terrestrial and aquatic animals undergo aestivation. Fossil records suggest that aestivation may have evolved several hundred million years ago.

Physiology

Organisms that aestivate appear to be in a fairly "light" state of dormancy, as their physiological state can be rapidly reversed and the organism can quickly return to a normal state. A study done on Otala lactea, a snail native to parts of Europe and Northern Africa, showed that these snails can wake from their dormant state within ten minutes of being introduced to a wetter environment.

The primary physiological and biochemical concerns for an aestivating animal are to conserve energy, retain water in the body, ration the use of stored energy, handle the nitrogenous end products, and stabilize bodily organs, cells, and macromolecules. This can be quite a task, as hot temperatures and arid conditions may last for months, in some cases for years. The depression of metabolic rate during aestivation causes a reduction in macromolecule synthesis and degradation. To stabilize the macromolecules, aestivators enhance antioxidant defenses and elevate chaperone proteins. This is a widely used strategy across all forms of hypometabolism. These physiological and biochemical concerns appear to be the core elements of hypometabolism throughout the animal kingdom. In other words, animals which aestivate appear to go through nearly the same physiological processes as animals that hibernate.

Invertebrates

Mollusca

Gastropoda: some air-breathing land snails, including species in the genera Helix, Cernuella, Theba, Helicella, Achatina and Otala, commonly aestivate during periods of heat. Some species move into shaded vegetation or rubble. Others climb up tall plants, including crop species as well as bushes and trees, and will also climb human-made structures such as posts, fences, etc. Their habit of climbing vegetation to aestivate has caused more than one introduced snail species to be declared an agricultural nuisance. To seal the opening to their shell and prevent water loss, pulmonate land snails secrete a membrane of dried mucus called an epiphragm. In certain species, such as Helix pomatia, this barrier is reinforced with calcium carbonate, and thus it superficially resembles an operculum, except that it has a tiny hole to allow some oxygen exchange. There is a decrease in metabolic rate and a reduced rate of water loss in aestivating snails such as Rhagada tescorum, Sphincterochila boissieri, and others.

Arthropoda

Insecta: lady beetles (Coccinellidae) have been reported to aestivate. Another beetle, Blepharida rhois, also aestivates; it usually does so when temperatures are warmer and re-emerges in the late summer or early fall. Mosquitoes are also reported to undergo aestivation. False honey ants, well known for being winter-active, aestivate in temperate climates. Bogong moths aestivate over the summer to avoid the heat and lack of food sources.
Adult alfalfa weevils (Hypera postica) aestivate during the summer in the southeastern United States, during which their metabolism, respiration, and nervous systems show a dampening of activity.

Crustacea: an example of a crustacean undergoing aestivation is the Australian crab Austrothelphusa transversa, which aestivates underground during the dry season.

Vertebrates

Reptiles and amphibians

Non-mammalian animals that aestivate include North American desert tortoises, crocodiles, and salamanders. Some amphibians (e.g. the cane toad and greater siren) aestivate during the hot dry season by moving underground, where it is cooler and more humid. The California red-legged frog may aestivate to conserve energy when its food and water supply is low.

The water-holding frog has an aestivation cycle. It buries itself in sandy ground in a secreted, water-tight mucus cocoon during periods of hot, dry weather. Australian Aboriginals discovered a means to take advantage of this by digging up one of these frogs and squeezing it, causing the frog to empty its bladder. This dilute urine—up to half a glassful—can be drunk. However, this causes the death of the frog, which is unable to survive until the next rainy season without the water it had stored.

The western swamp turtle aestivates to survive the hot summers in the ephemeral swamps where it lives. It buries itself in various media, which change depending on location and available substrates. Because the species is critically endangered, the Perth Zoo began a conservation and breeding program for it. However, zookeepers were initially unaware of the importance of the aestivation cycle, and during the first summer period they performed weekly checks on the animals. This repeated disturbance was detrimental to the health of the animals, with many losing significant weight and some dying. The zookeepers quickly changed their procedures and now leave their captive turtles undisturbed during the aestivation period.

Fish

African lungfish aestivate, as can salamanderfish.

Mammals

Although relatively uncommon, a small number of mammals aestivate. Animal physiologist Kathrin Dausmann of Philipps University of Marburg, Germany, and coworkers presented evidence in a 2004 edition of Nature that the Malagasy fat-tailed dwarf lemur hibernates or aestivates in a small tree hole for seven months of the year. According to the Oakland Zoo in California, four-toed hedgehogs are thought to aestivate during the dry season.
AM broadcasting
AM broadcasting is radio broadcasting using amplitude modulation (AM) transmissions. It was the first method developed for making audio radio transmissions, and is still used worldwide, primarily for medium wave (also known as "AM band") transmissions, but also on the longwave and shortwave radio bands.

The earliest experimental AM transmissions began in the early 1900s. However, widespread AM broadcasting was not established until the 1920s, following the development of vacuum tube receivers and transmitters. AM radio remained the dominant method of broadcasting for the next 30 years, a period called the "Golden Age of Radio", until television broadcasting became widespread in the 1950s and received much of the programming previously carried by radio. Later, AM radio's audiences declined greatly due to competition from FM (frequency modulation) radio, Digital Audio Broadcasting (DAB), satellite radio, HD (digital) radio, Internet radio, music streaming services, and podcasting.

Compared to FM or digital transmissions, AM transmissions are more expensive, due to the need to transmit a high-power carrier wave to overcome ground losses and the large antenna radiators required at the low broadcast frequencies, but they can be sent over long distances via the ionosphere at night. However, they are much more susceptible to interference and often have lower audio fidelity. Thus, AM broadcasters tend to specialize in spoken-word formats, such as talk radio, all-news radio, and sports radio, with music formats primarily on FM and digital stations.

History

Early broadcasting development

The idea of broadcasting — the unrestricted transmission of signals to a widespread audience — dates back to the founding period of radio development, even though the earliest radio transmissions, originally known as "Hertzian radiation" and "wireless telegraphy", used spark-gap transmitters that could only transmit the dots-and-dashes of Morse code. In October 1898 a London publication, The Electrician, noted that "there are rare cases where, as Dr. [Oliver] Lodge once expressed it, it might be advantageous to 'shout' the message, spreading it broadcast to receivers in all directions". However, it was recognized that this would involve significant financial issues, as that same year The Electrician also commented, "did not Prof. Lodge forget that no one wants to pay for shouting to the world on a system by which it would be impossible to prevent non-subscribers from benefiting gratuitously?"

On January 1, 1902, Nathan Stubblefield gave a short-range "wireless telephone" demonstration that included simultaneously broadcasting speech and music to seven locations throughout Murray, Kentucky. However, this was transmitted using induction rather than radio signals, and although Stubblefield predicted that his system would be perfected so that "it will be possible to communicate with hundreds of homes at the same time" and "a single message can be sent from a central station to all parts of the United States", he was unable to overcome the inherent distance limitations of this technology.

The earliest public radiotelegraph broadcasts were provided as government services, beginning with daily time signals inaugurated on January 1, 1905, by a number of U.S. Navy stations. In Europe, signals transmitted from a station located on the Eiffel Tower were received throughout much of Europe.
In both the United States and France this led to a small market of receiver lines geared for jewelers who needed accurate time to set their clocks, including the Ondophone in France and the De Forest RS-100 Jewelers Time Receiver in the United States. The ability to pick up time signal broadcasts, in addition to Morse code weather reports and news summaries, also attracted the interest of amateur radio enthusiasts.

Early amplitude modulation (AM) transmitter technologies

It was immediately recognized that, much as the telegraph had preceded the invention of the telephone, the ability to make audio radio transmissions would be a significant technical advance. Despite this knowledge, it still took two decades to perfect the technology needed to make quality audio transmissions. In addition, the telephone had rarely been used for distributing entertainment, outside of a few "telephone newspaper" systems, most of which were established in Europe, beginning with the Paris Théâtrophone. With this in mind, most early radiotelephone development envisioned that the device would be more profitably developed as a "wireless telephone" for personal communication, or for providing links where regular telephone lines could not be run, rather than for the uncertain finances of broadcasting.

The person generally credited as the primary early developer of AM technology is Canadian-born inventor Reginald Fessenden. The original spark-gap radio transmitters were impractical for transmitting audio, since they produced discontinuous pulses known as "damped waves". Fessenden realized that what was needed was a new type of radio transmitter that produced steady "undamped" (better known as "continuous wave") signals, which could then be "modulated" to reflect the sounds being transmitted.

Fessenden's basic approach was disclosed in U.S. Patent 706,737, which he applied for on May 29, 1901, and which was issued the next year. It called for the use of a high-speed alternator (referred to as "an alternating-current dynamo") that generated "pure sine waves" and produced "a continuous train of radiant waves of substantially uniform strength", or, in modern terminology, a continuous-wave (CW) transmitter.

Fessenden began his research on audio transmissions while doing developmental work for the United States Weather Service on Cobb Island, Maryland. Because he did not yet have a continuous-wave transmitter, he initially worked with an experimental "high-frequency spark" transmitter, taking advantage of the fact that the higher the spark rate, the closer a spark-gap transmission comes to producing continuous waves. He later reported that, in the fall of 1900, he successfully transmitted speech over a distance of about 1.6 kilometers (one mile), which appears to have been the first successful audio transmission using radio signals. However, at this time the sound was far too distorted to be commercially practical. For a time he continued working with more sophisticated high-frequency spark transmitters, including versions that used compressed air, which began to take on some of the characteristics of arc transmitters. Fessenden attempted to sell this form of radiotelephone for point-to-point communication, but was unsuccessful.

Alternator transmitter

Fessenden's work with high-frequency spark transmissions was only a temporary measure.
His ultimate plan for creating an audio-capable transmitter was to redesign an electrical alternator, which normally produced alternating current of at most a few hundred hertz (Hz), to increase its rotational speed and so generate currents of tens of thousands of hertz, thus producing a steady continuous-wave transmission when connected to an aerial. The next step, adopted from standard wire-telephone practice, was to insert a simple carbon microphone into the transmission line to modulate the carrier wave signal and produce AM audio transmissions. However, it would take many years of expensive development before even a prototype alternator-transmitter would be ready, and a few years beyond that for high-power versions to become available.

Fessenden worked with General Electric's (GE) Ernst F. W. Alexanderson, who in August 1906 delivered an improved model which operated at a transmitting frequency of approximately 50 kHz, although at low power. The alternator-transmitter achieved the goal of transmitting quality audio signals, but the lack of any way to amplify the signals meant they were somewhat weak. On December 21, 1906, Fessenden made an extensive demonstration of the new alternator-transmitter at Brant Rock, Massachusetts, showing its utility for point-to-point wireless telephony, including interconnecting his stations to the wire telephone network. As part of the demonstration, speech was transmitted 18 kilometers (11 miles) to a listening site at Plymouth, Massachusetts.

An American Telephone Journal account of the December 21 alternator-transmitter demonstration included the statement that "It is admirably adapted to the transmission of news, music, etc. as, owing to the fact that no wires are needed, simultaneous transmission to many subscribers can be effected as easily as to a few", echoing the words of a handout distributed to the demonstration witnesses, which stated "[Radio] Telephony is admirably adapted for transmitting news, stock quotations, music, race reports, etc. simultaneously over a city, on account of the fact that no wires are needed and a single apparatus can distribute to ten thousand subscribers as easily as to a few. It is proposed to erect stations for this purpose in the large cities here and abroad." However, other than two holiday transmissions reportedly made shortly after these demonstrations, Fessenden does not appear to have conducted any radio broadcasts for the general public, or even to have given additional thought to the potential of a regular broadcast service; in a 1908 article providing a comprehensive review of the potential uses for his radiotelephone invention, he made no reference to broadcasting.

Because there was no way to amplify electrical currents at this time, modulation was usually accomplished by a carbon microphone inserted directly in the antenna wire. This meant that the full transmitter power flowed through the microphone, and even with water cooling, the power-handling ability of the microphone severely limited the power of the transmissions. Ultimately only a small number of large and powerful Alexanderson alternators would be developed. They were used almost exclusively for long-range radiotelegraph communication, and occasionally for radiotelephone experimentation, but never for general broadcasting.
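The scheme just described, a steady carrier whose strength is varied by a microphone in the antenna circuit, is amplitude modulation in its simplest form. A minimal numerical sketch follows; the 50 kHz carrier echoes the Alexanderson alternator mentioned above, while the 1 kHz tone standing in for speech and the sampling rate are arbitrary illustrative choices:

```python
import math

def am_sample(t, f_carrier=50e3, f_audio=1e3, m=0.8):
    """One sample of a basic AM signal: the carrier amplitude is varied
    around its resting level by the audio waveform.
        s(t) = (1 + m * audio(t)) * cos(2*pi*f_c*t)
    m is the modulation index; m <= 1 avoids overmodulation."""
    audio = math.sin(2 * math.pi * f_audio * t)
    return (1 + m * audio) * math.cos(2 * math.pi * f_carrier * t)

# Generate two milliseconds of the modulated wave at 1 MHz sampling.
fs = 1e6
signal = [am_sample(n / fs) for n in range(2000)]
print(f"peak |s| ~ {max(abs(s) for s in signal):.2f}")  # about 1 + m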
```

Arc transmitters

Almost all of the continuous wave AM transmissions made prior to 1915 were made by versions of the arc converter transmitter, which had been initially developed by Valdemar Poulsen in 1903. Arc transmitters worked by producing a pulsating electrical arc in an enclosed hydrogen atmosphere. They were much more compact than alternator transmitters and could operate on somewhat higher transmitting frequencies. However, they suffered from some of the same deficiencies. The lack of any means to amplify electrical currents meant that, as with the alternator transmitters, modulation was usually accomplished by a microphone inserted directly in the antenna wire, which again resulted in overheating issues, even with the use of water-cooled microphones. Thus, transmitter powers tended to be limited. The arc was also somewhat unstable, which reduced audio quality. Experimenters who used arc transmitters for their radiotelephone research included Ernst Ruhmer, Quirino Majorana, Charles "Doc" Herrold, and Lee de Forest.

Vacuum tube transmitters

Advances in vacuum tube technology (called "valves" in British usage), especially after around 1915, revolutionized radio technology. Vacuum tube devices could be used to amplify electrical currents, which eliminated the need to insert microphones directly in the transmission antenna circuit and the overheating that came with it. Vacuum tube transmitters also provided high-quality AM signals and could operate on higher transmitting frequencies than alternator and arc transmitters. Non-governmental radio transmissions were prohibited in many countries during World War I, but AM radiotelephony technology advanced greatly due to wartime research, and after the war the availability of tubes sparked a great increase in the number of amateur radio stations experimenting with AM transmission of news or music. Vacuum tubes remained the central technology of radio for 40 years, until transistors began to dominate in the late 1950s, and are still used in the highest-power broadcast transmitters.

Receivers

Unlike telegraph and telephone systems, which used completely different types of equipment, most radio receivers were equally suitable for both radiotelegraph and radiotelephone reception. In 1903 and 1904 the electrolytic detector and thermionic diode (Fleming valve) were invented by Reginald Fessenden and John Ambrose Fleming, respectively. Most important, in 1904–1906 the crystal detector, the simplest and cheapest AM detector, was developed by G. W. Pickard. Homemade crystal radios spread rapidly during the next 15 years, providing ready audiences for the first radio broadcasts. One limitation of crystal sets was their inability to amplify the signals, so listeners had to use earphones; the development of vacuum-tube receivers was required before loudspeakers could be used. The dynamic cone loudspeaker, invented in 1924, greatly improved audio frequency response over the previous horn speakers, allowing music to be reproduced with good fidelity. AM radio offered the highest sound quality available in a home audio device prior to the introduction of the high-fidelity, long-playing record in the late 1940s.

Listening habits changed in the 1960s due to the introduction of the revolutionary transistor radio, made possible by the invention of the transistor at Bell Labs, announced in June 1948; the first transistor radio, the Regency TR-1, was released in December 1954.
Their compact size — small enough to fit in a shirt pocket — and lower power requirements, compared to vacuum tubes, meant that for the first time radio receivers were readily portable. The transistor radio became the most widely used communication device in history, with billions manufactured by the 1970s. Radio became a ubiquitous "companion medium" which people could take with them anywhere they went.

Early experimental broadcasts

The demarcation between what is considered "experimental" and "organized" broadcasting is largely arbitrary. Listed below are some of the early AM radio broadcasts which, due to their irregular schedules and limited purposes, can be classified as "experimental":

Christmas Eve 1906. Until the early 1930s, it was generally accepted that Lee de Forest's series of demonstration broadcasts begun in 1907 were the first transmissions of music and entertainment by radio. However, in 1932 an article prepared by Samuel M. Kintner, a former associate of Reginald Fessenden, asserted that Fessenden had actually conducted two earlier broadcasts. This claim was based solely on information included in a January 29, 1932, letter that Fessenden had sent to Kintner. (Fessenden died five months before Kintner's article appeared.) In his letter, Fessenden reported that, on the evening of December 24, 1906 (Christmas Eve), he had made the first of two broadcasts of music and entertainment to a general audience, using the alternator-transmitter at Brant Rock, Massachusetts. Fessenden remembered producing a short program that included playing a phonograph record, followed by his playing the violin and singing, and closing with a Bible reading. He also stated that a second short program was broadcast on December 31 (New Year's Eve). The intended audience for both transmissions was primarily shipboard radio operators along the Atlantic seaboard. Fessenden claimed these two programs had been widely publicized in advance, with the Christmas Eve broadcast heard "as far down" as Norfolk, Virginia, and the New Year's Eve broadcast received in the West Indies. However, extensive efforts to verify Fessenden's claim during both the 50th and 100th anniversaries of the claimed broadcasts, which included reviewing ships' radio log accounts and other contemporary sources, have so far failed to confirm that these reported holiday broadcasts actually took place.

1907–1912. Lee de Forest conducted multiple test broadcasts beginning in 1907, and was widely quoted promoting the potential of organized radio broadcasting. Using a series of arc transmitters, he made his first entertainment broadcast in February 1907, transmitting electronic telharmonium music from his Parker Building laboratory station in New York City. This was followed by tests that included, in the fall, Eugenia Farrar singing "I Love You Truly" and "Just Awearyin' for You". Additional promotional events in New York included live performances by famous Metropolitan Opera stars such as Mariette Mazarin and Enrico Caruso. He also broadcast phonograph music from the Eiffel Tower in Paris. His company equipped the U.S. Navy's Great White Fleet with experimental arc radiotelephones for their 1908 around-the-world cruise, and the operators broadcast phonograph music as the ships entered ports like San Francisco and Honolulu.

June 1910.
In a June 23, 1910, notarized letter that was published in a catalog produced by the Electro Importing Company of New York, Charles "Doc" Herrold of San Jose, California, reported that, using one of that company's spark coils to create a "high frequency spark" transmitter, he had successfully broadcast "wireless phone concerts to local amateur wireless men".

1913. Robert Goldschmidt began experimental radiotelephone transmissions from the Laeken station, near Brussels, Belgium, and by March 13, 1914, the tests had been heard as far away as the Eiffel Tower in Paris.

1914–1919. University of Wisconsin electrical engineering professor Edward Bennett set up a personal radio transmitter on campus, and in June 1915 was issued an experimental radio station license with the call sign 9XM. Activities included regular Morse code broadcasts of weather forecasts and the sending of game reports for a Wisconsin-Ohio State basketball game on February 17, 1917.

January 15, 1920. Broadcasting in the United Kingdom began with impromptu news and phonograph music over 2MT, the 15 kW experimental tube transmitter at Marconi's factory in Chelmsford, Essex, at a frequency of 120 kHz. On June 15, 1920, the Daily Mail newspaper sponsored the first scheduled British radio concert, by the famed Australian opera diva Nellie Melba. This transmission was heard throughout much of Europe, including in Berlin, Paris, The Hague, Madrid, and Sweden. Chelmsford continued broadcasting concerts with noted performers. A few months later, in spite of burgeoning popularity, the government ended the broadcasts, due to complaints that the station's longwave signal was interfering with more important communication, in particular military aircraft radio.

August 27, 1920. Argentina made the first mass radio transmission as a communication medium. Medical students of the UBA made the first radio program by transmitting Wagner's Parsifal from the roof of the Teatro Colón; the broadcast was picked up by about 100 amateurs in the city. They kept transmitting different operas on the following nights, making theirs the first ongoing radio programming, and they became known as the "Locos de la azotea" (the crazies of the roof).

Organized broadcasting

Following World War I, the number of stations providing a regular broadcasting service greatly increased, primarily due to advances in vacuum-tube technology. In response to ongoing activities, government regulators eventually codified standards for which stations could make broadcasts intended for the general public; for example, in the United States formal recognition of a "broadcasting service" came with the establishment of regulations effective December 1, 1921, and Canadian authorities created a separate category of "radio-telephone broadcasting stations" in April 1922. However, there were numerous cases of entertainment broadcasts being presented on a regular schedule before their formal recognition by government regulators. Some early examples include:

July 21, 1912. The first person to transmit entertainment broadcasts on a regular schedule appears to have been Charles "Doc" Herrold, who inaugurated weekly programs, using an arc transmitter, from his Wireless School station in San Jose, California. The broadcasts continued until the station was shut down upon the entrance of the United States into World War I in April 1917.

March 28, 1914. The Laeken station in Belgium, under the oversight of Robert Goldschmidt, inaugurated a weekly series of concerts, transmitted at 5:00 p.m.
on Saturdays. These continued for about four months, until July, and were ended by the start of World War I. In August 1914 the Laeken facilities were destroyed to keep them from falling into the hands of invading German troops.

November 1916. De Forest perfected "Oscillion" power vacuum tubes capable of use in radio transmitters, and inaugurated daily broadcasts of entertainment and news from his New York "Highbridge" station, 2XG. This station also suspended operations in April 1917, due to the prohibition of civilian radio transmissions following the United States' entry into World War I. Its most publicized program was the broadcasting of election results for the Hughes-Wilson presidential election on November 7, 1916, with updates provided by wire from the New York American offices. An estimated 7,000 radio listeners as far as 200 miles (320 kilometers) from New York heard election returns interspersed with patriotic music.

April 17, 1919. Shortly after the end of World War I, F. S. McCullough, at the Glenn L. Martin aviation plant in Cleveland, Ohio, began a weekly series of phonograph concerts. However, the broadcasts were soon suspended, due to interference complaints by the U.S. Navy.

November 6, 1919. The first scheduled Dutch radio broadcast, pre-announced in the press, was made by Nederlandsche Radio Industrie station PCGG at The Hague, which began regular concert broadcasts. It found it had a large audience outside the Netherlands, mostly in the UK. (Rather than true AM signals, at least initially this station used a form of narrowband FM, which required receivers to be slightly detuned to receive the signals using slope detection.)

Late 1919. De Forest's New York station, 2XG, returned to the airwaves in late 1919, after having suspended operations during World War I. The station continued to operate until early 1920, when it was shut down because the transmitter had been moved to a new location without permission.

May 20, 1920. Experimental Canadian Marconi station XWA (later CFCF, deleted in 2010 as CINW) in Montreal began regular broadcasts, and claims status as the first commercial broadcaster in the world.

June 1920. De Forest transferred 2XG's former transmitter to San Francisco, California, where it was relicensed as 6XC, the "California Theater station". By June 1920 the station was transmitting daily concerts. De Forest later stated that this was the "first radio-telephone station devoted solely" to broadcasting to the public.

August 20, 1920. On this date the Detroit News began daily transmissions over station 8MK (later WWJ), located in the newspaper's headquarters building. The newspaper began extensively publicizing the station's operations on August 31, 1920, with a special program featuring primary election returns. Station management later claimed the title of being where "commercial radio broadcasting began".

November 2, 1920. Beginning on October 17, 1919, Westinghouse engineer Frank Conrad had broadcast recorded and live music on a semi-regular schedule from his home station, 8XK in Wilkinsburg, Pennsylvania. This inspired his employer to begin its own ambitious service at the company's headquarters in East Pittsburgh, Pennsylvania. Operations began, initially with the call sign 8ZZ, with an election night program featuring election returns on November 2, 1920. As KDKA, the station adopted a daily schedule beginning on December 21, 1920. This station is another contender for the title of "first commercial station".

January 3, 1921.
University of Wisconsin station 9XM began a regular schedule of voice broadcasts, becoming the first radio station in the United States to provide the weather forecast by voice (January 3). In September, farm market broadcasts were added, and on November 1, 9XM carried the first live broadcast of a symphony orchestra: the Cincinnati Symphony Orchestra, from the UW Armory, using a single microphone.

Radio networks

Because most longwave radio frequencies were used for international radiotelegraph communication, a majority of early broadcasting stations operated on mediumwave frequencies, whose limited range generally restricted them to local audiences. One method for overcoming this limitation, as well as for sharing program costs, was to create radio networks, linking stations together with telephone lines to provide a nationwide audience.

United States

In the U.S., the American Telephone and Telegraph Company (AT&T) was the first organization to create a radio network, and also to promote commercial advertising, which it called "toll" broadcasting. Its flagship station, WEAF (now WFAN) in New York City, sold blocks of airtime to commercial sponsors that developed entertainment shows containing commercial messages. AT&T held a monopoly on quality telephone lines, and by 1924 had linked 12 stations in Eastern cities into a "chain". The Radio Corporation of America (RCA), General Electric, and Westinghouse organized a competing network around RCA's flagship station, WJZ (now WABC) in New York City, but were hampered by AT&T's refusal to lease connecting lines or to allow them to sell airtime. In 1926 AT&T sold its radio operations to RCA, which used them to form the nucleus of the new NBC network. By the 1930s, most of the major radio stations in the country were affiliated with networks owned by two companies, NBC and CBS. In 1934, a third national network, the Mutual Radio Network, was formed as a cooperative owned by its stations.

United Kingdom

A second country which quickly adopted network programming was the United Kingdom, and its national network quickly became a prototype for a state-managed monopoly of broadcasting. A rising interest in radio broadcasting by the British public pressured the government to reintroduce the service, following its suspension in 1920. However, the government also wanted to avoid what it termed the "chaotic" U.S. experience of allowing large numbers of stations to operate with few restrictions. There were also concerns about broadcasting becoming dominated by the Marconi company. Arrangements were made for six large radio manufacturers to form a consortium, the British Broadcasting Company (BBC), established on 18 October 1922, which was given a monopoly on broadcasting. This enterprise was supported by a tax on radio set sales, plus an annual license fee on receivers, collected by the Post Office. Initially the eight stations were allowed regional autonomy. In 1927 the original broadcasting organization was replaced by a government-chartered British Broadcasting Corporation, an independent nonprofit supported solely by a ten-shilling receiver license fee. Both highbrow and mass-appeal programmes were carried by the National and Regional networks.

"Golden Age of Radio"

The period from the early 1920s through the 1940s is often called the "Golden Age of Radio". During this period AM radio was the main source of home entertainment, until it was replaced by television.
For the first time entertainment was provided from outside the home, replacing traditional forms of entertainment such as oral storytelling and music from family members. New forms were created, including radio plays, mystery serials, soap operas, quiz shows, variety hours, situation comedies, and children's shows. Radio news, including remote reporting, allowed listeners to be vicariously present at notable events. Radio greatly eased the isolation of rural life. Political officials could now speak directly to millions of citizens. One of the first to take advantage of this was American president Franklin Roosevelt, who became famous for his fireside chats during the Great Depression. However, broadcasting also provided the means to use propaganda as a powerful government tool, and contributed to the rise of fascist and communist ideologies.

Decline in popularity

In the 1940s two new broadcast media, FM radio and television, began to provide extensive competition with the established broadcasting services. The AM radio industry suffered a serious loss of audience and advertising revenue, and coped by developing new strategies. Network broadcasting gave way to format broadcasting: instead of carrying the same programs all over the country, stations individually adopted specialized formats which appealed to different audiences, such as regional and local news, sports, "talk" programs, and programs targeted at minorities. Instead of live music, most stations began playing less expensive recorded music. In the late 1960s and 1970s, top 40 rock and roll stations in the U.S. and Canada, such as WABC and CHUM, transmitted highly processed and extended audio to 11 kHz, successfully attracting huge audiences. For young people, listening to AM broadcasts and participating in their music surveys and contests was the social media of the time.

In the late 1970s, spurred by the exodus of musical programming to FM stations, the AM radio industry in the United States developed technology for broadcasting in stereo. Other nations adopted AM stereo, most commonly choosing Motorola's C-QUAM, and in 1993 the United States also made the C-QUAM system its standard, after a period of allowing four different standards to compete. The selection of a single standard improved acceptance of AM stereo; however, overall adoption of AM stereo worldwide remained limited, and interest declined after 1990. With the continued migration of AM stations away from music to news, sports, and talk formats, receiver manufacturers saw little reason to adopt the more expensive stereo tuners, and thus radio stations had little incentive to upgrade to stereo transmission.

In countries where the use of directional antennas is common, such as the United States, transmitter sites consisting of multiple towers often occupy large tracts of land that have significantly increased in value over the decades, to the point that the value of the land exceeds that of the station itself. This sometimes results in the sale of the transmitter site, with the station relocating to a more distant shared site using significantly less power, or completely shutting down operations.

The ongoing development of alternative transmission systems, including Digital Audio Broadcasting (DAB), satellite radio, and HD (digital) radio, continued the decline in popularity of the traditional broadcast technologies.
These new options, including the introduction of Internet streaming, particularly resulted in the reduction of shortwave transmissions, as international broadcasters found ways to reach their audiences more easily.

In 2022 it was reported that AM radio was being removed from a number of electric vehicle (EV) models, including cars manufactured by Tesla, Audi, Porsche, BMW, and Volvo, reportedly due to automakers' concerns that an EV's higher electromagnetic interference can disrupt the reception of AM transmissions and hurt the listening experience, among other reasons. However, the United States Congress has introduced a bill to require all vehicles sold in the US to have an AM receiver, in order to receive emergency broadcasts.

AM band revitalization efforts in the United States

The FM broadcast band was established in 1941 in the United States, and at the time some suggested that the AM band would soon be eliminated. In 1948 wide-band FM's inventor, Edwin H. Armstrong, predicted that "The broadcasters will set up FM stations which will parallel, carry the same program, as over their AM stations... eventually the day will come, of course, when we will no longer have to build receivers capable of receiving both types of transmission, and then the AM transmitters will disappear." However, FM stations actually struggled for many decades, and it was not until 1978 that FM listenership surpassed that of AM stations. Since then the AM band's share of the audience has continued to decline.

Fairness Doctrine repeal

In 1987, the elimination of the Fairness Doctrine requirement meant that talk shows, which were commonly carried by AM stations, could adopt a more focused presentation on controversial topics, without the distraction of having to provide airtime for contrasting opinions. In addition, satellite distribution made it possible for programs to be economically carried on a national scale. The introduction of nationwide talk shows, most prominently Rush Limbaugh's beginning in 1988, was sometimes credited with "saving" AM radio. However, these stations tended to attract older listeners who were of lesser interest to advertisers, and AM radio's audience share continued to erode.

AM stereo and AMAX standards

In 1961, the FCC adopted a single standard for FM stereo transmissions, which was widely credited with enhancing FM's popularity. Developing the technology for AM broadcasting in stereo was challenging, due to the need to limit the transmissions to a 20 kHz bandwidth while also making them backward compatible with existing non-stereo receivers. In 1980, the FCC authorized an AM stereo standard developed by Magnavox, but two years later revised its decision to instead approve four competing implementations, saying it would "let the marketplace decide" which was best. The lack of a common standard resulted in consumer confusion and increased the complexity and cost of producing AM stereo receivers. In 1993, the FCC again revised its policy, selecting C-QUAM as the sole AM stereo implementation.

In 1993, the FCC also endorsed, although it did not make mandatory, AMAX broadcasting standards developed by the Electronic Industries Association (EIA) and the National Association of Broadcasters (NAB), with the intention of helping AM stations, especially ones with musical formats, become more competitive with FM broadcasters by promoting better-quality receivers.
However, the stereo AM and AMAX initiatives had little impact, and a 2015 review of these events concluded: "Initially the consumer manufacturers made a concerted attempt to specify performance of AM receivers through the 1993 AMAX standard, a joint effort of the EIA and the NAB, with FCC backing... The FCC rapidly followed up on this with codification of the CQUAM AM stereo standard, also in 1993. At this point, the stage appeared to be set for rejuvenation of the AM band. Nevertheless, with the legacy of confusion and disappointment in the rollout of the multiple incompatible AM stereo systems, and failure of the manufacturers (including the auto makers) to effectively promote AMAX radios, coupled with the ever-increasing background of noise in the band, the general public soon lost interest and moved on to other media."

Expanded band

On June 8, 1988, an International Telecommunication Union (ITU)-sponsored conference held at Rio de Janeiro, Brazil, adopted provisions, effective July 1, 1990, to extend the upper end of the Region 2 AM broadcast band by adding ten frequencies spanning from 1610 kHz to 1700 kHz. At the time it was suggested that as many as 500 U.S. stations could be assigned to the new frequencies. On April 12, 1990, the FCC voted to begin the process of populating the expanded band, with the main priority being the reduction of interference on the existing AM band by transferring selected stations to the new frequencies. It was then estimated that the expanded band could accommodate around 300 U.S. stations. However, the number of possible station reassignments turned out to be much lower, with a 2006 accounting reporting that, out of 4,758 licensed U.S. AM stations, only 56 were operating on the expanded band. Moreover, despite an initial requirement that within five years either the original station or its expanded band counterpart had to cease broadcasting, as of 2015 there were 25 cases where the original standard band station was still on the air despite also operating as an expanded band station.

HD radio

HD Radio is a digital audio broadcasting method developed by iBiquity. In 2002 its "hybrid mode", which simultaneously transmits a standard analog signal as well as a digital one, was approved by the FCC for use by AM stations, initially only during daytime hours, due to concerns that at night its wider bandwidth would cause unacceptable interference to stations on adjacent frequencies. In 2007 nighttime operation was also authorized. The number of hybrid mode AM stations is not exactly known, because the FCC does not keep track of the stations employing the system, and some authorized stations have later turned it off; as of 2020 the commission estimated that fewer than 250 AM stations were transmitting hybrid mode signals. On October 27, 2020, the FCC voted to allow AM stations to eliminate their analog transmissions and convert to all-digital operation, with the requirement that stations making the change continue to make programming available over "at least one free over-the-air digital programming stream that is comparable to or better in audio quality than a standard analog broadcast".

FM translator stations

Despite these various actions, AM band audiences continued to contract, and the number of stations began to slowly decline.
A 2009 FCC review reported that "The story of AM radio over the last 50 years has been a transition from being the dominant form of audio entertainment for all age groups to being almost non-existent to the youngest demographic groups. Among persons aged 12–24, AM accounts for only 4% of listening, while FM accounts for 96%. Among persons aged 25–34, AM accounts for only 9% of listening, while FM accounts for 91%. The median age of listeners to the AM band is 57 years old, a full generation older than the median age of FM listeners." In 2009, the FCC made a major regulatory change, when it adopted a policy allowing AM stations to simulcast over FM translator stations. Translators had previously been available only to FM broadcasters, in order to increase coverage in fringe areas. Their assignment for use by AM stations was intended to approximate the station's daytime coverage, which, in cases where the stations reduced power at night, often resulted in expanded nighttime coverage. Although the translator stations are not permitted to originate programming when the "primary" AM station is broadcasting, they are permitted to do so during nighttime hours for AM stations licensed for daytime-only operation. Prior to the adoption of the new policy, as of March 18, 2009, the FCC had issued 215 Special Temporary Authority grants for FM translators relaying AM stations. After creation of the new policy, by 2011 there were approximately 500 in operation, and as of 2020 approximately 2,800 of the 4,570 licensed AM stations were rebroadcasting on one or more FM translators. In 2009 the FCC stated that "We do not intend to allow these cross-service translators to be used as surrogates for FM stations". However, based on station slogans, especially in the case of recently adopted musical formats, the expectation in most cases is that listeners will primarily be tuning in to the FM signal rather than the nominally "primary" AM station. A 2020 review noted that "for many owners, keeping their AM stations on the air now is pretty much just about retaining their FM translator footprint rather than keeping the AM on the air on its own merits". Additional activities In 2018 the FCC, led by then-Commission Chairman Ajit Pai, proposed greatly reducing signal protection for 50 kW Class A "clear channel" stations. This would allow co-channel secondary stations to operate with higher powers, especially at night. However, the Federal Emergency Management Agency (FEMA) expressed concerns that this would reduce the effectiveness of emergency communications. Electric vehicles In May 2023, a bipartisan group of lawmakers in the United States introduced legislation making it illegal for automakers to eliminate AM radio from their cars. The lawmakers argue that AM radio is an important tool for public safety due to being a component of the Emergency Alert System (EAS). Some automakers have been eliminating AM radio from their electric vehicles (EVs) due to interference from the electric motors, but the lawmakers argue that this is a safety risk and that car owners should have access to AM radio regardless of the type of vehicle they drive. The proposed legislation would require all new vehicles to include AM radio at no additional charge, and it would also require automakers that have already eliminated AM radio to inform customers of alternatives.
Technical information AM radio technology is simpler than later transmission systems. An AM receiver detects amplitude variations in the radio waves at a particular frequency, then amplifies changes in the signal voltage to operate a loudspeaker or earphone. However, the simplicity of AM transmission also makes it vulnerable to "static" (radio noise, radio frequency interference) created by both natural atmospheric electrical activity such as lightning, and electrical and electronic equipment, including fluorescent lights, motors and vehicle ignition systems. In large urban centers, AM radio signals can be severely disrupted by metal structures and tall buildings. As a result, AM radio tends to do best in areas where FM frequencies are in short supply, or in thinly populated or mountainous areas where FM coverage is poor. Great care must be taken to avoid mutual interference between stations operating on the same frequency. In general, an AM transmission needs to be about 20 times stronger than an interfering signal to avoid a reduction in quality, in contrast to FM signals, where the "capture effect" means that the dominant signal needs to be only about twice as strong as the interfering one. To allow room for more stations on the mediumwave broadcast band in the United States, in June 1989 the FCC adopted a National Radio Systems Committee (NRSC) standard that limited maximum transmitted audio bandwidth to 10.2 kHz, limiting occupied bandwidth to 20.4 kHz. The former audio limitation was 15 kHz, resulting in an occupied bandwidth of 30 kHz. Another common limitation on AM fidelity is the result of receiver design, although some efforts have been made to improve this, notably through the AMAX standards adopted in the United States. Broadcast band frequencies AM broadcasts are used on several frequency bands. The allocation of these bands is governed by the ITU's Radio Regulations and, on the national level, by each country's telecommunications administration (the FCC in the U.S., for example) subject to international agreements. The frequency ranges given here are those that are allocated to stations. Because of the bandwidth taken up by the sidebands, the range allocated for the band as a whole is usually about 5 kHz wider on either side. Longwave broadcasting Longwave (also known as low frequency (LF)) (148.5 kHz – 283.5 kHz) Broadcasting stations in this band are assigned transmitting frequencies in the range 153 kHz – 279 kHz, and generally maintain 9 kHz spacing. Longwave assignments for broadcasting only exist in ITU Region 1 (Europe, Africa, and northern and central Asia) and are not allocated elsewhere. Individual stations have coverage measured in the hundreds of kilometers; however, only a very limited number of broadcasting slots are available. Most of the earliest broadcasting experiments took place on longwave frequencies; however, complaints about interference from existing services, particularly the military, led to most broadcasting moving to higher frequencies. Medium-wave broadcasting Medium wave (also known as medium frequency (MF)) is by far the most commonly used AM broadcasting band.
In ITU Regions 1 and 3, transmitting frequencies run from 531 kHz to 1602 kHz, with 9 kHz spacing (526.5 kHz – 1606.5 kHz). In ITU Region 2 (the Americas), transmitting frequencies run from 530 kHz to 1700 kHz, using 10 kHz spacing (525 kHz – 1705 kHz); this includes the ITU Extended AM broadcast band, authorized in Region 2 between 1605 kHz and 1705 kHz and previously used for police radio. Shortwave broadcasting Shortwave (also known as high frequency (HF)) transmissions range from approximately 2.3 to 26.1 MHz, divided into 14 broadcast bands. Shortwave broadcasts generally use a narrow 5 kHz channel spacing. Shortwave is used by audio services intended to be heard at great distances from the transmitting station. The long range of shortwave broadcasts comes at the expense of lower audio fidelity. Most broadcast services use AM transmissions, although some use a modified version of AM such as single-sideband modulation (SSB) or an AM-compatible version of SSB such as "SSB with carrier reinserted". VHF AM broadcasting Beginning in the mid-1930s, the United States evaluated options for the establishment of broadcasting stations using much higher transmitting frequencies. In October 1937, the FCC announced a second band of AM stations, consisting of 75 channels spanning from 41.02 to 43.98 MHz, which were informally called Apex. The 40 kHz spacing between adjacent frequencies was four times that of the 10 kHz spacing used on the standard AM broadcast band, which reduced adjacent-frequency interference and provided more bandwidth for high-fidelity programming. However, this band was eliminated effective 1 January 1941, after the FCC determined that establishing a band of FM stations was preferable. Other distribution methods Beginning in the mid-1930s, starting with "The Brown Network" at Brown University in Providence, Rhode Island, a very low power broadcasting method known as carrier current was developed, and mostly adopted on U.S. college campuses. In this approach AM broadcast signals are distributed over electric power lines, which radiate a signal receivable at a short distance from the lines. In Switzerland a system known as "wire broadcasting" (Telefonrundspruch in German) transmitted AM signals over telephone lines in the longwave band until 1998, when it was shut down. In the UK, Rediffusion was an early pioneer of AM radio cable distribution. Hybrid digital broadcast systems, which combine (mono analog) AM transmission with digital sidebands, have started to be used around the world. In the United States, iBiquity's proprietary HD Radio has been adopted and approved by the FCC for medium wave transmissions, while Digital Radio Mondiale is a more open effort often used on the shortwave bands, and can be used alongside many AM broadcasts. Both of these standards are capable of broadcasting audio of significantly greater fidelity than that of standard AM with current bandwidth limitations, and offer a theoretical frequency response of 0–16 kHz, in addition to stereo sound and text data. Microbroadcasting Some microbroadcasters, especially those in the United States operating under the FCC's Part 15 rules, and pirate radio operators on mediumwave and shortwave, achieve greater range than is possible on the FM band. On mediumwave these stations often transmit on 1610 kHz to 1710 kHz.
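Since both band plans are plain arithmetic progressions, the full channel lists, along with the co-channel protection ratios quoted in the technical section expressed in decibels, can be derived directly from the figures above. A minimal illustrative sketch in Python (the helper name is ours, not part of any standard):

# Illustrative sketch: the mediumwave channel plans described above,
# generated as simple arithmetic progressions.
from math import log10

def channels(first_khz: int, last_khz: int, spacing_khz: int) -> list[int]:
    """Return all channel center frequencies in kHz, inclusive of both ends."""
    return list(range(first_khz, last_khz + 1, spacing_khz))

# ITU Regions 1 and 3: 531-1602 kHz with 9 kHz spacing.
r13 = channels(531, 1602, 9)
# ITU Region 2 (the Americas): 530-1700 kHz with 10 kHz spacing,
# including the 1610-1700 kHz expanded band.
r2 = channels(530, 1700, 10)

print(len(r13), r13[:3], r13[-1])   # 120 channels: 531, 540, 549 ... 1602
print(len(r2), r2[:3], r2[-1])      # 118 channels: 530, 540, 550 ... 1700

# Co-channel protection ratios quoted earlier, expressed in decibels:
print(f"AM needs ~{10 * log10(20):.1f} dB over an interferer")  # ~13.0 dB
print(f"FM capture effect: ~{10 * log10(2):.1f} dB")            # ~3.0 dB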
Hobbyists also use low-power AM (LPAM) transmitters to provide programming for vintage radio equipment in areas where AM programming is not widely available or does not carry programming the listener desires; in such cases the transmitter, which is designed to cover only the immediate property and perhaps nearby areas, is connected to a computer, an FM radio or an MP3 player. Microbroadcasting and pirate radio have generally been supplanted by streaming audio on the Internet, but some schools and hobbyists still use LPAM transmissions.
Technology
Broadcasting
null
113530
https://en.wikipedia.org/wiki/Fen
Fen
A fen is a type of peat-accumulating wetland fed by mineral-rich ground or surface water. It is one of the main types of wetland along with marshes, swamps, and bogs. Bogs and fens, both peat-forming ecosystems, are also known as mires. The unique water chemistry of fens is a result of the ground or surface water input. Typically, this input results in higher mineral concentrations and a more basic pH than found in bogs. As peat accumulates in a fen, groundwater input can be reduced or cut off, making the fen ombrotrophic rather than minerotrophic. In this way, fens can become more acidic and transition to bogs over time. Fens can be found around the world, but the vast majority are located at the mid to high latitudes of the Northern Hemisphere. They are dominated by sedges and mosses, particularly graminoids that may be rarely found elsewhere, such as the sedge species Carex exilis. Fens are highly biodiverse ecosystems and often serve as habitats for endangered or rare species, with species composition changing with water chemistry. They also play important roles in the cycling of nutrients such as carbon, nitrogen, and phosphorus due to the lack of oxygen (anaerobic conditions) in waterlogged organic fen soils. Fens have historically been converted to agricultural land. Aside from such conversion, fens face a number of other threats, including peat cutting, pollution, invasive species, and nearby disturbances that lower the water table in the fen, such as quarrying. Interrupting the flow of mineral-rich water into a fen changes the water chemistry, which can alter species richness and dry out the peat. Drier peat is more easily decomposed and can even burn. Distribution and extent Fens are distributed around the world, but are most frequently found at the mid-high latitudes of the Northern Hemisphere. They are found throughout the temperate zone and boreal regions, but are also present in tundra and in specific environmental conditions in other regions around the world. In the United States, fens are most common in the Midwest and Northeast, but can be found across the country. In Canada, fens are most frequent in the lowlands near Hudson Bay and James Bay, but can also be found across the country. Fens are also spread across the northern latitudes of Eurasia, including Britain and Ireland, as well as Japan, but east-central Europe is especially rich in fens. Further south, fens are much rarer, but do exist under specific conditions. In Africa, fens have been found in the Okavango Delta in Botswana and the highland slopes in Lesotho. Fens can also be found at the colder latitudes of the Southern Hemisphere. They are found in New Zealand and southwest Argentina, but the extent is much less than that of the northern latitudes. Locally, fens are most often found at the intersection of terrestrial and aquatic ecosystems, such as the headwaters of streams and rivers. It is estimated that there are approximately 1.1 million square kilometers of fens worldwide, but quantifying the extent of fens is difficult. Because wetland definitions vary regionally, not all countries define fens the same way. In addition, wetland data is not always available or of high quality. Fens are also difficult to rigidly delineate and measure, as they are located between terrestrial and aquatic ecosystems. Definition Rigidly defining types of wetlands, including fens, is difficult for a number of reasons. 
First, wetlands are diverse and varied ecosystems that are not easily categorized according to inflexible definitions. They are often described as a transition between terrestrial and aquatic ecosystems with characteristics of both. This makes it difficult to delineate the exact extent of a wetland. Second, terms used to describe wetland types vary greatly by region. The term bayou, for example, describes a type of wetland, but its use is generally limited to the southern United States. Third, different languages use different terms to describe types of wetlands. For instance, in Russian, there is no equivalent word for the term swamp as it is typically used in North America. The result is a large number of wetland classification systems that each define wetlands and wetland types in their own way. However, many classification systems include four broad categories that most wetlands fall into: marsh, swamp, bog, and fen. While classification systems differ on the exact criteria that define a fen, there are common characteristics that describe fens generally and imprecisely. A general definition provided by the textbook Wetlands describes a fen as "a peat-accumulating wetland that receives some drainage from surrounding mineral soil and usually supports marshlike vegetation." Three examples are presented below to illustrate more specific definitions for the term fen. Canadian Wetland Classification System definition In the Canadian Wetland Classification System, fens are defined by the following characteristics: Peat is present. The surface of the wetland is level with the water table. Water flows on the surface and through the subsurface of the wetland. The water table fluctuates; it may be at the surface of the wetland or a few centimeters above or below it. The wetland receives a significant amount of its water from mineral-rich groundwater or surface water. Decomposed sedges or brown moss peat are present. The vegetation is predominantly graminoids and shrubs. Wetland Ecology: Principles and Conservation (Keddy) definition In the textbook Wetland Ecology: Principles and Conservation, Paul A. Keddy offers a somewhat simpler definition of a fen as "a wetland that is usually dominated by sedges and grasses rooted in shallow peat, often with considerable groundwater movement, and with pH greater than 6." This definition differentiates fens from swamps and marshes by the presence of peat. The Biology of Peatlands (Rydin) definition In The Biology of Peatlands, fens are defined by the following criteria: The wetland is not flooded by lake or stream water. Woody vegetation 2 meters or taller is absent, or canopy cover is less than 25%. The wetland is minerotrophic (it receives its nutrients from mineral-rich groundwater). A further distinction is made between open and wooded fens, where open fens have canopy cover less than 10% and wooded fens have 10–25% canopy cover. If tall shrubs or trees dominate, the wetland is instead classified as a wooded bog or swamp forest, depending on other criteria. Biogeochemical features Hydrological conditions Hydrological conditions, as seen in other wetlands, are a major determinant of fen biota and biogeochemistry. Fen soils are constantly inundated because the water table is at or near the surface. The result is anaerobic (oxygen-free) soils due to the slow rate at which oxygen diffuses into waterlogged soil. Anaerobic soils are ecologically unique because Earth's atmosphere is oxygenated, while most terrestrial ecosystems and surface waters are aerobic.
The anaerobic conditions found in wetland soils result in reduced, rather than oxidized, soil chemistry. A hallmark of fens is that a significant portion of their water supply is derived from groundwater (minerotrophy). Because hydrology is the dominant factor in wetlands, the chemistry of the groundwater has an enormous effect on the characteristics of the fen it supplies. Groundwater chemistry, in turn, is largely determined by the geology of the rocks that the groundwater flows through. Thus, the characteristics of a fen, especially its pH, are directly influenced by the type of rocks its groundwater supply contacts. pH is a major factor in determining fen species composition and richness, with more basic fens called "rich" and more acidic fens called "poor." Rich fens tend to be highly biodiverse and harbor a number of rare or endangered species, and biodiversity tends to decrease as the richness of the fen decreases. Fens tend to be found above rocks that are rich in calcium, such as limestone. When groundwater flows past calcareous (calcium-rich) rocks like limestone (calcium carbonate), a small amount dissolves and is carried to the fen supplied by the groundwater. When calcium carbonate dissolves, it produces bicarbonate and a calcium cation according to the following equilibrium: CaCO₃ + H₂CO₃ ⇌ Ca²⁺ + 2 HCO₃⁻, where carbonic acid (H₂CO₃) is produced by the dissolution of carbon dioxide in water. In fens, the bicarbonate anion produced in this equilibrium acts as a pH buffer, which keeps the pH of the fen relatively stable. Fens supplied by groundwater that does not flow through minerals that act as a buffer when dissolved tend to be more acidic. The same effect is observed when groundwater flows through minerals with low solubility, such as sand. In extreme rich fens, calcium carbonate can precipitate out of solution to form marl deposits. Calcium carbonate precipitates out of solution when the partial pressure of carbon dioxide in the solution falls. The decrease in carbon dioxide partial pressure is caused by uptake by plants for photosynthesis or direct loss to the atmosphere. This reduces the availability of carbonic acid in solution, shifting the above equilibrium back towards the formation of calcium carbonate. The result is the precipitation of calcium carbonate and the formation of marl. Nutrient cycling Fens, as a distinct type of wetland, share many biogeochemical characteristics with other wetlands. Like all wetlands, they play an important role in nutrient cycling because they are located at the interface of aerobic (oxic) and anaerobic (anoxic) environments. Most wetlands have a thin top layer of oxygenated soil in contact with the atmosphere or oxygenated surface waters. Nutrients and minerals may cycle between this oxidized top layer and the reduced layer below, undergoing oxidation and reduction reactions by the microbial communities adapted to each layer. Many important reactions take place in the reduced layer, including denitrification, manganese reduction, iron reduction, sulfate reduction, and methanogenesis. Because wetlands are hotspots for nutrient transformations and often serve as nutrient sinks, they may be constructed to treat nutrient-rich waters created by human activities. Fens are also hotspots for primary production, as the continuous input of groundwater stimulates production. Bogs, which lack this input of groundwater, have much lower primary production.
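To make the buffering effect concrete, here is a minimal sketch using the standard Henderson-Hasselbalch relation, assuming a textbook first dissociation constant for carbonic acid (pKa1 of about 6.3); real fen water chemistry is considerably more complex:

# Illustrative sketch: how the bicarbonate buffer holds fen pH near neutral.
# PKA1 is an assumed textbook value, not a figure from the article.
from math import log10

PKA1 = 6.3  # first dissociation constant of carbonic acid (assumed)

def buffered_ph(bicarbonate: float, carbonic_acid: float) -> float:
    """pH = pKa + log10([HCO3-] / [H2CO3])."""
    return PKA1 + log10(bicarbonate / carbonic_acid)

# Groundwater that has dissolved calcareous rock carries abundant bicarbonate,
# so the ratio [HCO3-]/[H2CO3] is large and pH sits near or above neutral:
for ratio in (1, 5, 10, 50):
    print(f"[HCO3-]/[H2CO3] = {ratio:>2}: pH ~ {buffered_ph(ratio, 1):.1f}")
# ratio  1: pH ~ 6.3  (weakly buffered, toward 'poor fen' conditions)
# ratio 10: pH ~ 7.3  (well buffered, near neutral, 'rich fen' conditions)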
Carbon Carbon in all types of wetlands, including fens, arrives mostly as organic carbon, either from adjacent upland ecosystems or from photosynthesis in the wetland itself. Once in the wetland, organic carbon generally has three main fates: oxidation to CO₂ by aerobic respiration, burial as organic matter in peat, or decomposition to methane. In peatlands, including fens, primary production by plants is greater than decomposition, which results in the accumulation of organic matter as peat. Decomposition within the fen is usually carried out by resident mosses, and in temperate fens it is often driven by the decomposition of plant roots. These peat stores sequester an enormous amount of carbon. Nevertheless, it is difficult to determine whether fens are net sinks or net sources of greenhouse gases. This is because fens emit methane, which is a more potent greenhouse gas than carbon dioxide. Methanogenic archaea that reside in the anaerobic layers of peat combine carbon dioxide and hydrogen gas to form methane and water. This methane can then escape into the atmosphere and exert its warming effects. Peatlands dominated by brown mosses and sedges, such as fens, have been found to emit a greater amount of methane than Sphagnum-dominated peatlands, such as bogs. Nitrogen Fens play an important role in the global nitrogen cycle due to the anaerobic conditions found in their soils, which facilitate the oxidation or reduction of one form of nitrogen to another. Most nitrogen arrives in wetlands as nitrate from runoff, in organic matter from other areas, or by nitrogen fixation in the wetland. There are three main forms of nitrogen found in wetlands: nitrogen in organic matter, oxidized nitrogen (nitrate or nitrite), and ammonium. Nitrogen is abundant in peat. When the organic matter in peat is decomposed in the absence of oxygen, ammonium is produced via ammonification. In the oxidized surface layer of the wetland, this ammonium is oxidized to nitrite and nitrate by nitrification. The production of ammonium in the reduced layer and its consumption in the top oxidized layer drives upward diffusion of ammonium. Likewise, nitrate production in the oxidized layer and nitrate consumption in the reduced layer by denitrification drives downward diffusion of nitrate. Denitrification in the reduced layer produces nitrogen gas and some nitrous oxide, which then exit the wetland to the atmosphere. Nitrous oxide is a potent greenhouse gas whose production is limited by nitrate and nitrite concentrations in fens. Nitrogen, along with phosphorus, controls how fertile a wetland is. Phosphorus Almost all of the phosphorus that arrives in a wetland does so through sediments or plant litter from other ecosystems. Along with nitrogen, phosphorus limits wetland fertility. Under basic conditions like those found in extremely rich fens, calcium will bind to phosphate anions to make calcium phosphates, which are unavailable for uptake by plants. Mosses also play a considerable role in aiding plants in phosphorus uptake by decreasing soil phosphorus stress and stimulating phosphatase activity in organisms found below the moss cover. Helophytes have been shown to bolster phosphorus cycling within fens, especially in fen reestablishment, due to their ability to act as a phosphorus sink, which prevents residual phosphorus in the fen from being transferred away from it. Under normal conditions, phosphorus is held within soil as dissolved inorganic phosphorus, or phosphate, which leaves trace amounts of phosphorus in the rest of the ecosystem.
Iron is important in phosphorus cycling within fens. Iron can bind to high levels of inorganic phosphate within the fen, leading to a toxic environment and inhibition of plant growth. In iron-rich fens, the area can become vulnerable to acidification, excess nitrogen and potassium, and low water levels. Peat soils play a role in preventing the bonding of iron to phosphate by providing high levels of organic anions for iron to bind to instead of inorganic anions such as phosphate. Bog–rich-fen gradient Bogs and fens can be thought of as two ecosystems on a gradient from poor to rich, with bogs at the poor end, extremely rich fens at the rich end, and poor fens in between. In this context, "rich" and "poor" refer to the species richness, or how biodiverse a fen or bog is. Species richness is strongly influenced by pH and concentrations of calcium and bicarbonate. These factors assist in identifying where along the gradient a particular fen falls. In general, rich fens are minerotrophic, or dependent on mineral-rich groundwater, while bogs are ombrotrophic, or dependent on precipitation for water and nutrients. Poor fens fall between these two. Rich fens Rich fens are strongly minerotrophic; that is, a large proportion of their water comes from mineral-rich ground or surface water. Fens that are more distant from surface waters such as rivers and lakes, however, are richer than fens that are connected to them. This water is dominated by calcium and bicarbonate, resulting in a slightly acidic to slightly basic pH characteristic of rich fens. These conditions promote high biodiversity. Within rich fens, there is a large amount of variability. The richest fens are the extreme rich (marl) fens, where marl deposits often build up. These are often pH 7 or greater. Rich and intermediate rich fens are generally neutral to slightly acidic, with a pH of approximately 7 to 5. Rich fens are not always very productive; at high calcium concentrations, calcium ions bind to phosphate anions, reducing the availability of phosphorus and decreasing primary production. Rich fens with limited primary production can stabilize with the accumulation of mosses and mycorrhiza, which promote phosphorus cycling and can support the growth of new vegetation and bacteria. Brown mosses (family Amblystegiaceae) and sedges (genus Carex) are the dominant vegetation. However, an accumulation of mosses such as Sphagnum can lead to the acidification of the rich fen, potentially converting it into a poor fen. Compared to poor fens, rich fens have higher concentrations of bicarbonate, base cations (Na⁺, Ca²⁺, K⁺, Mg²⁺), and sulfate. Poor fens Poor fens are, in many ways, an intermediate between rich fens and bogs. Hydrologically, they are more similar to rich fens than to bogs, but regarding vegetation composition and chemistry, they are more similar to bogs than to rich fens. They are much more acidic than their rich counterparts, with a pH of approximately 5.5 to 4. Peat in poor fens tends to be thicker than that of rich fens, which cuts off vegetation access to the mineral-rich soil underneath. In addition, the thicker peat reduces the influence of mineral-rich groundwater that buffers the pH. This makes the fen more ombrotrophic, or dependent on nutrient-poor precipitation for its water and nutrients. Poor fens may also form in areas where the groundwater supplying the fen flows through sediments that do not dissolve well or have low buffering capacity when dissolved.
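The approximate pH ranges quoted above overlap, but as a rough illustration they can be collapsed into a simple classifier; the cutoffs below are an assumed simplification for demonstration, not an ecological standard:

# Illustrative sketch: placing a site on the bog to rich-fen gradient by pH
# alone. Real classification also weighs calcium, bicarbonate, and vegetation.

def classify_by_ph(ph: float) -> str:
    if ph >= 7.0:
        return "extreme rich (marl) fen"
    if ph >= 5.5:
        return "rich / intermediate rich fen"
    if ph >= 4.0:
        return "poor fen"
    return "bog"

for ph in (7.8, 6.2, 4.7, 3.5):
    print(f"pH {ph}: {classify_by_ph(ph)}")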
Species richness tends to be lower than that of rich fens but higher than that of bogs. Poor fens, like bogs, are dominated by Sphagnum mosses, which acidify the fen and decrease nutrient availability. Threats One of the many threats that fens face is conversion to agricultural land. Where climates are suitable, fens have been drained for agricultural uses including crop production, grazing, and hay making. Draining a fen directly is particularly damaging because it lowers the water table. A lower water table can increase aeration and dry out peat, allowing for aerobic decomposition or burning of the organic matter in peat. Draining a fen indirectly by decreasing its water supply can be just as damaging. Disrupting groundwater flow into the fen with nearby human activities such as quarrying or residential development changes how much water and nutrients enter the fen. This can make the fen more ombrotrophic (dependent on precipitation), which results in acidification and a change in water chemistry. This directly degrades the habitat of fen specialists, and many signature fen species disappear. Fens are also threatened by invasive species, fragmentation, peat cutting, and pollution. Non-native invasive species, such as the common buckthorn in North America, can invade fens and outcompete rare fen species, reducing biodiversity. Habitat fragmentation threatens fen species, especially rare or endangered species that are unable to move to nearby fens due to fragmentation. Peat cutting, while much more common in bogs, does happen in fens. Peat cut from fens has many uses, including burning as a fuel. Pollutants can alter the chemistry of fens and facilitate invasion by invasive species. Common pollutants of fens include road salts, nutrients from septic tanks, and runoff of agricultural fertilizers and pesticides. Use of term in literature Shakespeare used the term "fen-sucked" to describe the fog (literally: rising from marshes) in King Lear, when Lear says, "Infect her beauty, You fen-sucked fogs drawn by the powerful sun, To fall and blister."
Physical sciences
Wetlands
Earth science
113604
https://en.wikipedia.org/wiki/Broadcasting
Broadcasting
Broadcasting is the distribution of audio or video content to a dispersed audience via any electronic mass communications medium, but typically one using the electromagnetic spectrum (radio waves), in a one-to-many model. Broadcasting began with AM radio, which came into popular use around 1920 with the spread of vacuum tube radio transmitters and receivers. Before this, most implementations of electronic communication (early radio, telephone, and telegraph) were one-to-one, with the message intended for a single recipient. The term broadcasting evolved from its use as the agricultural method of sowing seeds in a field by casting them broadly about. It was later adopted for describing the widespread distribution of information by printed materials or by telegraph. Examples applying it to "one-to-many" radio transmissions of an individual station to multiple listeners appeared as early as 1898. Over-the-air broadcasting is usually associated with radio and television, though more recently, both radio and television transmissions have begun to be distributed by cable (cable television). The receiving parties may include the general public or a relatively small subset; the point is that anyone with the appropriate receiving technology and equipment (e.g., a radio or television set) can receive the signal. The field of broadcasting includes both government-managed services such as public radio, community radio and public television, and private commercial radio and commercial television. The U.S. Code of Federal Regulations, title 47, part 97, defines broadcasting as "transmissions intended for reception by the general public, either direct or relayed". Private or two-way telecommunications transmissions do not qualify under this definition. For example, amateur ("ham") and citizens band (CB) radio operators are not allowed to broadcast. As defined, transmitting and broadcasting are not the same. Transmission of radio and television programs from a radio or television station to home receivers by radio waves is referred to as over the air (OTA) or terrestrial broadcasting and in most countries requires a broadcasting license. Transmissions using a wire or cable, like cable television (which also retransmits OTA stations with their consent), are also considered broadcasts but do not necessarily require a license (though in some countries, a license is required). In the 2000s, transmissions of television and radio programs via streaming digital technology have increasingly been referred to as broadcasting as well. History In 1894, Italian inventor Guglielmo Marconi began developing a wireless communication system using the then-newly discovered phenomenon of radio waves, showing by 1901 that they could be transmitted across the Atlantic Ocean. This was the start of wireless telegraphy by radio. Audio radio broadcasting began experimentally in the first decade of the 20th century. On 17 December 1902, a transmission from the Marconi station in Glace Bay, Nova Scotia, Canada, became the world's first radio message to cross the Atlantic from North America. In 1904, a commercial service was established to transmit nightly news summaries to subscribing ships, which incorporated them into their onboard newspapers. World War I accelerated the development of radio for military communications. After the war, commercial radio AM broadcasting began in the 1920s and became an important mass medium for entertainment and news.
World War II again accelerated the development of radio for the wartime purposes of aircraft and land communication, radio navigation, and radar. FM radio broadcasting was developed in the United States beginning in the 1930s and was adopted more widely in the United Kingdom from the 1970s, eventually displacing AM as the dominant commercial standard. On 25 March 1925, John Logie Baird demonstrated the transmission of moving pictures at the London department store Selfridges. Baird's device relied upon the Nipkow disk and thus became known as the mechanical television. It formed the basis of experimental broadcasts done by the British Broadcasting Corporation beginning on 30 September 1929. However, for most of the 20th century, televisions depended on the cathode-ray tube invented by Karl Braun. The first version of such a television to show promise was produced by Philo Farnsworth and demonstrated to his family on 7 September 1927. After World War II, interrupted experiments resumed and television became an important home entertainment broadcast medium, using VHF and UHF spectrum. Satellite broadcasting was initiated in the 1960s and moved into general industry usage in the 1970s, with DBS (Direct Broadcast Satellites) emerging in the 1980s. Originally, all broadcasting was composed of analog signals using analog transmission techniques, but in the 2000s broadcasters switched to digital signals using digital transmission. An analog signal is any continuous signal representing some other quantity, i.e., analogous to another quantity. For example, in an analog audio signal, the instantaneous signal voltage varies continuously with the pressure of the sound waves. In contrast, a digital signal represents the original time-varying quantity as a sampled sequence of quantized values, which imposes some bandwidth and dynamic range constraints on the representation. In general usage, broadcasting most frequently refers to the transmission of information and entertainment programming from various sources to the general public: analog audio radio (AM, FM) vs. digital audio radio (HD Radio, digital audio broadcasting (DAB), satellite radio, and Digital Radio Mondiale (DRM)); analog television vs. digital television; and wireless. The world's technological capacity to receive information through one-way broadcast networks more than quadrupled during the two decades from 1986 to 2007, from 432 exabytes of (optimally compressed) information, to 1.9 zettabytes. This is the information equivalent of 55 newspapers per person per day in 1986, and 175 newspapers per person per day by 2007. Methods In a broadcast system, the central high-powered broadcast tower transmits a high-frequency electromagnetic wave to numerous receivers. The high-frequency wave sent by the tower is modulated with a signal containing visual or audio information. The receiver is then tuned so as to pick up the high-frequency wave and a demodulator is used to retrieve the signal containing the visual or audio information. The broadcast signal can be either analog (signal is varied continuously with respect to the information) or digital (information is encoded as a set of discrete values). Historically, there have been several methods used for broadcasting electronic media audio and video to the general public: Telephone broadcasting (1881–1932): the earliest form of electronic broadcasting (not counting data services offered by stock telegraph companies from 1867, if ticker-tapes are excluded from the definition).
Telephone broadcasting began with the advent of Théâtrophone ("Theatre Phone") systems, which were telephone-based distribution systems allowing subscribers to listen to live opera and theatre performances over telephone lines, created by French inventor Clément Ader in 1881. Telephone broadcasting also grew to include telephone newspaper services for news and entertainment programming which were introduced in the 1890s, primarily located in large European cities. These telephone-based subscription services were the first examples of electrical/electronic broadcasting and offered a wide variety of programming. Radio broadcasting (experimentally from 1906, commercially from 1920): audio signals sent through the air as radio waves from a transmitter, picked up by an antenna and sent to a receiver. Radio stations can be linked in radio networks to broadcast common radio programs, either in broadcast syndication, simulcast or subchannels. Television broadcasting (telecast; experimentally from 1925, commercially from the 1930s): an extension of radio to include video signals. Cable radio (also called cable FM, from 1928) and cable television (from 1932): both via coaxial cable, originally serving principally as transmission media for programming produced at either radio or television stations, but later expanding into a broad universe of cable-originated channels. Direct-broadcast satellite (DBS) and satellite radio: meant for direct-to-home broadcast programming (as opposed to studio network uplinks and down-links); provides a mix of traditional radio or television broadcast programming, or both, with dedicated satellite radio programming.
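As a concrete illustration of the modulate/demodulate cycle described in the methods discussion above, here is a minimal numeric sketch (Python with NumPy); the parameter values are arbitrary choices for demonstration, not figures from any broadcast standard:

# Illustrative sketch: an audio-band message amplitude-modulates a carrier,
# and a receiver recovers it by envelope detection.
import numpy as np

fs = 200_000                     # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)   # 10 ms of signal
fc, fm = 20_000, 1_000           # carrier and message frequencies, Hz
m = 0.5                          # modulation index (< 1 avoids overmodulation)

message = np.cos(2 * np.pi * fm * t)
am = (1 + m * message) * np.cos(2 * np.pi * fc * t)  # AM waveform

# Envelope detector: rectify, then low-pass with a short moving average
# (a crude stand-in for the RC filter in a real receiver).
rectified = np.abs(am)
window = int(fs / fc) * 2        # average over ~2 carrier cycles
envelope = np.convolve(rectified, np.ones(window) / window, mode="same")

core = envelope[window:-window]  # drop filter edge effects
# The recovered envelope tracks 1 + m * message, scaled by ~2/pi from
# averaging the rectified carrier: min ~ (2/pi)*0.5, max ~ (2/pi)*1.5.
print(f"{core.min():.2f} {core.max():.2f}")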
Technology
Media
null
113624
https://en.wikipedia.org/wiki/Graphics%20card
Graphics card
A graphics card (also called a video card, display card, graphics accelerator, graphics adapter, VGA card/VGA, video adapter, display adapter, or colloquially GPU) is a computer expansion card that generates a feed of graphics output to a display device such as a monitor. Graphics cards are sometimes called discrete or dedicated graphics cards to emphasize their distinction from an integrated graphics processor on the motherboard or the central processing unit (CPU). A graphics processing unit (GPU) that performs the necessary computations is the main component in a graphics card, but the acronym "GPU" is sometimes also used, erroneously, to refer to the graphics card as a whole. Most graphics cards are not limited to simple display output. The graphics processing unit can be used for additional processing, which reduces the load on the CPU. Additionally, computing platforms such as OpenCL and CUDA allow using graphics cards for general-purpose computing. Applications of general-purpose computing on graphics cards include AI training, cryptocurrency mining, and molecular simulation. Usually, a graphics card comes in the form of a printed circuit board (expansion board) which is to be inserted into an expansion slot. Others may have dedicated enclosures, and they are connected to the computer via a docking station or a cable. These are known as external GPUs (eGPUs). Graphics cards are often preferred over integrated graphics for increased performance. A more powerful graphics card will be able to render more frames per second. History Graphics cards, also known as video cards or graphics processing units (GPUs), have historically evolved alongside computer display standards to accommodate advancing technologies and user demands. In the realm of IBM PC compatibles, the early standards included Monochrome Display Adapter (MDA), Color Graphics Adapter (CGA), Hercules Graphics Card, Enhanced Graphics Adapter (EGA), and Video Graphics Array (VGA). Each of these standards represented a step forward in the ability of computers to display more colors, higher resolutions, and richer graphical interfaces, laying the foundation for the development of modern graphical capabilities. In the late 1980s, advancements in personal computing led companies like Radius to develop specialized graphics cards for the Apple Macintosh II. These cards were unique in that they incorporated discrete 2D QuickDraw capabilities, enhancing the graphical output of Macintosh computers by accelerating 2D graphics rendering. QuickDraw, a core part of the Macintosh graphical user interface, allowed for the rapid rendering of bitmapped graphics, fonts, and shapes, and the introduction of such hardware-based enhancements signaled an era of specialized graphics processing in consumer machines. The evolution of graphics processing took a major leap forward in the mid-1990s with 3dfx Interactive's introduction of the Voodoo series, one of the earliest consumer-facing GPUs that supported 3D acceleration. These cards, however, were dedicated entirely to 3D processing and lacked 2D support, necessitating the use of a separate 2D graphics card in tandem. The Voodoo's architecture marked a major shift in graphical computing by offloading the demanding task of 3D rendering from the CPU to the GPU, significantly improving gaming performance and graphical realism. The development of fully integrated GPUs that could handle both 2D and 3D rendering came with the introduction of the NVIDIA RIVA 128.
Released in 1997, the RIVA 128 was one of the first consumer-facing GPUs to integrate both 3D and 2D processing units on a single chip. This innovation simplified the hardware requirements for end-users, as they no longer needed separate cards for 2D and 3D rendering, thus paving the way for the widespread adoption of more powerful and versatile GPUs in personal computers. In contemporary times, the majority of graphics cards are built using chips sourced from two dominant manufacturers: AMD and Nvidia. These modern graphics cards are multifunctional and support various tasks beyond rendering 3D images for gaming. They also provide 2D graphics processing, video decoding, TV output, and multi-monitor setups. Additionally, many graphics cards now have integrated sound capabilities, allowing them to transmit audio alongside video output to connected TVs or monitors with built-in speakers, further enhancing the multimedia experience. Within the graphics industry, these products are often referred to as graphics add-in boards (AIBs). The term "AIB" emphasizes the modular nature of these components, as they are typically added to a computer's motherboard to enhance its graphical capabilities. The evolution from the early days of separate 2D and 3D cards to today's integrated and multifunctional GPUs reflects the ongoing technological advancements and the increasing demand for high-quality visual and multimedia experiences in computing. Discrete vs integrated graphics As an alternative to the use of a graphics card, video hardware can be integrated into the motherboard, CPU, or a system-on-chip as integrated graphics. Motherboard-based implementations are sometimes called "on-board video". Some motherboards support using both integrated graphics and the graphics card simultaneously to feed separate displays. The main advantages of integrated graphics are low cost, compactness, simplicity, and low energy consumption. Integrated graphics often deliver lower performance than a graphics card because the graphics processing unit inside integrated graphics needs to share system resources with the CPU. A graphics card, on the other hand, has separate random access memory (RAM), a dedicated cooling system, and dedicated power regulators. A graphics card can offload work from the CPU and system RAM and reduce memory-bus contention; therefore, overall computer performance can improve in addition to the increased performance in graphics processing. Such improvements to performance can be seen in video gaming, 3D animation, and video editing. Both AMD and Intel have introduced CPUs and motherboard chipsets which support the integration of a GPU into the same die as the CPU. AMD advertises CPUs with integrated graphics under the trademark Accelerated Processing Unit (APU), while Intel brands similar technology under "Intel Graphics Technology". Power demand As the processing power of graphics cards increased, so did their demand for electrical power. Current high-performance graphics cards tend to consume large amounts of power. For example, the thermal design power (TDP) for the GeForce Titan RTX is 280 watts. When tested with video games, the GeForce RTX 2080 Ti Founder's Edition averaged 300 watts of power consumption. While CPU and power supply manufacturers have recently aimed toward higher efficiency, the power demands of graphics cards have continued to rise, with graphics cards often having the largest power consumption of any individual component in a computer.
Although power supplies have also increased their power output, the bottleneck occurs in the PCI-Express connection, which is limited to supplying 75 watts. Modern graphics cards with a power consumption of over 75 watts usually include a combination of six-pin (75 W) or eight-pin (150 W) sockets that connect directly to the power supply. Providing adequate cooling becomes a challenge in such computers. Computers with multiple graphics cards may require power supplies over 750 watts. Heat extraction becomes a major design consideration for computers with two or more high-end graphics cards. Within the Nvidia GeForce RTX 30 series (Ampere architecture), a custom-flashed RTX 3090 named "Hall of Fame" has been recorded reaching a peak power draw as high as 630 watts. A standard RTX 3090 can peak at up to 450 watts. The RTX 3080 can reach up to 350 watts, while a 3070 can reach a similar, if not slightly lower, peak power draw. Ampere cards of the Founders Edition variant feature a "dual axial flow through" cooler design, which includes fans above and below the card to dissipate as much heat as possible towards the rear of the computer case. A similar design was used by the Sapphire Radeon RX Vega 56 Pulse graphics card. Size Graphics cards for desktop computers have different size profiles, which allows graphics cards to be added to smaller-sized computers. Some graphics cards are not of the usual size and are termed "low profile". Graphics card profiles are based on height only, with low-profile cards taking up less than the height of a PCIe slot; some can be as low as "half-height". Length and thickness can vary greatly, with high-end cards usually occupying two or three expansion slots, and with modern high-end graphics cards such as the RTX 4090 exceeding 300 mm in length. A lower-profile card is preferred when trying to fit multiple cards or if graphics cards run into clearance issues with other motherboard components like the DIMM or PCIe slots. This can be fixed with a larger computer case such as a mid-tower or full tower. Full towers are usually able to fit larger motherboards in sizes like ATX and micro ATX. GPU sag In the late 2010s and early 2020s, some high-end graphics card models have become so heavy that it is possible for them to sag downwards after installation without proper support, which is why many manufacturers provide additional support brackets. GPU sag can damage a GPU in the long term. Multicard scaling Some graphics cards can be linked together to allow scaling graphics processing across multiple cards. This is done using either the PCIe bus on the motherboard or, more commonly, a data bridge. Usually, the cards must be of the same model to be linked, and most low-end cards cannot be linked in this way. AMD and Nvidia both have proprietary scaling methods: CrossFireX for AMD, and SLI (since the Turing generation, superseded by NVLink) for Nvidia. Cards from different chip-set manufacturers or architectures cannot be used together for multi-card scaling. If graphics cards have different sizes of memory, the lowest value will be used, with the higher values disregarded. Currently, scaling on consumer-grade cards can be done using up to four cards. The use of four cards requires a large motherboard with a proper configuration. Nvidia's GeForce GTX 590 graphics card can be configured in a four-card configuration. As stated above, users will want to stick to cards with the same performance for optimal use.
Motherboards including the ASUS Maximus 3 Extreme and Gigabyte GA EX58 Extreme are certified to work with this configuration. A large power supply is necessary to run the cards in SLI or CrossFireX. Power demands must be known before a proper supply is installed. For a four-card configuration, a 1000+ watt supply is needed. With any relatively powerful graphics card, thermal management cannot be ignored. Graphics cards require well-vented chassis and good thermal solutions. Air or water cooling is usually required, though low-end GPUs can use passive cooling. Larger configurations use water solutions or immersion cooling to achieve proper performance without thermal throttling. SLI and CrossFire have become increasingly uncommon, as most games do not fully utilize multiple GPUs and most users cannot afford them. Multiple GPUs are still used on supercomputers (like in Summit), on workstations to accelerate video and 3D rendering, visual effects, and simulations, and for training artificial intelligence. 3D graphics APIs A graphics driver usually supports one or multiple cards by the same vendor and has to be written for a specific operating system. Additionally, the operating system or an extra software package may provide certain programming APIs for applications to perform 3D rendering. Specific usage Some GPUs are designed with specific usage in mind: gaming (GeForce GTX, GeForce RTX, Nvidia Titan, Radeon HD, Radeon RX, Intel Arc); cloud gaming (Nvidia Grid, Radeon Sky); workstation (Nvidia Quadro, AMD FirePro, Radeon Pro, Intel Arc Pro); cloud workstation (Nvidia Tesla, AMD FireStream); artificial intelligence in the cloud (Nvidia Tesla, Radeon Instinct); and automated/driverless cars (Nvidia Drive PX). Industry As of 2016, the primary suppliers of the GPUs (graphics chips or chipsets) used in graphics cards are AMD and Nvidia. In the third quarter of 2013, AMD had a 35.5% market share while Nvidia had 64.5%, according to Jon Peddie Research. In economics, this industry structure is termed a duopoly. AMD and Nvidia also build and sell graphics cards, which are termed graphics add-in-boards (AIBs) in the industry. (See Comparison of Nvidia graphics processing units and Comparison of AMD graphics processing units.) In addition to marketing their own graphics cards, AMD and Nvidia sell their GPUs to authorized AIB suppliers, which AMD and Nvidia refer to as "partners". The fact that Nvidia and AMD compete directly with their customer/partners complicates relationships in the industry. The fact that AMD and Intel are direct competitors in the CPU industry is also noteworthy, since AMD-based graphics cards may be used in computers with Intel CPUs. Intel's integrated graphics may weaken AMD, since the latter derives a significant portion of its revenue from its APUs. As of the second quarter of 2013, there were 52 AIB suppliers. These AIB suppliers may market graphics cards under their own brands, produce graphics cards for private label brands, or produce graphics cards for computer manufacturers. Some AIB suppliers such as MSI build both AMD-based and Nvidia-based graphics cards. Others, such as EVGA, build only Nvidia-based graphics cards, while XFX now builds only AMD-based graphics cards. Several AIB suppliers are also motherboard suppliers. Most of the largest AIB suppliers are based in Taiwan, and they include ASUS, MSI, GIGABYTE, and Palit. Hong Kong-based AIB manufacturers include Sapphire and Zotac, which sell graphics cards exclusively for AMD and Nvidia GPUs, respectively.
Market Graphics card shipments peaked at a total of 114 million in 1999. By contrast, they totaled 14.5 million units in the third quarter of 2013, a 17% fall from Q3 2012 levels. Shipments reached an annual total of 44 million in 2015. The sales of graphics cards have trended downward due to improvements in integrated graphics technologies; high-end, CPU-integrated graphics can provide competitive performance with low-end graphics cards. At the same time, graphics card sales have grown within the high-end segment, as manufacturers have shifted their focus to prioritize the gaming and enthusiast market. Beyond the gaming and multimedia segments, graphics cards have been increasingly used for general-purpose computing, such as big data processing. The growth of cryptocurrency has placed severe demand on high-end graphics cards, especially in large quantities, due to their advantages in the process of cryptocurrency mining. In January 2018, mid- to high-end graphics cards experienced a major surge in price, with many retailers having stock shortages due to the significant demand among this market. Graphics card companies released mining-specific cards designed to run 24 hours a day, seven days a week, and without video output ports. The graphics card industry took a setback due to the 2020–21 chip shortage. Parts A modern graphics card consists of a printed circuit board on which the components are mounted. These include: Graphics processing unit A graphics processing unit (GPU), also occasionally called visual processing unit (VPU), is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the building of images in a frame buffer intended for output to a display. Because of the large degree of programmable computational complexity for such a task, a modern graphics card is also a computer unto itself. Heat sink A heat sink is mounted on most modern graphics cards. A heat sink spreads out the heat produced by the graphics processing unit evenly throughout the heat sink and unit itself. The heat sink commonly has a fan mounted to cool the heat sink and the graphics processing unit. Not all cards have heat sinks; for example, some cards are liquid-cooled and instead have a water block; additionally, cards from the 1980s and early 1990s did not produce much heat and did not require heat sinks. Most modern graphics cards need proper thermal solutions. They can be water-cooled or cooled through heat sinks with additional connected heat pipes, usually made of copper for the best thermal transfer. Video BIOS The video BIOS or firmware contains a minimal program for the initial set up and control of the graphics card. It may contain information on the memory and memory timing, operating speeds and voltages of the graphics processor, and other details which can sometimes be changed. Modern Video BIOSes do not support the full functionality of graphics cards; they are only sufficient to identify and initialize the card to display one of a few frame buffer or text display modes. The BIOS does not support YUV to RGB translation, video scaling, pixel copying, compositing or any of the multitude of other 2D and 3D features of the graphics card, which must be accessed by software drivers. Video memory The memory capacity of most modern graphics cards ranges from 2 to 24 GB, with capacities of up to 32 GB available as of the late 2010s, as the applications for graphics use become more powerful and widespread.
Since video memory needs to be accessed by the GPU and the display circuitry, it often uses special high-speed or multi-port memory, such as VRAM, WRAM, SGRAM, etc. Around 2003, the video memory was typically based on DDR technology. During and after that year, manufacturers moved towards DDR2, GDDR3, GDDR4, GDDR5, GDDR5X, and GDDR6. The effective memory clock rate in modern cards is generally between 2 and 15 GHz. Video memory may be used for storing other data as well as the screen image, such as the Z-buffer, which manages the depth coordinates in 3D graphics, as well as textures, vertex buffers, and compiled shader programs. RAMDAC The RAMDAC, or random-access-memory digital-to-analog converter, converts digital signals to analog signals for use by a computer display that uses analog inputs such as cathode-ray tube (CRT) displays. The RAMDAC is a kind of RAM chip that regulates the functioning of the graphics card. Depending on the number of bits used and the RAMDAC-data-transfer rate, the converter will be able to support different computer-display refresh rates. With CRT displays, it is best to work over 75 Hz and never under 60 Hz, to minimize flicker. (This is not a problem with LCD displays, as they have little to no flicker.) Due to the growing popularity of digital computer displays and the integration of the RAMDAC onto the GPU die, it has mostly disappeared as a discrete component. All current LCD/plasma monitors and TVs and projectors with only digital connections work in the digital domain and do not require a RAMDAC for those connections. There are displays that feature analog inputs (VGA, component, SCART, etc.) only. These require a RAMDAC, but they reconvert the analog signal back to digital before they can display it, with the unavoidable loss of quality stemming from this digital-to-analog-to-digital conversion. With the VGA standard being phased out in favor of digital formats, RAMDACs have started to disappear from graphics cards. Output interfaces The most common connection systems between the graphics card and the computer display are: Video Graphics Array (VGA) (DE-15) Also known as D-sub, VGA is an analog-based standard adopted in the late 1980s designed for CRT displays, also called VGA connector. Today, the VGA analog interface is used for high-definition video resolutions, including 1080p and higher. Some problems of this standard are electrical noise, image distortion and sampling error in evaluating pixels. While the VGA transmission bandwidth is high enough to support even higher resolution playback, the picture quality can degrade depending on cable quality and length. The extent of quality difference depends on the individual's eyesight and the display; when using a DVI or HDMI connection, especially on larger-sized LCD/LED monitors or TVs, quality degradation, if present, is prominently visible. Blu-ray playback at 1080p is possible via the VGA analog interface, if Image Constraint Token (ICT) is not enabled on the Blu-ray disc. Digital Visual Interface (DVI) Digital Visual Interface is a digital-based standard designed for displays such as flat-panel displays (LCDs, plasma screens, wide high-definition television displays) and video projectors. There were also some rare high-end CRT monitors that use DVI. It avoids image distortion and electrical noise, mapping each pixel from the computer to a display pixel, using its native resolution.
Output interfaces

The most common connection systems between the graphics card and the computer display are:

Video Graphics Array (VGA) (DE-15)

Also known as D-sub, VGA is an analog-based standard adopted in the late 1980s designed for CRT displays, also called the VGA connector. Today, the VGA analog interface is used for high-definition video resolutions including 1080p and higher. Some problems with this standard are electrical noise, image distortion, and sampling error in evaluating pixels. While the VGA transmission bandwidth is high enough to support even higher-resolution playback, picture quality can degrade depending on cable quality and length. The extent of the quality difference depends on the individual's eyesight and the display; when using a DVI or HDMI connection, especially on larger LCD/LED monitors or TVs, quality degradation, if present, is prominently visible. Blu-ray playback at 1080p is possible via the VGA analog interface if Image Constraint Token (ICT) is not enabled on the Blu-ray disc.

Digital Visual Interface (DVI)

Digital Visual Interface is a digital-based standard designed for displays such as flat-panel displays (LCDs, plasma screens, wide high-definition television displays) and video projectors. There were also some rare high-end CRT monitors that used DVI. It avoids image distortion and electrical noise, mapping each pixel from the computer to a display pixel, using the display's native resolution. Most manufacturers include a DVI-I connector, allowing (via a simple adapter) standard RGB signal output to an old CRT or LCD monitor with VGA input.

Video-in video-out (VIVO) for S-Video, composite video and component video

These connectors are included to allow connection with televisions, DVD players, video recorders and video game consoles. They often come in two 10-pin mini-DIN connector variations, and the VIVO splitter cable generally comes with either 4 connectors (S-Video in and out plus composite video in and out) or 6 connectors (S-Video in and out, component (YPbPr) out and composite in and out).

High-Definition Multimedia Interface (HDMI)

HDMI is a compact audio/video interface for transferring uncompressed video data and compressed/uncompressed digital audio data from an HDMI-compliant device ("the source device") to a compatible digital audio device, computer monitor, video projector, or digital television. HDMI is a digital replacement for existing analog video standards. HDMI supports copy protection through HDCP.

DisplayPort

DisplayPort is a digital display interface developed by the Video Electronics Standards Association (VESA). The interface is primarily used to connect a video source to a display device such as a computer monitor, though it can also be used to transmit audio, USB, and other forms of data. The VESA specification is royalty-free. VESA designed it to replace VGA, DVI, and LVDS. Backward compatibility with VGA and DVI via adapter dongles enables consumers to use DisplayPort-fitted video sources without replacing existing display devices. Although DisplayPort has much of the same functionality as HDMI with greater throughput, it is expected to complement the interface, not replace it.

USB-C

Other types of connection systems

Motherboard interfaces

Chronologically, the main connection systems between graphics card and motherboard have been:

S-100 bus: Designed in 1974 as a part of the Altair 8800, it is the first industry-standard bus for the microcomputer industry.
ISA: Introduced in 1981 by IBM, it became dominant in the marketplace in the 1980s. It is an 8- or 16-bit bus clocked at 8 MHz.
NuBus: Used in the Macintosh II, it is a 32-bit bus with an average bandwidth of 10 to 20 MB/s.
MCA: Introduced in 1987 by IBM, it is a 32-bit bus clocked at 10 MHz.
EISA: Released in 1988 to compete with IBM's MCA, it was compatible with the earlier ISA bus. It is a 32-bit bus clocked at 8.33 MHz.
VLB: An extension of ISA, it is a 32-bit bus clocked at 33 MHz. Also referred to as the VESA local bus.
PCI: Replaced the EISA, ISA, MCA and VESA buses from 1993 onwards. PCI allowed dynamic connectivity between devices, avoiding the manual adjustments required with jumpers. It is a 32-bit bus clocked at 33 MHz.
UPA: An interconnect bus architecture introduced by Sun Microsystems in 1995. It is a 64-bit bus clocked at 67 or 83 MHz.
USB: Although mostly used for miscellaneous devices, such as secondary storage devices, peripherals and toys, USB displays and display adapters exist. It was first used in 1996.
AGP: First used in 1997, it is a dedicated-to-graphics bus. It is a 32-bit bus clocked at 66 MHz.
PCI-X: An extension of the PCI bus, introduced in 1998. It improves upon PCI by extending the width of the bus to 64 bits and the clock frequency to up to 133 MHz.
PCI Express: Abbreviated as PCIe, it is a point-to-point interface released in 2004. In 2006, it provided a data-transfer rate double that of AGP.
It should not be confused with PCI-X, an enhanced version of the original PCI specification. PCIe is the standard for most modern graphics cards. The interfaces above can be compared by their bus width, clock rate, and resulting bandwidth, as sketched below.
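A minimal Python sketch of that comparison, computing theoretical peak bandwidth for the parallel buses from the width and clock figures listed above (real-world throughput is lower due to protocol overhead, and PCIe, being a serial packet-based link, is not modeled this way):

```python
# Theoretical peak bandwidth of a parallel bus: width (bits) x clock (MHz) / 8.
def peak_bandwidth_mb_s(width_bits: int, clock_mhz: float) -> float:
    return width_bits * clock_mhz / 8

buses = {
    "ISA (16-bit, 8 MHz)":     (16, 8),
    "PCI (32-bit, 33 MHz)":    (32, 33),
    "AGP 1x (32-bit, 66 MHz)": (32, 66),
    "PCI-X (64-bit, 133 MHz)": (64, 133),
}
for name, (width, clock) in buses.items():
    print(f"{name}: {peak_bandwidth_mb_s(width, clock):.0f} MB/s")
# ISA: 16, PCI: 132, AGP 1x: 264, PCI-X: 1064 MB/s
```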
Pitcher plant
Pitcher plants are carnivorous plants whose modified leaves are known as pitfall traps—a prey-trapping mechanism featuring a deep cavity filled with digestive liquid. The traps of what are considered to be "true" pitcher plants are formed by specialized leaves. The plants attract and drown their prey with nectar.

Types

The term "pitcher plant" generally refers to members of the Nepenthaceae and Sarraceniaceae families, but similar pitfall traps are employed by the monotypic Cephalotaceae and some members of the Bromeliaceae. The families Nepenthaceae and Sarraceniaceae are the most species-rich families of pitcher plants.

Nepenthaceae

The Nepenthaceae contains a single genus, Nepenthes, with over 100 species and numerous hybrids and cultivars. In this genus of Old World pitcher plants, the pitchers are borne at the end of tendrils that extend from the midrib of an otherwise unexceptional leaf. Old World pitcher plants are typically characterized as having reduced and symmetrical pitchers with a comprehensive waxy coating on the surface of the inner pitcher wall. The plants themselves are often climbers, accessing the canopy of their habitats using the aforementioned tendrils, although others are found on the ground in forest clearings or as epiphytes on trees.

Sarraceniaceae

The New World pitcher plants (Sarraceniaceae), which comprise three genera, are ground-dwelling herbs whose pitchers arise from a horizontal rhizome. In this family, the entire leaf forms the pitcher, as opposed to the Nepenthaceae, where the pitcher arises from the terminal portion of the leaf. The species of the genus Heliamphora, popularly known as marsh pitchers (or erroneously as sun pitchers), have a simple rolled-leaf pitcher, at the tip of which is a spoon-like structure that secretes nectar. They are restricted to areas of high rainfall in South America. The North American genus Sarracenia comprises the trumpet pitchers, which have a more complex trap than Heliamphora, with an operculum that prevents excess accumulation of rainwater in most of the species. The single species in the California genus Darlingtonia is popularly known as the cobra plant, due to its possession of an inflated "lid" with elegant false exits and a forked "tongue", which serves to ferry ants and other prey to the entrance of the pitcher. The species in the genus Sarracenia readily hybridize, making their classification a complex matter. The purple pitcher plant, Sarracenia purpurea, is the floral emblem of the province of Newfoundland and Labrador, Canada.

Cephalotaceae

The Cephalotaceae is a monotypic family with a single genus and species, Cephalotus follicularis. This species has a small (2–5 cm) pitcher similar in form to those of Nepenthes. Unlike in Nepenthes, in Cephalotus follicularis the petiole is attached to the rear of the upper trap rim rather than to the base of the pitcher. The species occurs in only one location in southwestern Australia.

Bromeliaceae

A few species of bromeliads (Bromeliaceae), such as Brocchinia reducta and Catopsis berteroniana, are known or suspected to be carnivorous.

Feeding behavior

Attraction

Foraging, flying, or crawling insects such as flies are attracted to a cavity formed by the cupped leaf, often by visual lures such as anthocyanin pigments, and by nectar. Many pitcher plants exhibit patterns of ultraviolet coloration which may play a role in attracting insects.
Some species, such as Cephalotus follicularis, likely use camouflage to trap insects, as their coloration matches that of the surrounding environment and the plants are often embedded in the substrate such that the traps are flush with the ground. Olfactory cues can also play a role in attraction. For example, Nepenthes rafflesiana uses flower-scent mimicry to attract insects to its pitchers.

Capture

The rim of the pitcher (peristome) is slippery when moistened by condensation or nectar, causing insects to fall into the trap. The walls of the pitfall may be covered with waxy scales, protruding aldehyde crystals, cuticular folds, downward-pointing hairs, or lunate cells derived from guard cells, all of which help prevent escape. The small bodies of liquid contained within the pitcher traps are called phytotelmata. They drown the insect, whose body is gradually dissolved. This may occur by bacterial action (the bacteria being washed into the pitcher by rainfall) or by digestive enzymes secreted by the plant itself. Pitcher trap fluids vary widely in their viscoelasticity and acidity, which dictates which types of prey they can target. For example, increased viscoelasticity is associated with increased insect retention, helping to capture flying insects such as flies, whereas increased fluid acidity can decrease insect killing time, which can help capture crawling insects such as ants. Some pitcher plants contain mutualistic insect larvae, which feed on trapped prey and whose excreta the plant absorbs.

Digestion

Whatever the mechanism of digestion, the prey items are converted into a solution of amino acids, peptides, phosphates, ammonium and urea, from which the plant obtains its mineral nutrition (particularly nitrogen and phosphorus). Like all carnivorous plants, pitcher plants grow in locations where the soil is too poor in minerals and/or too acidic for most plants to survive. Pitcher plants supplement available nutrients and minerals (which plants normally obtain through their roots) with the constituents of their insect prey.

Feces-trapping symbiosis

Mature plants of Nepenthes lowii attract tree shrews (Tupaia montana), which feed on nectar that the plant produces but also defecate into the pitcher, providing nitrates and other nutrients. The plant and tree shrew have a symbiotic relationship. The rim of N. lowii is not slippery, so that tree shrews can easily get in and out, and it provides more nectar than other pitcher plants. The shape of the pitcher rim and the position of the nectar ensure that the animal's hindquarters are over the rim while it feeds. Nepenthes rafflesiana var. elongata has a similar relationship with Hardwicke's woolly bats (Kerivoula hardwickii). The bats roost inside the pitchers, and the plants derive much of their foliar nitrogen from the bats' feces. Compared to other varieties of Nepenthes rafflesiana that do not exhibit this form of mutualism, N. rafflesiana var. elongata has elongated pitchers that can accommodate both single bats and mother-juvenile pairs. As well as its elongated shape, N. rafflesiana var. elongata has a reduced volume of pitcher fluid compared to other varieties, leaving more space to accommodate the bats.

Evolution of the form

It is widely assumed that pitfall traps evolved by epiascidiation (infolding of the leaf, with the adaxial or upper surface becoming the inside of the pitcher), with selection pressure favouring more deeply cupped leaves over evolutionary time.
The pitcher trap evolved independently in three eudicot lineages and one monocot lineage, representing a case of convergent evolution. Some pitcher plant families (such as Nepenthaceae) are placed within clades consisting mostly of flypaper traps, indicating that some pitchers may have evolved from the common ancestors of today's flypaper traps by loss of mucilage.
Oxyaenidae
Oxyaenidae ("sharp hyenas") is a family of extinct carnivorous placental mammals. Traditionally classified in order Creodonta, this group is now classified in its own order Oxyaenodonta ("sharp tooth hyenas") within clade Pan-Carnivora in mirorder Ferae. The group contains four subfamilies comprising fourteen genera. Oxyaenids were the first to appear during the late Paleocene in North America, while smaller radiations of oxyaenids in Europe and Asia occurred during the Eocene. Etymology The name of order Oxyaenodonta comes , name of hyena genus Hyaena and . The name of family Oxyaenidae comes , name of hyena genus Hyaena and taxonomic suffix "-idae". Description They were superficially cat-like mammals that walked on flat feet, in contrast to modern cats, which walk and run on their toes. Anatomically, characteristic features include a short, broad skull, deep jaws, and teeth designed for crushing rather than shearing, as in the hyaenodonts or modern cats. Oxyaenids were specialized carnivores that preyed on other terrestrial vertebrates, eggs and insects. They were capable of climbing trees, which is suggested by fossil evidence of their paws. Classification and phylogeny Taxonomy Order: †Oxyaenodonta Family: †Oxyaenidae Subfamily: †Machaeroidinae Genus: †Apataelurus †Apataelurus kayi †Apataelurus pishigouensis Genus: †Diegoaelurus Diegoaelurus vanvalkenburghae Genus: †Isphanatherium Isphanatherium ferganensis Genus: †Machaeroides †Machaeroides eothen †Machaeroides simpsoni Subfamily: †Oxyaeninae Genus: †Argillotherium †Argillotherium toliapicum Genus: †Dipsalidictis (paraphyletic genus) †Dipsalidictis aequidens †Dipsalidictis krausei †Dipsalidictis platypus †Dipsalidictis transiens Genus: †Malfelis †Malfelis badwaterensis Genus: †Oxyaena †Oxyaena forcipata †Oxyaena gulo †Oxyaena intermedia †Oxyaena lupina †Oxyaena pardalis †Oxyaena simpsoni †Oxyaena woutersi Genus: †Patriofelis †Patriofelis ferox †Patriofelis ulta Genus: †Protopsalis †Protopsalis tigrinus Genus: †Sarkastodon †Sarkastodon henanensis †Sarkastodon mongoliensis Subfamily: †Palaeonictinae Genus: †Ambloctonus †Ambloctonus major †Ambloctonus priscus †Ambloctonus sinosus Genus: †Dipsalodon (paraphyletic genus) †Dipsalodon churchillorum †Dipsalodon matthewi Genus: †Palaeonictis †Palaeonictis gigantea †Palaeonictis occidentalis †Palaeonictis peloria †Palaeonictis wingi Subfamily: †Tytthaeninae Genus: †Tytthaena †Tytthaena lichna †Tytthaena parrisi Phylogeny Cladogram according to Gunnel in 1991:
Butternut squash
Butternut squash (Cucurbita moschata), known in Australia and New Zealand as butternut pumpkin or gramma, is a type of winter squash that grows on a vine. It has a sweet, nutty taste similar to that of a pumpkin. It has tan-yellow skin and orange fleshy pulp with a compartment of seeds in the blossom end. When ripening, the flesh turns increasingly deep orange due to its rich content of beta-carotene, a provitamin A compound. Although botanically a fruit (specifically, a berry), butternut squash is used culinarily as a vegetable that can be roasted, sautéed, puréed for soups such as squash soup, or mashed to be used in casseroles, breads, muffins, and pies. It is part of the same squash family as Ponca, Waltham, pumpkin, and calabaza.

History

The word squash comes from the Narragansett word askutasquash, meaning "eaten raw or uncooked", and butternut from the squash's nutty flavor. Although American native peoples may have eaten some forms of squash without cooking, today most squash is eaten cooked. Before the arrival of Europeans, C. moschata had been carried over all parts of North America where it could be grown, but butternut squash is a modern variety of winter squash. It was developed by Charles Leggett of Stow, Massachusetts, who, in 1944, crossed pumpkin and gooseneck squash varieties.

Nutrition

Baked butternut squash is 88% water, 11% carbohydrates, 1% protein, and contains negligible fat. In a reference amount, it supplies food energy and is a rich source (20% or more of the Daily Value, DV) of vitamin A (70% DV), with moderate amounts of vitamin C (18% DV) and vitamin B6 (10% DV).

Uses

Storage

The optimal eating period of butternut squash is 3–6 months after harvest. They are best kept at around 50 percent humidity. For the best flavor, butternut squash should be left to cure for two months after harvest.

Culinary

One of the most common ways to prepare butternut squash is baking. Once cooked, it can be eaten in a variety of ways. The fruit is typically prepared by removing the skin, stalk, and seeds, which are not usually eaten or cooked. However, the seeds are edible, either raw or roasted, and the skin is also edible and softens when roasted. The seeds can even be roasted and pressed into an oil, which can be used for roasting, cooking, on popcorn, or as a salad dressing. In Australia, it is regarded as a pumpkin and is used interchangeably with other types of pumpkin. In South Africa, butternut squash is commonly used and often prepared as a soup or grilled whole. Grilled butternut is typically seasoned with nutmeg and cinnamon or stuffed (e.g., with spinach and feta) before being wrapped in foil and grilled. Grilled butternut is often served as a side dish to braais (barbecues) and the soup as a starter dish. Butternuts were introduced commercially in New Zealand in the 1950s by brothers Arthur and David Harrison, nursery workers and Otaki market gardeners.
Leafhopper
Leafhopper is the common name for any species from the family Cicadellidae. These minute insects, colloquially known as hoppers, are plant feeders that suck plant sap from grass, shrubs, or trees. Their hind legs are modified for jumping and are covered with hairs that facilitate the spreading of a secretion over their bodies that acts as a water repellent and carrier of pheromones. They undergo a partial metamorphosis and have various host associations, varying from very generalized to very specific. Some species have a cosmopolitan distribution, or occur throughout the temperate and tropical regions. Some are pests or vectors of plant viruses and phytoplasmas. The family is distributed all over the world, and constitutes the second-largest hemipteran family, with at least 20,000 described species. They belong to a lineage traditionally treated as the infraorder Cicadomorpha in the suborder Auchenorrhyncha. This lineage has sometimes been placed in its own suborder (Clypeorrhyncha), but more recent research retains it within Auchenorrhyncha. Members of the tribe Proconiini of the subfamily Cicadellinae are commonly known as sharpshooters.

Description and ecology

The Cicadellidae combine the following features:
The thickened part of the antennae is very short and ends with a bristle (arista).
Two ocelli (simple eyes) are present on the top or front of the head.
The tarsi are made of three segments.
The front femora have, at most, weak spines.
The hind tibiae have one or more distinct keels, with a row of movable spines on each, sometimes on enlarged bases.
The bases of the middle legs are close together where they originate under the thorax.
The front wings are not particularly thickened.

An additional and unique character of leafhoppers is the production of brochosomes, which are thought to protect the animals, and particularly their egg clutches, from predation as well as pathogens. Like other Exopterygota, the leafhoppers undergo direct development from nymph to adult without a pupal stage. While many leafhoppers are drab little insects, as is typical for the Membracoidea, the adults and nymphs of some species are quite colorful. Some – in particular the Stegelytrinae – have largely translucent wings and resemble flies at a casual glance.

Leafhoppers have piercing-sucking mouthparts, enabling them to feed on plant sap. A leafhopper's diet commonly consists of sap from a wide and diverse range of plants, though some species are more host-specific. Leafhoppers are mainly herbivores, but some are known to eat smaller insects, such as aphids, on occasion. A few species are known to mud-puddle, but it seems females rarely engage in such behavior. Many species are also known to opportunistically pierce human skin and draw blood, but the function of such behaviour is unclear.

Leafhoppers are micropredators that can act as vectors transmitting plant pathogens, such as viruses, phytoplasmas and bacteria. Cicadellidae species that are significant agricultural pests include the beet leafhopper (Circulifer tenellus), the maize leafhopper (Cicadulina mbila), the potato leafhopper (Empoasca fabae), the two-spotted leafhopper (Sophonia rufofascia), the blue-green sharpshooter (Graphocephala atropunctata), the glassy-winged sharpshooter (Homalodisca vitripennis), the common brown leafhopper (Orosius orientalis), the rice green leafhoppers (Nephotettix spp.), and the white apple leafhopper (Typhlocyba pomaria).
The beet leafhopper (Circulifer tenellus) can transmit the beet curly top virus to various members of the nightshade family, including tobacco, tomato, and eggplant, and is a serious vector of the disease in chili pepper in the Southwestern United States. In some cases, the plant pathogens distributed by leafhoppers are also pathogens of the insects themselves, and can replicate within the leafhoppers' salivary glands. Leafhoppers are also susceptible to various insect pathogens, including Dicistroviridae viruses, bacteria and fungi; numerous parasitoids attack the eggs, and the adults provide food for small insectivores. Some species, such as the Australian Kahaono montana, even build silk nests under the leaves of the trees they live in to protect themselves from predators.

Systematics

In the now-obsolete classification that was used throughout much of the 20th century, the leafhoppers were part of the Homoptera, a paraphyletic assemblage uniting the basal lineages of Hemiptera and ranked as a suborder. The splitting of the Homoptera is likely to be repeated for the Auchenorrhyncha for similar reasons, as the Auchenorrhyncha simply seem to group the moderately advanced Hemiptera, regardless of the fact that the highly apomorphic Coleorrhyncha and Heteroptera (typical bugs) evolved from auchenorrhynchans. Hence, a recent trend treats the most advanced hemipterans as three or four lineages, namely Archaeorrhyncha (Fulgoromorpha if included in Auchenorrhyncha), Coleorrhyncha and Heteroptera (sometimes united as Prosorrhyncha), and Clypeorrhyncha. Within the latter, the three traditional superfamilies – Cercopoidea (froghoppers and spittlebugs), Cicadoidea (cicadas) and Membracoidea – appear to be monophyletic. The leafhoppers are the most basal living lineage of Membracoidea, which otherwise includes the families Aetalionidae (aetalionid treehoppers), Membracidae (typical treehoppers and thorn bugs), Melizoderidae, and Myerslopiidae.

Subfamilies

The leafhoppers are divided into 25 subfamilies, listed here alphabetically, as too little is known about the family's internal phylogeny:
Aphrodinae
Bathysmatophorinae
Cicadellinae
Coelidiinae
Deltocephalinae
Errhomeninae
Euacanthellinae
Eurymelinae
Evacanthinae
Hylicinae
Iassinae
Jascopinae
Ledrinae
Megophthalminae
Mileewinae
Nastlopiinae
Neobalinae
Neocoelidiinae
Nioniinae
Phereurhininae
Portaninae
Signoretiinae
Tartessinae
Typhlocybinae
Ulopinae

Further information: Agalliopsis, Utecha trivia
Ice giant
An ice giant is a giant planet composed mainly of elements heavier than hydrogen and helium, such as oxygen, carbon, nitrogen, and sulfur. There are two ice giants in the Solar System: Uranus and Neptune. In astrophysics and planetary science the term "ice" refers to volatile chemical compounds with freezing points above about 100 K, such as water, ammonia, or methane, with freezing points of 273 K (0 °C), 195 K (−78 °C), and 91 K (−182 °C), respectively (see Volatiles). In the 1990s, it was determined that Uranus and Neptune are a distinct class of giant planet, separate from the other giant planets, Jupiter and Saturn, which are gas giants predominantly composed of hydrogen and helium. Neptune and Uranus are now referred to as ice giants. Lacking well-defined solid surfaces, they are primarily composed of gases and liquids. Their constituent compounds were solids when they were primarily incorporated into the planets during their formation, either directly in the form of ice or trapped in water ice. Today, very little of the water in Uranus and Neptune remains in the form of ice. Instead, water primarily exists as a supercritical fluid at the temperatures and pressures within them. Uranus and Neptune consist of only about 20% hydrogen and helium by mass, compared to the Solar System's gas giants, Jupiter and Saturn, which are more than 90% hydrogen and helium by mass.

Terminology

In 1952, science fiction writer James Blish coined the term gas giant, and it was used to refer to the large non-terrestrial planets of the Solar System. However, since the late 1940s the compositions of Uranus and Neptune have been understood to be significantly different from those of Jupiter and Saturn. They are primarily composed of elements heavier than hydrogen and helium, forming a separate type of giant planet altogether. Because during their formation Uranus and Neptune incorporated their material as either ice or gas trapped in water ice, the term ice giant came into use. In the early 1970s, the terminology became popular in the science fiction community, e.g., Bova (1971), but the earliest scientific usage of the terminology was likely by Dunne & Burgess (1978) in a NASA report.

Formation

Modelling the formation of the terrestrial and gas giants is relatively straightforward and uncontroversial. The terrestrial planets of the Solar System are widely understood to have formed through collisional accumulation of planetesimals within the protoplanetary disk. The gas giants—Jupiter, Saturn, and their extrasolar counterpart planets—are thought to have formed solid cores of around 10 Earth masses through the same process, while accreting gaseous envelopes from the surrounding solar nebula over the course of a few to several million years (Ma), although alternative models of core formation based on pebble accretion have recently been proposed. Some extrasolar giant planets may instead have formed via gravitational disk instabilities.

The formation of Uranus and Neptune through a similar process of core accretion is far more problematic. The escape velocity for the small protoplanets about 20 astronomical units (AU) from the center of the Solar System would have been comparable to their relative velocities. Such bodies crossing the orbits of Saturn or Jupiter would have been liable to be sent on hyperbolic trajectories, ejecting them from the system. Bodies being swept up by the gas giants would also have been likely to simply be accreted into the larger planets or thrown into cometary orbits.
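A rough order-of-magnitude check of that comparison, sketched in Python (the Earth-mass protoplanet is an illustrative assumption, not a value from the text):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

def circular_orbit_speed_km_s(r_au: float) -> float:
    """Circular orbital speed around the Sun at radius r (km/s)."""
    return math.sqrt(G * M_SUN / (r_au * AU)) / 1e3

def surface_escape_speed_km_s(mass_kg: float, radius_m: float) -> float:
    """Escape speed from a body's own surface (km/s)."""
    return math.sqrt(2 * G * mass_kg / radius_m) / 1e3

print(f"Orbital speed at 20 AU: {circular_orbit_speed_km_s(20):.1f} km/s")  # ~6.7
print(f"Earth-mass protoplanet escape speed: "
      f"{surface_escape_speed_km_s(5.97e24, 6.37e6):.1f} km/s")             # ~11.2
# Both are a few km/s, so close encounters tend to scatter such bodies
# onto extreme orbits rather than letting them accrete gently.
```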
Despite the trouble modelling their formation, many ice giant candidates have been observed orbiting other stars since 2004. This indicates that they may be common in the Milky Way.

Migration

Considering the orbital challenges that protoplanets 20 AU or more from the centre of the Solar System would experience, a simple solution is that the ice giants formed between the orbits of Jupiter and Saturn before being gravitationally scattered outward to their now more distant orbits.

Disk instability

Gravitational instability of the protoplanetary disk could also produce several gas giant protoplanets out to distances of up to 30 AU. Regions of slightly higher density in the disk could lead to the formation of clumps that eventually collapse to planetary densities. A disk with even marginal gravitational instability could yield protoplanets between 10 and 30 AU in around a thousand years (ka). This is much shorter than the 100,000 to 1,000,000 years required to produce protoplanets through core accretion of the cloud, and could make the mechanism viable in even the shortest-lived disks, which exist for only a few million years.

A problem with this model is determining what kept the disk stable before the instability. There are several possible mechanisms allowing gravitational instability to occur during disk evolution. A close encounter with another protostar could provide a gravitational kick to an otherwise stable disk. A disk evolving magnetically is likely to have magnetic dead zones, due to varying degrees of ionization, where mass moved by magnetic forces could pile up, eventually becoming marginally gravitationally unstable. A protoplanetary disk may simply accrete matter slowly, causing relatively short periods of marginal gravitational instability and bursts of mass collection, followed by periods where the surface density drops below what is required to sustain the instability.

Photoevaporation

Observations of photoevaporation of protoplanetary disks in the Orion Trapezium Cluster by extreme ultraviolet (EUV) radiation emitted by θ1 Orionis C suggest another possible mechanism for the formation of ice giants. Multiple-Jupiter-mass gas-giant protoplanets could have rapidly formed due to disk instability before having most of their hydrogen envelopes stripped off by intense EUV radiation from a nearby massive star. In the Carina Nebula, EUV fluxes are approximately 100 times higher than in the Orion Nebula's Trapezium. Protoplanetary disks are present in both nebulae, and the higher EUV fluxes there make this an even more likely mechanism for ice-giant formation: the stronger EUV would increase the removal of the gas envelopes from protoplanets before they could collapse sufficiently to resist further loss.

Characteristics

The ice giants represent one of two fundamentally different categories of giant planets present in the Solar System, the other group being the more familiar gas giants, which are composed of more than 90% hydrogen and helium (by mass). Their hydrogen is thought to extend all the way down to their small rocky cores, where molecular hydrogen transitions to metallic hydrogen under the extreme pressures of hundreds of gigapascals (GPa). The ice giants, by contrast, are primarily composed of heavier elements. Based on the abundance of elements in the universe, oxygen, carbon, nitrogen, and sulfur are most likely. Although the ice giants also have hydrogen envelopes, these are much smaller, accounting for less than 20% of their mass.
Their hydrogen also never reaches the depths necessary for the pressure to create metallic hydrogen. These envelopes nevertheless limit observation of the ice giants' interiors, and thereby the information on their composition and evolution. Although Uranus and Neptune are referred to as ice giant planets, it is thought that there is a supercritical water-ammonia ocean beneath their clouds, which accounts for about two-thirds of their total mass.

Atmosphere and weather

The gaseous outer layers of the ice giants have several similarities to those of the gas giants. These include long-lived, high-speed equatorial winds, polar vortices, large-scale circulation patterns, and complex chemical processes driven by ultraviolet radiation from above and mixing with the lower atmosphere. Studying the ice giants' atmospheric patterns also gives insights into atmospheric physics. Their compositions promote different chemical processes, and they receive far less sunlight in their distant orbits than any other planets in the Solar System (increasing the relevance of internal heating to weather patterns).

The largest visible feature on Neptune is the recurring Great Dark Spot. It forms and dissipates every few years, as opposed to the similarly sized Great Red Spot of Jupiter, which has persisted for centuries. Of all known giant planets in the Solar System, Neptune emits the most internal heat per unit of absorbed sunlight, a ratio of approximately 2.6. Saturn, the next-highest emitter, has a ratio of only about 1.8. Uranus emits the least heat, one-tenth as much as Neptune. It is suspected that this may be related to its extreme 98° axial tilt, which causes its seasonal patterns to be very different from those of any other planet in the Solar System.

There are still no complete models explaining the atmospheric features observed in the ice giants. Understanding these features will help elucidate how the atmospheres of giant planets in general function. Consequently, such insights could help scientists better predict the atmospheric structure and behaviour of giant exoplanets discovered to be very close to their host stars (pegasean planets) and exoplanets with masses and radii between those of the giant and terrestrial planets found in the Solar System.

Interior

Because of their large sizes and low thermal conductivities, the planetary interior pressures range up to several hundred GPa and temperatures reach several thousand kelvins (K). In March 2012, it was found that the compressibility of water used in ice-giant models could be off by one-third. This value is important for modeling ice giants and has a ripple effect on understanding them.

Magnetic fields

The magnetic fields of Uranus and Neptune are both unusually displaced and tilted. Their field strengths are intermediate between those of the gas giants and those of the terrestrial planets, being 50 and 25 times that of Earth's, respectively. The equatorial magnetic field strengths of Uranus and Neptune are respectively 75 percent and 45 percent of Earth's 0.305 gauss. Their magnetic fields are believed to originate in an ionized convecting fluid-ice mantle.
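Those two sets of figures reconcile with a line of arithmetic; a minimal sketch using only the values quoted above:

```python
# Earth's equatorial field strength, as quoted above (gauss).
EARTH_EQUATORIAL_FIELD_G = 0.305

# Uranus and Neptune equatorial fields as fractions of Earth's:
for planet, fraction in {"Uranus": 0.75, "Neptune": 0.45}.items():
    print(f"{planet}: {fraction * EARTH_EQUATORIAL_FIELD_G:.3f} G")
# Uranus ~0.229 G, Neptune ~0.137 G: weaker at the equator than Earth's
# field even though their total magnetic moments are 50 and 25 times
# larger, because a dipole's surface field scales as moment / radius^3
# and both planets are roughly four times Earth's radius.
```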
Spacecraft visitation

Past

Voyager 2 (Uranus and Neptune)

Proposals

MUSE (proposed in 2012; considered by NASA in 2014 and ESA in 2016)
NASA Uranus orbiter and probe (proposed in 2011; considered by NASA in 2017)
OCEANUS (proposed in 2017)
ODINUS (proposed in 2013)
Outer Solar System (proposed in 2012)
Triton Hopper (proposed in 2015; under consideration by NASA as of 2018)
Uranus Pathfinder (proposed in 2010)
Neptune Odyssey (proposed in 2022)
Alvarez hypothesis
The Alvarez hypothesis posits that the mass extinction of the non-avian dinosaurs and many other living things during the Cretaceous–Paleogene extinction event was caused by the impact of a large asteroid on the Earth. Prior to 2013, it was commonly cited as having happened about 65 million years ago, but Renne and colleagues (2013) gave an updated value of 66 million years. Evidence indicates that the asteroid fell in the Yucatán Peninsula, at Chicxulub, Mexico. The hypothesis is named after the father-and-son team of scientists Luis and Walter Alvarez, who first suggested it in 1980. Shortly afterwards, and independently, the same was suggested by Dutch paleontologist Jan Smit.

In March 2010, an international panel of scientists endorsed the asteroid hypothesis, specifically the Chicxulub impact, as being the cause of the extinction. A team of 41 scientists reviewed 20 years of scientific literature and in so doing also ruled out other theories such as massive volcanism. They determined that a space rock in diameter hurtled into Earth at Chicxulub. For comparison, the Martian moon Phobos has a diameter of , and Mount Everest is just under . The collision would have released the same energy as , over a billion times the energy of the atomic bombs dropped on Hiroshima and Nagasaki.

A 2016 drilling project into the peak ring of the crater strongly supported the hypothesis and confirmed various matters that had been unclear until that point. These included the fact that the peak ring comprised granite (a rock found deep within the Earth) rather than typical sea-floor rock, which had been shocked, melted, and ejected to the surface in minutes, as well as evidence of colossal seawater movement directly afterwards from sand deposits. Crucially, the cores also showed a near-complete absence of gypsum, a sulfate-containing rock, which would have been vaporized and dispersed as an aerosol into the atmosphere, confirming a probable link between the impact and longer-term global effects on the climate and food chain.

History

In 1980, a team of researchers led by Nobel prize-winning physicist Luis Alvarez, his son, geologist Walter Alvarez, and chemists Frank Asaro and Helen Vaughn Michel discovered that sedimentary layers found all over the world at the Cretaceous–Paleogene boundary (K–Pg boundary, formerly called the Cretaceous–Tertiary or K–T boundary) contain a concentration of iridium hundreds of times greater than normal. Previously, in a 1953 publication, geologists Allan O. Kelly and Frank Dachille had analyzed global geological evidence suggesting that one or more giant asteroids impacted the Earth, causing an angular shift in its axis, global floods, firestorms, atmospheric occlusion, and the extinction of the dinosaurs. There were other earlier speculations on the possibility of an impact event, but without strong confirming evidence.

Evidence

The location of the impact was unknown when the Alvarez team developed their hypothesis, but later scientists discovered the Chicxulub Crater in the Yucatán Peninsula, now considered the likely impact site. Paul Renne of the Berkeley Geochronology Center has reported that the date of the asteroid event is 66,038,000 years ago, plus or minus 11,000 years, based on Ar-Ar dating. He further posits that the mass extinction of the dinosaurs occurred within 33,000 years of this date.
In April 2019, a paper published in PNAS described evidence from a fossil site in North Dakota that the authors say provides a "postimpact snapshot" of events after the asteroid collision, "including ejecta accretion and faunal mass death". The team found that the tektites that had peppered the area were present in amber found on the site and were also embedded in the gills of about 50 percent of the fossil fish. They were also able to find traces of iridium. The authors – who include Walter Alvarez – postulate that the shock of the impact, equivalent to an earthquake of magnitude 10 or 11, may have led to seiches, oscillating movements of water in lakes, bays, or gulfs, that would have reached the site in North Dakota within minutes or hours of the impact. This would have led to the rapid burial of organisms under a thick layer of sediment. Coauthor David Burnham of the University of Kansas was quoted as saying, "They're not crushed, it's like an avalanche that collapses almost like a liquid, then sets like concrete. They were killed pretty suddenly because of the violence of that water. We have one fish that hit a tree and was broken in half."

According to a high-resolution study of fossilized fish bones published in 2022, the Cretaceous–Paleogene asteroid which caused the mass extinction impacted during the Northern Hemisphere spring.

Criticism

Although the 2010 paper published in Science declaring that the extinction of the dinosaurs was caused by Chicxulub was co-authored by 41 scientists, dozens of other scientists have challenged both the paper's methods and its conclusions. A leading critic of the Alvarez hypothesis is Gerta Keller, who has focused on Deccan Traps volcanism as a likely cause of a more gradual extinction. Although the Alvarez hypothesis has overwhelming support from the scientific community, Keller has continued to advocate for research into alternative theories. The Deccan Traps theory was first proposed in 1978 by geologist Dewey McLean but quickly lost traction. The Deccan Traps are an area of volcanic flood basalts in western India spanning roughly 1.3 million square kilometers that were created by massive volcanic activity during the same period in which the Chicxulub impact occurred. Prior to Keller's research, the timeframe of the Deccan Traps' eruptions had a significantly large range of error, making it difficult to draw strong conclusions about their connection to the K–Pg extinction. In a 2014 report, Keller and her colleagues used uranium-lead zircon geochronology to identify the eruptions more accurately as occurring both within a span of one million years and around 250,000 years prior to the K–Pg boundary. Keller additionally determined that ocean temperatures rose seven to nine degrees Celsius during the most significant period of the Deccan eruptions. Along with ocean acidification, ozone reduction, acid rain, and a release of harmful gases, she asserts that these conditions were sufficient to have initiated the mass extinction.

Keller has specifically rejected the Alvarez hypothesis, pointing to evidence she gathered from the Chicxulub crater in 2009 revealing that twenty inches of sediment separate the impact from the extinction. The finding suggests that the impact occurred 200,000 to 300,000 years before the K–Pg extinction, a gap far too large for the two to be correlated.
This, however, contrasts with the range of 33,000 years determined by Paul Renne in 2015, as well as the more recent assertion that a tsunami generated by the impact created the unusual sediment layer. Keller additionally claims that the impact did not cause as much ecological damage as is widely believed, and she determined that many foraminifera species began to decline well before the impact event occurred. Her 2009 project revealed that the 52 species found in the sediment prior to the impact were still present in the sediment following it, suggesting that the impact caused minimal extinction.

A more recent theory combining both Deccan volcanism and the impact hypothesis has been developed by teams at UC Berkeley led by Paul Renne and Mark Richards. This theory proposes that the impact itself instigated the most intense period of Deccan eruptions, both of which had devastating effects contributing to the K–Pg extinction. Renne and Richards calculated that the Chicxulub impact was capable of producing seismic activity strong enough to initiate volcanic eruptions. They determined that the largest period of Deccan volcanic eruptions, the Wai subgroup, occurred 50,000 to 100,000 years after the Chicxulub impact, which is consistent with theoretical predictions modeling the length of time after which eruptions should occur. The group also confirmed that the length of time between the extinction and the subsequent biological recovery was consistent with the length of Deccan volcanic activity, proposing that the eruptions delayed the recovery of the marine ecosystems destroyed by the impact.

Debate over the cause of the K–Pg extinction has proven extremely controversial among researchers, and its persistent intensity has earned it the moniker of the "dinosaur wars". Criticism has been unusually harsh, targeting not only research findings but the credibility and integrity of the scientists themselves. Verbal accusations have been thrown both by and toward many prominent researchers, including Gerta Keller and Luis Alvarez, discouraging civil debate and in some cases threatening careers. Walter Alvarez is an active member of the UC Berkeley team researching the connection between Deccan volcanism and the Chicxulub impact.

2016 Chicxulub crater drilling project

In 2016, a scientific drilling project drilled deep into the peak ring of the Chicxulub impact crater to obtain rock core samples from the impact itself. The discoveries were widely seen as confirming current theories related to both the crater impact and its effects. They confirmed that the rock composing the peak ring had been subjected to immense pressures and forces, and had been melted by immense heat and shocked from its usual state into its present form in just minutes. The fact that the peak ring was made of granite was also significant: granite is not a rock found in sea-floor deposits; it originates much deeper in the Earth and had been ejected to the surface by the immense pressures of impact. Gypsum, a sulfate-containing rock that is usually present in the shallow seabed of the region, had been almost entirely removed, and must therefore have been almost entirely vaporized and entered the atmosphere. Finally, the event was immediately followed by a huge megatsunami (a massive movement of sea waters), sufficient to lay down the largest known layer of sand separated by grain size directly above the peak ring.
These findings strongly support the hypothesis that the impactor was large enough to create a 120-mile peak ring, to melt, shock, and eject basement granite from the mid-crust deep within the Earth, to create colossal water movements, and to eject an immense quantity of vaporized rock and sulfates into the atmosphere, where they would have persisted for a long time. This global dispersal of dust and sulfates would have had a sudden and catastrophic effect on the climate worldwide, causing large temperature drops and devastating the food chain.
Dental extraction
A dental extraction (also referred to as tooth extraction, exodontia, exodontics, or informally, tooth pulling) is the removal of teeth from the dental alveolus (socket) in the alveolar bone. Extractions are performed for a wide variety of reasons, but most commonly to remove teeth which have become unrestorable through tooth decay, periodontal disease, or dental trauma, especially when they are associated with toothache. Sometimes impacted wisdom teeth (wisdom teeth that are stuck and unable to grow normally into the mouth) cause recurrent infections of the gum (pericoronitis) and may be removed when other conservative treatments have failed (cleaning, antibiotics and operculectomy). In orthodontics, if the teeth are crowded, healthy teeth may be extracted (often bicuspids) to create space so the rest of the teeth can be straightened.

Procedure

Extractions can be categorized as non-surgical (simple) or surgical, depending on the type of tooth to be removed and other factors.

Assessment and special investigations

A comprehensive history should be taken to establish the pain history of the tooth, the patient's medical history, and any history of previous difficult extractions. The tooth should be assessed clinically, i.e. checked visually by the dentist. Pre-extraction radiographs are not always necessary but are often taken to confirm the diagnosis and hence the appropriate treatment plan. Radiographs also help in visualising the shape and size of the roots, which is useful in planning the extraction. All this information helps the dentist foresee any difficulties and prepare appropriately.

Obtaining consent from the patient

In order to obtain permission from the patient for extraction of a tooth, the dentist should explain that other treatment options are available, what is involved in the dental extraction procedure, and the potential risks and benefits of the procedure. The process of gaining consent should be documented in the clinical notes.

Giving local anaesthetic

Before extracting a tooth, the dentist delivers local anaesthetic to ensure the tooth and surrounding tissues are numb before the extraction starts. There are several techniques for achieving numbness of the tooth, including:
infiltration – an injection containing local anaesthetic is delivered into the gum near the root tip of the tooth to be extracted. This allows the local anaesthetic to penetrate through the bone, eventually reaching the nerve bundle of the tooth to be extracted.
nerve block – an injection containing local anaesthetic is delivered to an earlier branch of a nerve. For example, the inferior alveolar nerve block can be used to anaesthetise all the lower teeth.
The two most commonly used local anaesthetics in the UK are lidocaine and articaine. Prior to injection, a topical anaesthetic gel or cream, such as lidocaine or benzocaine, can be applied to the gum to numb the site of the injection up to a few millimetres deep. This should reduce the discomfort felt during the injection and thus help to reduce patient anxiety.

Removal of tooth

During extraction, multiple instruments are used to aid and ease the removal of the tooth whilst minimally traumatising the tissues to allow quicker healing. Extraction forceps are commonly used to remove teeth. Different shaped forceps are available depending on the type of tooth requiring removal, which side of the mouth (left or right) it is on, and whether it is an upper or lower tooth.
The beaks of the forceps must grip onto the root of the tooth securely before pressure is applied along the long axis of the tooth towards the root. Different movements of the forceps can be employed to remove teeth. Generally, while maintaining downwards pressure, attempts are made to move the tooth towards the cheek side (buccal) and then in the opposite direction (palatal or lingual) to loosen it from its socket. For single, conical-rooted teeth, such as the incisors, rotatory movements are also used. A "figure of eight" movement can be used to extract lower molars.

In terms of operator positioning, the patient is placed more supine when extracting an upper tooth and more upright when extracting a lower, to allow direct vision for the operator during the procedure. A right-handed operator will stand to the front of the patient and to their right when removing any upper teeth or lower left teeth, but will stand behind the patient and to the right when extracting a lower right tooth.

Dental elevators can also be used to aid removal of teeth. Various types with different shapes are available. Their working ends are designed to engage the space between the tooth and the bone of the socket; rotatory movements are then made to dislodge the tooth from the socket. Another similar-looking but sharper instrument is the luxator; this instrument can be used gently and with great care to cut the ligament between the tooth and its bony socket (the periodontal ligament).

Achieving haemostasis

Biting down on a piece of sterile gauze over the socket provides firm pressure on the wound. Normally this is sufficient to stop any bleeding and will promote blood clot formation at the base of the socket. The patient should also avoid eating and drinking hot food in the first 24 hours. Drinking through a straw is also discouraged, because the negative pressure it produces can dislodge the newly formed clot from the socket. The source of any bleeding can be either the soft tissues (gingiva and mucosa) or hard tissue (the bony socket). Bleeding from soft tissues can be controlled by several means, including suturing the wound (stitches) and/or using chemical agents such as tranexamic acid, ferric sulphate and silver nitrate. Bony bleeding can be arrested by using haemostatic gauze and bone wax. Other means of achieving haemostasis include electrocautery.

Reasons

Medical/Dental

Severe tooth decay or infection (acute or chronic alveolar abscess, such as a periapical abscess – a collection of infected material [pus] forming at the tip of the root of a tooth). Despite the reduction in the worldwide prevalence of dental caries, it is still the most common reason for extraction of (non-third molar) teeth, accounting for up to two thirds of extractions.
Severe gum disease, which may affect the supporting tissues and bone structures of teeth.
Treatment of symptomatic impacted wisdom teeth, e.g. those associated with pericoronitis, unrestorable caries or cysts.
Prophylactic removal of asymptomatic impacted wisdom teeth. Historically, many asymptomatic impacted third molars were removed; however, both American and British health authorities now provide guidance about the indications for third molar removal.
The American Public Health Association, for example, adopted a policy, Opposition to Prophylactic Removal of Third Molars (Wisdom Teeth), because of the large number of injuries resulting from unnecessary extractions.
Supernumerary teeth that are blocking other teeth from coming in.
Supplementary or malformed teeth.
Fractured teeth.
Teeth in the fracture line of the jaw bone.
Teeth which cannot be restored endodontically.
Prosthetics; teeth detrimental to the fit or appearance of dentures.
Head and neck radiation therapy, to treat and/or manage tumors, may require extraction of teeth, either before or after radiation treatments.
Lower cost, compared to other treatments.
Medically unnecessary extraction as a form of physical torture. It was once a common practice to remove the front teeth of institutionalized psychiatric patients who had a history of biting.

Orthodontic

In preparation for orthodontic treatment (braces), extractions are commonly required to create space for crowded teeth to be moved into. The premolar teeth are the most commonly extracted teeth for this purpose.

Aesthetics

Cosmetic: to remove teeth of poor appearance, unsuitable for restoration.

Types

Extractions are often categorized as "simple" or "surgical". Simple extractions are performed on teeth that are visible in the mouth, usually with the patient under local anaesthetic, and require only the use of instruments to elevate and/or grasp the visible portion of the tooth. Typically the tooth is lifted using an elevator and, using dental forceps, specific tooth movements are performed (e.g. rocking the tooth back and forth) to expand the tooth socket. Once the periodontal ligament is broken and the supporting alveolar bone has been adequately widened, the tooth can be removed. Typically, when teeth are removed with forceps, slow, steady pressure is applied with controlled force.

Surgical extractions involve the removal of teeth that cannot be easily accessed or removed by simple extraction, for example because they have broken under the gum or have not erupted fully, such as an impacted wisdom tooth. Surgical extractions almost always require an incision. In a surgical extraction the dentist may elevate the soft tissues covering the tooth and bone, and may also remove some of the overlying and/or surrounding jaw bone with a drill or, less commonly, an instrument called an osteotome. Frequently, the tooth may be split into multiple pieces to facilitate its removal.

Common risks after any extraction include pain, swelling, bleeding, bruising, infection, trismus (not being able to open the mouth as wide as normal) and dry socket. There are additional risks associated with the surgical extraction of wisdom teeth in particular: permanent or temporary damage to the inferior alveolar nerve and/or lingual nerve, causing permanent or temporary numbness, tingling or altered sensation in the lip, chin and/or tongue.

Surgical procedure

Incisions are made full thickness through the mucosa and periosteum to bone. In general, the flap is extended from one tooth behind the tooth concerned to one tooth in front, including the interdental papilla. An anterior relieving incision is made extending down into the sulcus; this flap design is called "two-sided". A "three-sided" flap includes an additional relieving incision posteriorly. The flap is raised using a periosteal elevator to expose the area of interest, and is held out of the way with an instrument such as a rake retractor.
A small gutter of bone is drilled away around the tooth to create space into which an application point for instruments can be achieved. It is important that copious amounts of saline are used to cool the bone during this process. The tooth concerned can then be removed using a combination of luxators, elevators and extraction forceps. Any sharp bone is smoothed off and the wound is irrigated with saline. The flap is repositioned and sutured in place.

Pre-extraction considerations

Anticoagulant/antiplatelet use

Anticoagulants are drugs that interfere with the clotting cascade. Antiplatelets are drugs that interfere with platelet aggregation. These drugs are prescribed in certain medical conditions/situations to reduce the risk of a thromboembolic event, and with this comes an increased risk of bleeding. Historically, the anticoagulant warfarin (belonging to the group of drugs called coumarins) and antiplatelets such as aspirin or clopidogrel were prescribed commonly in these circumstances. Whilst these drugs are still used, newer antiplatelet (e.g. ticagrelor) and anticoagulant (e.g. rivaroxaban, apixaban and dabigatran) drugs are being used more commonly. When considering dental treatment (including dental extractions), different guidance/precautions need to be followed depending on the drug prescribed and the individual patient's circumstances. The Scottish Dental Clinical Effectiveness Programme (SDCEP) provides guidance on this topic.

Antibiotic prescribing

Individual patient circumstances should be evaluated prior to the use of antibiotics to reduce the risks of certain post-extraction complications. There is evidence that the use of antibiotics before and/or after impacted wisdom tooth extraction reduces the risk of infection by 66% and lowers the incidence of dry socket by one third. For every 19 people who are treated with an antibiotic following impacted wisdom tooth removal, one infection is prevented. Use of antibiotics does not seem to have a direct effect on fever, swelling, or trismus seven days post-extraction. In the 2021 Cochrane review, 23 randomized, double-blinded, controlled trials were reviewed and, after considering the risk of bias in these studies, it was concluded that there is moderate overall evidence supporting the routine use of antibiotics in practice to reduce the risk of infection following third molar extraction. Reasonable concerns remain, however, regarding the possible adverse effects of indiscriminate antibiotic use in patients, and concerns about the development of antibiotic resistance argue against the use of prophylactic antibiotics in practice.
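The "one in 19" figure above is a number needed to treat (NNT), which follows directly from the absolute risk reduction. A small illustrative sketch in Python (the ~8% baseline infection risk is an assumed example value consistent with the quoted numbers, not a figure from the review):

```python
# NNT (number needed to treat) = 1 / ARR (absolute risk reduction).
def number_needed_to_treat(baseline_risk: float,
                           relative_risk_reduction: float) -> float:
    """Patients who must be treated for one of them to avoid the event."""
    absolute_risk_reduction = baseline_risk * relative_risk_reduction
    return 1 / absolute_risk_reduction

# Assumed ~8% baseline post-extraction infection risk with the 66%
# relative risk reduction quoted above:
print(f"NNT = {number_needed_to_treat(0.08, 0.66):.0f}")  # ~19
```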
Assessing risk of nerve damage
The inferior alveolar nerve (IAN), a branch of the trigeminal nerve (cranial nerve V), runs through the mandible (lower jaw) and supplies sensation to all the lower teeth, the lip and the chin. The lower teeth, and in particular the lower wisdom teeth, can therefore be in close proximity to this nerve. Damage to the inferior alveolar nerve is a risk of lower wisdom tooth removal (and other surgical procedures in the mandible). This means there is a risk of temporary or permanent numbness or altered sensation of the lip and/or chin on the side on which the surgery takes place. Therefore, in order to assess this risk and inform the patient, the position of the inferior alveolar nerve in relation to a lower wisdom tooth needs to be assessed radiographically prior to extraction.
The proximity of the root to the canal can be assessed radiographically, and several factors can indicate a high risk of nerve damage:
Darkening of the tooth root where it crosses the canal
Deviation of the canal
Narrowing of the roots
Loss of the lamina dura of the canal
Juxta-apical area: a radiolucency associated with the root of the tooth which is not caused by periapical infection
The lingual nerve can also be damaged (temporarily or permanently) during surgical procedures in the mandible, in particular lower wisdom tooth removal. This would present as temporary or permanent numbness, altered sensation or altered taste on the side of the tongue corresponding to the side of surgery.
Post-extraction healing
Immediate management
Immediately following the removal of a tooth, bleeding or oozing very commonly occurs. Pressure is applied by the patient biting on a gauze swab, and a thrombus (blood clot) forms in the socket (hemostatic response). Common hemostatic measures include local pressure application with gauze, and the use of oxidized cellulose (gelfoam) and fibrin sealant. Dental practitioners usually have absorbent gauze, hemostatic packing material (oxidized cellulose, collagen sponge), and a suture kit available. Sometimes 30 minutes of continuous pressure is required to fully arrest bleeding.
Complications
Talking, which moves the mandible and hence removes the pressure applied to the socket, instead of keeping constant pressure, is a very common reason that bleeding might not stop. This is likened to someone with a bleeding wound on their arm who, when instructed to apply pressure, instead holds the wound only intermittently. Coagulopathies (clotting disorders, e.g. hemophilia) are sometimes discovered for the first time if a person has had no other surgical procedure in their life, but this is rare. Sometimes the blood clot can be dislodged, triggering more bleeding and formation of a new blood clot, or leading to a dry socket (see complications). Some oral surgeons routinely scrape the walls of a socket to encourage bleeding in the belief that this will reduce the chance of dry socket, but there is no evidence that this practice works. The most serious post-extraction healing complication is slow or halted healing caused by the adverse effects of bisphosphonate use, which can cause osteochemonecrosis of the bone.
Healing process
The chance of further bleeding reduces as healing progresses, and is unlikely after 24 hours. The blood clot is covered by epithelial cells which proliferate from the gingival mucosa of the socket margins, taking about 10 days to fully cover the defect. In the clot, neutrophils and macrophages are involved as an inflammatory response takes place. The proliferative and synthesizing phase occurs next, characterized by proliferation of osteogenic cells from the adjacent bone marrow in the alveolar bone. Bone formation starts about 10 days after the tooth was extracted. After 10–12 weeks, the outline of the socket is no longer apparent on an X-ray image. Bone remodeling, as the alveolus adapts to the edentulous state, occurs in the longer term as the alveolar process slowly resorbs. In maxillary posterior teeth, the degree of pneumatization of the maxillary sinus may also increase as the antral floor remodels.
Post-extraction management
Post-operative instructions
Post-operative instructions following tooth extractions can be provided to encourage healing of the socket and prevent post-operative complications from arising.
The advice listed below is usually given verbally, and can be supplemented with instructions in written form. The combination of both methods of delivery has been found to reduce the severity of pain experienced by patients post-extraction and results in higher levels of patient satisfaction compared to verbal post-operative instructions alone.
General advice
The following can be recommended to encourage healing after a tooth extraction.
Avoid exploring the tooth socket with the tongue, a finger or a toothbrush, as this might disturb clot formation.
Avoid rinsing the mouth for 24 hours to prevent dislodging the blood clot. After 24 hours have passed, use warm salty mouthwashes, especially after meals, to keep the wound clean. Patients may be advised to use a plastic syringe with a curved tip to clean the sockets during the healing process, though evidence for the effectiveness of this practice is limited.
Avoid alcohol for at least 24 hours.
Try to relax for the remainder of the day, avoiding strenuous activities that will cause an increase in blood pressure, as this might disrupt clot formation.
For a few days, adopt a diet consisting of soft foods.
Pain management
Many drug therapies are available for pain management after third molar extractions, including NSAIDs (non-steroidal anti-inflammatory drugs), APAP (acetaminophen), and opioid formulations. Although each has its own pain-relieving efficacy, each also poses adverse effects. Ibuprofen-APAP combinations have been reported to have the greatest efficacy in pain relief and reducing inflammation, along with the fewest adverse effects. Taking either of these agents alone or in combination may be contraindicated in those who have certain medical conditions. For example, taking ibuprofen or any NSAID in conjunction with warfarin (a blood thinner) may not be appropriate. Also, prolonged use of ibuprofen or APAP carries gastrointestinal and cardiovascular risks. There is high-quality evidence that ibuprofen is superior to paracetamol in managing postoperative pain.
Socket preservation
Socket preservation or alveolar ridge preservation (ARP) is a procedure to reduce bone loss after tooth extraction in order to preserve the dental alveolus (tooth socket) in the alveolar bone. At the time of extraction, a platelet-rich fibrin (PRF) membrane containing bone-growth-enhancing elements is placed in the wound, or a graft material or scaffold is placed in the socket of the extracted tooth.
Post-extraction bleeding
Post-extraction bleeding is bleeding that occurs 8–12 hours after tooth extraction. It is normal for bleeding to occur for up to 30 minutes following the extraction, and it is not uncommon for the extraction site to discharge a small amount of blood, or for saliva to be blood-stained, for up to 8 hours. Should post-extraction bleeding occur, UK guidance recommends biting onto a piece of damp gauze for at least 20 minutes whilst sitting in an upright position. It is important that the gauze is damp, but not soaking, to avoid disrupting clot formation and consequently inducing a rebound bleed. If the socket continues to bleed, it is recommended to repeat the process with a fresh piece of damp gauze for another 20 minutes. Should both attempts fail to stem the bleed, it is advised to seek professional advice.
Factors
Various factors contribute to post-extraction bleeding.
Local factors
Laceration of blood vessels
Osseous bleeding from nutrient canals/central vessels
Inflammation
Infection
Traumatic extraction
Failure of patient to follow post-extraction instructions
Systemic factors
Platelet problems
Coagulation disorders/excessive fibrinolysis
Inherited/medication-induced problems
Type of bleeding
1. Primary prolonged bleeding
This type of bleeding occurs during or immediately after extraction, because true haemostasis has not been achieved. It is usually controlled by conventional techniques, such as applying pressure packs or haemostatic agents to the wound.
2. Reactionary bleeding
This type of bleeding starts 2 to 3 hours after tooth extraction, as a result of the cessation of vasoconstriction. Systemic intervention might be required.
3. Secondary bleeding
This type of bleeding usually begins 7 to 10 days post-extraction, and is most likely due to infection destroying the blood clot or ulcerating local vessels.
Interventions
There is no clear evidence from clinical trials comparing the effects of different interventions for the treatment of post-extraction bleeding. In view of the lack of reliable evidence, clinicians must use their clinical experience to determine the most appropriate means of treating this condition, depending on patient-related factors.
Complications
Infection: The dentist may opt to prescribe antibiotics pre- and/or post-operatively if he or she determines the patient to be at risk of infection.
Prolonged bleeding: The dentist has a variety of means at his/her disposal to address bleeding; however, small amounts of blood mixed in the saliva after extraction are normal, even up to 72 hours after extraction. Usually, however, bleeding will almost completely stop within eight hours of the surgery, with only minuscule amounts of blood mixed with saliva coming from the wound. A gauze compress will significantly reduce bleeding over a period of a few hours.
Swelling: Often dictated by the amount of surgery performed to extract a tooth (e.g., surgical insult to the tissues, both hard and soft, surrounding a tooth). Generally, when a surgical flap must be elevated (i.e., the periosteum covering the bone is thus injured), minor to moderate swelling will occur. A poorly cut soft tissue flap, for instance, where the periosteum is torn off rather than cleanly elevated off the underlying bone, will often increase such swelling. Similarly, when bone must be removed using a drill, more swelling is likely to occur.
Bruising: Bruising may occur as a complication after tooth extraction. Bruising is more common in older people or people on aspirin or steroid therapy. It may take weeks for bruising to disappear completely.
Sinus exposure, possibly leading to infection, and oral-antral communication: This can occur when extracting upper molars (and in some patients, upper premolars). The maxillary sinus sits directly above the roots of maxillary molars and premolars. There is a bony floor of the sinus dividing the tooth socket from the sinus itself. This bone can range from thick to thin, from tooth to tooth and from patient to patient. In some cases it is absent and the root is, in fact, in the sinus. At other times, this bone may be removed with the tooth, or may be perforated during surgical extraction. The doctor typically mentions this risk to patients, based on evaluation of radiographs showing the relationship of the tooth to the sinus. The sinus cavity is lined with a membrane called the Schneiderian membrane, which may or may not be perforated.
If this membrane is exposed after an extraction but remains intact, a "sinus exposure" has occurred. If the membrane is perforated, however, it is a "sinus communication". These two conditions are treated differently. In the event of a sinus communication, the dentist may decide to let it heal on its own, or may need to surgically obtain primary closure, depending on the size of the exposure and the likelihood of the patient healing. In both cases, a resorbable material called "gelfoam" is typically placed in the extraction site to promote clotting and serve as a framework for granulation tissue to accumulate. Patients are typically provided with prescriptions for antibiotics that cover sinus bacterial flora, decongestants, and careful instructions to follow during the healing period.
Nerve injury: This is primarily an issue with extraction of third molars, but can occur with the extraction of any tooth should the nerve be close to the surgical site. Two nerves are typically of concern, each found in duplicate (one left and one right):
1. The inferior alveolar nerve, which enters the mandible at the mandibular foramen and exits the mandible at the sides of the chin from the mental foramen. This nerve supplies sensation to the lower teeth on the right or left half of the dental arch, as well as sense of touch to the right or left half of the chin and lower lip.
2. The lingual nerve (one right and one left), which branches off the mandibular branch of the trigeminal nerve and courses just inside the jaw bone, entering the tongue and supplying sense of touch and taste to the right or left half of the anterior two-thirds of the tongue, as well as the lingual gingiva (i.e., the gums on the inside surface of the dental arch).
Such injuries can occur while lifting teeth (typically affecting the inferior alveolar nerve), but are most commonly caused by inadvertent damage with a surgical drill. Such injuries are rare and are usually temporary, but depending on the type of injury (i.e., Seddon classification: neuropraxia, axonotmesis and neurotmesis), they can be prolonged or even permanent.
Displacement of a tooth or part of a tooth into the maxillary sinus (upper teeth only): In such cases, the tooth or tooth fragment must almost always be retrieved. In some cases, the sinus cavity can be irrigated with saline (antral lavage) and the tooth fragment may be brought back to the site of the opening through which it entered the sinus, and may be retrievable. At other times, a window must be made into the sinus in the canine fossa, a procedure referred to as a "Caldwell-Luc" operation.
Dry socket (alveolar osteitis) is a painful phenomenon that most commonly occurs a few days after the removal of mandibular (lower) wisdom teeth. It typically occurs when the blood clot within the healing tooth extraction site is disrupted. More likely, alveolar osteitis is a phenomenon of painful inflammation within the empty tooth socket because of the relatively poor blood supply to this area of the mandible (which explains why dry socket is usually not experienced in other parts of the jaw). Inflamed alveolar bone, unprotected and exposed to the oral environment after tooth extraction, can become packed with food and debris. Dry socket typically causes a sharp and sudden increase in pain commencing 2–5 days following the extraction of a mandibular molar, most commonly the third molar. This is often extremely unpleasant for the patient; the only symptom of dry socket is pain, which often radiates up and down the head and neck.
A dry socket is not an infection, and is not directly associated with swelling, because it occurs entirely within bone; it is a phenomenon of inflammation within the bony lining of an empty tooth socket. Because dry socket is not an infection, the use of antibiotics has no effect on its rate of occurrence. There is some evidence that rinsing with chlorhexidine before or after extraction, or placing chlorhexidine gel in the sockets of extracted teeth, provides a benefit in preventing dry socket, but the potential adverse effects of chlorhexidine have to be considered. The risk of alveolar osteitis increases dramatically with smoking after an extraction.
Bone fragments: Particularly when extraction of molars is involved, it is not uncommon for the bones which formerly supported the tooth to shift and in some cases to erupt through the gums, presenting protruding sharp edges which can irritate the tongue and cause discomfort. This is distinguished from a similar phenomenon where broken fragments of bone or tooth left over from the extraction can also protrude through the gums. In the latter case, the fragments will usually work their way out on their own. In the former case, the protrusions can either be snipped off by the dentist or, eventually, the exposed bone will erode away on its own.
Maxillary tuberosity fracture: Can occur especially during molar extractions. A variety of factors can cause this, including a single standing molar, extracting in the wrong order, inadequate alveolar support, pathological gemination, or extension of the maxillary sinus weakening the area.
Trismus: Trismus, also known as lockjaw, affects functions of the oral cavity by restricting opening of the mouth. A double-blind clinical study was done to test the effect of two different medications on post-extraction trismus. The patients who received a corticosteroid intravenously had a statistically significantly lower level of trismus compared to patients receiving an NSAID intravenously or no medication.
Loss of a tooth: If an extracted tooth slips out of the forceps, it may be swallowed or inhaled. The patient may be aware of swallowing it, or they may cough, which suggests inhalation of the tooth. The patient must be referred for a chest X-ray in hospital if the tooth cannot be found. If it has been swallowed, no action is necessary, as it usually passes through the alimentary canal without doing any harm. But if it has been inhaled, an urgent operation is necessary to recover it from the airway or lung before it causes serious complications such as pneumonia or a lung abscess.
Luxation of the adjacent tooth: The application of force during the extraction procedure must be strictly limited to the tooth that requires extraction. Most surgical extraction procedures require that forces be diverted from the tooth itself to areas such as the bone surrounding the tooth, to ensure adequate bone removal before proceeding any further in the extraction. Even so, the forces applied by various instruments during both simple and surgical procedures may loosen the teeth in front of or behind the tooth being extracted, depending on the direction and location of the applied force, if those forces stray from the tooth that actually needs extraction. Such deleterious forces may weaken the anchorage of adjacent teeth within their bony sockets, and hence result in weakening of the adjacent teeth.
Extraction of the wrong tooth: Causes include misdiagnosis, altered tooth morphology, faulty clinical examination, poor patient history, and undetected or unmentioned previous extractions that may predispose the operator to mistake another tooth for the one previously extracted.
Osteonecrosis: Osteonecrosis of the jaw is the slow destruction of bone in an extraction site. A case-control study of 191 cases and 573 controls was used to understand the relationship between osteonecrosis of the jaw and prior usage of bisphosphonate drugs, which are commonly prescribed to treat osteoporosis. All of the participants were over 40 years of age, mostly female, and had been taking bisphosphonates for six months or longer. The presence of osteonecrosis of the jaw was established from dentists' previous diagnoses in the participating case and control patients' medical records. Reports showed that women using bisphosphonates for more than two years were ten times more likely to experience osteonecrosis of the jaw, while women who had taken bisphosphonates for less than two years were four times more likely to suffer from osteonecrosis of the jaw, compared to women who were not taking bisphosphonates. It is therefore extremely important to report all medications used to the dentist before an extraction, so that the risk of osteonecrosis can be reduced.
Atraumatic extraction
Atraumatic extraction is a novel technique for extracting teeth with minimal trauma to the bone and surrounding tissues. It is especially useful in patients who are highly susceptible to complications such as bleeding, necrosis, or jaw fracture. It can also preserve bone for subsequent implant placement. Techniques involve minimal use of forceps, which damage socket walls, relying instead on luxators, elevators and syndesmotomy.
Replacement options for missing teeth
Following dental extraction, a gap is left. The options to fill this gap are commonly recorded as BIND (bridge, implant, nothing, denture), and the choice is made by dentist and patient based on several factors.
History
Historically, dental extractions have been used to treat a variety of illnesses. Before the discovery of antibiotics, chronic tooth infections were often linked to a variety of health problems, and therefore removal of a diseased tooth was a common treatment for various medical conditions. Instruments used for dental extractions date back several centuries. In the 14th century, Guy de Chauliac invented the dental pelican, which was used through the late 18th century. The pelican was replaced by the dental key which, in turn, was replaced by modern forceps in the 19th century. As dental extractions can vary tremendously in difficulty, depending on the patient and the tooth, a wide variety of instruments exist to address specific situations. Rarely, tooth extraction was used as a method of torture, e.g., to obtain forced confessions.
Biology and health sciences
Dental treatments
Health
38714
https://en.wikipedia.org/wiki/World
World
The world is the totality of entities, the whole of reality, or everything that exists. The nature of the world has been conceptualized differently in different fields. Some conceptions see the world as unique, while others talk of a "plurality of worlds". Some treat the world as one simple object, while others analyze the world as a complex made up of parts. In scientific cosmology, the world or universe is commonly defined as "the totality of all space and time; all that is, has been, and will be". Theories of modality talk of possible worlds as complete and consistent ways how things could have been. Phenomenology, starting from the horizon of co-given objects present in the periphery of every experience, defines the world as the biggest horizon, or the "horizon of all horizons". In philosophy of mind, the world is contrasted with the mind as that which is represented by the mind. Theology conceptualizes the world in relation to God, for example, as God's creation, as identical to God, or as the two being interdependent. In religions, there is a tendency to downgrade the material or sensory world in favor of a spiritual world to be sought through religious practice. A comprehensive representation of the world and our place in it, as is found in religions, is known as a worldview. Cosmogony is the field that studies the origin or creation of the world, while eschatology refers to the science or doctrine of the last things or of the end of the world. In various contexts, the term "world" takes a more restricted meaning associated, for example, with the Earth and all life on it, with humanity as a whole, or with an international or intercontinental scope. In this sense, world history refers to the history of humanity as a whole, and world politics is the discipline of political science studying issues that transcend nations and continents. Other examples include terms such as "world religion", "world language", "world government", "world war", "world population", "world economy", or "world championship".
Etymology
The English word world comes from the Old English weorold. The Old English word is a reflex of the Common Germanic *weraldiz, a compound of *wer 'man' and *ald 'age', thus literally meaning roughly 'age of man'; it has cognate forms in Old Frisian, Old Saxon, Old Dutch, Old High German, and Old Norse. The corresponding word in Latin is mundus, literally 'clean, elegant', itself a loan translation of Greek cosmos 'orderly arrangement'. While the Germanic word thus reflects a mythological notion of a "domain of Man" (compare Midgard), presumably as opposed to the divine sphere on the one hand and the chthonic sphere of the underworld on the other, the Greco-Latin term expresses a notion of creation as an act of establishing order out of chaos.
Conceptions
Different fields often work with quite different conceptions of the essential features associated with the term "world". Some conceptions see the world as unique: there can be no more than one world. Others talk of a "plurality of worlds". Some see worlds as complex things composed of many substances as their parts, while others hold that worlds are simple in the sense that there is only one substance: the world as a whole. Some characterize worlds in terms of objective spacetime, while others define them relative to the horizon present in each experience. These different characterizations are not always exclusive: it may be possible to combine some without leading to a contradiction. Most of them agree that worlds are unified totalities.
Monism and pluralism
Monism is a thesis about oneness: that only one thing exists in a certain sense. The denial of monism is pluralism, the thesis that, in a certain sense, more than one thing exists. There are many forms of monism and pluralism, but in relation to the world as a whole, two are of special interest: existence monism/pluralism and priority monism/pluralism. Existence monism states that the world is the only concrete object there is. This means that all the concrete "objects" we encounter in our daily lives, including apples, cars and ourselves, are not truly objects in a strict sense. Instead, they are just dependent aspects of the world-object. Such a world-object is simple in the sense that it does not have any genuine parts. For this reason, it has also been referred to as a "blobject", since it lacks an internal structure, like a blob. Priority monism allows that there are other concrete objects besides the world, but it holds that these objects do not have the most fundamental form of existence, that they somehow depend on the existence of the world. The corresponding forms of pluralism state that the world is complex in the sense that it is made up of concrete, independent objects.
Scientific cosmology
Scientific cosmology can be defined as the science of the universe as a whole. In it, the terms "universe" and "cosmos" are usually used as synonyms for the term "world". One common definition of the world/universe found in this field is as "[t]he totality of all space and time; all that is, has been, and will be". Some definitions emphasize that there are two other aspects to the universe besides spacetime: forms of energy or matter, like stars and particles, and laws of nature. World-conceptions in this field differ both concerning their notion of spacetime and concerning the contents of spacetime. The theory of relativity plays a central role in modern cosmology and its conception of space and time. A difference from its predecessors is that it conceives space and time not as distinct dimensions but as a single four-dimensional manifold called spacetime. This can be seen in special relativity in relation to the Minkowski metric, which includes both spatial and temporal components in its definition of distance. General relativity goes one step further by integrating the concept of mass into the concept of spacetime as its curvature. Quantum cosmology uses a classical notion of spacetime and conceives the whole world as one big wave function expressing the probability of finding particles in a given location.
Theories of modality
The world-concept plays a role in many modern theories of modality, sometimes in the form of possible worlds. A possible world is a complete and consistent way how things could have been. The actual world is a possible world, since the way things are is a way things could have been. There are many other ways things could have been besides how they actually are. For example, Hillary Clinton did not win the 2016 US election, but she could have won it. So there is a possible world in which she did. There is a vast number of possible worlds, one corresponding to each such difference, no matter how small or big, as long as no outright contradictions are introduced this way. Possible worlds are often conceived as abstract objects, for example, in terms of non-obtaining states of affairs or as maximally consistent sets of propositions. On such a view, they can even be seen as belonging to the actual world.
Another way to conceive possible worlds, made famous by David Lewis, is as concrete entities. On this conception, there is no important difference between the actual world and possible worlds: both are conceived as concrete, inclusive and spatiotemporally connected. The only difference is that the actual world is the world we live in, while other possible worlds are not inhabited by us but by our counterparts. Everything within a world is spatiotemporally connected to everything else, but the different worlds do not share a common spacetime: they are spatiotemporally isolated from each other. This is what makes them separate worlds. It has been suggested that, besides possible worlds, there are also impossible worlds. Possible worlds are ways things could have been, so impossible worlds are ways things could not have been. Such worlds involve a contradiction, like a world in which Hillary Clinton both won and lost the 2016 US election. Both possible and impossible worlds have in common the idea that they are totalities of their constituents.
Phenomenology
Within phenomenology, worlds are defined in terms of horizons of experiences. When we perceive an object, like a house, we do not just experience this object at the center of our attention but also various other objects surrounding it, given in the periphery. The term "horizon" refers to these co-given objects, which are usually experienced only in a vague, indeterminate manner. The perception of a house involves various horizons, corresponding to the neighborhood, the city, the country, the Earth, etc. In this context, the world is the biggest horizon, or the "horizon of all horizons". It is common among phenomenologists to understand the world not just as a spatiotemporal collection of objects but as additionally incorporating various other relations between these objects. These relations include, for example, indication-relations that help us anticipate one object given the appearance of another object, and means-end relations or functional involvements relevant to practical concerns.
Philosophy of mind
In philosophy of mind, the term "world" is commonly used in contrast to the term "mind", as that which is represented by the mind. This is sometimes expressed by stating that there is a gap between mind and world and that this gap needs to be overcome for representation to be successful. One problem in philosophy of mind is to explain how the mind is able to bridge this gap and to enter into genuine mind-world relations, for example, in the form of perception, knowledge or action. This is necessary for the world to be able to rationally constrain the activity of the mind. According to a realist position, the world is something distinct from and independent of the mind. Idealists conceive of the world as partially or fully determined by the mind. Immanuel Kant's transcendental idealism, for example, posits that the spatiotemporal structure of the world is imposed by the mind on reality but lacks independent existence otherwise. A more radical idealist conception of the world can be found in Berkeley's subjective idealism, which holds that the world as a whole, including all everyday objects like tables, cats, trees and ourselves, "consists of nothing but minds and ideas".
Theology
Different theological positions hold different conceptions of the world based on its relation to God. Classical theism states that God is wholly distinct from the world.
But the world depends for its existence on God, both because God created the world and because He maintains or conserves it. This is sometimes understood in analogy to how humans create and conserve ideas in their imagination, with the difference being that the divine mind is vastly more powerful. On such a view, God has absolute, ultimate reality, in contrast to the lower ontological status ascribed to the world. God's involvement in the world is often understood along the lines of a personal, benevolent God who looks after and guides His creation. Deists agree with theists that God created the world but deny any subsequent, personal involvement in it. Pantheists reject the separation between God and world. Instead, they claim that the two are identical. This means that there is nothing to the world that does not belong to God and that there is nothing to God beyond what is found in the world. Panentheism constitutes a middle ground between theism and pantheism. Against theism, it holds that God and the world are interrelated and depend on each other. Against pantheism, it holds that there is no outright identity between the two.
History of philosophy
In philosophy, the term world has several possible meanings. In some contexts, it refers to everything that makes up reality or the physical universe. In others, it can have a specific ontological sense (see world disclosure). While clarifying the concept of world has arguably always been among the basic tasks of Western philosophy, this theme appears to have been raised explicitly only at the start of the twentieth century.
Plato
Plato is well known for his theory of forms, which posits the existence of two different worlds: the sensible world and the intelligible world. The sensible world is the world we live in, filled with changing physical things we can see, touch and interact with. The intelligible world is the world of invisible, eternal, changeless forms like goodness, beauty, unity and sameness. Plato ascribes a lower ontological status to the sensible world, which only imitates the world of forms. This is because physical things exist only to the extent that they participate in the forms that characterize them, while the forms themselves have an independent manner of existence. In this sense, the sensible world is a mere replication of the perfect exemplars found in the world of forms: it never lives up to the original. In the allegory of the cave, Plato compares the physical things we are familiar with to mere shadows of the real things. But not knowing the difference, the prisoners in the cave mistake the shadows for the real things.
Wittgenstein
Two definitions that were both put forward in the 1920s, however, suggest the range of available opinion. "The world is everything that is the case", wrote Ludwig Wittgenstein in his influential Tractatus Logico-Philosophicus, first published in 1921.
Heidegger
Martin Heidegger, meanwhile, argued that "the surrounding world is different for each of us, and notwithstanding that we move about in a common world".
Eugen Fink
"World" is one of the key terms in Eugen Fink's philosophy. He thinks that there is a misguided tendency in Western philosophy to understand the world as one enormously big thing containing all the small everyday things we are familiar with. He sees this view as a form of forgetfulness of the world and tries to oppose it with what he calls the "cosmological difference": the difference between the world and the inner-worldly things it contains.
On his view, the world is the totality of inner-worldly things but transcends them. It is itself groundless, but it provides a ground for things. It therefore cannot be identified with a mere container. Instead, the world gives appearance to inner-worldly things; it provides them with a place, a beginning and an end. One difficulty in investigating the world is that we never encounter it, since it is not just one more thing that appears to us. This is why Fink uses the notion of play or playing to elucidate the nature of the world. He sees play as a symbol of the world that is both part of it and that represents it. Play usually comes with a form of imaginary play-world involving various things relevant to the play. But just as the play is more than the imaginary realities appearing in it, so the world is more than the actual things appearing in it.
Goodman
The concept of worlds plays a central role in Nelson Goodman's late philosophy. He argues that we need to posit different worlds in order to account for the fact that there are different incompatible truths found in reality. Two truths are incompatible if they ascribe incompatible properties to the same thing. This happens, for example, when we assert both that the earth moves and that the earth is at rest. These incompatible truths correspond to two different ways of describing the world: heliocentrism and geocentrism. Goodman terms such descriptions "world versions". He holds a correspondence theory of truth: a world version is true if it corresponds to a world. Incompatible true world versions correspond to different worlds. It is common for theories of modality to posit the existence of a plurality of possible worlds. But Goodman's theory is different, since it posits a plurality not of possible but of actual worlds. Such a position is in danger of involving a contradiction: there cannot be a plurality of actual worlds if worlds are defined as maximally inclusive wholes. This danger may be avoided by interpreting Goodman's world-concept not as a maximally inclusive whole in the absolute sense but as relative to its corresponding world version: a world contains all and only the entities that its world version describes.
Religion
Mythological cosmologies depict the world as centered on an axis mundi and delimited by a boundary such as a world ocean, a world serpent or similar.
Hinduism
Hinduism constitutes a family of religious-philosophical views. These views present perspectives on the nature and role of the world. Samkhya philosophy, for example, is a metaphysical dualism that understands reality as comprising two parts: purusha and prakriti. The term "purusha" stands for the individual conscious self that each of us possesses. Prakriti, on the other hand, is the one world inhabited by all these selves. Samkhya understands this world as a world of matter governed by the law of cause and effect. The term "matter" is understood in a broad sense in this tradition, including both physical and mental aspects. This is reflected in the doctrine of tattvas, according to which prakriti is made up of 23 principles or elements of reality. These principles include physical elements, like water or earth, and mental aspects, like intelligence or sense-impressions. The relation between purusha and prakriti is conceived as one of observation: purusha is the conscious self aware of the world of prakriti but does not causally interact with it. A different conception of the world is present in Advaita Vedanta, the monist school among the Vedanta schools.
Unlike the realist position defended in Samkhya philosophy, Advaita Vedanta sees the world of multiplicity as an illusion, referred to as Maya. This illusion includes the impression of existing as separate experiencing selves, called Jivas. Instead, Advaita Vedanta teaches that on the most fundamental level of reality, referred to as Brahman, there exists no plurality or difference. All there is is one all-encompassing self: Atman. Ignorance is seen as the source of this illusion, which results in bondage to the world of mere appearances. According to Advaita Vedanta, liberation is possible through overcoming this illusion by acquiring knowledge of Brahman.
Christianity
Contemptus mundi is the name given to the belief that the world, in all its vanity, is nothing more than a futile attempt to hide from God by stifling our desire for the good and the holy. This view has been characterised as a "pastoral of fear" by historian Jean Delumeau. "The world, the flesh, and the devil" is a traditional division of the sources of temptation. Orbis Catholicus is a Latin phrase meaning "Catholic world", per the expression Urbi et Orbi, and refers to that area of Christendom under papal supremacy.
Islam
In Islam, the term "dunya" is used for the world. Its meaning is derived from the root word "dana", a term for "near". It is associated with the temporal, sensory world and earthly concerns, i.e. with this world in contrast to the spiritual world. Religious teachings warn of a tendency to seek happiness in this world and advise a more ascetic lifestyle concerned with the afterlife. Other strands in Islam recommend a balanced approach.
Mandaeism
In Mandaean cosmology, the world or earthly realm is known as Tibil. It is separated from the World of Light above and the World of Darkness below by aether.
Related terms and problems
Worldviews
A worldview is a comprehensive representation of the world and our place in it. As a representation, it is a subjective perspective of the world and thereby different from the world it represents. All higher animals need to represent their environment in some way in order to navigate it, but it has been argued that only humans possess a representation encompassing enough to merit the term "worldview". Philosophers of worldviews commonly hold that the understanding of any object depends on a worldview constituting the background against which this understanding can take place. This may affect not just our intellectual understanding of the object in question but the experience of it in general. It is therefore impossible to assess one's worldview from a neutral perspective, since this assessment already presupposes the worldview as its background. Some hold that each worldview is based on a single hypothesis that promises to solve all the problems of our existence that we may encounter. On this interpretation, the term is closely associated with the worldviews given by different religions. Worldviews offer orientation not just in theoretical matters but also in practical matters. For this reason, they usually include answers to the question of the meaning of life and other evaluative components concerning what matters and how we should act. A worldview can be unique to one individual, but worldviews are usually shared by many people within a certain culture or religion.
Paradox of many worlds
The idea that there exist many different worlds is found in various fields.
For example, theories of modality talk about a plurality of possible worlds, and the many-worlds interpretation of quantum mechanics carries this reference even in its name. Talk of different worlds is also common in everyday language, for example, with reference to the world of music, the world of business, the world of football, the world of experience or the Asian world. But at the same time, worlds are usually defined as all-inclusive totalities. This seems to contradict the very idea of a plurality of worlds, since if a world is total and all-inclusive, then it cannot have anything outside itself. Understood this way, a world can neither have other worlds besides itself nor be part of something bigger. One way to resolve this paradox while holding onto the notion of a plurality of worlds is to restrict the sense in which worlds are totalities. On this view, worlds are not totalities in an absolute sense. This might even be understood in the sense that, strictly speaking, there are no worlds at all. Another approach understands worlds in a schematic sense: as context-dependent expressions that stand for the current domain of discourse. So in the expression "Around the World in Eighty Days", the term "world" refers to the earth, while in the colonial expression "the New World" it refers to the landmass of North and South America.
Cosmogony
Cosmogony is the field that studies the origin or creation of the world. This includes both scientific cosmogony and creation myths found in various religions. The dominant theory in scientific cosmogony is the Big Bang theory, according to which space, time and matter all have their origin in one initial singularity, occurring about 13.8 billion years ago. This singularity was followed by an expansion that allowed the universe to cool down sufficiently for the formation of subatomic particles and later atoms. These initial elements formed giant clouds, which would then coalesce into stars and galaxies. Non-scientific creation myths are found in many cultures and are often enacted in rituals expressing their symbolic meaning. They can be categorized by their contents. Types often found include creation from nothing, from chaos or from a cosmic egg.
Eschatology
Eschatology refers to the science or doctrine of the last things or of the end of the world. It is traditionally associated with religion, specifically with the Abrahamic religions. In this form, it may include teachings both about the end of each individual human life and about the end of the world as a whole. But it has been applied to other fields as well, for example, in the form of physical eschatology, which includes scientifically based speculations about the far future of the universe. According to some models, there will be a Big Crunch in which the whole universe collapses back into a singularity, possibly resulting in a second Big Bang afterward. But current astronomical evidence seems to suggest that our universe will continue to expand indefinitely.
World history
World history studies the world from a historical perspective. Unlike other approaches to history, it employs a global viewpoint. It deals less with individual nations and civilizations, which it usually approaches at a high level of abstraction. Instead, it concentrates on wider regions and zones of interaction, often interested in how people, goods and ideas move from one region to another.
It includes comparisons of different societies and civilizations, as well as the consideration of wide-ranging developments with a long-term global impact, like the process of industrialization. Contemporary world history is dominated by three main research paradigms determining the periodization into different epochs. One is based on productive relations between humans and nature. The two most important changes in history in this respect were the introduction of agriculture and husbandry concerning the production of food, which started around 10,000 to 8,000 BCE and is sometimes termed the Neolithic Revolution, and the Industrial Revolution, which started around 1760 CE and involved the transition from manual to industrial manufacturing. Another paradigm, focusing on culture and religion instead, is based on Karl Jaspers' theories about the Axial Age, a time in which various new forms of religious and philosophical thought appeared in several separate parts of the world between about 800 and 200 BCE. A third periodization is based on the relations between civilizations and societies. According to this paradigm, history can be divided into three periods in relation to the dominant region in the world: Middle Eastern dominance before 500 BCE, Eurasian cultural balance until 1500 CE, and Western dominance since 1500 CE. Big history employs an even wider framework than world history by putting human history into the context of the history of the universe as a whole. It starts with the Big Bang and traces the formation of galaxies, the Solar System, the Earth, its geological eras, the evolution of life and humans, up until the present day.
World politics
World politics, also referred to as global politics or international relations, is the discipline of political science studying issues of interest to the world that transcend nations and continents. It aims to explain complex patterns found in the social world that are often related to the pursuit of power, order and justice, usually in the context of globalization. It focuses not just on the relations between nation-states but also considers other transnational actors, like multinational corporations, terrorist groups, or non-governmental organizations. For example, it tries to explain events like 9/11, the 2003 war in Iraq or the financial crisis of 2007–2008. Various theories have been proposed in order to deal with the complexity involved in formulating such explanations. These theories are sometimes divided into realism, liberalism and constructivism. Realists see nation-states as the main actors in world politics. They constitute an anarchical international system without any overarching power to control their behavior. They are seen as sovereign agents that, determined by human nature, act according to their national self-interest. Military force may play an important role in the ensuing struggle for power between states, but diplomacy and cooperation are also key mechanisms for nations to achieve their goals. Liberals acknowledge the importance of states but also emphasize the role of transnational actors, like the United Nations or the World Trade Organization. They see humans as perfectible and stress the role of democracy in this process. The emergent order in world politics, on this perspective, is more complex than a mere balance of power, since more different agents and interests are involved in its production. Constructivism ascribes more importance to the agency of individual humans than realism and liberalism do.
It understands the social world as a construction of the people living in it. This leads to an emphasis on the possibility of change. According to the constructivists, if the international system is an anarchy of nation-states, as the realists hold, then this is only so because we made it this way, and it may well change, since this arrangement is not prefigured by human nature.
Physical sciences
Physical cosmology
Astronomy
38722
https://en.wikipedia.org/wiki/Psychiatric%20hospital
Psychiatric hospital
A psychiatric hospital, also known as a mental health hospital, a behavioral health hospital, or an asylum, is a specialized medical facility that focuses on the treatment of severe mental disorders. These institutions cater to patients with conditions such as schizophrenia, bipolar disorder, major depressive disorder, and eating disorders, among others.
Overview
Psychiatric hospitals vary considerably in size and classification. Some specialize in short-term or outpatient therapy for low-risk patients, while others provide long-term care for individuals requiring routine assistance or a controlled environment due to their psychiatric condition. Patients may choose voluntary commitment, but those deemed to pose a significant danger to themselves or others may be subject to involuntary commitment and treatment. In general hospitals, psychiatric wards or units serve a similar purpose. Modern psychiatric hospitals have evolved from the older concept of lunatic asylums, shifting the focus from mere containment and restraint to evidence-based treatments that aim to help patients function in society. Drug administration, as well as structured and one-to-one therapy (such as occupational therapy and psychotherapy), plays a role in treatment trajectories, and these are the focus of most studies on the forms of treatment that exist in psychiatric wards. However, because psychiatric wards are social living spaces, inpatient relationships in psychiatric wards also play a role in survival and recovery trajectories. With successive waves of reform, and the introduction of effective evidence-based treatments, most modern psychiatric hospitals emphasize treatment, usually including a combination of psychiatric medications and psychotherapy, that assists patients in functioning in the outside world. Many countries have prohibited the use of physical restraints on patients, a practice which has included tying psychiatric patients to their beds for days or even months at a time, though it is still periodically employed in the United States, India, Japan, and other countries.
History
Modern psychiatric hospitals evolved from, and eventually replaced, the older lunatic asylum. Their development also entails the rise of organized institutional psychiatry. Hospitals known as bimaristans were built in the Middle East in the early ninth century; the first was built in Baghdad under the leadership of Harun al-Rashid. While not devoted solely to patients with psychiatric disorders, early psychiatric hospitals often contained wards for patients exhibiting mania or other psychological distress. Because of cultural taboos against refusing to care for one's family members, mentally ill patients would be surrendered to a bimaristan only if the patient demonstrated violence, incurable chronic illness, or some other extremely debilitating ailment. Psychological wards were typically enclosed by iron bars owing to the aggression of some of the patients. In Western Europe, the idea of a proper mental hospital first took shape in Spain. A member of the Mercedarian Order named Juan Gilaberto Jofré traveled frequently to Islamic countries and observed several institutions that confined the insane. He proposed the founding of an institution exclusively for "sick people who had to be treated by doctors", something very modern for the time. The foundation was carried out in 1409, thanks to several wealthy men from Valencia who contributed funds for its completion.
It was considered the first institution in the world at that time specializing in the treatment of mental illnesses. Later on, physicians, including Philippe Pinel at Bicêtre Hospital in France and William Tuke at the York Retreat in England, began to advocate viewing mental illness as a disorder that required compassionate treatment to aid the rehabilitation of the sufferer. In the Western world, the arrival of institutionalisation as a solution to the problem of madness was very much an advent of the nineteenth century. The first public mental asylums were established in Britain; the passing of the County Asylums Act 1808 empowered magistrates to build rate-supported asylums in every county to house the many 'pauper lunatics'. Nine counties first applied, the first public asylum opening in 1812 in Nottinghamshire. In 1828, the newly appointed Commissioners in Lunacy were empowered to license and supervise private asylums. The Lunacy Act 1845 made the construction of asylums in every county compulsory, with regular inspections on behalf of the Home Secretary, and required asylums to have written regulations and a resident physician. At the beginning of the 19th century there were a few thousand people housed in a variety of disparate institutions throughout England, but by 1900 that figure had grown to about 100,000. This growth coincided with the growth of alienism, later known as psychiatry, as a medical specialism. The treatment of inmates in early lunatic asylums was sometimes very brutal and focused on containment and restraint. In the late 19th and early 20th centuries, psychiatric institutions ceased using terms such as "madness", "lunacy" or "insanity", which assumed a unitary psychosis, and began instead to split mental illness into numerous diseases, including catatonia, melancholia, and dementia praecox, which is now known as schizophrenia. In 1961, sociologist Erving Goffman described a theory of the "total institution" and the process by which such an institution takes efforts to maintain predictable and regular behavior on the part of both "guard" and "captor", suggesting that many of the features of such institutions serve the ritual function of ensuring that both classes of people know their function and social role, in other words of "institutionalizing" them. His book Asylums was a key text in the development of deinstitutionalization. With successive waves of reform and the introduction of effective evidence-based treatments, modern psychiatric hospitals provide a primary emphasis on treatment; further, they attempt, where possible, to help patients control their own lives in the outside world with the use of a combination of psychiatric drugs and psychotherapy. These treatments can be involuntary. Involuntary treatments are among the many psychiatric practices which are questioned by the mental patient liberation movement. In American history, the "12,225,000 Acre Bill" of the 1850s emphasized that care should be given in asylums rather than housing individuals in jails or poorhouses, or having them live on the streets. As the number of psychiatric hospitals has decreased over the years, the availability of space and beds for new patients has, depending on the state, drastically decreased.
Types
There are several different types of modern psychiatric hospitals, but all of them house people with mental illnesses of varying severity. In the United Kingdom, both crisis admissions and medium-term care are usually provided on acute admissions wards.
Juvenile or youth wards in psychiatric hospitals or psychiatric wards are set aside for children or youth with mental illness. Long-term care facilities have the goal of treatment and rehabilitation within a short time-frame (two or three years). Another institution for the mentally ill is a community-based halfway house.
Crisis stabilization
In the United States, there are high-acuity and low-acuity crisis facilities (or crisis stabilization units). High-acuity crisis stabilization units serve individuals who are actively suicidal, violent, or intoxicated. Low-acuity crisis facilities include peer respites, social detoxes, and other programs to serve individuals who are not actively suicidal or violent.
Open units
Open psychiatric units are not as secure as crisis stabilization units. They are not used for acutely suicidal people; instead, the focus in these units is to make life as normal as possible for patients while continuing treatment to the point where they can be discharged. However, patients are usually still not allowed to hold their own medications in their rooms, because of the risk of an impulsive overdose. While some open units are physically unlocked, other open units still use locked entrances and exits, depending on the type of patients admitted.
Medium term
Another type of psychiatric hospital is medium term, which provides care lasting several weeks. Most drugs used for psychiatric purposes take several weeks to take effect, and the main purpose of these hospitals is to monitor the patient for the first few weeks of therapy to ensure the treatment is effective.
Juvenile wards
Juvenile wards are sections of psychiatric hospitals or psychiatric wards set aside for children with mental illness. However, there are a number of institutions specializing only in the treatment of juveniles, particularly when dealing with drug abuse, self-harm, eating disorders, anxiety, depression or other mental illnesses. As of 2020, estimates of mental illness among inmates in jails and juvenile wards range from 15% to 20%. Because of this, many juvenile wards and prisons have opened inpatient mental health units within their facilities.
Long-term care facilities
In the United Kingdom, long-term care facilities are now being replaced with smaller secure units, some within hospitals. Modern buildings, modern security, and a location close to the patient's home to help with reintegration into society once medication has stabilized the condition are often features of such units. Examples of this include the Three Bridges Unit at St Bernard's Hospital in West London and the John Munroe Hospital in Staffordshire. These units have the goal of treatment and rehabilitation to allow for a transition back into society within a short time-frame, usually lasting two or three years. Not all patients' treatment meets this criterion, however, leading larger hospitals to retain this role. These hospitals provide stabilization and rehabilitation for those who are actively experiencing uncontrolled symptoms of mental disorders such as depression, bipolar disorders, eating disorders, and so on. In the United States, long-term care facilities are used for individuals with severe and continuing mental health struggles. These hospitals provide a different form of care from other psychiatric hospitals: they are designed to deliver comprehensive care over an extended period of time, with a higher level of support and close monitoring of patients.
Within these facilities, care can be better adapted to fit each individual patient, allowing a more patient-centered focus in the form of care received. Halfway houses One type of institution for the mentally ill is a community-based halfway house. These facilities provide assisted living for an extended period of time for patients with mental illnesses, and they often aid in the transition to self-sufficiency. These institutions are considered by many psychiatrists to be one of the most important parts of a mental health system, although some localities lack sufficient funding for them. Political imprisonment In some countries, mental institutions may be used for the incarceration of political prisoners as a form of punishment. A notable historical example was the use of punitive psychiatry in the Soviet Union and China. Like the former Soviet Union and China, Belarus has also used punitive psychiatry against political opponents and critics of the current government in modern times. Secure units In the United Kingdom, criminal courts or the Home Secretary can, under various sections of the Mental Health Act, order the detention of offenders in a psychiatric hospital, but the term "criminally insane" is no longer legally or medically recognized. Secure psychiatric units exist in all regions of the UK for this purpose; in addition, there are a few specialist hospitals which offer treatment with high levels of security. These facilities are divided into three main categories: High, Medium and Low Secure. Although the phrase "Maximum Secure" is often used in the media, there is no such classification. "Local Secure" is a common misnomer for Low Secure units, as patients are often detained there by local criminal courts for psychiatric assessment before sentencing. Run by the National Health Service, these facilities provide psychiatric assessment and can also provide treatment and accommodation in a safe hospital environment that prevents absconding, so there is far less risk of patients harming themselves or others. In Dublin, the Central Mental Hospital performs a similar function. Community hospital utilization Community hospitals across the United States regularly discharge mental health patients, who are then typically referred to out-patient treatment and therapy. A study of community hospital discharge data from 2003 to 2011, however, found that mental health hospitalizations had increased for both children and adults. Compared with other categories of hospital utilization, mental health discharges were the lowest for children, while the most rapidly increasing hospitalizations were for adults under 64. Some units have been opened to provide therapeutically enhanced treatment, a subcategory of the three main hospital unit types. In the UK, high secure hospitals exist, including Ashworth Hospital in Merseyside, Broadmoor Hospital in Crowthorne, Rampton Secure Hospital in Retford, and the State Hospital in Carstairs, Scotland. Northern Ireland, the Isle of Man, and the Channel Islands have medium and low secure units but no high secure units; patients who qualify for such treatment are placed in high secure units on the UK mainland under the Out of Area (Off-Island Placements) Referrals provision of the Mental Health Act 1983. Among the three unit types, medium secure facilities are the most prevalent in the UK. As of 2009, there were 27 women-only units in England. Irish units include those at prisons in Portlaoise, Castlerea, and Cork.
Criticism Hungarian-American psychiatrist Thomas Szasz argued that psychiatric hospitals are like prisons, unlike other kinds of hospitals, and that psychiatrists who coerce people (into treatment or involuntary commitment) function as judges and jailers, not physicians. Historian and philosopher Michel Foucault is widely known for his comprehensive critique of the use and abuse of the mental hospital system in Madness and Civilization. He argued that Tuke and Pinel's asylum was a symbolic recreation of the condition of a child under a bourgeois family: a microcosm symbolizing the massive structures of bourgeois society and its values, with relations of Family–Children (paternal authority), Fault–Punishment (immediate justice), and Madness–Disorder (social and moral order). Erving Goffman coined the term "total institution" for mental hospitals and similar places which took over and confined a person's whole life. Goffman placed psychiatric hospitals in the same category as concentration camps, prisons, military organizations, orphanages, and monasteries. In his book Asylums, Goffman describes how the institutionalisation process socialises people into the role of a good patient, someone "dull, harmless and inconspicuous"; in turn, it reinforces notions of chronicity in severe mental illness. The Rosenhan experiment of 1973 demonstrated the difficulty of distinguishing sane patients from insane ones. Franco Basaglia, a leading psychiatrist who inspired and planned the psychiatric reform in Italy, likewise defined the mental hospital as an oppressive, locked, and total institution in which prison-like, punitive rules are applied in order to gradually eliminate its own contents; patients, doctors, and nurses are all subjected (at different levels) to the same process of institutionalism. American psychiatrist Loren Mosher observed that the psychiatric institution itself gave him master classes in the art of the "total institution": labeling, unnecessary dependency, the induction and perpetuation of powerlessness, the degradation ceremony, authoritarianism, and the primacy of institutional needs over those of the patients it was ostensibly there to serve. The anti-psychiatry movement, which came to the fore in the 1960s, has opposed many of the practices and conditions of mental hospitals, and in some cases their very existence, on account of the extreme conditions within them. The psychiatric consumer/survivor movement has often objected to or campaigned against conditions in mental hospitals or their use, voluntary or involuntary. The mental patient liberation movement emphatically opposes involuntary treatment but generally does not object to psychiatric treatments that are consensual, provided that both parties can withdraw consent at any time. Beyond criticism of the setup and the form of care psychiatric hospitals provide, there is the more prominent issue of stigmatization from other individuals and from the communities surrounding these hospitals. There has been an increase in stigmatization of individuals who receive professional mental health care in psychiatric hospitals. Stigmatization has a major impact not only on the patients in these hospitals but also on the clients of so-called alternative settings. This stigma can make future patients and individuals who need such care more hesitant to seek it, for fear of future judgement and of becoming victims of stigmatization. Criticism can also come from peers, which can have a direct impact on patients.
This alone can make them feel unable to share their difficulties with, or seek help from, a professional mental health provider. Undercover journalism Alongside the 1973 academic investigation by Rosenhan and other similar experiments, several journalists have had themselves willingly admitted to hospitals in order to conduct undercover journalism. These include: Julius Chambers, who visited Bloomingdale Insane Asylum in 1872, leading to the 1876 book A Mad World and Its People. Nellie Bly, who admitted herself to a mental institution in 1887, leading to the work Ten Days in a Mad-House. Frank Smith, who in 1935 admitted himself into a Kankakee hospital, leading to the articles "Seven Days in the Madhouse" in the Chicago Daily Times. Michael Mok, who investigated similarly in New York in 1961, winning the Lasker prize. Frank Sutherland, who received coaching from a psychiatrist in order to accurately feign symptoms, and spent 31 days in hospital in late 1973 and early 1974, leading to a series of articles in the Nashville Tennessean. Betty Wells, who investigated in 1974, producing the articles titled "A Trip into Darkness" for the Wichita Eagle. Criteria When assessing whether an individual needs to be admitted to a psychiatric hospital, six indicators are considered: mental status, self-care ability, the availability of responsible parties, the patient's effect on their environment, danger potential, and treatment prognosis. The need for inpatient care can change depending on the individual and the presenting issues that need to be addressed. Another criterion is whether the individual poses an immediate threat to themselves or others, which can present as suicidal ideation. Symptoms, disorders, or signs indicating that someone may need a psychiatric hospital include major depressive disorder, suicidal ideation, schizophrenia, eating disorders, post-traumatic stress disorder, and many others.
Biology and health sciences
General concepts
null
38780
https://en.wikipedia.org/wiki/Mariner%202
Mariner 2
Mariner 2 (Mariner-Venus 1962), an American space probe to Venus, was the first robotic space probe to report successfully from a planetary encounter. The first successful spacecraft in the NASA Mariner program, it was a simplified version of the Block I spacecraft of the Ranger program and an exact copy of Mariner 1. The missions of the Mariner 1 and 2 spacecraft are sometimes known as the Mariner R missions. Original plans called for the probes to be launched on the Atlas-Centaur, but serious developmental problems with that vehicle forced a switch to the much smaller Agena B second stage. As such, the design of the Mariner R vehicles was greatly simplified. Far less instrumentation was carried than on the Soviet Venera probes of this period—for example, forgoing a TV camera—as the Atlas-Agena B had only half as much lift capacity as the Soviet 8K78 booster. The Mariner 2 spacecraft was launched from Cape Canaveral on August 27, 1962, and made its closest approach to Venus on December 14, 1962. The Mariner probe consisted of a 100 cm (39.4 in) diameter hexagonal bus, to which solar panels, instrument booms, and antennas were attached. The scientific instruments on board the Mariner spacecraft were: two radiometers (one each for the microwave and infrared portions of the spectrum), a micrometeorite sensor, a solar plasma sensor, a charged particle sensor, and a magnetometer. These instruments were designed to measure the temperature distribution on the surface of Venus and to make basic measurements of Venus' atmosphere. The primary mission was to receive communications from the spacecraft in the vicinity of Venus and to perform radiometric temperature measurements of the planet. A second objective was to measure the interplanetary magnetic field and charged particle environment. En route to Venus, Mariner 2 measured the solar wind, a constant stream of charged particles flowing outwards from the Sun, confirming the measurements of Luna 1 in 1959. It also measured interplanetary dust, which turned out to be scarcer than predicted. In addition, Mariner 2 detected high-energy charged particles coming from the Sun, including several brief solar flares, as well as cosmic rays from outside the Solar System. As it flew by Venus on December 14, 1962, Mariner 2 scanned the planet with its pair of radiometers, revealing that Venus has cool clouds and an extremely hot surface. Background With the advent of the Cold War, the two then-superpowers, the United States and the Soviet Union, both initiated ambitious space programs with the intent of demonstrating military, technological, and political dominance. The Soviets launched Sputnik 1, the first Earth-orbiting satellite, on October 4, 1957. The Americans followed suit with Explorer 1 on February 1, 1958, by which point the Soviets had already launched the first orbiting animal, Laika, in Sputnik 2. Earth orbit having been reached, focus turned to being the first to the Moon. The Pioneer program of satellites consisted of three unsuccessful lunar attempts in 1958. In early 1959, the Soviet Luna 1 was the first probe to fly by the Moon, followed by Luna 2, the first artificial object to impact the Moon. With the Moon achieved, the superpowers turned their eyes to the planets. As the closest planet to Earth, Venus presented an appealing interplanetary spaceflight target.
Every 19 months, Venus and the Earth reach relative positions in their orbits around the Sun such that a minimum of fuel is required to travel from one planet to the other via a Hohmann transfer orbit. These opportunities mark the best time to launch exploratory spacecraft, requiring the least fuel to make the trip. The first such opportunity of the Space Race occurred in late 1957, before either superpower had the technology to take advantage of it. The second opportunity, around June 1959, lay just within the edge of technological feasibility, and U.S. Air Force contractor Space Technology Laboratories (STL) intended to take advantage of it. A plan drafted in January 1959 involved two spacecraft evolved from the first Pioneer probes, one to be launched via a Thor-Able rocket, the other via the yet-untested Atlas-Able. STL was unable to complete the probes before June, and the launch window was missed. The Thor-Able probe was repurposed as the deep space explorer Pioneer 5, which was launched on March 11, 1960, and designed to maintain communications with Earth at great distances as it traveled toward the orbit of Venus. (The Atlas-Able probe concept was repurposed as the unsuccessful Pioneer Atlas Moon probes.) No American missions were sent during the early 1961 opportunity. The Soviet Union launched Venera 1 on February 12, 1961; on May 19–20 it became the first probe to fly by Venus, but it had stopped transmitting on February 26. For the summer 1962 launch opportunity, NASA contracted the Jet Propulsion Laboratory (JPL) in July 1960 to develop "Mariner A", a spacecraft to be launched on the yet undeveloped Atlas-Centaur. By August 1961, it had become clear that the Centaur would not be ready in time. JPL proposed to NASA that the mission might be accomplished with a lighter spacecraft using the less powerful but operational Atlas-Agena. A hybrid of Mariner A and JPL's Block I Ranger lunar explorer, already under development, was suggested. NASA accepted the proposal, and JPL began an 11-month crash program to develop "Mariner R" (so named because it was a Ranger derivative). Mariner 1 would be the first Mariner R to be launched, followed by Mariner 2. Spacecraft Three Mariner R spacecraft were built: two for launching and one to run tests, the last also serving as a spare. Aside from its scientific capabilities, Mariner also had to transmit data back to Earth across interplanetary distances, and to survive solar radiation twice as intense as that encountered in Earth orbit. Structure All three of the Mariner R spacecraft, including Mariner 2, weighed within a small margin of the design weight, a substantial share of which was devoted to non-experimental systems: maneuvering systems, fuel, and communications equipment for receiving commands and transmitting data. Once fully deployed in space, Mariner R extended two solar panel "wings" from its main body. The main body of the craft was hexagonal, with six separate cases of electronic and electromechanical equipment: Two of the cases comprised the power system: switchgear that regulated and transmitted power from the 9800 solar cells to the rechargeable 1000 watt-hour silver-zinc storage battery. Two more included the radio receiver, the three-watt transmitter, and control systems for Mariner's experiments. The fifth case held electronics for digitizing the analog data received by the experiments for transmission. The sixth case carried the three gyroscopes that determined Mariner's orientation in space.
It also held the central computer and sequencer, the "brain" of the spacecraft, which coordinated all of its activities according to code in its memory banks and on a schedule maintained by an electronic clock tuned to equipment on Earth. At the rear of the spacecraft, a monopropellant (anhydrous hydrazine) 225 N rocket motor was mounted for course corrections. A nitrogen-gas-fueled stabilizing system of ten jet nozzles, controlled by the onboard gyroscopes, Sun sensors, and Earth sensors, kept Mariner properly oriented to receive and transmit data to Earth. The primary high-gain parabolic antenna was also mounted on the underside of Mariner and kept pointed toward the Earth. An omnidirectional antenna atop the spacecraft would broadcast at times when the spacecraft was rolling or tumbling out of its proper orientation, to maintain contact with Earth; as an unfocused antenna, its signal was much weaker than the primary's. Mariner also mounted small antennas on each of the wings to receive commands from ground stations. Temperature control was both passive, using insulating and highly reflective components, and active, incorporating louvers to protect the case carrying the onboard computer. At the time the first Mariners were built, no test chamber existed that could simulate the near-Venus solar environment, so the efficacy of these cooling techniques could not be tested until the live mission. Scientific instruments Background At the time of the Mariner project's inception, few of Venus' characteristics were definitely known. Its opaque atmosphere precluded telescopic study of the ground. It was unknown whether there was water beneath the clouds, though a small amount of water vapor above them had been detected. The planet's rotation rate was uncertain, though JPL scientists had concluded through radar observation that Venus rotated very slowly compared to the Earth, supporting the long-standing (but later disproven) hypothesis that the planet was tidally locked with respect to the Sun (as the Moon is with respect to the Earth). No oxygen had been detected in Venus' atmosphere, suggesting that life as it existed on Earth was not present. It had been determined that Venus' atmosphere contained at least 500 times as much carbon dioxide as the Earth's. These comparatively high levels suggested that the planet might be subject to a runaway greenhouse effect producing extremely high surface temperatures, but this had not yet been conclusively determined. The Mariner spacecraft would be able to verify this hypothesis by measuring the temperature of Venus close-up; at the same time, the spacecraft could determine whether there was a significant disparity between night and daytime temperatures. An on-board magnetometer and a suite of charged particle detectors could determine whether Venus possessed an appreciable magnetic field and an analog to Earth's Van Allen belts. As the Mariner spacecraft would spend most of its journey to Venus in interplanetary space, the mission also offered an opportunity for long-term measurement of the solar wind of charged particles and for mapping variations in the Sun's magnetosphere. The concentration of cosmic dust beyond the vicinity of Earth could be explored as well. Due to the limited capacity of the Atlas-Agena, only a small fraction of the spacecraft's weight could be allocated to scientific experiments.
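The runaway-greenhouse suspicion can be made concrete with a back-of-the-envelope calculation: sunlight alone cannot account for a very hot Venus. The sketch below is a modern illustration of that reasoning, not a computation from the Mariner documentation; the solar-flux and Bond-albedo values are assumed approximations.

```python
# Radiative-equilibrium estimate for Venus: absorbed sunlight S(1-A)/4
# balances emitted blackbody flux sigma*T^4. Illustrative values only.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S_VENUS = 2601.0      # approximate solar flux at Venus, W m^-2 (assumed)
ALBEDO = 0.75         # approximate Bond albedo of Venus' clouds (assumed)

def equilibrium_temp(flux: float, albedo: float) -> float:
    """Equilibrium temperature of a rapidly rotating airless planet."""
    return (flux * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

print(f"Equilibrium temperature: {equilibrium_temp(S_VENUS, ALBEDO):.0f} K")
# ~231 K -- far below the ~500-600 K brightness temperatures Mariner 2's
# microwave radiometer later reported, consistent with a strong greenhouse.
```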
Instruments A two-channel microwave radiometer of the crystal video type, operating in the standard Dicke mode of chopping between the main antenna, pointed at the target, and a reference horn pointed at cold space. It was used to determine the absolute temperature of Venus' surface and details concerning its atmosphere through its microwave-radiation characteristics, on both the daylight and dark hemispheres and in the region of the terminator. Measurements were performed simultaneously in two wavelength bands, at 13.5 mm and 19 mm. The radiometer's average power consumption was 4 watts and its peak power consumption 9 watts. A two-channel infrared radiometer to measure the effective temperatures of small areas of Venus. The radiation received could originate from the planetary surface, from clouds in the atmosphere, from the atmosphere itself, or from a combination of these. The radiation was received in two spectral ranges: 8 to 9 μm (centred on 8.4 μm) and 10 to 10.8 μm (centred on 10.4 μm), the latter corresponding to the carbon dioxide band. The infrared radiometer was housed in a magnesium casting and required 2.4 watts of power. A three-axis fluxgate magnetometer to measure planetary and interplanetary magnetic fields. Three probes were incorporated in its sensors, so it could obtain three mutually orthogonal components of the field vector. Readings of these components were separated by 1.9 seconds. It had three analog outputs, each with two sensitivity scales: ±64 γ and ±320 γ (1 γ = 1 nanotesla), switched automatically by the instrument. The field that the magnetometer observed was the superposition of a nearly constant spacecraft field and the interplanetary field; thus, it effectively measured only the changes in the interplanetary field. An ionization chamber with matched Geiger-Müller tubes (also known as a cosmic ray detector) to measure high-energy cosmic radiation. A particle detector (implemented with an Anton type 213 Geiger-Müller tube) to measure lower-energy radiation (especially near Venus), also known as the Iowa detector, as it was provided by the University of Iowa. It was a miniature tube with a 1.2 mg/cm² mica window. It detected soft X-rays efficiently and ultraviolet light inefficiently, and had previously been used on Injun 1, Explorer 12, and Explorer 14. It was able to detect protons above 500 keV and electrons above 35 keV in energy. The length of the basic telemetry frame was 887.04 seconds. During each frame, the counting rate of the detector was sampled twice, at intervals separated by 37 seconds. The first sample was the number of counts during an interval of 9.60 seconds (the "long gate"); the second was the number of counts during an interval of 0.827 seconds (the "short gate"). The long-gate accumulator overflowed on the 256th count and the short-gate accumulator on the 65,536th count. The maximum counting rate of the tube was 50,000 per second. A cosmic dust detector to measure the flux of cosmic dust particles in space. A solar plasma spectrometer to measure the spectrum of low-energy positively charged particles from the Sun, i.e. the solar wind. The magnetometer was attached to the top of the mast, below the omnidirectional antenna. The particle detectors were mounted halfway up the mast, along with the cosmic ray detector.
The cosmic dust detector and solar plasma spectrometer were attached to the top edges of the spacecraft base. The microwave radiometer, the infrared radiometer, and the radiometer reference horns were rigidly mounted to a parabolic radiometer antenna near the bottom of the mast. All instruments were operated throughout the cruise and encounter modes except the radiometers, which were used only in the immediate vicinity of Venus. In addition to these scientific instruments, Mariner 2 had a data conditioning system (DCS) and a scientific power switching (SPS) unit. The DCS was a solid-state electronic system designed to gather information from the scientific instruments on board the spacecraft. It had four basic functions: analog-to-digital conversion, digital-to-digital conversion, sampling and instrument-calibration timing, and planetary acquisition. The SPS unit was designed to perform three functions: controlling the application of AC power to appropriate portions of the science subsystem; applying power to the radiometers and removing power from the cruise experiments during radiometer calibration periods; and controlling the speed and direction of the radiometer scans. The DCS sent signals to the SPS unit to perform the latter two functions. Not included on any of the Mariner R spacecraft was a camera for visual photographs. With payload space at a premium, project scientists considered a camera an unneeded luxury, unable to return useful scientific results. Carl Sagan, one of the Mariner R scientists, unsuccessfully fought for their inclusion, noting that not only might there be breaks in Venus' cloud layer, but "that cameras could also answer questions that we were way too dumb to even pose". Mission profile Prelude to Mariner 2 The launch window for Mariner, constrained both by the orbital relationship of Earth and Venus and by the limitations of the Atlas-Agena, was determined to fall in the 51-day period from July 22 through September 10. The Mariner flight plan called for the two operational spacecraft to be launched toward Venus in a 30-day period within this window, taking slightly differing paths such that they both arrived at the target planet within nine days of each other, between December 8 and 16. Only Cape Canaveral Launch Complex 12 was available for launching Atlas-Agena rockets, and it took 24 days to ready an Atlas-Agena for launch. This meant that there was only a 27-day margin for error in a two-launch schedule. Each Mariner would be launched into a parking orbit, whereupon the restartable Agena would fire a second time, sending Mariner on its way to Venus (errors in trajectory would be corrected by a mid-course burn of Mariner's onboard engine). While the spacecraft was in parking orbit and upon its departure, the Atlantic Missile Range would provide real-time radar tracking through stations at Ascension and Pretoria, while Palomar Observatory provided optical tracking. Deep space support was provided by three tracking and communications stations at Goldstone, California; Woomera, Australia; and Johannesburg, South Africa, each separated on the globe by around 120° for continuous coverage. On July 22, 1962, the two-stage Atlas-Agena rocket carrying Mariner 1 veered off course during its launch, due to a defective signal from the Atlas and a bug in the program equations of the ground-based guidance computer, and the spacecraft was destroyed by the Range Safety Officer.
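The timing of such launch windows, including the roughly 19-month spacing noted in the Background section, follows directly from the Earth–Venus synodic period. A minimal sketch of that arithmetic, using standard orbital-period values rather than figures from this article:

```python
# Minimum-energy launch opportunities to Venus recur once per Earth-Venus
# synodic period: 1/S = |1/T_inner - 1/T_outer|. Standard period values.
T_VENUS = 224.70   # orbital period of Venus, days
T_EARTH = 365.25   # orbital period of Earth, days

synodic_days = 1.0 / abs(1.0 / T_VENUS - 1.0 / T_EARTH)
print(f"Synodic period: {synodic_days:.1f} days "
      f"(about {synodic_days / 30.44:.1f} months)")
# ~583.9 days, i.e. roughly 19.2 months between launch windows.
```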
Two days after that launch, Mariner 2 and its booster (Atlas vehicle 179D) were rolled out to LC-12. The Atlas proved troublesome to prepare for launch, and multiple serious problems with the autopilot occurred, including a complete replacement of the servoamplifier after it suffered component damage due to shorted transistors. Launch Mariner 2 was launched from Cape Canaveral Air Force Station Launch Complex 12 at 06:53:14 UTC (1:53 AM EST) on August 27. The bug in the rocket's software that had caused the loss of Mariner 1 had not been identified at the time of the launch. In the event, the bug caused no issues with the launch, since it was in a section of code used only when the data feed from the ground was interrupted, and there were no such interruptions during the launch of Mariner 2. The flight proceeded normally up to the point of Atlas booster engine cutoff, at which point the V-2 vernier engine lost pitch and yaw control. The vernier started oscillating and banging against its stops, resulting in a rapid roll of the launch vehicle that came close to threatening the integrity of the stack. At T+189 seconds, the rolling stopped and the launch continued without incident. The rolling motion of the Atlas caused ground guidance to lose its lock on the booster, preventing any backup commands from being sent to counteract the roll. The incident was traced to a loose electrical connection in the vernier feedback transducer, which was pushed back into place by the centrifugal force of the roll; by fortunate coincidence, the roll also left the Atlas only a few degrees off from where it started, within the range of the Agena's horizon sensor. As a consequence of this episode, GD/A implemented improved fabrication of wiring harnesses and checkout procedures. Five minutes after liftoff, the Atlas and the Agena-Mariner separated, followed by the first and second Agena burns. Agena-Mariner separation then injected the Mariner 2 spacecraft into a geocentric escape hyperbola at 26 minutes 3 seconds after liftoff. The NASA DSIF tracking station at Johannesburg, South Africa, acquired the spacecraft about 31 minutes after launch. Solar panel extension was completed approximately 44 minutes after launch, and Sun lock was acquired about 18 minutes later. The high-gain antenna was extended to its acquisition angle of 72°. The output of the solar panels was slightly above the predicted value. As all subsystems were performing normally, with the battery fully charged and the solar panels providing adequate power, the decision was made on August 29 to turn on the cruise science experiments. On September 3, the Earth acquisition sequence was initiated, and Earth lock was established 29 minutes later. Mid-course maneuver Because the Atlas-Agena had put Mariner slightly off course, the spacecraft required a mid-course correction, consisting of a roll-turn sequence, followed by a pitch-turn sequence, and finally a motor-burn sequence. Preparation commands were sent to the spacecraft at 21:30 UTC on September 4. The command initiating the mid-course maneuver sequence was sent at 22:49:42 UTC, and the roll-turn sequence started one hour later. The entire maneuver took approximately 34 minutes. As a result of the maneuver, the sensors lost their locks on the Sun and Earth. Sun re-acquisition began at 00:27:00 UTC, and the Sun was reacquired at 00:34 UTC. Earth re-acquisition started at 02:07:29 UTC, and Earth was reacquired at 02:34 UTC.
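The roll-then-pitch sequence is the standard way a spacecraft with a body-fixed motor points its thrust axis along a commanded burn direction: the roll turn reorients the pitch axis, and the subsequent pitch turn tilts the thrust axis onto the target vector. The sketch below is a geometric toy model of that idea, not Mariner's actual guidance logic; the axis conventions and the sample burn direction are assumptions.

```python
import numpy as np

# Toy model of a roll-turn / pitch-turn maneuver. The motor thrusts along
# the body z-axis; we solve for the two turn angles that carry it onto a
# commanded burn direction. Geometry only -- not the flight guidance logic.

def rot_z(a):  # roll about the long (thrust) axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(a):  # pitch about the lateral axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def turn_angles(d):
    """Roll and pitch angles that carry the body z-axis onto unit vector d."""
    d = np.asarray(d, float) / np.linalg.norm(d)
    return np.arctan2(d[1], d[0]), np.arccos(d[2])   # (roll, pitch)

burn_dir = np.array([0.3, -0.5, 0.81])               # hypothetical direction
roll, pitch = turn_angles(burn_dir)
thrust_axis = rot_z(roll) @ rot_y(pitch) @ np.array([0.0, 0.0, 1.0])
print(np.allclose(thrust_axis, burn_dir / np.linalg.norm(burn_dir)))  # True
```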
Loss of attitude control On September 8 at 12:50 UTC, the spacecraft experienced a problem with attitude control. It automatically turned on the gyros, and the cruise science experiments were automatically turned off. The exact cause is unknown, as the attitude sensors returned to normal before telemetry measurements could be sampled, but it may have been an Earth-sensor malfunction or a collision with a small unidentified object that temporarily caused the spacecraft to lose its Sun lock. A similar event happened on September 29 at 14:34 UTC. Again, all sensors returned to normal before it could be determined which axis had lost lock. By this date, the Earth sensor's brightness indication had essentially dropped to zero; after this event, however, telemetry data indicated that the Earth-brightness measurement had returned to the nominal value for that point in the trajectory. Solar panel output On October 31, the output from one solar panel (the one with the solar sail attached) deteriorated abruptly. The fault was diagnosed as a partial short circuit in the panel. As a precaution, the cruise science instruments were turned off. A week later, the panel resumed normal function, and the cruise science instruments were turned back on. The panel failed permanently on November 15, but by then Mariner 2 was close enough to the Sun that one panel could supply adequate power; thus, the cruise science experiments were left active. Encounter with Venus Mariner 2 was the first spacecraft to successfully encounter another planet, making its closest approach to Venus after 110 days of flight on December 14, 1962. Post encounter After the encounter, cruise mode resumed. Spacecraft perihelion occurred on December 27. The last transmission from Mariner 2 was received on January 3, 1963, at 07:00 UTC, making the total time from launch to the termination of the Mariner 2 mission 129 days. After passing Venus, Mariner 2 remained in heliocentric orbit. Results The data produced during the flight fell into two categories, viz. tracking data and telemetry data. One particularly noteworthy piece of data gathered during the pioneering fly-by was the very high temperature measured for the planet. Various properties of the solar wind were also measured for the first time. Scientific observations The microwave radiometer made three scans of Venus in 35 minutes on December 14, 1962, starting at 18:59 UTC. The first scan was made on the dark side, the second near the terminator, and the third on the light side. The scans in the 19 mm band revealed peak temperatures of 595 ± 12 K near the terminator and 511 ± 14 K on the light side, with the dark side similarly hot. It was concluded that there is no significant difference in temperature across Venus. However, the results did suggest limb darkening, an effect in which temperatures appear cooler near the edge of the planetary disk and higher near the center. This was evidence for the theory that the Venusian surface was extremely hot and the atmosphere optically thick. The infrared radiometer showed that the 8.4 μm and 10.4 μm radiation temperatures were in agreement with radiation temperatures obtained from Earth-based measurements. There was no systematic difference between the temperatures measured on the light side and the dark side of the planet, which was also in agreement with Earth-based measurements. The limb darkening effect that the microwave radiometer detected was also present in the measurements by both channels of the infrared radiometer.
The effect was only slightly present in the 10.4 μm channel but more pronounced in the 8.4 μm channel. The 8.4 μm channel also showed a slight phase effect, which indicated that, if a greenhouse effect existed, heat was being transported efficiently from the light side to the dark side of the planet. The two channels showed equal radiation temperatures, indicating that the limb darkening effect appeared to arise from a cloud structure rather than from the atmosphere; thus, if the measured temperatures were actually cloud temperatures rather than surface temperatures, those clouds would have to be quite thick. The magnetometer detected a persistent interplanetary magnetic field varying between 2 γ and 10 γ (nanotesla), which agrees with prior Pioneer 5 observations from 1960; it also means that interplanetary space is rarely empty or field-free. The magnetometer could detect changes of about 4 γ on any of the axes, but no trends above 10 γ were detected near Venus, nor were fluctuations seen like those that appear at Earth's magnetospheric boundary. Mariner 2 thus found no detectable magnetic field near Venus, although that did not necessarily mean that Venus had none; any magnetic field of Venus would have to be less than about 1/10 the strength of the Earth's. In 1980, Pioneer 12 indeed showed that Venus has only a very weak magnetic field. The Anton type 213 Geiger-Müller tube performed as expected, with an average rate of 0.6 counts per second. Increases in its counting rate were larger and more frequent than for the two larger tubes, since it was more sensitive to particles of lower energy. It detected seven small solar bursts of radiation during September and October and two during November and December. The absence of a detectable magnetosphere was also confirmed by the tube, which detected no radiation belt at Venus similar to that of Earth: such a belt would have increased the count rate by a factor of 10⁴, but no change was measured. It was also shown that in interplanetary space the solar wind streams continuously, confirming a prediction by Eugene Parker, and that the cosmic dust density is much lower than in the near-Earth region. Improved estimates of Venus' mass and of the value of the astronomical unit were made. In addition, research that was later confirmed by Earth-based radar and other observations suggested that Venus rotates very slowly and in a direction opposite that of the Earth.
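The two-gate sampling scheme described earlier for the Anton 213 tube is what let a single detector span both the quiet cruise rate of ~0.6 counts per second and burst rates near the tube's 50,000 counts-per-second ceiling. A hedged sketch of the count-to-rate conversion, using the gate lengths and accumulator sizes quoted in the instrument description (a simplification, not the actual flight telemetry decoder):

```python
# Rates from the Anton 213 tube's two telemetry gates, per the figures
# quoted above. Simplified illustration; not the flight decoding logic.
LONG_GATE_S, LONG_MOD = 9.60, 256          # long gate, 8-bit accumulator
SHORT_GATE_S, SHORT_MOD = 0.827, 65536     # short gate, 16-bit accumulator

def gate_rate(counts, gate_seconds):
    """Mean counting rate over one gate, in counts per second."""
    return counts / gate_seconds

# Quiet cruise: ~0.6 counts/s is well resolved by the long gate.
print(f"{gate_rate(6, LONG_GATE_S):.2f} counts/s")                     # 0.62
# The long gate is unambiguous only below its accumulator overflow rate...
print(f"long-gate ceiling:  {LONG_MOD / LONG_GATE_S:.1f} counts/s")    # ~26.7
# ...so solar-flare bursts rely on the short gate, whose ceiling exceeds
# the tube's own 50,000 counts/s maximum:
print(f"short-gate ceiling: {SHORT_MOD / SHORT_GATE_S:.0f} counts/s")  # ~79000
```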
Technology
Unmanned spacecraft
null
38799
https://en.wikipedia.org/wiki/Galanthus
Galanthus
Galanthus (from Ancient Greek γάλα (gála, "milk") + ἄνθος (ánthos, "flower")), or snowdrop, is a small genus of approximately 20 species of bulbous perennial herbaceous plants in the family Amaryllidaceae. The plants have two linear leaves and a single small white drooping bell-shaped flower with six petal-like (petaloid) tepals in two circles (whorls). The smaller inner petals have green markings. Snowdrops have been known since the earliest times under various names, but were named Galanthus in 1753. As the number of recognised species increased, various attempts were made to divide the species into subgroups, usually on the basis of the pattern of the emerging leaves (vernation). In the era of molecular phylogenetics this characteristic has been shown to be unreliable, and seven molecularly defined clades are now recognised, corresponding to the biogeographical distribution of species. New species continue to be discovered. Most species flower in winter, before the vernal equinox (20 or 21 March in the Northern Hemisphere), but some flower in early spring and late autumn. Sometimes snowdrops are confused with the two related genera within the tribe Galantheae, the snowflakes Leucojum and Acis. Description General All species of Galanthus are perennial petaloid herbaceous bulbous (growing from bulbs) monocot plants. The genus is characterised by the presence of two leaves and pendulous white flowers with six free perianth segments in two whorls. The inner whorl is smaller than the outer whorl and has green markings. Vegetative Leaves These are basal, emerging from the bulb initially enclosed in a tubular membranous sheath of cataphylls. Generally, they are two (sometimes three) in number, and linear, strap-shaped, or oblanceolate. Vernation, the arrangement of the emerging leaves relative to each other, varies among species: leaves may be applanate (flat), supervolute (convolute), or explicative (pleated). In applanate vernation, the two leaf blades are pressed flat to each other within the bud and as they emerge; explicative leaves are also pressed flat against each other, but the edges of the leaves are folded back (externally recurved) or sometimes rolled; in supervolute plants, one leaf is tightly clasped around the other within the bud and generally remains so at the point where the leaves emerge from the soil (for illustration, see Stearn and Davis). In the past, this feature was used to distinguish between species and to determine the parentage of hybrids, but it has now been shown to be homoplasious and not useful in this regard. The scape (flowering stalk) is erect, leafless, and terete or compressed. Reproductive Inflorescence At the top of the scape is a pair of bract-like spathes (valves), usually fused down one side and joined by a papery membrane, appearing monophyllous (single). From between the spathes emerges a solitary (rarely two), pendulous, nodding, bell-shaped white flower, held on a slender pedicel. The flower bears six free perianth segments (tepals) rather than true petals, arranged in two whorls of three, the outer whorl being larger and more convex than the inner whorl. The outer tepals are acute to more or less obtuse, spathulate or oblanceolate to narrowly obovate or linear, shortly clawed, and erect-spreading. The inner tepals are much shorter (half to two thirds as long), oblong, spathulate or oblanceolate, somewhat unguiculate (claw-like), tapering to the base and erect.
These tepals also bear green markings at the base, the apex, or both; when at the apex, the markings are bridge-shaped over the small sinus (notch) at the tip of each emarginate tepal. Occasionally the markings are green-yellow or yellow, or absent, and their shape and size vary by species. Androecium The six stamens are inserted at the base of the perianth and are very short (shorter than the inner perianth segments), the anthers basifixed (attached at their bases) with filaments much shorter than the anthers; they dehisce (open) by terminal pores or short slits. Gynoecium, fruit and seeds The inferior ovary is three-celled. The style is slender and longer than the anthers; the stigma is minutely capitate. The ovary ripens into a three-celled capsule fruit. This fruit is fleshy, ellipsoid or almost spherical, opening by three flaps, with seeds that are light brown to white and oblong, with a small appendage or tail (elaiosome) containing substances attractive to ants, which distribute the seeds. The chromosome number is 2n = 24. Distribution and habitat The genus Galanthus is native to Europe and the Middle East, from the Spanish and French Pyrenees in the west through to the Caucasus and Iran in the east, and south to Sicily, the Peloponnese, the Aegean, Turkey, Lebanon, and Syria. The northern limit is uncertain because G. nivalis has been widely introduced and cultivated throughout Europe. G. nivalis and some other species valued as ornamentals have become widely naturalised in Europe, North America, and other regions. In the Udmurt Republic of Russia, Galanthus are found even above the 56th parallel. Galanthus nivalis is the best-known and most widespread representative of the genus. It is native to a large area of Europe, stretching from the Pyrenees in the west, through France and Germany to Poland in the north, and to Italy, northern Greece, Bulgaria, Romania, Ukraine, and European Turkey. It has been introduced and is widely naturalised elsewhere. Although it is often thought of as a British native wild flower, or to have been brought to the British Isles by the Romans, it was most likely introduced around the early sixteenth century, and it is currently not a protected species in the UK. It was first recorded as naturalised in the UK in Worcestershire and Gloucestershire in 1770. Most other Galanthus species are from the eastern Mediterranean, while several are found in the Caucasus, in southern Russia, Georgia, Armenia, and Azerbaijan. Galanthus fosteri is found in Jordan, Lebanon, Syria, Turkey, and, perhaps, Palestine. Most Galanthus species grow best in woodland, in acid or alkaline soil, although some are grassland or mountain species. Taxonomy History Early Snowdrops have been known since early times, being described by the classical Greek author Theophrastus, in the fourth century BCE, in his Περὶ φυτῶν ἱστορία (Latin: Historia plantarum, Enquiry into Plants). He gave it, and similar plants, the name λευκόἲον (from λευκος, leukos, "white", and ἰόν, ion, "violet"), from which the later name Leucojum was derived. He described the plant as "ἑπεἰ τοῖς γε χρώμασι λευκἂ καἱ οὐ λεπυριώδη" (in colour white, and the bulbs without scales) and of its habits "Ἰῶν δ' ἁνθῶν τὀ μἑν πρῶτον ἑκφαἱνεται τὁ λευκόἲον, ὅπου μἑν ό ἀἠρ μαλακώτερος εὐθὑς τοῦ χειμῶνος, ὅπου δἐ σκληρότερος ὕστερον, ἑνιαχοῡ τοῡ ἣρος" (Of the flowers, the first to appear is the white violet.
Where the climate is mild, it appears with the first sign of winter, but in more severe climates, later in spring). Rembert Dodoens, a Flemish botanist, described and illustrated this plant in 1583, as did Gerard in England in 1597 (probably using much of Dodoens' material), calling it Leucojum bulbosum praecox (early bulbous violet). Gerard refers to Theophrastus's description as Viola alba or Viola bulbosa, using Pliny's translation, and comments that the plant had originated in Italy and had "taken possession" in England "many years past". The genus was formally named Galanthus and described by Carl Linnaeus in 1753, with the single species Galanthus nivalis, which is the type species. Consequently, Linnaeus is granted the botanical authority. In doing so, he distinguished this genus and species from Leucojum (Leucojum bulbosum trifolium minus), a name by which it had previously been known. Modern In 1763 Michel Adanson began a system of arranging genera in families. Using the synonym Acrocorion (also spelt Akrokorion), he placed Galanthus in the family Liliaceae, section Narcissi. Lamarck provided a description of the genus in his encyclopedia (1786) and, later, in Illustrations des genres (1793). In 1789 de Jussieu, who is credited with the modern concept of genera organised in families, placed Galanthus and related genera within a division of monocotyledons, using a modified form of Linnaeus' sexual classification, but with the relative topography of stamens to carpels rather than just their numbers. In doing so, he restored the name Galanthus and retained its placement under Narcissi, this time as a family (known as an Ordo at that time), and referred to the French vernacular name, Perce-neige (snow-pierce), based on the plant's tendency to push through early spring snow (see Ecology for illustration). The modern family Amaryllidaceae, in which Galanthus is placed, dates to Jaume Saint-Hilaire (1805), who replaced Jussieu's Narcissi with Amaryllidées. In 1810, Brown proposed that a subgroup of Liliaceae be distinguished on the basis of the position of the ovaries and be referred to as Amaryllideae, and in 1813 de Candolle separated them by describing Liliacées Juss. and Amaryllidées Brown as two quite separate families. However, in his comprehensive survey of the flora of France (Flore française, 1805–1815), he divided Liliaceae into a series of Ordres and placed Galanthus into the Narcissi Ordre. This association of Galanthus with either liliaceous or amaryllidaceous taxa (see Taxonomy of Liliaceae) was to last for another two centuries, until the two families were formally separated at the end of the twentieth century. Lindley (1830) followed this general pattern, placing Galanthus and related genera such as Amaryllis and Narcissus in his Amaryllideae (which he called the Narcissus Tribe in English). By 1853, the number of known plants had increased considerably, and he revised his schema in his last work, placing Galanthus together with the other two genera of the modern Galantheae in tribe Amarylleae, order Amaryllidaceae, alliance Narcissales. These three genera have been treated together taxonomically by most authors, on the basis of an inferior ovary. As the number of plant species increased, so did the taxonomic complexity. By the time Bentham and Hooker published their Genera plantarum (1862–1883), ordo Amaryllideae contained five tribes, and tribe Amarylleae three subtribes (see Bentham & Hooker system). They placed Galanthus in subtribe Genuinae and included three species.
Phylogeny Galanthus is one of three closely related genera making up the tribe Galantheae within subfamily Amaryllidoideae (family Amaryllidaceae). Sometimes snowdrops are confused with the other two genera, Leucojum and Acis (both called snowflakes). Leucojum species are much larger and flower in spring (or early summer, depending on the species), with all six tepals in the flower being the same size, although some "poculiform" (goblet- or cup-shaped) Galanthus species may have inner segments similar in shape and length to the outer ones. Galantheae are likely to have arisen in the Caucasus. Subdivision Galanthus has approximately 20 species, but new species continue to be described. G. trojanus was identified in Turkey in 2001. G. panjutinii (Panjutin's snowdrop) was discovered in 2012 at five locations within a small area of the northern Colchis region (western Transcaucasus) of Georgia and Russia. G. samothracicus was identified in Greece in 2014; since it has not been subjected to genetic sequencing, it remains unplaced. It resembles G. nivalis, but lies outside the distribution of that species. Many species are difficult to identify, however, and traditional infrageneric classifications based on plant morphology alone, such as those of Stern (1956), Traub (1963) and Davis (1999, 2001), have not reflected what is known about the genus's evolutionary history, owing to the morphological similarities among the species and the relative lack of easily discernible distinguishing characteristics. Stern divided the genus into three series according to leaf vernation (the way the leaves are folded in the bud, when viewed in transverse section; see Description): section Nivales Beck (flat leaves), section Plicati Beck (plicate leaves), and section Latifolii Stern (convolute leaves). Stern further utilised characteristics such as the markings of the inner segments, the length of the pedicels in relation to the spathe, and the colour and shape of the leaves in identifying and classifying species. Traub considered these groups subgenera: subgenus Galanthus, subgenus Plicatanthus Traub & Moldk., and subgenus Platyphyllanthe Traub. By contrast Davis, with much more information and many more specimens, included biogeography in addition to vernation, forming two series. He used somewhat different terminology for vernation, namely applanate (flat), explicative (plicate), and supervolute (convolute). He merged Nivales and Plicati into series Galanthus, and divided Latifolii into two subseries, Glaucaefolii (Kem.-Nath.) A.P.Davis and Viridifolii (Kem.-Nath.) A.P.Davis. Early molecular phylogenetic studies confirmed that the genus was monophyletic and suggested four clades, which were labelled as series, and showed that Davis' subseries were not monophyletic. An expanded study in 2013 demonstrated seven major clades, corresponding to biogeographical distribution. This study used the nuclear-encoded nrITS (nuclear ribosomal internal transcribed spacer) and the plastid-encoded genes matK (maturase K), trnL-F, ndhF, and psbK–psbI, and examined all species recognised at the time, as well as two naturally occurring putative hybrids. The morphological characteristic of vernation, on which earlier authors had mainly relied, was shown to be highly homoplasious. A number of species, such as G. nivalis and G. elwesii, demonstrated intraspecific biogeographical clades, indicating problems with speciation and a possible need for recircumscription. These clades were assigned names, partly according to Davis' previous groupings. In this model, the clade containing G.
platyphyllus is sister to the rest of the genus. By contrast, another study performed at the same time, using both nuclear and chloroplast DNA but limited to the 14 species found in Turkey, largely confirmed Davis' series and subseries, with biogeographical correlation. Series Galanthus in this study corresponded to clade Nivalis, subseries Glaucaefolii to clade Elwesii, and subseries Viridifolii to clades Woronowii and Alpinus. However, the model did not provide complete resolution. Clades sensu Ronsted et al. 2013 Platyphyllus clade (Caucasus, W. Transcaucasus, NE Turkey) Galanthus krasnovii Khokhr. 1963 Galanthus platyphyllus Traub & Moldenke 1948 Galanthus panjutinii Zubov & A.P.Davis 2012 Trojanus clade (NW Turkey) Galanthus trojanus A.P.Davis & Özhatay 2001 Ikariae clade (Aegean Islands) Galanthus ikariae Baker 1893 Elwesii clade (Turkey, Aegean Islands, SE Europe) Galanthus cilicicus Baker 1897 Galanthus elwesii Hook.f. 1875 (2 variants) Galanthus gracilis Celak. 1891 Galanthus peshmenii A.P.Davis & C.D.Brickell 1994 Nivalis clade (Europe, NW Turkey) Galanthus nivalis L. 1753 Galanthus plicatus M.Bieb. 1819 (2 subspecies) Galanthus reginae-olgae Orph. 1874 (2 subspecies) Woronowii clade (Caucasus, E. and NE Turkey, N. Iran) Galanthus fosteri Baker 1889 Galanthus lagodechianus Kem.-Nath. 1947 Galanthus rizehensis Stern 1956 Galanthus woronowii Losinsk. 1935 Alpinus clade (Caucasus, NE Turkey, N. Iran) Galanthus × allenii Baker 1891 Galanthus angustifolius Koss 1951 Galanthus alpinus Sosn. 1911 (2 variants) Galanthus koenenianus Lobin 1993 Galanthus transcaucasicus Fomin 1909 Unplaced Galanthus bursanus Zubov, Konca & A.P.Davis 2019 (NW Turkey) Galanthus samothracicus Kit Tan & Biel 2014 (Greece) Selected species Common snowdrop, Galanthus nivalis, grows to around 7–15 cm tall, flowering between January and April in the northern temperate zone (January–May in the wild); applanate vernation; grown as an ornamental. Crimean snowdrop, Galanthus plicatus, 30 cm tall, flowering January–March, with white flowers and broad leaves folded back at the edges (explicative vernation). Giant snowdrop, Galanthus elwesii, a native of the Levant, 23 cm tall, flowering January–February, with large flowers, the three inner segments of which often have a much larger and more conspicuous green blotch (or blotches) than the more common kinds; supervolute vernation; grown as an ornamental. Galanthus reginae-olgae, from Greece and Sicily, is quite similar in appearance to G. nivalis, but flowers in autumn before the leaves appear. The leaves, which emerge in the spring, have a characteristic white stripe on their upper side; applanate vernation. G. reginae-olgae subsp. vernalis, from Sicily, northern Greece and the southern part of former Yugoslavia, blooms at the end of the winter with developed young leaves and is thus easily confused with G. nivalis. Etymology Galanthus is derived from the Greek γάλα (gala), meaning "milk", and ἄνθος (anthos), meaning "flower", alluding to the colour of the flowers. The epithet nivalis is derived from the Latin, meaning "of the snow". The word "snowdrop" may be derived from the German Schneetropfen (snow-drop), the tear-drop-shaped pearl earrings popular in the sixteenth and seventeenth centuries. Other, earlier, common names include Candlemas bells, Fair maids of February, and White ladies (see Symbolism). Ecology Snowdrops are hardy herbaceous plants that perennate by underground bulbs. They are among the earliest spring bulbs to bloom, although a few forms of G.
nivalis are autumn flowering. In colder climates, they will emerge through snow (see illustration). They naturalise relatively easily, forming large drifts. These drifts are often sterile and found near human habitation, including former monastic sites. The leaves die back a few weeks after the flowers have faded. Galanthus plants are relatively vigorous and may spread rapidly by forming bulb offsets. They also spread by dispersal of seed, by animals disturbing bulbs, and by water where bulbs are disturbed by floods. Conservation Some snowdrop species are threatened in their wild habitats by habitat destruction, illegal collecting, and climate change. In most countries, collecting bulbs from the wild is now illegal. Under CITES regulations, international trade in any quantity of Galanthus, whether bulbs, live plants, or even dead ones, is illegal without a CITES permit. This applies to hybrids and named cultivars as well as to species. CITES lists all species, but allows a limited trade in wild-collected bulbs of just three species (G. nivalis, G. elwesii, and G. woronowii) from Turkey and Georgia. A number of species are on the IUCN Red List of threatened species: G. trojanus is listed as critically endangered, four species as vulnerable, and G. nivalis as near threatened, and several species show decreasing populations. G. panjutinii is considered endangered; one of its five known sites, at Sochi, was destroyed by preparations for the 2014 Winter Olympics. Cultivation Galanthus species and cultivars are extremely popular as symbols of spring and are traded more than any other wild-sourced ornamental bulb genus. Millions of bulbs are exported annually from Turkey and Georgia. For instance, the export quota for 2016 for G. elwesii was 7 million for Turkey, while quotas for G. woronowii were 5 million for Turkey and 15 million for Georgia. These figures include both wild-taken and artificially propagated bulbs. Snowdrop gardens Celebrated as a sign of spring, snowdrops may form impressive carpets of white in areas where they are native or have been naturalised. These displays may attract large numbers of sightseers. There are a number of snowdrop gardens in England, Wales, Scotland, and Ireland, several of which open specially in February for visitors to admire the flowers. Sixty gardens took part in Scotland's first Snowdrop Festival (1 February – 11 March 2007). Several gardens in England open during snowdrop season for the National Gardens Scheme (NGS), and several in Scotland for Scotland's Gardens. Colesbourne Park in Gloucestershire is one of the best known of the English snowdrop gardens, having been the home of Henry John Elwes, a collector of Galanthus specimens, after whom Galanthus elwesii is named. Cultivars Numerous single- and double-flowered cultivars of Galanthus nivalis are known, as well as of several other Galanthus species, particularly G. plicatus and G. elwesii. Many hybrids between these and other species also exist (more than 500 cultivars are described in Bishop, Davis, and Grimshaw's book, plus lists of many cultivars that have now been lost, and others not seen by the authors). They differ particularly in the size, shape, and markings of the flower, the period of flowering, and other characteristics, mainly of interest to the keen (even fanatical) snowdrop collectors, known as "galanthophiles", who hold meetings where the scarcer cultivars change hands. Double-flowered cultivars and forms, such as the extremely common Galanthus nivalis f.
pleniflorus 'Flore Pleno', may be less attractive to some people, but they can have greater visual impact in a garden setting. Cultivars with yellow markings and ovaries rather than the usual green are also grown, such as 'Wendy's Gold'. Many hybrids have also occurred in cultivation. Awards The following have gained the Royal Horticultural Society's Award of Garden Merit: Galanthus 'Ailwyn' Galanthus 'Atkinsii' Galanthus 'Bertram Anderson' Galanthus elwesii Galanthus elwesii 'Comet' Galanthus elwesii 'Godfrey Owen' Galanthus elwesii 'Mrs Macnamara' Galanthus elwesii var. monostictus Galanthus 'John Gray' Galanthus 'Lady Beatrix Stanley' Galanthus 'Magnet' Galanthus 'Merlin' Galanthus nivalis Galanthus nivalis f. pleniflorus 'Flore Pleno' Galanthus nivalis 'Viridapice' Galanthus plicatus Galanthus plicatus 'Augustus' Galanthus plicatus 'Diggory' Galanthus plicatus 'Three Ships' Galanthus reginae-olgae subsp. reginae-olgae Galanthus 'S. Arnott' Galanthus 'Spindlestone Surprise' Galanthus 'Straffan' Galanthus 'Trumps' Galanthus woronowii Propagation Propagation is by offset bulbs, either by careful division of clumps in full growth ("in the green"), or removed when the plants are dormant, immediately after the leaves have withered; or by seeds sown either when ripe, or in spring. Professional growers and keen amateurs also use such methods as "twin-scaling" to increase the stock of choice cultivars quickly. Toxicity Snowdrops contain an active lectin or agglutinin named GNA, for Galanthus nivalis agglutinin. Medicinal use In 1983, Andreas Plaitakis and Roger Duvoisin suggested that moly, the mysterious magical herb that appears in Homer's Odyssey, is the snowdrop. One of the active principles present in the snowdrop is the alkaloid galantamine, which, as an acetylcholinesterase inhibitor, could have acted as an antidote to Circe's poisons. Further supporting this notion are notes made during the fourth century BC by the Greek scholar Theophrastus, who wrote in Historia plantarum that moly was "used as an antidote against poisons", although which specific poisons it was effective against remains unclear. Galantamine (or galanthamine) may be helpful in the treatment of Alzheimer's disease, although it is not a cure; the substance also occurs naturally in daffodils and other narcissi. In popular culture Snowdrops figure prominently in art and literature, often as a symbol in poetry of spring, purity, and religion (see Symbols), such as Walter de la Mare's poem The Snowdrop (1929). In this poem, he likened the triple tepals in each whorl ("A triplet of green-pencilled snow") to the Holy Trinity. He used snowdrop imagery several times in his poetry, such as in Blow, Northern Wind (1950). Another instance is the poem by Letitia Elizabeth Landon in which she asks "Thou fairy gift from summer, Why art thou blooming now?" In the fairy-tale play The Twelve Months by Russian writer Samuil Marshak, a greedy queen decrees that a basket of gold coins shall be rewarded to anyone who can bring her galanthus flowers in the dead of winter. A young orphan girl is sent out during a snow storm by her cruel stepmother to find the spirits of the 12 months of the year, who take pity on her and not only save her from freezing to death, but also make it possible for her to gather the flowers even in winter.
The Soviet traditionally animated film The Twelve Months (1956), the Lenfilm film The Twelve Months (1972), and the anime film Twelve Months (1980) (Sekai meisaku dowa mori wa ikiteiru in Japan) are based on this fairy-tale play. "Snowdrops" was the nickname that the British people gave during the Second World War to the military police of the United States Army (who were stationed in the UK preparatory to the invasion of the continent), because they wore a white helmet, gloves, gaiters, and Sam Browne belt against their olive drab uniforms. In the German fairy tale Snow White and the Seven Dwarfs, "Snowdrop" is used as an alternate name for the princess Snow White. The short story The Snowdrop by Hans Christian Andersen follows the fate of a snowdrop from a bulb striving toward the light to a picked flower placed in a book of poetry. Russian composer Tchaikovsky wrote a series of 12 piano pieces, each one named after a month of the year with a second name suggesting something associated with that month. His "April" piece is subtitled "Snow Drop", reflecting the Russian climate, in which spring arrives, and winter ends, somewhat later than in other places. Johann Strauss II named his very successful waltz Schneeglöckchen (Snowdrops), op. 143, after this flower. The inspiration is especially evident in the cello introduction and in the slow unfurling of the opening waltz. Strauss composed this piece for a Russian Embassy dinner given at the Sperl ballroom in Vienna on 2 December 1853, but did not perform it publicly until 1854. The Sperl banquet was given in honour of her Excellency Frau Maria von Kalergis, daughter of the Russian diplomat and foreign minister Count Karl Nesselrode, and Strauss also dedicated his waltz to her. Symbolism Early names refer to the association with the religious feast of Candlemas (February 2) – the optimum flowering time of the plant – at which young women, robed in white, would walk in solemn procession in commemoration of the Purification of the Virgin, an alternative name for the feast day. One French name refers to Candlemas, while an Italian name refers to purification. The German name Schneeglöckchen (little snow bells) invokes the symbol of bells. In the language of flowers, the snowdrop is synonymous with 'hope' (and the goddess Persephone's/Proserpina's return from Hades), as it blooms in early springtime, just before the vernal equinox, and so is seen as 'heralding' the new spring and new year. In more recent times, the snowdrop was adopted as a symbol of sorrow and of hope following the Dunblane massacre in Scotland, and lent its name to the subsequent campaign to restrict the legal ownership of handguns in the UK.
Biology and health sciences
Asparagales
Plants
38801
https://en.wikipedia.org/wiki/Algebraic%20topology
Algebraic topology
Algebraic topology is a branch of mathematics that uses tools from abstract algebra to study topological spaces. The basic goal is to find algebraic invariants that classify topological spaces up to homeomorphism, though usually most classify them up to homotopy equivalence. Although algebraic topology primarily uses algebra to study topological problems, using topology to solve algebraic problems is sometimes also possible. Algebraic topology, for example, allows for a convenient proof that any subgroup of a free group is again a free group. Main branches Below are some of the main areas studied in algebraic topology: Homotopy groups In mathematics, homotopy groups are used in algebraic topology to classify topological spaces. The first and simplest homotopy group is the fundamental group, which records information about loops in a space. Intuitively, homotopy groups record information about the basic shape, or holes, of a topological space. Homology In algebraic topology and abstract algebra, homology (in part from Greek ὁμός homos "identical") is a certain general procedure to associate a sequence of abelian groups or modules with a given mathematical object such as a topological space or a group. Cohomology In homology theory and algebraic topology, cohomology is a general term for a sequence of abelian groups defined from a cochain complex. That is, cohomology is defined as the abstract study of cochains, cocycles, and coboundaries. Cohomology can be viewed as a method of assigning algebraic invariants to a topological space that has a more refined algebraic structure than does homology. Cohomology arises from the algebraic dualization of the construction of homology. In less abstract language, cochains in the fundamental sense should assign "quantities" to the chains of homology theory. Manifolds A manifold is a topological space that near each point resembles Euclidean space. Examples include the plane, the sphere, and the torus, which can all be realized in three dimensions, but also the Klein bottle and real projective plane, which cannot be embedded in three dimensions but can be embedded in four dimensions. Typically, results in algebraic topology focus on global, non-differentiable aspects of manifolds; for example, Poincaré duality. Knot theory Knot theory is the study of mathematical knots. While inspired by knots that appear in daily life in shoelaces and rope, a mathematician's knot differs in that the ends are joined so that it cannot be undone. In precise mathematical language, a knot is an embedding of a circle in three-dimensional Euclidean space, $\mathbb{R}^3$. Two mathematical knots are equivalent if one can be transformed into the other via a deformation of $\mathbb{R}^3$ upon itself (known as an ambient isotopy); these transformations correspond to manipulations of a knotted string that do not involve cutting the string or passing the string through itself. Complexes A simplicial complex is a topological space of a certain kind, constructed by "gluing together" points, line segments, triangles, and their n-dimensional counterparts (see illustration). Simplicial complexes should not be confused with the more abstract notion of a simplicial set appearing in modern simplicial homotopy theory. The purely combinatorial counterpart to a simplicial complex is an abstract simplicial complex. A CW complex is a type of topological space introduced by J. H. C. Whitehead to meet the needs of homotopy theory.
This class of spaces is broader and has some better categorical properties than simplicial complexes, but still retains a combinatorial nature that allows for computation (often with a much smaller complex). Method of algebraic invariants An older name for the subject was combinatorial topology, implying an emphasis on how a space X was constructed from simpler ones (the modern standard tool for such construction is the CW complex). In the 1920s and 1930s, there was growing emphasis on investigating topological spaces by finding correspondences from them to algebraic groups, which led to the change of name to algebraic topology. The combinatorial topology name is still sometimes used to emphasize an algorithmic approach based on decomposition of spaces. In the algebraic approach, one finds a correspondence between spaces and groups that respects the relation of homeomorphism (or more general homotopy) of spaces. This allows one to recast statements about topological spaces into statements about groups, which have a great deal of manageable structure, often making these statements easier to prove. Two major ways in which this can be done are through fundamental groups, or more generally homotopy theory, and through homology and cohomology groups. The fundamental groups give us basic information about the structure of a topological space, but they are often nonabelian and can be difficult to work with. The fundamental group of a (finite) simplicial complex does have a finite presentation. Homology and cohomology groups, on the other hand, are abelian and in many important cases finitely generated. Finitely generated abelian groups are completely classified and are particularly easy to work with. Setting in category theory In general, all constructions of algebraic topology are functorial; the notions of category, functor and natural transformation originated here. Fundamental groups and homology and cohomology groups are not only invariants of the underlying topological space, in the sense that two topological spaces which are homeomorphic have the same associated groups, but their associated morphisms also correspond—a continuous mapping of spaces induces a group homomorphism on the associated groups, and these homomorphisms can be used to show non-existence (or, much more deeply, existence) of mappings. One of the first mathematicians to work with different types of cohomology was Georges de Rham. One can use the differential structure of smooth manifolds via de Rham cohomology, or Čech or sheaf cohomology to investigate the solvability of differential equations defined on the manifold in question. De Rham showed that all of these approaches were interrelated and that, for a closed, oriented manifold, the Betti numbers derived through simplicial homology were the same Betti numbers as those derived through de Rham cohomology. This was extended in the 1950s, when Samuel Eilenberg and Norman Steenrod generalized this approach. They defined homology and cohomology as functors equipped with natural transformations subject to certain axioms (e.g., a weak equivalence of spaces passes to an isomorphism of homology groups), verified that all existing (co)homology theories satisfied these axioms, and then proved that such an axiomatization uniquely characterized the theory. Applications Classic applications of algebraic topology include: The Brouwer fixed point theorem: every continuous map from the unit n-disk to itself has a fixed point. 
The free rank of the nth homology group of a simplicial complex is the nth Betti number, which allows one to calculate the Euler–Poincaré characteristic. One can use the differential structure of smooth manifolds via de Rham cohomology, or Čech or sheaf cohomology, to investigate the solvability of differential equations defined on the manifold in question. A manifold is orientable when the top-dimensional integral homology group is the integers, and is non-orientable when it is 0. The n-sphere admits a nowhere-vanishing continuous unit vector field if and only if n is odd. (For n = 2, this is sometimes called the "hairy ball theorem".) The Borsuk–Ulam theorem: any continuous map from the n-sphere to Euclidean n-space identifies at least one pair of antipodal points. Any subgroup of a free group is free. This result is quite interesting, because the statement is purely algebraic yet the simplest known proof is topological. Namely, any free group G may be realized as the fundamental group of a graph X. The main theorem on covering spaces tells us that every subgroup H of G is the fundamental group of some covering space Y of X; but every such Y is again a graph. Therefore, its fundamental group H is free. On the other hand, this type of application is also handled more simply by the use of covering morphisms of groupoids, and that technique has yielded subgroup theorems not yet proved by methods of algebraic topology. Topological combinatorics. Notable people Important theorems
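As a concrete instance of the Betti-number application listed above, the torus provides a standard worked example; the values below are textbook facts included only as illustration:

```latex
% Betti numbers of the torus T^2: b_0 = 1 (one connected component),
% b_1 = 2 (two independent loop classes), b_2 = 1 (one enclosed void).
\chi(T^2) = \sum_{i=0}^{2} (-1)^i\, b_i = 1 - 2 + 1 = 0
```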
Mathematics
Algebra
null
38811
https://en.wikipedia.org/wiki/Proline
Proline
Proline (symbol Pro or P) is an organic acid classed as a proteinogenic amino acid (used in the biosynthesis of proteins), although it does not contain the amino group but is rather a secondary amine. The secondary amine nitrogen is in the protonated form (NH2+) under biological conditions, while the carboxyl group is in the deprotonated −COO− form. The "side chain" from the α carbon connects to the nitrogen, forming a pyrrolidine ring and classifying proline as an aliphatic amino acid. It is non-essential in humans, meaning the body can synthesize it from the non-essential amino acid L-glutamate. It is encoded by all the codons starting with CC (CCU, CCC, CCA, and CCG). Proline is the only proteinogenic amino acid which is a secondary amine, as the nitrogen atom is attached both to the α-carbon and to a chain of three carbons that together form a five-membered ring. History and etymology Proline was first isolated in 1900 by Richard Willstätter, who obtained the amino acid while studying N-methylproline, and synthesized proline by the reaction of the sodium salt of diethyl malonate with 1,3-dibromopropane. The next year, Emil Fischer isolated proline from casein and from the decomposition products of γ-phthalimido-propylmalonic ester, and published the synthesis of proline from phthalimide propylmalonic ester. The name proline comes from pyrrolidine, one of its constituents. Biosynthesis Proline is biosynthetically derived from the amino acid L-glutamate. Glutamate-5-semialdehyde is first formed by glutamate 5-kinase (ATP-dependent) and glutamate-5-semialdehyde dehydrogenase (which requires NADH or NADPH). This can then either spontaneously cyclize to form 1-pyrroline-5-carboxylic acid, which is reduced to proline by pyrroline-5-carboxylate reductase (using NADH or NADPH), or be turned into ornithine by ornithine aminotransferase, followed by cyclisation by ornithine cyclodeaminase to form proline. Biological activity L-Proline has been found to act as a weak agonist of the glycine receptor and of both NMDA and non-NMDA (AMPA/kainate) ionotropic glutamate receptors. It has been proposed to be a potential endogenous excitotoxin. In plants, proline accumulation is a common physiological response to various stresses, but is also part of the developmental program in generative tissues (e.g. pollen). A 2022 study linked a proline-rich diet to an increased risk of depression in humans; the evidence came from a limited preclinical trial in humans together with experiments in other organisms, in which the results were significant. Properties in protein structure The distinctive cyclic structure of proline's side chain gives proline an exceptional conformational rigidity compared to other amino acids. It also affects the rate of peptide bond formation between proline and other amino acids. When proline is bound as an amide in a peptide bond, its nitrogen is not bound to any hydrogen, meaning it cannot act as a hydrogen bond donor, but can be a hydrogen bond acceptor. Peptide bond formation with incoming Pro-tRNAPro in the ribosome is considerably slower than with any other tRNAs, which is a general feature of N-alkylamino acids. Peptide bond formation is also slow between an incoming tRNA and a chain ending in proline, with the creation of proline–proline bonds being the slowest of all. The exceptional conformational rigidity of proline affects the secondary structure of proteins near a proline residue and may account for proline's higher prevalence in the proteins of thermophilic organisms.
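The codon assignment quoted in the introduction can be checked in a couple of lines against the standard genetic code; this is a textbook fact rather than anything specific to this article's sources:

```python
# Every codon of the form CCN encodes proline in the standard genetic code.
PROLINE_CODONS = {"CCU", "CCC", "CCA", "CCG"}  # the four codons named above
assert PROLINE_CODONS == {"CC" + base for base in "UCAG"}
print("all four CCN codons encode proline")
```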
Protein secondary structure can be described in terms of the dihedral angles φ, ψ and ω of the protein backbone. The cyclic structure of proline's side chain locks the angle φ at approximately −65°. Proline acts as a structural disruptor in the middle of regular secondary structure elements such as alpha helices and beta sheets; however, proline is commonly found as the first residue of an alpha helix and also in the edge strands of beta sheets. Proline is also commonly found in turns (another kind of secondary structure), and aids in the formation of beta turns. This may account for the curious fact that proline is usually solvent-exposed, despite having a completely aliphatic side chain. Multiple prolines and/or hydroxyprolines in a row can create a polyproline helix, the predominant secondary structure in collagen. The hydroxylation of proline by prolyl hydroxylase (or other additions of electron-withdrawing substituents such as fluorine) increases the conformational stability of collagen significantly. Hence, the hydroxylation of proline is a critical biochemical process for maintaining the connective tissue of higher organisms. Severe diseases such as scurvy can result from defects in this hydroxylation, e.g., mutations in the enzyme prolyl hydroxylase or lack of the necessary ascorbate (vitamin C) cofactor. Cis–trans isomerization Peptide bonds to proline, and to other N-substituted amino acids (such as sarcosine), are able to populate both the cis and trans isomers. Most peptide bonds overwhelmingly adopt the trans isomer (typically 99.9% under unstrained conditions), chiefly because the amide hydrogen (trans isomer) offers less steric repulsion to the preceding Cα atom than does the following Cα atom (cis isomer). By contrast, the cis and trans isomers of the X-Pro peptide bond (where X represents any amino acid) both experience steric clashes with the neighboring substitution and have a much lower energy difference. Hence, the fraction of X-Pro peptide bonds in the cis isomer under unstrained conditions is significantly elevated, with cis fractions typically in the range of 3-10%. However, these values depend on the preceding amino acid, with Gly and aromatic residues yielding increased fractions of the cis isomer. Cis fractions up to 40% have been identified for aromatic–proline peptide bonds. From a kinetic standpoint, cis–trans proline isomerization is a very slow process that can impede the progress of protein folding by trapping one or more proline residues crucial for folding in the non-native isomer, especially when the native protein requires the cis isomer. This is because proline residues are exclusively synthesized in the ribosome as the trans isomer form. All organisms possess prolyl isomerase enzymes to catalyze this isomerization, and some bacteria have specialized prolyl isomerases associated with the ribosome. However, not all prolines are essential for folding, and protein folding may proceed at a normal rate despite having non-native conformers of many X–Pro peptide bonds. Uses Proline and its derivatives are often used as asymmetric catalysts in proline organocatalysis reactions. The CBS reduction and proline catalysed aldol condensation are prominent examples. In brewing, proteins rich in proline combine with polyphenols to produce haze (turbidity). L-Proline is an osmoprotectant and therefore is used in many pharmaceutical and biotechnological applications. The growth medium used in plant tissue culture may be supplemented with proline. 
This can increase growth, perhaps because it helps the plant tolerate the stresses of tissue culture. (Proline's role in the stress response of plants is discussed under Biological activity above.) Specialties Proline and glycine are the two amino acids that do not follow the typical Ramachandran plot. Because of the ring connecting the side chain back to the backbone nitrogen, the ψ and φ angles about the peptide bond have fewer allowable degrees of rotation. As a result, proline is often found in "turns" of proteins: its conformational entropy in the unfolded state is already comparatively low, so the entropy change between the folded and unfolded forms is smaller than for other amino acids. Furthermore, proline is rarely found in α and β structures, as it would reduce their stability: its backbone nitrogen, lacking an amide hydrogen, cannot donate the hydrogen bonds these structures require. Additionally, proline is the only amino acid that does not form a red-purple colour when developed by spraying with ninhydrin for use in chromatography; proline instead produces an orange-yellow colour. Synthesis Racemic proline can be synthesized from diethyl malonate and acrylonitrile.
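The isomer populations quoted in the cis–trans section above can be translated into free-energy differences via the Boltzmann relation; the following back-of-the-envelope figures assume T ≈ 298 K and are illustrative estimates, not values from the cited studies:

```latex
\frac{[\mathrm{trans}]}{[\mathrm{cis}]} = e^{\Delta G / RT}, \qquad RT \approx 2.48\ \mathrm{kJ/mol} \ \text{at } 298\ \mathrm{K}
% generic peptide bond, ~99.9% trans:  \Delta G \approx RT \ln 999 \approx 17\ \mathrm{kJ/mol}
% X-Pro bond, ~5% cis:                 \Delta G \approx RT \ln 19  \approx 7\ \mathrm{kJ/mol}
```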
Biology and health sciences
Amino acids
Biology
38824
https://en.wikipedia.org/wiki/Electric%20power%20transmission
Electric power transmission
Electric power transmission is the bulk movement of electrical energy from a generating site, such as a power plant, to an electrical substation. The interconnected lines that facilitate this movement form a transmission network. This is distinct from the local wiring between high-voltage substations and customers, which is typically referred to as electric power distribution. The combined transmission and distribution network is part of electricity delivery, known as the electrical grid. Efficient long-distance transmission of electric power requires high voltages. This reduces the losses produced by strong currents. Transmission lines use either alternating current (AC) or direct current (DC). The voltage level is changed with transformers. The voltage is stepped up for transmission, then reduced for local distribution. A wide area synchronous grid, known as an interconnection in North America, directly connects generators delivering AC power with the same relative frequency to many consumers. North America has four major interconnections: Western, Eastern, Quebec and Texas. One grid connects most of continental Europe. Historically, transmission and distribution lines were often owned by the same company, but starting in the 1990s, many countries liberalized the regulation of the electricity market in ways that led to separate companies handling transmission and distribution. System Most North American transmission lines are high-voltage three-phase AC, although single phase AC is sometimes used in railway electrification systems. DC technology is used for greater efficiency over longer distances, typically hundreds of miles. High-voltage direct current (HVDC) technology is also used in submarine power cables (typically longer than 30 miles (50 km)), and in the interchange of power between grids that are not mutually synchronized. HVDC links stabilize power distribution networks where sudden new loads, or blackouts, in one part of a network might otherwise result in synchronization problems and cascading failures. Electricity is transmitted at high voltages to reduce the energy loss due to resistance that occurs over long distances. Power is usually transmitted through overhead power lines. Underground power transmission has a significantly higher installation cost and greater operational limitations, but lowers maintenance costs. Underground transmission is more common in urban areas or environmentally sensitive locations. Electrical energy must typically be generated at the same rate at which it is consumed. A sophisticated control system is required to ensure that power generation closely matches demand. If demand exceeds supply, the imbalance can cause generation plant(s) and transmission equipment to automatically disconnect or shut down to prevent damage. In the worst case, this may lead to a cascading series of shutdowns and a major regional blackout. The US Northeast faced blackouts in 1965, 1977, 2003, and major blackouts in other US regions in 1996 and 2011. Electric transmission networks are interconnected into regional, national, and even continent-wide networks to reduce the risk of such a failure by providing multiple redundant, alternative routes for power to flow should such shutdowns occur. Transmission companies determine the maximum reliable capacity of each line (ordinarily less than its physical or thermal limit) to ensure that spare capacity is available in the event of a failure in another part of the network. 
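A minimal numeric sketch of the voltage–loss relationship described above; the power, voltage, and resistance figures are illustrative assumptions, not data from this article:

```python
# Resistive loss in a line: current I = P / V, loss = I^2 * R.
# Doubling the voltage halves the current and quarters the loss.
P = 500e6  # power delivered, watts (assumed)
R = 2.0    # total line resistance, ohms (assumed)
for V in (115e3, 230e3, 460e3):
    I = P / V
    loss = I ** 2 * R
    print(f"{V / 1e3:>4.0f} kV: current {I:,.0f} A, "
          f"loss {loss / 1e6:5.1f} MW ({loss / P:.2%} of delivered power)")
```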
Overhead High-voltage overhead conductors are not covered by insulation. The conductor material is nearly always an aluminium alloy, formed of several strands and possibly reinforced with steel strands. Copper was sometimes used for overhead transmission, but aluminium is lighter, offers only marginally reduced performance, and costs much less. Overhead conductors are supplied by several companies. Conductor material and shapes are regularly improved to increase capacity. Conductor sizes range from 12 mm2 (#6 American wire gauge) to 750 mm2 (1,590,000 circular mils area), with varying resistance and current-carrying capacity. For large conductors (more than a few centimetres in diameter), much of the current flow is concentrated near the surface due to the skin effect. The center of the conductor carries little current but contributes weight and cost. Thus, multiple parallel cables (called bundle conductors) are used for higher capacity. Bundle conductors are also used at high voltages to reduce energy loss caused by corona discharge. Today, transmission-level voltages are usually 110 kV and above. Lower voltages, such as 66 kV and 33 kV, are usually considered subtransmission voltages, but are occasionally used on long lines with light loads. Voltages less than 33 kV are usually used for distribution. Voltages above 765 kV are considered extra high voltage and require different designs. Overhead transmission wires depend on air for insulation, requiring that lines maintain minimum clearances. Adverse weather conditions, such as high winds and low temperatures, can interrupt transmission. Even relatively low wind speeds can permit conductors to encroach on their operating clearances, resulting in a flashover and loss of supply. Oscillatory motion of the physical line is termed conductor gallop or flutter, depending on the frequency and amplitude of oscillation. Underground Electric power can be transmitted by underground power cables. Underground cables take up no right-of-way, have lower visibility, and are less affected by weather. However, cables must be insulated, and cable and excavation costs are much higher than for overhead construction. Faults in buried transmission lines take longer to locate and repair. In some metropolitan areas, cables are enclosed by metal pipe and insulated with dielectric fluid (usually an oil) that is either static or circulated via pumps. If an electric fault damages the pipe and the dielectric leaks out, liquid nitrogen is used to freeze portions of the pipe to enable draining and repair. This extends the repair period and increases costs. The temperature of the pipe and surroundings are monitored throughout the repair period. Underground lines are limited by their thermal capacity, which permits less overloading or re-rating than overhead lines. Long underground AC cables have significant capacitance, which reduces their ability to provide useful power beyond a certain distance. DC cables are not limited in length by their capacitance. History Commercial electric power was initially transmitted at the same voltage used by lighting and mechanical loads. This restricted the distance between generating plant and loads. In 1882, DC voltage could not easily be increased for long-distance transmission. Different classes of loads (for example, lighting, fixed motors, and traction/railway systems) required different voltages, and so used different generators and circuits. Thus, generators were sited near their loads, a practice that later became known as distributed generation using large numbers of small generators.
Transmission of alternating current (AC) became possible after Lucien Gaulard and John Dixon Gibbs built what they called the secondary generator, an early transformer provided with a 1:1 turn ratio and open magnetic circuit, in 1881. The first long-distance AC line was built for the 1884 International Exhibition of Electricity in Turin, Italy. It was powered by a 2 kV, 130 Hz Siemens & Halske alternator and featured several Gaulard transformers with primary windings connected in series, which fed incandescent lamps. The system proved the feasibility of AC electric power transmission over long distances. The first commercial AC distribution system entered service in 1885 in via dei Cerchi, Rome, Italy, for public lighting. It was powered by two Siemens & Halske alternators rated 30 hp (22 kW), 2 kV at 120 Hz and used 19 km of cables and 200 parallel-connected 2 kV to 20 V step-down transformers provided with a closed magnetic circuit, one for each lamp. A few months later it was followed by the first British AC system, serving Grosvenor Gallery. It also featured Siemens alternators and 2.4 kV to 100 V step-down transformers – one per user – with shunt-connected primaries. Working to improve what he considered an impractical Gaulard-Gibbs design, electrical engineer William Stanley, Jr. developed the first practical series AC transformer in 1885. Working with the support of George Westinghouse, in 1886 he demonstrated a transformer-based AC lighting system in Great Barrington, Massachusetts. It was powered by a steam engine-driven 500 V Siemens generator. Voltage was stepped down to 100 volts using the Stanley transformer to power incandescent lamps at 23 businesses. This practical demonstration of a transformer and alternating current lighting system led Westinghouse to begin installing AC systems later that year. In 1888 the first designs for an AC motor appeared. These were induction motors running on polyphase current, independently invented by Galileo Ferraris and Nikola Tesla. Westinghouse licensed Tesla's design. Practical three-phase motors were designed by Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown. Widespread use of such motors was delayed many years by development problems and the scarcity of polyphase power systems needed to power them. In the late 1880s and early 1890s smaller electric companies merged into larger corporations such as Ganz and AEG in Europe and General Electric and Westinghouse Electric in the US. These companies developed AC systems, but the technical difference between direct and alternating current systems required a much longer technical merger. Alternating current's economies of scale with large generating plants and long-distance transmission slowly added the ability to link all the loads. These included single-phase AC systems, polyphase AC systems, low-voltage incandescent lighting, high-voltage arc lighting, and existing DC motors in factories and street cars. In what became a universal system, these technological differences were temporarily bridged via rotary converters and motor-generators that allowed the legacy systems to connect to the AC grid. These stopgaps were slowly replaced as older systems were retired or upgraded. The first transmission of single-phase alternating current using high voltage came in Oregon in 1890, when power was delivered from a hydroelectric plant at Willamette Falls to the city of Portland downriver.
The first transmission of three-phase alternating current using high voltage took place in 1891 during the international electricity exhibition in Frankfurt. A 15 kV transmission line, approximately 175 km long, connected Lauffen on the Neckar and Frankfurt. Transmission voltages increased throughout the 20th century. By 1914, fifty-five transmission systems operating at more than 70 kV were in service. The highest voltage then used was 150 kV. Interconnecting multiple generating plants over a wide area reduced costs. The most efficient plants could be used to supply varying loads during the day. Reliability was improved and capital costs were reduced, because stand-by generating capacity could be shared over many more customers and a wider area. Remote and low-cost sources of energy, such as hydroelectric power or mine-mouth coal, could be exploited to further lower costs. The 20th century's rapid industrialization made electrical transmission lines and grids critical infrastructure. Interconnection of local generation plants and small distribution networks was spurred by World War I, when large electrical generating plants were built by governments to power munitions factories. Bulk transmission These networks use components such as power lines, cables, circuit breakers, switches and transformers. The transmission network is usually administered on a regional basis by an entity such as a regional transmission organization or transmission system operator. Transmission efficiency is improved at higher voltage and lower current. The reduced current reduces heating losses. Joule's first law states that energy losses are proportional to the square of the current. Thus, reducing the current by a factor of two lowers the energy lost to conductor resistance by a factor of four for any given size of conductor. The optimum size of a conductor for a given voltage and current can be estimated by Kelvin's law for conductor size, which states that size is optimal when the annual cost of energy wasted in resistance is equal to the annual capital charges of providing the conductor. At times of lower interest rates and low commodity costs, Kelvin's law indicates that thicker wires are optimal; otherwise, thinner conductors are indicated. Since power lines are designed for long-term use, Kelvin's law is used in conjunction with long-term estimates of the price of copper and aluminum as well as interest rates. Higher voltage is achieved in AC circuits by using a step-up transformer. High-voltage direct current (HVDC) systems require relatively costly conversion equipment that may be economically justified for particular projects such as submarine cables and longer-distance, high-capacity point-to-point transmission. HVDC is necessary for sending energy between unsynchronized grids. A transmission grid is a network of power stations, transmission lines, and substations. Energy is usually transmitted within a grid with three-phase AC. Single-phase AC is used only for distribution to end users since it is not usable for large polyphase induction motors. In the 19th century, two-phase transmission was used but required either four wires or three wires with unequal currents. Higher-order phase systems require more than three wires, but deliver little or no benefit. Because the price of generating capacity is high and energy demand is variable, it is often cheaper to import needed power than to generate it locally. Because loads often rise and fall together across large areas, power often comes from distant sources.
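The conductor-sizing trade-off that Kelvin's law (described above) formalizes can be sketched numerically; every value below is an assumed placeholder, not a figure from the text:

```python
import math

rho = 2.8e-8         # resistivity of aluminium, ohm*m (textbook value)
I = 1000.0           # assumed RMS line current, A
price = 0.05 / 1000  # assumed cost of wasted energy, $ per Wh
hours = 8760         # hours per year
k_cap = 12000.0      # assumed annual capital charge, $ per (m^2 cross-section * m of line)

# Annual loss cost per metre of line is K_loss / A; annual capital charge is k_cap * A.
K_loss = I ** 2 * rho * price * hours
A_opt = math.sqrt(K_loss / k_cap)  # optimum: the two annual costs are equal
print(f"optimal cross-section ~ {A_opt * 1e6:.0f} mm^2")
print(f"loss cost at optimum:    {K_loss / A_opt:.2f} $/m/yr")
print(f"capital cost at optimum: {k_cap * A_opt:.2f} $/m/yr")
```

At the printed optimum the two annual costs come out equal, which is exactly Kelvin's condition; with cheaper capital (smaller k_cap) the optimum shifts to thicker wire, as the text notes.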
Because of the economic benefits of load sharing, wide area transmission grids may span countries and even continents. Interconnections between producers and consumers enable power to flow even if some links are inoperative. The slowly varying portion of demand is known as the base load and is generally served by large facilities with constant operating costs, termed firm power. Such facilities are typically nuclear, coal-fired or hydroelectric, while other energy sources such as concentrated solar thermal and geothermal power have the potential to provide firm power. Renewable energy sources, such as solar photovoltaics, wind, wave, and tidal, are, due to their intermittency, not considered to be firm. The remaining, or peak, power demand is supplied by peaking power plants, which are typically smaller, faster-responding, and higher-cost sources, such as combined cycle or combustion turbine plants typically fueled by natural gas. Long-distance transmission (hundreds of kilometers) is cheap and efficient, with costs of US$0.005–0.02 per kWh, compared to annual averaged large producer costs of US$0.01–0.025 per kWh, retail rates upwards of US$0.10 per kWh, and multiples of retail for instantaneous suppliers at unpredicted high-demand moments. New York often buys over 1000 MW of low-cost hydropower from Canada. Local sources (even if more expensive and infrequently used) can protect the power supply from weather and other disasters that can disconnect distant suppliers. Hydro and wind sources cannot be moved closer to big cities, and solar costs are lowest in remote areas where local power needs are nominal. Connection costs can determine whether any particular renewable alternative is economically realistic. Costs can be prohibitive for transmission lines, but high-capacity, long-distance super grid transmission network costs could be recovered with modest usage fees. Grid input At power stations, power is produced at a relatively low voltage between about 2.3 kV and 30 kV, depending on the size of the unit. The voltage is then stepped up by the power station transformer to a higher voltage (115 kV to 765 kV AC) for transmission. In the United States, power transmission is, variously, 230 kV to 500 kV, with less than 230 kV or more than 500 kV as exceptions. The Western Interconnection has two primary interchange voltages: 500 kV AC at 60 Hz, and ±500 kV (1,000 kV net) DC from North to South (Columbia River to Southern California) and Northeast to Southwest (Utah to Southern California). The 287.5 kV (Hoover Dam to Los Angeles line, via Victorville) and 345 kV (Arizona Public Service (APS) line) are local standards, both of which were implemented before 500 kV became practical. Losses Transmitting electricity at high voltage reduces the fraction of energy lost to Joule heating, which varies by conductor type, the current, and the transmission distance. For example, a span at 765 kV carrying 1000 MW of power can have losses of 0.5% to 1.1%, while a 345 kV line carrying the same load across the same distance has losses of 4.2%. For a given amount of power, a higher voltage reduces the current and thus the resistive losses. For example, raising the voltage by a factor of 10 reduces the current by a corresponding factor of 10 and therefore the losses by a factor of 100, provided the same sized conductors are used in both cases. Even if the conductor size (cross-sectional area) is decreased ten-fold to match the lower current, the losses are still reduced ten-fold using the higher voltage.
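The two numerical claims above follow from writing the loss symbolically, with $P$ the transmitted power, $V$ the line voltage, $\rho$ the resistivity, $L$ the length, and $A$ the conductor cross-section (a sketch of the standard argument):

```latex
P_{\text{loss}} = I^2 R = \left(\frac{P}{V}\right)^{2}\frac{\rho L}{A}
% Raising V tenfold at fixed A divides the loss by 10^2 = 100.
% If A is also reduced tenfold, R grows tenfold, leaving a net
% reduction of 100 / 10 = 10.
```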
While power loss can also be reduced by increasing the wire's conductance (by increasing its cross-sectional area), larger conductors are heavier and more expensive. And since conductance is proportional to cross-sectional area, resistive power loss is reduced only in proportion to increasing cross-sectional area, providing a much smaller benefit than the squared reduction provided by raising the voltage. Long-distance transmission is typically done with overhead lines at voltages of 115 to 1,200 kV. At higher voltages, where more than 2,000 kV exists between conductor and ground, corona discharge losses are so large that they can offset the lower resistive losses in the line conductors. Measures to reduce corona losses include larger conductor diameter, hollow cores or conductor bundles. Factors that affect resistance, and thus loss, include temperature, spiraling, and the skin effect. Resistance increases with temperature. Spiraling, which refers to the way stranded conductors spiral about the center, also contributes to increases in conductor resistance. The skin effect causes the effective resistance to increase at higher AC frequencies. Corona and resistive losses can be estimated using a mathematical model. US transmission and distribution losses were estimated at 6.6% in 1997, 6.5% in 2007 and 5% from 2013 to 2019. In general, losses are estimated from the discrepancy between power produced (as reported by power plants) and power sold; the difference constitutes transmission and distribution losses, assuming no utility theft occurs. As of 1980, the longest cost-effective distance for DC transmission was considerably greater than that for AC, though US transmission lines are substantially shorter than either limit. In any AC line, conductor inductance and capacitance can be significant. Currents that flow solely in reaction to these properties (which together with the resistance define the impedance) constitute reactive power flow, which transmits no power to the load. These reactive currents, however, cause extra heating losses. The ratio of real power transmitted to the load to apparent power (the product of a circuit's voltage and current, without reference to phase angle) is the power factor. As reactive current increases, reactive power increases and the power factor decreases. For transmission systems with low power factor, losses are higher than for systems with high power factor. Utilities add capacitor banks, reactors and other components (such as phase-shifters, static VAR compensators, and flexible AC transmission systems, FACTS) throughout the system to help compensate for the reactive power flow, reduce the losses in power transmission and stabilize system voltages. These measures are collectively called 'reactive support'. Transposition Current flowing through transmission lines induces a magnetic field that surrounds the lines of each phase and affects the inductance of the surrounding conductors of other phases. The conductors' mutual inductance is partially dependent on the physical orientation of the lines with respect to each other. Three-phase lines are conventionally strung with phases separated vertically. The mutual inductance seen by a conductor of the phase in the middle of the other two phases is different from the inductance seen by those on the top or bottom. Unbalanced inductance among the three conductors is problematic because it may force the middle line to carry a disproportionate amount of the total power transmitted.
Similarly, an unbalanced load may occur if one line is consistently closest to the ground and operates at a lower impedance. Because of this phenomenon, conductors must be periodically transposed along the line so that each phase sees equal time in each relative position, to balance out the mutual inductance seen by all three phases. To accomplish this, line position is swapped at specially designed transposition towers at regular intervals along the line using various transposition schemes. Subtransmission Subtransmission runs at relatively lower voltages. It is uneconomical to connect all distribution substations to the high main transmission voltage, because that equipment is larger and more expensive. Typically, only larger substations connect with this high voltage. Voltage is stepped down before the current is sent to smaller substations. Subtransmission circuits are usually arranged in loops so that a single line failure does not stop service to many customers for more than a short time. Loops can be normally closed, where loss of one circuit should result in no interruption, or normally open, where substations can switch to a backup supply. While subtransmission circuits are usually carried on overhead lines, in urban areas buried cable may be used. The lower-voltage subtransmission lines use less right-of-way and simpler structures; undergrounding is less difficult. No fixed cutoff separates subtransmission and transmission, or subtransmission and distribution: their voltage ranges overlap. Voltages of 69 kV, 115 kV, and 138 kV are often used for subtransmission in North America. As power systems evolved, voltages formerly used for transmission were used for subtransmission, and subtransmission voltages became distribution voltages. Like transmission, subtransmission moves relatively large amounts of power, and like distribution, subtransmission covers an area instead of just point-to-point. Transmission grid exit Substation transformers reduce the voltage to a lower level for distribution to customers. This distribution is accomplished with a combination of sub-transmission (33 to 138 kV) and distribution (3.3 to 25 kV). Finally, at the point of use, the energy is transformed to end-user voltage (100 to 4160 volts). Advantage of high-voltage transmission High-voltage power transmission allows for lesser resistive losses over long distances. This efficiency delivers a larger proportion of the generated power to the loads. In a simplified model, the grid delivers electricity from an ideal voltage source with voltage $V$, delivering a power $P_V$, to a single point of consumption, modelled by a resistance $R$, when the wires are long enough to have a significant resistance $R_C$. If the resistances are simply in series with no intervening transformer, the circuit acts as a voltage divider, because the same current $I = V/(R + R_C)$ runs through the wire resistance and the powered device. As a consequence, the useful power (at the point of consumption) is: $$P_R = \frac{R}{R + R_C}\,P_V.$$ Should an ideal transformer convert high-voltage, low-current electricity into low-voltage, high-current electricity with a voltage ratio of $k$ (i.e., the voltage is divided by $k$ and the current is multiplied by $k$ in the secondary branch, compared to the primary branch), then the circuit is again equivalent to a voltage divider, but the wires now have an apparent resistance of only $R_C/k^2$. The useful power is then: $$P_R = \frac{R}{R + R_C/k^2}\,P_V = \frac{k^2 R}{k^2 R + R_C}\,P_V.$$ For $k > 1$ (i.e.
conversion of high voltage to low voltage near the consumption point), a larger fraction of the generator's power is transmitted to the consumption point and a lesser fraction is lost to Joule heating. Modeling The terminal characteristics of the transmission line are the voltage and current at the sending (S) and receiving (R) ends. The transmission line can be modeled as a black box, and a 2 by 2 transmission matrix is used to model its behavior, as follows: $$\begin{pmatrix} V_S \\ I_S \end{pmatrix} = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} V_R \\ I_R \end{pmatrix}.$$ The line is assumed to be a reciprocal, symmetrical network, meaning that the receiving and sending labels can be switched with no consequence. The transmission matrix T then has the properties $\det(T) = AD - BC = 1$ and $A = D$. The parameters A, B, C, and D differ depending on how the desired model handles the line's resistance (R), inductance (L), capacitance (C), and shunt (parallel, leak) conductance G. The four main models are the short line approximation, the medium line approximation, the long line approximation (with distributed parameters), and the lossless line. In such models, a capital letter such as R refers to the total quantity summed over the line and a lowercase letter such as c refers to the per-unit-length quantity. Lossless line The lossless line approximation is the least accurate; it is typically used on short lines where the inductance is much greater than the resistance. For this approximation, the voltage and current are identical at the sending and receiving ends. The characteristic impedance is purely real, i.e. resistive, and is often called the surge impedance. When a lossless line is terminated by its surge impedance, there is no voltage drop. Though the phase angles of voltage and current are rotated, the magnitudes of voltage and current remain constant along the line. For a load above the surge impedance loading (SIL), the voltage drops from the sending end and the line consumes VARs; for a load below the SIL, the voltage increases from the sending end and the line generates VARs. Short line The short line approximation is normally used for relatively short lines. Here, only a series impedance Z is considered, while C and G are ignored. The final result is that A = D = 1 per unit, B = Z ohms, and C = 0. The associated transmission matrix for this approximation is therefore: $$T = \begin{pmatrix} 1 & Z \\ 0 & 1 \end{pmatrix}.$$ Medium line The medium line approximation is used for lines of intermediate length. The series impedance and the shunt (current leak) conductance are considered, placing half of the shunt conductance at each end of the line. This circuit is often referred to as a nominal π (pi) circuit because of the shape (π) that is taken on when leak conductance is placed on both sides of the circuit diagram. The analysis of the medium line produces (with $Y$ the total shunt admittance): $$A = D = 1 + \frac{ZY}{2}, \qquad B = Z, \qquad C = Y\left(1 + \frac{ZY}{4}\right).$$ Counterintuitive behaviors of medium-length transmission lines include voltage rise at no load or small current (the Ferranti effect), and receiving-end current that can exceed the sending-end current. Long line The long line model is used when a higher degree of accuracy is needed or when the line under consideration is very long. Series resistance and shunt conductance are considered to be distributed parameters, such that each differential length of the line has a corresponding differential series impedance and shunt admittance. The following result can be applied at any point along the transmission line, where $\gamma$ is the propagation constant and $Z_c$ the characteristic impedance: $$\begin{pmatrix} V \\ I \end{pmatrix} = \begin{pmatrix} \cosh(\gamma x) & Z_c \sinh(\gamma x) \\ \tfrac{1}{Z_c}\sinh(\gamma x) & \cosh(\gamma x) \end{pmatrix} \begin{pmatrix} V_R \\ I_R \end{pmatrix}.$$ To find the voltage and current at the sending end of the long line, $x$ should be replaced with the line length in all parameters of the transmission matrix. This model applies the Telegrapher's equations.
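A compact sketch of the short-line and nominal-π models just described; the impedance, admittance, and receiving-end values are assumed placeholders, not figures from the text:

```python
import numpy as np

# Two-port (ABCD) sketches of the short-line and nominal-pi medium-line models.

def short_line(Z):
    """Short line: series impedance only, so A = D = 1, B = Z, C = 0."""
    return np.array([[1, Z], [0, 1]])

def medium_line(Z, Y):
    """Nominal-pi model: half of the shunt admittance Y at each end."""
    A = 1 + Z * Y / 2
    return np.array([[A, Z], [Y * (1 + Z * Y / 4), A]])

Z = complex(10, 50)    # total series impedance, ohms (assumed)
Y = complex(0, 3e-4)   # total shunt admittance, siemens (assumed)
VR, IR = 230e3, 400.0  # receiving-end voltage (V) and current (A), assumed

VS, IS = medium_line(Z, Y) @ np.array([VR, IR])
print(f"sending end: |V| = {abs(VS) / 1e3:.1f} kV, |I| = {abs(IS):.1f} A")
```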
High-voltage direct current High-voltage direct current (HVDC) is used to transmit large amounts of power over long distances or for interconnections between asynchronous grids. When electrical energy is transmitted over very long distances, the power lost in AC transmission becomes appreciable and it is less expensive to use direct current instead. For a long transmission line, these lower losses (and the reduced construction cost of a DC line) can offset the cost of the required converter stations at each end. HVDC is also used for long submarine cables, where AC cannot be used because of cable capacitance. In these cases special high-voltage cables are used. Submarine HVDC systems are often used to interconnect the electricity grids of islands, for example, between Great Britain and continental Europe, between Great Britain and Ireland, between Tasmania and the Australian mainland, between the North and South Islands of New Zealand, between New Jersey and New York City, and between New Jersey and Long Island. Submarine connections hundreds of kilometres in length have been deployed. HVDC links can be used to control grid problems. The power transmitted by an AC line increases as the phase angle between the source-end and destination-end voltages increases, but too large a phase angle allows the systems at either end to fall out of step. Since the power flow in a DC link is controlled independently of the phases of the AC networks that it connects, this phase angle limit does not exist, and a DC link is always able to transfer its full rated power. A DC link therefore stabilizes the AC grid at either end, since power flow and phase angle can then be controlled independently. As an example, adjusting the flow of AC power on a hypothetical line between Seattle and Boston would require adjustment of the relative phase of the two regional electrical grids. This is an everyday occurrence in AC systems, but one that can become disrupted when AC system components fail and place unexpected loads on the grid. With an HVDC line instead, such an interconnection would: Convert AC in Seattle into HVDC; Use HVDC for the cross-country transmission; and Convert the HVDC to locally synchronized AC in Boston (and possibly in other cooperating cities along the transmission route). Such a system could be less prone to failure if parts of it were suddenly shut down. One example of a long DC transmission line is the Pacific DC Intertie located in the Western United States. Capacity The amount of power that can be sent over a transmission line varies with the length of the line. The heating of short line conductors due to line losses sets a thermal limit. If too much current is drawn, conductors may sag too close to the ground, or conductors and equipment may overheat. For intermediate-length lines, the limit is set by the voltage drop in the line. For longer AC lines, system stability becomes the limiting factor. Approximately, the power flowing over an AC line is proportional to the cosine of the phase angle of the voltage and current at the ends. This angle varies depending on system loading. It is undesirable for the angle to approach 90 degrees, as the power flowing decreases while the resistive losses remain. The product of line length and maximum load is approximately proportional to the square of the system voltage. Series capacitors or phase-shifting transformers are used on long lines to improve stability. HVDC lines are restricted only by thermal and voltage drop limits, since the phase angle is not material.
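In textbook form, the stability limit sketched above is usually written with the power-angle relation $P = V_S V_R \sin(\delta)/X$, where $\delta$ is the angle between the end voltages; the symbols and numbers below are assumptions for illustration, not the article's own formulation:

```python
import math

# Power transferred across a (lossless) AC line versus power angle delta.
Vs = Vr = 500e3  # end voltages, volts (assumed)
X = 100.0        # series line reactance, ohms (assumed)
for deg in (10, 30, 60, 90):
    P = Vs * Vr * math.sin(math.radians(deg)) / X
    print(f"delta = {deg:>2} deg: P = {P / 1e6:6.1f} MW")
# Transfer peaks at 90 degrees; pushing the angle further reduces the power
# flow and risks loss of synchronism, the stability limit described above.
```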
Understanding the temperature distribution along the cable route became possible with the introduction of distributed temperature sensing (DTS) systems that measure temperatures all along the cable. Without them maximum current was typically set as a compromise between understanding of operation conditions and risk minimization. This monitoring solution uses passive optical fibers as temperature sensors, either inside a high-voltage cable or externally mounted on the cable insulation. For overhead cables the fiber is integrated into the core of a phase wire. The integrated Dynamic Cable Rating (DCR)/Real Time Thermal Rating (RTTR) solution makes it possible to run the network to its maximum. It allows the operator to predict the behavior of the transmission system to reflect major changes to its initial operating conditions. Reconductoring Some utilities have embraced reconductoring to handle the increase in electricity production. Reconductoring is the replacement-in-place of existing transmission lines with higher-capacity lines. Adding transmission lines is difficult due to cost, permit intervals, and local opposition. Reconductoring has the potential to double the amount of electricity that can travel across a transmission line. A 2024 report found the United States behind countries like Belgium and the Netherlands in adoption of this technique to accommodate electrification and renewable energy. In April 2022, the Biden Administration streamlined environmental reviews for such projects, and in May 2022 announced competitive grants for them funded by the 2021 Bipartisan Infrastructure Law and 2022 Inflation Reduction Act. The rate of transmission expansion needs to double to support ongoing electrification and reach emission reduction targets. As of 2022, more than 10,000 power plant and energy storage projects were awaiting permission to connect to the US grid — 95% were zero-carbon resources. New power lines can take 10 years to plan, permit, and build. Traditional power lines use a steel core surrounded by aluminum strands (Aluminium-conductor steel-reinforced cable). Replacing the steel with a lighter, stronger composite material such as carbon fiber (ACCC conductor) allows lines to operate at higher temperatures, with less sag, and doubled transmission capacity. Lowering line sag at high temperatures can prevent wildfires from starting when power lines touch dry vegetation. Although advanced lines can cost 2-4x more than steel, total reconductoring costs are less than half of a new line, given savings in time, land acquisition, permitting, and construction. A reconductoring project in southeastern Texas upgraded 240 miles of transmission lines at a cost of $900,000 per mile, versus a 3,600-mile greenfield project that averaged $1.9 million per mile. Control To ensure safe and predictable operation, system components are controlled with generators, switches, circuit breakers and loads. The voltage, power, frequency, load factor, and reliability capabilities of the transmission system are designed to provide cost effective performance. Load balancing The transmission system provides for base load and peak load capability, with margins for safety and fault tolerance. Peak load times vary by region largely due to the industry mix. In hot and cold climates home air conditioning and heating loads affect the overall load. They are typically highest in the late afternoon in the hottest part of the year and in mid-mornings and mid-evenings in the coldest part of the year. 
Power requirements vary by season and time of day. Distribution system designs always take the base load and the peak load into consideration. The transmission system usually does not have a large buffering capability to match loads with generation. Thus generation has to be kept matched to the load, to prevent overloading generation equipment. Multiple sources and loads can be connected to the transmission system and they must be controlled to provide orderly transfer of power. In centralized power generation, only local control of generation is necessary. This involves synchronization of the generation units. In distributed power generation the generators are geographically distributed and the process to bring them online and offline must be carefully controlled. The load control signals can either be sent on separate lines or on the power lines themselves. Voltage and frequency can be used as signaling mechanisms to balance the loads. In voltage signaling, variation of the voltage is used to increase generation: the power added by any system increases as the line voltage decreases. This arrangement is stable in principle. Voltage-based regulation is complex to use in mesh networks, since the individual components and setpoints would need to be reconfigured every time a new generator is added to the mesh. In frequency signaling, the generating units match the frequency of the power transmission system. In droop speed control, if the frequency decreases, the power is increased. (The drop in line frequency is an indication that the increased load is causing the generators to slow down.) Wind turbines, vehicle-to-grid, virtual power plants, and other locally distributed storage and generation systems can interact with the grid to improve system operation. Internationally, a slow move from centralized to decentralized power systems has taken place. The main draw of locally distributed generation systems is that they reduce transmission losses by leading to consumption of electricity closer to where it was produced. Failure protection Under excess load conditions, the system can be designed to fail incrementally rather than all at once. Brownouts occur when the power supplied drops below the demand. Blackouts occur when the grid fails completely. Rolling blackouts (also called load shedding) are intentionally engineered electrical power outages, used to distribute insufficient power to various loads in turn. Communications Grid operators require reliable communications to manage the grid and associated generation and distribution facilities. Fault-sensing protective relays at each end of the line must communicate to monitor the flow of power so that faulted conductors or equipment can be quickly de-energized and the balance of the system restored. Protection of the transmission line from short circuits and other faults is usually so critical that common carrier telecommunications are insufficiently reliable, while in some remote areas no common carrier is available. Communication systems associated with a transmission project may use microwaves, power-line communication, or optical fibers; rarely, and for short distances, pilot-wires are strung along the transmission line path. Leased circuits from common carriers are not preferred since availability is not under the control of the operator. Transmission lines can also be used to carry data: this is called power-line carrier, or power-line communication (PLC). PLC signals can be easily received with a radio in the long wave range.
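A minimal Python sketch of the droop speed control described in the load-balancing discussion above. The 5% droop, the unit ratings, and the setpoints are illustrative assumptions, not values from the text; the point is that each unit raises output in proportion to the frequency sag, so generators share extra load pro rata:

# Droop speed control sketch: when grid frequency sags under load,
# each generator raises output in proportion to the frequency error.
NOMINAL_HZ = 60.0

def droop_output_mw(rated_mw, setpoint_mw, measured_hz, droop=0.05):
    """Return the new output: a full droop-band frequency error (here
    5% of 60 Hz = 3 Hz) swings the unit across its whole rated range."""
    freq_error = (NOMINAL_HZ - measured_hz) / NOMINAL_HZ
    adjustment = (freq_error / droop) * rated_mw
    return min(rated_mw, max(0.0, setpoint_mw + adjustment))

# Frequency dips to 59.9 Hz: both units pick up load pro rata.
for rated, setpoint in ((500, 300), (200, 120)):
    print(rated, "MW unit ->", round(droop_output_mw(rated, setpoint, 59.9), 1), "MW")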
Optical fibers can be included in the stranded conductors of a transmission line, in the overhead shield wires. These cables are known as optical ground wire (OPGW). Sometimes a standalone cable is used, all-dielectric self-supporting (ADSS) cable, attached to the transmission line cross arms. Some jurisdictions, such as Minnesota, prohibit energy transmission companies from selling surplus communication bandwidth or acting as a telecommunications common carrier. Where the regulatory structure permits, the utility can sell capacity in extra dark fibers to a common carrier. Market structure Electricity transmission is generally considered to be a natural monopoly, but one that is not inherently linked to generation. Many countries regulate transmission separately from generation. Spain was the first country to establish a regional transmission organization. In that country, transmission operations and electricity markets are separate. The transmission system operator is Red Eléctrica de España (REE) and the wholesale electricity market operator is Operador del Mercado Ibérico de Energía – Polo Español, S.A. (OMEL). Spain's transmission system is interconnected with those of France, Portugal, and Morocco. The establishment of RTOs in the United States was spurred by FERC's Order 888, Promoting Wholesale Competition Through Open Access Non-discriminatory Transmission Services by Public Utilities; Recovery of Stranded Costs by Public Utilities and Transmitting Utilities, issued in 1996. In the United States and parts of Canada, electric transmission companies operate independently of generation companies, but in the Southern United States vertical integration remains intact. In regions of separation, transmission owners and generation owners continue to interact with each other as market participants with voting rights within their RTO. RTOs in the United States are regulated by the Federal Energy Regulatory Commission. Merchant transmission projects in the United States include the Cross Sound Cable from Shoreham, New York to New Haven, Connecticut, the Neptune RTS Transmission Line from Sayreville, New Jersey, to New Bridge, New York, and Path 15 in California. Additional projects are in development or have been proposed throughout the United States, including the Lake Erie Connector, an underwater transmission line proposed by ITC Holdings Corp., connecting Ontario to load-serving entities in the PJM Interconnection region. Australia has one unregulated or market interconnector – Basslink – between Tasmania and Victoria. Two DC links originally implemented as market interconnectors, Directlink and Murraylink, were converted to regulated interconnectors. A major barrier to wider adoption of merchant transmission is the difficulty in identifying who benefits from the facility so that the beneficiaries pay the toll. Also, it is difficult for a merchant transmission line to compete when the alternative transmission lines are subsidized by utilities with a monopolized and regulated rate base. In the United States, FERC's Order 1000, issued in 2011, attempted to reduce barriers to third-party investment and the creation of merchant transmission lines where a public policy need is found. Transmission costs The cost of high-voltage transmission is low compared to the other costs that constitute consumer electricity bills. In the UK, transmission costs are about 0.2 p per kWh compared to a delivered domestic price of around 10 p per kWh.
The level of capital expenditure in the electric power T&D equipment market was estimated to be $128.9 bn in 2011. Health concerns Mainstream scientific evidence suggests that low-power, low-frequency electromagnetic radiation associated with household currents and high transmission power lines does not constitute a short- or long-term health hazard. Some studies have failed to find any link between living near power lines and developing any sickness or disease, such as cancer. A 1997 study reported no increased risk of cancer or illness from living near a transmission line. Other studies, however, reported statistical correlations between various diseases and living or working near power lines. No adverse health effects have been substantiated for people living close to power lines. The New York State Public Service Commission conducted a study to evaluate potential health effects of electric fields. The study measured the electric field strength at the edge of an existing right-of-way on a 765 kV transmission line. The field strength was 1.6 kV/m, and this value became the interim maximum-strength standard for new transmission lines in New York State. The opinion also limited the voltage of new transmission lines built in New York to 345 kV. On September 11, 1990, after a similar study of magnetic field strengths, the NYSPSC issued its Interim Policy Statement on Magnetic Fields. This policy established a magnetic field standard of 200 mG at the edge of the right-of-way using the winter-normal conductor rating. As a comparison with everyday items, a hair dryer or electric blanket produces a 100 mG – 500 mG magnetic field. Applications for a new transmission line typically include an analysis of electric and magnetic field levels at the edge of rights-of-way. Public utility commissions typically do not comment on health impacts. Biological effects have been established for acute high-level exposure to magnetic fields above 100 μT (1 G) (1,000 mG). In a residential setting, one study reported "limited evidence of carcinogenicity in humans and less than sufficient evidence for carcinogenicity in experimental animals", in particular, childhood leukemia, associated with average exposure to residential power-frequency magnetic fields above 0.3 μT (3 mG) to 0.4 μT (4 mG). These levels exceed average residential power-frequency magnetic fields in homes, which are about 0.07 μT (0.7 mG) in Europe and 0.11 μT (1.1 mG) in North America. The Earth's natural geomagnetic field strength varies over the surface of the planet between 0.035 mT and 0.07 mT (35 μT – 70 μT or 350 mG – 700 mG), while the international standard for continuous exposure is set at 40 mT (400,000 mG or 400 G) for the general public. Tree growth regulators and herbicides may be used in transmission line rights-of-way, which may have health effects. Specialized transmission Grids for railways In some countries where electric locomotives or electric multiple units run on low-frequency AC power, separate single-phase traction power networks are operated by the railways. Prime examples are countries such as Austria, Germany and Switzerland, which utilize AC technology based on 16 2/3 Hz. Norway and Sweden also use this frequency but use conversion from the 50 Hz public supply; Sweden has a 16 2/3 Hz traction grid, but only for part of the system. Superconducting cables High-temperature superconductors (HTS) promise to revolutionize power distribution by providing lossless transmission.
The development of superconductors with transition temperatures higher than the boiling point of liquid nitrogen has made the concept of superconducting power lines commercially feasible, at least for high-load applications. It has been estimated that waste would be halved using this method, since the necessary refrigeration equipment would consume about half the power saved by the elimination of resistive losses. Companies such as Consolidated Edison and American Superconductor began commercial production of such systems in 2007. Superconducting cables are particularly suited to high-load-density areas such as the business districts of large cities, where purchase of an easement for cables is costly. Single-wire earth return Single-wire earth return (SWER) or single-wire ground return is a single-wire transmission line for supplying single-phase electrical power to remote areas at low cost. It is principally used for rural electrification, but also finds use for larger isolated loads such as water pumps. Single-wire earth return is also used for HVDC over submarine power cables. Wireless power transmission Both Nikola Tesla and Hidetsugu Yagi attempted to devise systems for large-scale wireless power transmission in the late 1800s and early 1900s, without commercial success. In November 2009, LaserMotive won the NASA 2009 Power Beaming Challenge by powering a cable climber 1 km vertically using a ground-based laser transmitter. The system produced up to 1 kW of power at the receiver end. In August 2010, NASA contracted with private companies to pursue the design of laser power beaming systems to power low earth orbit satellites and to launch rockets using laser power beams. Wireless power transmission has been studied for transmission of power from solar power satellites to the earth. A high-power array of microwave or laser transmitters would beam power to a rectenna. Major engineering and economic challenges face any solar power satellite project. Security The federal government of the United States has stated that the American power grid is susceptible to cyber-warfare. The United States Department of Homeland Security works with industry to identify vulnerabilities and to help industry enhance the security of control system networks. In June 2019, Russia conceded that it was "possible" its electrical grid was under cyber-attack by the United States. The New York Times reported that American hackers from the United States Cyber Command had planted malware potentially capable of disrupting the Russian electrical grid. Records
Highest capacity system: 12 GW Zhundong–Wannan (准东–皖南) ±1100 kV HVDC.
Highest transmission voltage (AC):
planned: 1.20 MV (ultra-high voltage) on the Wardha–Aurangabad line (India), planned to initially operate at 400 kV;
worldwide: 1.15 MV (ultra-high voltage) on the Ekibastuz–Kokshetau line (Kazakhstan).
Largest double-circuit transmission: Kita-Iwaki Powerline (Japan).
Highest towers: Yangtze River Crossing (China) (height: )
Longest power line: Inga–Shaba (Democratic Republic of Congo) (length: )
Longest span of power line: at Ameralik Span (Greenland, Denmark)
Longest submarine cables:
North Sea Link (Norway/United Kingdom) – (length of submarine cable: )
NorNed, North Sea (Norway/Netherlands) – (length of submarine cable: )
Basslink, Bass Strait (Australia) – (length of submarine cable: , total length: )
Baltic Cable, Baltic Sea (Germany/Sweden) – (length of submarine cable: , HVDC length: , total length: )
Longest underground cables:
Murraylink, Riverland/Sunraysia (Australia) – (length of underground cable: )
Three-phase electric power
Three-phase electric power (abbreviated 3ϕ) is a common type of alternating current (AC) used in electricity generation, transmission, and distribution. It is a type of polyphase system employing three wires (or four, including an optional neutral return wire) and is the most common method used by electrical grids worldwide to transfer power. Three-phase electrical power was developed in the 1880s by several people. In three-phase power, the voltage on each wire is 120 degrees phase-shifted relative to each of the other wires. Because it is an AC system, it allows the voltages to be easily stepped up using transformers to high voltage for transmission and back down for distribution, giving high efficiency. A three-wire three-phase circuit is usually more economical than an equivalent two-wire single-phase circuit at the same line-to-ground voltage because it uses less conductor material to transmit a given amount of electrical power. Three-phase power is mainly used directly to power large induction motors, other electric motors and other heavy loads. Small loads often use only a two-wire single-phase circuit, which may be derived from a three-phase system. Terminology The conductors between a voltage source and a load are called lines, and the voltage between any two lines is called line voltage. The voltage measured between any line and neutral is called phase voltage. For example, for a 208/120-volt service, the line voltage is 208 volts, and the phase voltage is 120 volts. History Polyphase power systems were independently invented by Galileo Ferraris, Mikhail Dolivo-Dobrovolsky, Jonas Wenström, John Hopkinson, William Stanley Jr., and Nikola Tesla in the late 1880s. Three-phase power evolved out of electric motor development. In 1885, Galileo Ferraris was doing research on rotating magnetic fields. Ferraris experimented with different types of asynchronous electric motors. The research and his studies resulted in the development of an alternator, which may be thought of as an alternating-current motor operating in reverse, so as to convert mechanical (rotating) power into electric power (as alternating current). On 11 March 1888, Ferraris published his research in a paper to the Royal Academy of Sciences in Turin. Two months later, Nikola Tesla gained a US patent for a three-phase electric motor design, the application for which had been filed on October 12, 1887. Figure 13 of this patent shows that Tesla envisaged his three-phase motor being powered from the generator via six wires. These alternators operated by creating systems of alternating currents displaced from one another in phase by definite amounts, and depended on rotating magnetic fields for their operation. The resulting source of polyphase power soon found widespread acceptance. The invention of the polyphase alternator is key in the history of electrification, as is the power transformer. These inventions enabled power to be transmitted by wires economically over considerable distances. Polyphase power enabled the use of water-power (via hydroelectric generating plants in large dams) in remote places, thereby allowing the mechanical energy of falling water to be converted to electricity, which then could be fed to an electric motor at any location where mechanical work needed to be done. This versatility sparked the growth of power-transmission network grids on continents around the globe. Mikhail Dolivo-Dobrovolsky developed a three-phase electrical generator and a three-phase electric motor in 1888 and studied star and delta connections.
His three-phase three-wire transmission system was displayed in 1891 in Germany at the International Electrotechnical Exhibition, where Dolivo-Dobrovolsky used the system to transmit electric power over a distance of 176 km (110 miles) with 75% efficiency. In 1891 he also created a three-phase transformer and a short-circuited (squirrel-cage) induction motor (Gerhard Neidhöfer: Michael von Dolivo-Dobrowolsky und der Drehstrom. Geschichte der Elektrotechnik, VDE-Buchreihe, Volume 9, VDE Verlag, Berlin/Offenbach). He designed the world's first three-phase hydroelectric power plant in 1891. The inventor Jonas Wenström received a Swedish patent on the same three-phase system in 1890. The possibility of transferring electrical power from a waterfall at a distance was explored at the Grängesberg mine. A fall at Hällsjön, Smedjebackens kommun, where a small ironworks had been located, was selected. In 1893, a three-phase system was used to transmit electric power a distance of 15 km (10 miles), becoming the first commercial application. Principle In a symmetric three-phase power supply system, three conductors each carry an alternating current of the same frequency and voltage amplitude relative to a common reference, but with a phase difference of one third of a cycle (i.e., 120 degrees out of phase) between each. The common reference is usually connected to ground and often to a current-carrying conductor called the neutral. Due to the phase difference, the voltage on any conductor reaches its peak at one third of a cycle after one of the other conductors and one third of a cycle before the remaining conductor. This phase delay gives constant power transfer to a balanced linear load. It also makes it possible to produce a rotating magnetic field in an electric motor and to generate other phase arrangements using transformers (for instance, a two-phase system using a Scott-T transformer). The amplitude of the voltage difference between two phases is √3 times the amplitude of the voltage of the individual phases. The symmetric three-phase systems described here are simply referred to as three-phase systems because, although it is possible to design and implement asymmetric three-phase power systems (i.e., with unequal voltages or phase shifts), they are not used in practice because they lack the most important advantages of symmetric systems. In a three-phase system feeding a balanced and linear load, the sum of the instantaneous currents of the three conductors is zero. In other words, the current in each conductor is equal in magnitude to the sum of the currents in the other two, but with the opposite sign. The return path for the current in any phase conductor is the other two phase conductors. Constant power transfer is possible with any number of phases greater than one. However, two-phase systems do not have neutral-current cancellation and thus use conductors less efficiently, and more than three phases complicates infrastructure unnecessarily. Additionally, in some practical generators and motors, two phases can result in a less smooth (pulsating) torque. Three-phase systems may have a fourth wire, common in low-voltage distribution. This is the neutral wire. The neutral allows three separate single-phase supplies to be provided at a constant voltage and is commonly used for supplying multiple single-phase loads. The connections are arranged so that, as far as possible in each group, equal power is drawn from each phase. Further up the distribution system, the currents are usually well balanced.
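Two claims above are easy to check numerically: balanced instantaneous phase quantities sum to zero, and the amplitude between two phases is √3 times the single-phase amplitude. A minimal Python sketch, assuming illustrative 230 V RMS / 50 Hz values:

import math

V_PEAK = 325.0            # assumed peak of a 230 V RMS phase voltage
OMEGA = 2 * math.pi * 50  # assumed 50 Hz system

def phase_v(t, k):
    """Instantaneous voltage of phase k (0, 1, 2), shifted by 120 degrees."""
    return V_PEAK * math.sin(OMEGA * t - k * 2 * math.pi / 3)

for t in (0.0, 0.003, 0.007):
    total = sum(phase_v(t, k) for k in range(3))
    print(f"t={t:.3f}s  sum of phases = {total:+.6f} V")  # ~0 every time

# Peak of the phase-to-phase difference over one cycle: sqrt(3) * V_PEAK.
line_to_line = max(phase_v(t, 0) - phase_v(t, 1)
                   for t in [i / 10000 for i in range(200)])
print("line-to-line peak / phase peak =", round(line_to_line / V_PEAK, 3))  # ~1.732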
Transformers may be wired to have a four-wire secondary and a three-wire primary, while allowing unbalanced loads and the associated secondary-side neutral currents. Phase sequence Wiring for the three phases is typically identified by colors that vary by country and voltage. The phases must be connected in the correct order to achieve the intended direction of rotation of three-phase motors. For example, pumps and fans do not work as intended in reverse. Maintaining the identity of phases is required if two sources could be connected at the same time. A direct connection between two different phases is a short circuit and leads to the flow of unbalanced current. Advantages and disadvantages As compared to a single-phase AC power supply that uses two current-carrying conductors (phase and neutral), a three-phase supply with no neutral and the same phase-to-ground voltage and current capacity per phase can transmit three times as much power by using just 1.5 times as many wires (i.e., three instead of two). Thus, the ratio of capacity to conductor material is doubled. The ratio of capacity to conductor material increases to 3:1 with an ungrounded three-phase system and a center-grounded single-phase system (or 2.25:1 if both use grounds with the same gauge as the conductors). That leads to higher efficiency, lower weight, and cleaner waveforms. Three-phase supplies have properties that make them desirable in electric power distribution systems: The phase currents tend to cancel out one another, summing to zero in the case of a linear balanced load, which allows a reduction of the size of the neutral conductor because it carries little or no current. With a balanced load, all the phase conductors carry the same current and so can have the same size. Power transfer into a linear balanced load is constant, which, in motor/generator applications, helps to reduce vibrations. Three-phase systems can produce a rotating magnetic field with a specified direction and constant magnitude, which simplifies the design of electric motors, as no starting circuit is required. However, most loads are single-phase. In North America, single-family houses and individual apartments are supplied one phase from the power grid and use a split-phase system to the panelboard, from which most branch circuits will carry 120 V. Circuits designed for higher-powered devices such as stoves, dryers, or outlets for electric vehicles carry 240 V. In Europe, three-phase power is normally delivered to the panelboard and further to higher-powered devices. Generation and distribution At the power station, an electrical generator converts mechanical power into a set of three AC electric currents, one from each coil (or winding) of the generator. The windings are arranged such that the currents are at the same frequency but with the peaks and troughs of their waveforms offset to provide three complementary currents with a phase separation of one-third cycle (120° or 2π/3 radians). The generator frequency is typically 50 or 60 Hz, depending on the country. At the power station, transformers change the voltage from generators to a level suitable for transmission in order to minimize losses. After further voltage conversions in the transmission network, the voltage is finally transformed to the standard utilization voltage before power is supplied to customers. Most automotive alternators generate three-phase AC and rectify it to DC with a diode bridge.
Transformer connections A "delta" (Δ) connected transformer winding is connected between phases of a three-phase system. A "wye" (Y) transformer connects each winding from a phase wire to a common neutral point. A single three-phase transformer can be used, or three single-phase transformers. In an "open delta" or "V" system, only two transformers are used. A closed delta made of three single-phase transformers can operate as an open delta if one of the transformers has failed or needs to be removed. In open delta, each transformer must carry current for its respective phases as well as current for the third phase, and therefore capacity is reduced to 87%. With one of three transformers missing and the remaining two at 87% efficiency, the capacity is 58% (2/3 of 87%) (H. W. Beaty, D. G. Fink (ed.), Standard Handbook for Electrical Engineers, 15th ed., McGraw-Hill, 2007, pp. 10–11). Where a delta-fed system must be grounded for detection of stray current to ground or protection from surge voltages, a grounding transformer (usually a zigzag transformer) may be connected to allow ground fault currents to return from any phase to ground. Another variation is a "corner grounded" delta system, which is a closed delta that is grounded at one of the junctions of transformers. Three-wire and four-wire circuits There are two basic three-phase configurations: wye (Y) and delta (Δ). As shown in the diagram, a delta configuration requires only three wires for transmission, but a wye (star) configuration may have a fourth wire. The fourth wire, if present, is provided as a neutral and is normally grounded. The three-wire and four-wire designations do not count the ground wire present above many transmission lines, which is solely for fault protection and does not carry current under normal use. A four-wire system with symmetrical voltages between phase and neutral is obtained when the neutral is connected to the "common star point" of all supply windings. In such a system, all three phases will have the same magnitude of voltage relative to the neutral. Other non-symmetrical systems have been used. The four-wire wye system is used when a mixture of single-phase and three-phase loads is to be served, such as mixed lighting and motor loads. An example of application is local distribution in Europe (and elsewhere), where each customer may be fed from only one phase and the neutral (which is common to the three phases). When a group of customers sharing the neutral draw unequal phase currents, the common neutral wire carries the currents resulting from these imbalances. Electrical engineers try to design the three-phase power system for any one location so that the power drawn from each of the three phases is the same, as far as possible at that site. Electrical engineers also try to arrange the distribution network so the loads are balanced as much as possible, since the same principles that apply to individual premises also apply to the wide-scale distribution system. Hence, every effort is made by supply authorities to distribute the power drawn on each of the three phases over a large number of premises so that, on average, as nearly as possible a balanced load is seen at the point of supply.
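The open-delta capacity figures quoted above (87% and 58%) follow from √3 arithmetic; the short Python check below reproduces them as a numerical illustration:

import math

# Each of the two remaining transformers can be loaded to sqrt(3)/2 = 86.6%
# of its rating, and the bank delivers 1/sqrt(3) = 57.7% of the original
# three-transformer capacity (2/3 of 86.6%), matching the rounded figures.
per_transformer = math.sqrt(3) / 2
bank = (2 / 3) * per_transformer  # equivalently 1 / sqrt(3)

print(f"per-transformer loading: {per_transformer:.1%}")  # 86.6%
print(f"open-delta bank capacity: {bank:.1%}")            # 57.7%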
For domestic use, some countries such as the UK may supply one phase and neutral at a high current (up to 100 A) to one property, while others such as Germany may supply 3 phases and neutral to each customer, but at a lower fuse rating, typically 40–63 A per phase, and "rotated" to avoid the effect that more load tends to be put on the first phase. Based on the wye (Y) and delta (Δ) connections, there are generally four different types of three-phase transformer winding connections for transmission and distribution purposes: wye (Y) – wye (Y) is used for small current and high voltage; delta (Δ) – delta (Δ) is used for large currents and low voltages; delta (Δ) – wye (Y) is used for step-up transformers, i.e., at generating stations; and wye (Y) – delta (Δ) is used for step-down transformers, i.e., at the end of the transmission. In North America, a high-leg delta supply is sometimes used where one winding of a delta-connected transformer feeding the load is center-tapped and that center tap is grounded and connected as a neutral, as shown in the second diagram. This setup produces three different voltages: if the voltage between the center tap (neutral) and each of the top and bottom taps (phase and anti-phase) is 120 V (100%), the voltage across the phase and anti-phase lines is 240 V (200%), and the neutral to "high leg" voltage is ≈ 208 V (173%). The reason for providing the delta-connected supply is usually to power large motors requiring a rotating field. However, the premises concerned will also require the "normal" North American 120 V supplies, two of which are derived (180 degrees "out of phase") between the "neutral" and either of the center-tapped phase points. Balanced circuits In the perfectly balanced case all three lines share equivalent loads. Examining the circuits, we can derive relationships between line voltage and current, and load voltage and current for wye- and delta-connected loads. In a balanced system each line will produce equal voltage magnitudes at phase angles equally spaced from each other. With V1 as our reference and V3 lagging V2 lagging V1, using angle notation, and VLN the voltage between the line and the neutral, we have:

V1 = VLN∠0°, V2 = VLN∠−120°, V3 = VLN∠−240° (equivalently +120°).

These voltages feed into either a wye- or delta-connected load. Wye (or star; Y) The voltage seen by the load will depend on the load connection; for the wye case, connecting each load to the phase (line-to-neutral) voltages gives

I1 = V1/Ztotal = (VLN/|Ztotal|)∠−θ, I2 = (VLN/|Ztotal|)∠(−120° − θ), I3 = (VLN/|Ztotal|)∠(−240° − θ),

where Ztotal is the sum of line and load impedances (Ztotal = ZLN + ZY), and θ is the phase of the total impedance (Ztotal). The phase angle difference between voltage and current of each phase is not necessarily 0 and depends on the type of load impedance, ZY. Inductive and capacitive loads will cause current to either lag or lead the voltage. However, the relative phase angle between each pair of lines (1 to 2, 2 to 3, and 3 to 1) will still be −120°. By applying Kirchhoff's current law (KCL) to the neutral node, the three phase currents sum to the total current in the neutral line. In the balanced case:

IN = I1 + I2 + I3 = 0.

Delta (Δ) In the delta circuit, loads are connected across the lines, and so loads see line-to-line voltages:

V12 = V1 − V2 = √3 VLN∠30°, V23 = V2 − V3 = √3 VLN∠−90°, V31 = V3 − V1 = √3 VLN∠150°.

(Φv1 is the phase shift for the first voltage, commonly taken to be 0°; in this case, Φv2 = −120° and Φv3 = −240° or 120°.) Further:

I12 = V12/ZΔ = (√3 VLN/|ZΔ|)∠(30° − θ), I23 = (√3 VLN/|ZΔ|)∠(−90° − θ), I31 = (√3 VLN/|ZΔ|)∠(150° − θ),

where θ is the phase of the delta impedance (ZΔ). Relative angles are preserved, so I31 lags I23 lags I12 by 120°. Calculating line currents by using KCL at each delta node gives

I1 = I12 − I31 = (3 VLN/|ZΔ|)∠−θ,

and similarly for each other line:

I2 = (3 VLN/|ZΔ|)∠(−120° − θ), I3 = (3 VLN/|ZΔ|)∠(−240° − θ),

where, again, θ is the phase of the delta impedance (ZΔ).
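These balanced-load relations can be verified with complex arithmetic, treating Python complex numbers as phasors. The 230 V figure and the impedances below are illustrative assumptions:

import cmath
import math

V_LN = 230.0
v1, v2, v3 = (cmath.rect(V_LN, math.radians(a)) for a in (0, -120, -240))

# Wye: phase currents V/Z_total sum to zero at the neutral node (KCL).
z_total = complex(10, 5)                     # assumed Z_LN + Z_Y, ohms
i_wye = [vk / z_total for vk in (v1, v2, v3)]
print("neutral current magnitude:", abs(sum(i_wye)))      # ~0

# Delta: loads see line-to-line voltages, sqrt(3) larger than V_LN.
v12, v31 = v1 - v2, v3 - v1
print("V12 magnitude / V_LN:", abs(v12) / V_LN)           # ~1.732
print("V12 angle (deg):", math.degrees(cmath.phase(v12))) # ~30

# Line current I1 = I12 - I31 is sqrt(3) times the in-delta current.
# Choosing Z_delta = 3 * Z_wye draws the same total power (see below).
z_delta = 3 * z_total
i12, i31 = v12 / z_delta, v31 / z_delta
print("I1 / I12:", abs(i12 - i31) / abs(i12))             # ~1.732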
Inspection of a phasor diagram, or conversion from phasor notation to complex notation, illuminates how the difference between two line-to-neutral voltages yields a line-to-line voltage that is greater by a factor of √3. As a delta configuration connects a load across phases of a transformer, it delivers the line-to-line voltage difference, which is √3 times greater than the line-to-neutral voltage delivered to a load in the wye configuration. As the power transferred is V²/Z, the impedance in the delta configuration must be 3 times what it would be in a wye configuration for the same power to be transferred. Single-phase loads Except in a high-leg delta system and a corner-grounded delta system, single-phase loads may be connected across any two phases, or a load can be connected from phase to neutral. Distributing single-phase loads among the phases of a three-phase system balances the load and makes the most economical use of conductors and transformers. In a symmetrical three-phase four-wire wye system, the three phase conductors have the same voltage to the system neutral. The voltage between line conductors is √3 times the phase-conductor-to-neutral voltage: VLL = √3 VLN. The currents returning from the customers' premises to the supply transformer all share the neutral wire. If the loads are evenly distributed on all three phases, the sum of the returning currents in the neutral wire is approximately zero. Any unbalanced phase loading on the secondary side of the transformer will use the transformer capacity inefficiently. If the supply neutral is broken, phase-to-neutral voltage is no longer maintained. Phases with higher relative loading will experience reduced voltage, and phases with lower relative loading will experience elevated voltage, up to the phase-to-phase voltage. A high-leg delta provides a phase-to-neutral relationship of VLL = 2 VLN on the center-tapped winding; however, the LN load is imposed on one phase. A transformer manufacturer's page suggests that LN loading not exceed 5% of transformer capacity. Since √3 ≈ 1.73, defining VLN as 100% gives VLL ≈ 173%; if VLL is set as 100%, then VLN ≈ 58%. Unbalanced loads When the currents on the three live wires of a three-phase system are not equal or are not at an exact 120° phase angle, the power loss is greater than for a perfectly balanced system. The method of symmetrical components is used to analyze unbalanced systems. Non-linear loads With linear loads, the neutral only carries the current due to imbalance between the phases. Gas-discharge lamps and devices that utilize a rectifier-capacitor front end, such as switch-mode power supplies, computers, and office equipment, produce third-order harmonics that are in phase on all the supply phases. Consequently, such harmonic currents add in the neutral in a wye system (or in the grounded (zigzag) transformer in a delta system), which can cause the neutral current to exceed the phase current. Three-phase loads An important class of three-phase load is the electric motor. A three-phase induction motor has a simple design, inherently high starting torque and high efficiency. Such motors are applied in industry for many applications. A three-phase motor is more compact and less costly than a single-phase motor of the same voltage class and rating, and large single-phase AC motors are uncommon. Three-phase motors also vibrate less and hence last longer than single-phase motors of the same power used under the same conditions. Resistive heating loads such as electric boilers or space heating may be connected to three-phase systems.
Electric lighting may also be similarly connected. Line-frequency flicker in light is detrimental to high-speed cameras used in sports event broadcasting for slow-motion replays. It can be reduced by evenly spreading line-frequency operated light sources across the three phases so that the illuminated area is lit from all three phases. This technique was applied successfully at the 2008 Beijing Olympics. Rectifiers may use a three-phase source to produce a six-pulse DC output. The output of such rectifiers is much smoother than rectified single phase and, unlike single-phase, does not drop to zero between pulses. Such rectifiers may be used for battery charging, electrolysis processes such as aluminium production and the electric arc furnace used in steelmaking, and for operation of DC motors. Zigzag transformers may make the equivalent of six-phase full-wave rectification, twelve pulses per cycle, and this method is occasionally employed to reduce the cost of the filtering components, while improving the quality of the resulting DC. In many European countries electric stoves are usually designed for a three-phase feed with permanent connection. Individual heating units are often connected between phase and neutral to allow for connection to a single-phase circuit if three-phase is not available. Other usual three-phase loads in the domestic field are tankless water heating systems and storage heaters. Homes in Europe have standardized on a nominal 230 V ±10% between any phase and ground. Most groups of houses are fed from a three-phase street transformer so that individual premises with above-average demand can be fed with a second or third phase connection. Phase converters Phase converters are used when three-phase equipment needs to be operated on a single-phase power source. They are used when three-phase power is not available or the cost is not justifiable. Such converters may also allow the frequency to be varied, allowing speed control. Some railway locomotives use a single-phase source to drive three-phase motors fed through an electronic drive. A rotary phase converter is a three-phase motor with special starting arrangements and power-factor correction that produces balanced three-phase voltages. When properly designed, these rotary converters can allow satisfactory operation of a three-phase motor on a single-phase source. In such a device, the energy storage is performed by the inertia (flywheel effect) of the rotating components. An external flywheel is sometimes found on one or both ends of the shaft. A three-phase generator can be driven by a single-phase motor. This motor-generator combination can provide a frequency-changer function as well as phase conversion, but requires two machines with all their expenses and losses. The motor-generator method can also form an uninterruptible power supply when used in conjunction with a large flywheel and a battery-powered DC motor; such a combination will deliver nearly constant power, compared to the temporary frequency drop experienced with a standby generator set until the standby generator kicks in. Capacitors and autotransformers can be used to approximate a three-phase system in a static phase converter, but the voltage and phase angle of the additional phase may only be useful for certain loads. Variable-frequency drives and digital phase converters use power electronic devices to synthesize a balanced three-phase supply from single-phase input power.
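The six-pulse behaviour described above is simple to simulate: the ideal rectifier output follows the largest instantaneous line-to-line voltage, so it ripples between about 86.6% and 100% of peak and never drops to zero. A minimal Python sketch:

import math

def six_pulse(t, omega=2 * math.pi * 50):
    """Instantaneous output of an ideal three-phase full-wave rectifier
    (unit peak), as the max over the six rectified phase pairs."""
    return max(abs(math.sin(omega * t - k * math.pi / 3)) for k in range(3))

samples = [six_pulse(i / 20000) for i in range(400)]  # one 50 Hz cycle
print(f"min {min(samples):.3f}, max {max(samples):.3f}")  # ~0.866 .. 1.000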
Testing Verification of the phase sequence in a circuit is of considerable practical importance. Two sources of three-phase power must not be connected in parallel unless they have the same phase sequence, for example, when connecting a generator to an energized distribution network or when connecting two transformers in parallel. Otherwise, the interconnection will behave like a short circuit, and excess current will flow. The direction of rotation of three-phase motors can be reversed by interchanging any two phases; it may be impractical or harmful to test a machine by momentarily energizing the motor to observe its rotation. Phase sequence of two sources can be verified by measuring voltage between pairs of terminals and observing that terminals with very low voltage between them will have the same phase, whereas pairs that show a higher voltage are on different phases. Where the absolute phase identity is not required, phase rotation test instruments can be used to identify the rotation sequence with one observation. The phase rotation test instrument may contain a miniature three-phase motor, whose direction of rotation can be directly observed through the instrument case. Another pattern uses a pair of lamps and an internal phase-shifting network to display the phase rotation. Another type of instrument can be connected to a de-energized three-phase motor and can detect the small voltages induced by residual magnetism, when the motor shaft is rotated by hand. A lamp or other indicator lights to show the sequence of voltages at the terminals for the given direction of shaft rotation. Alternatives to three-phase Split-phase electric power Used when three-phase power is not available and allows double the normal utilization voltage to be supplied for high-power loads. Two-phase electric power Uses two AC voltages, with a 90-electrical-degree phase shift between them. Two-phase circuits may be wired with two pairs of conductors, or two wires may be combined, requiring only three wires for the circuit. Currents in the common conductor add to 1.4 times (√2) the current in the individual phases, so the common conductor must be larger. Two-phase and three-phase systems can be interconnected by a Scott-T transformer, invented by Charles F. Scott. Very early AC machines, notably the first generators at Niagara Falls, used a two-phase system, and some remnant two-phase distribution systems still exist, but three-phase systems have displaced the two-phase system for modern installations. Monocyclic power An asymmetrical modified two-phase power system used by General Electric around 1897, championed by Charles Proteus Steinmetz and Elihu Thomson. This system was devised to avoid patent infringement. In this system, a generator was wound with a full-voltage single-phase winding intended for lighting loads and with a small fraction (usually 1/4 of the line voltage) winding that produced a voltage in quadrature with the main windings. The intention was to use this "power wire" additional winding to provide starting torque for induction motors, with the main winding providing power for lighting loads. After the expiration of the Westinghouse patents on symmetrical two-phase and three-phase power distribution systems, the monocyclic system fell out of use; it was difficult to analyze and did not last long enough for satisfactory energy metering to be developed. High-phase-order systems Have been built and tested for power transmission. Such transmission lines typically would use six or twelve phases.
High-phase-order transmission lines allow transfer of slightly less than proportionately higher power through a given volume without the expense of a high-voltage direct current (HVDC) converter at each end of the line. However, they require correspondingly more pieces of equipment. DC AC was historically used because it could be easily transformed to higher voltages for long-distance transmission. However, modern power electronics can raise the voltage of DC with high efficiency, and DC lacks the skin effect, which permits transmission wires to be lighter and cheaper; high-voltage direct current therefore gives lower losses over long distances. Color codes Conductors of a three-phase system are usually identified by a color code, to facilitate balanced loading and to assure the correct phase rotation for motors. Colors used may adhere to International Standard IEC 60446 (later IEC 60445), older standards or to no standard at all and may vary even within a single installation. For example, in the U.S. and Canada, different color codes are used for grounded (earthed) and ungrounded systems.
Lever
A lever is a simple machine consisting of a beam or rigid rod pivoted at a fixed hinge, or fulcrum. A lever is a rigid body capable of rotating on a point on itself. On the basis of the locations of fulcrum, load and effort, the lever is divided into three types. It is one of the six simple machines identified by Renaissance scientists. A lever amplifies an input force to provide a greater output force, which is said to provide leverage, which is the mechanical advantage gained in the system, equal to the ratio of the output force to the input force. As such, the lever is a mechanical advantage device, trading off force against movement. Etymology The word "lever" entered English around 1300 from . This sprang from the stem of the verb lever, meaning "to raise". The verb, in turn, goes back to , itself from the adjective levis, meaning "light" (as in "not heavy"). The word's primary origin is the Proto-Indo-European stem , meaning "light", "easy" or "nimble", among other things. The PIE stem also gave rise to the English word "light". History of the lever The earliest evidence of the lever mechanism dates back to the ancient Near East, when it was first used in a simple balance scale. In ancient Egypt, a foot pedal was used for the earliest horizontal frame loom. In Mesopotamia (modern Iraq), the shadouf, a crane-like device that uses a lever mechanism, was invented. In ancient Egypt, workmen used the lever to move and uplift obelisks weighing more than 100 tons. This is evident from the recesses in the large blocks and the handling bosses, which could not be used for any purpose other than for levers. The earliest remaining writings regarding levers date from the 3rd century BC and were provided, by common belief, by the Greek mathematician Archimedes, who famously stated "Give me a lever long enough and a fulcrum on which to place it, and I shall move the world." Autumn Stanley argues that the digging stick can be considered the first lever, which would position prehistoric women as the inventors of lever technology. Force and levers A lever is a beam connected to ground by a hinge, or pivot, called a fulcrum. The ideal lever does not dissipate or store energy, which means there is no friction in the hinge or bending in the beam. In this case, the power into the lever equals the power out, and the ratio of output to input force is given by the ratio of the distances from the fulcrum to the points of application of these forces. This is known as the law of the lever. The mechanical advantage of a lever can be determined by considering the balance of moments or torque, T, about the fulcrum. (If the distance traveled is greater, then the output force is lessened.) The two moments are

T1 = F1 a and T2 = F2 b,

where F1 is the input force to the lever and F2 is the output force. The distances a and b are the perpendicular distances between the forces and the fulcrum. Since the moments of torque must be balanced, T1 = T2. So, F1 a = F2 b. The mechanical advantage of a lever is the ratio of output force to input force:

MA = F2/F1 = a/b.

This relationship shows that the mechanical advantage can be computed from the ratio of the distances from the fulcrum to where the input and output forces are applied to the lever, assuming a weightless lever and no losses due to friction, flexibility or wear. This remains true even though the "horizontal" distance (perpendicular to the pull of gravity) of both a and b change (diminish) as the lever changes to any position away from the horizontal.
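The moment balance above translates directly into code. A minimal Python sketch of the ideal (frictionless, weightless) lever:

def lever_output_force(input_force_n, effort_arm_m, load_arm_m):
    """Ideal lever: moment balance F1 * a = F2 * b gives F2 = F1 * a / b."""
    return input_force_n * effort_arm_m / load_arm_m

# 100 N applied 2 m from the fulcrum, load 0.5 m away: 4x amplification.
print(lever_output_force(100.0, 2.0, 0.5), "N")  # 400.0 N
print("mechanical advantage:", 2.0 / 0.5)        # 4.0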
Types of levers Levers are classified by the relative positions of the fulcrum, effort and resistance (or load). It is common to call the input force "effort" and the output force "load" or "resistance". This allows the identification of three classes of levers by the relative locations of the fulcrum, the resistance and the effort: Class I – Fulcrum is located between the effort and the resistance: The effort is applied on one side of the fulcrum and the resistance (or load) on the other side. For example, a seesaw, a crowbar, a pair of scissors, a balance scale, a pair of pliers, and a claw hammer (pulling a nail). With the fulcrum in the middle, the lever's mechanical advantage may be greater than, less than, or even equal to 1. Class II – Resistance (or load) is located between the effort and the fulcrum: The effort is applied on one side of the resistance and the fulcrum is located on the other side, e.g. a wheelbarrow, a nutcracker, a bottle opener, a wrench, and the brake pedal of a car. Since the load arm is smaller than the effort arm, the lever's mechanical advantage is always greater than 1. It is also called a force multiplier lever. Class III – Effort is located between the resistance and the fulcrum: The resistance (or load) is applied on one side of the effort and the fulcrum is located on the other side, e.g. a pair of tweezers, a hammer, a pair of tongs, a fishing rod, and the mandible of a human skull. Since the effort arm is smaller than the load arm, the lever's mechanical advantage is always less than 1. It is also called a speed multiplier lever. These cases are described by the mnemonic fre 123 where the f fulcrum is between r and e for the 1st class lever, the r resistance is between f and e for the 2nd class lever, and the e effort is between f and r for the 3rd class lever. Compound lever A compound lever comprises several levers acting in series: the resistance from one lever in a system of levers acts as effort for the next, and thus the applied force is transferred from one lever to the next. Examples of compound levers include scales, nail clippers and piano keys. The malleus, incus and stapes are small bones in the middle ear, connected as compound levers, that transfer sound waves from the eardrum to the oval window of the cochlea. Law of the lever The lever is a movable bar that pivots on a fulcrum attached to a fixed point. The lever operates by applying forces at different distances from the fulcrum, or pivot. As the lever rotates around the fulcrum, points further from this pivot move faster than points closer to the pivot. Therefore, a force applied to a point further from the pivot must be less than the force located at a point closer in, because power is the product of force and velocity. If a and b are distances from the fulcrum to points A and B and the force FA applied to A is the input and the force FB applied at B is the output, the ratio of the velocities of points A and B is given by a/b, so the ratio of the output force to the input force, or mechanical advantage, is given by

MA = FB/FA = a/b.

This is the law of the lever, which was proven by Archimedes using geometric reasoning. It shows that if the distance a from the fulcrum to where the input force is applied (point A) is greater than the distance b from the fulcrum to where the output force is applied (point B), then the lever amplifies the input force.
On the other hand, if the distance a from the fulcrum to the input force is less than the distance b from the fulcrum to the output force, then the lever reduces the input force. The use of velocity in the static analysis of a lever is an application of the principle of virtual work. Virtual work and the law of the lever A lever is modeled as a rigid bar connected to a ground frame by a hinged joint called a fulcrum. The lever is operated by applying an input force FA at a point A located by the coordinate vector rA on the bar. The lever then exerts an output force FB at the point B located by rB. The rotation of the lever about the fulcrum P is defined by the rotation angle θ in radians. Let the coordinate vector of the point P that defines the fulcrum be rP, and introduce the lengths

a = |rA − rP| and b = |rB − rP|,

which are the distances from the fulcrum to the input point A and to the output point B, respectively. Now introduce the unit vectors eA and eB from the fulcrum to the points A and B, so that

rA = rP + a eA and rB = rP + b eB.

The velocities of the points A and B are obtained as

vA = θ̇ a eA⊥ and vB = θ̇ b eB⊥,

where eA⊥ and eB⊥ are unit vectors perpendicular to eA and eB, respectively. The angle θ is the generalized coordinate that defines the configuration of the lever, and the generalized force associated with this coordinate is given by

Fθ = FA · vA/θ̇ − FB · vB/θ̇ = a FA⊥ − b FB⊥,

where FA⊥ and FB⊥ are the components of the forces that are perpendicular to the radial segments PA and PB. The principle of virtual work states that at equilibrium the generalized force is zero, that is,

Fθ = a FA⊥ − b FB⊥ = 0.

Thus, the ratio of the output force FB to the input force FA is obtained as

MA = FB/FA = a/b,

which is the mechanical advantage of the lever. This equation shows that if the distance a from the fulcrum to the point A where the input force is applied is greater than the distance b from the fulcrum to the point B where the output force is applied, then the lever amplifies the input force. If the opposite is true, that the distance from the fulcrum to the input point A is less than from the fulcrum to the output point B, then the lever reduces the magnitude of the input force.
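The virtual-work argument can also be checked numerically: point speeds scale with distance from the fulcrum, so the velocity ratio reproduces the mechanical advantage a/b, and the generalized force vanishes when FA·a = FB·b. A small Python sketch with invented values:

a, b = 2.0, 0.5      # fulcrum-to-input and fulcrum-to-output distances (m)
theta_dot = 0.1      # assumed angular velocity (rad/s)

v_a = theta_dot * a  # speed of input point A
v_b = theta_dot * b  # speed of output point B
print("velocity ratio v_A / v_B:", v_a / v_b)    # = a/b = 4.0

# Equilibrium (zero generalized force): F_A * a = F_B * b.
f_a = 100.0
f_b = f_a * a / b
print("generalized force:", f_a * a - f_b * b)   # 0.0 at equilibrium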
Flax
Flax, also known as common flax or linseed, is a flowering plant, Linum usitatissimum, in the family Linaceae. It is cultivated as a food and fiber crop in regions of the world with temperate climates. In 2022, France produced 75% of the world's supply of flax. Textiles made from flax are known in English as linen, and are traditionally used for bed sheets, underclothes, and table linen. Its oil is known as linseed oil. In addition to referring to the plant, the word "flax" may refer to the unspun fibers of the flax plant. The plant species is known only as a cultivated plant and appears to have been domesticated just once from the wild species Linum bienne, called pale flax. The plants called "flax" in New Zealand are, by contrast, members of the genus Phormium. Description Several other species in the genus Linum are similar in appearance to L. usitatissimum, cultivated flax, including some that have similar blue flowers, and others with white, yellow, or red flowers. Some of these are perennial plants, unlike L. usitatissimum, which is an annual plant. Cultivated flax plants grow to tall, with slender stems. The leaves are glaucous green, slender lanceolate, long, and 3 mm broad. The flowers are 15–25 mm in diameter with five petals, which can be coloured white, blue, yellow, and red depending on the species. The fruit is a round, dry capsule 5–9 mm in diameter, containing several glossy brown seeds shaped like apple pips, 4–7 mm long. Cultivation Flax is native to the region extending from the eastern Mediterranean to India and was first domesticated in the Fertile Crescent. The soils most suitable for flax, besides the alluvial kind, are deep loams containing a large proportion of organic matter. Flax is often found growing just above the waterline in cranberry bogs. Heavy clays are unsuitable, as are soils of a gravelly or dry sandy nature. Farming flax requires few fertilizers or pesticides. Within eight weeks of sowing, the plant can reach in height, reaching within 50 days. History The earliest evidence of humans using wild flax as a textile comes from the present-day Republic of Georgia, where spun, dyed, and knotted wild flax fibers found in Dzudzuana Cave date to the Upper Paleolithic, 30,000 years ago. Humans first domesticated flax in the Fertile Crescent region. Evidence exists of a domesticated oilseed flax with increased seed-size from Tell Ramad in Syria and flax fabric fragments from Çatalhöyük in Turkey by years ago. Use of the crop steadily spread, reaching as far as Switzerland and Germany by 5,000 years ago. In China and India, domesticated flax was cultivated at least 5,000 years ago. Flax was cultivated extensively in ancient Egypt, where the temple walls had paintings of flowering flax, and mummies were embalmed using linen. Egyptian priests wore only linen, as flax was considered a symbol of purity. Phoenicians traded Egyptian linen throughout the Mediterranean and the Romans used it for their sails. As the Roman Empire declined, so did flax production. But with laws designed to publicize the hygiene of linen textiles and the health of linseed oil, Charlemagne revived the crop in the eighth century CE. Eventually, Flanders became the major center of the European linen industry in the Middle Ages. In North America, colonists introduced flax, and it flourished there, but by the early 20th century, cheap cotton and rising farm wages had caused production of flax to become concentrated in northern Russia, which came to provide 90% of the world's output. 
Since then, flax has lost its importance as a commercial crop, due to the easy availability of more inexpensive synthetic fibres. Production In 2022, world production of raw or retted flax was 875,995 tonnes, led by France with 75% of the total. One of the largest regions in France for flax production is Normandy, with nearly one-third of the world's production. Harvesting Maturation Flax is harvested for fiber production after about 100 days, or a month after the plants flower and two weeks after the seed capsules form. The bases of the plants begin to turn yellow. If the plants are still green, the seed will not be useful, and the fiber will be underdeveloped. The fiber degrades once the plants turn brown. Flax grown for seed is allowed to mature until the seed capsules are yellow and just starting to split; it is then harvested in various ways. A combine harvester may either cut only the heads of the plants, or the whole plant. These are then dried to extract the seed. The amount of weeds in the straw affects its marketability, and this, coupled with market prices, determines whether the farmer chooses to harvest the flax straw. If the flax straw is not harvested, it is typically burned, since the stalks are quite tough and decompose slowly (i.e., not in a single season). Formed into windrows from the harvesting process, the straw often clogs up tillage and planting equipment. Flax straw that is not of sufficient quality for fiber uses can be baled to build shelters for farm animals, sold as biofuel, or removed from the field in the spring. Two ways are used to harvest flax fiber: one involves mechanized equipment (combines); the second method is more manual and targets maximum fiber length. Harvesting for fiber Mechanical Flax for fiber production is usually harvested by a specialized flax harvester. It is usually built on the same machine base as a combine, but instead of a cutting head it has a flax puller. The flax plant is turned over and is gripped by rubber belts roughly 20–25 cm (8–10 inches) above ground, to avoid getting grasses and weeds in the flax. The rubber belts then pull the whole plant out of the ground with the roots, so the whole length of the plant fiber can be used. The plants then pass over the machine and are placed on the field crosswise to the harvester's direction of travel. The plants are left in the field for field retting. The mature plant can also be cut with mowing equipment, similar to hay harvesting, and raked into windrows. When dried sufficiently, a combine then harvests the seeds, similar to wheat or oat harvesting. Manual The plant is pulled up with the roots (not cut), so as to increase the fiber length. After this, the flax is allowed to dry, the seeds are removed, and it is then retted. Depending upon climatic conditions and the characteristics of the sown flax and fields, the flax remains on the ground between two weeks and two months for retting. As a result of alternating rain and sun, an enzymatic action degrades the pectins which bind fibers to the straw. The farmers turn over the straw during retting to rett the stalks evenly. When the straw is retted and sufficiently dry, it is rolled up. It is then stored by farmers before the fibers are extracted. Processing Threshing is the process of removing the seeds from the rest of the plant. Separating the usable flax fibers from other components requires pulling the stems through a hackle and/or beating the plants to break them.
Flax processing is divided into two parts: the first, generally done by the farmer, brings the flax fiber into a fit state for general or common purposes. This can be performed by three machines: one for threshing out the seed, one for breaking and separating the straw (stem) from the fiber, and one for further separating the broken straw and matter from the fiber. The second part of the process brings the flax into a state for the very finest purposes, such as lace, cambric, damask, and very fine linen. This second part is performed by a refining machine. Uses Flax is grown for its seeds, which can be ground into a meal or turned into linseed oil, a product used as a nutritional supplement and as an ingredient in many wood-finishing products. Flax is also grown as an ornamental plant in gardens. Moreover, flax fibers are used to make linen. The specific epithet in its binomial name, usitatissimum, means "most useful". Flax fibers taken from the stem of the plant are two to three times as strong as cotton fibers. Additionally, flax fibers are naturally smooth and straight. Europe and North America both depended on flax for plant-based cloth until the 19th century, when cotton overtook flax as the most common plant used for making rag-based paper. Flax is grown on the Canadian prairies for linseed oil, which is used as a drying oil in paints and varnishes and in products such as linoleum and printing inks. Linseed meal, the by-product of producing linseed oil from flax seeds, is used as livestock fodder. Flax seeds Flax seeds occur in brown and yellow (golden) varieties. Most types of these basic varieties have similar nutritional characteristics and similar amounts of short-chain omega-3 fatty acids. Yellow flax seeds called solin (trade name "Linola"), however, have a markedly different oil profile and are low in omega-3s (alpha-linolenic acid (ALA), specifically), unlike the brown varieties, which are very high in ALA. Flax seeds produce a vegetable oil known as flax seed oil or linseed oil, which is one of the oldest commercial oils. It is an edible oil obtained by expeller pressing, sometimes followed by solvent extraction. Solvent-processed flax seed oil has been used for many centuries as a drying oil in painting and varnishing. Although brown flax seed varieties may be consumed as readily as the yellow ones, and have been for thousands of years, these varieties are more commonly used in paints, for fiber, and for cattle feed. Culinary A 100-gram portion of ground flax seed supplies about 534 kilocalories (2,230 kJ) of food energy, 41 g of fat, 28 g of fiber, and 20 g of protein. Whole flax seeds are chemically stable, but ground flax seed meal, because of oxidation, may go rancid when left exposed to air at room temperature in as little as a week. Refrigeration and storage in sealed containers will keep ground flax seed meal for a longer period before it turns rancid. Under conditions similar to those found in commercial bakeries, trained sensory panelists could not detect differences between bread made with freshly ground flax seed and bread made with flax seed that had been milled four months earlier and stored at room temperature. If packed immediately without exposure to air and light, milled flax seed is stable against excessive oxidation when stored for nine months at room temperature, and under warehouse conditions, for 20 months at ambient temperatures. Three phenolic glucosides—secoisolariciresinol diglucoside, p-coumaric acid glucoside, and ferulic acid glucoside—are present in commercial breads containing flax seed.
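As a rough cross-check on the calorie figure above, the general Atwater factors (about 4 kcal per gram of protein or carbohydrate, 9 kcal per gram of fat, and roughly 2 kcal per gram of fiber) can be applied to the macronutrient values given in the nutrition section below. The following is a minimal illustrative sketch, not the method used to derive the published value:

```python
# Rough energy estimate using general Atwater factors; the flax
# figures per 100 g are taken from this article's nutrition section.

def atwater_kcal(protein_g: float, carb_g: float, fat_g: float,
                 fiber_g: float = 0.0) -> float:
    """Estimate food energy in kilocalories per portion.

    Fiber is assumed to be included in carb_g and is re-scored at
    roughly 2 kcal/g instead of the 4 kcal/g used for other carbs.
    """
    digestible_carb = carb_g - fiber_g
    return protein_g * 4 + digestible_carb * 4 + fat_g * 9 + fiber_g * 2

# Whole flax seed, per 100 g: 18 g protein, 29 g carbohydrate
# (of which roughly 27 g is fiber), 42 g fat.
estimate = atwater_kcal(protein_g=18, carb_g=29, fat_g=42, fiber_g=27)
print(f"Estimated: {estimate:.0f} kcal vs. the published 534 kcal")
# Prints about 512 kcal -- in the right range; published values use
# measured, food-specific energy factors rather than these defaults.
```

The estimate lands near the published figure; the gap reflects the fact that official nutrient databases use refined, food-specific energy factors rather than the generic ones assumed here.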
Nutrition Flax seeds are 7% water, 18% protein, 29% carbohydrates, and 42% fat (table). In a 100-gram reference amount, flax seeds provide 534 kilocalories and contain high levels (20% or more of the Daily Value, DV) of protein, dietary fiber, several B vitamins, and dietary minerals. Flax seeds are especially rich in thiamine, magnesium, and phosphorus (DVs above 90%) (table). As a percentage of total fat, flax seeds contain 54% omega-3 fatty acids (mostly ALA), 18% omega-9 fatty acids (oleic acid), and 6% omega-6 fatty acids (linoleic acid); the seeds contain 9% saturated fat, including 5% as palmitic acid. Flax seed oil contains 53% 18:3 omega-3 fatty acids (mostly ALA) and 13% 18:2 omega-6 fatty acids. Health research A meta-analysis showed that consumption of more than 30 g of flax seed daily for more than 12 weeks reduced body weight, body mass index (BMI), and waist circumference for persons with a BMI greater than 27. Another meta-analysis showed that consumption of flax seeds for more than 12 weeks produced small reductions in systolic and diastolic blood pressure. A third showed that consuming flax seed or its derivatives may reduce total and LDL cholesterol in the blood, with greater benefits in women and people with high cholesterol. A fourth showed a small reduction in C-reactive protein (a marker of inflammation) only in persons with a body mass index greater than 30. Safety Flax seed and its oil are generally recognized as safe for human consumption. Like many common foods, flax contains small amounts of cyanogenic glycoside, which is nontoxic when consumed in typical amounts. Typical concentrations (for example, 0.48% in a sample of defatted dehusked flax seed meal) can be removed by special processing. Fodder After crushing the seeds to extract linseed oil, the resultant linseed meal is a protein-rich feed for ruminants, rabbits, and fish. It is also often used as feed for swine and poultry, and has also been used in horse concentrate and dog food. The high omega-3 fatty acid (ALA) content of linseed meal "softens" milk, eggs, and meat, meaning it raises their unsaturated fat content; because this fatty acid oxidises and goes rancid quickly, it also shortens their storage time. Linola was developed in Australia and introduced in the 1990s with less omega-3, specifically to serve as fodder. Another disadvantage of the meal and seed is that they contain a vitamin B6 (pyridoxine) antagonist and may require that this vitamin be supplemented, especially in chickens. Furthermore, linseeds contain 2–7% mucilage (fibre), which may be beneficial in humans and cattle but cannot be digested by non-ruminants and can be detrimental to young animals, unless possibly treated with enzymes. Linseed meal is added to cattle feed as a protein supplement. It can only be added at low percentages due to the high fat content, which is unhealthy for ruminants. Compared to oilseed meal from crucifers, it measures lower in nutrient value; however, good results are obtained in cattle, perhaps because the mucilage slows digestion and thus allows more time to absorb nutrients. One study found that feeding flax seeds may increase omega-3 content in beef, while another found no differences. It might also act as a substitute for tallow in increasing marbling.
In the US, flax-based feed for ruminants is often somewhat more expensive than other feeds on a nutrient basis. Sheep feeding on low-quality forage are able to eat a large amount of linseed meal, up to 40% of the ration in one test, with positive results. It has been fed as a supplement to water buffaloes in India, and provided a better diet than forage alone, though not as good as when soy meal was substituted. It is considered an inferior protein supplement for swine because of its fibre, the vitamin antagonist, the high omega-3 content, and its low lysine content, and can only be used in small amounts in the feed. Although it may increase the omega-3 content in eggs and meat, linseed is an inferior and potentially toxic feed for poultry, though it can be used in small amounts. The meal is an adequate and traditional source of protein for rabbits at 8–10%. Its use in fish feeds is limited. Raw, immature linseeds contain cyanogenic compounds and can be dangerous for monogastric animals, such as horses and rabbits. Boiling removes the danger. This is not an issue in meal cake because of the processing temperature during oil extraction. Flax straw left over from the harvesting of oilseed is not very nutritious; it is tough and indigestible and is not recommended for use as ruminant fodder, although it may be used as bedding or baled to form windbreaks. Flax fibers Flax fiber is extracted from the bast beneath the surface of the stem of the flax plant. Flax fiber is soft, lustrous, and flexible; bundles of fiber have the appearance of blonde hair, hence the description "flaxen" hair. It is stronger than cotton fiber, but less elastic. The use of flax fibers dates back tens of millennia; linen, a refined textile made from flax fibers, was worn widely by Sumerian priests more than 4,000 years ago. Industrial-scale flax fiber processing existed in antiquity. A Bronze Age factory dedicated to flax processing was discovered in Euonymeia, Greece. The best grades are used for fabrics such as damasks, lace, and sheeting. Coarser grades are used for the manufacturing of twine and rope, and historically, for canvas and webbing equipment. Flax fiber is a raw material used in the high-quality paper industry for printed banknotes, laboratory paper (blotting and filter), rolling paper for cigarettes, and tea bags. Flax mills for spinning flaxen yarn were invented by John Kendrew and Thomas Porthouse of Darlington, England, in 1787. New methods of processing flax have led to renewed interest in the use of flax as an industrial fiber. Preparation for spinning Before the flax fibers can be spun into linen, they must be separated from the rest of the stalk. The first step in this process is retting, which is the process of rotting away the inner stalk, leaving the outer parts intact. At this point, straw, or coarse outer stem (cortex and epidermis), still remains. To remove this, the flax is "broken": the straw is broken up into small, short bits, while the actual fiber is left unharmed. Scutching then scrapes the outer straw from the fiber. The stems are then pulled through "hackles", which act like combs, removing the straw and some shorter fibers from the long fiber. Retting flax Several methods are used for retting flax. It can be retted in a pond, stream, field, or tank. When the retting is complete, the bundles of flax feel soft and slimy, and quite a few fibers are standing out from the stalks. When wrapped around a finger, the inner woody part springs away from the fibers.
Pond retting is the fastest. It consists of placing the flax in a pool of water which will not evaporate. It generally takes place in a shallow pool which will warm up dramatically in the sun; the process may take from a few days to a few weeks. Pond-retted flax is traditionally considered of lower quality, possibly because the product can become dirty, and it is easily over-retted, damaging the fiber. This form of retting also produces quite an odor. Stream retting is similar to pond retting, but the flax is submerged in bundles in a stream or river. This generally takes two or three weeks longer than pond retting, but the end product is less likely to be dirty, does not smell as bad, and, because the water is cooler, is less likely to be over-retted. Both pond and stream retting came to be used less because they pollute the waters used for the process. In field retting, the flax is laid out in a large field, and dew is allowed to collect on it. This process normally takes a month or more, but it is generally considered to provide the highest-quality flax fibers, and it produces the least pollution. Retting can also be done in a plastic trash can or any type of water-tight container made of wood, concrete, earthenware, or plastic. Metal containers will not work, as acid produced during retting would corrode the metal. If the water temperature is kept at about 27 °C (80 °F), retting takes four or five days; if the water is any colder, it takes longer. Scum collects at the top, and the same odor as in pond retting is given off. "Enzymatic" retting of flax has been researched as a technique to engineer fibers with specific properties. Dressing the flax Dressing the flax is the process of removing the straw from the fibers. Dressing consists of three steps: breaking, scutching, and heckling. Breaking breaks up the straw into short segments; scutching scrapes some of the straw from the fiber; and heckling pulls the fiber through heckles of various sizes to remove the last bits of straw. A heckle is a bed of "nails"—sharp, long-tapered, tempered, polished steel pins driven into wooden blocks at regular spacing. Genetically modified flax contamination In September 2009, Canadian flax exports reportedly had been contaminated by a genetically modified cultivar called 'Triffid' that had food and feed safety approval in Canada and the U.S. Canadian growers and the Flax Council of Canada raised concerns about the marketability of this cultivar in Europe, where a zero-tolerance policy exists regarding unapproved genetically modified organisms. Consequently, Triffid was deregistered in 2010 and never grown commercially in Canada or the U.S. Triffid stores were destroyed, but subsequent exports and further tests at the University of Saskatchewan showed that Triffid persisted in at least two Canadian flax varieties, possibly affecting future crops. Canadian flax seed cultivars were reconstituted with Triffid-free seed used to plant the 2014 crop. Laboratories are certified to test for the presence of Triffid at a level of one seed in 10,000. In culture Flax is an emblem of Northern Ireland and is displayed by the Northern Ireland Assembly. In a coronet, it appeared on the reverse of the British one-pound coin to represent Northern Ireland on coins minted in 1986, 1991, and 2014.
Flax also represents Northern Ireland on the badge of the Supreme Court of the United Kingdom and on various logos associated with it. Common flax is the national flower of Belarus. In early versions of the Sleeping Beauty tale, such as "Sun, Moon, and Talia" by Giambattista Basile, the princess pricks her finger not on a spindle but on a sliver of flax, which is later sucked out by her children, conceived as she sleeps.
Biology and health sciences
Malpighiales
null
38890
https://en.wikipedia.org/wiki/Natural%20science
Natural science
Natural science is one of the branches of science concerned with the description, understanding and prediction of natural phenomena, based on empirical evidence from observation and experimentation. Mechanisms such as peer review and reproducibility of findings are used to try to ensure the validity of scientific advances. Natural science can be divided into two main branches: life science and physical science. Life science is alternatively known as biology, and physical science is subdivided into branches: physics, chemistry, earth science, and astronomy. These branches of natural science may be further divided into more specialized branches (also known as fields). As empirical sciences, natural sciences use tools from the formal sciences, such as mathematics and logic, converting information about nature into measurements that can be explained as clear statements of the "laws of nature". Modern natural science succeeded more classical approaches to natural philosophy. Galileo, Kepler, Descartes, Bacon, and Newton debated the benefits of using approaches which were more mathematical and more experimental in a methodical way. Still, philosophical perspectives, conjectures, and presuppositions, often overlooked, remain necessary in natural science. Systematic data collection, including discovery science, succeeded natural history, which emerged in the 16th century by describing and classifying plants, animals, minerals, and so on. Today, "natural history" suggests observational descriptions aimed at popular audiences. Criteria Philosophers of science have suggested several criteria, including Karl Popper's controversial falsifiability criterion, to help them differentiate scientific endeavors from non-scientific ones. Validity, accuracy, and quality control, such as peer review and reproducibility of findings, are amongst the most respected criteria in today's global scientific community. In natural science, impossibility assertions come to be widely accepted as overwhelmingly probable rather than considered proven to the point of being unchallengeable. The basis for this strong acceptance is a combination of extensive evidence of something not occurring, combined with an underlying theory, very successful in making predictions, whose assumptions lead logically to the conclusion that something is impossible. While an impossibility assertion in natural science can never be proved, it could be refuted by the observation of a single counterexample. Such a counterexample would require that the assumptions underlying the theory that implied the impossibility be re-examined. Branches of natural science Biology This field encompasses a diverse set of disciplines that examine phenomena related to living organisms. The scale of study can range from sub-component biophysics up to complex ecologies. Biology is concerned with the characteristics, classification and behaviors of organisms, as well as how species were formed and their interactions with each other and the environment. The biological fields of botany, zoology, and medicine date back to early periods of civilization, while microbiology was introduced in the 17th century with the invention of the microscope. However, it was not until the 19th century that biology became a unified science. Once scientists discovered commonalities between all living things, it was decided they were best studied as a whole. 
Some key developments in biology were the discovery of genetics, evolution through natural selection, the germ theory of disease, and the application of the techniques of chemistry and physics at the level of the cell or organic molecule. Modern biology is divided into subdisciplines by the type of organism and by the scale being studied. Molecular biology is the study of the fundamental chemistry of life, while cellular biology is the examination of the cell, the basic building block of all life. At a higher level, anatomy and physiology look at the internal structures of an organism and their functions, while ecology looks at how various organisms interrelate. Earth science Earth science (also known as geoscience) is an all-embracing term for the sciences related to the planet Earth, including geology, geography, geophysics, geochemistry, climatology, glaciology, hydrology, meteorology, and oceanography. Although mining and precious stones have been human interests throughout the history of civilization, the development of the related sciences of economic geology and mineralogy did not occur until the 18th century. The study of the earth, particularly paleontology, blossomed in the 19th century. The growth of other disciplines, such as geophysics, in the 20th century led to the development of the theory of plate tectonics in the 1960s, which has had an effect on the Earth sciences similar to the effect the theory of evolution had on biology. Earth sciences today are closely linked to petroleum and mineral resources, climate research, and environmental assessment and remediation. Atmospheric sciences Although sometimes considered in conjunction with the earth sciences, owing to the independent development of its concepts, techniques, and practices, and also because it has a wide range of sub-disciplines under its wing, atmospheric science is also considered a separate branch of natural science. This field studies the characteristics of the different layers of the atmosphere from ground level to the edge of space. Timescales of study range from days to centuries. Sometimes, the field also includes the study of climatic patterns on planets other than Earth. Oceanography The serious study of oceans began in the early- to mid-20th century. As a field of natural science, it is relatively young, but stand-alone programs offer specializations in the subject. Though some controversy remains as to the categorization of the field under earth sciences, interdisciplinary sciences, or as a separate field in its own right, most modern workers in the field agree that it has matured to the point that it has its own paradigms and practices. Planetary science Planetary science, or planetology, is the scientific study of planets, which include terrestrial planets like the Earth and other types of planets, such as gas giants and ice giants. Planetary science also concerns other celestial bodies, such as dwarf planets, moons, asteroids, and comets. This largely includes the Solar System, but the field has recently begun to expand to exoplanets, particularly terrestrial exoplanets. It explores various objects, spanning from micrometeoroids to gas giants, to establish their composition, movements, genesis, interrelation, and past.
Planetary science is an interdisciplinary domain, having originated from astronomy and Earth science, and currently encompassing a multitude of areas, such as planetary geology, cosmochemistry, atmospheric science, physics, oceanography, hydrology, theoretical planetology, glaciology, and exoplanetology. Related fields encompass space physics, which delves into the impact of the Sun on the bodies in the Solar System, and astrobiology. Planetary science comprises interconnected observational and theoretical branches. Observational research entails a combination of space exploration, primarily through robotic spacecraft missions utilizing remote sensing, and comparative experimental work conducted in Earth-based laboratories. The theoretical aspect involves extensive mathematical modelling and computer simulation. Typically, planetary scientists are situated within astronomy and physics or Earth sciences departments in universities or research centers. However, there are also dedicated planetary science institutes worldwide. Generally, individuals pursuing a career in planetary science undergo graduate-level studies in one of the Earth sciences, astronomy, astrophysics, geophysics, or physics. They then focus their research within the discipline of planetary science. Major conferences are held annually, and numerous peer-reviewed journals cater to the diverse research interests in planetary science. Some planetary scientists are employed by private research centers and frequently engage in collaborative research initiatives. Chemistry Constituting the scientific study of matter at the atomic and molecular scale, chemistry deals primarily with collections of atoms, such as gases, molecules, crystals, and metals. The composition, statistical properties, transformations, and reactions of these materials are studied. Chemistry also involves understanding the properties and interactions of individual atoms and molecules for use in larger-scale applications. Most chemical processes can be studied directly in a laboratory, using a series of (often well-tested) techniques for manipulating materials, as well as an understanding of the underlying processes. Chemistry is often called "the central science" because of its role in connecting the other natural sciences. Early experiments in chemistry had their roots in the system of alchemy, a set of beliefs combining mysticism with physical experiments. The science of chemistry began to develop with the work of Robert Boyle, who formulated the fundamental law relating the pressure and volume of a gas, and Antoine Lavoisier, who developed the theory of the conservation of mass. The discovery of the chemical elements and atomic theory began to systematize this science, and researchers developed a fundamental understanding of states of matter, ions, chemical bonds, and chemical reactions. The success of this science led to a complementary chemical industry that now plays a significant role in the world economy. Physics Physics embodies the study of the fundamental constituents of the universe, the forces and interactions they exert on one another, and the results produced by these interactions. Physics is generally regarded as foundational because all other natural sciences use and obey the field's principles and laws. Physics relies heavily on mathematics as the logical framework for formulating and quantifying principles. The study of the principles of the universe has a long history and largely derives from direct observation and experimentation.
The formulation of theories about the governing laws of the universe has been central to the study of physics from very early on, with philosophy gradually yielding to systematic, quantitative experimental testing and observation as the source of verification. Key historical developments in physics include Isaac Newton's theory of universal gravitation and classical mechanics, an understanding of electricity and its relation to magnetism, Einstein's theories of special and general relativity, the development of thermodynamics, and the quantum mechanical model of atomic and subatomic physics. The field of physics is vast and can include such diverse studies as quantum mechanics and theoretical physics, applied physics, and optics. Modern physics is becoming increasingly specialized, with researchers tending to focus on a particular area rather than being "universalists" like Isaac Newton, Albert Einstein, and Lev Landau, who worked in multiple areas. Astronomy Astronomy is a natural science that studies celestial objects and phenomena. Objects of interest include planets, moons, stars, nebulae, galaxies, and comets. Astronomy is the study of everything in the universe beyond Earth's atmosphere, including objects visible to the naked eye. It is one of the oldest sciences. Astronomers of early civilizations performed methodical observations of the night sky, and astronomical artifacts have been found from much earlier periods. There are two types of astronomy: observational astronomy and theoretical astronomy. Observational astronomy is focused on acquiring and analyzing data, mainly using basic principles of physics. In contrast, theoretical astronomy is oriented towards developing computer or analytical models to describe astronomical objects and phenomena. This discipline is the science of celestial objects and phenomena that originate outside the Earth's atmosphere. It is concerned with the evolution, physics, chemistry, meteorology, geology, and motion of celestial objects, as well as the formation and development of the universe. Astronomy includes examining, studying, and modeling stars, planets, and comets. Most of the information used by astronomers is gathered by remote observation, though some laboratory reproduction of celestial phenomena has been performed (such as the molecular chemistry of the interstellar medium). There is considerable overlap with physics and with some areas of earth science. There are also interdisciplinary fields such as astrophysics, planetary sciences, and cosmology, along with allied disciplines such as space physics and astrochemistry. While the study of celestial features and phenomena can be traced back to antiquity, the scientific methodology of this field began to develop in the middle of the 17th century. A key factor was Galileo's introduction of the telescope to examine the night sky in more detail. The mathematical treatment of astronomy began with Newton's development of celestial mechanics and the laws of gravitation, though it built on earlier work by astronomers such as Kepler. By the 19th century, astronomy had developed into a formal science, with the introduction of instruments such as the spectroscope and photography, along with much-improved telescopes and the creation of professional observatories. Interdisciplinary studies The distinctions between the natural science disciplines are not always sharp, and they share many cross-discipline fields.
Physics plays a significant role in the other natural sciences, as represented by astrophysics, geophysics, chemical physics and biophysics. Likewise chemistry is represented by such fields as biochemistry, physical chemistry, geochemistry and astrochemistry. A particular example of a scientific discipline that draws upon multiple natural sciences is environmental science. This field studies the interactions of physical, chemical, geological, and biological components of the environment, with particular regard to the effect of human activities and the impact on biodiversity and sustainability. This science also draws upon expertise from other fields, such as economics, law, and social sciences. A comparable discipline is oceanography, as it draws upon a similar breadth of scientific disciplines. Oceanography is sub-categorized into more specialized cross-disciplines, such as physical oceanography and marine biology. As the marine ecosystem is vast and diverse, marine biology is further divided into many subfields, including specializations in particular species. There is also a subset of cross-disciplinary fields which, by the nature of the problems they address, have strong currents that run counter to specialization. Put another way: in some fields of integrative application, specialists in more than one field are a key part of most scientific discourse. Such integrative fields, for example, include nanoscience, astrobiology, and complex system informatics. Materials science Materials science is a relatively new, interdisciplinary field that deals with the study of matter and its properties, as well as the discovery and design of new materials. Originally developed through the field of metallurgy, the study of the properties of materials and solids has now expanded into all materials. The field covers the chemistry, physics, and engineering applications of materials, including metals, ceramics, artificial polymers, and many others. The field's core deals with relating the structure of materials to their properties. Materials science is at the forefront of research in science and engineering. It is an essential part of forensic engineering (the investigation of materials, products, structures, or components that fail or do not operate or function as intended, causing personal injury or damage to property) and failure analysis, the latter being the key to understanding, for example, the cause of various aviation accidents. Many of the most pressing scientific problems faced today are due to the limitations of available materials, and, as a result, breakthroughs in this field are likely to have a significant impact on the future of technology. The basis of materials science involves studying the structure of materials and relating it to their properties. By understanding this structure–property correlation, materials scientists can then go on to study the relative performance of a material in a particular application. The major determinants of the structure of a material, and thus of its properties, are its constituent chemical elements and how it has been processed into its final form. These characteristics, taken together and related through the laws of thermodynamics and kinetics, govern a material's microstructure and thus its properties. History Some scholars trace the origins of natural science as far back as pre-literate human societies, where understanding the natural world was necessary for survival.
People observed and built up knowledge about the behavior of animals and the usefulness of plants as food and medicine, which was passed down from generation to generation. These primitive understandings gave way to more formalized inquiry around 3500 to 3000 BC in the Mesopotamian and Ancient Egyptian cultures, which produced the first known written evidence of natural philosophy, the precursor of natural science. While the writings show an interest in astronomy, mathematics, and other aspects of the physical world, the ultimate aim of inquiry about nature's workings was, in all cases, religious or mythological, not scientific. A tradition of scientific inquiry also emerged in Ancient China, where Taoist alchemists and philosophers experimented with elixirs to extend life and cure ailments. They focused on the yin and yang, or contrasting elements in nature; the yin was associated with femininity and coldness, while yang was associated with masculinity and warmth. The five phases – fire, earth, metal, wood, and water – described a cycle of transformations in nature. Water turned into wood, which turned into fire when it burned, and the ashes left by fire were earth. Using these principles, Chinese philosophers and doctors explored human anatomy, characterizing organs as predominantly yin or yang, and understood the relationship between the pulse, the heart, and the flow of blood in the body centuries before it became accepted in the West. Little evidence survives of how Ancient Indian cultures around the Indus River understood nature, but some of their perspectives may be reflected in the Vedas, a set of sacred Hindu texts. They reveal a conception of the universe as ever-expanding and constantly being recycled and reformed. Surgeons in the Ayurvedic tradition saw health and illness as a combination of three humors: wind, bile and phlegm. A healthy life resulted from a balance among these humors. In Ayurvedic thought, the body consisted of five elements: earth, water, fire, wind, and space. Ayurvedic surgeons performed complex surgeries and developed a detailed understanding of human anatomy. Pre-Socratic philosophers in Ancient Greek culture brought natural philosophy a step closer to direct inquiry about cause and effect in nature between 600 and 400 BC. However, an element of magic and mythology remained. Natural phenomena such as earthquakes and eclipses were explained increasingly in the context of nature itself instead of being attributed to angry gods. Thales of Miletus, an early philosopher who lived from 625 to 546 BC, explained earthquakes by theorizing that the world floated on water and that water was the fundamental element in nature. In the 5th century BC, Leucippus was an early exponent of atomism, the idea that the world is made up of fundamental indivisible particles. Pythagoras applied Greek innovations in mathematics to astronomy and suggested that the earth was spherical. Aristotelian natural philosophy (400 BC–1100 AD) Later Socratic and Platonic thought focused on ethics, morals, and art and did not attempt an investigation of the physical world; Plato criticized pre-Socratic thinkers as materialists and anti-religionists. Aristotle, however, a student of Plato who lived from 384 to 322 BC, paid closer attention to the natural world in his philosophy. In his History of Animals, he described the inner workings of 110 species, including the stingray, catfish and bee.
He investigated chick embryos by breaking open eggs and observing them at various stages of development. Aristotle's works were influential through the 16th century, and he is considered to be the father of biology for his pioneering work in that science. He also presented philosophies about physics, nature, and astronomy using inductive reasoning in his works Physics and Meteorology. While Aristotle considered natural philosophy more seriously than his predecessors, he approached it as a theoretical branch of science. Still, inspired by his work, Ancient Roman philosophers of the early 1st century AD, including Lucretius, Seneca and Pliny the Elder, wrote treatises that dealt with the rules of the natural world in varying degrees of depth. Many Ancient Roman Neoplatonists of the 3rd to the 6th centuries also adapted Aristotle's teachings on the physical world to a philosophy that emphasized spiritualism. Early medieval philosophers including Macrobius, Calcidius and Martianus Capella also examined the physical world, largely from a cosmological and cosmographical perspective, putting forth theories on the arrangement of celestial bodies and the heavens, which were posited as being composed of aether. Aristotle's works on natural philosophy continued to be translated and studied amid the rise of the Byzantine Empire and Abbasid Caliphate. In the Byzantine Empire, John Philoponus, an Alexandrian Aristotelian commentator and Christian theologian, was the first to question Aristotle's teaching of physics. Unlike Aristotle, who based his physics on verbal argument, Philoponus relied instead on observation. He introduced the theory of impetus. John Philoponus' criticism of Aristotelian principles of physics served as an inspiration for Galileo Galilei during the Scientific Revolution. A revival in mathematics and science took place during the time of the Abbasid Caliphate from the 9th century onward, when Muslim scholars expanded upon Greek and Indian natural philosophy. The words alcohol, algebra and zenith all have Arabic roots. Medieval natural philosophy (1100–1600) Aristotle's works and other Greek natural philosophy did not reach the West until about the middle of the 12th century, when works were translated from Greek and Arabic into Latin. The development of European civilization later in the Middle Ages brought with it further advances in natural philosophy. European inventions such as the horseshoe, horse collar and crop rotation allowed for rapid population growth, eventually giving way to urbanization and the foundation of schools connected to monasteries and cathedrals in modern-day France and England. Aided by the schools, an approach to Christian theology developed that sought to answer questions about nature and other subjects using logic. This approach, however, was seen by some detractors as heresy. By the 12th century, Western European scholars and philosophers came into contact with a body of knowledge of which they had previously been ignorant: a large corpus of works in Greek and Arabic that had been preserved by Islamic scholars. Through translation into Latin, Western Europe was introduced to Aristotle and his natural philosophy. These works were taught at new universities in Paris and Oxford by the early 13th century, although the practice was frowned upon by the Catholic church.
A 1210 decree from the Synod of Paris ordered that "no lectures are to be held in Paris either publicly or privately using Aristotle's books on natural philosophy or the commentaries, and we forbid all this under pain of ex-communication." In the late Middle Ages, Spanish philosopher Dominicus Gundissalinus translated a treatise by the earlier Persian scholar Al-Farabi called On the Sciences into Latin, calling the study of the mechanics of nature scientia naturalis, or natural science. Gundissalinus also proposed his own classification of the natural sciences in his 1150 work On the Division of Philosophy. This was the first detailed classification of the sciences based on Greek and Arab philosophy to reach Western Europe. Gundissalinus defined natural science as "the science considering only things unabstracted and with motion," as opposed to mathematics and sciences that rely on mathematics. Following Al-Farabi, he separated the sciences into eight parts, including physics, cosmology, meteorology, minerals science, and plant and animal science. Later philosophers made their own classifications of the natural sciences. Robert Kilwardby wrote On the Order of the Sciences in the 13th century, classing medicine as a mechanical science, along with agriculture, hunting, and theater, while defining natural science as the science that deals with bodies in motion. Roger Bacon, an English friar and philosopher, wrote that natural science dealt with "a principle of motion and rest, as in the parts of the elements of fire, air, earth, and water, and in all inanimate things made from them." These sciences also covered plants, animals and celestial bodies. Later in the 13th century, the Catholic priest and theologian Thomas Aquinas defined natural science as dealing with "mobile beings" and "things which depend on a matter not only for their existence but also for their definition." There was broad agreement among scholars in medieval times that natural science was about bodies in motion, though there was division about including fields such as medicine, music, and perspective. Philosophers pondered questions including the existence of a vacuum, whether motion could produce heat, the colors of rainbows, the motion of the earth, whether elemental chemicals exist, and where in the atmosphere rain is formed. In the centuries up through the end of the Middle Ages, natural science was often mingled with philosophies about magic and the occult. Natural philosophy appeared in various forms, from treatises to encyclopedias to commentaries on Aristotle. The interaction between natural philosophy and Christianity was complex during this period; some early theologians, including Tatian and Eusebius, considered natural philosophy an outcropping of pagan Greek science and were suspicious of it. Although some later Christian philosophers, including Aquinas, came to see natural science as a means of interpreting scripture, this suspicion persisted until the 12th and 13th centuries. The Condemnation of 1277, which forbade setting philosophy on a level equal with theology and the debate of religious constructs in a scientific context, showed the persistence with which Catholic leaders resisted the development of natural philosophy even from a theological perspective. Aquinas and Albertus Magnus, another Catholic theologian of the era, sought to distance theology from science in their works. "I don't see what one's interpretation of Aristotle has to do with the teaching of the faith," Albertus wrote in 1271.
Newton and the scientific revolution (1600–1800) By the 16th and 17th centuries, natural philosophy evolved beyond commentary on Aristotle as more early Greek philosophy was uncovered and translated. The invention of the printing press in the 15th century, the invention of the microscope and telescope, and the Protestant Reformation fundamentally altered the social context in which scientific inquiry evolved in the West. Christopher Columbus's discovery of a new world changed perceptions about the physical makeup of the world, while observations by Copernicus, Tycho Brahe and Galileo brought a more accurate picture of the solar system as heliocentric and proved many of Aristotle's theories about the heavenly bodies false. Several 17th-century philosophers, including Thomas Hobbes, John Locke and Francis Bacon, made a break from the past by rejecting Aristotle and his medieval followers outright, calling their approach to natural philosophy superficial. The titles of Galileo's work Two New Sciences and Johannes Kepler's New Astronomy underscored the atmosphere of change that took hold in the 17th century as Aristotle was dismissed in favor of novel methods of inquiry into the natural world. Bacon was instrumental in popularizing this change; he argued that people should use the arts and sciences to gain dominion over nature. To achieve this, he wrote that "human life [must] be endowed with discoveries and powers." He defined natural philosophy as "the knowledge of Causes and secret motions of things; and enlarging the bounds of Human Empire, to the effecting of all things possible." Bacon proposed that scientific inquiry be supported by the state and fed by the collaborative research of scientists, a vision that was unprecedented in its scope, ambition, and form at the time. Natural philosophers came to view nature increasingly as a mechanism that could be taken apart and understood, much like a complex clock. Natural philosophers including Isaac Newton, Evangelista Torricelli and Francesco Redi conducted experiments focusing on the flow of water, measuring atmospheric pressure using a barometer and disproving spontaneous generation. Scientific societies and scientific journals emerged and were spread widely through the printing press, touching off the scientific revolution. Newton in 1687 published his The Mathematical Principles of Natural Philosophy, or Principia Mathematica, which set the groundwork for physical laws that remained current until the 19th century. Some modern scholars, including Andrew Cunningham, Perry Williams, and Floris Cohen, argue that natural philosophy is not properly called science and that genuine scientific inquiry began only with the scientific revolution. According to Cohen, "the emancipation of science from an overarching entity called 'natural philosophy' is one defining characteristic of the Scientific Revolution." Other historians of science, including Edward Grant, contend that the scientific revolution that blossomed in the 17th, 18th, and 19th centuries occurred when principles learned in the exact sciences of optics, mechanics, and astronomy began to be applied to questions raised by natural philosophy. Grant argues that Newton attempted to expose the mathematical basis of nature – the immutable rules it obeyed – and, in doing so, joined natural philosophy and mathematics for the first time, producing an early work of modern physics.
The scientific revolution, which began to take hold in the 17th century, represented a sharp break from Aristotelian modes of inquiry. One of its principal advances was the use of the scientific method to investigate nature. Data was collected, and repeatable measurements were made in experiments. Scientists then formed hypotheses to explain the results of these experiments. Each hypothesis was then tested using the principle of falsifiability to prove or disprove its accuracy. The natural sciences continued to be called natural philosophy, but the adoption of the scientific method took science beyond the realm of philosophical conjecture and introduced a more structured way of examining nature. Newton, an English mathematician and physicist, was a seminal figure in the scientific revolution. Drawing on advances made in astronomy by Copernicus, Brahe, and Kepler, Newton derived the universal law of gravitation and the laws of motion. These laws applied both on earth and in outer space, uniting two spheres of the physical world previously thought to function independently, according to separate physical rules. Newton, for example, showed that the tides were caused by the gravitational pull of the moon. Another of Newton's advances was to make mathematics a powerful explanatory tool for natural phenomena. While natural philosophers had long used mathematics as a means of measurement and analysis, its principles were not used as a means of understanding cause and effect in nature until Newton. In the 18th and 19th centuries, scientists including Charles-Augustin de Coulomb, Alessandro Volta, and Michael Faraday built upon Newtonian mechanics by exploring electromagnetism, or the interplay of forces with positive and negative charges on electrically charged particles. Faraday proposed that forces in nature operated in "fields" that filled space. The idea of fields contrasted with the Newtonian construct of gravitation as simply "action at a distance", or the attraction of objects with nothing in the space between them to intervene. James Clerk Maxwell in the 19th century unified these discoveries in a coherent theory of electrodynamics. Using mathematical equations and experimentation, Maxwell showed that electric and magnetic fields interact and that space itself can act as a medium for the transmission of electromagnetic waves. Significant advances in chemistry also took place during the scientific revolution. Antoine Lavoisier, a French chemist, refuted the phlogiston theory, which posited that things burned by releasing "phlogiston" into the air. Joseph Priestley had discovered oxygen in the 18th century, but Lavoisier discovered that combustion was the result of oxidation. He also constructed a table of 33 elements and invented modern chemical nomenclature. Formal biological science remained in its infancy in the 18th century, when the focus lay upon the classification and categorization of natural life. This growth in natural history was led by Carl Linnaeus, whose 1735 taxonomy of the natural world is still in use. Linnaeus, in the 1750s, introduced scientific names for all his species. 19th-century developments (1800–1900) By the 19th century, the study of science had come into the purview of professionals and institutions. In so doing, it gradually acquired the more modern name of natural science. The term scientist was coined by William Whewell in an 1834 review of Mary Somerville's On the Connexion of the Sciences.
But the word did not enter general use until nearly the end of the same century. Modern natural science (1900–present) According to a famous 1923 textbook, Thermodynamics and the Free Energy of Chemical Substances, by the American chemist Gilbert N. Lewis and the American physical chemist Merle Randall, the natural sciences contain three great branches: Aside from the logical and mathematical sciences, there are three great branches of natural science which stand apart by reason of the variety of far reaching deductions drawn from a small number of primary postulates — they are mechanics, electrodynamics, and thermodynamics. Today, natural sciences are more commonly divided into life sciences, such as botany and zoology, and physical sciences, which include physics, chemistry, astronomy, and Earth sciences.
Physical sciences
Science basics
Basics and measurement
38892
https://en.wikipedia.org/wiki/Chat%20room
Chat room
The term chat room, or chatroom (and sometimes group chat; abbreviated as GC), is primarily used to describe any form of synchronous conferencing, and occasionally even asynchronous conferencing. The term can thus mean any technology, ranging from real-time online chat and online interaction with strangers (e.g., online forums) to fully immersive graphical social environments. The primary use of a chat room is to share information via text with a group of other users. Generally speaking, the ability to converse with multiple people in the same conversation differentiates chat rooms from instant messaging programs, which are more typically designed for one-to-one communication. The users in a particular chat room are generally connected via a shared internet or other similar connection, and chat rooms exist catering to a wide range of subjects. New technology has enabled the use of file sharing and webcams. History The first chat system was used by the U.S. government in 1971. It was developed by Murray Turoff, a young PhD graduate from Berkeley, and its first use was during President Nixon's wage-price freeze under Project Delphi. The system was called EMISARI and allowed 10 regional offices to link together in a real-time online chat known as the party line. It remained in use until 1986. The first public online chat system was called Talkomatic, created by Doug Brown and David R. Woolley in 1973 on the PLATO System at the University of Illinois. It offered several channels, each of which could accommodate up to five people, with messages appearing on all users' screens character-by-character as they were typed. Talkomatic was very popular among PLATO users into the mid-1980s. In 2014, Brown and Woolley released a web-based version of Talkomatic. The first dedicated online chat service that was widely available to the public was the CompuServe CB Simulator in 1980, created by CompuServe executive Alexander "Sandy" Trevor in Columbus, Ohio. Chat rooms gained mainstream popularity with AOL. Jarkko Oikarinen created Internet Relay Chat (IRC) in 1988. Many peer-to-peer clients have chat rooms, e.g. Ares Galaxy, eMule, Filetopia, Retroshare, Vuze, WASTE, WinMX, etc. Many popular social media platforms are now used as chat rooms, such as WhatsApp, Facebook, Twitter, Discord, Snapchat, Instagram, TikTok, and many more. Graphical multi-user environments Visual chat rooms add graphics to the chat experience, in either 2D or 3D (employing virtual reality technology). These are characterized by the use of a graphic representation of the user (an avatar); virtual elements such as games (in particular massively multiplayer online games); and educational material, most often developed by individual site owners, who in general are simply more advanced users of the systems. The most popular environments, such as The Palace, also allow users to create or build their own spaces. Some of the most popular 3D chat experiences are IMVU and Second Life (though they extend far beyond just chat). Many such implementations generate profit by selling virtual goods to users at a high margin. Some online chat rooms also incorporate audio and video communications, so that users may actually see and hear each other. Games Games are also often played in chat rooms. These are typically implemented by an external process, such as an IRC bot joining the room to conduct the game. Trivia question-and-answer games are most prevalent. A historic example is Hunt the Wumpus. Chatroom-based implementations of the party game Mafia also exist.
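To make the bot mechanism concrete, the following is a minimal sketch of an IRC trivia bot in Python, using only raw sockets and the core IRC commands (NICK, USER, JOIN, PRIVMSG, PING/PONG). The server hostname, channel, and nickname are placeholders, and a robust client would wait for the server's 001 "welcome" reply before joining:

```python
# Minimal IRC trivia-bot sketch (not production code); the hostname,
# channel, and nickname below are placeholders.
import socket

SERVER = "irc.example.org"  # placeholder server
PORT = 6667                 # conventional plain-text IRC port
CHANNEL = "#trivia"         # placeholder channel
NICK = "triviabot"
QUESTION = "In what year was IRC created?"
ANSWER = "1988"

def send(sock: socket.socket, line: str) -> None:
    """IRC messages are CRLF-terminated lines of text."""
    sock.sendall((line + "\r\n").encode("utf-8"))

with socket.create_connection((SERVER, PORT)) as sock:
    send(sock, f"NICK {NICK}")                    # register a nickname
    send(sock, f"USER {NICK} 0 * :Trivia Bot")    # register the user
    send(sock, f"JOIN {CHANNEL}")                 # join the chat room
    send(sock, f"PRIVMSG {CHANNEL} :{QUESTION}")  # ask the question

    buffer = ""
    while True:
        buffer += sock.recv(4096).decode("utf-8", errors="replace")
        while "\r\n" in buffer:
            line, buffer = buffer.split("\r\n", 1)
            if line.startswith("PING"):
                # Answer the server's keepalive, as the protocol requires.
                send(sock, "PONG" + line[len("PING"):])
            elif "PRIVMSG" in line and ANSWER in line:
                # Someone in the room typed the right answer.
                winner = line.split("!", 1)[0].lstrip(":")
                send(sock, f"PRIVMSG {CHANNEL} :Correct, {winner}!")
```

The bot is an ordinary client from the server's point of view, which is exactly why chat-room games could be layered on top of IRC without any changes to the protocol itself.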
A similar but more complex style of text-based gaming is the MUD, in which players interact within a textual, interactive fiction–like environment.
Technology
Internet
null
38902
https://en.wikipedia.org/wiki/Brown
Brown
Brown is a color. It can be considered a composite color, but it is mainly a darker shade of orange. In the CMYK color model used in printing and painting, brown is usually made by combining the colors orange and black. In the RGB color model used to project colors onto television screens and computer monitors, brown combines red and green. The color brown is seen widely in nature, in wood, soil, human hair color, eye color and skin pigmentation. Brown is the color of dark wood or rich soil. In the RYB color model, brown is made by mixing the three primary colors, red, yellow, and blue. According to public opinion surveys in Europe and the United States, brown is the least favorite color of the public; it is often associated with plainness and the rustic, although it also has positive associations, including baking, warmth, wildlife, autumn and music. Etymology The term is from Old English brún, in origin for any dusky or dark shade of color. The first recorded use of brown as a color name in English was in 1000. The Common Germanic adjectives *brûnoz and *brûnâ meant both dark colors and a glistening or shining quality, whence burnish. The current meaning developed in Middle English from the 14th century. Words for the color brown around the world often come from foods or beverages; in the eastern Mediterranean, the word for brown often comes from the color of coffee: in Turkish, the word for brown is kahverengi (from kahve, "coffee"); in Greek, kafé. In Portuguese, Spanish and French, the word for brown or for a specific shade of brown is derived from the word for chestnut (castanea in Latin). In Southeast Asia, the color name often comes from chocolate: coklat in Malay; tsokolate in Filipino. In Japan, the word chairo means the color of tea. History and art Ancient history Brown has been used in art since prehistoric times. Paintings using umber, a natural clay pigment composed of iron oxide and manganese oxide, have been dated to 40,000 BC. Paintings of brown horses and other animals have been found on the walls of the Lascaux cave, dating back about 17,300 years. The female figures in ancient Egyptian tomb paintings have brown skin, painted with umber. Light tan was often used on painted Greek amphorae and vases, either as a background for black figures, or the reverse. The Ancient Greeks and Romans produced a fine reddish-brown ink, of a color called sepia, made from the ink of a variety of cuttlefish. This ink was used by Leonardo da Vinci, Raphael and other artists during the Renaissance, and by artists up until the present time. In Ancient Rome, brown clothing was associated with the lower classes or barbarians. The term for the plebeians, or urban poor, was "pullati", which meant literally "those dressed in brown". Post-classical history In the Middle Ages brown robes were worn by monks of the Franciscan order, as a sign of their humility and poverty. Each social class was expected to wear a color suitable to its station, and grey and brown were the colors of the poor. Russet was a coarse homespun cloth made of wool and dyed with woad and madder to give it a subdued grey or brown shade. By the statute of 1363, poor English people were required to wear russet. The medieval poem Piers Plowman used russet clothing to characterize the virtuous Christian. In the Middle Ages, dark brown pigments were rarely used in art; painters and book illuminators of that period preferred bright, distinct colors such as red, blue and green rather than dark colors.
The umbers were not widely used in Europe before the end of the fifteenth century; the Renaissance painter and writer Giorgio Vasari (1511–1574) described them as being rather new in his time. Artists began making far greater use of browns when oil painting arrived in the late fifteenth century. During the Renaissance, artists generally used four different browns: raw umber, the dark brown clay mined from the earth around Umbria, in Italy; raw sienna, a reddish-brown earth mined near Siena, in Tuscany; burnt umber, the Umbrian clay heated until it turned a darker shade; and burnt sienna, heated until it turned a dark reddish brown. In Northern Europe, Jan van Eyck featured rich earth browns in his portraits to set off the brighter colors. Modern history 17th and 18th century The 17th and 18th centuries saw the greatest use of brown. Caravaggio and Rembrandt van Rijn used browns to create chiaroscuro effects, where the subject appeared out of the darkness. Rembrandt also added umber to the ground layers of his paintings because it promoted faster drying. Rembrandt also began to use a new brown pigment, called Cassel earth or Cologne earth. This was a natural earth color composed of over ninety percent organic matter, such as soil and peat. It was used by Rubens and Anthony van Dyck, and later became commonly known as Van Dyck brown. 19th and 20th century Brown was generally hated by the French impressionists, who preferred bright, pure colors. The exception among French 19th-century artists was Paul Gauguin, who created luminous brown portraits of the people and landscapes of French Polynesia. In the late 20th century, brown became a common symbol in western culture for all things simple, inexpensive, natural and healthy. Bag lunches were carried in plain brown paper bags; packages were wrapped in plain brown paper. Brown bread and brown sugar were viewed as more natural and healthy than white bread and white sugar. Brown in science and nature Optics Brown is a dark orange color, made by combining red, yellow and black. It can be thought of as dark orange, but it can also be made in other ways. In the RGB color model, which uses red, green and blue light in various combinations to make all the colors on computer and television screens, it is made by mixing red and green light. In terms of the visible spectrum, "brown" refers to long-wavelength hues (yellow, orange, or red) in combination with low luminance or saturation. Since brown may cover a wide range of the visible spectrum, composite adjectives are used, such as red-brown, yellowish brown, dark brown or light brown. As a color of low intensity, brown is a tertiary color: a mix of the three subtractive primary colors is brown if the cyan content is low. Brown exists as a color perception only in the presence of a brighter color contrast. Yellow, orange, red, or rose objects are still perceived as such if the general illumination level is low, despite reflecting the same amount of red or orange light as a brown object would in normal lighting conditions. Brown pigments, dyes and inks Raw umber and burnt umber are two of the oldest pigments used by humans. Umber is a brown clay containing a large amount of iron oxide and between five and twenty percent manganese oxide, which give it its color. Its shade varies from a greenish brown to a dark brown. It takes its name from the Italian region of Umbria, where it was formerly mined. The principal source today is the island of Cyprus.
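To make the optics discussion above concrete, here is a minimal sketch in Python (the function name, the starting orange, and the scaling factor are illustrative assumptions, not values from the article) showing how lowering the luminance of an orange in 8-bit RGB yields a brown:

```python
# Illustrative sketch: brown as low-luminance orange in the RGB model.
# The starting orange and the darkening factor are arbitrary choices.

def darken(rgb, factor):
    """Scale an (R, G, B) triple toward black by the given factor."""
    return tuple(round(channel * factor) for channel in rgb)

orange = (255, 165, 0)       # a typical orange: red plus green, no blue
brown = darken(orange, 0.4)  # reducing luminance turns the orange brown

print(brown)  # (102, 66, 0), a representative dark orange-brown
```

The same logic underlies the CMYK recipe mentioned earlier: adding black to orange is simply another way of lowering its luminance.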
Burnt umber is the same pigment that has been roasted (calcined), which turns it darker and more reddish. Raw sienna and burnt sienna are also clay pigments rich in iron oxide, which were mined during the Renaissance around the city of Siena in Tuscany. Sienna contains less than five percent manganese. The natural sienna earth is a dark yellow ochre color; when roasted it becomes a rich reddish brown called burnt sienna. Mummy brown was a pigment used in oil paints made from ground Egyptian mummies. Caput mortuum is a haematite iron oxide pigment, used in painting. The name is also used in reference to mummy brown. Van Dyck brown, known in Europe as Cologne earth or Cassel earth, is another natural earth pigment, made up largely of decayed vegetal matter. It made a rich dark brown and was widely used from the Renaissance to the 19th century. It takes its name from the painter Anthony van Dyck, but it was used by many other artists before him. It was highly unstable and unreliable, so its use was abandoned by the 20th century, though the name continues to be used for modern synthetic pigments. The color of Van Dyck brown can be recreated by mixing ivory black with mauve or with Venetian red, or by mixing cadmium red with cobalt blue. The names of the earth colors are still used, but very few modern pigments with these names actually contain natural earths; most of their ingredients today are synthetic. Mars brown is typical of these new colors, made with synthetic iron oxide pigments. The new colors have superior coloring power and opacity, but not the delicate hues of their namesakes. Walnuts have been used to make a brown dye since antiquity. The Roman writer Ovid, in the first century BC, described how the Gauls used the juice of the hull or husk inside the shell of the walnut to make a brown dye for wool, or a reddish dye for their hair. The chestnut tree has also been used since ancient times as a source of brown dye. The bark of the tree, the leaves and the husk of the nuts have all been used to make dye. The leaves were used to make a beige or yellowish-brown dye, and in the Ottoman Empire the yellow-brown from chestnut leaves was combined with indigo blue to make shades of green. Brown eyes With few exceptions, all mammals have brown or darkly-pigmented irises. In humans, brown eyes result from a relatively high concentration of melanin in the stroma of the iris, which causes light of both shorter and longer wavelengths to be absorbed; in many parts of the world, it is nearly the only iris color present. The dark pigment of brown eyes is most common in East Asia, Central Asia, Southeast Asia, South Asia, West Asia, Oceania, Africa and the Americas, as well as parts of Eastern Europe and Southern Europe. The majority of people in the world overall have dark brown eyes. Brown irises range from highly pigmented, dark brown (almost black) eyes to very light, almost amber or hazel irises composed partially of lipochrome. Light or medium-pigmented brown eyes are common in Europe, Afghanistan, Pakistan and Northern India, as well as some parts of the Middle East, and can also be found in populations in East Asia and Southeast Asia, but are proportionally rare there. (See eye color.) Brown hair Brown is the second most common color of human hair, after black. It is caused by higher levels of the natural dark pigment eumelanin and lower levels of the pale pigment pheomelanin.
Brown eumelanin is more common among Europeans, while black eumelanin is more often found in the hair of non-Europeans. A small amount of black eumelanin, in the absence of other pigments, results in grey hair. A small amount of brown eumelanin in the absence of other pigments results in blond hair. Brown skin A majority of people in the world have skin that is a shade of brown, from a very light honey brown or a golden brown, to a copper or bronze color, to a coffee color or a dark chocolate brown. Skin color and race are not the same; many people classified as "white" or "black" actually have skin that is a shade of brown. Brown skin is caused by melanin, a natural pigment produced within the skin in cells called melanocytes. Skin pigmentation in humans evolved primarily to regulate the amount of ultraviolet radiation penetrating the skin, controlling its biochemical effects. Natural skin color can darken as a result of tanning due to exposure to sunlight. The leading theory is that skin color adapts to intense sunlight irradiation to provide partial protection against the ultraviolet fraction that produces damage, and thus mutations, in the DNA of the skin cells. There is a correlation between the geographic distribution of ultraviolet radiation (UVR) and the distribution of indigenous skin pigmentation around the world. Darker-skinned populations are found in the regions with the most ultraviolet, closer to the equator, while lighter-skinned populations live closer to the poles, with less UVR, though immigration has changed these patterns. While white and black are commonly used to describe racial groups, brown is rarely used, because it crosses all racial lines. In Brazil, the Portuguese word pardo, which can mean different shades of brown, is used to refer to multiracial people. The Brazilian Institute of Geography and Statistics (IBGE) asks people to identify themselves as branco (white), pardo (brown), negro (black), or amarelo (yellow). In 2008, 43.8 percent of the population identified themselves as pardo. (See human skin color.) Soil The thin top layer of the Earth's crust on land is largely made up of soil colored different shades of brown. Good soil is composed of about forty-five percent minerals, twenty-five percent water, twenty-five percent air, and five percent organic material, living and dead. Half the color of soil comes from the minerals it contains; soils containing iron turn yellowish or reddish as the iron oxidizes, while soils containing manganese, nitrogen or sulfur turn brownish or blackish as those components break down. Rich and fertile soils tend to be darker in color; the deeper brown color of fertile soil comes from the decomposition of organic matter. Dead leaves and roots become black or brown as they decay. Poorer soils are usually paler brown in color, and contain less water or organic matter. Mollisols are the soil type found under grassland in the Great Plains of America, the Pampas in Argentina and the Russian Steppes. The soil is 60–80 centimeters deep and is rich in nutrients and organic matter. Loess is a type of pale yellow or buff soil, which originated as wind-blown silt. It is very fertile, but is easily eroded by wind or water. Peat is an accumulation of partially decayed vegetation, whose decomposition is slowed by water. Despite its dark brown color, it is infertile, but is useful as a fuel. Mammals and birds A large number of mammals and predatory birds have a brown coloration. This sometimes changes seasonally, and sometimes remains the same year-round.
This color is likely related to camouflage, since the backdrop of some environments, such as the forest floor, is often brown, especially in the spring and summer, when animals like the snowshoe hare grow brown fur. Most mammals are dichromats and so do not easily distinguish brown fur from green grass. The brown rat or Norwegian rat (Rattus norvegicus) is one of the best known and most common rats. The brown bear (Ursus arctos) is a large bear distributed across much of northern Eurasia and North America. The ermine (Mustela erminea) has a brown back in summer, or year-round in the southern reaches of its range. Biology The solid waste excreted by human beings and many other animals is characteristically brown in color due to the presence of bilirubin, a byproduct of the destruction of red blood cells. Brown in culture Surveys in Europe and the United States showed that brown was the least popular color among respondents. It was the favorite color of only one percent of respondents and the least favorite color of twenty percent. Brown uniforms Brown has been a popular color for military uniforms since the late 18th century, largely because of its wide availability and low visibility. When the Continental Army was established in 1775 at the outbreak of the American Revolution, the Continental Congress declared that the official uniform color would be brown, but this was not popular with many militias, whose officers were already wearing blue. In 1778 the Congress asked George Washington to design a new uniform, and in 1779 Washington made the official color of all uniforms blue and buff. In 1846 the Indian soldiers of the Corps of Guides in British India began to wear a yellowish shade of tan, which became known as khaki, from the Urdu word for dust-colored, taken from an earlier Persian word for soil. The color made an excellent natural camouflage, and was adopted by the British Army for the Abyssinian Campaign of 1867–1868, and later in the Boer War. It was adopted by the United States Army during the Spanish–American War (1898), and afterwards by the United States Navy and United States Marine Corps. In the 1920s, brown became the uniform color of the Nazi Party in Germany. The Nazi paramilitary organization the Sturmabteilung (SA) wore brown uniforms and were known as the brownshirts. The color brown was used to represent the Nazi vote on maps of electoral districts in Germany. If someone voted for the Nazis, they were said to be "voting brown". The national headquarters of the Nazi Party, in Munich, was called the Brown House. The Nazi seizure of power in 1933 was called the Brown Revolution. At Adolf Hitler's Obersalzberg home, the Berghof, he slept in a "bed which was usually covered by a brown quilt embroidered with a huge swastika. The swastika also appeared on Hitler's brown satin pajamas, embroidered in black against a red background on the pocket. He had a matching brown silk robe." Brown had originally been chosen as a Party color largely for convenience; large numbers of war-surplus brown uniforms from Germany's former colonial forces in Africa were cheaply available in the 1920s. It also suited the working-class and military images that the Party wished to convey. From the 1930s onwards, the Party's brown uniforms were mass-produced by German clothing firms such as Hugo Boss.
Business Brown is the color of the United Parcel Service (UPS) delivery company, with its trademark brown trucks and uniforms; it was earlier the color of the Pullman Company's rail cars, and was adopted by UPS both because brown is easy to keep clean and because of the favorable associations of luxury that Pullman brown evoked. UPS has filed two trademarks on the color brown to prevent other shipping companies (and possibly other companies in general) from using the color if it creates "market confusion". In its advertising, UPS refers to itself as "Brown" ("What can Brown do for you?"). Labrecque et al. (2012) hypothesized that brown would be related to competence when used in advertising, as the color is typically associated with reliability and support. However, they did not find a link between brown and competence. Idioms and expressions "To be brown as a berry" (to be deeply suntanned) "To brown bag" a meal (to bring food from home to eat at work or school rather than patronizing an in-house cafeteria or a restaurant) "To experience a brownout" (a partial loss of electricity, less severe than a blackout) Brownfields are abandoned, idled, or under-used industrial and commercial facilities where redevelopment for infill housing is complicated by real or perceived environmental contamination. "Brown-nose" is a verb which means to be obsequious; it comes from the image of kissing the posterior of the boss in order to gain advancement. "In a brown study" (melancholy) Religion In Wicca, brown represents endurance, solidity, grounding, and strength. It is strongly associated with the element of earth. Sports The Cleveland Browns of the National Football League take their name from the team's founder and long-time coach, Paul Brown, and use brown as a team color. The Hawthorn Football Club of the Australian Football League wears a brown and gold uniform. The San Diego Padres of Major League Baseball use brown as their primary color. FC St. Pauli, a German association football club, typically features brown shirts as its primary kit. Club Atlético Platense of Argentina likewise features brown shirts as its primary kit. The University of Wyoming, Brown University, St. Bonaventure University, and Lehigh University sports teams generally feature this color.
Physical sciences
Color terms
null
38930
https://en.wikipedia.org/wiki/Jupiter
Jupiter
Jupiter is the fifth planet from the Sun and the largest in the Solar System. It is a gas giant with a mass more than 2.5 times that of all the other planets in the Solar System combined and slightly less than one-thousandth the mass of the Sun. Its diameter is eleven times that of Earth, and a tenth that of the Sun. Jupiter orbits the Sun at a distance of , with an orbital period of 11.86 years. It is the third-brightest natural object in the Earth's night sky, after the Moon and Venus, and has been observed since prehistoric times. Its name derives from that of Jupiter, the chief deity of ancient Roman religion. Jupiter was the first of the Sun's planets to form, and its inward migration during the primordial phase of the Solar System affected much of the formation history of the other planets. Jupiter's atmosphere consists of 76% hydrogen and 24% helium by mass, with a denser interior. It contains trace elements and compounds like carbon, oxygen, sulfur, neon, ammonia, water vapour, phosphine, hydrogen sulfide, and hydrocarbons. Jupiter's helium abundance is 80% of the Sun's, similar to Saturn's composition. The ongoing contraction of Jupiter's interior generates more heat than the planet receives from the Sun. Its internal structure is believed to consist of an outer mantle of fluid metallic hydrogen and a diffuse inner core of denser material. Because of its rapid rate of rotation, one turn in ten hours, Jupiter is an oblate spheroid; it has a slight but noticeable bulge around the equator. The outer atmosphere is divided into a series of latitudinal bands, with turbulence and storms along their interacting boundaries; the most obvious result of this is the Great Red Spot, a giant storm that has been recorded since 1831. Jupiter's magnetic field is the strongest of any planet, and its magnetosphere is the second-largest contiguous structure in the Solar System, generated by eddy currents within the fluid, metallic hydrogen core. The solar wind interacts with the magnetosphere, extending it outward until it nearly reaches the orbit of Saturn. Jupiter is surrounded by a faint system of planetary rings that were discovered in 1979 by Voyager 1 and further investigated by the Galileo orbiter in the 1990s. The Jovian ring system consists mainly of dust and has three main segments: an inner torus of particles known as the halo, a relatively bright main ring, and an outer gossamer ring. The rings have a reddish colour in visible and near-infrared light. The age of the ring system is unknown, possibly dating back to Jupiter's formation. At least 95 moons orbit the planet; the four largest moons—Io, Europa, Ganymede, and Callisto—orbit within the magnetosphere, and were discovered by Galileo Galilei in 1610. Ganymede, the largest of the four, is larger than the planet Mercury. Since 1973, Jupiter has been visited by nine robotic probes: seven flybys and two dedicated orbiters, with two more en route. Name and symbol In both the ancient Greek and Roman civilizations, Jupiter was named after the chief god of the divine pantheon: Zeus to the Greeks and Jupiter to the Romans. The International Astronomical Union formally adopted the name Jupiter for the planet in 1976 and has since named its newly discovered satellites for the god's lovers, favourites, and descendants. The planetary symbol for Jupiter, , descends from a Greek zeta with a horizontal stroke, , as an abbreviation for Zeus. In Latin, Iovis is the genitive case of Iuppiter, i.e. Jupiter. It is associated with the etymology of Zeus ('sky father').
The English equivalent, Jove, is known to have come into use as a poetic name for the planet around the 14th century. Jovian is the adjectival form of Jupiter. The older adjectival form jovial, employed by astrologers in the Middle Ages, has come to mean 'happy' or 'merry', moods ascribed to Jupiter's influence in astrology. The original Greek deity Zeus supplies the root zeno-, which is used to form some Jupiter-related words, such as zenography. Formation and migration Jupiter is believed to be the oldest planet in the Solar System, having formed just one million years after the Sun and roughly 50 million years before Earth. Current models of Solar System formation suggest that Jupiter formed at or beyond the snow line: a distance from the early Sun where the temperature was sufficiently cold for volatiles such as water to condense into solids. The planet first formed a solid core and then accumulated its gaseous atmosphere; it therefore must have formed before the solar nebula was fully dispersed. During its formation, Jupiter's mass gradually increased until it had 20 times the mass of the Earth, approximately half of which was made up of silicates, ices and other heavy-element constituents. When the proto-Jupiter grew larger than 50 Earth masses, it created a gap in the solar nebula. Thereafter, the growing planet reached its final mass in 3–4 million years. Since Jupiter is made of the same elements as the Sun (hydrogen and helium), it has been suggested that the Solar System might have been, early in its formation, a system of multiple protostars, which are quite common, with Jupiter as a second, failed protostar. However, the Solar System never developed into a multiple-star system, and Jupiter does not qualify as a protostar or brown dwarf, since it does not have enough mass to fuse hydrogen. According to the "grand tack hypothesis", Jupiter began to form at a distance of roughly from the Sun. As the young planet accreted mass, its interaction with the gas disk orbiting the Sun and orbital resonances with Saturn caused it to migrate inwards. This upset the orbits of several super-Earths orbiting closer to the Sun, causing them to collide destructively. Saturn would later have begun to migrate inwards at a faster rate than Jupiter, until the two planets became captured in a 3:2 mean motion resonance at approximately from the Sun. This changed the direction of migration, causing them to migrate away from the Sun and out of the inner system to their current locations. All of this happened over a period of 3–6 million years, with the final migration of Jupiter occurring over several hundred thousand years. Jupiter's migration out of the inner Solar System eventually allowed the inner planets—including Earth—to form from the rubble. There are several unresolved issues with the grand tack hypothesis. The resulting formation timescales of terrestrial planets appear to be inconsistent with the measured elemental composition. Jupiter would likely have settled into an orbit much closer to the Sun if it had migrated through the solar nebula. Some competing models of Solar System formation predict the formation of Jupiter with orbital properties that are close to those of the present-day planet. Other models predict Jupiter forming at distances much further out, such as .
According to the Nice model, the infall of proto-Kuiper belt objects over the first 600 million years of Solar System history caused Jupiter and Saturn to migrate from their initial positions into a 1:2 resonance, which caused Saturn to shift into a higher orbit, disrupting the orbits of Uranus and Neptune, depleting the Kuiper belt, and triggering the Late Heavy Bombardment. According to the jumping-Jupiter scenario, Jupiter's migration through the early Solar System could have led to the ejection of a fifth gas giant. This hypothesis suggests that during its orbital migration, Jupiter's gravitational influence disrupted the orbits of the other gas giants, potentially casting one planet out of the Solar System entirely. The dynamics of such an event would have dramatically altered the formation and configuration of the Solar System, leaving behind only the four giant planets observed today. Based on Jupiter's composition, researchers have made the case for an initial formation outside the molecular nitrogen (N2) snow line, which is estimated at from the Sun, and possibly even outside the argon snow line, which may be as far as . Having formed at one of these extreme distances, Jupiter would then have, over a roughly 700,000-year period, migrated inwards to its current location, during an epoch approximately 2–3 million years after the planet began to form. In this model, Saturn, Uranus, and Neptune would have formed even further out than Jupiter, and Saturn would also have migrated inwards. Physical characteristics Jupiter is a gas giant, meaning its chemical composition is primarily hydrogen and helium. These materials are classified as gases in planetary geology, a term that does not denote the state of matter. Jupiter is the largest planet in the Solar System, with a diameter of at its equator, giving it a volume 1,321 times that of the Earth. Its average density, 1.326 g/cm3, is lower than those of the four terrestrial planets. Composition The atmosphere of Jupiter is approximately 76% hydrogen and 24% helium by mass. By volume, the upper atmosphere is about 90% hydrogen and 10% helium, helium's lower proportion by volume owing to individual helium atoms being more massive than the hydrogen molecules that make up this part of the atmosphere. The atmosphere contains trace amounts of elemental carbon, oxygen, sulfur, and neon, as well as ammonia, water vapour, phosphine, hydrogen sulfide, and hydrocarbons like methane, ethane and benzene. Its outermost layer contains crystals of frozen ammonia. The planet's interior is denser, with a composition of roughly 71% hydrogen, 24% helium, and 5% other elements by mass. The atmospheric proportions of hydrogen and helium are close to the theoretical composition of the primordial solar nebula. Neon in the upper atmosphere makes up 20 parts per million by mass, which is about a tenth as abundant as in the Sun. Jupiter's helium abundance is about 80% that of the Sun, due to the precipitation of these elements as helium-rich droplets, a process that happens deep in the planet's interior. Based on spectroscopy, Saturn is thought to be similar in composition to Jupiter, but the other giant planets Uranus and Neptune have relatively less hydrogen and helium and relatively more of the next most common elements, including oxygen, carbon, nitrogen, and sulfur. These planets are known as ice giants because during their formation these elements are thought to have been incorporated into them as ice; however, they probably contain very little ice.
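A back-of-the-envelope calculation connects the volume and mass figures in the composition paragraph above. Treating the upper atmosphere as only hydrogen molecules and helium atoms, and taking the volume fractions as number fractions, gives the following sketch (the particle masses of about 2 u and 4 u are standard values, not quantities from the article); the remaining gap up to the quoted 24% by mass is plausibly accounted for by heavier trace species and measurement details:

```python
# Rough check: why helium's share by mass exceeds its share by volume.
# In a gas, volume fraction tracks number fraction, so weight each
# species by its particle mass (H2 ~ 2 u, He ~ 4 u; standard values).

n_h2, n_he = 0.90, 0.10  # approximate number (volume) fractions from the text
m_h2, m_he = 2.0, 4.0    # particle masses in atomic mass units

he_mass_fraction = n_he * m_he / (n_h2 * m_h2 + n_he * m_he)
print(f"He mass fraction ~ {he_mass_fraction:.0%}")  # ~18% from this two-gas model
```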
Size and mass Jupiter is about ten times larger than Earth and about one tenth the size of the Sun. Jupiter's mass is 318 times that of Earth and 2.5 times that of all the other planets in the Solar System combined. It is so massive that its barycentre with the Sun lies above the Sun's surface, at 1.068 solar radii from the Sun's centre. Jupiter's radius is about one tenth the radius of the Sun, and its mass is one thousandth the mass of the Sun, as the densities of the two bodies are similar. A "Jupiter mass" ( or ) is used as a unit to describe masses of other objects, particularly extrasolar planets and brown dwarfs. For example, the extrasolar planet HD 209458 b has a mass of , while the brown dwarf Gliese 229 b has a mass of . Theoretical models indicate that if Jupiter had over 40% more mass, the interior would be so compressed that its volume would decrease despite the increasing amount of matter. For smaller changes in its mass, the radius would not change appreciably. As a result, Jupiter is thought to have about as large a diameter as a planet of its composition and evolutionary history can achieve. The process of further shrinkage with increasing mass would continue until appreciable stellar ignition was achieved. Although Jupiter would need to be about 75 times more massive to fuse hydrogen and become a star, its diameter would be sufficient for one: the smallest red dwarfs may be only slightly larger in radius than Saturn. Jupiter radiates more heat than it receives through solar radiation, due to the Kelvin–Helmholtz mechanism within its contracting interior. This process causes Jupiter to shrink by about per year. At the time of its formation, Jupiter was hotter and was about twice its current diameter. Internal structure Before the early 21st century, most scientists proposed one of two scenarios for the formation of Jupiter. If the planet accreted first as a solid body, it would consist of a dense core, a surrounding layer of fluid metallic hydrogen (with some helium) extending outward to about 80% of the radius of the planet, and an outer atmosphere consisting primarily of molecular hydrogen. Alternatively, if the planet collapsed directly from the gaseous protoplanetary disk, it was expected to completely lack a core, consisting instead of a denser and denser fluid (predominantly molecular and metallic hydrogen) all the way to the centre. Data from the Juno mission showed that Jupiter has a diffuse core that mixes into its mantle, extending for 30–50% of the planet's radius and comprising heavy elements with a combined mass 7–25 times that of Earth. This mixing process could have arisen during formation, while the planet accreted solids and gases from the surrounding nebula. Alternatively, it could have been caused by an impact from a planet of about ten Earth masses a few million years after Jupiter's formation, which would have disrupted an originally compact Jovian core. Outside the layer of metallic hydrogen lies a transparent interior atmosphere of hydrogen. At this depth, the pressure and temperature are above molecular hydrogen's critical pressure of 1.3 MPa and critical temperature of . In this state, there are no distinct liquid and gas phases—hydrogen is said to be in a supercritical fluid state. The hydrogen and helium gas extending downward from the cloud layer gradually transitions to a liquid in deeper layers, possibly resembling an ocean of liquid hydrogen and other supercritical fluids. Physically, the gas gradually becomes hotter and denser as depth increases.
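Before continuing deeper into the interior, the size and mass ratios quoted above can be sanity-checked against the claim that the two bodies have similar densities; the one-tenth and one-thousandth figures below are the article's own round numbers:

```python
# Consistency check: mean density scales as mass / radius**3, so a body
# with 1/1000 the Sun's mass and 1/10 its radius has a similar density.

mass_ratio = 1 / 1000    # Jupiter's mass relative to the Sun (round figure)
radius_ratio = 1 / 10    # Jupiter's radius relative to the Sun (round figure)

density_ratio = mass_ratio / radius_ratio**3
print(f"density(Jupiter) / density(Sun) ~ {density_ratio:.1f}")  # ~1.0
```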
Rain-like droplets of helium and neon precipitate downward through the lower atmosphere, depleting the abundance of these elements in the upper atmosphere. Calculations suggest that helium drops separate from metallic hydrogen at a radius of ( below the cloud tops) and merge again at ( beneath the clouds). Rainfalls of diamonds have been suggested to occur on Jupiter, as well as on Saturn and the ice giants Uranus and Neptune. The temperature and pressure inside Jupiter increase steadily inward, as the heat of planetary formation can only escape by convection. At a surface depth where the atmospheric pressure level is , the temperature is around . The region where supercritical hydrogen changes gradually from a molecular fluid to a metallic fluid spans pressure ranges of with temperatures of , respectively. The temperature of Jupiter's diluted core is estimated to be with a pressure of around . Atmosphere The atmosphere of Jupiter is primarily composed of molecular hydrogen and helium, with a smaller amount of other compounds such as water, methane, hydrogen sulfide, and ammonia. Jupiter's atmosphere extends to a depth of approximately below the cloud layers. Cloud layers Jupiter is perpetually covered with clouds of ammonia crystals, which may contain ammonium hydrosulfide as well. The clouds are located in the tropopause layer of the atmosphere, forming bands at different latitudes, known as tropical regions. These are subdivided into lighter-hued zones and darker belts. The interactions of these conflicting circulation patterns cause storms and turbulence. Wind speeds of are common in zonal jet streams. The zones have been observed to vary in width, colour and intensity from year to year, but they have remained stable enough for scientists to name them. The cloud layer is about deep and consists of at least two decks of ammonia clouds: a thin, clearer region on top and a thicker, lower deck. There may be a thin layer of water clouds underlying the ammonia clouds, as suggested by flashes of lightning detected in the atmosphere of Jupiter. These electrical discharges can be up to a thousand times as powerful as lightning on Earth. The water clouds are assumed to generate thunderstorms in the same way as terrestrial thunderstorms, driven by the heat rising from the interior. The Juno mission revealed the presence of "shallow lightning", which originates from ammonia-water clouds relatively high in the atmosphere. These discharges carry "mushballs" of water-ammonia slush covered in ice, which fall deep into the atmosphere. Upper-atmospheric lightning has also been observed: bright flashes of light in Jupiter's upper atmosphere that last around 1.4 milliseconds. These are known as "elves" or "sprites" and appear blue or pink due to the hydrogen. The orange and brown colours in the clouds of Jupiter are caused by upwelling compounds that change colour when they are exposed to ultraviolet light from the Sun. The exact makeup remains uncertain, but the substances are thought to be made up of phosphorus, sulfur or possibly hydrocarbons. These colourful compounds, known as chromophores, mix with the warmer clouds of the lower deck. The light-coloured zones are formed when rising convection cells form crystallising ammonia that hides the chromophores from view. Jupiter has a low axial tilt, meaning that the poles always receive less solar radiation than the planet's equatorial region. Convection within the interior of the planet transports energy to the poles, balancing out temperatures at the cloud layer.
Great Red Spot and other vortices A well-known feature of Jupiter is the Great Red Spot, a persistent anticyclonic storm located 22° south of the equator. It was first observed in 1831, and possibly as early as 1665. Images by the Hubble Space Telescope have shown two more "red spots" adjacent to the Great Red Spot. The storm is visible through Earth-based telescopes with an aperture of 12 cm or larger. The storm rotates counterclockwise, with a period of about six days. The maximum altitude of this storm is about above the surrounding cloud tops. The Spot's composition and the source of its red colour remain uncertain, although photodissociated ammonia reacting with acetylene is a likely explanation. The Great Red Spot is larger than the Earth. Mathematical models suggest that the storm is stable and will be a permanent feature of the planet. However, it has significantly decreased in size since its discovery. Initial observations in the late 1800s showed it to be approximately across. In recent years, the storm was measured at approximately , and was decreasing in length by about per year. In October 2021, a Juno flyby measured the depth of the Great Red Spot, putting it at around . Juno missions found several cyclone groups at Jupiter's poles. The northern group contains nine cyclones, with a large one in the centre and eight others around it, while its southern counterpart also consists of a central vortex, surrounded by five large storms and a single smaller one, for a total of seven storms. In 2000, an atmospheric feature formed in the southern hemisphere that is similar in appearance to the Great Red Spot, but smaller. This was created when smaller, white oval-shaped storms merged to form a single feature—these three smaller white ovals were formed in 1939–1940. The merged feature was named Oval BA. It has since increased in intensity and changed from white to red, earning it the nickname "Little Red Spot". In April 2017, a "Great Cold Spot" was discovered in Jupiter's thermosphere at its north pole. This feature is across, wide, and cooler than surrounding material. While this spot changes form and intensity over the short term, it has maintained its general position in the atmosphere for more than 15 years. It may be a giant vortex similar to the Great Red Spot, and appears to be quasi-stable like the vortices in Earth's thermosphere. This feature may be formed by interactions between charged particles generated from Io and the strong magnetic field of Jupiter, resulting in a redistribution of heat flow. Magnetosphere Jupiter's magnetic field is the strongest of any planet in the Solar System, with a dipole moment of that is tilted at an angle of 10.31° to the pole of rotation. The surface magnetic field strength varies from up to . This field is thought to be generated by eddy currents—swirling movements of conducting materials—within the fluid, metallic hydrogen core. At about 75 Jupiter radii from the planet, the interaction of the magnetosphere with the solar wind generates a bow shock. Surrounding Jupiter's magnetosphere is a magnetopause, located at the inner edge of a magnetosheath—a region between it and the bow shock. The solar wind interacts with these regions, elongating the magnetosphere on Jupiter's lee side and extending it outward until it nearly reaches the orbit of Saturn. The four largest moons of Jupiter all orbit within the magnetosphere, which protects them from the solar wind.
The volcanoes on the moon Io emit large amounts of sulfur dioxide, forming a gas torus along its orbit. The gas is ionized in Jupiter's magnetosphere, producing sulfur and oxygen ions. They, together with hydrogen ions originating from the atmosphere of Jupiter, form a plasma sheet in Jupiter's equatorial plane. The plasma in the sheet co-rotates with the planet, causing deformation of the dipole magnetic field into that of a magnetodisk. Electrons within the plasma sheet generate a strong radio signature, with short, superimposed bursts in the range of 0.6–30 MHz that are detectable from Earth with consumer-grade shortwave radio receivers. As Io moves through this torus, the interaction generates Alfvén waves that carry ionized matter into the polar regions of Jupiter. As a result, radio waves are generated through a cyclotron maser mechanism, and the energy is transmitted out along a cone-shaped surface. When Earth intersects this cone, the radio emissions from Jupiter can exceed the radio output of the Sun. Planetary rings Jupiter has a faint planetary ring system composed of three main segments: an inner torus of particles known as the halo, a relatively bright main ring, and an outer gossamer ring. These rings appear to be made of dust, whereas Saturn's rings are made of ice. The main ring is most likely made of material ejected from the satellites Adrastea and Metis, which is drawn into Jupiter by the planet's strong gravitational influence. New material is added by additional impacts. In a similar way, the moons Thebe and Amalthea are believed to produce the two distinct components of the dusty gossamer ring. There is evidence of a fourth ring that may consist of collisional debris from Amalthea strung along the same moon's orbit. Orbit and rotation Jupiter is the only planet whose barycentre with the Sun lies outside the volume of the Sun, though only by 7% of the Sun's radius. The average distance between Jupiter and the Sun is , and it completes an orbit every 11.86 years. This is approximately two-fifths the orbital period of Saturn, forming a near orbital resonance. The orbital plane of Jupiter is inclined 1.30° compared to Earth's. Because the eccentricity of its orbit is 0.049, Jupiter is slightly over 75 million km nearer the Sun at perihelion than at aphelion, which means that its orbit is nearly circular. This low eccentricity is at odds with exoplanet discoveries, which have revealed Jupiter-sized planets with very high eccentricities. Models suggest this may be because our Solar System has two giant planets, as the presence of a third or more giant planets tends to induce larger eccentricities. The axial tilt of Jupiter is 3.13°, which is relatively small, so its seasons are insignificant compared to those of Earth and Mars. Jupiter's rotation is the fastest of all the Solar System's planets, completing a rotation on its axis in slightly less than ten hours; this creates an equatorial bulge easily seen through an amateur telescope. Because Jupiter is not a solid body, its upper atmosphere undergoes differential rotation. The rotation period of Jupiter's polar atmosphere is about five minutes longer than that of the equatorial atmosphere. The planet is an oblate spheroid, meaning that the diameter across its equator is longer than the diameter measured between its poles. On Jupiter, the equatorial diameter is longer than the polar diameter.
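The orbital figures above can be reproduced with a short calculation; the masses, semi-major axis and solar radius below are standard reference values assumed for illustration, not quantities stated in the article, while the eccentricity is the article's own:

```python
# Rough checks on the orbit paragraph: (1) the Sun-Jupiter barycentre
# distance r = a * m_J / (M_sun + m_J), and (2) the perihelion-aphelion
# difference 2*a*e. Input values are standard reference figures.

M_SUN = 1.989e30  # kg, solar mass
M_JUP = 1.898e27  # kg, Jupiter's mass
A_JUP = 7.785e8   # km, Jupiter's semi-major axis
R_SUN = 6.957e5   # km, solar radius
ECC = 0.049       # orbital eccentricity (quoted in the text)

r_barycentre = A_JUP * M_JUP / (M_SUN + M_JUP)
print(f"barycentre at {r_barycentre / R_SUN:.3f} solar radii")  # ~1.07

diff = 2 * A_JUP * ECC / 1e6
print(f"perihelion-aphelion difference ~ {diff:.0f} million km")  # ~76
```

Both outputs agree with the figures in the text: the barycentre sits just outside the Sun's surface, and the perihelion-aphelion difference is slightly over 75 million km.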
Three systems are used as frames of reference for tracking planetary rotation, particularly when graphing the motion of atmospheric features. System I applies to latitudes from 7° N to 7° S; its period is the planet's shortest, at 9h 50m 30.0s. System II applies at latitudes north and south of these; its period is 9h 55m 40.6s. System III was defined by radio astronomers and corresponds to the rotation of the planet's magnetosphere; its period is Jupiter's official rotation period. Observation Jupiter is usually the fourth-brightest object in the sky (after the Sun, the Moon, and Venus), although at opposition Mars can appear brighter than Jupiter. Depending on Jupiter's position with respect to the Earth, it can vary in visual magnitude from as bright as −2.94 at opposition down to −1.66 during conjunction with the Sun. The mean apparent magnitude is −2.20, with a standard deviation of 0.33. The angular diameter of Jupiter likewise varies from 50.1 to 30.5 arcseconds. Favourable oppositions occur when Jupiter is passing through the perihelion of its orbit, bringing it closer to Earth. Near opposition, Jupiter will appear to go into retrograde motion for a period of about 121 days, moving backward through an angle of 9.9° before returning to prograde movement. Because the orbit of Jupiter is outside that of Earth, the phase angle of Jupiter as viewed from Earth is always less than 11.5°; thus, Jupiter always appears nearly fully illuminated when viewed through Earth-based telescopes. Only during spacecraft missions to Jupiter have crescent views of the planet been obtained. A small telescope will usually show Jupiter's four Galilean moons and the cloud belts across Jupiter's atmosphere. A larger telescope with an aperture of will show Jupiter's Great Red Spot when it faces Earth. History Pre-telescopic research Observation of Jupiter dates back to at least the Babylonian astronomers of the 7th or 8th century BC. The ancient Chinese knew Jupiter as the "Suì Star" and established their cycle of twelve earthly branches based on the approximate number of years it takes Jupiter to revolve around the Sun; the Chinese language still uses its name when referring to years of age. By the 4th century BC, these observations had developed into the Chinese zodiac, and each year became associated with a Tai Sui star and god controlling the region of the heavens opposite Jupiter's position in the night sky. These beliefs survive in some Taoist religious practices and in the East Asian zodiac's twelve animals. The Chinese historian Xi Zezong has claimed that Gan De, an ancient Chinese astronomer, reported a small star "in alliance" with the planet, which may indicate a sighting of one of Jupiter's moons with the unaided eye. If true, this would predate Galileo's discovery by nearly two millennia. A 2016 paper reports that the trapezoidal rule was used by Babylonians before 50 BC for integrating the velocity of Jupiter along the ecliptic. In his 2nd-century work, the Almagest, the Hellenistic astronomer Claudius Ptolemaeus constructed a geocentric planetary model based on deferents and epicycles to explain Jupiter's motion relative to Earth, giving its orbital period around Earth as 4332.38 days, or 11.86 years. Ground-based telescope research In 1610, the Italian polymath Galileo Galilei discovered the four largest moons of Jupiter (now known as the Galilean moons) using a telescope. This is thought to be the first telescopic observation of moons other than Earth's.
Just one day after Galileo, Simon Marius independently discovered moons around Jupiter, though he did not publish his discovery in a book until 1614. It was Marius's names for the major moons, however, that stuck: Io, Europa, Ganymede, and Callisto. The discovery was a major point in favour of the heliocentric theory of the motions of the planets by Nicolaus Copernicus; Galileo's outspoken support of the Copernican theory led to his being tried and condemned by the Inquisition. In the autumn of 1639, the Neapolitan optician Francesco Fontana tested a 22-palm telescope of his own making and discovered the characteristic bands of the planet's atmosphere. During the 1660s, Giovanni Cassini used a new telescope to discover spots in Jupiter's atmosphere, observe that the planet appeared oblate, and estimate its rotation period. In 1692, Cassini noticed that the atmosphere undergoes a differential rotation. The Great Red Spot may have been observed as early as 1664 by Robert Hooke and in 1665 by Cassini, although this is disputed. The pharmacist Heinrich Schwabe produced the earliest known drawing to show details of the Great Red Spot, in 1831. The Red Spot was reportedly lost from sight on several occasions between 1665 and 1708 before becoming quite conspicuous in 1878. It was recorded as fading again in 1883 and at the start of the 20th century. Both Giovanni Borelli and Cassini made careful tables of the motions of Jupiter's moons, which allowed predictions of when the moons would pass before or behind the planet. By the 1670s, Cassini had observed that when Jupiter was on the opposite side of the Sun from Earth, these events would occur about 17 minutes later than expected. Ole Rømer deduced that light does not travel instantaneously (a conclusion that Cassini had earlier rejected), and this timing discrepancy was used to estimate the speed of light. In 1892, E. E. Barnard observed a fifth satellite of Jupiter with the refractor at Lick Observatory in California. This moon was later named Amalthea. It was the last planetary moon to be discovered directly by a visual observer through a telescope. An additional eight satellites were discovered before the flyby of the Voyager 1 probe in 1979. In 1932, Rupert Wildt identified absorption bands of ammonia and methane in the spectra of Jupiter. Three long-lived anticyclonic features called "white ovals" were observed in 1938. For several decades, they remained as separate features in the atmosphere that approached each other but never merged. Finally, two of the ovals merged in 1998, then absorbed the third in 2000, becoming Oval BA. Radiotelescope research In 1955, Bernard Burke and Kenneth Franklin discovered that Jupiter emits bursts of radio waves at a frequency of 22.2 MHz. The period of these bursts matched the rotation of the planet, and they used this information to determine a more precise value for Jupiter's rotation rate. Radio bursts from Jupiter were found to come in two forms: long bursts (or L-bursts) lasting up to several seconds, and short bursts (or S-bursts) lasting less than a hundredth of a second. Scientists have discovered three forms of radio signals transmitted from Jupiter: Decametric radio bursts (with a wavelength of tens of metres) vary with the rotation of Jupiter, and are influenced by the interaction of Io with Jupiter's magnetic field. Decimetric radio emission (with wavelengths measured in centimetres) was first observed by Frank Drake and Hein Hvatum in 1959.
The origin of this signal is a torus-shaped belt around Jupiter's equator, which generates cyclotron radiation from electrons that are accelerated in Jupiter's magnetic field. Thermal radiation is produced by heat in the atmosphere of Jupiter. Exploration Jupiter has been visited by automated spacecraft since 1973, when the space probe Pioneer 10 passed close enough to Jupiter to send back revelations about its properties and phenomena. Missions to Jupiter are accomplished at a cost in energy, which is described by the net change in velocity of the spacecraft, or delta-v. Entering a Hohmann transfer orbit from Earth to Jupiter from low Earth orbit requires a delta-v of 6.3 km/s, which is comparable to the 9.7 km/s delta-v needed to reach low Earth orbit. Gravity assists through planetary flybys can be used to reduce the energy required to reach Jupiter. Flyby missions Beginning in 1973, several spacecraft performed planetary flyby manoeuvres that brought them within observation range of Jupiter. The Pioneer missions obtained the first close-up images of Jupiter's atmosphere and several of its moons. They discovered that the radiation fields near the planet were much stronger than expected, but both spacecraft managed to survive in that environment. The trajectories of these spacecraft were used to refine the mass estimates of the Jovian system. Radio occultations by the planet resulted in better measurements of Jupiter's diameter and the amount of polar flattening. Six years later, the Voyager missions vastly improved the understanding of the Galilean moons and discovered Jupiter's rings. They also confirmed that the Great Red Spot was anticyclonic. Comparison of images showed that the Spot had changed hues since the Pioneer missions, turning from orange to dark brown. A torus of ionized atoms, found to come from erupting volcanoes on Io's surface, was discovered along the moon's orbital path. As the spacecraft passed behind the planet, it observed flashes of lightning in the night-side atmosphere. The next mission to encounter Jupiter was the Ulysses solar probe. In February 1992, it performed a flyby manoeuvre to attain a polar orbit around the Sun. During this pass, the spacecraft studied Jupiter's magnetosphere, although it had no cameras to photograph the planet. The spacecraft passed by Jupiter six years later, this time at a much greater distance. In 2000, the Cassini probe flew by Jupiter on its way to Saturn, and provided higher-resolution images. The New Horizons probe flew by Jupiter in 2007 for a gravity assist en route to Pluto. The probe's cameras measured plasma output from volcanoes on Io and studied all four Galilean moons in detail. Galileo mission The first spacecraft to orbit Jupiter was the Galileo mission, which reached the planet on December 7, 1995. It remained in orbit for over seven years, conducting multiple flybys of all the Galilean moons and Amalthea. The spacecraft also witnessed the impact of Comet Shoemaker–Levy 9 when it collided with Jupiter in 1994. Some of the goals for the mission were thwarted due to a malfunction in Galileo's high-gain antenna. A 340-kilogram titanium atmospheric probe was released from the spacecraft in July 1995, entering Jupiter's atmosphere on December 7. It parachuted through of the atmosphere at a speed of about and collected data for 57.6 minutes before it was destroyed.
The Galileo orbiter itself experienced a more rapid version of the same fate when it was deliberately steered into the planet on September 21, 2003. NASA destroyed the spacecraft to avoid any possibility of it crashing into and possibly contaminating the moon Europa, which may harbour life. Data from this mission revealed that hydrogen composes up to 90% of Jupiter's atmosphere. The recorded temperature was more than , and the wind speed measured more than 644 km/h (>400 mph) before the probe vaporized. Juno mission NASA's Juno mission arrived at Jupiter on July 4, 2016, with the goal of studying the planet in detail from a polar orbit. The spacecraft was originally intended to orbit Jupiter thirty-seven times over a period of twenty months. During the mission, the spacecraft is exposed to high levels of radiation from Jupiter's magnetosphere, which may cause the failure of certain instruments. On August 27, 2016, the spacecraft completed its first flyby of Jupiter and sent back the first-ever images of Jupiter's north pole. Juno completed 12 orbits before the end of its budgeted mission plan, which ended in July 2018. In June of that year, NASA extended the mission operations plan to July 2021, and in January 2021 the mission was extended to September 2025, with four flybys of Jupiter's moons: one of Ganymede, one of Europa, and two of Io. When Juno reaches the end of the mission, it will perform a controlled deorbit and disintegrate into Jupiter's atmosphere, to avoid the risk of colliding with and contaminating Jupiter's moons. Cancelled missions and future plans There is interest in missions to study Jupiter's larger icy moons, which may have subsurface liquid oceans. Funding difficulties have delayed progress, causing NASA's JIMO (Jupiter Icy Moons Orbiter) to be cancelled in 2005. A subsequent proposal was developed for a joint NASA/ESA mission called EJSM/Laplace, with a provisional launch date around 2020. EJSM/Laplace would have consisted of the NASA-led Jupiter Europa Orbiter and the ESA-led Jupiter Ganymede Orbiter. However, ESA formally ended the partnership in April 2011, citing budget issues at NASA and their consequences for the mission timetable. Instead, ESA planned to go ahead with a European-only mission to compete in its L1 Cosmic Vision selection. These plans have been realized as the European Space Agency's Jupiter Icy Moons Explorer (JUICE), launched on April 14, 2023, followed by NASA's Europa Clipper mission, launched on October 14, 2024. Other proposed missions include the Chinese National Space Administration's Tianwen-4 mission, which aims to launch an orbiter to the Jovian system and possibly Callisto around 2035, and CNSA's Interstellar Express and NASA's Interstellar Probe, which would both use Jupiter's gravity to help them reach the edges of the heliosphere. Moons Jupiter has 95 known natural satellites, and this number is likely to rise as instrumentation improves. Of these, 79 are less than 10 km in diameter. The four largest moons, known as the Galilean moons, are Ganymede, Callisto, Io, and Europa (in order of decreasing size), and are visible from Earth with binoculars on a clear night. Galilean moons The moons discovered by Galileo—Io, Europa, Ganymede, and Callisto—are among the largest in the Solar System. The orbits of Io, Europa, and Ganymede form a pattern known as a Laplace resonance; for every four orbits that Io makes around Jupiter, Europa makes exactly two orbits and Ganymede makes exactly one, a 4:2:1 ratio illustrated in the sketch below.
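A quick numerical check of that 4:2:1 pattern, using standard orbital periods for the three moons (the values in days are assumptions from general references, not figures given in the article):

```python
# The Io-Europa-Ganymede Laplace resonance: period ratios near 1:2:4.
# Orbital periods in days are standard reference values.

periods = {"Io": 1.769, "Europa": 3.551, "Ganymede": 7.155}

p_io = periods["Io"]
for moon, period in periods.items():
    print(f"{moon}: {period / p_io:.2f} x Io's period")
# Io: 1.00, Europa: 2.01, Ganymede: 4.04 -> close to 1:2:4
```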
This resonance causes the gravitational effects of the three large moons to distort their orbits into elliptical shapes, because each moon receives an extra tug from its neighbours at the same point in every orbit it makes. The tidal force from Jupiter, on the other hand, works to circularize their orbits. The eccentricity of their orbits causes regular flexing of the three moons' shapes, with Jupiter's gravity stretching them out as they approach it and allowing them to spring back to more spherical shapes as they swing away. The friction created by this tidal flexing generates heat in the interior of the moons. This is seen most dramatically in the volcanic activity of Io (which is subject to the strongest tidal forces), and to a lesser degree in the geological youth of Europa's surface, which indicates recent resurfacing of the moon's exterior. Classification Jupiter's moons were traditionally classified into four groups of four, based on their similar orbital elements. This picture has been complicated by the discovery of numerous small outer moons since 1999. Jupiter's moons are now divided into several different groups, although there are two known moons, Themisto and Valetudo, which are not part of any group. The eight innermost regular moons, which have nearly circular orbits near the plane of Jupiter's equator, are thought to have formed alongside Jupiter, while the remainder are irregular moons, thought to be captured asteroids or fragments of captured asteroids. The irregular moons within each group may have a common origin, perhaps as a larger moon or captured body that broke up. Interaction with the Solar System As the most massive of the eight planets, Jupiter has helped shape the Solar System with its gravitational influence. With the exception of Mercury, the orbits of the system's planets lie closer to Jupiter's orbital plane than to the Sun's equatorial plane. The Kirkwood gaps in the asteroid belt are mostly caused by Jupiter, and the planet may have been responsible for the Late Heavy Bombardment in the inner Solar System's history. In addition to its moons, Jupiter's gravitational field controls numerous asteroids that have settled around the Lagrangian points that precede and follow the planet in its orbit around the Sun. These are known as the Trojan asteroids, and are divided into Greek and Trojan "camps" in honour of the Iliad. The first of these, 588 Achilles, was discovered by Max Wolf in 1906; since then more than two thousand have been discovered. The largest is 624 Hektor. The Jupiter family is defined as comets that have a semi-major axis smaller than Jupiter's; most short-period comets belong to this group. Members of the Jupiter family are thought to form in the Kuiper belt, outside the orbit of Neptune. During close encounters with Jupiter, they are perturbed into orbits with a smaller period, which are then circularized by regular gravitational interactions with the Sun and Jupiter. Impacts Jupiter has been called the Solar System's vacuum cleaner because of its immense gravity well and location near the inner Solar System. Jupiter experiences more impacts, such as from comets, than any other planet in the Solar System; for example, it experiences about 200 times more asteroid and comet impacts than Earth. Scientists used to believe that Jupiter partially shielded the inner system from cometary bombardment.
However, computer simulations in 2008 suggest that Jupiter does not cause a net decrease in the number of comets that pass through the inner Solar System, as its gravity perturbs their orbits inward roughly as often as it accretes or ejects them. This topic remains controversial among scientists, as some think it draws comets towards Earth from the Kuiper belt, while others believe that Jupiter protects Earth from the Oort cloud. In July 1994, comet Shoemaker–Levy 9 collided with Jupiter. The impacts were closely observed by observatories around the world, as well as by the Hubble Space Telescope and the Galileo spacecraft, and the event was widely covered by the media. Surveys of early astronomical records and drawings produced eight examples of potential impact observations between 1664 and 1839. However, a 1997 review determined that these observations had little or no possibility of being the results of impacts. Further investigation by this team revealed that a dark surface feature discovered by astronomer Giovanni Cassini in 1690 may have been an impact scar. In culture The existence of the planet Jupiter has been known since ancient times. It is visible to the naked eye in the night sky and can be seen in the daytime when the Sun is low. To the Babylonians, this planet represented their god Marduk, chief of their pantheon from the Hammurabi period. They used Jupiter's roughly 12-year orbit along the ecliptic to define the constellations of their zodiac. The Greek name for this planet is Zeus (Ζεύς), also referred to as Dias (Δίας); the latter name is retained as the planet's name in modern Greek. The ancient Greeks knew the planet as Phaethon, meaning "shining one" or "blazing star". The Greek myths of Zeus from the Homeric period showed particular similarities to certain Near-Eastern gods, including the Semitic El and Baal, the Sumerian Enlil, and the Babylonian god Marduk. The association between the planet and the Greek deity Zeus was drawn from Near Eastern influences and was fully established by the fourth century BC, as documented in the Epinomis of Plato and his contemporaries. The god Jupiter is the Roman counterpart of Zeus, and he is the principal god of Roman mythology. The Romans originally called Jupiter the "star of Jupiter" (Iuppiter Stella), as they believed it to be sacred to its namesake god. This name comes from the Proto-Indo-European vocative compound *Dyēu-pəter (nominative: *Dyēus-pətēr, meaning "Father Sky-God" or "Father Day-God"). As the supreme god of the Roman pantheon, Jupiter was the god of thunder, lightning, and storms, and was called the god of light and sky. In Vedic astrology, Hindu astrologers named the planet after Brihaspati, the religious teacher of the gods, and called it "Guru", meaning "teacher". In Central Asian Turkic myths, Jupiter is called Erendiz or Erentüz, from eren (of uncertain meaning) and yultuz ("star"). The Turks calculated the period of the orbit of Jupiter as 11 years and 300 days, and believed that some social and natural events were connected to Erentüz's movements in the sky. The Chinese, Vietnamese, Koreans, and Japanese called it the "wood star", based on the Chinese Five Elements. In China, it became known as the "Year-star" (Sui-sing), as Chinese astronomers noted that it jumped one zodiac constellation each year (with corrections). In some ancient Chinese writings, the years were, in principle, named in correlation with the Jovian zodiac signs.
Banana
A banana is an elongated, edible fruit – botanically a berry – produced by several kinds of large treelike herbaceous flowering plants in the genus Musa. In some countries, cooking bananas are called plantains, distinguishing them from dessert bananas. The fruit is variable in size, color and firmness, but is usually elongated and curved, with soft flesh rich in starch covered with a peel, which may have a variety of colors when ripe. The fruit grows upward in clusters near the top of the plant. Almost all modern edible seedless (parthenocarpic) cultivated bananas come from two wild species – Musa acuminata and Musa balbisiana – or hybrids of them. Musa species are native to tropical Indomalaya and Australia; they were probably domesticated in New Guinea. They are grown in 135 countries, primarily for their fruit, and to a lesser extent to make banana paper and textiles, while some are grown as ornamental plants. The world's largest producers of bananas in 2022 were India and China, which together accounted for approximately 26% of total production. Bananas are eaten raw or cooked in recipes varying from curries to banana chips, fritters, fruit preserves, or simply baked or steamed. Worldwide, there is no sharp distinction between dessert "bananas" and cooking "plantains": the division works well enough in the Americas and Europe, but it breaks down in Southeast Asia, where many more kinds of bananas are grown and eaten. The term "banana" is also applied to other members of the genus Musa, such as the scarlet banana (Musa coccinea), the pink banana (Musa velutina), and the Fe'i bananas. Members of the genus Ensete, such as the snow banana (Ensete glaucum) and the economically important false banana (Ensete ventricosum) of Africa, are sometimes included. Both genera are in the banana family, Musaceae. Banana plantations are subject to damage by parasitic nematodes and insect pests, and to fungal and bacterial diseases, one of the most serious being Panama disease, which is caused by a Fusarium fungus. This and black Sigatoka threaten the production of Cavendish bananas, the main kind eaten in the Western world, which is a triploid Musa acuminata cultivar. Plant breeders are seeking new varieties, but these are difficult to breed given that commercial varieties are seedless. To enable future breeding, banana germplasm is conserved in multiple gene banks around the world. Description The banana plant is the largest herbaceous flowering plant. All the above-ground parts of a banana plant grow from a structure called a corm. Plants are normally tall and fairly sturdy with a treelike appearance, but what appears to be a trunk is actually a pseudostem composed of multiple leaf-stalks (petioles). Bananas grow in a wide variety of soils, as long as the soil is at least deep, has good drainage and is not compacted. They are fast-growing plants, with a growth rate of up to per day. The leaves of banana plants are composed of a stalk (petiole) and a blade (lamina). The base of the petiole widens to form a sheath; the tightly packed sheaths make up the pseudostem, which is all that supports the plant. The edges of the sheath meet when it is first produced, making it tubular. As new growth occurs in the centre of the pseudostem, the edges are forced apart. Cultivated banana plants vary in height depending on the variety and growing conditions. Most are around tall, with a range from 'Dwarf Cavendish' plants at around to 'Gros Michel' at or more. Leaves are spirally arranged and may grow long and wide.
When a banana plant is mature, the corm stops producing new leaves and begins to form a flower spike or inflorescence. A stem develops which grows up inside the pseudostem, carrying the immature inflorescence until eventually it emerges at the top. Each pseudostem normally produces a single inflorescence, also known as the "banana heart". After fruiting, the pseudostem dies, but offshoots will normally have developed from the base, so that the plant as a whole is perennial. The inflorescence contains many petal-like bracts between rows of flowers. The female flowers (which can develop into fruit) appear in rows further up the stem (closer to the leaves) than the rows of male flowers. The ovary is inferior, meaning that the tiny petals and other flower parts appear at the tip of the ovary. The banana fruits develop from the banana heart, in a large hanging cluster called a bunch, made up of around nine tiers called hands, with up to 20 fruits to a hand. A bunch can weigh . The stalk ends of the fruits connect to the rachis of the inflorescence. Opposite the stalk end is the blossom end, where the remnants of the flower give the flesh inside the peel a texture different from the rest. The fruit has been described as a "leathery berry". There is a protective outer layer (a peel or skin) with numerous long, thin strings (vascular bundles), which run lengthwise between the skin and the edible inner white flesh. The peel is less palatable and is usually discarded after peeling the fruit, which is optimally done from the blossom end but often started from the stalk end. The inner part of the common yellow dessert variety can be split lengthwise into three sections that correspond to the inner portions of the three carpels by manually deforming the unopened fruit. In cultivated varieties, fertile seeds are usually absent. Evolution Phylogeny A 2011 phylogenomic analysis using nuclear genes established the phylogeny of representatives of the family Musaceae, including the major edible kinds of banana (the cladogram itself is not reproduced here). Many cultivated bananas are hybrids of M. acuminata × M. balbisiana. Work by Li and colleagues in 2024 identifies three subspecies of M. acuminata, namely sspp. banksii, malaccensis, and zebrina, as contributing substantially to the Ban, Dh, and Ze subgenomes of triploid cultivated bananas respectively. Taxonomy The genus Musa was created by Carl Linnaeus in 1753. The name may be derived from Antonius Musa, physician to the Emperor Augustus, or Linnaeus may have adapted the Arabic word for banana, mauz. The ultimate origin of musa may be in the Trans–New Guinea languages, which have words similar to "#muku"; from there the name was borrowed into the Austronesian languages and across Asia, accompanying the cultivation of the banana as it was brought to new areas, via the Dravidian languages of India, into Arabic as a Wanderwort. The word "banana" is thought to be of West African origin, possibly from a Wolof word, and passed into English via Spanish or Portuguese. Musa is the type genus in the family Musaceae. The APG III system assigns Musaceae to the order Zingiberales, part of the commelinid clade of the monocotyledonous flowering plants. Some 70 species of Musa were recognized by the World Checklist of Selected Plant Families; several produce edible fruit, while others are cultivated as ornamentals. The classification of cultivated bananas has long been a problematic issue for taxonomists.
Linnaeus originally placed bananas into two species based only on their uses as food: Musa sapientum for dessert bananas and Musa paradisiaca for plantains. More species names were added, but this approach proved to be inadequate for the number of cultivars in the primary center of diversity of the genus, Southeast Asia. Many of these cultivars were given names that were later discovered to be synonyms. In a series of papers published from 1947 onward, Ernest Cheesman showed that Linnaeus's Musa sapientum and Musa paradisiaca were cultivars and descendants of two wild seed-producing species, Musa acuminata and Musa balbisiana, both first described by Luigi Aloysius Colla. Cheesman recommended the abolition of Linnaeus's species in favor of reclassifying bananas according to three morphologically distinct groups of cultivars – those primarily exhibiting the botanical characteristics of Musa balbisiana, those primarily exhibiting the botanical characteristics of Musa acuminata, and those with characteristics of both. Researchers Norman Simmonds and Ken Shepherd proposed a genome-based nomenclature system in 1955. This system eliminated almost all the difficulties and inconsistencies of the earlier classification of bananas based on assigning scientific names to cultivated varieties. Despite this, the original names are still recognized by some authorities, leading to confusion. The accepted scientific names for most groups of cultivated bananas are Musa acuminata Colla and Musa balbisiana Colla for the ancestral species, and Musa × paradisiaca L. for the hybrid of the two. An unusual feature of the genetics of the banana is that chloroplast DNA is inherited maternally, while mitochondrial DNA is inherited paternally. This facilitates taxonomic study of species and subspecies relationships. Informal classification In regions such as North America and Europe, Musa fruits offered for sale can be divided into small sweet "bananas" eaten raw when ripe as a dessert, and large starchy "plantains" or cooking bananas, which do not have to be ripe. Linnaeus made this distinction when naming two "species" of Musa. Members of the "plantain subgroup" of banana cultivars, most important as food in West Africa and Latin America, correspond to this description, having long pointed fruit. They are described by Ploetz et al. as "true" plantains, distinct from other cooking bananas. The cooking bananas of East Africa belong to a different group, the East African Highland bananas. Further, small farmers in Colombia grow a much wider range of cultivars than large commercial plantations do, and in Southeast Asia—the center of diversity for bananas, both wild and cultivated—the distinction between "bananas" and "plantains" does not work. Many bananas are used both raw and cooked. There are starchy cooking bananas which are smaller than those eaten raw. The range of colors, sizes and shapes is far wider than in those grown or sold in Africa, Europe or the Americas. Southeast Asian languages do not make the distinction between "bananas" and "plantains" that is made in English. Thus both Cavendish dessert bananas and Saba cooking bananas are called pisang in Malaysia and Indonesia, kluai in Thailand and chuối in Vietnam. Fe'i bananas, grown and eaten in the islands of the Pacific, are derived from a different wild species. Most Fe'i bananas are cooked, but Karat bananas, which are short and squat with bright red skins, are eaten raw. History Domestication The earliest domestication of bananas (Musa spp.) 
was from naturally occurring parthenocarpic (seedless) individuals of Musa banksii in New Guinea. These were cultivated by Papuans before the arrival of Austronesian-speakers. Numerous phytoliths of bananas have been recovered from the Kuk Swamp archaeological site and dated to around 10,000 to 6,500 BP. Foraging humans in this area began domestication in the late Pleistocene using transplantation and early cultivation methods, and by the early to middle Holocene the process was complete. From New Guinea, cultivated bananas spread westward into Island Southeast Asia. They hybridized with other (possibly independently domesticated) subspecies of Musa acuminata as well as M. balbisiana in the Philippines, northern New Guinea, and possibly Halmahera. These hybridization events produced the triploid cultivars of bananas commonly grown today. The banana was one of the key crops that enabled farming to begin in Papua New Guinea. Spread From Island Southeast Asia, bananas became part of the staple domesticated crops of Austronesian peoples. These ancient introductions resulted in the banana subgroup now known as the true plantains, which include the East African Highland bananas and the Pacific plantains (the Iholena and Maoli-Popo'ulu subgroups). East African Highland bananas originated from banana populations introduced to Madagascar, probably from the region between Java, Borneo, and New Guinea, while Pacific plantains were introduced to the Pacific Islands from either eastern New Guinea or the Bismarck Archipelago. 21st-century discoveries of phytoliths in Cameroon dating to the first millennium BCE triggered a debate about the date of first cultivation in Africa, and there is linguistic evidence that bananas were known in East Africa or Madagascar around that time. Before these finds, the earliest evidence indicated that cultivation in Africa dated to no earlier than the late 6th century AD, when Malagasy people colonized Madagascar from Southeast Asia, around 600 AD onwards. Glucanase and two other proteins specific to bananas were found in dental calculus from early Iron Age (12th century BCE) Philistines at Tel Erani in the southern Levant. Another wave of introductions later spread bananas to other parts of tropical Asia, particularly Indochina and the Indian subcontinent. Phytoliths recovered from the Kot Diji archaeological site in Pakistan suggest that bananas may have been known to the Indus Valley civilisation. Southeast Asia remains the region of primary diversity of the banana; areas of secondary diversity are found in Africa, indicating a long history of banana cultivation there. Arab Agricultural Revolution The banana may have been present in isolated locations elsewhere in the Middle East on the eve of Islam. The spread of Islam was followed by far-reaching diffusion, and there are numerous references to bananas in Islamic texts (such as poems and hadiths) beginning in the 9th century. By the 10th century, the banana appeared in texts from Palestine and Egypt. From there it diffused into North Africa and Muslim Iberia during the Arab Agricultural Revolution. An article on banana tree cultivation is included in Ibn al-'Awwam's 12th-century agricultural work, Kitāb al-Filāḥa (Book on Agriculture). During the Middle Ages, bananas from Granada were considered among the best in the Arab world. Bananas were certainly grown in the Christian Kingdom of Cyprus by the late medieval period.
In 1458, the Italian traveller and writer Gabriele Capodilista wrote favourably of the extensive farm produce of the estates at Episkopi, near modern-day Limassol, including the region's banana plantations. Early modern spread In the early modern period, bananas were encountered by European explorers during the Magellan expedition in 1521, in both Guam and the Philippines. Lacking a name for the fruit, the ship's historian Antonio Pigafetta described them as "figs more than one palm long." Bananas were introduced to South America by Portuguese sailors who brought them from West Africa in the 16th century. Southeast Asian banana cultivars, as well as abaca grown for fibers, were introduced to North and Central America by the Spanish from the Philippines, via the Manila galleons. Plantation cultivation In the 15th and 16th centuries, Portuguese colonists started banana plantations in the Atlantic Islands, Brazil, and western Africa. North Americans began consuming bananas on a small scale at very high prices shortly after the Civil War, though it was only in the 1880s that the food became more widespread. As late as the Victorian Era, bananas were not widely known in Europe, although they were available. The earliest modern plantations originated in Jamaica and the related Western Caribbean Zone, including most of Central America. Plantation cultivation combined the modern transportation networks of steamships and railroads with the development of refrigeration, which allowed more time between harvesting and ripening. North American shippers like Lorenzo Dow Baker and Andrew Preston, the founders of the Boston Fruit Company, started this process in the 1870s, with the participation of railroad builders like Minor C. Keith. This development led to multinational giants like Chiquita and Dole. These companies were monopolistic, vertically integrated (controlling growing, processing, shipping and marketing) and usually used political manipulation to build enclave economies (internally self-sufficient, virtually tax exempt, and export-oriented, contributing little to the host economy). Their political maneuvers, which gave rise to the term banana republic for states such as Honduras and Guatemala, included working with local elites and their rivalries to influence politics, or playing on the international interests of the United States, especially during the Cold War, to keep the political climate favorable to their interests. Small-scale cultivation The vast majority of the world's bananas are cultivated for family consumption or for sale on local markets. They are grown in large quantities in India, while many other Asian and African countries host numerous small-scale banana growers who sell at least some of their crop. Peasants with smallholdings of 1 to 2 acres in the Caribbean produce bananas for the world market, often alongside other crops. In many tropical countries, the main cultivars produce green (unripe) bananas used for cooking. Because bananas and plantains produce fruit year-round, they provide a valuable food source during the hunger season between harvests of other crops, and are thus important for global food security. Modern cultivation Bananas are propagated asexually from offshoots. The plant is allowed to produce two shoots at a time: a larger one for immediate fruiting and a smaller "sucker" or "follower" to produce fruit in 6–8 months. As a non-seasonal crop, bananas are available fresh year-round. They are grown in some 135 countries.
Cavendish In global commerce in 2009, by far the most important cultivars were the Cavendish bananas of the triploid Musa acuminata AAA group. Disease is threatening the production of the Cavendish banana worldwide. It is unclear if any existing cultivar can replace Cavendish bananas, so various hybridisation and genetic engineering programs are attempting to create a disease-resistant, mass-market banana. One such strain that has emerged is the Taiwanese Cavendish, or Formosana. Ripening Export bananas are picked green and ripened in special rooms upon arrival in the destination country. These rooms are air-tight and filled with ethylene gas to induce ripening, mimicking the plant's normal production of this gas as a ripening hormone. Ethylene stimulates the formation of amylase, an enzyme that breaks down starch into sugar, influencing the taste. Ethylene also signals the production of pectinase, an enzyme which breaks down the pectin between the cells of the banana, causing the banana to soften as it ripens. The vivid yellow color many consumers in temperate climates associate with bananas is caused by ripening around , and does not occur in Cavendish bananas ripened at tropical temperatures (over ), which leaves them green. Storage and transport Bananas are transported over long distances from the tropics to world markets. To obtain maximum shelf life, the fruit is harvested before it is fully mature. It requires careful handling, rapid transport to ports, cooling, and refrigerated shipping. The goal is to prevent the bananas from producing their natural ripening agent, ethylene. This technology allows storage and transport for 3–4 weeks at . On arrival, bananas are held at about and treated with a low concentration of ethylene. After a few days, the fruit begins to ripen and is distributed for final sale. Ripe bananas can be held for a few days at home. If bananas are too green, they can be put in a brown paper bag with an apple or tomato overnight to speed up the ripening process. Sustainability The excessive use of fertilizers contributes greatly to eutrophication in streams and lakes, harming aquatic life, while expanding banana production has led to deforestation. As soil nutrients are depleted, more forest is cleared for plantations. This causes soil erosion and increases the frequency of flooding. Voluntary sustainability standards such as Rainforest Alliance and Fairtrade are being used to address some of these issues. Banana production certified in this way grew rapidly at the start of the 21st century to represent 36% of banana exports by 2016. However, such standards are applied mainly in countries which focus on the export market, such as Colombia, Costa Rica, Ecuador, and Guatemala; worldwide they cover only 8–10% of production. Breeding Mutation breeding can be used in this crop. Aneuploidy is a source of significant variation in allotriploid varieties; for example, it can be a source of TR4 resistance. Lab protocols have been devised to screen for such aberrations and for possible resulting disease resistances. Wild Musa spp. provide useful resistance genetics and are vital to breeding for TR4 resistance, as shown by resistance introgressed from wild relatives. Bananas form a hybrid-polyploid complex; hybrids can be diploid, triploid, tetraploid, or pentaploid, i.e. they may have 2, 3, 4, or 5 sets of chromosomes. This makes them difficult to breed, as hybrids are often sterile, in addition to the challenge of breeding seedless (parthenocarpic) varieties.
The Honduran Foundation for Agricultural Research has bred a seedless banana that is resistant to both Panama disease and black Sigatoka disease. The team made use of the fact that "seedless" varieties do, rarely, produce seeds; they obtained around fifteen seeds from some 30,000 cultivated plants, pollinated by hand with pollen from wild Asian bananas. Production and export Bananas are exported in greater volume, and to a greater value, than any other fruit. In 2022, world production of bananas and plantains combined was 179 million tonnes, led by India and China with a combined total of 26% of global production. Other major producers were Uganda, Indonesia, the Philippines, Nigeria and Ecuador. As reported for 2013, total world exports were 20 million tonnes of bananas and 859,000 tonnes of plantains. Ecuador and the Philippines were the leading exporters with 5.4 and 3.3 million tonnes, respectively, and the Dominican Republic was the leading exporter of plantains with 210,350 tonnes. Pests Bananas are damaged by a variety of pests, especially nematodes and insects. Nematodes Banana roots are subject to damage from multiple species of parasitic nematodes. Radopholus similis causes nematode root rot, the most serious nematode disease of bananas in economic terms. Root-knot is the result of infection by species of Meloidogyne, while root-lesion is caused by species of Pratylenchus, and spiral nematode root damage is the result of infection by Helicotylenchus species. Insects Among the main insect pests of banana cultivation are two beetles that cause substantial economic losses: the banana borer Cosmopolites sordidus and the banana stem weevil Odoiporus longicollis. Other significant pests include aphids and scarring beetles. Diseases Although in no danger of outright extinction, bananas of the Cavendish group, which dominate the global market, are under threat. There is a need to enrich banana biodiversity by producing diverse new banana varieties, not just focusing on the Cavendish. Its predecessor 'Gros Michel', discovered in the 1820s, was similarly dominant but had to be replaced after widespread infections of Panama disease. Monocropping of Cavendish similarly leaves it susceptible to disease, and so threatens both commercial cultivation and small-scale subsistence farming. In genetic data gathered from hundreds of bananas, the botanist Julie Sardos has found traces of several wild banana ancestors currently unknown to science, whose genes could provide a means of defense against banana crop diseases. Some commentators have remarked that the variants which could replace what much of the world considers a "typical banana" are so different that most people would not consider them the same fruit, and blame the decline of the banana on monogenetic cultivation driven by short-term commercial motives. Overall, fungal diseases are disproportionately important to small island developing states. Panama disease Panama disease is caused by a Fusarium soil fungus, which enters the plants through the roots and travels with water into the trunk and leaves, producing gels and gums that cut off the flow of water and nutrients, causing the plant to wilt and exposing the rest of the plant to lethal amounts of sunlight. Prior to 1960, almost all commercial banana production centered on the Gros Michel cultivar, which was highly susceptible. Cavendish was chosen as the replacement for Gros Michel because, among resistant cultivars, it produces the highest quality fruit.
However, it requires more care during shipping, and its quality compared to Gros Michel is debated. Fusarium wilt TR4 Fusarium wilt TR4, a reinvigorated strain of Panama disease, was discovered in 1993. This virulent form of Fusarium wilt has destroyed Cavendish plantations in several southeast Asian countries and spread to Australia and India. As the soil-based fungi can easily be carried on boots, clothing, or tools, the wilt spread to the Americas despite years of preventive efforts. Without genetic diversity, Cavendish is highly susceptible to TR4, and the disease endangers its commercial production worldwide. The only known defense against TR4 is genetic resistance. This is conferred either by RGA2, a gene isolated from a TR4-resistant diploid banana, or by the nematode-derived Ced9, and may be achieved by genetic modification. Black Sigatoka Black Sigatoka is a fungal leaf spot disease first observed in Fiji in 1963 or 1964. It is caused by the ascomycete Mycosphaerella fijiensis. The disease, also called black leaf streak, has spread to banana plantations throughout the tropics from infected banana leaves used as packing material. It affects all main cultivars of bananas and plantains (including the Cavendish cultivars), impeding photosynthesis by blackening parts of the leaves, eventually killing the entire leaf. With the plant starved for energy, fruit production falls by 50% or more, and the bananas that do grow ripen prematurely, making them unsuitable for export. The fungus has shown ever-increasing resistance to treatment; spraying with fungicides may be required as often as 50 times a year. Better strategies, with integrated pest management, are needed. Banana bunchy top virus Banana bunchy top virus is a plant virus of the genus Babuvirus, family Nanoviridae, affecting Musa spp. (including banana, abaca, plantain and ornamental bananas) and Ensete spp. in the family Musaceae. Banana bunchy top disease symptoms include dark green streaks of variable length in leaf veins, midribs and petioles. Leaves become short and stunted as the disease progresses, becoming 'bunched' at the apex of the plant. Infected plants may produce no fruit, or the fruit bunch may not emerge from the pseudostem. The virus is transmitted by the banana aphid Pentalonia nigronervosa and is widespread in Asia (including Southeast Asia, the Philippines and Taiwan), Oceania and parts of Africa. There is no cure, but the disease can be effectively controlled by the eradication of diseased plants and the use of virus-free planting material. No resistant cultivars have been found, but varietal differences in susceptibility have been reported; the commercially important Cavendish subgroup is severely affected. Banana bacterial wilt Banana bacterial wilt is a bacterial disease caused by Xanthomonas campestris pv. musacearum. First identified on a close relative of bananas, Ensete ventricosum, in Ethiopia in the 1960s, the disease was first seen in Uganda in 2001, affecting all banana cultivars. Since then it has been diagnosed in Central and East Africa, including the banana-growing regions of Rwanda, the Democratic Republic of the Congo, Tanzania, Kenya, Burundi, and Uganda. Conservation of genetic diversity Given the narrow range of genetic diversity present in bananas, and the many biotic threats (pests and diseases) and abiotic stresses (such as drought) they face, conservation of the full spectrum of banana genetic resources is ongoing.
In 2024, the economist Pascal Liu of the FAO described the impact of global warming as an "enormous threat" to the world supply of bananas. Banana germplasm is conserved in many national and regional gene banks, and at the world's largest banana collection, the International Musa Germplasm Transit Centre, managed by Bioversity International and hosted at KU Leuven in Belgium. Since Musa cultivars are mostly seedless, they are conserved by three main methods: in vivo (planted in field collections), in vitro (as plantlets in test tubes within a controlled environment), and by cryopreservation (meristems conserved in liquid nitrogen at −196 °C). Genes from wild banana species are conserved as DNA and as cryopreserved pollen. Seeds from wild species are sometimes conserved, although less commonly, as they are difficult to regenerate. In addition, bananas and their crop wild relatives are conserved in situ, in the wild natural habitats where they evolved and continue to evolve. Diversity is also conserved in farmers' fields, where continuous cultivation, adaptation and improvement of cultivars is often carried out by small-scale farmers growing traditional local cultivars. Nutrition A raw banana (not including the peel) is 75% water, 23% carbohydrates, 1% protein, and contains negligible fat. A reference amount supplies 89 calories, 24% of the Daily Value of vitamin B6, and moderate amounts of vitamin C, manganese, potassium, and dietary fiber, with no other micronutrients in significant content (table). Although bananas are commonly thought to be exceptionally rich in potassium, their actual potassium content is not high per typical food serving, providing only 12% of the Daily Value for potassium (table); among fruits, vegetables, legumes, and many other foods, bananas rank only in the middle for potassium content. Uses Culinary Fruit Bananas are a staple starch for many tropical populations. Depending upon cultivar and ripeness, the flesh can vary in taste from starchy to sweet, and in texture from firm to mushy. Both the skin and inner part can be eaten raw or cooked. The primary component of the aroma of fresh bananas is isoamyl acetate (also known as banana oil), which, along with several other compounds such as butyl acetate and isobutyl acetate, is a significant contributor to banana flavor. Plantains are eaten cooked, often as fritters. Pisang goreng, bananas fried with batter, is a popular street food in Southeast Asia. Bananas feature in Philippine cuisine, with desserts like maruya banana fritters. Bananas can be made into fruit preserves. Banana chips are a snack produced from sliced and fried bananas, such as in Kerala. Dried bananas are ground to make banana flour. In Africa, matoke bananas are cooked in a sauce with meat and vegetables such as peanuts or beans to make the breakfast dish katogo. In Western countries, bananas are used to make desserts such as banana bread. Flowers Banana flowers (also called "banana hearts" or "banana blossoms") are used as a vegetable in South Asian and Southeast Asian cuisine. The flavor resembles that of artichoke; as with artichokes, both the fleshy part of the bracts and the heart are edible. Leaf Banana leaves are large, flexible, and waterproof. While generally too tough to be eaten, they are often used as ecologically friendly disposable food containers or as "plates" in South Asia and several Southeast Asian countries.
In Indonesian cuisine, banana leaf is employed in cooking methods like pepes and botok; banana leaf packages containing food ingredients and spices are steamed, boiled, or grilled on charcoal. Certain types of tamales are wrapped in banana leaves instead of corn husks. When used in this way for steaming or grilling, the banana leaves protect the food ingredients from burning and add a subtle sweet flavor. In South India, it is customary to serve traditional food on a banana leaf. In Tamil Nadu (India), dried banana leaves are used to pack food and to make cups to hold liquid food items. Trunk The tender core of the banana plant's trunk is also used in South Asian and Southeast Asian cuisine. Examples include the Burmese dish mohinga, and the Filipino dishes inubaran and kadyos, manok, kag ubad. Paper and textiles Banana fiber harvested from the pseudostems and leaves has been used for textiles in Asia since at least the 13th century. Both fruit-bearing and fibrous banana species have been used. In the Japanese system Kijōka-bashōfu, leaves and shoots are cut from the plant periodically to ensure softness. Harvested shoots are first boiled in lye to prepare fibers for yarn-making. These banana shoots produce fibers of varying degrees of softness, yielding yarns and textiles with differing qualities for specific uses. For example, the outermost fibers of the shoots are the coarsest, and are suitable for tablecloths, while the softest innermost fibers are desirable for kimono and kamishimo. This traditional Japanese cloth-making process requires many steps, all performed by hand. Banana paper can be made either from the bark of the banana plant, mainly for artistic purposes, or from the fibers of the stem and non-usable fruits. The paper may be hand-made or industrially processed. Other uses The large leaves of bananas are locally used as umbrellas. Banana peel may be capable of extracting heavy metal contamination from river water, similar to other purification materials. Waste bananas can be used to feed livestock. Like all potassium-containing living matter, bananas emit low levels of radioactivity from the naturally occurring potassium-40 (K-40) isotope. The banana equivalent dose of radiation was developed in 1995 as a simple teaching tool to educate the public about the natural, small amount of K-40 radiation occurring in everyone and in common foods. Potential allergic reaction Individuals with a latex allergy may experience a reaction to handling or eating bananas. Cultural roles Arts The Edo period poet Matsuo Bashō is named after the Japanese word 芭蕉 (bashō) for the Japanese banana. The bashō planted in his garden by a grateful student became a source of inspiration for his poetry, as well as a symbol of his life and home. The song "Yes! We Have No Bananas" was written by Frank Silver and Irving Cohn and originally released in 1923; for many decades, it was the best-selling sheet music in history. Since then the song has been rerecorded several times and has been particularly popular during banana shortages. A person slipping on a banana peel has been a staple of physical comedy for generations. An American comedy recording from 1910 features a popular character of the time, "Uncle Josh", claiming to describe his own such incident. The banana's suggestively phallic shape has been exploited in artworks from Giorgio de Chirico's 1913 painting The Uncertainty of the Poet onwards.
In 2019, an exhibition of Natalia LL's video and set of photographs showing a woman "sucking on a banana" at the Warsaw National Museum was taken down and the museum's director reprimanded. The cover artwork for the 1967 debut album of The Velvet Underground features a banana made by Andy Warhol. On the original vinyl LP version, the design allowed the listener to "peel" this banana to find a pink, peeled banana on the inside. In 1989, the feminist Guerrilla Girls made a screenprint with two bananas, intentionally reminiscent of Warhol's, arranged to form a "0" to answer the question in the artwork, "How many works by women artists were in the Andy Warhol and Tremaine auctions at Sotheby's?". The Italian artist Maurizio Cattelan created a 2019 concept art piece titled Comedian, consisting of a banana taped to a wall with silver duct tape. The piece was exhibited briefly at Art Basel in Miami before being removed from the exhibition and eaten, without permission, in another artistic stunt titled Hungry Artist by the New York artist David Datuna. Religion and folklore In India, bananas play a prominent part in many festivals and occasions of Hindus. In South Indian weddings, particularly Tamil weddings, banana trees are tied in pairs to form an arch as a blessing to the couple for a long-lasting, useful life. In Thailand, it is believed that a certain type of banana plant may be inhabited by a spirit, Nang Tani, a type of ghost related to trees and similar plants that manifests itself as a young woman. People often tie a length of colored satin cloth around the pseudostem of such banana plants. In Malay folklore, the ghost known as Pontianak is associated with banana plants (pokok pisang), and its spirit is said to reside in them during the day. Racial signifier In European, British, and Australian sport, throwing a banana at a member of an opposing team has long been used as a form of racial abuse. The act, which was commonplace in England in the 1980s, is meant to taunt players of Black African ancestry by equating them to apes or monkeys.
Cooking banana
Cooking bananas are a group of banana cultivars in the genus Musa whose fruits are generally used in cooking. They are typically not eaten raw and are generally starchy. Many cooking bananas are referred to as plantains (/ˈplæntɪn/, /plænˈteɪn/, /ˈplɑːntɪn/) or 'green bananas'. In botanical usage, the term "plantain" is used only for true plantains, while other starchy cultivars used for cooking are called "cooking bananas". True plantains are cooking cultivars belonging to the AAB group, while cooking bananas are any cooking cultivar belonging to the AAB, AAA, ABB, or BBB groups. The currently accepted scientific name for all such cultivars in these groups is Musa × paradisiaca. Fe'i bananas (Musa × troglodytarum) from the Pacific Islands are often eaten roasted or boiled, and are thus informally referred to as "mountain plantains", but they do not belong to the species from which nearly all modern banana cultivars are descended. Cooking bananas are a major food staple in West and Central Africa, the Caribbean islands, Central America, and northern South America. Members of the genus Musa are indigenous to the tropical regions of Southeast Asia and Oceania. Bananas fruit all year round, making them a reliable all-season staple food. Cooking bananas are treated as a starchy fruit with a relatively neutral flavor and soft texture when cooked. They may be eaten raw, but are most commonly prepared fried, boiled, or processed into flour or dough. Description Plantains contain more starch and less sugar than dessert bananas, so they are usually cooked or otherwise processed before being eaten. They are typically boiled or fried when eaten green, and when processed, they can be made into flour and turned into baked products such as cakes, bread and pancakes. Green plantains can also be boiled and pureed and then used as thickeners for soups. The pulp of green plantain is typically hard, with the peel often so stiff that it must be cut with a knife to be removed. Mature, yellow plantains can be peeled like typical dessert bananas; the pulp is softer than in immature, green fruit, and some of the starch has been converted to sugar. They can be eaten raw, but are not as flavourful as dessert bananas, so they are usually cooked. When yellow plantains are fried, they tend to caramelize, turning a golden-brown color. They can also be boiled, baked, microwaved, or grilled over charcoal, either peeled or unpeeled. Plantains are a staple food in the tropical regions of the world, ranking as the tenth most important staple food in the world. As a staple, plantains are treated in much the same way as potatoes, with a similar neutral flavour and texture when the unripe fruit is cooked by steaming, boiling, or frying. Since they fruit all year, plantains are a reliable staple food, particularly in developing countries with inadequate food storage, preservation, and transportation technologies. In Africa, plantains and bananas provide more than 25 percent of the caloric requirements for over 70 million people. Plantain plantations are vulnerable to destruction by hurricanes, because Musa spp. do not withstand high winds well. An average plantain provides about of food energy and is a good source of potassium and dietary fiber. The sap from the fruit peel, as well as from the entire plant, can stain clothing and hands, and can be difficult to remove.
Taxonomy Linnaeus originally classified bananas into two species based only on their uses as food: Musa paradisiaca for plantains and Musa sapientum for dessert bananas. Both are now known to be hybrids between the species Musa acuminata (A genome) and Musa balbisiana (B genome). The earlier published name, Musa × paradisiaca, is now used as the scientific name for all such hybrids. Most modern plantains are sterile triploids belonging to the AAB Group, sometimes known as the "Plantain group". Other economically important cooking banana groups include the East African Highland bananas (Mutika/Lujugira subgroup) of the AAA Group and the Pacific plantains (including the Popoulo, Maoli, and Iholena subgroups), also of the AAB Group. Dishes Fried Pisang goreng ("fried banana" in Indonesian and Malay) is a plantain snack deep-fried in coconut oil. Pisang goreng can be coated in flour batter or fried without batter. It is a snack food mostly found in Indonesia, Malaysia, Singapore and Brunei. Ethakka appam, pazham (banana) boli or pazham pori are terms used for fried plantain in the state of Kerala, India. The plantain is usually dipped in a sweetened batter of rice and white flour and then fried in coconut or vegetable oil, similar to pisang goreng. It is also known as bajji in southern Indian states, where it is typically served as a savory fast food. Aritikaya kura or vepudu are terms used for deep-fried or cooked plantain dishes in the state of Andhra Pradesh, India, where plantain is known as "raw banana" or aritikaya. The dish is usually served with steamed white rice, may be accompanied by plain curd or yogurt, and is a favourite at weddings and other occasions. In the Philippines, fried bananas are also served with arroz a la cubana and are frequently characterized as one of its defining ingredients. Plantains are used in the Ivory Coast dish aloco as the main ingredient. Fried plantains are covered in an onion-tomato sauce, often with a grilled fish between the plantains and sauce. Boli or bole is the term used for roasted plantain in Nigeria. The plantain is usually grilled and served with roasted fish, ground peanuts and a hot palm oil sauce. It is a dish native to the Yoruba people of Western Nigeria, and is popular among the working class as an inexpensive midday meal. Plantain is popular in West and Central Africa, especially Cameroon, Democratic Republic of Congo, Bénin, Ghana and Nigeria; when ripe plantain is fried, it is generally called dodo ("dough-dough"). The ripe plantain is usually sliced diagonally for a large oval shape, then fried in oil to a golden brown color. The diagonal slice maximizes the surface area, allowing the plantain to cook evenly. Fried plantain can be eaten as is, or served with stew or sauce. In Ikire, a town in Osun State in southwestern Nigeria, there is a special way of preparing fried plantain known as Dodo Ikire. This variation of dodo (fried plantain) is made from overripe plantain, chopped into small pieces, sprinkled with chili pepper and then fried in very hot palm oil until the pieces turn blackish. The fried plantains are then stuffed carefully into a plastic funnel and pressed with a wooden pestle so that they take on a conical shape when removed. In Ghana, the dish is called kelewele and can be found as a snack sold by street vendors. Though sweeter and spicier variations exist, kelewele is often flavored with nutmeg, chili powder, ginger and salt.
In the Western hemisphere, tostones (also known as bannann peze in Haiti, tachinos or chatinos in Cuba, and patacones in Colombia, Costa Rica, Ecuador, Honduras, Panama, Peru and Venezuela) are twice-fried plantain fritters, often served as a side dish, appetizer or snack. Plantains are sliced in long pieces and fried in oil. The segments are then removed and individually smashed down to about half their original height. Finally, the pieces are fried again and then seasoned, often with salt. In some countries, such as Cuba, Puerto Rico and the Dominican Republic, the tostones are dipped in a Creole sauce of chicken, pork, beef, or shrimp before eating. In Haiti, bannann peze is commonly served with pikliz, a slaw-like condiment made with cabbage, onions, carrots and scotch bonnet peppers. In Nicaragua, tostones are typically served with fried cheese (tostones con queso) and sometimes with refried beans. While the name tostones is used to describe this food when prepared at home, in some South American countries the word also describes plantain chips, which are typically purchased from a store. In western Venezuela, much of Colombia and the Peruvian Amazon, patacones are a frequently seen variation of tostones. Plantains are sliced in long pieces and fried in oil, then used to make sandwiches with pork, beef, chicken, vegetables and ketchup. They can be made with unripe (patacón verde) or ripe (patacón amarillo) plantains. Chifles is the Spanish term used in Peru and Ecuador for fried green plantains sliced thick; it is also used to describe plantain chips, which are sliced thinner. In Nicaragua, thin plantain slices cut the long way are called tajadas. They are commonly served alongside many dishes, including fritanga, and sold in bags by themselves. In Honduras, Venezuela and central Colombia, fried ripened plantain slices are known as tajadas. They are customary in most typical meals, such as the Venezuelan pabellón criollo. The host or waiter may also offer them as barandas ("guard rails") in common slang, as the long slices are typically placed on the sides of a full dish and look like rails. Some variations include adding honey or sugar and frying the slices in butter, to obtain a golden caramel; the result has a sweeter taste and a characteristic pleasant smell. The same slices are known as amarillos and fritos maduros in Puerto Rico, Cuba, and the Dominican Republic respectively. In Panama, tajadas are eaten daily together with steamed rice, meat and beans, making up an essential part of the Panamanian diet, as in Honduras. By contrast, in Nicaragua, tajadas are fried unripened plantain slices, traditionally served at a fritanga, with fried pork or carne asada, or on their own on green banana leaves, either with a cabbage salad or fresh or fried cheese. On Colombia's Caribbean coast, tajadas of fried green plantain are consumed along with grilled meats, and are the dietary equivalent of the French-fried potatoes/chips of Europe and North America. After removing the skin, maduro can be sliced (between thick) and pan-fried in oil until golden brown, or according to preference. In the Dominican Republic, Ecuador, Colombia, Honduras (where they are usually eaten with the native sour cream) and Venezuela, they are also eaten baked in the oven (sometimes with cinnamon). In Puerto Rico, baked plátanos maduros are usually eaten for breakfast and served with eggs (mainly an omelet with cheese), chorizo or bacon; only salt is added to green plantains.
Tacacho is a roasted plantain dish of Amazonian cuisine in Peru. It is usually served con cecina, with bits of cured pork. In Venezuela, a yo-yo is a traditional dish made of two short slices of fried ripened plantain (see tajada) placed on top of each other, with local soft white cheese in the middle (in a sandwich-like fashion) and held together with toothpicks. The arrangement is dipped in beaten eggs and fried again until the cheese melts and the yo-yo acquires a deep golden hue. They are served as sides or entrees. In Puerto Rico, fried plantains are served in a variety of ways: as side dishes, fast foods, and main courses. An alternative to tostones are arañitas ("little spiders"). The name comes from the grated green and yellow plantain pieces forming little legs that stick out of the fritter itself, which ends up looking like a prickly spider on a plate. Alcapurrias are a traditional snack made from a masa dough of grated green banana and yautía, seasoned with lard and annatto, and stuffed with picadillo. Alcapurrias de plátano have additional grated plantain added to the masa. Mofongo is a beloved dish on the island; celebrating a blend of cultures, it is one of Puerto Rico's most important dishes. Plantains are fried once and mashed with garlic, fat (butter, lard or olive oil), chicharrón or bacon, and broth; the mash is then formed into a ball and eaten with other meats, soup, vegetables, or alone. Puerto Rican piononos are sweet and savory treats made with a combination of fried yellow plantains, cheese, picadillo, and beaten eggs. The result is sweet plantain cups stuffed with a cheese, ground beef and fluffy egg filling. Relleno de plátano is the sweet-plantain version of papa rellena, a very popular street food also sold in cuchifritos. Boiled Eto is a traditional Ghanaian dish made from boiled and mashed yam or plantain, typically savored with boiled eggs, groundnut (peanuts) and sliced avocado. For the plantain option, called 'Boodie eto', the plantain can be used unripe, slightly ripe or fully ripe. Culturally, eto was fed to a bride on the day of her marriage, but it is now a popular dish enjoyed outside of special occasions as well. A traditional mangú from the Dominican Republic consists of peeled and boiled green plantains, mashed with hot water to reach a consistency slightly stiffer than mashed potatoes. It is traditionally eaten at breakfast, topped with sautéed red onions in apple cider vinegar and accompanied by fried eggs, fried cheese or fried bologna sausage, known as Dominican salami. Plantain porridge is also a common dish throughout the Caribbean, in which cooking bananas are boiled with milk, cinnamon, and nutmeg to form a thick porridge typically served at breakfast. In Uganda, cooking bananas are referred to as matooke or matoke, which is also the name of a cooking banana stew that is widely prepared in Uganda, Tanzania, Rwanda and eastern Congo. The cooking bananas (specifically East African Highland bananas) are peeled, wrapped in the plant's leaves and set in a cooking pot (a sufuria) on the stalks that have been removed from the leaves. The pot is then placed on a charcoal fire and the matoke is steamed for a few hours. While uncooked, the matoke is white and fairly hard, but cooking turns it soft and yellow. The matoke is then mashed while still wrapped in the leaves and is served with a sauce made of vegetables, ground peanuts, or some type of meat such as goat or beef. Cayeye, also called Mote de Guineo, is a traditional Colombian dish from the Caribbean coast of the country.
Cayeye is made by cooking small green bananas or plantains in water, then mashing and mixing them with refrito, made with onions, garlic, red bell pepper, tomato and achiote. Cayeye is usually served for breakfast with fresh grated Colombian cheese (queso costeño) and fried fish, shrimp, crab, or beef. Most popular is cayeye with fresh cheese, avocado and fried egg on top. Funche criollo is a dish served for breakfast or dinner, with varying ingredients. The breakfast funche is made with coconut milk, butter, milk, sugar, cornmeal and sweet plantains, and is topped with cinnamon, honey, nuts and fruit. The typical dinner version includes green or yellow plantains boiled in broth with butter and sofrito, then mashed with taro, cornmeal, or yams. This is a typical dish from Puerto Rico and can be traced back to the Taínos and the African slave trade. As a dough In Puerto Rico, mofongo is made by mashing fried plantains in a mortar with chicharrón or bacon, garlic, olive oil and stock. Any meat, fish, shellfish, vegetables, spices, or herbs can also be added. The resulting mixture is formed into cylinders the size of about two fists and eaten warm, usually with chicken broth. Mofongo relleno is topped with creole sauce rather than served with chicken broth. Creole sauce may contain stewed beef, chicken or seafood; it is poured into a center crater, formed with the serving spoon, in the mofongo. Grated green bananas and yautías are also used to form masa, a common ingredient for dishes such as alcapurria, a type of savory fritter. Fufu de platano is a traditional and very popular lunch dish in Cuba, essentially akin to the Puerto Rican mofongo. It is a fufu made by boiling the plantains in water and mashing them with a fork. The fufu is then mixed with chicken stock and sofrito, a sauce made from lard, garlic, onions, pepper, tomato sauce, a touch of vinegar and cumin. The texture of Cuban fufu is similar to that of the mofongo consumed in Puerto Rico, but it is not formed into a ball or fried. Fufu is also a common centuries-old traditional dish made in Côte d'Ivoire, Ghana, Nigeria, Cameroon and other West and Central African countries. It is made in a similar fashion to the Cuban fufu, but is pounded, and has a thick paste, putty-like texture which is then formed into a ball. West African fufu is sometimes made separately with cassava or yams, or with plantains combined with cassava. Other dishes While cooking bananas are starchier and often used in savory dishes as a result, many Philippine desserts also use cooking bananas as a primary ingredient, such as: Banana cue - fried ripe saba bananas coated with caramelized sugar. Binignit - a dessert soup of glutinous rice in coconut milk with ripe saba bananas as one of the main ingredients. Ginanggang - grilled saba bananas coated with margarine and sugar. Maruya - banana fritters made from saba bananas and batter. Minatamis na saging - saba bananas simmered in a sweet syrup; rarely eaten alone, it is instead used as an ingredient in other desserts, notably halo-halo. Pritong saging - fried ripe saba bananas. Pinasugbo - thinly sliced bananas coated with caramelized sugar and sesame seeds and fried until crunchy. Saba con hielo - a shaved ice dessert which primarily uses minatamis na saging and milk. Turon - a type of dessert lumpia (spring roll) made from ripe saba bananas wrapped in a thin crepe and fried. In Ecuador, plantain is boiled, crushed, scrambled, and fried into majado.
This dish is typically served with a cup of coffee and bistek, fish, or grated cheese. It is a popular breakfast dish. Majado is also used as a base to prepare tigrillo and bolones. To prepare tigrillo, majado is scrambled with pork rind, egg, cheese, green onions, parsley, and cilantro. To prepare bolones, majado is scrambled with cheese, pork rind, or a mixture of both. The resulting mixture is then shaped into a sphere which is later deep-fried. Both tigrillo and bolones are typically served with a cup of coffee. Other preparations Chips After removing the skin, the unripe fruit can be sliced thin and deep fried in hot oil to produce chips. This thin preparation of plantain is known as tostones, patacones or plataninas in some Central American and South American countries, platanutres in Puerto Rico, mariquitas or chicharritas in Cuba and chifles in Ecuador and Peru. In Cuba, the Dominican Republic, Guatemala, Puerto Rico and Venezuela, tostones instead refers to thicker, twice-fried patties (see below). In Cuba, plantain chips are called mariquitas. They are sliced thinly and fried in oil until golden colored. They are popular appetizers served with a main dish. In Colombia they are known as platanitos and are eaten with suero atollabuey as a snack. Tostada refers to a green, unripe plantain which has been cut into sections, fried, flattened, fried again, and salted. These tostadas are often served as a side dish or a snack. They are also known as tostones or patacones in many Latin American countries. In Honduras, banana chips are called tajadas, which may be sliced vertically to create a variation known as plantain strips. Chips fried in coconut oil and sprinkled with salt, called upperi or kaya varuthathu, are a snack in the South Indian state of Kerala. They are an important item in sadya, a vegetarian feast prepared during festive occasions in Kerala. The chips are typically labeled "plantain chips" when they are made of green plantains that taste starchy, like potato chips. In Tamil Nadu, a thin variety made from green plantains is used to make chips seasoned with salt, chili powder and asafoetida. In the western/central Indian language Marathi, the plantain is called rajeli kela (figuratively meaning "king-sized" banana), and is often used to make fried chips. Dried flour In south-western Nigeria, unripe plantains are also dried and ground into a flour referred to as Elubo Ogede. It is considered a healthy and nutritious food among the Yoruba. In southern India, dried plantain powder is mixed with a little fennel seed powder and boiled in milk or water to make baby food, fed to babies until they are one year old. Drink In Peru, plantains are boiled and blended with water, spices, and sugar to make chapo. In Kerala, ripe plantains are boiled with sago, coconut milk, sugar and spices to make a pudding. Ketchup The Philippines uniquely processes saba bananas into banana ketchup, originally invented during World War II as a substitute for tomato ketchup. Nutrition Raw plantain is 32% carbohydrates (including 2% dietary fiber and 15% sugars), 1% protein, 0.4% fat, and 65% water, supplying food energy in a reference serving (table). Raw plantain is an excellent source (20% or higher of the Daily Value, DV) of vitamin B6 (23% DV) and vitamin C (22% DV), and a good source (10–19% DV) of magnesium and potassium (table). Containing little beta-carotene (457 micrograms per 100 grams), plantain is not a good source of vitamin A (table). 
Comparison to other staple foods The following table shows the nutrient content of raw plantain and other major staple foods, each in raw form on a dry weight basis to account for their different water contents. Allergies Plantain and banana allergies occur with typical characteristics of food allergy or latex-fruit syndrome, including itching and mild swelling of the lips, tongue, palate or throat, skin rash, stomach complaints or anaphylactic shock. Among the more than 1,000 proteins identified in Musa species are numerous previously described protein allergens.
Biology and health sciences
Monocots
null
38954
https://en.wikipedia.org/wiki/Inclined%20plane
Inclined plane
An inclined plane, also known as a ramp, is a flat supporting surface tilted at an angle to the horizontal, with one end higher than the other, used as an aid for raising or lowering a load. The inclined plane is one of the six classical simple machines defined by Renaissance scientists. Inclined planes are used to move heavy loads over vertical obstacles. Examples vary from a ramp used to load goods into a truck, to a person walking up a pedestrian ramp, to an automobile or railroad train climbing a grade. Moving an object up an inclined plane requires less force than lifting it straight up, at a cost of an increase in the distance moved. The mechanical advantage of an inclined plane, the factor by which the force is reduced, is equal to the ratio of the length of the sloped surface to the height it spans. Owing to conservation of energy, the same amount of mechanical energy (work) is required to lift a given object by a given vertical distance, disregarding losses from friction, but the inclined plane allows the same work to be done with a smaller force exerted over a greater distance. The angle of friction, also sometimes called the angle of repose, is the maximum angle at which a load can rest motionless on an inclined plane due to friction without sliding down. This angle is equal to the arctangent of the coefficient of static friction μs between the surfaces. Two other simple machines are often considered to be derived from the inclined plane. The wedge can be considered a moving inclined plane or two inclined planes connected at the base. The screw consists of a narrow inclined plane wrapped around a cylinder. The term may also refer to a specific implementation: a straight ramp cut into a steep hillside for transporting goods up and down the hill. This may include cars on rails or pulled up by a cable system, i.e., a funicular or cable railway, such as the Johnstown Inclined Plane. Uses Inclined planes are widely used in the form of loading ramps to load and unload goods on trucks, ships and planes. Wheelchair ramps are used to allow people in wheelchairs to get over vertical obstacles without exceeding their strength. Escalators and slanted conveyor belts are also forms of an inclined plane. In a funicular or cable railway, a railroad car is pulled up a steep inclined plane using cables. Inclined planes also allow heavy fragile objects, including humans, to be safely lowered down a vertical distance by using the normal force of the plane to reduce the gravitational force. Aircraft evacuation slides allow people to rapidly and safely reach the ground from the height of a passenger airliner. Other inclined planes are built into permanent structures. Roads for vehicles and railroads have inclined planes in the form of gradual slopes, ramps, and causeways to allow vehicles to surmount vertical obstacles such as hills without losing traction on the road surface. Similarly, pedestrian paths and sidewalks have gentle ramps to limit their slope, to ensure that pedestrians can keep traction. Inclined planes are also used as entertainment for people to slide down in a controlled way, in playground slides, water slides, ski slopes and skateboard parks. History Inclined planes have been used by people since prehistoric times to move heavy objects. The sloping roads and causeways built by ancient civilizations such as the Romans are examples of early inclined planes that have survived, and show that they understood the value of this device for moving things uphill. 
The heavy stones used in ancient stone structures such as Stonehenge are believed to have been moved and set in place using inclined planes made of earth, although it is hard to find evidence of such temporary building ramps. The Egyptian pyramids were constructed using inclined planes, and siege ramps enabled ancient armies to surmount fortress walls. The ancient Greeks constructed a paved ramp 6 km (3.7 miles) long, the Diolkos, to drag ships overland across the Isthmus of Corinth. However, the inclined plane was the last of the six classic simple machines to be recognised as a machine. This is probably because it is a passive and motionless device (the load is the moving part), and also because it is found in nature in the form of slopes and hills. Although they understood its use in lifting heavy objects, the ancient Greek philosophers who defined the other five simple machines did not include the inclined plane as a machine. This view persisted among a few later scientists; as late as 1826 Karl von Langsdorf wrote that an inclined plane "...is no more a machine than is the slope of a mountain". The problem of calculating the force required to push a weight up an inclined plane (its mechanical advantage) was attempted by the Greek philosophers Heron of Alexandria (c. 10 - 60 CE) and Pappus of Alexandria (c. 290 - 350 CE), but their solutions were incorrect. It was not until the Renaissance that the inclined plane was solved mathematically and classed with the other simple machines. The first correct analysis of the inclined plane appeared in the work of the 13th-century author Jordanus de Nemore; however, his solution was apparently not communicated to other philosophers of the time. Girolamo Cardano (1570) proposed the incorrect solution that the input force is proportional to the angle of the plane. Then, at the end of the 16th century, three correct solutions were published within ten years, by Michael Varro (1584), Simon Stevin (1586), and Galileo Galilei (1592). Although it was not the first, the derivation of the Flemish engineer Simon Stevin is the most well known, because of its originality and its use of a string of beads. In 1600, the Italian scientist Galileo Galilei included the inclined plane in his analysis of simple machines in Le Meccaniche ("On Mechanics"), showing its underlying similarity to the other machines as a force amplifier. The first elementary rules of sliding friction on an inclined plane were discovered by Leonardo da Vinci (1452-1519), but remained unpublished in his notebooks. They were rediscovered by Guillaume Amontons (1699) and were further developed by Charles-Augustin de Coulomb (1785). Leonhard Euler (1750) showed that the tangent of the angle of repose on an inclined plane is equal to the coefficient of friction. Terminology Slope The mechanical advantage of an inclined plane depends on its slope, meaning its gradient or steepness. The smaller the slope, the larger the mechanical advantage, and the smaller the force needed to raise a given weight. A plane's slope s is equal to the difference in height between its two ends, or "rise", divided by its horizontal length, or "run": s = rise/run. It can also be expressed by the angle θ the plane makes with the horizontal, where tan θ = rise/run. Mechanical advantage The mechanical advantage of a simple machine is defined as the ratio of the output force exerted on the load to the input force applied. For the inclined plane, the output load force is just the gravitational force of the load object on the plane, its weight Fw. 
The input force Fi is the force exerted on the object, parallel to the plane, to move it up the plane. The mechanical advantage is MA = Fw/Fi. The MA of an ideal inclined plane without friction is sometimes called the ideal mechanical advantage (IMA), while the MA when friction is included is called the actual mechanical advantage (AMA). Frictionless inclined plane If there is no friction between the object being moved and the plane, the device is called an ideal inclined plane. This condition might be approached if the object is rolling, like a barrel, or supported on wheels or casters. Due to conservation of energy, for a frictionless inclined plane the work done on the load lifting it, Wout, is equal to the work done by the input force, Win: Wout = Win. Work is defined as the force multiplied by the displacement an object moves. The work done on the load is equal to its weight multiplied by the vertical displacement it rises, which is the "rise" of the inclined plane: Wout = Fw · Rise. The input work is equal to the force Fi on the object times the diagonal length L of the inclined plane: Win = Fi · L. Substituting these values into the conservation of energy equation above and rearranging gives MA = Fw/Fi = L/Rise. To express the mechanical advantage by the angle θ of the plane, it can be seen from the diagram (above) that sin θ = Rise/L. So MA = Fw/Fi = 1/sin θ: the mechanical advantage of a frictionless inclined plane is equal to the reciprocal of the sine of the slope angle. The input force Fi from this equation is the force needed to hold the load motionless on the inclined plane, or push it up at a constant velocity. If the input force is greater than this, the load will accelerate up the plane. If the force is less, it will accelerate down the plane. Inclined plane with friction Where there is friction between the plane and the load, as for example with a heavy box being slid up a ramp, some of the work applied by the input force is dissipated as heat by friction, Wfric, so less work is done on the load. Due to conservation of energy, the sum of the output work and the frictional energy losses is equal to the input work: Win = Wout + Wfric. Therefore, more input force is required, and the mechanical advantage is lower, than if friction were not present. With friction, the load will only move if the net force parallel to the surface is greater than the frictional force Ff opposing it. The maximum friction force is given by Ff = μFn, where Fn is the normal force between the load and the plane, directed normal to the surface, and μ is the coefficient of static friction between the two surfaces, which varies with the material. When no input force is applied, if the inclination angle θ of the plane is less than some maximum value φ, the component of gravitational force parallel to the plane will be too small to overcome friction, and the load will remain motionless. This angle is called the angle of repose and depends on the composition of the surfaces, but is independent of the load weight. It is shown below that the tangent of the angle of repose equals the coefficient of static friction: tan φ = μ. With friction, there is always some range of input force for which the load is stationary, neither sliding up nor down the plane, whereas with a frictionless inclined plane there is only one particular value of input force for which the load is stationary. Analysis A load resting on an inclined plane, when considered as a free body, has three forces acting on it: The applied force, Fi, exerted on the load to move it, which acts parallel to the inclined plane. The weight of the load, Fw, which acts vertically downwards. The force of the plane on the load. 
This can be resolved into two components: the normal force Fn of the inclined plane on the load, supporting it, directed perpendicular (normal) to the surface; and the frictional force Ff of the plane on the load, acting parallel to the surface and always in a direction opposite to the motion of the object. It is equal to the normal force multiplied by the coefficient of static friction μ between the two surfaces. Using Newton's second law of motion, the load will be stationary or in steady motion if the sum of the forces on it is zero. Since the direction of the frictional force is opposite for the case of uphill and downhill motion, these two cases must be considered separately: Uphill motion: The total force on the load is toward the uphill side, so the frictional force is directed down the plane, opposing the input force. The mechanical advantage is MA = Fw/Fi = cos φ / sin(θ + φ), where φ = tan−1 μ is the angle of friction. This is the condition for impending motion up the inclined plane. If the applied force Fi is greater than given by this equation, the load will move up the plane. Downhill motion: The total force on the load is toward the downhill side, so the frictional force is directed up the plane. The mechanical advantage is MA = Fw/Fi = cos φ / sin(θ − φ). This is the condition for impending motion down the plane; if the applied force Fi is less than given in this equation, the load will slide down the plane. There are three cases: θ < φ: The mechanical advantage is negative. In the absence of applied force the load will remain motionless, and requires some negative (downhill) applied force to slide down. θ = φ: The 'angle of repose'. The mechanical advantage is infinite. With no applied force, the load will not slide, but the slightest negative (downhill) force will cause it to slide. θ > φ: The mechanical advantage is positive. In the absence of applied force the load will slide down the plane, and requires some positive (uphill) force to hold it motionless. Mechanical advantage using power The mechanical advantage of an inclined plane is the ratio of the weight of the load on the ramp to the force required to pull it up the ramp. If energy is not dissipated or stored in the movement of the load, then this mechanical advantage can be computed from the dimensions of the ramp. In order to show this, let the position r of a rail car along a ramp at an angle θ above the horizontal be given by r = R(cos θ, sin θ), where R is the distance along the ramp. The velocity of the car up the ramp is then v = v(cos θ, sin θ). Because there are no losses, the power used by force F to move the load up the ramp equals the power out, which is the vertical lift of the weight W of the load. The input power pulling the car up the ramp is Pin = F·v, and the power out is Pout = W·v sin θ (the weight times the vertical component of the velocity). Equating the power in to the power out gives the mechanical advantage as MA = W/F = 1/sin θ. The mechanical advantage of an inclined plane can also be calculated from the ratio of the length of the ramp L to its height H, because the sine of the angle of the ramp is sin θ = H/L; therefore, MA = W/F = L/H. Example: If the height of a ramp is H = 1 meter and its length is L = 5 meters, then the mechanical advantage is MA = 5, which means that a 20 lb force will lift a 100 lb load. The Liverpool Minard inclined plane has the dimensions 1804 meters by 37.50 meters, which provides a mechanical advantage of MA = 1804/37.50 ≈ 48.1, so a 100 lb tension force on the cable will lift a 4810 lb load. The grade of this incline is 2%, which means the angle θ is small enough that sin θ ≈ tan θ.
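These relations are easy to check numerically. Below is a minimal Python sketch; the function names and the sample friction coefficient μ = 0.2 are illustrative assumptions of this example, not values from the article:

import math

def ma_frictionless(length, height):
    # Ideal inclined plane: MA = L / H = 1 / sin(theta)
    return length / height

def ma_uphill(theta, mu):
    # With friction, impending uphill motion: MA = cos(phi) / sin(theta + phi),
    # where phi = arctan(mu) is the angle of friction
    phi = math.atan(mu)
    return math.cos(phi) / math.sin(theta + phi)

# Ramp example from the text: H = 1 m, L = 5 m -> MA = 5 (a 20 lb force lifts 100 lb)
print(ma_frictionless(5.0, 1.0))        # 5.0

# Liverpool Minard incline: 1804 m by 37.50 m -> MA ≈ 48.1
print(ma_frictionless(1804.0, 37.50))   # 48.106...

# Friction lowers the advantage: the same 5:1 ramp with mu = 0.2
theta = math.asin(1.0 / 5.0)
print(ma_uphill(theta, 0.2))            # ≈ 2.5, roughly half the ideal value

The last line illustrates how even moderate friction roughly halves the actual mechanical advantage of a 5:1 ramp relative to its ideal value.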
Technology
Basics_8
null
38979
https://en.wikipedia.org/wiki/Photosphere
Photosphere
The photosphere is a star's outer shell from which light is radiated. It extends into a star's surface until the plasma becomes opaque, equivalent to an optical depth of approximately 2/3; equivalently, it is the depth from which 50% of light will escape without being scattered. A photosphere is the region of a luminous object, usually a star, that is transparent to photons of certain wavelengths. Stars, except neutron stars, have no solid or liquid surface. Therefore, the photosphere is typically used to describe the Sun's or another star's visual surface. Etymology The term photosphere is derived from Ancient Greek roots, φῶς, φωτός/phos, photos meaning "light" and σφαῖρα/sphaira meaning "sphere", in reference to it being a spherical surface that is perceived to emit light. Temperature The surface of a star is defined to have a temperature given by the effective temperature in the Stefan–Boltzmann law. Different stars have photospheres at different temperatures. Composition of the Sun The Sun is composed primarily of the chemical elements hydrogen and helium; they account for 74.9% and 23.8%, respectively, of the mass of the Sun in the photosphere. All heavier elements, colloquially called metals in stellar astronomy, account for less than 2% of the mass, with oxygen (roughly 1% of the Sun's mass), carbon (0.3%), neon (0.2%), and iron (0.2%) being the most abundant. Sun's photosphere The Sun's photosphere has a temperature of roughly 4,400–6,600 K (with an effective temperature of 5,772 K), meaning human eyes perceive it as an overwhelmingly bright surface and, through a sufficiently strong neutral-density filter, as a hueless, gray surface. It has a density of about 3×10−4 kg/m3, increasing with depth. The Sun's photosphere is 100–400 kilometers thick. Photospheric phenomena In the Sun's photosphere, the most ubiquitous phenomena are granules: convection cells of plasma, each on the order of a thousand kilometers in diameter, with hot rising plasma in the center and cooler plasma falling in the narrow spaces between them. Each granule has a lifespan of only about twenty minutes, resulting in a continually shifting "boiling" pattern. Grouping the typical granules are supergranules, tens of thousands of kilometers across, with lifespans of up to 24 hours and slower flows that carry magnetic field bundles to the edges of the cells. Other magnetically related phenomena in the Sun's photosphere include sunspots and solar faculae dispersed between granules. These features are too fine to be directly observed on other stars; however, sunspots have been indirectly observed, in which case they are referred to as starspots.
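As a worked illustration of the Stefan–Boltzmann definition of surface temperature above, here is a minimal Python sketch; the nominal solar luminosity and radius used below are assumptions of this example (standard IAU nominal values), not figures taken from the text:

import math

SIGMA = 5.670374419e-8  # Stefan–Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26        # nominal solar luminosity, W
R_SUN = 6.957e8         # nominal solar radius, m

def effective_temperature(luminosity, radius):
    # Stefan–Boltzmann law: L = 4*pi*R^2 * sigma * T_eff^4
    surface_flux = luminosity / (4.0 * math.pi * radius ** 2)
    return (surface_flux / SIGMA) ** 0.25

print(effective_temperature(L_SUN, R_SUN))  # ≈ 5772 K for the Sun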
Physical sciences
Solar System
Astronomy
38992
https://en.wikipedia.org/wiki/Cosmological%20constant
Cosmological constant
In cosmology, the cosmological constant (usually denoted by the Greek capital letter lambda: Λ), alternatively called Einstein's cosmological constant, is a coefficient that Albert Einstein initially added to his field equations of general relativity. He later removed it; however, much later it was revived to express the energy density of space, or vacuum energy, that arises in quantum mechanics. It is closely associated with the concept of dark energy. Einstein introduced the constant in 1917 to counterbalance the effect of gravity and achieve a static universe, which was then assumed. Einstein's cosmological constant was abandoned after Edwin Hubble confirmed that the universe was expanding. From the 1930s until the late 1990s, most physicists agreed with Einstein's choice of setting the cosmological constant to zero. That changed with the discovery in 1998 that the expansion of the universe is accelerating, implying that the cosmological constant may have a positive value. Since the 1990s, studies have shown that, assuming the cosmological principle, around 68% of the mass–energy density of the universe can be attributed to dark energy. The cosmological constant is the simplest possible explanation for dark energy, and is used in the standard model of cosmology known as the ΛCDM model. According to quantum field theory (QFT), which underlies modern particle physics, empty space is defined by the vacuum state, which is composed of a collection of quantum fields. All these quantum fields exhibit fluctuations in their ground state (lowest energy density) arising from the zero-point energy existing everywhere in space. These zero-point fluctuations should contribute to the cosmological constant Λ, but actual calculations give rise to an enormous vacuum energy. The discrepancy between the theorized vacuum energy from quantum field theory and the observed vacuum energy from cosmology is a source of major contention, with the values predicted exceeding observation by some 120 orders of magnitude, a discrepancy that has been called "the worst theoretical prediction in the history of physics!". This issue is called the cosmological constant problem and it is one of the greatest mysteries in science, with many physicists believing that "the vacuum holds the key to a full understanding of nature". History The cosmological constant was originally introduced in Einstein's 1917 paper entitled "Cosmological Considerations in the General Theory of Relativity". Einstein included the cosmological constant as a term in his field equations for general relativity because he was dissatisfied that otherwise his equations did not allow for a static universe: gravity would cause a universe that was initially non-expanding to contract. To counteract this possibility, Einstein added the cosmological constant. However, Einstein was not happy about adding this cosmological term. He later stated that "Since I introduced this term, I had always a bad conscience. ... I am unable to believe that such an ugly thing is actually realized in nature". Einstein's static universe is unstable against matter density perturbations. Furthermore, without the cosmological constant Einstein could have predicted the expansion of the universe before Hubble's observations. 
In 1929, not long after Einstein developed his static theory, observations by Edwin Hubble indicated that the universe appears to be expanding; this was consistent with a cosmological solution to the original general relativity equations that had been found by the mathematician Alexander Friedmann, working on the Einstein equations of general relativity. Einstein reportedly referred to his failure to accept the validation of his equations—when they had predicted the expansion of the universe in theory, before it was demonstrated in observation of the cosmological redshift—as his "biggest blunder" (according to George Gamow). It transpired that adding the cosmological constant to Einstein's equations does not lead to a static universe at equilibrium, because the equilibrium is unstable: if the universe expands slightly, then the expansion releases vacuum energy, which causes yet more expansion. Likewise, a universe that contracts slightly will continue contracting. However, the cosmological constant remained a subject of theoretical and empirical interest. Empirically, the cosmological data of recent decades strongly suggest that our universe has a positive cosmological constant. The explanation of this small but positive value is a remaining theoretical challenge, the so-called cosmological constant problem. Some early generalizations of Einstein's gravitational theory, known as classical unified field theories, either introduced a cosmological constant on theoretical grounds or found that it arose naturally from the mathematics. For example, Arthur Eddington claimed that the cosmological constant version of the vacuum field equation expressed the "epistemological" property that the universe is "self-gauging", and Erwin Schrödinger's pure-affine theory using a simple variational principle produced the field equation with a cosmological term. In the 1990s, Saul Perlmutter at Lawrence Berkeley National Laboratory, Brian Schmidt of the Australian National University and Adam Riess of the Space Telescope Science Institute were searching for type Ia supernovae. At that time, they expected to observe the deceleration of the supernovae caused by the gravitational attraction of mass, according to Einstein's gravitational theory. The first reports, published in July 1997 from the Supernova Cosmology Project, used the supernova observations to support the deceleration hypothesis. But they soon found that the supernovae were accelerating away. Both teams announced this surprising result in 1998. It implied that the universe is undergoing accelerating expansion. The cosmological constant is needed to explain such acceleration. Following this discovery, the cosmological constant was reinserted in the general relativity equations. 
In 1931, Einstein accepts the theory of an expanding universe and proposes, in 1932 with the Dutch physicist and astronomer Willem de Sitter, a model of a continuously expanding universe with zero cosmological constant (Einstein–de Sitter spacetime). In 1998, two teams of astrophysicists, the Supernova Cosmology Project and the High-Z Supernova Search Team, carried out measurements on distant supernovae which showed that the speed of galaxies' recession in relation to the Milky Way increases over time. The universe is in accelerated expansion, which requires a strictly positive Λ. The universe would contain a mysterious dark energy producing a repulsive force that counterbalances the gravitational braking produced by the matter contained in the universe (see Standard cosmological model). For this work, Perlmutter, Schmidt, and Riess jointly received the Nobel Prize in Physics in 2011. Equation The cosmological constant Λ appears in the Einstein field equations in the form Rμν − (1/2)R gμν + Λgμν = κTμν, where the Ricci tensor Rμν, the Ricci scalar R and the metric tensor gμν describe the structure of spacetime, the stress–energy tensor Tμν describes the energy density, momentum density and stress at that point in spacetime, and κ = 8πG/c⁴. The gravitational constant G and the speed of light c are universal constants. When Λ is zero, this reduces to the field equation of general relativity usually used in the 20th century. When Tμν is zero, the field equation describes empty space (a vacuum). The cosmological constant has the same effect as an intrinsic energy density of the vacuum, ρvac (and an associated pressure). In this context, it is commonly moved to the right-hand side of the equation using Λ = (8πG/c⁴)ρvac, where ρvac is the vacuum energy density. It is common to quote values of energy density directly, though still using the name "cosmological constant". The dimension of Λ is generally understood as an inverse length squared. Using the values known in 2018, with ΩΛ ≈ 0.69 for the dark-energy density parameter and a Hubble constant H0 ≈ 67.7 (km/s)/Mpc, Λ has a value of about 1.1×10⁻⁵² m⁻², or about 2.9×10⁻¹²² ℓP⁻², where ℓP is the Planck length. A positive vacuum energy density resulting from a cosmological constant implies a negative pressure, and vice versa. If the energy density is positive, the associated negative pressure will drive an accelerated expansion of the universe, as observed. (See Dark energy and Cosmic inflation for details.) ΩΛ (Omega sub lambda) Instead of the cosmological constant itself, cosmologists often refer to the ratio between the energy density due to the cosmological constant and the critical density of the universe, the dividing line between a density high enough to eventually halt the expansion and one that lets the universe expand forever. This ratio is usually denoted ΩΛ and is estimated to be approximately 0.69, according to results published by the Planck Collaboration in 2018. In a flat universe, ΩΛ is the fraction of the energy of the universe due to the cosmological constant, i.e., what we would intuitively call the fraction of the universe that is made up of dark energy. Note that this value changes over time: the critical density changes with cosmological time, but the energy density due to the cosmological constant remains unchanged throughout the history of the universe, so the total amount of dark energy increases as the universe grows while the amount of matter does not. Equation of state Another ratio that is used by scientists is the equation of state, usually denoted w, which is the ratio of the pressure that dark energy puts on the universe to the energy per unit volume. 
This ratio is w = −1 for the cosmological constant used in the Einstein equations; alternative time-varying forms of vacuum energy such as quintessence generally use a different value. The value measured by the Planck Collaboration (2018) is consistent with w = −1, assuming w does not change over cosmic time. Positive value Observations announced in 1998 of the distance–redshift relation for Type Ia supernovae indicated that the expansion of the universe is accelerating, if one assumes the cosmological principle. When combined with measurements of the cosmic microwave background radiation, these implied a value of ΩΛ ≈ 0.7, a result which has been supported and refined by more recent measurements (as well as previous works). While there are other possible causes of an accelerating universe, such as quintessence, if one assumes the cosmological principle, as is the case for all models that use the Friedmann–Lemaître–Robertson–Walker metric, the cosmological constant is in most respects the simplest solution. Thus, the Lambda-CDM model, the current standard model of cosmology which uses the FLRW metric, includes the cosmological constant, which is measured to be on the order of 10⁻⁵² m⁻². It may be expressed as about 10⁻³⁵ s⁻² (multiplying by c²) or as about 10⁻¹²² ℓP⁻² (where ℓP is the Planck length). The value is based on recent measurements of the vacuum energy density, ρvac ≈ 5.3×10⁻¹⁰ J/m³ (about 6×10⁻²⁷ kg/m³). However, due to the Hubble tension and the CMB dipole, it has recently been proposed that the cosmological principle is no longer true in the late universe and that the FLRW metric breaks down, so it is possible that observations usually attributed to an accelerating universe are simply a result of the cosmological principle not applying in the late universe. As has only recently been seen, through works of 't Hooft, Susskind and others, a positive cosmological constant has surprising consequences, such as a finite maximum entropy of the observable universe (see Holographic principle). Predictions Quantum field theory A major outstanding problem is that most quantum field theories predict a huge value for the quantum vacuum. A common assumption is that the quantum vacuum is equivalent to the cosmological constant. Although no theory exists that supports this assumption, arguments can be made in its favor. Such arguments are usually based on dimensional analysis and effective field theory. If the universe is described by an effective local quantum field theory down to the Planck scale, then we would expect a cosmological constant of the order of 1 in reduced Planck units. As noted above, the measured cosmological constant is smaller than this by a factor of ~10¹²⁰. This discrepancy has been called "the worst theoretical prediction in the history of physics". Some supersymmetric theories require a cosmological constant that is exactly zero, which further complicates things. This is the cosmological constant problem, the worst problem of fine-tuning in physics: there is no known natural way to derive the tiny cosmological constant used in cosmology from particle physics. No vacuum in the string theory landscape is known to support a metastable, positive cosmological constant, and in 2018 a group of four physicists advanced a controversial conjecture which would imply that no such universe exists. Anthropic principle One possible explanation for the small but non-zero value was noted by Steven Weinberg in 1987 following the anthropic principle. 
Weinberg explains that if the vacuum energy took different values in different domains of the universe, then observers would necessarily measure values similar to that which is observed: the formation of life-supporting structures would be suppressed in domains where the vacuum energy is much larger. Specifically, if the vacuum energy is negative and its absolute value is substantially larger than it appears to be in the observed universe (say, a factor of 10 larger), holding all other variables (e.g. matter density) constant, that would mean that the universe is closed; furthermore, its lifetime would be shorter than the age of our universe, possibly too short for intelligent life to form. On the other hand, a universe with a large positive cosmological constant would expand too fast, preventing galaxy formation. According to Weinberg, domains where the vacuum energy is compatible with life would be comparatively rare. Using this argument, Weinberg predicted that the cosmological constant would have a value of less than a hundred times the currently accepted value. In 1992, Weinberg refined this prediction of the cosmological constant to 5 to 10 times the matter density. This argument depends on the vacuum energy density being constant throughout spacetime, as would be expected if dark energy were the cosmological constant. There is no evidence that the vacuum energy does vary, but it may be the case if, for example, the vacuum energy is (even in part) the potential of a scalar field such as the residual inflaton (also see Quintessence). Another theoretical approach that deals with the issue is that of multiverse theories, which predict a large number of "parallel" universes with different laws of physics and/or values of fundamental constants. Again, the anthropic principle states that we can only live in one of the universes that is compatible with some form of intelligent life. Critics claim that these theories, when used as an explanation for fine-tuning, commit the inverse gambler's fallacy. In 1995, Weinberg's argument was refined by Alexander Vilenkin to predict a value for the cosmological constant that was only ten times the matter density, i.e. about three times the current value since determined. Failure to detect dark energy An attempt to directly observe and relate quanta or fields like the chameleon particle or the symmetron theory to dark energy, in a laboratory setting, failed to detect a new force. Inferring the presence of dark energy through its interaction with baryons in the cosmic microwave background has also led to a negative result, although the current analyses have been derived only at the linear perturbation regime. It is also possible that the difficulty in detecting dark energy is due to the fact that the cosmological constant describes an existing, known interaction (e.g. electromagnetic field).
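To make the numbers in the Equation and ΩΛ sections above concrete, here is a minimal Python sketch; it assumes the standard relation ΩΛ = Λc²/(3H0²) and the Planck 2018 parameter values, both of which are inputs of this example rather than results derived in the text:

import math

C = 2.99792458e8              # speed of light, m/s
G = 6.67430e-11               # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.0857e22               # one megaparsec, m
PLANCK_LENGTH = 1.616255e-35  # Planck length, m

OMEGA_LAMBDA = 0.6889         # Planck 2018 dark-energy density parameter (assumed)
H0 = 67.66 * 1.0e3 / MPC      # Hubble constant, (km/s)/Mpc converted to 1/s

# Omega_Lambda = Lambda * c^2 / (3 * H0^2)  =>  Lambda = 3 * Omega_Lambda * H0^2 / c^2
lam = 3.0 * OMEGA_LAMBDA * H0 ** 2 / C ** 2
print(lam)                        # ≈ 1.1e-52 m^-2
print(lam * PLANCK_LENGTH ** 2)   # ≈ 2.9e-122 in units of the Planck length squared

# Equivalent vacuum energy density: rho_vac = Lambda * c^4 / (8 * pi * G)
print(lam * C ** 4 / (8.0 * math.pi * G))  # ≈ 5.3e-10 J/m^3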
Physical sciences
Physical cosmology
null
38993
https://en.wikipedia.org/wiki/Optical%20depth
Optical depth
In physics, optical depth or optical thickness is the natural logarithm of the ratio of incident to transmitted radiant power through a material. Thus, the larger the optical depth, the smaller the amount of transmitted radiant power through the material. Spectral optical depth or spectral optical thickness is the natural logarithm of the ratio of incident to transmitted spectral radiant power through a material. Optical depth is dimensionless, and in particular is not a length, though it is a monotonically increasing function of optical path length, and approaches zero as the path length approaches zero. The use of the term "optical density" for optical depth is discouraged. In chemistry, a closely related quantity called "absorbance" or "decadic absorbance" is used instead of optical depth: the common logarithm of the ratio of incident to transmitted radiant power through a material. It is the optical depth divided by ln 10, because of the different logarithm bases used. Mathematical definitions Optical depth Optical depth of a material, denoted τ, is given by τ = ln(Φei/Φet) = −ln T, where Φei is the radiant flux received by that material, Φet is the radiant flux transmitted by that material, and T = Φet/Φei is the transmittance of that material. The absorbance A is related to optical depth by τ = A ln 10. Spectral optical depth Spectral optical depth in frequency and spectral optical depth in wavelength of a material, denoted τν and τλ respectively, are given by τν = ln(Φe,νi/Φe,νt) = −ln Tν and τλ = ln(Φe,λi/Φe,λt) = −ln Tλ, where Φe,νt is the spectral radiant flux in frequency transmitted by that material, Φe,νi is the spectral radiant flux in frequency received by that material, Tν = Φe,νt/Φe,νi is the spectral transmittance in frequency of that material, Φe,λt is the spectral radiant flux in wavelength transmitted by that material, Φe,λi is the spectral radiant flux in wavelength received by that material, and Tλ = Φe,λt/Φe,λi is the spectral transmittance in wavelength of that material. Spectral absorbance is related to spectral optical depth by τν = Aν ln 10 and τλ = Aλ ln 10, where Aν is the spectral absorbance in frequency and Aλ is the spectral absorbance in wavelength. Relationship with attenuation Attenuation Optical depth measures the attenuation of the transmitted radiant power in a material. Attenuation can be caused by absorption, but also reflection, scattering, and other physical processes. 
Optical depth of a material is approximately equal to its attenuation when both the absorbance is much less than 1 and the emittance of that material (not to be confused with radiant exitance or emissivity) is much less than the optical depth. In that case, Φet + Φeatt = Φei + Φee, where Φet is the radiant power transmitted by that material; Φeatt is the radiant power attenuated by that material; Φei is the radiant power received by that material; Φee is the radiant power emitted by that material; T = Φet/Φei is the transmittance of that material; ATT = Φeatt/Φei is the attenuation of that material; E = Φee/Φei is the emittance of that material. This gives T = 1 − ATT + E, and according to the Beer–Lambert law, T = exp(−τ), so τ = −ln(1 − ATT + E) ≈ ATT − E ≈ ATT under the stated conditions. Attenuation coefficient Optical depth of a material is also related to its attenuation coefficient by τ = ∫0l α(z) dz, where l is the thickness of that material through which the light travels and α(z) is the attenuation coefficient or Napierian attenuation coefficient of that material at z. If α(z) is uniform along the path, the attenuation is said to be a linear attenuation and the relation becomes τ = αl. Sometimes the relation is given using the attenuation cross section of the material, that is, its attenuation coefficient divided by its number density: τ = σ ∫0l n(z) dz, where σ is the attenuation cross section of that material and n(z) is the number density of that material at z. If n(z) is uniform along the path, i.e., n(z) = n, the relation becomes τ = σnl. Applications Atomic physics In atomic physics, the spectral optical depth of a cloud of atoms can be calculated from the quantum-mechanical properties of the atoms. It is given by τ = d²nν/(2cħε0σγ), where d is the transition dipole moment; n is the number of atoms; ν is the frequency of the beam; c is the speed of light; ħ is the reduced Planck constant; ε0 is the vacuum permittivity; σ is the cross section of the beam; γ is the natural linewidth of the transition. Atmospheric sciences In atmospheric sciences, one often refers to the optical depth of the atmosphere as corresponding to the vertical path from Earth's surface to outer space; at other times the optical path is from the observer's altitude to outer space. The optical depth for a slant path is τ = mτ′, where τ′ refers to a vertical path, m is called the relative airmass, and for a plane-parallel atmosphere it is determined as m = sec θ, where θ is the zenith angle corresponding to the given path. Therefore, τ = τ′/cos θ = τ′ sec θ. The optical depth of the atmosphere can be divided into several components, ascribed to Rayleigh scattering, aerosols, and gaseous absorption. The optical depth of the atmosphere can be measured with a Sun photometer. The optical depth with respect to the height z within the atmosphere is given by τ(z) = ka w1 ρ0 H exp(−z/H), and it follows that the total atmospheric optical depth is given by τ(0) = ka w1 ρ0 H. In both equations: ka is the absorption coefficient, w1 is the mixing ratio, ρ0 is the density of air at sea level, H is the scale height of the atmosphere, and z is the height in question. The optical depth of a plane-parallel cloud layer is given by τ = Qe [9πL²NH/(16ρl²)]^(1/3), where Qe is the extinction efficiency, L is the liquid water path, H is the geometrical thickness, N is the concentration of droplets, and ρl is the density of liquid water. So, with a fixed depth and total liquid water path, τ ∝ N^(1/3). Astronomy In astronomy, the photosphere of a star is defined as the surface where its optical depth is 2/3. This means that each photon emitted at the photosphere suffers an average of less than one scattering before it reaches the observer. At the temperature at optical depth 2/3, the energy emitted by the star (the original derivation is for the Sun) matches the observed total energy emitted. 
Note that the optical depth of a given medium will be different for different colors (wavelengths) of light. For planetary rings, the optical depth is the (negative logarithm of the) proportion of light blocked by the ring when it lies between the source and the observer. This is usually obtained by observation of stellar occultations.
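A small Python sketch tying these definitions together (the function names are ours, chosen for this example); it converts between transmittance, optical depth, and absorbance, and applies the plane-parallel airmass factor for a slant path:

import math

def optical_depth(flux_received, flux_transmitted):
    # tau = ln(flux_received / flux_transmitted) = -ln(T)
    return math.log(flux_received / flux_transmitted)

def absorbance(tau):
    # Decadic absorbance: A = tau / ln(10)
    return tau / math.log(10.0)

def slant_optical_depth(tau_vertical, zenith_angle_deg):
    # Plane-parallel atmosphere: tau = m * tau', with relative airmass m = sec(theta)
    airmass = 1.0 / math.cos(math.radians(zenith_angle_deg))
    return airmass * tau_vertical

tau = optical_depth(1.0, 0.5)           # half the light transmitted -> tau = ln 2 ≈ 0.693
print(tau, absorbance(tau))             # 0.693..., 0.301... (note 10**-0.301 ≈ 0.5)
print(slant_optical_depth(0.3, 60.0))   # zenith angle 60° gives airmass 2 -> tau = 0.6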
Physical sciences
Optics
Physics
38998
https://en.wikipedia.org/wiki/Vulcanization
Vulcanization
Vulcanization (British English: vulcanisation) is a range of processes for hardening rubbers. The term originally referred exclusively to the treatment of natural rubber with sulfur, which remains the most common practice. It has also grown to include the hardening of other (synthetic) rubbers via various means. Examples include silicone rubber via room-temperature vulcanizing and chloroprene rubber (neoprene) using metal oxides. Vulcanization can be defined as the curing of elastomers, with the terms 'vulcanization' and 'curing' sometimes used interchangeably in this context. It works by forming cross-links between sections of the polymer chain, which results in increased rigidity and durability, as well as other changes in the mechanical and electrical properties of the material. Vulcanization, in common with the curing of other thermosetting polymers, is generally irreversible. The word was suggested by William Brockedon (a friend of Thomas Hancock, who obtained the British patent for the process), after Vulcan, the Roman god associated with heat and sulfur in volcanoes. History In ancient Mesoamerican cultures, rubber was used to make balls, sandal soles, elastic bands, and waterproof containers. It was cured using sulfur-rich plant juices, an early form of vulcanization. In the 1830s, Charles Goodyear worked to devise a process for strengthening rubber tires. Tires of the time would become soft and sticky with heat, accumulating road debris that punctured them. Goodyear tried heating rubber in order to mix other chemicals with it. This seemed to harden and improve the rubber, though this was due to the heating itself and not the chemicals used. Not realizing this, he repeatedly ran into setbacks when his announced hardening formulas did not work consistently. One day in 1839, when trying to mix rubber with sulfur, Goodyear accidentally dropped the mixture into a hot frying pan. To his astonishment, instead of melting further or vaporizing, the rubber remained firm and, as he increased the heat, the rubber became harder. Goodyear worked out a consistent system for this hardening, patented the process by 1844, and began producing the rubber on an industrial scale. Applications There are many uses for vulcanized materials, some examples of which are rubber hoses, shoe soles, toys, erasers, hockey pucks, shock absorbers, conveyor belts, vibration mounts/dampers, insulation materials, tires, and bowling balls. Most rubber products are vulcanized, as this greatly improves their lifespan, function, and strength. Overview In contrast with thermoplastic processes (the melt-freeze processes that characterize the behaviour of most modern polymers), vulcanization, in common with the curing of other thermosetting polymers, is generally irreversible. Five types of curing systems are in common use: Sulfur systems Peroxides Metallic oxides Acetoxysilane Urethane crosslinkers Vulcanization with sulfur The most common vulcanizing methods depend on sulfur. Sulfur, by itself, is a slow vulcanizing agent and does not vulcanize synthetic polyolefins. Accelerated vulcanization is carried out using various compounds that modify the kinetics of crosslinking; this mixture is often referred to as a cure package. The main polymers subjected to sulfur vulcanization are polyisoprene (natural rubber) and styrene-butadiene rubber (SBR), which are used for most street-vehicle tires. The cure package is adjusted specifically for the substrate and the application. 
The reactive sites—cure sites—are allylic hydrogen atoms. These C-H bonds are adjacent to carbon-carbon double bonds (>C=C<). During vulcanization, some of these C-H bonds are replaced by chains of sulfur atoms that link with a cure site of another polymer chain. These bridges contain from one to several sulfur atoms. The number of sulfur atoms in the crosslink strongly influences the physical properties of the final rubber article. Short crosslinks give the rubber better heat resistance. Crosslinks with a higher number of sulfur atoms give the rubber good dynamic properties but less heat resistance. Dynamic properties are important for flexing movements of the rubber article, e.g., the movement of a side-wall of a running tire. Without good flexing properties these movements rapidly form cracks and ultimately make the rubber article fail. Vulcanization of polychloroprene The vulcanization of neoprene or polychloroprene rubber (CR rubber) is carried out using metal oxides (specifically MgO and ZnO, sometimes Pb3O4) rather than the sulfur compounds presently used with many natural and synthetic rubbers. In addition, because of various processing factors (principally scorch, the premature cross-linking of rubbers due to the influence of heat), the choice of accelerator is governed by different rules than for other diene rubbers. Most conventionally used accelerators are problematic when CR rubbers are cured, and the most important accelerant has been found to be ethylene thiourea (ETU), which, although an excellent and proven accelerator for polychloroprene, has been classified as reprotoxic. From 2010 to 2013, the European rubber industry ran a research project titled SafeRubber to develop a safer alternative to the use of ETU. Vulcanization of silicones Room-temperature-vulcanizing (RTV) silicone is constructed of reactive oil-based polymers combined with strengthening mineral fillers. There are two types of room-temperature-vulcanizing silicone: RTV-1 (one-component systems): hardens due to the action of atmospheric humidity, a catalyst, and acetoxysilane. Acetoxysilane, when exposed to humid conditions, will form acetic acid. The curing process begins on the outer surface and progresses through to its core. The product is packed in airtight cartridges and is either in a fluid or paste form. RTV-1 silicone has good adhesion, elasticity, and durability characteristics. The Shore hardness can be varied between 18 and 60. Elongation at break can range from 150% up to 700%. They have excellent aging resistance due to superior resistance to UV radiation and weathering. RTV-2 (two-component systems): two-component products that, when mixed, cure at room temperature to a solid elastomer, a gel, or a flexible foam. RTV-2 remains flexible over a wide range of temperatures, and breakdown occurs only at very high temperatures, leaving an inert silica deposit that is non-flammable and non-combustible. They can be used for electrical insulation due to their dielectric properties. Mechanical properties are satisfactory. RTV-2 is used to make flexible moulds, as well as many technical parts for industry and paramedical applications.
Technology
Materials
null
39007
https://en.wikipedia.org/wiki/Wall
Wall
A wall is a structure and a surface that defines an area; carries a load; provides security, shelter, or soundproofing; or is decorative. There are many kinds of walls, including: Border barriers between countries Brick walls Defensive walls in fortifications Permanent, solid fences Retaining walls, which hold back dirt, stone, water, or noise Stone walls Walls in buildings that form a fundamental part of the superstructure or separate interior rooms, sometimes for fire safety Glass walls in which the primary structure is made of glass; this does not include openings within walls that have glass coverings, as these are windows Walls that protect from oceans (seawalls) or rivers (levees) Etymology The term wall comes from the Latin vallum meaning "an earthen wall or rampart set with palisades, a row or line of stakes, a wall, a rampart, fortification", while the Latin word murus means a defensive stone wall. English uses the same word to mean an external wall and the internal sides of a room, but this is not universal. Many languages distinguish between the two. In German, some of this distinction can be seen between Wand and Mauer, in Spanish between pared and muro. Defensive wall The word wall originally referred to defensive walls and ramparts. Building wall The purposes of walls in buildings are to support roofs, floors and ceilings; to enclose a space as part of the building envelope, along with a roof, to give buildings form; and to provide shelter and security. In addition, the wall may house various types of utilities such as electrical wiring or plumbing. Wall construction falls into two basic categories: framed walls or mass-walls. In framed walls the load is transferred to the foundation through posts, columns or studs. Framed walls most often have three or more separate components: the structural elements (such as 2×4 studs in a house wall), insulation, and finish elements or surfaces (such as drywall or panelling). Mass-walls are of a solid material including masonry, concrete including slipform stonemasonry, log building, cordwood construction, adobe, rammed earth, cob, earthbag construction, bottles, tin cans, straw-bale construction, and ice. Walls may or may not be load-bearing. Walls are required to conform to the local building and/or fire codes. There are three basic methods by which walls control water intrusion: moisture storage, drained cladding, or face-sealed cladding. Moisture storage is typical of stone and brick mass-wall buildings, where moisture is absorbed and released by the walls of the structure itself. Drained cladding, also known as screened walls, acknowledges that moisture will penetrate the cladding, so a moisture barrier such as housewrap or felt paper inside the cladding provides a second line of defense, and sometimes a drainage plane or air gap allows a path for the moisture to drain down through and exit the wall. Sometimes ventilation is provided in addition to the drainage plane, such as in rainscreen construction. Face-sealed cladding, also called barrier wall or perfect barrier cladding, relies on maintaining a leak-free surface of the cladding. Examples of face-sealed cladding are the early exterior insulation finishing systems, structural glazing, metal clad panels, and corrugated metal. Building walls frequently become works of art, externally and internally, such as when featuring mosaic work or when murals are painted on them; or as design foci when they exhibit textures or painted finishes for effect. 
Curtain wall In architecture and civil engineering, curtain wall refers to a building facade that is not load-bearing but provides decoration, finish, front, face, or historical preservation. Precast wall Precast walls are walls which have been manufactured in a factory and then shipped to where they are needed, ready to install. They are faster to install than brick and other walls, and may have a lower cost than other types of wall; precast walls are particularly cost-effective compared with brick compound walls. Mullion wall Mullion walls are a structural system that carries the load of the floor slab on prefabricated panels around the perimeter. Partition wall A partition wall is a usually thin wall that is used to separate or divide a room, primarily a pre-existing one. Partition walls are usually not load-bearing, and can be constructed out of many materials, including steel panels, bricks, cloth, plastic, plasterboard, wood, blocks of clay, terracotta, concrete, and glass. Some partition walls are made of sheet glass. Glass partition walls are a series of individual toughened glass panels mounted in wood or metal framing. They may be suspended from or slide along a robust aluminium ceiling track. The system does not require the use of a floor guide, which allows easy operation and an uninterrupted threshold. A timber partition consists of a wooden framework, supported on the floor or by side walls. Metal lath and plaster, properly laid, forms a reinforced partition wall. Partition walls constructed from fibre cement backer board are popular as bases for tiling in kitchens or in wet areas like bathrooms. Galvanized sheet fixed to wooden or steel members is mostly adopted in works of a temporary character. Plain or reinforced partition walls may also be constructed from concrete, including pre-cast concrete blocks. Metal-framed partitioning is also available. This partition consists of track (used primarily at the base and head of the partition) and studs (vertical sections fixed into the track, typically spaced at 24", 16", or 12"). Internal wall partitions, also known as office partitioning, are usually made of plasterboard (drywall) or varieties of glass. Toughened glass is a common option, as low-iron glass (better known as opti-white glass) increases light and solar heat transmission. Wall partitions are constructed using beads and tracking that is either hung from the ceiling or fixed into the ground. The panels are inserted into the tracking and fixed. Some wall partition variations specify their fire resistance and acoustic performance rating. Movable partitions Movable partitions are walls that open to join two or more rooms into one large floor area. These include: Sliding—a series of panels that slide in tracks fixed to the floor and ceiling, similar to sliding doors Sliding and folding doors—similar to sliding partitions, these are good for smaller spans Folding partition walls - a series of interlocking panels suspended from an overhead track that, when extended, provide an acoustical separation, and, when retracted, stack against a wall, ceiling, closet, or ceiling pocket. Screens—usually constructed of a metal or timber frame fixed with plywood and chipboard and supported with legs for free standing and easy movement Pipe and drape—fixed or telescopic uprights and horizontals provide a ground-supported drape system with removable panels. Party wall Party walls are walls that separate buildings or units within a building. 
They provide fire resistance and sound resistance between occupants in a building. The minimum fire resistance and sound resistance required for a party wall is determined by a building code and may be modified to suit a variety of situations. Ownership of such walls can become a legal issue: a party wall is not a load-bearing wall and may be owned by different people.

Infill wall
An infill wall is the supported wall that closes the perimeter of a building constructed with a three-dimensional framework structure.

Fire wall
Fire walls resist the spread of fire within, or sometimes between, structures to provide passive fire protection. A delay in the spread of fire gives occupants more time to escape and fire fighters more time to extinguish the fire. Some fire walls allow fire-resistive window assemblies, and are made of non-combustible material such as concrete, cement block, brick, or fire-rated drywall. Wall penetrations are sealed with fire-resistive materials. A doorway in a fire wall must have a rated fire door. Fire walls provide varying resistance to the spread of fire (e.g., one, two, three, or four hours). Fire walls can also act as smoke barriers when constructed vertically from slab to roof deck and horizontally from exterior wall to exterior wall, subdividing a building into sections.

Shear wall
Shear walls resist lateral forces, such as those from an earthquake or severe wind. There are different kinds of shear walls, such as the steel plate shear wall.

Knee wall
Knee walls are short walls that either support rafters or add height in the top-floor rooms of houses. In a one-and-a-half-story house, the knee wall supports the half story.

Cavity wall
Cavity walls are walls made with a space between two "skins" to inhibit heat transfer.

Pony wall
Pony wall (or dwarf wall) is a general term for short walls, such as:
A half wall that only extends partway from floor to ceiling, without supporting anything
A stem wall: a concrete wall that extends from the foundation slab to the cripple wall or floor joists
A cripple wall: a framed wall from the stem wall or foundation slab to the floor joists

Demountable wall
Demountable walls fall into three main types:
Glass walls (unitised panels or butt-jointed)
Laminated particle board walls (these may also include other finishes, such as whiteboards, cork board, or magnetic surfaces, typically all on purpose-made wall studs)
Drywall

Solar energy
A Trombe wall in passive solar building design acts as a heat sink.

Shipbuilding
On a ship, a wall that separates major compartments is called a bulkhead. A thinner wall between cabins is called a partition.

Boundary wall
Boundary walls include privacy walls, boundary-marking walls on property, and town walls. These intergrade into fences. The conventional differentiation is that a fence is of minimal thickness and often open in nature, while a wall is usually more than a nominal thickness and is completely closed, or opaque. More to the point, an exterior structure of wood or wire is generally called a fence, but one of masonry is a wall. A common term for both is barrier, which is convenient for structures that are partly wall and partly fence, for example the Berlin Wall. Another kind of wall-fence ambiguity is the ha-ha, which is set below ground level to protect a view yet acts as a barrier (to cattle, for example). Before the invention of artillery, many of the world's cities and towns, particularly in Europe and Asia, had defensive or protective walls (also called town walls or city walls).
In fact, the English word "wall" derives from Latin vallum, a type of fortification wall. These walls are no longer relevant for defense, so such cities have grown beyond their walls, and many fortification walls, or portions of them, have been torn down, for example in Rome, Italy and Beijing, China. Examples of protective walls on a much larger scale include the Great Wall of China and Hadrian's Wall.

Border wall
Some walls formally mark the border between one population and another. A border wall is constructed to limit the movement of people across a certain line or border. These structures vary in placement with regard to international borders and topography. The most famous example of a border barrier in history is probably the Great Wall of China, a series of walls that separated the Empire of China from nomadic powers to the north. The most prominent recent example is the Berlin Wall, which surrounded the enclave of West Berlin and separated it from East Germany for most of the Cold War era. The US–Mexico border wall, separating the United States and Mexico, is another recent example.

Retaining wall
In areas of rocky soils around the world, farmers have often pulled large quantities of stone out of their fields to make farming easier, and have stacked those stones to make walls that mark the field boundary, the property boundary, or both. Retaining walls resist movement of earth, stone, or water. They may be part of a building or external. The ground surface or water on one side of a retaining wall is typically higher than on the other side. A dike is a retaining wall, as is a levee, a load-bearing foundation wall, and a sea wall.

Shared wall
Special laws often govern walls that neighbouring properties share. Typically, one neighbour cannot alter the common wall if it is likely to affect the building or property on the other side. A wall may also separate apartment or hotel rooms from each other. Each wall has two sides, and breaking a wall on one side will break the wall on the other side.

Portable wall
Portable walls, such as room dividers or portable partitions, divide a larger open space into smaller rooms. Portable walls can be static, such as cubicle walls, or can be wall panels mounted on casters to provide an easy way to reconfigure assembly space. They are often found inside schools, churches, convention centers, hotels, and corporate facilities.

Temporary wall
A temporary wall is constructed for easy removal or demolition. A typical temporary wall can be constructed with 1/2-inch (13 mm) to 5/8-inch (16 mm) sheet rock (plasterboard) over metal or wood studs (2×3s, approximately 5×7 cm, or 2×4s), then taped, plastered, and compounded. Most installation companies use lattice (strips of wood) to cover the joints between the temporary wall and the ceiling. These are sometimes known as pressurized walls or temporary pressurized walls.
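As a check on the sheet-rock thicknesses just given (the corrected metric figures above follow from plain unit arithmetic):

\[ \tfrac{1}{2}\ \text{in} \times 25.4\ \tfrac{\text{mm}}{\text{in}} = 12.7\ \text{mm} \approx 13\ \text{mm}, \qquad \tfrac{5}{8}\ \text{in} \times 25.4\ \tfrac{\text{mm}}{\text{in}} \approx 15.9\ \text{mm} \approx 16\ \text{mm}. \]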
Walls in popular culture
Walls are often seen in popular culture, oftentimes representing barriers that prevent progress or entry. For example:

Fictional and symbolic walls
The progressive/psychedelic rock band Pink Floyd used a metaphorical wall to represent the isolation felt by the protagonist of their 1979 concept album The Wall. The American poet laureate Robert Frost describes a pointless rock wall as a metaphor for the myopia of the culture-bound in his poem "Mending Wall", published in 1914. Walls are a recurring symbol in Ursula K. Le Guin's 1974 novel The Dispossessed. In some cases, a wall may refer to an individual's debilitating mental or physical condition, seen as an impassable barrier. In George R. R. Martin's A Song of Ice and Fire series and its television adaptation, Game of Thrones, The Wall plays multiple important roles: as a colossal fortification made of ice and fortified with magic spells, as a cultural barrier, and as a codification of assumptions. Breaches of the wall, who is allowed to cross it and who is not, and its destruction have important symbolic, logistical, and socio-political implications in the storyline. Reportedly over 700 feet high and 100 leagues (300 miles) long, it divides the northern border of the Seven Kingdoms realm from the domain of the wildlings and several categories of undead who live beyond it.

Historical walls
In a real-life example, the Berlin Wall, constructed by East Germany to cut West Berlin off from East Berlin and the surrounding East German territory, became a worldwide symbol of oppression and isolation.

Social media walls
Another common usage is as a communal surface to write upon. For instance, the social networking site Facebook previously used an electronic "wall" to log the scrawls of friends, until it was replaced by the "timeline" feature.
Bamboo
Bamboos are a diverse group of mostly evergreen perennial flowering plants making up the subfamily Bambusoideae of the grass family Poaceae. Giant bamboos are the largest members of the grass family; in the case of Dendrocalamus sinicus, individual stalks (culms) reach a length of , up to in thickness, and a weight of up to . The internodes of bamboos can also be of great length: Kinabaluchloa wrayi has internodes up to in length, and Arthrostylidium schomburgkii has internodes up to in length, exceeded in length only by papyrus. By contrast, the stalks of the tiny bamboo Raddiella vanessiae of the savannas of French Guiana measure only in length by about in width. The origin of the word "bamboo" is uncertain, but it probably comes from the Dutch or Portuguese language, which originally borrowed it from Malay or Kannada.
In bamboo, as in other grasses, the internodal regions of the stem are usually hollow, and the vascular bundles in the cross-section are scattered throughout the wall of the stalk instead of arranged in a cylindrical cambium layer between the bark (phloem) and the wood (xylem), as in dicots and conifers. The dicotyledonous woody xylem is also absent. The absence of secondary growth wood causes the stems of monocots, including the palms and large bamboos, to be columnar rather than tapering.
Bamboos include some of the fastest-growing plants in the world, due to a unique rhizome-dependent system. Certain species of bamboo can grow within a 24-hour period, at a rate of almost an hour (equivalent to every 90 seconds). Growth of up to in 24 hours has been observed in Japanese giant timber bamboo (Phyllostachys bambusoides). This rapid growth and tolerance for marginal land make bamboo a good candidate for afforestation, carbon sequestration, and climate change mitigation.
Bamboo is versatile and has notable economic and cultural significance in South Asia, Southeast Asia, and East Asia, being used for building materials, as a food source, and as a raw product, and is often depicted in arts such as bamboo paintings and bambooworking. Bamboo, like wood, is a natural composite material with a high strength-to-weight ratio useful for structures. Bamboo's strength-to-weight ratio is similar to that of timber, and its strength is generally similar to that of a strong softwood or hardwood. Some bamboo species have displayed remarkable strength under test conditions: Bambusa tulda of Bangladesh and adjoining India has tested as high as 60,000 psi (400 MPa) in tensile strength (a conversion is worked below). Other bamboo species make extraordinarily hard material; Bambusa tabacaria of China contains so much silica that it will make sparks when struck by an axe.
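The quoted tensile figure is internally consistent; converting pounds per square inch to megapascals is plain unit arithmetic on the numbers given above:

\[ 60{,}000\ \text{psi} \times 6894.76\ \tfrac{\text{Pa}}{\text{psi}} \approx 4.14 \times 10^{8}\ \text{Pa} \approx 414\ \text{MPa}, \]

which rounds to the 400 MPa given in the text.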
Taxonomy
Bamboos have long been considered the most basal grass genera, mostly because of the presence of bracteate, indeterminate inflorescences, "pseudospikelets", and flowers with three lodicules, six stamens, and three stigmata. Following more recent molecular phylogenetic research, many tribes and genera of grasses formerly included in the Bambusoideae are now classified in other subfamilies, e.g. the Anomochlooideae, the Puelioideae, and the Ehrhartoideae. The subfamily in its current sense belongs to the BOP clade of grasses, where it is sister to the Pooideae (bluegrasses and relatives). The bamboos comprise three clades classified as tribes, and these strongly correspond with geographic divisions representing the New World herbaceous species (Olyreae), tropical woody bamboos (Bambuseae), and temperate woody bamboos (Arundinarieae). The woody bamboos do not form a monophyletic group; instead, the tropical woody and herbaceous bamboos are sister to the temperate woody bamboos. Altogether, more than 1,400 species are placed in 115 genera.

Tribe Olyreae, 21 genera:
Subtribe Buergersiochloinae, one genus: Buergersiochloa.
Subtribe Olyrinae, 17 genera: Agnesia, Arberella, Cryptochloa, Diandrolyra, Ekmanochloa, Froesiochloa, Lithachne, Maclurolyra, Mniochloa, Olyra, Parodiolyra, Piresiella, Raddia, Raddiella, Rehia, Reitzia (syn. Piresia), Sucrea.
Subtribe Parianinae, three genera: Eremitis, Pariana, Parianella.

Tribe Bambuseae, 73 genera:
Subtribe Arthrostylidiinae, 15 genera: Actinocladum, Alvimia, Arthrostylidium, Athroostachys, Atractantha, Aulonemia, Cambajuva, Colanthelia, Didymogonyx, Elytrostachys, Filgueirasia, Glaziophyton, Merostachys, Myriocladus, Rhipidocladum.
Subtribe Bambusinae, 17 genera: Bambusa, Bonia, Cochinchinochloa, Dendrocalamus, Fimbribambusa, Gigantochloa, Maclurochloa, Melocalamus, Neomicrocalamus, Oreobambos, Oxytenanthera, Phuphanochloa, Pseudoxytenanthera, Soejatmia, Thyrsostachys, Vietnamosasa, Yersinochloa.
Subtribe Chusqueinae, one genus: Chusquea.
Subtribe Dinochloinae, seven genera: Cyrtochloa, Dinochloa, Mullerochloa, Neololeba, Pinga, Parabambusa, Sphaerobambos.
Subtribe Greslaniinae, one genus: Greslania.
Subtribe Guaduinae, five genera: Apoclada, Eremocaulon, Guadua, Olmeca, Otatea.
Subtribe Hickeliinae, nine genera: Cathariostachys, Decaryochloa, Hickelia, Hitchcockella, Nastus, Perrierbambus, Sirochloa, Sokinochloa, Valiha.
Subtribe Holttumochloinae, three genera: Holttumochloa, Kinabaluchloa, Nianhochloa.
Subtribe Melocanninae, nine genera: Annamocalamus, Cephalostachyum, Davidsea, Melocanna, Neohouzeaua, Ochlandra, Pseudostachyum, Schizostachyum, Stapletonia.
Subtribe Racemobambosinae, three genera: Chloothamnus, Racemobambos, Widjajachloa.
Subtribe Temburongiinae, one genus: Temburongia.
Incertae sedis, two genera: Ruhooglandia, Temochloa.

Tribe Arundinarieae, 31 genera: Acidosasa, Ampelocalamus, Arundinaria, Bashania, Bergbambos, Chimonobambusa, Chimonocalamus, Drepanostachyum, Fargesia, Ferrocalamus, Gaoligongshania, Gelidocalamus, Himalayacalamus, Indocalamus, Indosasa, Kuruna, Oldeania, Oligostachyum, Phyllostachys, Pleioblastus, Pseudosasa, Sarocalamus, Sasa, Sasaella, Sasamorpha, Semiarundinaria, Shibataea, Sinobambusa, Thamnocalamus, Vietnamocalamus, Yushania.

Distribution
Most bamboo species are native to warm and moist tropical and warm temperate climates. Their range also extends to cool mountainous regions and highland cloud forests. In the Asia-Pacific region, they occur across East Asia, from as far north as 50°N latitude in Sakhalin to as far south as northern Australia, and west to India and the Himalayas. China, Japan, Korea, India, and Australia all have several endemic populations. Bamboos also occur in small numbers in sub-Saharan Africa, confined to tropical areas, from southern Senegal in the north to southern Mozambique and Madagascar in the south. In the Americas, bamboo has a native range from 47°S in southern Argentina and the beech forests of central Chile, through the South American tropical rainforests, to the Andes in Ecuador near , with a noticeable gap through the Atacama Desert. Bamboo is also native through Central America and Mexico, northward into the Southeastern United States, where the three native species all belong to the genus Arundinaria. Bamboo thickets called canebrakes once formed a dominant ecosystem in some parts of the Southeastern United States, but they are now considered critically endangered ecosystems.
Canada and continental Europe are not known to have any native species of bamboo. Many species are also cultivated as garden plants outside of this range, including in Europe and areas of North America where no native wild bamboo exists. Recently, some attempts have been made to grow bamboo on a commercial basis in the Great Lakes region of east-central Africa, especially in Rwanda. In the United States, several companies are growing, harvesting, and distributing species such as Phyllostachys nigra (Henon) and Phyllostachys edulis (Moso).

Ecology
The two general patterns for the growth of bamboo are "clumping" and "running", with short and long underground rhizomes, respectively. Clumping bamboo species tend to spread slowly, as the growth pattern of the rhizomes is to simply expand the root mass gradually, similar to ornamental grasses. Running bamboos need to be controlled during cultivation because of their potential for aggressive behavior. They spread mainly through their rhizomes, which can spread widely underground and send up new culms to break through the surface. Running bamboo species are highly variable in their tendency to spread; this is related to the species and to soil and climate conditions. Some send out runners of several meters a year, while others stay in the same general area for long periods. If neglected, over time they can cause problems by moving into adjacent areas.
Bamboos include some of the fastest-growing plants on Earth, with reported growth rates of up to in 24 hours. These rates depend on local soil and climatic conditions as well as species, and a more typical growth rate for many commonly cultivated bamboos in temperate climates is in the range of per day during the growing period. Bamboo grew primarily in regions with warmer climates during the late Cretaceous period, when vast fields existed in what is now Asia. Some of the largest timber bamboos grow over tall and can be as large as in diameter. The size range for mature bamboo is species-dependent, with the smallest bamboos reaching only several inches high at maturity. A typical height range covering many of the common bamboos grown in the United States is , depending on species. Anji County of China, known as the "Town of Bamboo", provides the optimal climate and soil conditions to grow, harvest, and process some of the most valued bamboo poles available worldwide.
Unlike trees, individual bamboo culms emerge from the ground at their full diameter and grow to their full height in a single growing season of three to four months. During this time, each new shoot grows vertically into a culm with no branching out until the majority of its mature height is reached. Then the branches extend from the nodes and leafing out occurs. In the next year, the pulpy wall of each culm slowly hardens. During the third year, the culm hardens further. The shoot is now a fully mature culm. Over the next two to five years (depending on species), fungus begins to form on the outside of the culm and eventually penetrates and overcomes it. Around five to eight years later (species- and climate-dependent), the fungal growths cause the culm to collapse and decay. This brief life means culms are ready for harvest and suitable for use in construction within about three to seven years. Individual bamboo culms do not get any taller or larger in diameter in subsequent years than they do in their first year, and they do not replace any growth lost from pruning or natural breakage. Bamboo has a wide range of hardiness depending on species and locale.
Small or young specimens of an individual species produce small culms initially. As the clump and its rhizome system mature, taller and larger culms are produced each year until the plant approaches its particular species limits of height and diameter. Many tropical bamboo species die at or near freezing temperatures, while some of the hardier temperate bamboos survive temperatures as low as . Some of the hardiest bamboo species are grown in USDA plant hardiness zone 5; although they typically defoliate and may even lose all above-ground growth, the rhizomes survive and send up shoots again the next spring. In milder climates, such as USDA zone 7 and above, most bamboo remain fully leafed out and green year-round.

Mass flowering
Bamboos flower seldom and unpredictably, and the frequency of flowering varies greatly from species to species. Once flowering takes place, a plant declines and often dies entirely. In fact, many species flower only at intervals as long as 65 or 120 years. These taxa exhibit mass flowering (or gregarious flowering), with all plants in a particular 'cohort' flowering over a several-year period. Any plant derived through clonal propagation from this cohort will also flower, regardless of whether it has been planted in a different location. The longest mass flowering interval known, 120 years, is that of the species Phyllostachys bambusoides (Sieb. & Zucc.). In this species, all plants of the same stock flower at the same time, regardless of differences in geographic locations or climatic conditions, and then the bamboo dies. The commercially important bamboo Guadua, or cana brava (Guadua angustifolia), bloomed for the first time in recorded history in 1971, suggesting a blooming interval well in excess of 130 years. The lack of environmental impact on the time of flowering indicates the presence of some sort of "alarm clock" in each cell of the plant which signals the diversion of all energy to flower production and the cessation of vegetative growth. This mechanism, as well as the evolutionary cause behind it, is still largely a mystery.

Invasive species
Some bamboo species are acknowledged as having high potential for becoming invasive species. A study commissioned by the International Bamboo and Rattan Organisation found that invasive species are typically varieties that spread via running rhizomes, rather than clumping varieties such as most commercially viable woody bamboos. In the United States, the National Invasive Species Information Center, an agency of the Department of Agriculture, lists golden bamboo (Phyllostachys aurea) as an invasive species.

Animal diet
Bamboo contains large amounts of protein and very low amounts of carbohydrates, allowing this plant to be a source of food for many animals. Soft bamboo shoots, stems, and leaves are the major food source of the giant panda of China, the red panda of Nepal, and the bamboo lemurs of Madagascar. The red panda can eat up to a day, which is also about the animal's full body weight. Although raw bamboo contains trace amounts of harmful cyanide-producing compounds, with higher concentrations in bamboo shoots, the golden bamboo lemur ingests many times the quantity of the taxiphyllin-containing bamboo that would be lethal to a human. Mountain gorillas of Central Africa also feed on bamboo, and have been documented consuming bamboo sap which was fermented and alcoholic; chimpanzees and elephants of the region also eat the stalks.
The larvae of the bamboo borer (the moth Omphisa fuscidentalis) of Laos, Myanmar, Thailand, and Yunnan, China feed on the pulp of live bamboo. In turn, these caterpillars are considered a local delicacy. Bamboo is also used for livestock feed, with research showing that some bamboo varieties have higher protein content than others.

Cultivation

General
In Brazil, the Brazilian Center for Innovation and Sustainability (CEBIS), a non-profit organization, promotes the development of Brazil's bamboo production chain. It recently helped secure the approval of Law No. 21,162 in the state of Paraná, which encourages bamboo culture, aiming at the spread of its agricultural cultivation and the recognition of bamboo as an instrument for promoting the sustainable socioeconomic development of the state through its multiple uses. Bamboo cultivation can offset carbon emissions; it is cheap and, in addition to adding value to its production chain, it is a sustainable crop that brings environmental, economic, and social benefits, with products used for everything from construction to food. Bamboo was recently qualified and classified for the National Commission for Sustainable Development Objectives (CNDOS) of the Presidency of the Republic of the federal government of Brazil.

Harvesting
Bamboo used for construction purposes must be harvested when the culms reach their greatest strength and when sugar levels in the sap are at their lowest, as high sugar content increases the ease and rate of pest infestation. Compared to forest trees, bamboo species grow fast, and bamboo plantations can be harvested after a shorter period than tree plantations. Harvesting of bamboo is typically undertaken according to these cycles:
Lifecycle of the culm: As each individual culm goes through a five- to seven-year lifecycle, culms are ideally allowed to reach this level of maturity prior to full-capacity harvesting. The clearing out or thinning of culms, particularly older decaying culms, helps to ensure adequate light and resources for new growth. Well-maintained clumps may have a productivity three to four times that of an unharvested wild clump. Consistent with the lifecycle described above, bamboo is harvested from two to three years through to five to seven years, depending on the species.
Annual cycle: Almost all growth of new bamboo occurs during the wet season, and disturbing the clump during this phase will potentially damage the upcoming crop. Harvesting immediately prior to the wet/growth season may also damage new shoots, so harvesting is best done a few months before the start of the wet season. During this high-rainfall period sap levels are at their highest, diminishing towards the dry season.
Daily cycle: During the height of the day, photosynthesis is at its peak, producing the highest levels of sugar in the sap, making this the least ideal time of day to harvest. Many traditional practitioners believe the best time to harvest is at dawn or dusk on a waning moon.

Leaching
Leaching is the removal of sap after harvest. In many areas of the world, the sap levels in harvested bamboo are reduced either through leaching or post-harvest photosynthesis. For example: Cut bamboo is raised clear of the ground and leaned against the rest of the clump for one to two weeks, until the leaves turn yellow, to allow full consumption of the sugars by the plant.
A similar method is undertaken with the base of the culm standing in fresh water, either in a large drum or a stream, to leach out the sap. Cut culms are immersed in a running stream and weighted down for three to four weeks. Water is pumped through the freshly cut culms, forcing out the sap (this method is often used in conjunction with the injection of some form of treatment). In the process of water leaching, the bamboo is dried slowly and evenly in the shade to avoid cracking in the outer skin, thereby reducing opportunities for pest infestation. Durability of bamboo in construction is directly related to how well it is handled from the moment of planting through harvesting, transportation, storage, design, construction, and maintenance. Bamboo harvested at the correct time of year but then exposed to ground contact or rain will break down just as quickly as incorrectly harvested material.

Toxicity
Gardeners working with bamboo plants have occasionally reported allergic reactions, varying from no effects during previous exposures, to immediate itchiness and a rash developing into red welts after several hours where the skin had been in contact with the plant (contact allergy), and in some cases to swollen eyelids and breathing difficulties (dyspnoea). In one available case study, a skin prick test using bamboo extract was positive for immunoglobulin E (IgE). The shoots (newly emerged culms) of bamboo contain the toxin taxiphyllin (a cyanogenic glycoside), which produces cyanide in the gut.

Uses

Culinary
The shoots of most species are edible either raw or cooked, with the tough sheath removed. Cooking removes the slight bitterness. The shoots are used in numerous Asian dishes and broths, and are available in supermarkets in various sliced forms, both fresh and canned. The bamboo shoot in its fermented state forms an important ingredient in cuisines across the Himalayas. In Assam, India, for example, it is called khorisa. In Nepal, a delicacy popular across ethnic boundaries consists of bamboo shoots fermented with turmeric and oil, and cooked with potatoes into a dish that usually accompanies rice. In Indonesia, shoots are sliced thin and then boiled with santan (thick coconut milk) and spices to make a dish called gulai rebung. Other recipes using bamboo shoots are sayur lodeh (mixed vegetables in coconut milk) and lun pia (sometimes written lumpia: fried wrapped bamboo shoots with vegetables). The shoots of some species contain toxins that need to be leached or boiled out before they can be eaten safely.
Pickled bamboo, used as a condiment, may also be made from the pith of the young shoots. The sap of young stalks tapped during the rainy season may be fermented to make ulanzi (a sweet wine) or simply made into a soft drink. Bamboo leaves are also used as wrappers for steamed dumplings, which usually contain glutinous rice and other ingredients, such as the zongzi of China.
Pickled bamboo shoots are cooked with black-eyed beans as a delicacy in Nepal. Many Nepalese restaurants around the world serve this dish as aloo bodi tama. Fresh bamboo shoots are sliced and pickled with mustard seeds and turmeric and kept in a glass jar in direct sunlight for the best taste; the pickle is used alongside many dried beans in cooking during winter. Baby shoots (Nepali: tusa) of a very different variety of bamboo native to Nepal are cooked as a curry in hilly regions.
In Sambalpur, India, the tender shoots are grated into juliennes and fermented to prepare kardi. The name is derived from the Sanskrit word for bamboo shoot, karira. This fermented bamboo shoot is used in various culinary preparations, notably amil, a sour vegetable soup. It is also made into pancakes using rice flour as a binding agent. Shoots that have turned slightly fibrous are fermented, dried, and ground to sand-sized particles to prepare a garnish known as hendua. They are also cooked with tender pumpkin leaves to make a leafy green dish (sag). In Konkani cuisine, the tender shoots (kirlu) are grated and cooked with crushed jackfruit seeds to prepare kirla sukke.
In southern India and some regions of southwest China, the seeds of the dying bamboo plant are consumed as a grain known as "bamboo rice". The taste of cooked bamboo seeds is reported to be similar to wheat, and the appearance similar to rice, but bamboo seeds have been found to have lower nutrient levels than both. The seeds can be pulverized into a flour with which to make cakes. The Indian state of Sikkim has promoted bamboo water bottles to keep the state free from plastic bottles.
The empty hollow in the stalks of larger bamboo is often used to cook food in many Asian cultures. Soups are boiled and rice is cooked in the hollows of fresh stalks of bamboo directly over a flame. Similarly, steamed tea is sometimes rammed into bamboo hollows to produce compressed forms of pu'er tea. Cooking food in bamboo is said to give the food a subtle but distinctive taste.

Writing surface
Bamboo was in widespread use in early China as a medium for written documents. The earliest surviving examples of such documents, written in ink on string-bound bundles of bamboo strips (or "slips"), date from the fifth century BC during the Warring States period.
European bison
The European bison (plural: bison) (Bison bonasus) or the European wood bison, also known as the wisent, the zubr, or sometimes colloquially as the European buffalo, is a European species of bison. It is one of two extant species of bison, alongside the American bison. The European bison is the heaviest wild land animal in Europe, and individuals in the past may have been even larger than their modern-day descendants. During late antiquity and the Middle Ages, bison became extinct in much of Europe and Asia, surviving into the 20th century only in northern-central Europe and the northern Caucasus Mountains. During the early years of the 20th century, bison were hunted to extinction in the wild. By the late 2010s, the species numbered several thousand and had been returned to the wild by captive breeding programmes. It is no longer in immediate danger of extinction, but remains absent from most of its historical range. It is not to be confused with the aurochs (Bos primigenius), the extinct ancestor of domestic cattle, with which it once co-existed.
Besides humans, bison have few predators. In the 19th century, there were scattered reports of wolves, lions, tigers, and bears hunting bison. In the past, especially during the Middle Ages, humans commonly killed bison for their hides and meat; the horns were used to make drinking horns. European bison were hunted to extinction in the wild in the early 20th century, with the last wild animals of the B. b. bonasus subspecies being shot in the Białowieża Forest (on today's Belarus–Poland border) in 1921. The last of the Caucasian wisent subspecies (B. b. caucasicus) was shot in the northwestern Caucasus in 1927. The Carpathian wisent (B. b. hungarorum) had been hunted to extinction by 1852. The Białowieża or lowland European bison was kept alive in captivity, and has since been reintroduced into several countries in Europe. In 1996, the International Union for Conservation of Nature classified the European bison as an endangered species, no longer extinct in the wild. Its status has improved since then, changing to vulnerable and later to near-threatened.
European bison were first scientifically described by Carl Linnaeus in 1758. Some later descriptions treat the European bison as conspecific with the American bison. Three subspecies of the European bison existed in the recent past, but only one, the nominate subspecies (B. b. bonasus), survives today. The ancestry and relationships of the wisent to fossil bison species remain controversial and disputed. The European bison is one of the national animals of Poland and Belarus.

Etymology
The ancient Greeks and ancient Romans were the first to name bison as such; the 2nd-century AD authors Pausanias and Oppian referred to them as bison (βίσων). Earlier, in the 4th century BC, Aristotle referred to the bison as bonasos (βόνασος). He also noted that the Paeonians called it μόναπος (monapos). Claudius Aelianus, writing in the late 2nd or early 3rd centuries AD, also referred to the species by this name, and both Pliny the Elder's Natural History and Gaius Julius Solinus used Latin forms of the word. Both Martial and Seneca the Younger mention the animal. Later Latin spellings of the term varied. John Trevisa is the earliest author cited by the Oxford English Dictionary as using the Latin plural in English, as "bysontes", in his 1398 translation of Bartholomeus Anglicus's De proprietatibus rerum. Philemon Holland's 1601 translation of Pliny's Natural History referred to "bisontes".
The marginalia of the King James Version gives "bison" as a gloss for the Biblical animal called the "pygarg" mentioned in the Book of Deuteronomy. Randle Cotgrave's 1611 French–English dictionary notes that the word was already in use in French, and it may have influenced the adoption of the word into English; alternatively, it may have been borrowed directly from Latin. John Minsheu's 1617 lexicon, Ductor in linguas, gives a definition for Bíson. In the 18th century the name of the European animal was applied to the closely related American bison (initially in Latin in 1693, by John Ray) and the Indian bison (the gaur, Bos gaurus). Historically the word was also applied to Indian domestic cattle, the zebu (B. indicus or B. primigenius indicus). Because of the scarcity of the European bison, the word 'bison' was most familiar in relation to the American species.
By the time of the adoption of 'bison' into Early Modern English, the early medieval English name for the species had long been obsolete. The word 'wisent' was then borrowed in the 19th century from modern German Wisent, itself related to cognate forms in other Germanic languages and ultimately, like the Old English name, derived from Proto-Germanic. The word 'zubr' in English is a borrowing from Polish żubr, previously also used to denote one race of the European bison. The Polish żubr is similar to the word for the European bison in other modern Slavic languages. The noun for the European bison in all living Slavonic tongues is thought to be derived from Proto-Slavic *zǫbrъ ~ *izǫbrъ, which itself possibly comes from Proto-Indo-European *ǵómbʰ-, meaning tooth, horn, or peg.

Description
The European bison is the heaviest surviving wild land animal in Europe. Similar to their American cousins, European bison were potentially larger historically than their remnant descendants; modern animals are about in length, not counting a tail of , in height, and in weight for males, and about in body length without the tail, in height, and in weight for females. At birth, calves are quite small, weighing between . In the free-ranging population of the Białowieża Forest of Belarus and Poland, body masses among adults (aged 6 and over) average for males and for females. An occasional big bull European bison can weigh up to or more, with old bull records of for the lowland wisent and for the Caucasian wisent. On average, the European bison is lighter in body mass, yet slightly taller at the shoulder, than its American relatives, the wood bison (Bison bison athabascae) and the plains bison (Bison bison bison). Compared to the American species, the wisent has shorter hair on the neck, head, and forequarters, but a longer tail and horns (see the differences from the American bison, below). The European bison makes a variety of vocalisations depending on its mood and behaviour; when anxious, it emits a growl-like sound, known in Polish as chruczenie. This sound can also be heard from wisent males during the mating season.

History

Prehistory
The similarity in skeletal morphology between the wisent and the steppe bison (Bison priscus), which also formerly inhabited Europe, complicates the understanding of the early evolution of the European bison. It is thought that European bison genetically diverged from steppe bison (as well as modern American bison, which are descended from steppe bison) at least 100,000 years ago.
Genetic evidence indicates that European bison were present across Europe, from France to the Caucasus, during the Last Glacial Period, where they co-existed alongside steppe bison. Late Pleistocene European bison belong to two mitochondrial genome lineages, which one study estimated to have split around 400,000 years ago: Bb1 (also known as Bison X) and Bb2. Bb1 has been found across Europe, spanning from France (with sedimentary ancient DNA data from the lower archaeological stratigraphic sequence of El Mirón Cave, Cantabria, Spain) to the Caucasus, while Bb2 was originally found only in the Caucasus before expanding westwards from around 14,000 years ago. Bb1 became extinct at the end of the Late Pleistocene, and all modern European bison belong to the Bb2 lineage. At the end of the Last Glacial Period, steppe bison became extinct in Europe, leaving European bison as the only bison species in the region.
Historically, the lowland European bison's range encompassed most of the lowlands of northern Europe, extending from the Massif Central to the Volga River and the Caucasus. It may have once lived in the Asiatic part of what is now the Russian Federation, reaching Lake Baikal and the Altai Mountains in the east. The European bison is known in southern Sweden only between 9500 and 8700 BP, and in Denmark is similarly documented only from the Pre-Boreal. It is not recorded from the British Isles, nor from Italy or the Iberian Peninsula, although the prehistoric absence of the species from the British Isles is debatable; bison fossils of unclarified species have been found at Doggerland (Brown Bank), on the Isle of Wight, and in Oxfordshire, along with fossil records of the Pleistocene woodland bison and steppe bison from the isles. Furthermore, a 16,000-year-old specimen belonging to the ancient Bison X group of European bison has been found in northern Italy.
The extinct steppe bison (B. priscus) is known from across Eurasia and North America, last occurring between 7,000 and 5,400 BC, and is depicted in the Cave of Altamira and Lascaux. The Pleistocene woodland bison (B. schoetensacki) has been proposed to have last existed around 36,000 BC, but other authors restrict B. schoetensacki to remains that are hundreds of thousands of years older. Cave paintings appear to distinguish between B. bonasus and B. priscus.

Antiquity and Middle Ages
Within mainland Europe, the bison's range decreased as human populations expanded and cut down forests. Bison seem to have been common in Aristotle's period on Mount Mesapion (possibly the modern Ograzhden). In the same wider area, Pausanias, calling them Paeonian bulls and bison, gives details on how they were captured alive, adding that a golden Paeonian bull's head was offered at Delphi by the Paeonian king Dropion (3rd century BC), who lived in what is today Tikveš. The last references to the animal (by Oppian and Claudius Aelianus) in the transitional Mediterranean/Continental biogeographical region in the Balkans, in the area of the modern borderline between Greece, North Macedonia, and Bulgaria, date to the 3rd century AD. In northern Bulgaria, the wisent was thought to have survived until the 9th or 10th century AD, but a more recent data summary shows that the species survived up to the 13th–14th century AD in eastern Bulgaria and up to the 16th–17th century AD in the northern part of the country. There is a possibility that the species' range extended to East Thrace during the 7th–8th centuries AD. Its population in Gaul became extinct in the 8th century AD.
The species survived in the Ardennes and the Vosges Mountains until the 15th century. In the Early Middle Ages, the wisent apparently still occurred in the forest steppes east of the Urals and in the Altai Mountains, and seems to have reached Lake Baikal in the east. Its northern boundary in the Holocene was probably around 60°N in Finland. European bison survived in a few natural forests in Europe, but their numbers dwindled.

Early Modern period
In 1513, the Białowieża Forest, at this point one of the last areas on Earth where the European bison still roamed free, was transferred from the Troki Voivodeship of Lithuania to the Podlaskie Voivodeship, which after the Union of Lublin became part of the Polish Crown. In the Polish–Lithuanian Commonwealth, European bison in the Białowieża Forest were at first legally the property of the Grand Dukes of Lithuania, and later belonged to the Crown of the Kingdom of Poland. Polish-Lithuanian rulers took measures to protect the European bison; King Sigismund II Augustus, for instance, instituted the death penalty for poaching bison in Białowieża in the mid-16th century. Wild European bison herds existed in the forest until the mid-17th century. In 1701, King Augustus II the Strong greatly increased protection over the forest; the first written sources mentioning the use of some forest meadows for the production of winter fodder for the bison come from this period. In the early 19th century, after the partitions of the Polish Commonwealth, the Russian tsars retained the old Polish-Lithuanian laws protecting the European bison herd in Białowieża. Despite these and other measures, the European bison population continued to decline over the following century, with only the Białowieża and northern Caucasus populations surviving into the 20th century. The last European bison in Transylvania died in 1790.

Early 20th century
During World War I, occupying German troops killed 600 of the European bison in the Białowieża Forest for sport, meat, hides, and horns. A German scientist informed army officers that the European bison were facing imminent extinction, but at the very end of the war, retreating German soldiers shot all but nine animals. The last wild European bison in Poland was killed in 1921. The last wild European bison in the world was killed by poachers in 1927 in the western Caucasus. By that year, only 48 remained, all held by zoos.
The International Society for the Preservation of the Wisent was founded on 25 and 26 August 1923 in Berlin, following the example of the American Bison Society. The first chairman was Kurt Priemel, director of the Frankfurt Zoo, and among the members were experts such as Hermann Pohle, Max Hilzheimer, and Julius Riemer. The first goal of the society was to take stock of all living bison, in preparation for a breeding programme. Important members were the Polish Hunting Association and the Poznań zoological gardens, as well as a number of Polish private individuals, who provided funds to acquire the first bison cows and bulls. The breeding book was published in the society's annual report from 1932. While Priemel aimed to grow the population slowly with pure conservation of the breeding line, Lutz Heck planned to grow the population faster by cross-breeding with American bison in a separate breeding project, begun in Munich in 1934.

World War II
Heck gained the support of the then Reichsjägermeister Hermann Göring, who hoped for huntable big game.
Heck promised his powerful supporter in writing: "Since surplus bulls will soon be set [free], the hunting of the wisent will be possible again in the foreseeable future". Göring himself took over the patronage of the German Professional Association of Wisent Breeders and Hegers, founded at Heck's suggestion. Kurt Priemel, who had since resigned as president of the International Society for the Preservation of the Wisent, warned in vain against "manification". Heck answered by announcing that Göring would take action against Priemel if he continued to oppose his cross-breeding plans. Priemel was then banned from publishing in relation to bison breeding, and the keeper of the International Society's register, Erna Mohr, was forced to hand over the official register in 1937. Thus, the older society was effectively incorporated into the newly created Professional Association. After the Second World War, therefore, only the pure-blooded bison in the game park at Springe, near Hanover, were recognised as part of the international herd book.

1950s onwards
The first two bison were released into the wild in the Białowieża Forest in 1929; by 1964 more than 100 existed. Over the following decades, thanks to Polish and international efforts, the Białowieża Forest regained its position as the location with the world's largest population of European bison, including those in the wild. In 2005–2007, a wild bison nicknamed Pubal became renowned in southeast Poland due to his friendly interactions with humans and unwillingness to reintegrate into the wild. As of 2014 there were 1,434 wisents in Poland, of which 1,212 were in free-range herds and 522 belonged to the wild population in the Białowieża Forest. Compared to 2013, the total population in 2014 increased by 4.1%, while the free-ranging population increased by 6.5%. Bison from Poland have also been transported beyond the country's borders to boost the local populations of other countries, among them Bulgaria, Spain, Romania, and Czechia. Poland has been described as the world's breeding centre of the European bison: the bison population there doubled between 1995 and 2017, reaching 2,269 by the end of 2019, and the total population has been increasing by around 15% to 18% yearly. In July 2022, a small population was released into woodland near Canterbury in Kent to trial their reintroduction into the UK. In May 2024, a small population was released in central Portugal. In 2012 and 2019, bison were released in protected areas on Bornholm and in Northern Jutland, Denmark.

Genetic history
A 2003 study of mitochondrial DNA indicated four distinct maternal lineages in the tribe Bovini:
Taurine cattle and zebu
Wisent
American bison and yak
Banteng, gaur, and gayal
Y chromosome analysis associated the wisent with the American bison. An earlier study, using amplified fragment-length polymorphism fingerprinting, showed a close association of the wisent with the American bison, and probably with the yak, though it noted that the interbreeding of Bovini species made determining relationships problematic.
European bison can crossbreed with American bison; the hybrid is known in Poland as a żubrobizon. The products of a German interbreeding programme were destroyed after the Second World War. This programme was related to the impulse which created the Heck cattle. The cross-bred individuals created at other zoos were eliminated from breed books by the 1950s.
A Russian back-breeding programme resulted in a wild herd of hybrid animals, which presently lives in the Caucasian Biosphere Reserve (550 animals in 1999). Wisent–cattle hybrids also occur, similar to the North American beefalo; cattle and European bison hybridise fairly readily, though first-generation males are infertile. In 1847, a herd of wisent–cattle hybrids named żubroń was created by Leopold Walicki. The animals were intended to become a durable and cheap alternative to cattle. The experiment was continued by researchers from the Polish Academy of Sciences until the late 1980s. Although the programme resulted in a quite successful animal that was both hardy and could be bred on marginal grazing lands, it was eventually discontinued. Currently, the only surviving żubroń herd consists of just a few animals in the Białowieża Forest in Poland and Belarus.
In 2016, the first whole-genome sequencing data from two European bison bulls from the Białowieża Forest revealed that the bison and bovine species diverged from about 1.7 to 0.85 million years ago, through a speciation process involving limited gene flow. These data further support the occurrence of more recent secondary contacts, posterior to the divergence between Bos primigenius primigenius and B. p. namadicus (c. 150,000 years ago), between the wisent and (European) taurine cattle lineages. An independent study of mitochondrial DNA and autosomal markers confirmed these secondary contacts (with an estimate of up to 10% bovine ancestry in the modern wisent genome), leading the authors to go further in their conclusions by proposing the wisent to be a hybrid between the steppe bison and the aurochs, with a hybridisation event originating before 120,000 years ago. However, other studies considered the 10% estimate for aurochs gene flow a gross overestimate based on flawed data and not supported by the full nuclear genome of the wisent; they put the actual contribution from aurochs/cattle at around 2.4–3.2%, suggested to have occurred in the last 70,000 years. Some authors, however, support the hypothesis that the similarity of wisent and cattle (Bos) mitochondrial genomes is the result of incomplete lineage sorting during the divergence of Bos and Bison from their common ancestors, rather than later post-speciation gene flow (ancient hybridisation between Bos and Bison), though they agree that limited gene flow from Bos primigenius taurus could account for the affiliation between wisent and cattle nuclear genomes (in contrast to the mitochondrial ones).
Alternatively, genome sequencing completed on remains attributed to the Pleistocene woodland bison (B. schoetensacki), published in 2017, posited that genetic similarities between the Pleistocene woodland bison and the wisent suggest that B. schoetensacki was the ancestor of the European wisent. However, other studies have disputed the attribution of the remains to this species, otherwise known from remains hundreds of thousands of years older, instead referring them to an unnamed lineage of bison closely related to B. bonasus. A 2018 study proposed that the lineage leading to the wisent and the lineage ancestral to both the American bison and Bison priscus split over 1 million years ago, with the mitochondrial DNA discrepancy being the result of incomplete lineage sorting. The authors also proposed that during the late Middle Pleistocene there had been interbreeding between the ancestor of the wisent and the common ancestor of Bison priscus and the American bison.
Behaviour and biology

Social structure and territorial behaviours
The European bison is a herd animal which lives in both mixed and solely male groups. Mixed groups consist of adult females, calves, young aged 2–3 years, and young adult bulls. The average herd size is dependent on environmental factors, though herds average eight to 13 animals. Herds consisting solely of bulls are smaller than mixed ones, containing two individuals on average. European bison herds are not family units: different herds frequently interact, combine, and quickly split after exchanging individuals. The bison's social structure has been described by specialists as a matriarchy, as it is the cows of the herd that lead it and decide where the entire group moves to graze. Although larger and heavier than the females, the oldest and most powerful bulls are usually satellites that stay around the edges of the herd to protect the group. Bulls take a more active role in the herd when a danger to the group's safety appears, as well as during the mating season, when they compete with each other. Territory held by bulls is correlated with age, with young bulls aged between five and six tending to form larger home ranges than older males. The European bison does not defend territory, and herd ranges tend to overlap greatly. Core areas of territory are usually sited near meadows and water sources.

Reproduction
The rutting season occurs from August through October. Bulls aged 4–6 years, though sexually mature, are prevented from mating by older bulls. Cows usually have a gestation period of 264 days, and typically give birth to one calf at a time. On average, male calves weigh at birth, and females . Body size in males increases proportionately up to the age of 6 years. While females have a higher increase in body mass in their first year, their growth rate is comparatively slower than that of males by the age of 3–5. Bulls reach sexual maturity at the age of two, while cows do so in their third year. European bison have lived as long as 30 years in captivity, but in the wild their lifespan is usually between 18 and 24 years, with females living longer than males. Productive breeding years are between four and 20 years of age in females, and only between six and 12 years of age in males.

Diet
European bison feed predominantly on grasses, although they also browse on shoots and leaves; in summer, an adult male can consume 32 kg of food in a day. European bison in the Białowieża Forest in Poland have traditionally been fed hay in the winter for centuries, and large herds may gather around this diet supplement. European bison need to drink every day, and in winter can be seen breaking ice with their heavy hooves.

Differences from American bison
Although superficially similar, a number of physical and behavioural differences exist between the European bison and the American bison. The European bison has 14 pairs of ribs, while the American bison has 15. Adult European bison are, on average, taller than American bison and have longer legs. European bison tend to browse more, and graze less, than their American relatives, and to accommodate this their necks are set differently: compared to the American bison, the nose of the European bison is set further forward than the forehead when the neck is in a neutral position. The body of the wisent is less hairy, though its tail is hairier than that of the American species.
The horns of the European bison point forward through the plane of the face, making them more adept at fighting through the interlocking of horns, in the same manner as domestic cattle, unlike the American bison, which favours charging. European bison are less tameable than American bison, and breed with domestic cattle less readily. The European bison is also less shaggy, with a lankier body shape. Behaviourally, the European bison runs slower and with less stamina than the American bison, yet jumps higher and longer, showing signs of better-developed adaptations to mountainous habitats.

Conservation
The protection of the European bison has a long history; between the 15th and 18th centuries, those in the forest of Białowieża were protected and their diet supplemented. Efforts to restore this species to the wild began in 1929, with the establishment of the Bison Restitution Centre at Białowieża, Poland. Subsequently, in 1948, the Bison Breeding Centre was established within the Prioksko-Terrasny Biosphere Reserve. The modern herds are managed as two separate lines – one consisting of only Bison bonasus bonasus (all descended from only seven animals) and one consisting of all 12 ancestors, including the one B. b. caucasicus bull. The latter line is generally not considered a separate subspecies because the animals contain DNA from both B. b. bonasus and B. b. caucasicus, although some scientists classify them as a new subspecies, B. b. montanus. Only a limited amount of inbreeding depression from the population bottleneck has been found, having a small effect on skeletal growth in cows and causing a small rise in calf mortality. Genetic variability continues to shrink: from five initial bulls, all current European bison bulls carry one of only two remaining Y chromosomes.

Reintroduction
Beginning in 1951, European bison have been reintroduced into the wild, including in some areas where they were never found wild. Free-ranging herds are currently found in Poland, Lithuania, Belarus, Ukraine, Bulgaria, Romania, Russia, Slovakia, Latvia, Switzerland, Kyrgyzstan, Germany, and in forest preserves in the western Caucasus. The Białowieża Primeval Forest, an ancient woodland that straddles the border between Poland and Belarus, continues to have the largest free-living European bison population in the world, with around 1,000 wild bison counted in 2014. Herds have also been introduced in Moldova (2005), Spain (2010), Denmark (2012), the Czech Republic (2014), and Portugal (2024). The reintroduction of bison to a 20-square-mile grassland area in the Țarcu Mountains of Romania in 2014 was found to have resulted in an additional 54,000 tons of carbon draw-down annually (this figure is converted to per-area terms below).
The Wilder Blean project, led by the Wildwood Trust and Kent Wildlife Trust, introduced European bison to the UK for the first time in 6,000 years (although there was an unsuccessful attempt in Scotland in 2011, and the European bison is not confirmed to be native to England; the British Isles were once inhabited by the now-extinct steppe bison and Pleistocene woodland bison). The herd of three females, with plans to also release a male in the following months, was set free in July 2022 within a 2,500-acre conservation area in West Blean and Thornden Woods, near Canterbury. Unknown to the rangers, one of the females was pregnant and gave birth to a calf in October 2022, the first wild bison born in the UK in millennia. In winter 2023, the matriarch of the herd gave birth to a male calf, and a further two female calves were born at the site in October 2024.
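To put the Țarcu Mountains figure in per-area terms (plain unit arithmetic on the numbers quoted above, treating the cited tons as metric tonnes at this level of precision):

\[ 20\ \text{mi}^2 \times 2.59\ \tfrac{\text{km}^2}{\text{mi}^2} \approx 51.8\ \text{km}^2, \qquad \frac{54{,}000\ \text{t/yr}}{51.8\ \text{km}^2} \approx 1{,}000\ \text{t}\,\text{km}^{-2}\,\text{yr}^{-1}. \]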
A further two female calves were born at the site in October 2024. As mentioned below, there are also established herds in Spain, Portugal and Italy, although the European bison has never been recorded naturally on the Iberian or Italian Peninsulas; these regions were once inhabited by the Pleistocene woodland bison and steppe bison.
Numbers and distribution
Numbers by country
The total worldwide population recorded in 2019 was around 7,500 – about half of this number in Poland and Belarus, with over 25% of the global population located in Poland alone. For 2016, the number was 6,573 (including 4,472 free-ranging) and has been increasing. Some local populations are estimated as:
: 10 animals
: 29 animals in 2021.
Belarus: 2,385 animals in 2023.
Bulgaria: Around 150 animals in northeastern Bulgaria; a smaller population has been reintroduced in the eastern Rhodope Mountains.
: 106 animals in 2017.
Denmark: Two herds were established in the summer of 2012 as part of conservation of the species. First, fourteen animals were released in meadows near the town of Randers, and later, seven animals on Bornholm. In June 2012, one male and six females were moved from Poland to the Danish island of Bornholm. The plan was to examine whether a wild population of bison could be established on the island over a five-year period. In 2018, it was decided to keep the bison on Bornholm, maintained within the large fenced-in part of the Almindingen forest where they were originally introduced. In 2019, the bison that had initially been introduced near Randers were moved to the more suitable and spacious Lille Vildmose; these were supplemented by seven animals from the Netherlands in 2021.
France: One herd was established in 2005 in the Alps near the village of Thorenc (close to the city of Grasse), as part of conservation of the species. In 2015, it contained around 50 animals.
Germany: A herd of 8 animals (1 male, 5 females, and 2 calves) was released into the wild in April 2013 at the Rothaarsteig natural reserve near Bad Berleburg (North Rhine-Westphalia), after an absence of 850 years since the species became extinct in that region. As of May 2015, 13 free-roaming wisents lived there. In September 2017, one of the free-living Polish animals swam across the border river Oder and migrated to Germany – the first wild bison seen in Germany for more than 250 years. German authorities ordered the animal to be killed, and it was shot dead by hunters in September 2017. As of 2020, the population had steadily increased to 26 individuals, living in one subpopulation.
Hungary: 11 animals in the Őrség National Park and a few more in the Körös-Maros National Park.
Italy: A small herd can be found in the Natura Viva Park near Verona, where the animals are protected and are being prepared for release back into the wild areas of Romania.
Kyrgyzstan: Animals were reintroduced at one point.
Latvia: Animals were reintroduced in Pape Nature Reserve in 2007.
Lithuania: 214 free-ranging animals as of 2017.
Moldova: Extirpated from Moldova since the 18th century, wisents were reintroduced with the arrival of three European bison from the Białowieża Forest in Poland several days before Moldova's Independence Day on 27 August 2005. Moldova is currently interested in expanding its wisent population, and began talks with Belarus in 2019 regarding a bison exchange programme between the two countries. Bison can be found in Pădurea Domnească.
Netherlands: Natuurpark Lelystad: In 1976, the first wisent arrived from Białowieża. Natuurpark Lelystad is a breeding centre with a herd of approximately
25 animals living together with Przewalski's horses. All wisents are registered in the European studbook and are of the Lowland line. It is one of the suppliers for reintroduction projects in Europe. The Kraansvlak herd was established in 2007 with three wisents and expanded to six in 2008; the Maashorst herd was established in 2016 with 11 wisents; and the Veluwe herd was established in 2016 with a small herd. In 2020, a new herd of 14 bison was established in the Slikken van de Heen. Numbers at the end of 2017 were: Lelystad 24, Kraansvlak 22, Maashorst 15 and the Veluwe 5, for a total of 66 animals.
Poland: By the end of 2019 the number was 2,269, of which 2,048 were free-roaming and 221 were living in captivity, including zoos. A total of 770 belonged to the wild population in the Białowieża Forest and 668 to Bieszczady National Park. The total population has been increasing by around 15% to 18% yearly. Between 1995 and 2017 the number of bison in Poland doubled; from 2012 to 2017 it rose by 30%. Poland has been described as the world's breeding centre of the European bison. Żubr from Poland have also been transported beyond the country's borders to boost the local populations of other countries – among them Bulgaria, Czechia, Denmark, Moldova, Romania, Spain, Switzerland, and others. As the number of animals grows, more bison are spotted in areas where they have not been seen in centuries, especially migrating males in spring. The placement of about 40 free-roaming bison in the Lasy Janowskie in 2020/2021 prompted ecologists to push for the redesign of some bridges of the S19 highway (constructed in 2020–2022) so that large animals can cross it.
Portugal: A herd of 8 bison was introduced in central Portugal for the first time in 2024, in Termas de Monfortinho and Herdade do Vale Freitoso, through the "Rewilding Portugal" programme.
Romania: European bison were reintroduced in 1958, when the first two animals were brought from Poland and kept in a reserve in Hațeg. Similar locations later appeared in Vama Buzăului (Valea Zimbrilor Nature Reserve) and Bucșani, Dâmbovița. The idea of free-ranging bison on Romanian territory was born in 1999, through a programme supported by the World Bank and the European Union. As of 2019, there were almost 160 free-roaming animals, with the population slowly increasing, in the four areas where wild bison can be found: northern Romania (Vânători-Neamț Natural Park) and south-west Romania (the Țarcu Mountains and the Poiana Ruscă Mountains), as part of the Life-Bison project initiated by WWF Romania and Rewilding Europe with co-funding from the EU through its LIFE Programme, and also in the southern Carpathians, in the Făgăraș Mountains, as part of the Foundation Conservation Carpathia project carried out within the LIFE Carpathia project. Since 2019, Foundation Conservation Carpathia has been reintroducing the European bison in the Făgăraș Mountains, more than 200 years after their disappearance from the central forests of Romania; it aims to reintroduce 75 European bison there. In June 2024, 14 additional bison were brought to the southern Carpathian mountains from Germany and Sweden.
Russia: As of 2020, the population of wisents in Russia has greatly recovered and stands at 1,588 individuals.
Serbia: In March 2022, 5 animals (one bull and four cows) were reintroduced where bison had gone extinct c. 1800. The animals were transported from the Białowieża Forest and released on the Fruška Gora mountain.
Slovakia: A bison reserve was established in Topoľčianky in 1958.
The reserve has a maximum capacity of 13 animals but has bred around 180 animals for various zoos. As of 2020, there was also a wild breeding herd of 48 animals in Poloniny National Park, with an increasing population.
Spain: Two herds in northern Spain were established in 2010. As of 2018, the total population neared a hundred animals, half of them in Castile and León, with others in Asturias, Valencia, Extremadura and the Pyrenees.
: There are approximately 139 animals.
Switzerland: More than 50 animals. Coming from Poland, one male and four females were introduced in November 2019 into the natural reserve and forest of Suchy, Vaud Canton, western Switzerland. On 15 June 2020, the first calf of that population was born. Besides the Suchy breeding station, several zoos in Switzerland also keep bison. From September 2022, at least five animals were to be kept in semi-freedom in Welschenrohr, with hiking paths cutting through the enclosure.
Ukraine: A population of around 400 animals; bison were recently introduced to several national parks and the population is increasing. A state programme of conservation and reproduction was approved in 2022.
United Kingdom: In 2011, 3 animals were introduced into Alladale Wilderness Reserve in Scotland. Plans to move more into the reserve were made, but the project failed due to not being "well thought through" and was terminated in 2013. Eleven years later, 3 female bison were introduced to the West Blean and Thornden Woods in Kent, England, on 18 July 2022. A calf, also female, was unexpectedly born in September 2022, bringing the total number to 4. On 24 December 2022, a bull was introduced after delays brought about by Brexit-related complications, making these 5 bison the first "complete" wild herd in the UK in thousands of years. The birth of a male calf in winter 2023 and two female calves in October 2024 increased the herd's numbers to 8 animals.
Distribution
The largest European bison herds — of both captive and wild populations — are still located in Poland and Belarus, the majority of them in the Białowieża Forest, which holds the most numerous free-living European bison population in the world, with most of the animals living on the Polish side of the border. Poland remains the world's breeding centre for the wisent: in the years 1945 to 2014, 553 specimens were sent from the Białowieża National Park alone to most captive populations of the bison in Europe, as well as to all breeding sanctuaries for the species in Poland. Since 1983, a small reintroduced population has lived in the Altai Mountains. This population suffers from inbreeding depression and needs the introduction of unrelated animals for "blood refreshment". In the long term, authorities hope to establish a population of about 1,000 animals in the area. One of the northernmost current populations of the European bison lives in Vologodskaya Oblast in the Northern Dvina valley, at about 60°N. It survives without supplementary winter feeding. Another Russian population lives in the forests around the Desna River on the border between Russia and Ukraine. The north-easternmost population lives in Pleistocene Park south of Chersky in Siberia, a project to recreate the steppe ecosystem that began to be altered 10,000 years ago. Five wisents were introduced there on 24 April 2011, brought from the Prioksko-Terrasny Nature Reserve near Moscow; the bison originated from a population in Denmark. Winter temperatures often drop below −50 °C.
Four of the five bison subsequently died due to problems acclimatizing to the low winter temperatures. Plans are being made to reintroduce two herds in Germany and in the Netherlands, in the Oostvaardersplassen Nature Reserve in Flevoland as well as the Veluwe. In 2007, a bison pilot project in a fenced area was begun in Zuid-Kennemerland National Park in the Netherlands. Because of their limited genetic pool, European bison are considered highly vulnerable to illnesses such as foot-and-mouth disease. In March 2016, a herd was released in the Maashorst Nature Reserve in North Brabant. Zoos in 30 countries also have quite a few bison involved in captive-breeding programmes.
Cultural significance
Representations of the European bison spanning millennia of human society's existence can be found throughout Eurasia in the form of drawings and rock carvings; one of the oldest and most famous instances of the latter is in the Cave of Altamira in present-day Spain, where cave art featuring the wisent from the Upper Paleolithic was discovered. The bison has also been represented in a wide range of art throughout human history, including sculptures, paintings, photographs and glass art. Sculptures of the wisent constructed in the 19th and 20th centuries continue to stand in a number of European cities; arguably the most notable are the żubr statue in Spała from 1862, designed by Mihály Zichy, and the two bison sculptures in Kiel sculpted by August Gaul in 1910–1913. A number of other monuments to the animal also exist, such as those in Hajnówka and Pszczyna or at the entrance to Kyiv Zoo. Mikołaj Hussowczyk, a poet writing in Latin about the Grand Duchy of Lithuania during the early 16th century, described the bison in a historically significant literary work from 1523. The European bison is considered one of the national animals of Poland and Belarus. Due to this, and because half of the worldwide European bison population can be found spread across these two countries, the wisent still features prominently in the heraldry of these neighbouring states (especially in the overlapping region of eastern Poland and western Belarus). Examples in Poland include the coats of arms of the counties of Hajnówka and Zambrów, the towns Sokółka and Żywiec, and the villages Białowieża and Narewka, as well as the coats of arms of the Pomian and Wieniawa families. Examples in Belarus include the Grodno and Brest voblasts, the town of Svislach, and others. The European bison can also be found on the coats of arms of places in neighbouring countries – Perloja in southern Lithuania, Lypovets and Zubrytsia in west-central Ukraine, and Zubří in eastern Czechia – as well as further outside the region, such as Kortezubi in the Basque Country and Jabel in Germany.
A flavoured vodka called Żubrówka, originating as a recipe of the szlachta of the Kingdom of Poland in the 14th century, has since 1928 been industrially produced as a brand in Poland. In the decades that followed, it became known as the "world's best known Polish vodka" and sparked the creation of a number of copy brands inspired by the original in Belarus, Russia and Germany, as well as other brands in Poland. The original Polish brand is known for placing a decorative blade of bison grass from the Białowieża Forest in each bottle of its product; both the plant's name in Polish and the vodka are named after żubr, the Polish name for the European bison.
The bison also appears commercially as a symbol of a number of other Polish brands, such as the popular beer Żubr, and on the logo of Poland's second-largest bank, Bank Pekao S.A.
Biology and health sciences
Artiodactyla
null
39068
https://en.wikipedia.org/wiki/Digital%20electronics
Digital electronics
Digital electronics is a field of electronics involving the study of digital signals and the engineering of devices that use or produce them. This is in contrast to analog electronics, which works primarily with analog signals. Despite the name, digital electronics designs include important analog design considerations. Digital electronic circuits are usually made from large assemblies of logic gates, often packaged in integrated circuits. Complex devices may have simple electronic representations of Boolean logic functions.
History
The binary number system was refined by Gottfried Wilhelm Leibniz (published in 1705), who also established that by using the binary system, the principles of arithmetic and logic could be joined. Digital logic as we know it was the brainchild of George Boole in the mid-19th century. In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits. Eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's modification of the Fleming valve in 1907 could be used as an AND gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit, shared the 1954 Nobel Prize in Physics for creating the first modern electronic AND gate in 1924.
Mechanical analog computers started appearing in the first century and were later used in the medieval era for astronomical calculations. In World War II, mechanical analog computers were used for specialized military applications such as calculating torpedo aiming. During this time the first electronic digital computers were developed, with the term digital being proposed by George Stibitz in 1942. Originally they were the size of a large room, consuming as much power as several hundred modern PCs. In his 1937 master's thesis – considered by some to be the most important master's thesis ever written, and winner of the 1939 Alfred Noble Prize – Claude Shannon demonstrated that electrical applications of Boolean algebra could construct any logical numerical relationship, ultimately laying the foundations of digital computing and digital circuits. The Z3, an electromechanical computer designed by Konrad Zuse and finished in 1941, was the world's first working programmable, fully automatic digital computer; it was built from relays rather than from the vacuum tubes that John Ambrose Fleming had invented in 1904.
As digital calculation replaced analog, purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents. John Bardeen and Walter Brattain invented the point-contact transistor at Bell Labs in 1947, followed by William Shockley's invention of the bipolar junction transistor at Bell Labs in 1948. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of vacuum tubes. Their "transistorised computer", the first in the world, was operational by 1953, and a second version was completed there in April 1955. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers.
Compared to vacuum tubes, transistors were smaller, more reliable, had indefinite lifespans, and required less power – thereby giving off less heat and allowing much denser concentrations of circuits, up to tens of thousands in a relatively compact space.
In 1955, Carl Frosch and Lincoln Derick discovered silicon dioxide surface passivation effects. In 1957, using masking and predeposition, Frosch and Derick were able to manufacture silicon dioxide field-effect transistors: the first planar transistors, in which drain and source were adjacent at the same surface. At Bell Labs, the importance of Frosch and Derick's technique and transistors was immediately realized; results of their work circulated around Bell Labs in the form of BTL memos before being published in 1957. At Shockley Semiconductor, Shockley had circulated the preprint of their article in December 1956 to all his senior staff, including Jean Hoerni, who would later invent the planar process in 1959 while at Fairchild Semiconductor. At Bell Labs, J.R. Ligenza and W.G. Spitzer studied the mechanism of thermally grown oxides, fabricated a high-quality Si/SiO2 stack and published their results in 1960. Following this research at Bell Labs, Mohamed Atalla and Dawon Kahng proposed a silicon MOS transistor in 1959 and successfully demonstrated a working MOS device with their Bell Labs team in 1960. The team included E. E. LaBate and E. I. Povilonis, who fabricated the device; M. O. Thurston, L. A. D'Asaro, and J. R. Ligenza, who developed the diffusion processes; and H. K. Gummel and R. Lindner, who characterized the device. While working at Texas Instruments in July 1958, Jack Kilby recorded his initial ideas concerning the integrated circuit (IC), then successfully demonstrated the first working integrated circuit on 12 September 1958. Kilby's chip was made of germanium. The following year, Robert Noyce at Fairchild Semiconductor invented the silicon integrated circuit; the basis for Noyce's silicon IC was Hoerni's planar process.
The MOSFET's advantages include high scalability, affordability, low power consumption, and high transistor density. Its rapid on–off electronic switching speed also makes it ideal for generating pulse trains, the basis for electronic digital signals, in contrast to BJTs, which generate analog signals resembling sine waves more slowly. Along with MOS large-scale integration (LSI), these factors make the MOSFET an important switching device for digital circuits. The MOSFET revolutionized the electronics industry, and is the most common semiconductor device.
In the early days of integrated circuits, each chip was limited to only a few transistors, and the low degree of integration meant the design process was relatively simple. Manufacturing yields were also quite low by today's standards. The wide adoption of the MOSFET by the early 1970s led to the first large-scale integration (LSI) chips with more than 10,000 transistors on a single chip. Following the wide adoption of CMOS, a type of MOSFET logic, by the 1980s, millions and then billions of MOSFETs could be placed on one chip as the technology progressed, and good designs required thorough planning, giving rise to new design methods. The transistor count of devices and total production rose to unprecedented heights; the total number of transistors produced up to 2018 has been estimated at 13 sextillion (1.3 × 10²²).
The wireless revolution (the introduction and proliferation of wireless networks) began in the 1990s and was enabled by the wide adoption of MOSFET-based RF power amplifiers (power MOSFET and LDMOS) and RF circuits (RF CMOS). Wireless networks allowed for public digital transmission without the need for cables, leading to digital television, satellite and digital radio, GPS, wireless Internet and mobile phones through the 1990s–2000s.
Properties
An advantage of digital circuits when compared to analog circuits is that signals represented digitally can be transmitted without degradation caused by noise. For example, a continuous audio signal transmitted as a sequence of 1s and 0s can be reconstructed without error, provided the noise picked up in transmission is not enough to prevent identification of the 1s and 0s. In a digital system, a more precise representation of a signal can be obtained by using more binary digits to represent it. While this requires more digital circuits to process the signals, each digit is handled by the same kind of hardware, resulting in an easily scalable system. In an analog system, additional resolution requires fundamental improvements in the linearity and noise characteristics of each step of the signal chain.
With computer-controlled digital systems, new functions can be added through software revision, with no hardware changes needed. Often this can be done outside of the factory by updating the product's software. In this way, design errors can be corrected even after the product is in a customer's hands.
Information storage can be easier in digital systems than in analog ones. The noise immunity of digital systems permits data to be stored and retrieved without degradation. In an analog system, noise from aging and wear degrades the information stored. In a digital system, as long as the total noise is below a certain level, the information can be recovered perfectly. Even when more significant noise is present, the use of redundancy permits the recovery of the original data, provided too many errors do not occur.
In some cases, digital circuits use more energy than analog circuits to accomplish the same tasks, thus producing more heat, which increases the complexity of the circuits, for example through the inclusion of heat sinks. In portable or battery-powered systems this can limit the use of digital systems. For example, battery-powered cellular phones often use a low-power analog front-end to amplify and tune the radio signals from the base station. However, a base station has grid power and can use power-hungry but very flexible software radios. Such base stations can easily be reprogrammed to process the signals used in new cellular standards.
Many useful digital systems must translate from continuous analog signals to discrete digital signals, which causes quantization errors. Quantization error can be reduced if the system stores enough digital data to represent the signal to the desired degree of fidelity. The Nyquist–Shannon sampling theorem provides an important guideline as to how much digital data is needed to accurately portray a given analog signal.
If a single piece of digital data is lost or misinterpreted, in some systems only a small error may result, while in other systems the meaning of large blocks of related data can completely change. For example, a single-bit error in audio data stored directly as linear pulse-code modulation causes, at worst, a single audible click.
But when using audio compression to save storage space and transmission time, a single bit error may cause a much larger disruption. Because of the cliff effect, it can be difficult for users to tell whether a particular system is right on the edge of failure or can tolerate much more noise before failing. Digital fragility can be reduced by designing a digital system for robustness. For example, a parity bit or other error-management method can be inserted into the signal path. These schemes help the system detect errors and then either correct them or request retransmission of the data.
Construction
A digital circuit is typically constructed from small electronic circuits called logic gates that can be used to create combinational logic. Each logic gate is designed to perform a function of Boolean logic when acting on logic signals. A logic gate is generally created from one or more electrically controlled switches, usually transistors, though thermionic valves have seen historic use. The output of a logic gate can, in turn, control or feed into more logic gates.
Another form of digital circuit is constructed from lookup tables (many sold as "programmable logic devices", though other kinds of PLDs exist). Lookup tables can perform the same functions as machines based on logic gates, but can be easily reprogrammed without changing the wiring. This means that a designer can often repair design errors without changing the arrangement of wires. Therefore, in small-volume products, programmable logic devices are often the preferred solution. They are usually designed by engineers using electronic design automation software.
Integrated circuits consist of multiple transistors on one silicon chip and are the least expensive way to make a large number of interconnected logic gates. Integrated circuits are usually mounted on a printed circuit board, which holds the electrical components and connects them together with copper traces.
Design
Engineers use many methods to minimize logic redundancy in order to reduce circuit complexity. Reduced complexity reduces component count and potential errors and therefore typically reduces cost. Logic redundancy can be removed by several well-known techniques, such as binary decision diagrams, Boolean algebra, Karnaugh maps, the Quine–McCluskey algorithm, and the heuristic computer method. These operations are typically performed within a computer-aided design system.
Embedded systems with microcontrollers and programmable logic controllers are often used to implement digital logic for complex systems that do not require optimal performance. These systems are usually programmed by software engineers or by electricians, using ladder logic.
Representation
A digital circuit's input–output relationship can be represented as a truth table. An equivalent high-level circuit uses logic gates, each represented by a different shape (standardized by IEEE/ANSI 91–1984). A low-level representation uses an equivalent circuit of electronic switches (usually transistors).
Most digital systems divide into combinational and sequential systems. The output of a combinational system depends only on the present inputs. A sequential system, however, has some of its outputs fed back as inputs, so its output may depend on past inputs in addition to present inputs, producing a sequence of operations. Simplified representations of their behavior, called state machines, facilitate design and test. Sequential systems divide into two further subcategories. "Synchronous" sequential systems change state all at once when a clock signal changes state. "Asynchronous" sequential systems propagate changes whenever inputs change. Synchronous sequential systems are made using flip-flops that store input voltages as a bit only when the clock changes.
Synchronous systems
The usual way to implement a synchronous sequential state machine is to divide it into a piece of combinational logic and a set of flip-flops called a state register. The state register represents the state as a binary number. The combinational logic produces the binary representation of the next state. On each clock cycle, the state register captures the output that the combinational logic generated from the previous state and feeds it back as an unchanging input to the combinational part of the state machine. The clock rate is limited by the most time-consuming logic calculation in the combinational logic.
Asynchronous systems
Most digital logic is synchronous because it is easier to create and verify a synchronous design. However, asynchronous logic has the advantage that its speed is not constrained by an arbitrary clock; instead, it runs at the maximum speed of its logic gates. Nevertheless, most systems need to accept external unsynchronized signals into their synchronous logic circuits. This interface is inherently asynchronous and must be analyzed as such. Examples of widely used asynchronous circuits include synchronizer flip-flops, switch debouncers and arbiters.
Asynchronous logic components can be hard to design because all possible states, in all possible timings, must be considered. The usual method is to construct a table of the minimum and maximum time that each such state can exist, and then adjust the circuit to minimize the number of such states. The designer must force the circuit to periodically wait for all of its parts to enter a compatible state (this is called "self-resynchronization"). Without careful design, it is easy to accidentally produce asynchronous logic that is unstable, meaning that real electronics will have unpredictable results because of the cumulative delays caused by small variations in the values of the electronic components.
Register transfer systems
Many digital systems are data flow machines. These are usually designed using synchronous register transfer logic and written with hardware description languages such as VHDL or Verilog. In register transfer logic, binary numbers are stored in groups of flip-flops called registers. A sequential state machine controls when each register accepts new data from its input. The outputs of each register are a bundle of wires called a bus that carries that number to other calculations. A calculation is simply a piece of combinational logic. Each calculation also has an output bus, and these may be connected to the inputs of several registers. Sometimes a register will have a multiplexer on its input so that it can store a number from any one of several buses.
Asynchronous register-transfer systems (such as computers) have a general solution. In the 1980s, some researchers discovered that almost all synchronous register-transfer machines could be converted to asynchronous designs by using first-in-first-out synchronization logic. In this scheme, the digital machine is characterized as a set of data flows. In each step of the flow, a synchronization circuit determines when the outputs of that step are valid and instructs the next stage when to use these outputs.
Computer design
The most general-purpose register-transfer logic machine is a computer, which is basically an automatic binary abacus. The control unit of a computer is usually designed as a microprogram run by a microsequencer. A microprogram is much like a player-piano roll: each table entry of the microprogram commands the state of every bit that controls the computer. The sequencer then counts, and the count addresses the memory or combinational logic machine that contains the microprogram. The bits from the microprogram control the arithmetic logic unit, memory and other parts of the computer, including the microsequencer itself. In this way, the complex task of designing the controls of a computer is reduced to the simpler task of programming a collection of much simpler logic machines.
Almost all computers are synchronous, but asynchronous computers have also been built. One example is the ASPIDA DLX core; another was offered by ARM Holdings. They do not, however, have any speed advantage, because modern computer designs already run at the speed of their slowest component, usually memory. They do use somewhat less power, because a clock distribution network is not needed. An unexpected advantage is that asynchronous computers do not produce spectrally pure radio noise, so they are used in some radio-sensitive mobile-phone base-station controllers. They may also be more secure in cryptographic applications because their electrical and radio emissions can be more difficult to decode.
Computer architecture
Computer architecture is a specialized engineering activity that tries to arrange the registers, calculation logic, buses and other parts of the computer in the best way possible for a specific purpose. Computer architects have put a lot of work into reducing the cost and increasing the speed of computers, in addition to boosting their immunity to programming errors. An increasingly common goal of computer architects is to reduce the power used in battery-powered computer systems, such as smartphones.
Design issues in digital circuits
Digital circuits are made from analog components, and the design must assure that the analog nature of the components does not dominate the desired digital behavior. Digital systems must manage noise and timing margins, parasitic inductances and capacitances. Bad designs have intermittent problems such as glitches (vanishingly fast pulses that may trigger some logic but not others) and runt pulses (pulses that do not reach valid threshold voltages). Additionally, where clocked digital systems interface to analog systems or to systems that are driven from a different clock, the digital system can be subject to metastability, where a change to the input violates the setup time for a digital input latch. Because digital circuits are made from analog components, digital circuits calculate more slowly than low-precision analog circuits that use a similar amount of space and power. However, the digital circuit will calculate more repeatably, because of its high noise immunity.
Automated design tools
Much of the effort of designing large logic machines has been automated through the application of electronic design automation (EDA). Simple truth-table-style descriptions of logic are often optimized with EDA tools that automatically produce reduced systems of logic gates, or smaller lookup tables, that still produce the desired outputs. The most common example of this kind of software is the Espresso heuristic logic minimizer.
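As a toy illustration of what such a minimizer must guarantee, the sketch below checks by exhaustive truth-table enumeration that a hand-reduced Boolean expression matches an original, redundant description on every input. The two example functions are hypothetical; dropping the consensus term b·c is a classic Boolean-algebra identity.

```python
from itertools import product

# Original (redundant) description and a hand-reduced candidate.
# By the consensus theorem, (b and c) is implied by the other two
# terms and can be dropped without changing the truth table.
def original(a: int, b: int, c: int) -> int:
    return (a and b) or (not a and c) or (b and c)

def reduced(a: int, b: int, c: int) -> int:
    return (a and b) or (not a and c)

# A minimized circuit is acceptable only if it matches the original
# truth table on every possible input combination.
equivalent = all(
    bool(original(a, b, c)) == bool(reduced(a, b, c))
    for a, b, c in product((0, 1), repeat=3)
)
print("reduced form is equivalent:", equivalent)  # expect: True
```

For eight input rows this brute-force check is trivial; for large machines, tools rely on the heuristic and algorithmic methods described next.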
Optimizing large logic systems may be done using the Quine–McCluskey algorithm or binary decision diagrams. There are promising experiments with genetic algorithms and annealing optimizations.
To automate costly engineering processes, some EDA tools can take state tables that describe state machines and automatically produce a truth table or a function table for the combinational logic of a state machine. The state table is a piece of text that lists each state, together with the conditions controlling the transitions between them and their associated output signals.
Often, real logic systems are designed as a series of sub-projects, which are combined using a tool flow. The tool flow is usually controlled with the help of a scripting language, a simplified computer language that can invoke the software design tools in the right order. Tool flows for large logic systems such as microprocessors can be thousands of commands long, and combine the work of hundreds of engineers. Writing and debugging tool flows is an established engineering specialty in companies that produce digital designs. The tool flow usually terminates in a detailed computer file or set of files that describe how to physically construct the logic. Often it consists of instructions on how to draw the transistors and wires on an integrated circuit or a printed circuit board.
Parts of tool flows are debugged by verifying the outputs of simulated logic against expected inputs. The test tools take computer files with sets of inputs and outputs and highlight discrepancies between the simulated behavior and the expected behavior. Once the input data is believed to be correct, the design itself must still be verified for correctness. Some tool flows verify designs by first producing a design, then scanning the design to produce compatible input data for the tool flow. If the scanned data matches the input data, then the tool flow has probably not introduced errors.
The functional verification data are usually called test vectors. The functional test vectors may be preserved and used in the factory to test whether newly constructed logic works correctly. However, functional test patterns do not discover all fabrication faults. Production tests are often designed by automatic test pattern generation software tools. These generate test vectors by examining the structure of the logic and systematically generating tests targeting particular potential faults. This way the fault coverage can closely approach 100%, provided the design is properly made testable (see next section).
Once a design exists, and is verified and testable, it often needs to be processed to be manufacturable as well. Modern integrated circuits have features smaller than the wavelength of the light used to expose the photoresist. Software designed for manufacturability adds interference patterns to the exposure masks to eliminate open circuits and enhance the masks' contrast.
Design for testability
There are several reasons for testing a logic circuit. When the circuit is first developed, it is necessary to verify that the design meets the required functional and timing specifications. When multiple copies of a correctly designed circuit are being manufactured, it is essential to test each copy to ensure that the manufacturing process has not introduced any flaws. A large logic machine (say, with more than a hundred logical variables) can have an astronomical number of possible states.
Obviously, factory testing every state of such a machine is unfeasible, for even if testing each state took only a microsecond, there are more possible states than there are microseconds since the universe began!
Large logic machines are almost always designed as assemblies of smaller logic machines. To save time, the smaller sub-machines are isolated by permanently installed design-for-test circuitry and are tested independently. One common testing scheme provides a test mode that forces some part of the logic machine to enter a test cycle. The test cycle usually exercises large independent parts of the machine.
Boundary scan is a common test scheme that uses serial communication with external test equipment through one or more shift registers known as scan chains. Serial scans have only one or two wires to carry the data, and minimize the physical size and expense of the infrequently used test logic. After all the test data bits are in place, the design is reconfigured to be in normal mode and one or more clock pulses are applied, to test for faults (e.g. stuck-at low or stuck-at high) and capture the test result into flip-flops or latches in the scan shift register(s). Finally, the result of the test is shifted out to the block boundary and compared against the predicted good-machine result. In a board-test environment, serial-to-parallel testing has been formalized as the JTAG standard.
Trade-offs
Cost
Since a digital system may use many logic gates, the overall cost of building a computer correlates strongly with the cost of a logic gate. In the 1930s, the earliest digital logic systems were constructed from telephone relays because these were inexpensive and relatively reliable. The earliest integrated circuits were constructed to save weight and permit the Apollo Guidance Computer to control an inertial guidance system for a spacecraft. The first integrated circuit logic gates cost nearly US$50. Mass-produced gates on integrated circuits became the least-expensive method to construct digital logic.
With the rise of integrated circuits, reducing the absolute number of chips used represented another way to save costs. The goal of a designer is not just to make the simplest circuit, but to keep the component count down. Sometimes this results in designs that are more complicated with respect to the underlying digital logic but that nevertheless reduce the number of components, board size, and even power consumption.
Reliability
Another major motive for reducing component count on printed circuit boards is to reduce the manufacturing defect rate due to failed soldered connections, and so to increase reliability. Defect and failure rates tend to increase along with the total number of component pins. The failure of a single logic gate may cause a digital machine to fail. Where additional reliability is required, redundant logic can be provided. Redundancy adds cost and power consumption over a non-redundant system.
The reliability of a logic gate can be described by its mean time between failures (MTBF). Digital machines first became useful when the MTBF for a switch rose above a few hundred hours. Even so, many of these machines had complex, well-rehearsed repair procedures, and would be nonfunctional for hours because a tube burned out or a moth got stuck in a relay. Modern transistorized integrated circuit logic gates have MTBFs greater than 82 billion hours. This level of reliability is required because integrated circuits have so many logic gates.
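As a simple illustration of the redundant-logic idea mentioned above, the following sketch models triple modular redundancy (TMR), a standard reliability scheme (though not one the text specifically prescribes) in which three copies of the same gate feed a majority voter so that any single faulty copy is masked. The gate choice and the stuck-at fault model are illustrative assumptions.

```python
# Triple modular redundancy: run three copies of the same logic and
# vote; any single faulty copy is outvoted by the two good ones.

def gate(a: int, b: int) -> int:
    """The intended logic: a NAND gate."""
    return 0 if (a and b) else 1

def faulty_gate(a: int, b: int) -> int:
    """A replica with a stuck-at-0 output fault."""
    return 0

def majority(x: int, y: int, z: int) -> int:
    """Majority voter: output agrees with at least two of three inputs."""
    return 1 if (x + y + z) >= 2 else 0

copies = [gate, gate, faulty_gate]  # one of the three replicas has failed
for a in (0, 1):
    for b in (0, 1):
        votes = [g(a, b) for g in copies]
        out = majority(*votes)
        assert out == gate(a, b)  # the single fault is masked on every input
        print(f"NAND({a},{b}) votes={votes} -> voted output={out}")
```

The cost is exactly what the text describes: three gates plus a voter, and correspondingly more power, in exchange for tolerating any single gate failure.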
Fan-out
Fan-out describes how many logic inputs can be controlled by a single logic output without exceeding the electrical current ratings of the gate outputs. The minimum practical fan-out is about five. Modern electronic logic gates using CMOS transistors for switches have higher fan-outs.
Speed
The switching speed describes how long it takes a logic output to change from true to false or vice versa. Faster logic can accomplish more operations in less time. Modern electronic digital logic routinely switches at , and some laboratory systems switch at more than .
Logic families
Digital design started with relay logic, which is slow. Occasionally a mechanical failure would occur. Fan-outs were typically about 10, limited by the resistance of the coils and arcing on the contacts from high voltages.
Later, vacuum tubes were used. These were very fast, but generated heat and were unreliable because the filaments would burn out. Fan-outs were typically 5 to 7, limited by the heating from the tubes' current. In the 1950s, special computer tubes were developed with filaments that omitted volatile elements like silicon. These ran for hundreds of thousands of hours.
The first semiconductor logic family was resistor–transistor logic. This was a thousand times more reliable than tubes, ran cooler, and used less power, but had a very low fan-out of 3. Diode–transistor logic improved the fan-out up to about 7, and reduced the power. Some DTL designs used two power supplies with alternating layers of NPN and PNP transistors to increase the fan-out.
Transistor–transistor logic (TTL) was a great improvement over these. In early devices, fan-out improved to 10, and later variations reliably achieved 20. TTL was also fast, with some variations achieving switching times as low as 20 ns. TTL is still used in some designs. Emitter-coupled logic is very fast but uses a lot of power. It was extensively used for high-performance computers, such as the Illiac IV, made up of many medium-scale components.
By far the most common digital integrated circuits built today use CMOS logic, which is fast and offers high circuit density and low power per gate. This is used even in large, fast computers, such as the IBM System z.
Recent developments
In 2009, researchers discovered that memristors can implement Boolean state storage and provide a complete logic family with very small amounts of space and power, using familiar CMOS semiconductor processes. The discovery of superconductivity has enabled the development of rapid single flux quantum (RSFQ) circuit technology, which uses Josephson junctions instead of transistors. Most recently, attempts are being made to construct purely optical computing systems capable of processing digital information using nonlinear optical elements.
Technology
Electronics: General
null
39073
https://en.wikipedia.org/wiki/Sifaka
Sifaka
A sifaka is a lemur of the genus Propithecus from the family Indriidae within the order Primates. The common name is an onomatopoeia of their characteristic "shi-fak" alarm call. Like all lemurs, they are found only on the island of Madagascar. All species of sifakas are threatened, ranging from endangered to critically endangered.
Anatomy and physiology
Sifakas are medium-sized indriids with a head and body length of and a weight of . Their tail is just as long as their body, which differentiates them from the Indri. Their fur is long and silky, with coloration varying by species from yellowish-white to blackish-brown. Their round, hairless face is always black. As with all lemurs, the sifaka has special adaptations for grooming, including a toilet-claw on its second toe and a toothcomb.
Sifakas move by vertical clinging and leaping, meaning they maintain an upright position while leaping from tree trunk to tree trunk and moving along branches. They are skillful climbers and powerful jumpers, able to make leaps of up to from one tree to the next. On the ground, they move like all indriids, with bipedal, sideways hopping movements of the hind legs, holding their forelimbs up for balance. Sifakas are diurnal and arboreal.
Sifakas are herbivores, eating leaves, flowers, and fruits. When not searching for food, they spend a good part of the day sunbathing, stretched out on branches. Sifakas live in larger groups than the other indriids (up to 13 animals). They have a firm territory, which they mark with scent glands. The edges of different sifaka territories can overlap. Though they defend their territory from invasion by others of their species, they may peacefully co-exist with other lemur species such as the red-bellied lemur and the common brown lemur. Successful invasions are known to result in the death of male members, group takeover, and infanticide. Predators of the sifaka include the fossa, a puma-like mammal native to Madagascar, and aerial hunters such as hawks. The sifaka usually avoids these attacks with its agile acrobatics through the trees, high above the ground. However, sifakas have been known to fight back by biting and scratching, and have even been witnessed fighting off a Madagascar ground boa.
A four- to five-month gestation period ends with the birth of a single offspring in July. The young holds fast to the mother's belly when small, but is later carried on her back. Young are weaned after about six months and reach full maturity at the age of two to three years. The life expectancy of sifakas is up to 20 years.
Threats
Conservative estimates show that the use of fire for slash-and-burn cultivation, cattle raising, logging, and mining has contributed to the loss of 52% of the forested land since the 1950s, impacting the survival of sifakas. For instance, Perrier's sifaka relies solely on vast forest cover, yet little to no institutions are addressing these threats or the species' conservation status.
Classification
Family Indriidae
Genus Indri
Genus Avahi
Genus Propithecus
P. diadema group
Diademed sifaka, P. diadema
Milne-Edwards's sifaka, P. edwardsi
Silky sifaka, P. candidus
Perrier's sifaka, P. perrieri
P. verreauxi group
Coquerel's sifaka, P. coquereli
Verreaux's sifaka, P. verreauxi
Von der Decken's sifaka, P. deckenii
Crowned sifaka, P. coronatus
Golden-crowned sifaka, P. tattersalli
Biology and health sciences
Strepsirrhini
Animals
39136
https://en.wikipedia.org/wiki/Accelerating%20expansion%20of%20the%20universe
Accelerating expansion of the universe
Observations show that the expansion of the universe is accelerating, such that the velocity at which a distant galaxy recedes from the observer is continuously increasing with time. The accelerated expansion of the universe was discovered in 1998 by two independent projects, the Supernova Cosmology Project and the High-Z Supernova Search Team, which used distant Type Ia supernovae to measure the acceleration. The idea was that, since Type Ia supernovae have almost the same intrinsic brightness (a standard candle), and since objects that are further away appear dimmer, the observed brightness of these supernovae can be used to measure the distance to them. The distance can then be compared to the supernovae's cosmological redshift, which measures how much the universe has expanded since the supernova occurred; the Hubble law established that the further away an object is, the faster it is receding. The unexpected result was that objects in the universe are moving away from one another at an accelerating rate. Cosmologists at the time expected that the recession velocity would always be decelerating, due to the gravitational attraction of the matter in the universe. Three members of these two groups have subsequently been awarded Nobel Prizes for their discovery. Confirmatory evidence has been found in baryon acoustic oscillations, and in analyses of the clustering of galaxies.
The accelerated expansion of the universe is thought to have begun when the universe entered its dark-energy-dominated era, roughly 5 billion years ago. Within the framework of general relativity, an accelerated expansion can be accounted for by a positive value of the cosmological constant $\Lambda$, equivalent to the presence of a positive vacuum energy, dubbed "dark energy". While there are alternative possible explanations, the description assuming dark energy (positive $\Lambda$) is used in the standard model of cosmology, which also includes cold dark matter (CDM) and is known as the Lambda-CDM model.
Background
In the decades since the detection of the cosmic microwave background (CMB) in 1965, the Big Bang model has become the most accepted model explaining the evolution of our universe. The Friedmann equation defines how the energy in the universe drives its expansion:
$$H^2 = \left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{a^2},$$
where $k$ represents the curvature of the universe, $a$ is the scale factor, $\rho$ is the total energy density of the universe, and $H$ is the Hubble parameter. The critical density is defined as
$$\rho_c = \frac{3H^2}{8\pi G}$$
and the density parameter as
$$\Omega \equiv \frac{\rho}{\rho_c}.$$
The Hubble parameter can then be rewritten as
$$\frac{H^2}{H_0^2} = \Omega_k a^{-2} + \Omega_m a^{-3} + \Omega_r a^{-4} + \Omega_\Lambda,$$
where the four currently hypothesized contributors to the energy density of the universe are curvature, matter, radiation and dark energy. Each of the components decreases with the expansion of the universe (increasing scale factor), except perhaps the dark energy term. It is the values of these cosmological parameters which physicists use to determine the acceleration of the universe. The acceleration equation describes the evolution of the scale factor with time:
$$\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right),$$
where the pressure $p$ is defined by the cosmological model chosen (see explanatory models).
Physicists at one time were so assured of the deceleration of the universe's expansion that they introduced a so-called deceleration parameter $q = -\ddot{a}a/\dot{a}^2$. Recent observations indicate this deceleration parameter is negative.
Relation to inflation
According to the theory of cosmic inflation, the very early universe underwent a period of very rapid, quasi-exponential expansion.
While the time-scale for this period of expansion was far shorter than that of the current expansion, it was a period of accelerated expansion with some similarities to the current epoch.
Technical definition
The definition of "accelerating expansion" is that the second time derivative of the cosmic scale factor, $\ddot{a}$, is positive, which is equivalent to the deceleration parameter, $q$, being negative. However, note that this does not imply that the Hubble parameter is increasing with time. Since the Hubble parameter is defined as $H(t) = \dot{a}(t)/a(t)$, it follows from the definitions that its time derivative is given by
$$\dot{H} = -H^2(1 + q),$$
so the Hubble parameter is decreasing with time unless $q < -1$. Observations prefer $q \approx -0.55$, which implies that $\ddot{a}$ is positive but $\dot{H}$ is negative. Essentially, this implies that the cosmic recession velocity of any one particular galaxy is increasing with time, but its velocity/distance ratio is still decreasing; thus different galaxies expanding across a sphere of fixed radius cross the sphere more slowly at later times. It is seen from above that the case of "zero acceleration/deceleration" corresponds to $a(t)$ being a linear function of $t$, $q = 0$, $w = -1/3$, and $H(t) = 1/t$.
Evidence for acceleration
The rate of expansion of the universe can be analyzed using the magnitude–redshift relationship of astronomical objects using standard candles, or their distance–redshift relationship using standard rulers. The growth of large-scale structure is also a factor: the observed values of the cosmological parameters are best described by models which include an accelerating expansion.
Supernova observation
In 1998, the first evidence for acceleration came from the observation of Type Ia supernovae, which are exploding white dwarf stars that have exceeded their stability limit. Because they all have similar masses, their intrinsic luminosity can be standardized. Repeated imaging of selected areas of the sky is used to discover the supernovae, then follow-up observations give their peak brightness, which is converted into a quantity known as luminosity distance (see distance measures in cosmology for details). Spectral lines of their light can be used to determine their redshift.
For supernovae at redshift less than around 0.1, or light travel time less than 10 percent of the age of the universe, this gives a nearly linear distance–redshift relation due to Hubble's law. At larger distances, since the expansion rate of the universe has changed over time, the distance–redshift relation deviates from linearity, and this deviation depends on how the expansion rate has changed over time. The full calculation requires computer integration of the Friedmann equation (a numerical sketch is given at the end of this article), but a simple derivation can be given as follows: the redshift $z$ directly gives the cosmic scale factor at the time the supernova exploded, via
$$\frac{a(t)}{a_0} = \frac{1}{1+z}.$$
So a supernova with a measured redshift $z = 0.5$ implies the universe was $\tfrac{1}{1+0.5} = \tfrac{2}{3}$ of its present size when the supernova exploded. In the case of accelerated expansion, $\ddot{a}$ is positive; therefore, $\dot{a}$ was smaller in the past than it is today. Thus, an accelerating universe took a longer time to expand from 2/3 to 1 times its present size, compared to a non-accelerating universe with constant $\dot{a}$ and the same present-day value of the Hubble constant. This results in a larger light-travel time, larger distance and fainter supernovae, which corresponds to the actual observations. Adam Riess et al. found that "the distances of the high-redshift SNe Ia were, on average, 10% to 15% further than expected in a low mass density universe without a cosmological constant".
This means that the measured high-redshift distances were too large, compared to nearby ones, for a decelerating universe. Several researchers have questioned the majority opinion on the acceleration, or the underlying assumption of the "cosmological principle" (that the universe is homogeneous and isotropic). For example, a 2019 paper analyzed the Joint Light-curve Analysis catalogue of Type Ia supernovae, containing ten times as many supernovae as were used in the 1998 analyses, and concluded that there was little evidence for a "monopole", that is, for an isotropic acceleration in all directions.
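The "computer integration of the Friedmann equation" referred to above can be sketched in a few lines. The following Python example computes luminosity distances by numerically integrating dz/E(z) for a flat accelerating model (Ωm = 0.3, ΩΛ = 0.7) and a matter-only decelerating model (Ωm = 1); the parameter values and the choice of H0 are illustrative, and radiation is neglected at these redshifts.

```python
import math

C = 299792.458  # speed of light, km/s
H0 = 70.0       # Hubble constant, km/s/Mpc (illustrative value)

def E(z: float, omega_m: float, omega_lambda: float) -> float:
    """Dimensionless Hubble rate H(z)/H0 for a flat universe,
    from the Friedmann equation (radiation neglected)."""
    return math.sqrt(omega_m * (1 + z) ** 3 + omega_lambda)

def luminosity_distance(z: float, omega_m: float, omega_lambda: float,
                        steps: int = 10000) -> float:
    """d_L = (1+z) * (c/H0) * integral_0^z dz'/E(z'), in Mpc,
    by simple trapezoidal integration."""
    dz = z / steps
    integral = 0.0
    for i in range(steps):
        z1, z2 = i * dz, (i + 1) * dz
        integral += 0.5 * dz * (1 / E(z1, omega_m, omega_lambda)
                                + 1 / E(z2, omega_m, omega_lambda))
    return (1 + z) * (C / H0) * integral

z = 0.5
d_acc = luminosity_distance(z, 0.3, 0.7)  # accelerating (Lambda-CDM-like)
d_dec = luminosity_distance(z, 1.0, 0.0)  # decelerating (matter only)
print(f"d_L at z={z}: accelerating {d_acc:.0f} Mpc, "
      f"decelerating {d_dec:.0f} Mpc "
      f"({100 * (d_acc / d_dec - 1):.0f}% farther)")
```

With these parameters the accelerating model yields distances roughly 20% larger at z = 0.5 than the matter-only model — the same qualitative effect, though not the same comparison baseline, as the 10–15% excess Riess et al. reported relative to a low-density universe without a cosmological constant.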
Physical sciences
Physical cosmology
Astronomy
39137
https://en.wikipedia.org/wiki/Quintessence%20%28physics%29
Quintessence (physics)
In physics, quintessence is a hypothetical form of dark energy, more precisely a scalar field minimally coupled to gravity, postulated as an explanation of the observation of an accelerating rate of expansion of the universe. The first example of this scenario was proposed by Ratra and Peebles (1988) and Wetterich (1988). The concept was expanded to more general types of time-varying dark energy, and the term "quintessence" was first introduced in a 1998 paper by Robert R. Caldwell, Rahul Dave and Paul Steinhardt. It has been proposed by some physicists to be a fifth fundamental force. Quintessence differs from the cosmological constant explanation of dark energy in that it is dynamic; that is, it changes over time, unlike the cosmological constant which, by definition, does not change. Quintessence can be either attractive or repulsive depending on the ratio of its kinetic and potential energy. Those working with this postulate believe that quintessence became repulsive about ten billion years ago, about 3.5 billion years after the Big Bang. A group of researchers argued in 2021 that observations of the Hubble tension may imply that only quintessence models with a nonzero coupling constant are viable.
Terminology
The name comes from quinta essentia (fifth element). So called in Latin starting from the Middle Ages, this was the (first) element added by Aristotle to the other four ancient classical elements because he thought it was the essence of the celestial world. Aristotle posited it to be a pure, fine, and primigenial element. Later scholars identified this element with aether. Similarly, modern quintessence would be the fifth known "dynamical, time-dependent, and spatially inhomogeneous" contribution to the overall mass–energy content of the universe. Of course, the other four components are not the ancient Greek classical elements, but rather "baryons, neutrinos, dark matter, [and] radiation." Although neutrinos are sometimes considered radiation, the term "radiation" in this context is used to refer only to massless photons. Spatial curvature of the cosmos (which has not been detected) is excluded because it is non-dynamical and homogeneous; the cosmological constant would not be considered a fifth component in this sense, because it is non-dynamical, homogeneous, and time-independent.
Scalar field
Quintessence (Q) is a scalar field with an equation of state in which $w_q$, the ratio of its pressure $p_q$ and density $\rho_q$, is given by its potential energy $V(Q)$ and a kinetic term:
$$w_q = \frac{p_q}{\rho_q} = \frac{\tfrac{1}{2}\dot{Q}^2 - V(Q)}{\tfrac{1}{2}\dot{Q}^2 + V(Q)}.$$
Hence, quintessence is dynamic, and generally has a density and a $w_q$ parameter that vary with time; specifically, $w_q$ can vary within the range $[-1, 1]$ (a short numerical illustration is given at the end of this article). By contrast, a cosmological constant is static, with a fixed energy density and $w_q = -1$.
Tracker behavior
Many models of quintessence have a tracker behavior, which according to Ratra and Peebles (1988) and Paul Steinhardt et al. (1999) partly solves the cosmological constant problem. In these models, the quintessence field has a density which closely tracks (but is less than) the radiation density until matter–radiation equality, which triggers quintessence to start having characteristics similar to dark energy, eventually dominating the universe. This naturally sets the low scale of the dark energy.
When comparing the predicted expansion rate of the universe as given by the tracker solutions with cosmological data, a main feature of tracker solutions is that one needs four parameters to properly describe the behavior of their equation of state, whereas it has been shown that at most a two-parameter model can optimally be constrained by mid-term future data (horizon 2015–2020). Specific models Some special cases of quintessence are phantom energy, in which wq < −1, and k-essence (short for kinetic quintessence), which has a non-standard form of kinetic energy. If phantom energy were to exist, it would cause a big rip in the universe due to the growing energy density of dark energy, which would cause the expansion of the universe to increase at a faster-than-exponential rate. Holographic dark energy Holographic dark energy models, compared with cosmological constant models, imply a high degeneracy. It has been suggested that dark energy might originate from quantum fluctuations of spacetime, and is limited by the event horizon of the universe. Studies with quintessence dark energy found that it dominates gravitational collapse in a spacetime simulation, based on the holographic thermalization. These results show that the smaller the state parameter of quintessence is, the harder it is for the plasma to thermalize. Quintom scenario In 2004, when scientists fitted the evolution of dark energy with the cosmological data, they found that the equation of state had possibly crossed the cosmological constant boundary (w = −1) from above to below. A proven no-go theorem indicates that this situation, called the Quintom scenario, requires at least two degrees of freedom for dark energy models involving ideal gases or scalar fields. In 2024, more detailed data from the Dark Energy Spectroscopic Instrument provided evidence suggesting a possible "Quintom-B" scenario, with the equation of state crossing the boundary from below to above.
Physical sciences
Physical cosmology
Astronomy
39159
https://en.wikipedia.org/wiki/Mace%20%28bludgeon%29
Mace (bludgeon)
A mace is a blunt weapon, a type of club or virge that uses a heavy head on the end of a handle to deliver powerful strikes. A mace typically consists of a strong, heavy, wooden or metal shaft, often reinforced with metal, featuring a head made of stone, bone, copper, bronze, iron, or steel. The head of a mace can be shaped with flanges or knobs to increase the pressure of an impact by focusing the force on a small point. Such flanges or knobs would bind on metal instead of sliding around it, allowing the mace to deliver more force to an armored opponent than a traditional smooth-headed mace. This effect increased the potential for the mace to injure an armored opponent through weak spots in the armor, and even damage plate armor by denting it, potentially binding overlapping plates and impeding the wearer's range of motion. Medieval historian and re-enactor Todd Todeschini (AKA Todd Cutler) demonstrated this effect with period-accurate equipment in a series of tests on video. Maces are rarely used today for actual combat, but many government bodies (for instance, the British House of Commons and the U.S. Congress), universities and other institutions have ceremonial maces and continue to display them as symbols of authority. They are often paraded in academic, parliamentary or civic rituals and processions. Etymology The modern English word mace entered Middle English from Old French mace ("large mallet/sledgehammer, mace"), itself from a Vulgar Latin term *mattia or *mattea (cf. Italian mazza, "club, baton, mace"), probably from Latin mateola (uncertain, possibly a kind of club, hammer, or "hoe handle/stick"). It was possibly influenced by Latin mattiobarbulus ("type of javelin") and mattiarius ("soldier armed with said javelin"), from mataris, matara ("Gallic javelin"), from a Gaulish or Celtic word. Development history Prehistory The mace was developed during the Upper Paleolithic from the simple club, by adding sharp spikes of either flint or obsidian. In Europe, an elaborately carved ceremonial flint mace head was one of the artifacts discovered in excavations of the Neolithic mound of Knowth in Ireland, and Bronze Age archaeology cites numerous finds of perforated mace heads. In ancient Ukraine, stone mace heads were first used nearly eight millennia ago. Others known from this period were disc maces with oddly formed stones mounted perpendicularly to their handles. The Narmer Palette shows a king swinging a mace. See the articles on the Narmer Macehead and the Scorpion Macehead for examples of decorated maces inscribed with the names of kings. The problem with early maces was that their stone heads shattered easily and it was difficult to fix the head to the wooden handle reliably. The Egyptians attempted to give them a disk shape in the predynastic period (about 3850–3650 BC) in order to increase their impact and even provide some cutting capabilities, but this seems to have been a short-lived improvement. A rounded pear form of mace head known as a "piriform" replaced the disc mace in the Naqada II period of pre-dynastic Upper Egypt (3600–3250 BC) and was used throughout the Naqada III period (3250–3100 BC). Similar mace heads were also used in Mesopotamia around 2450–1900 BC. On a Sumerian clay tablet written by the scribe Gar.Ama, the title Lord of the Mace is listed in the year 3100 BC. The Assyrians used maces, probably from about the nineteenth century BC, in their campaigns; the maces were usually made of stone or marble and furnished with gold or other metals, but were rarely used in battle unless fighting heavily armoured infantry.
An important later development in mace heads was the use of metal for their composition. With the advent of copper mace heads, they no longer shattered, and a better fit could be made to the wooden club by giving the eye of the mace head the shape of a cone and using a tapered handle. The Shardanas, or warriors from Sardinia, who fought for Ramses II against the Hittites were armed with maces consisting of wooden sticks with bronze heads. Many bronze statuettes of the times show Sardinian warriors carrying swords, bows and original maces. Ancient history Persians used a variety of maces and fielded large numbers of heavily armoured and armed cavalry (see Cataphract). For a heavily armed Persian knight, a mace was as effective as a sword or battle axe. In fact, the Shahnameh has many references to heavily armoured knights facing each other using maces, axes, and swords. The enchanted talking mace Sharur made its first appearance in Sumerian/Akkadian mythology during the epic of Ninurta. The Indian epics Ramayana and Mahabharata describe the extensive use of the gada in ancient Indian warfare as gada-yuddha or 'mace combat'. The ancient Romans did not make wide use of maces, probably because of the influence of armour, and due to the nature of the Roman infantry's fighting style, which involved the pilum (spear) and the gladius (short sword used in a stabbing fashion), though auxiliaries from Syria Palaestina were armed with clubs and maces at the battles of Immae and Emesa in 272 AD. They proved highly effective against the heavily armoured horsemen of Palmyra. Post classical history Western Europe During the Middle Ages metal armour such as mail protected against the blows of edged weapons. Though iron became increasingly common, copper and bronze were also used, especially in iron-deficient areas. One example of a mace capable of penetrating armour is the flanged mace. The flanges allow it to dent or penetrate thick armour. Flanged maces did not become popular until after knobbed maces. Although there are some references to flanged maces (bardoukion) as early as the Byzantine Empire c. 900, it is commonly accepted that the flanged mace did not become popular in Europe until the 12th century, when it was concurrently developed in Russia and Mid-west Asia. Maces, being simple to make, cheap, and straightforward in application, were quite common weapons. It is popularly believed that maces were employed by the clergy in warfare to avoid shedding blood (sine effusione sanguinis). The evidence for this is sparse and appears to derive almost entirely from the depiction of Bishop Odo of Bayeux wielding a club-like mace at the Battle of Hastings in 1066 in the Bayeux Tapestry, the idea being that he did so to avoid either shedding blood or bearing the arms of war. In the 1893 work Arms and Armour in Antiquity and the Middle Ages, Paul Lacombe and Charles Boutell state that the mace was chiefly used for blows struck upon the head of an enemy. Eastern Europe Eastern European maces often had pear-shaped heads. These maces were also used by the Moldavian ruler Stephen the Great in some of his wars (see Bulawa). The mace is also the favourite weapon of Prince Marko, a hero in South Slavic epic poetry. The pernach was a type of flanged mace developed since the 12th century in the region of Kievan Rus', and later widely used throughout the whole of Europe. The name comes from the Slavic word pero (перо) meaning feather, reflecting the form of the pernach, which resembled a fletched arrow.
Pernachs were the first form of the flanged mace to enjoy wide usage. The pernach was well suited to penetrating plate armour and chain mail. In later times it was often used as a symbol of power by military leaders in Eastern Europe. Pre-Columbian America The cultures of pre-Columbian America used clubs and maces extensively. The warriors of the Moche state and the Inca Empire used maces with bone, stone or copper heads and wooden shafts. The quauholōlli was used in Mesoamerica. Asia Maces in Asia were most often steel clubs with a spherical head. In Persia, the "Gorz" (spherical-head mace) served as a primary combat arm across many eras, most often being used by heavy infantry or Cataphracts. In India a form of these clubs was used by wrestlers to exercise the arms and shoulders. They have been known as gada since ancient times. During the Mughal era, the flanged mace of Persia was introduced to South Asia. The term shishpar is Persian and literally translates to "six-wings", referring to the (often) six flanges on the mace. The shishpar mace was introduced by the Delhi Sultanate and continued to be utilized until the 18th century. Modern history Trench raiding clubs used during World War I were modern variations on the medieval mace. They were homemade mêlée weapons used by both the Allies and the Central Powers. Clubs were used during night-time trench raiding expeditions as a quiet and effective way of killing or wounding enemy soldiers. Makeshift maces were also found in the possession of some football hooligans in the 1980s. In the 2020 China–India skirmishes, personnel of the People's Liberation Army Ground Force were seen using makeshift maces (batons wrapped in barbed wire and clubs embedded with nails). Ceremonial use Maces have had a role in ceremonial practices over time, including some still in use today. Parliamentary maces The ceremonial mace is a short, richly ornamented staff often made of silver, the upper part of which is furnished with a knob or other head-piece and decorated with a coat of arms. The ceremonial mace was commonly borne before eminent ecclesiastical corporations, magistrates, and academic bodies as a mark and symbol of jurisdiction. Ceremonial maces are important in many parliaments following the Westminster system. They are carried in by the sergeant-at-arms or some other mace-bearers and displayed on the clerks' table while parliament is in session to show that a parliament is fully constituted. They are removed when the session ends. The mace is also removed from the table when a new speaker is being elected to show that parliament is not ready to conduct business. Ecclesiastical maces Maces may also be carried before clergy members in church processions, although in the case of the Roman Catholic pope and cardinals, they have largely been replaced with processional crosses. Parade maces Maces are also used as a parade item, rather than a tool of war, notably in military bands. Specific movements of the mace from the drum major will signal specific orders to the band they lead. The mace can signal anything from a step-off to a halt, from the commencement of playing to the cut-off. University maces University maces are employed in a manner similar to parliamentary maces. They symbolize the authority and independence of a chartered university and the authority vested in the provost. They are typically carried in at the beginning of a convocation ceremony and are often less than half a meter high.
Heraldic use Like many weapons from feudal times, maces have been used in heraldic blazons as either a charge on a shield or other item, or as external ornamentation. Thus, in France: the city of Cognac (in the Charente département): Argent on a horse sable harnessed or a man proper vested azure with a cloak gules holding a mace, on a chief France modern; the city of Colmar (in Haut-Rhin): per pale gules and vert a mace per bend sinister or. Three maces, probably a canting device (Kolben means mace in German, cf. Columbaria, the Latin name of the city), appear on a 1214 seal. The arms in a 15th-century stained-glass window show the mace per bend on argent. The duke of Retz (a pairie created in 1581 for Albert de Gondy) had Or two maces or clubs per saltire sable, bound gules. The Garde des sceaux ('keeper of the seals', still the formal title of the French Republic's Minister of Justice) places two silver and gilded maces in saltire behind the shield, and the achievement is surmounted by a mortier (magistrate's hat).
Technology
Melee weapons
null
39184
https://en.wikipedia.org/wiki/NTFS
NTFS
NT File System (NTFS) (commonly called New Technology File System) is a proprietary journaling file system developed by Microsoft in the 1990s. It was developed to overcome scalability, security and other limitations with FAT. NTFS adds several features that FAT and HPFS lack, including: access control lists (ACLs); filesystem encryption; transparent compression; sparse files; file system journaling and volume shadow copy, a feature that allows backups of a system while in use. Starting with Windows NT 3.1, it is the default file system of the Windows NT family, superseding the File Allocation Table (FAT) file system. NTFS read/write support is available on Linux and BSD using NTFS3 in Linux and NTFS-3G in BSD. NTFS uses several files hidden from the user to store metadata about other files stored on the drive, which can help improve speed and performance when reading data. History In the mid-1980s, Microsoft and IBM formed a joint project to create the next generation of graphical operating system; the result was OS/2 and HPFS. Because Microsoft disagreed with IBM on many important issues, they eventually separated; OS/2 remained an IBM project and Microsoft worked to develop Windows NT and NTFS. The HPFS file system for OS/2 contained several important new features. When Microsoft created their new operating system, they borrowed many of these concepts for NTFS. The original NTFS developers were Tom Miller, Gary Kimura, Brian Andrew, and David Goebel. Probably as a result of this common ancestry, HPFS and NTFS use the same disk partition identification type code (07). Using the same Partition ID Record Number is highly unusual, since there were dozens of unused code numbers available, and other major file systems have their own codes. For example, FAT has more than nine (one each for FAT12, FAT16, FAT32, etc.). Algorithms identifying the file system in a partition type 07 must perform additional checks to distinguish between HPFS and NTFS. Versions Microsoft has released five versions of NTFS: the version number (e.g. v5.0 in Windows 2000) is based on the operating system version; it should not be confused with the NTFS version number (v3.1 since Windows XP). Although subsequent versions of Windows added new file system-related features, they did not change NTFS itself. For example, Windows Vista implemented NTFS symbolic links, Transactional NTFS, partition shrinking, and self-healing. NTFS symbolic links are a new feature in the file system; all the others are new operating system features that make use of NTFS features already in place. Scalability NTFS is optimized for 4 KB clusters, but supports a maximum cluster size of 2 MB (earlier implementations support up to 64 KB). The maximum NTFS volume size that the specification can support is 2^64 − 1 clusters, but not all implementations achieve this theoretical maximum, as discussed below. The maximum NTFS volume size implemented in Windows XP Professional is 2^32 − 1 clusters, partly due to partition table limitations. For example, using 64 KB clusters, the maximum size of a Windows XP NTFS volume is 256 TB minus 64 KB. Using the default cluster size of 4 KB, the maximum NTFS volume size is 16 TB minus 4 KB. Both of these are vastly higher than the 128 GB limit in Windows XP SP1. The size of a partition in the Master Boot Record (MBR) is limited to 2 TiB with a hard drive with 512-byte physical sectors, although for a 4 KiB physical sector the MBR partition size limit is 16 TiB.
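These implementation limits are simply the addressable cluster count multiplied by the cluster size; a quick check of the figures quoted above (Python):

    # Windows XP implements 2**32 - 1 addressable clusters per NTFS volume.
    clusters = 2 ** 32 - 1
    for cluster_size in (4 * 1024, 64 * 1024):  # default 4 KB and the older 64 KB maximum
        print(cluster_size, clusters * cluster_size)
    # 64 KB clusters give 2**48 - 2**16 bytes, i.e. 256 TB minus 64 KB;
    # 4 KB clusters give 2**44 - 2**12 bytes, i.e. 16 TB minus 4 KB.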
An alternative is to use multiple GUID Partition Table (GPT or "dynamic") volumes to be combined to create a single NTFS volume larger than 2 TiB. Booting from a GPT volume to a Windows environment in a Microsoft-supported way requires a system with Unified Extensible Firmware Interface (UEFI) and 64-bit support. GPT data disks are supported on systems with BIOS. The NTFS maximum theoretical limit on the size of individual files is 16 EB (2^64 bytes) minus 1 KB, which totals 18,446,744,073,709,550,592 bytes. With Windows 10 version 1709 and Windows Server 2019, the maximum implemented file size is 8 PB minus 2 MB, or 9,007,199,252,643,840 bytes. Interoperability Windows While the different NTFS versions are for the most part fully forward- and backward-compatible, there are technical considerations for mounting newer NTFS volumes in older versions of Microsoft Windows. This affects dual-booting, and external portable hard drives. For example, attempting to use an NTFS partition with "Previous Versions" (Volume Shadow Copy) on an operating system that does not support it will result in the contents of those previous versions being lost. A Windows command-line utility called convert.exe can convert supporting file systems to NTFS, including HPFS (only on Windows NT 3.1, 3.5, and 3.51), FAT16 and FAT32 (on Windows 2000 and later). FreeBSD FreeBSD 3.2, released in May 1999, included read-only NTFS support written by Semen Ustimenko. This implementation was ported to NetBSD by Christos Zoulas and Jaromir Dolecek and released with NetBSD 1.5 in December 2000. The FreeBSD implementation of NTFS was also ported to OpenBSD by Julien Bordet and offers native read-only NTFS support by default on i386 and amd64 platforms. Linux Linux kernel versions 2.1.74 and later include a driver written by Martin von Löwis which has the ability to read NTFS partitions; kernel versions 2.5.11 and later contain a new driver written by Anton Altaparmakov (University of Cambridge) and Richard Russon which supports reading files. The ability to write to files was introduced with kernel version 2.6.15 in 2006, which allows users to write to existing files but does not allow the creation of new ones. Paragon's NTFS driver (see below) has been merged into kernel version 5.15, and it supports read/write on normal, compressed and sparse files, as well as journal replaying. NTFS-3G is a free GPL-licensed FUSE implementation of NTFS that was initially developed as a Linux kernel driver by Szabolcs Szakacsits. It was re-written as a FUSE program to work on other systems that FUSE supports, like macOS, FreeBSD, NetBSD, OpenBSD, Solaris, QNX, and Haiku, and allows reading and writing to NTFS partitions. A performance-enhanced commercial version of NTFS-3G, called "Tuxera NTFS for Mac", is also available from the NTFS-3G developers. Captive NTFS, a 'wrapping' driver that uses Windows' own driver, ntfs.sys, exists for Linux. It was built as a Filesystem in Userspace (FUSE) program and released under the GPL, but work on Captive NTFS ceased in 2006. Linux kernel versions 5.15 onwards carry NTFS3, a fully functional NTFS read-write driver which works on NTFS versions up to 3.1 and is maintained primarily by the Paragon Software Group. macOS Mac OS X 10.3 included Ustimenko's read-only implementation of NTFS from FreeBSD. Then in 2006 Apple hired Anton Altaparmakov to write a new NTFS implementation for Mac OS X 10.6.
Native NTFS write support is included in 10.6 and later, but is not activated by default, although workarounds do exist to enable the functionality. However, user reports indicate the functionality is unstable and tends to cause kernel panics. Paragon Software Group sells a read-write driver named NTFS for Mac, which is also included on some models of Seagate hard drives. OS/2 The NetDrive package for OS/2 (and derivatives such as eComStation and ArcaOS) supports a plugin which allows read and write access to NTFS volumes. DOS There is a free-for-personal-use read/write driver for MS-DOS by Avira called "NTFS4DOS". Ahead Software developed a "NTFSREAD" driver (version 1.200) for DR-DOS 7.0x between 2002 and 2004. It was part of their Nero Burning ROM software. Security NTFS uses access control lists and user-level encryption to help secure user data. Access control lists (ACLs) In NTFS, each file or folder is assigned a security descriptor that defines its owner and contains two access control lists (ACLs). The first ACL, called the discretionary access control list (DACL), defines exactly what type of interactions (e.g. reading, writing, executing or deleting) are allowed or forbidden by which user or groups of users. For example, files in the folder may be read and executed by all users but modified only by a user holding administrative privileges. Windows Vista adds mandatory access control info to DACLs. DACLs are the primary focus of User Account Control in Windows Vista and later. The second ACL, called the system access control list (SACL), defines which interactions with the file or folder are to be audited and whether they should be logged when the activity is successful, failed or both. For example, auditing can be enabled on sensitive files of a company, so that its managers get to know when someone tries to delete them or make a copy of them, and whether they succeed. Encryption Encrypting File System (EFS) provides user-transparent encryption of any file or folder on an NTFS volume. EFS works in conjunction with the EFS service, Microsoft's CryptoAPI and the EFS File System Run-Time Library (FSRTL). EFS works by encrypting a file with a bulk symmetric key (also known as the File Encryption Key, or FEK), which is used because encrypting and decrypting large amounts of data with a symmetric cipher takes much less time than doing so with an asymmetric key cipher. The symmetric key that is used to encrypt the file is then encrypted with a public key that is associated with the user who encrypted the file, and this encrypted data is stored in an alternate data stream of the encrypted file. To decrypt the file, the file system uses the private key of the user to decrypt the symmetric key that is stored in the data stream. It then uses the symmetric key to decrypt the file. Because this is done at the file system level, it is transparent to the user. Also, in case a user loses access to their key, support for additional decryption keys has been built into the EFS system, so that a recovery agent can still access the files if needed. NTFS-provided encryption and NTFS-provided compression are mutually exclusive; however, NTFS can be used for one and a third-party tool for the other. The support of EFS is not available in Basic, Home, and MediaCenter versions of Windows, and must be activated after installation of Professional, Ultimate, and Server versions of Windows or by using enterprise deployment tools within Windows domains.
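The hybrid pattern just described (bulk data under a fast symmetric key, and the small key itself wrapped with the user's public key) can be sketched independently of EFS. The following is a minimal illustration using the third-party Python cryptography package rather than Microsoft's CryptoAPI; the key sizes and padding choices are illustrative assumptions, not EFS's actual parameters:

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    fek = Fernet.generate_key()                              # bulk "file encryption key"
    ciphertext = Fernet(fek).encrypt(b"file contents")       # fast symmetric encryption of the data
    wrapped_fek = user_key.public_key().encrypt(fek, oaep)   # slow asymmetric wrap of the small key only

    # Decryption: unwrap the FEK with the private key, then decrypt the bulk data.
    assert Fernet(user_key.decrypt(wrapped_fek, oaep)).decrypt(ciphertext) == b"file contents"

EFS stores the wrapped key alongside the file (in an alternate data stream), which is why a recovery agent's additional wrapped copy of the same FEK is enough to regain access.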
Features Journaling NTFS is a journaling file system and uses the NTFS Log ($LogFile) to record metadata changes to the volume. It is a feature that FAT does not provide and is critical for NTFS to ensure that its complex internal data structures will remain consistent in case of system crashes or data moves performed by the defragmentation API, and to allow easy rollback of uncommitted changes to these critical data structures when the volume is remounted. Notably affected structures are the volume allocation bitmap, modifications to MFT records such as moves of some variable-length attributes stored in MFT records and attribute lists, and indices for directories and security descriptors. The $LogFile format has evolved through several versions: the incompatibility of the versions implemented by Windows 8, Windows 10, and Windows 11 prevents Windows 7 (and earlier versions of Windows) from recognizing version 2.0 of the $LogFile. Backward compatibility is provided by downgrading the $LogFile to version 1.1 when an NTFS volume is cleanly dismounted. It is again upgraded to version 2.0 when mounting on a compatible version of Windows. However, when hibernating to disk in the logoff state (a.k.a. Hybrid Boot or Fast Boot, which is enabled by default), mounted file systems are not dismounted, and thus the $LogFile of each active file system is not downgraded to version 1.1. The inability of versions of Windows older than 8.0 to process version 2.0 of the $LogFile results in an unnecessary invocation of the CHKDSK disk repair utility. This is particularly a concern in a multi-boot scenario involving pre- and post-8.0 versions of Windows, or when frequently moving a storage device between older and newer versions. A Windows Registry setting exists to prevent the automatic upgrade of the $LogFile to the newer version. The problem can also be dealt with by disabling Hybrid Boot. The USN Journal (Update Sequence Number Journal) is a system management feature that records (in $Extend\$UsnJrnl) changes to files, streams and directories on the volume, as well as their various attributes and security settings. The journal is made available for applications to track changes to the volume. This journal can be enabled or disabled on non-system volumes. Hard links The hard link feature allows different file names to directly refer to the same file contents. Hard links may link only to files in the same volume, because each volume has its own MFT. Hard links were originally included to support the POSIX subsystem in Windows NT. Although hard links use the same MFT record (inode), which records file metadata such as file size, modification date, and attributes, NTFS also caches this data in the directory entry as a performance enhancement. This means that when listing the contents of a directory using the FindFirstFile/FindNextFile family of APIs (equivalent to the POSIX opendir/readdir APIs), you will also receive this cached information, in addition to the name and inode. However, you may not see up-to-date information, as this information is only guaranteed to be updated when a file is closed, and then only for the directory from which the file was opened. This means that where a file has multiple names via hard links, updating the file via one name does not update the cached data associated with the other name. You can always obtain up-to-date data using GetFileInformationByHandle (which is the true equivalent of the POSIX stat function).
This can be done using a handle which has no access to the file itself (passing zero to CreateFile for dwDesiredAccess), and closing this handle has the incidental effect of updating the cached information. Windows uses hard links to support short (8.3) filenames in NTFS. Operating system support is needed because there are legacy applications that can work only with 8.3 filenames, but support can be disabled. In this case, an additional filename record and directory entry is added, but both the 8.3 and long file names are linked and updated together, unlike a regular hard link. The NTFS file system has a limit of 1023 hard links on a file. Alternate data stream (ADS) Alternate data streams allow more than one data stream to be associated with a filename (a fork), using the format "filename:streamname" (e.g., "text.txt:extrastream"). These streams are not shown to or made editable by users through any typical GUI application built into Windows by default, disguising their existence from most users. Although intended for helpful metadata, their arcane nature makes them a potential hiding place for malware, spyware, unseen browser history, and other potentially unwanted information. Alternate streams are not listed in Windows Explorer, and their size is not included in the file's size. When the file is copied or moved to another file system without ADS support, the user is warned that alternate data streams cannot be preserved. No such warning is typically provided if the file is attached to an e-mail, or uploaded to a website. Thus, using alternate streams for critical data may cause problems. Microsoft provides a downloadable tool called Streams to view streams on a selected volume. Starting with Windows PowerShell 3.0, it is possible to manage ADS natively with six cmdlets: Add-Content, Clear-Content, Get-Content, Get-Item, Remove-Item, Set-Content. A small ADS named Zone.Identifier is added by Internet Explorer and by most browsers to mark files downloaded from external sites as possibly unsafe to run; the local shell would then require user confirmation before opening them. When the user indicates that they no longer want this confirmation dialog, this ADS is deleted. This functionality is also known as "Mark of the Web". Without deep modifications to the source code, all Chromium-based (e.g. Google Chrome) and Firefox-based web browsers also write the Zone.Identifier stream to downloaded files. Malware has used alternate data streams to hide code. Since the late 2000s, some malware scanners and other special tools check for alternate data streams. Due to the risks associated with ADS, particularly involving privacy and the Zone.Identifier stream, there exists software specifically designed to strip streams from files (certain streams with perceived risk or all of them) in a user-friendly way. NTFS streams were introduced in Windows NT 3.1, to enable Services for Macintosh (SFM) to store resource forks. Although current versions of Windows Server no longer include SFM, third-party Apple Filing Protocol (AFP) products (such as GroupLogic's ExtremeZ-IP) still use this feature of the file system.
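Because the Win32 file APIs accept the filename:streamname syntax directly, alternate streams can be demonstrated from almost any language. A minimal sketch in Python; this only works on Windows, on an NTFS volume, and the stream name is an arbitrary example:

    # Write the main (anonymous) data stream, then attach an alternate stream.
    with open("demo.txt", "w") as f:
        f.write("visible contents")
    with open("demo.txt:notes", "w") as f:      # "notes" is the alternate stream name
        f.write("contents of the alternate data stream")

    # Explorer and ordinary directory listings report only the main stream's
    # size, but the ADS remains readable by name:
    with open("demo.txt:notes") as f:
        print(f.read())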
File compression Compression is enabled on a per-folder or per-file basis by setting the 'compressed' attribute. When compression is enabled on a folder, any files moved or saved to that folder will be automatically compressed using the LZNT1 algorithm (a variant of LZ77). The compression algorithm is designed to support cluster sizes of up to 4 KB; when the cluster size is greater than 4 KB on an NTFS volume, NTFS compression is not available. Data is compressed in 16-cluster chunks (up to 64 KB in size); if the compression reduces 64 KB of data to 60 KB or less, NTFS treats the unneeded 4 KB pages like empty sparse file clusters; they are not written. This allows for reasonable random-access times, as the OS merely has to follow the chain of fragments. Compression works best with files that have repetitive content, are seldom written, are usually accessed sequentially, and are not themselves compressed. Single-user systems with limited hard disk space can benefit from NTFS compression for small files, from 4 KB to 64 KB or more, depending on compressibility. Files smaller than approximately 900 bytes are stored within the directory entry of the MFT. Advantages Users of fast multi-core processors will find improvements in application speed by compressing their applications and data, as well as a reduction in space used. Even when SSD controllers already compress data, there is still a reduction in I/Os, since less data is transferred. According to research by Microsoft's NTFS Development team, 50–60 GB is a reasonable maximum size for a compressed file on an NTFS volume with a 4 KB (default) cluster (block) size. This reasonable maximum size decreases sharply for volumes with smaller cluster sizes. Disadvantages Large compressible files become highly fragmented, since every chunk smaller than 64 KB becomes a fragment. Flash memory, such as SSD drives, does not have the head movement delays and high access time of mechanical hard disk drives, so fragmentation incurs only a smaller penalty. If system files that are needed at boot time (such as drivers, NTLDR, winload.exe, or BOOTMGR) are compressed, the system may fail to boot correctly, because decompression filters are not yet loaded. Later editions of Windows do not allow important system files to be compressed. System compression Since Windows 10, Microsoft has introduced a new file compression scheme based on the XPRESS algorithm with 4K/8K/16K block size and the LZX algorithm; both are variants of LZ77 updated with Huffman entropy coding and range coding, which LZNT1 lacked. These compression algorithms were taken from the Windows Imaging Format (WIM file). The new compression scheme is used by the CompactOS feature, which reduces disk usage by compressing Windows system files. CompactOS is not an extension of NTFS file compression and does not use the 'compressed' attribute; instead, it sets a reparse point on each compressed file with a WOF (Windows Overlay Filter) tag, but the actual data is stored in an alternate data stream named "WofCompressedData", which is decompressed on-the-fly by a WOF filesystem filter driver, and the main file is an empty sparse file. This design is meant purely for read-only access, so any writes to compressed files result in an automatic decompression. CompactOS compression is intended for OEMs who prepare OS images with the /Compact flag of the DISM tool in Windows ADK, but it can also be manually turned on per file with the /exe flag of the compact command. The CompactOS algorithm avoids file fragmentation by writing compressed data in contiguously allocated chunks, unlike core NTFS compression.
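The per-chunk bookkeeping of core NTFS compression described above can be sketched as follows. This is only an illustration of the store-compressed-or-raw decision, using zlib as a stand-in for LZNT1, which has no implementation in Python's standard library:

    import os, zlib

    CLUSTER = 4 * 1024
    CHUNK = 16 * CLUSTER  # NTFS compresses in 16-cluster chunks: 64 KB at 4 KB clusters

    def clusters_used(chunk: bytes) -> int:
        compressed = zlib.compress(chunk)        # stand-in for LZNT1
        needed = -(-len(compressed) // CLUSTER)  # ceiling division
        # Keep the compressed form only if it frees at least one whole cluster
        # (i.e. the chunk fits in 60 KB or less); otherwise store it raw.
        return needed if needed < CHUNK // CLUSTER else CHUNK // CLUSTER

    print(clusters_used(b"\x00" * CHUNK))    # repetitive data: far fewer than 16 clusters
    print(clusters_used(os.urandom(CHUNK)))  # incompressible data: all 16 clusters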
CompactOS file compression is an improved version of the WIMBoot feature introduced in Windows 8.1. WIMBoot reduces Windows disk usage by keeping system files in a compressed WIM image on a separate hidden disk partition. Similarly to CompactOS, Windows system directories only contain sparse files marked by a reparse point with a WOF tag, and the Windows Overlay Filter driver decompresses file contents on-the-fly from the WIM image. WIMBoot is less effective than CompactOS, though, as new updated versions of system files need to be written to the system partition, consuming disk space. Sparse files Sparse files are files interspersed with empty segments for which no actual storage space is used. To applications, the file looks like an ordinary file with empty regions seen as regions filled with zeros; the file system maintains an internal list of such regions for each sparse file. A sparse file does not necessarily include sparse zero areas; the "sparse file" attribute just means that the file is allowed to have them. Database applications, for instance, may use sparse files. As with compressed files, the actual sizes of sparse files are not taken into account when determining quota limits.
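Seeking past the end of the written data is the usual way applications create the empty regions just described. A minimal sketch (Python); note that plain Python cannot set the NTFS sparse flag itself, which on Windows requires the FSCTL_SET_SPARSE control code or the fsutil command, so whether the hole physically occupies disk space depends on the file system:

    import os

    # Create a file with a roughly 1 GiB hole between two small written regions.
    with open("sparse.bin", "wb") as f:
        f.write(b"start")
        f.seek(1024 * 1024 * 1024)   # jump far past the written data
        f.write(b"end")

    # The logical size is about 1 GiB and reads of the unwritten region return
    # zeros; on NTFS the hole only stops consuming disk space once the file is
    # marked sparse (e.g. "fsutil sparse setflag sparse.bin").
    print(os.path.getsize("sparse.bin"))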
Volume Shadow Copy The Volume Shadow Copy Service (VSS) keeps historical versions of files and folders on NTFS volumes by copying old, newly overwritten data to a shadow copy via a copy-on-write technique. The user may later request an earlier version to be recovered. This also allows data backup programs to archive files currently in use by the file system. Windows Vista also introduced persistent shadow copies for use with System Restore and Previous Versions features. Persistent shadow copies, however, are deleted when an older operating system mounts that NTFS volume. This happens because the older operating system does not understand the newer format of persistent shadow copies. Transactions As of Windows Vista, applications can use Transactional NTFS (TxF) to group multiple changes to files together into a single transaction. The transaction will guarantee that either all of the changes happen, or none of them do, and that no application outside the transaction will see the changes until they are committed. It uses techniques similar to those used for Volume Shadow Copies (i.e. copy-on-write) to ensure that overwritten data can be safely rolled back, and a CLFS log to mark the transactions that have still not been committed, or those that have been committed but still not fully applied (in case of a system crash during a commit by one of the participants). Transactional NTFS does not restrict transactions to just the local NTFS volume, but also includes other transactional data or operations in other locations such as data stored in separate volumes, the local registry, or SQL databases, or the current states of system services or remote services. These transactions are coordinated network-wide with all participants using a specific service, the DTC, to ensure that all participants will receive the same commit state, and to transport the changes that have been validated by any participant (so that the others can invalidate their local caches for old data or roll back their ongoing uncommitted changes). Transactional NTFS allows, for example, the creation of network-wide consistent distributed file systems, including with their local live or offline caches. Microsoft now advises against using TxF: "Microsoft strongly recommends developers utilize alternative means" since "TxF may not be available in future versions of Microsoft Windows". Quotas Disk quotas were introduced in NTFS v3. They allow the administrator of a computer that runs a version of Windows that supports NTFS to set a threshold of disk space that users may use. They also allow administrators to keep track of how much disk space each user is using. An administrator may specify a certain level of disk space that a user may use before they receive a warning, and then deny access to the user once they hit their upper limit of space. Disk quotas do not take into account NTFS's transparent file compression, should this be enabled. Applications that query the amount of free space will also see the amount of free space left to the user who has a quota applied to them. Reparse points Introduced in NTFS v3, NTFS reparse points are used by associating a reparse tag in the user space attribute of a file or directory. Microsoft includes several default tags including symbolic links, directory junction points and volume mount points. When the Object Manager parses a file system name lookup and encounters a reparse attribute, it will reparse the name lookup, passing the user-controlled reparse data to every file system filter driver that is loaded into Windows. Each filter driver examines the reparse data to see whether it is associated with that reparse point, and if that filter driver determines a match, then it intercepts the file system request and performs its special functionality. Limitations Resizing Starting with Windows Vista, Microsoft added the built-in ability to shrink or expand a partition. However, this ability does not relocate page file fragments or files that have been marked as unmovable, so shrinking a volume will often require relocating or disabling any page file, the index of Windows Search, and any Shadow Copy used by System Restore. Various third-party tools are capable of resizing NTFS partitions. OneDrive Since 2017, Microsoft requires the OneDrive file structure to reside on an NTFS disk. This is because the OneDrive Files On-Demand feature uses NTFS reparse points to link files and folders that are stored in OneDrive to the local filesystem, making the file or folder unusable with any previous version of Windows, with any other NTFS file system driver, or with any file system and backup utilities not updated to support it. Structure NTFS is made up of several components including: a partition boot sector (PBS) that holds boot information; the master file table that stores a record of all files and folders in the filesystem; a series of meta files that help structure meta data more efficiently; data streams and locking mechanisms. Internally, NTFS uses B-trees to index file system data. A file system journal is used to guarantee the integrity of the file system metadata but not individual files' content. Systems using NTFS are known to have improved reliability compared to FAT file systems. NTFS allows any sequence of 16-bit values for name encoding (e.g. file names, stream names or index names) except 0x0000. This means UTF-16 code units are supported, but the file system does not check whether a sequence is valid UTF-16 (it allows any sequence of short values, not restricted to those in the Unicode standard). In the Win32 namespace, any UTF-16 code units are case insensitive, whereas in the POSIX namespace they are case sensitive. File names are limited to 255 UTF-16 code units. Certain names are reserved in the volume root directory and cannot be used for files. These are $MFT, $MFTMirr, $LogFile, $Volume, $AttrDef, . (dot), $Bitmap, $Boot, $BadClus, $Secure, $UpCase, and $Extend. . (dot) and $Extend are both directories; the others are files. The NT kernel limits full paths to 32,767 UTF-16 code units. There are some additional restrictions on code points and file names.
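The naming rules just listed can be condensed into a small validity check. The sketch below (Python) is a simplification under stated assumptions: it covers only the limits quoted in this section (length in UTF-16 code units, the forbidden 0x0000 unit, and the root-directory reserved names), not the extra character rules that the Win32 layer adds on top of NTFS:

    RESERVED_IN_ROOT = {"$MFT", "$MFTMirr", "$LogFile", "$Volume", "$AttrDef", ".",
                        "$Bitmap", "$Boot", "$BadClus", "$Secure", "$UpCase", "$Extend"}

    def plausible_ntfs_name(name: str, in_root: bool = False) -> bool:
        units = len(name.encode("utf-16-le")) // 2  # UTF-16 code units, not characters
        if units == 0 or units > 255 or "\x00" in name:
            return False
        return not (in_root and name in RESERVED_IN_ROOT)

    print(plausible_ntfs_name("report.txt"))          # True
    print(plausible_ntfs_name("$MFT", in_root=True))  # False: reserved metafile name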
Partition Boot Sector (PBS) This boot partition format is roughly based upon the earlier FAT filesystem, but the fields are in different locations. Some of these fields, especially the "sectors per track", "number of heads" and "hidden sectors" fields, may contain dummy values on drives where they either do not make sense or are not determinable. The OS first looks at the 8 bytes at 0x30 to find the cluster number of the $MFT, then multiplies that number by the number of sectors per cluster (1 byte found at 0x0D). This value is the sector offset (LBA) to the $MFT, which is described below. Master File Table In NTFS, all file, directory and metafile data, such as file name, creation date, access permissions (by the use of access control lists), and size, are stored as metadata in the Master File Table (MFT). This abstract approach allowed easy addition of file system features during Windows NT's development; an example is the addition of fields for indexing used by the Active Directory and the Windows Search. This also enables fast file search software to locate named local files and folders included in the MFT very quickly, without requiring any other index. The MFT structure supports algorithms which minimize disk fragmentation. A directory entry consists of a filename and a "file ID" (analogous to the inode number), which is the record number representing the file in the Master File Table. The file ID also contains a reuse count to detect stale references. While this strongly resembles the W_FID of Files-11, other NTFS structures radically differ. A partial copy of the MFT, called the MFT mirror, is stored to be used in case of corruption. If the first record of the MFT is corrupted, NTFS reads the second record to find the MFT mirror file. Locations for both files are stored in the boot sector. Metafiles NTFS contains several files that define and organize the file system. In all respects, most of these files are structured like any other user file ($Volume being the most peculiar), but are not of direct interest to file system clients. These metafiles define files, back up critical file system data, buffer file system changes, manage free space allocation, satisfy BIOS expectations, track bad allocation units, and store security and disk space usage information. All content is in an unnamed data stream, unless otherwise indicated. These metafiles are treated specially by Windows, handled directly by the NTFS.SYS driver and are difficult to directly view: special purpose-built tools are needed. As of Windows 7, the NTFS driver completely prohibits user access, resulting in a BSoD whenever an attempt to execute a metadata file is made. One such tool is nfi.exe ("NTFS File Sector Information Utility"), which is freely distributed as part of the Microsoft "OEM Support Tools" and can, for example, report information on the "$MFT" Master File Table Segment. Another way to bypass the restriction is to use 7-Zip's file manager and go to the low-level NTFS path \\.\X:\ (where X: represents any drive/partition). Here, 3 new folders will appear: $EXTEND, [DELETED] (a pseudo-folder that 7-Zip uses to attach files deleted from the file system to view), and [SYSTEM] (another pseudo-folder that contains all the NTFS metadata files). This trick can be used from removable devices (USB flash drives, external hard drives, SD cards, etc.)
inside Windows, but doing this on the active partition requires offline access (namely WinRE). Attribute lists, attributes, and streams For each file (or directory) described in the MFT record, there is a linear repository of stream descriptors (also named attributes), packed together in one or more MFT records (containing the so-called attributes list), with extra padding to fill the fixed 1 KB size of every MFT record, and that fully describes the effective streams associated with that file. Each attribute has an attribute type (a fixed-size integer mapping to an attribute definition in the file $AttrDef), an optional attribute name (for example, used as the name for an alternate data stream), and a value, represented in a sequence of bytes. For NTFS, the standard data of files, the alternate data streams, or the index data for directories are stored as attributes. According to $AttrDef, some attributes can be either resident or non-resident. The $DATA attribute, which contains file data, is such an example. When the attribute is resident (which is represented by a flag), its value is stored directly in the MFT record. Otherwise, clusters are allocated for the data, and the cluster location information is stored as data runs in the attribute. For each file in the MFT, the attributes identified by attribute type and attribute name must be unique. Additionally, NTFS has some ordering constraints for these attributes. There is a predefined null attribute type, used to indicate the end of the list of attributes in one MFT record. It must be present as the last attribute in the record (all other storage space available after it will be ignored and just consists of padding bytes to match the record size in the MFT). Some attribute types are required and must be present in each MFT record, except unused records that are just indicated by null attribute types. This is the case for the $STANDARD_INFORMATION attribute, which is stored as a fixed-size record and contains the timestamps and other basic single-bit attributes (compatible with those managed by FAT in DOS or Windows 9x). Some attribute types cannot have a name and must remain anonymous. This is the case for the standard attributes, or for the preferred NTFS "filename" attribute type, or the "short filename" attribute type, when it is also present (for compatibility with DOS-like applications, see below). It is also possible for a file to contain only a short filename, in which case it will be the preferred one, as listed in the Windows Explorer. The filename attributes stored in the attribute list do not make the file immediately accessible through the hierarchical file system. In fact, all the filenames must be indexed separately in at least one other directory on the same volume. There it must have its own MFT record and its own security descriptors and attributes that reference the MFT record number for this file. This allows the same file or directory to be "hardlinked" several times from several containers on the same volume, possibly with distinct filenames. The default data stream of a regular file is a stream of type $DATA but with an anonymous name, and the ADSs are similar but must be named. On the other hand, the default data stream of directories has a distinct type, but is not anonymous: it has an attribute name ("$I30" in NTFS 3+) that reflects its indexing format. All attributes of a given file may be displayed by using the nfi.exe ("NTFS File Sector Information Utility") that is freely distributed as part of the Microsoft "OEM Support Tools".
Windows system calls may handle alternate data streams. Depending on the operating system, utility and remote file system, a file transfer might silently strip data streams. A safe way of copying or moving files is to use the BackupRead and BackupWrite system calls, which allow programs to enumerate streams, to verify whether each stream should be written to the destination volume and to knowingly skip unwanted streams. Resident vs. non-resident attributes To optimize the storage and reduce the I/O overhead for the very common case of attributes with a very small associated value, NTFS prefers to place the value within the attribute itself (if the size of the attribute does not then exceed the maximum size of an MFT record), instead of using the MFT record space to list clusters containing the data; otherwise, the attribute will not store the data directly but will just store an allocation map (in the form of data runs) pointing to the actual data stored elsewhere on the volume. When the value can be accessed directly from within the attribute, it is called "resident data" (by computer forensics workers). The amount of data that fits is highly dependent on the file's characteristics, but 700 to 800 bytes is common in single-stream files with non-lengthy filenames and no ACLs. Some attributes (such as the preferred filename, the basic file attributes) cannot be made non-resident. For non-resident attributes, their allocation map must fit within MFT records. Encrypted-by-NTFS, sparse data streams, or compressed data streams cannot be made resident. The format of the allocation map for non-resident attributes depends on its capability of supporting sparse data storage. In the current implementation of NTFS, once a non-resident data stream has been marked and converted as sparse, it cannot be changed back to non-sparse data, so it cannot become resident again, unless this data is fully truncated, discarding the sparse allocation map completely. When a non-resident attribute is so fragmented that its effective allocation map cannot fit entirely within one MFT record, NTFS stores the attribute in multiple records. The first one among them is called the base record, while the others are called extension records. NTFS creates a special $ATTRIBUTE_LIST attribute to store information mapping different parts of the long attribute to the MFT records, which means the allocation map may be split into multiple records. The $ATTRIBUTE_LIST itself can also be non-resident, but its own allocation map must fit within one MFT record. When there are too many attributes for a file (including ADSs, extended attributes, or security descriptors), so that they cannot all fit within the MFT record, extension records may also be used to store the other attributes, using the same format as the one used in the base MFT record, but without the space constraints of one MFT record. The allocation map is stored in a form of data runs with compressed encoding. Each data run represents a contiguous group of clusters that store the attribute value. For files on a multi-GB volume, each entry can be encoded as 5 to 7 bytes, which means an MFT record can store about 100 such data runs. However, as the $ATTRIBUTE_LIST also has a size limit, it is dangerous to have more than 1 million fragments of a single file on an NTFS volume, which also implies that it is in general not a good idea to use NTFS compression on a file larger than 10 GB.
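The run-list encoding described above is compact enough to decode in a few lines. The following sketch (Python) follows the commonly documented layout: each run begins with a header byte whose low nibble is the byte length of the cluster-count field and whose high nibble is the byte length of the signed starting-cluster field, which is relative to the previous run; a zero header byte terminates the list. The example bytes are illustrative, not taken from a real volume:

    def decode_runs(runlist: bytes):
        runs, pos, cluster = [], 0, 0
        while pos < len(runlist) and runlist[pos] != 0:
            header = runlist[pos]
            count_size, offset_size = header & 0x0F, header >> 4
            pos += 1
            count = int.from_bytes(runlist[pos:pos + count_size], "little")
            pos += count_size
            if offset_size == 0:
                runs.append((None, count))  # no offset field: a sparse (unallocated) run
            else:
                delta = int.from_bytes(runlist[pos:pos + offset_size], "little", signed=True)
                pos += offset_size
                cluster += delta            # start cluster is relative to the previous run's
                runs.append((cluster, count))
        return runs

    # Header 0x21: 1-byte count, 2-byte offset; 0x18 clusters starting at cluster 0x5634.
    print(decode_runs(bytes([0x21, 0x18, 0x34, 0x56, 0x00])))  # [(22068, 24)]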
The NTFS file system driver will sometimes attempt to relocate the data of some of the attributes that can be made non-resident into the clusters, and will also attempt to relocate the data stored in clusters back to the attribute inside the MFT record, based on priority and preferred ordering rules, and size constraints. Since resident files do not directly occupy clusters ("allocation units"), it is possible for an NTFS volume to contain more files than there are clusters. For example, a 74.5 GB partition NTFS formats with 19,543,064 clusters of 4 KB. Subtracting system files (a 64 MB log file, a 2,442,888-byte Bitmap file, and about 25 clusters of fixed overhead) leaves 19,526,158 clusters free for files and indices. Since there are four MFT records per cluster, this volume theoretically could hold almost 4 × 19,526,158 = 78,104,632 resident files. Opportunistic locks Opportunistic file locks (oplocks) allow clients to alter their buffering strategy for a given file or stream in order to increase performance and reduce network use. Oplocks apply to the given open stream of a file and do not affect oplocks on a different stream. Oplocks can be used to transparently access files in the background. A network client may avoid writing information into a file on a remote server if no other process is accessing the data, or it may buffer read-ahead data if no other process is writing data. Windows supports four different types of oplocks: Level 2 (or shared) oplock: multiple readers, no writers (i.e. read caching). Level 1 (or exclusive) oplock: exclusive access with arbitrary buffering (i.e. read and write caching). Batch oplock (also exclusive): a stream is opened on the server, but closed on the client machine (i.e. read, write and handle caching). Filter oplock (also exclusive): applications and file system filters can "back out" when others try to access the same stream (i.e. read and write caching) (since Windows 2000). Opportunistic locks have been enhanced in Windows 7 and Windows Server 2008 R2 with per-client oplock keys. Time Windows NT and its descendants keep internal timestamps as UTC and make the appropriate conversions for display purposes; all NTFS timestamps are in UTC. For historical reasons, the versions of Windows that do not support NTFS all keep time internally as local zone time, and therefore so do all file systems – other than NTFS – that are supported by current versions of Windows. This means that when files are copied or moved between NTFS and non-NTFS partitions, the OS needs to convert timestamps on the fly. But if some files are moved when daylight saving time (DST) is in effect, and other files are moved when standard time is in effect, there can be some ambiguities in the conversions. As a result, especially shortly after one of the days on which local zone time changes, users may observe that some files have timestamps that are incorrect by one hour. Due to the differences in implementation of DST in different jurisdictions, this can result in a potential timestamp error of up to 4 hours in any given 12 months. This problem is further exacerbated for computers whose local time zone changes from time to time (e.g. when a computer is moved from one time zone to another, as often happens with laptops and other portable devices).
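The one-hour ambiguity is easy to reproduce: around a daylight-saving transition, two distinct UTC instants map to the same local wall-clock time, which is all that a local-time file system stores. A small demonstration (Python 3.9+; the time zone is an arbitrary illustrative choice):

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    tz = ZoneInfo("Europe/Berlin")  # any DST-observing zone works
    # One UTC hour on each side of the 2023 "fall back" transition (01:00 UTC):
    for utc in (datetime(2023, 10, 29, 0, 30, tzinfo=timezone.utc),
                datetime(2023, 10, 29, 1, 30, tzinfo=timezone.utc)):
        stored = utc.astimezone(tz).replace(tzinfo=None)  # what a local-time FS would store
        print(utc.isoformat(), "->", stored)              # both map to 02:30 local time
    # Converting such a stored timestamp back to UTC must guess the offset, so
    # round trips between NTFS and non-NTFS volumes can drift by one hour.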
Technology
Data storage and memory
null
39221
https://en.wikipedia.org/wiki/Thermodynamic%20free%20energy
Thermodynamic free energy
In thermodynamics, the thermodynamic free energy is one of the state functions of a thermodynamic system. The change in the free energy is the maximum amount of work that the system can perform in a process at constant temperature, and its sign indicates whether the process is thermodynamically favorable or forbidden. Since free energy usually contains potential energy, it is not absolute but depends on the choice of a zero point. Therefore, only relative free energy values, or changes in free energy, are physically meaningful. The free energy is the portion of any first-law energy that is available to perform thermodynamic work at constant temperature, i.e., work mediated by thermal energy. Free energy is subject to irreversible loss in the course of such work. Since first-law energy is always conserved, it is evident that free energy is an expendable, second-law kind of energy. Several free energy functions may be formulated based on system criteria. Free energy functions are Legendre transforms of the internal energy. The Gibbs free energy is given by G = H − TS, where H is the enthalpy, T is the absolute temperature, and S is the entropy. The enthalpy is H = U + pV, where U is the internal energy, p is the pressure, and V is the volume. G is the most useful for processes involving a system at constant pressure p and temperature T, because, in addition to subsuming any entropy change due merely to heat, a change in G also excludes the p dV work needed to "make space for additional molecules" produced by various processes. Gibbs free energy change therefore equals work not associated with system expansion or compression, at constant temperature and pressure, hence its utility to solution-phase chemists, including biochemists. The historically earlier Helmholtz free energy is defined in contrast as A = U − TS. Its change is equal to the amount of reversible work done on, or obtainable from, a system at constant T. Thus its appellation "work content", and the designation A (from Arbeit, the German word for work). Since it makes no reference to any quantities involved in work (such as p and V), the Helmholtz function is completely general: its decrease is the maximum amount of work which can be done by a system at constant temperature, and it can increase at most by the amount of work done on a system isothermally. The Helmholtz free energy has a special theoretical importance since it is proportional to the logarithm of the partition function for the canonical ensemble in statistical mechanics. (Hence its utility to physicists; and to gas-phase chemists and engineers, who do not want to ignore p dV work.) Historically, the term 'free energy' has been used for either quantity. In physics, free energy most often refers to the Helmholtz free energy, denoted by A (or F), while in chemistry, free energy most often refers to the Gibbs free energy. The values of the two free energies are usually quite similar and the intended free energy function is often implicit in manuscripts and presentations. Meaning of "free" The basic definition of "energy" is a measure of a body's (in thermodynamics, the system's) ability to cause change. For example, when a person pushes a heavy box a few metres forward, that person exerts mechanical energy, also known as work, on the box over a distance of a few meters forward. The mathematical definition of this form of energy is the product of the force exerted on the object and the distance by which the box moved (W = F × d). Because the person changed the stationary position of the box, that person exerted energy on that box.
The work exerted can also be called "useful energy", because energy was converted from one form into the intended purpose, i.e. mechanical use. For the case of the person pushing the box, the energy in the form of internal (or potential) energy obtained through metabolism was converted into work to push the box. This energy conversion, however, was not straightforward: while some internal energy went into pushing the box, some was diverted away (lost) in the form of heat (transferred thermal energy). For a reversible process, heat is the product of the absolute temperature T and the change in entropy ΔS of a body (entropy is a measure of disorder in a system). The difference between the change in internal energy ΔU and the energy lost in the form of heat is what is called the "useful energy" of the body, or the work of the body performed on an object. In thermodynamics, this is what is known as "free energy". In other words, free energy is a measure of the work (useful energy) a system can perform at constant temperature. Mathematically, free energy is expressed as A = U − TS. This expression has commonly been interpreted to mean that work is extracted from the internal energy U while TS represents energy not available to perform work. However, this is incorrect. For instance, in an isothermal expansion of an ideal gas, the internal energy change is ΔU = 0 and the expansion work, equal to TΔS, is derived exclusively from the TS term supposedly not available to perform work. But it is noteworthy that the derivative form of the free energy, dA = −p dV − S dT (for the Helmholtz free energy), does indeed indicate that a spontaneous change in a non-reactive system's free energy (not the internal energy) comprises the available energy to do work (compression in this case), −p dV, and the unavailable energy, −S dT. A similar expression can be written for the Gibbs free energy change. In the 18th and 19th centuries, the theory of heat, i.e., that heat is a form of energy having relation to vibratory motion, was beginning to supplant both the caloric theory, i.e., that heat is a fluid, and the four element theory, in which heat was the lightest of the four elements. In a similar manner, during these years, heat was beginning to be distinguished into different classification categories, such as "free heat", "combined heat", "radiant heat", specific heat, heat capacity, "absolute heat", "latent caloric", and "free" or "perceptible" caloric (calorique sensible), among others. In 1780, for example, Laplace and Lavoisier stated: "In general, one can change the first hypothesis into the second by changing the words 'free heat, combined heat, and heat released' into 'vis viva, loss of vis viva, and increase of vis viva.'" In this manner, the total mass of caloric in a body, called absolute heat, was regarded as a mixture of two components; the free or perceptible caloric could affect a thermometer, whereas the other component, the latent caloric, could not. The use of the words "latent heat" implied a similarity to latent heat in the more usual sense; it was regarded as chemically bound to the molecules of the body. In the adiabatic compression of a gas, the absolute heat remained constant but the observed rise in temperature implied that some latent caloric had become "free" or perceptible. During the early 19th century, the concept of perceptible or free caloric began to be referred to as "free heat" or "heat set free". 
In 1824, for example, the French physicist Sadi Carnot, in his famous "Reflections on the Motive Power of Fire", speaks of quantities of heat 'absorbed or set free' in different transformations. In 1882, the German physicist and physiologist Hermann von Helmholtz coined the phrase 'free energy' for the expression A = U − TS, in which the change in A (or G) determines the amount of energy 'free' for work under the given conditions, specifically constant temperature. Thus, in traditional use, the term "free" was attached to Gibbs free energy for systems at constant pressure and temperature, or to Helmholtz free energy for systems at constant temperature, to mean 'available in the form of useful work.' With reference to the Gibbs free energy, we need to add the qualification that it is the energy free for non-volume work and compositional changes. An increasing number of books and journal articles do not include the attachment "free", referring to G as simply the Gibbs energy (and likewise for the Helmholtz energy). This is the result of a 1988 IUPAC meeting to set unified terminologies for the international scientific community, in which the adjective 'free' was supposedly banished. This standard, however, has not yet been universally adopted, and many published articles and books still include the descriptive 'free'. Application Just like the general concept of energy, free energy has a few definitions suitable for different conditions. In physics, chemistry, and biology, these conditions are thermodynamic parameters (temperature T, volume V, pressure p, etc.). Scientists have come up with several ways to define free energy. The mathematical expression of the Helmholtz free energy is A = U − TS. This definition of free energy is useful for gas-phase reactions or in physics when modeling the behavior of isolated systems kept at a constant volume. For example, if a researcher wanted to perform a combustion reaction in a bomb calorimeter, the volume is kept constant throughout the course of the reaction. Therefore, the heat of the reaction is a direct measure of the internal energy change, q = ΔU. In solution chemistry, on the other hand, most chemical reactions are kept at constant pressure. Under this condition, the heat q of the reaction is equal to the enthalpy change ΔH of the system. Under constant pressure and temperature, the free energy in a reaction is known as the Gibbs free energy G = H − TS. These functions have a minimum in chemical equilibrium, as long as certain variables (T, and V or p) are held constant. In addition, they also have theoretical importance in deriving Maxwell relations. Work other than pV work may be added, e.g., for electrochemical cells, or f dx work in elastic materials and in muscle contraction. Other forms of work which must sometimes be considered are stress-strain, magnetic, as in adiabatic demagnetization used in the approach to absolute zero, and work due to electric polarization. These are described by tensors. In most cases of interest there are internal degrees of freedom and processes, such as chemical reactions and phase transitions, which create entropy. Even for homogeneous "bulk" materials, the free energy functions depend on the (often suppressed) composition, as do all proper thermodynamic potentials (extensive functions), including the internal energy. Here Ni is the number of molecules (alternatively, moles) of type i in the system. If these quantities do not appear, it is impossible to describe compositional changes. 
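To make the definitions of A and G from this section concrete, the following Python sketch evaluates A = U − TS and G = H − TS (with H = U + pV) for a single illustrative state; the numerical values are invented for the example and are not measured data.

```python
# Illustrative evaluation of the free-energy definitions A = U - T*S and
# G = H - T*S (with H = U + p*V). All numbers are invented for the example.

def helmholtz(U, T, S):
    """Helmholtz free energy A = U - T*S (natural variables T, V)."""
    return U - T * S

def gibbs(H, T, S):
    """Gibbs free energy G = H - T*S (natural variables T, p)."""
    return H - T * S

U = 5.0e3    # internal energy, J
pV = 2.5e2   # p*V term, J
S = 10.0     # entropy, J/K
T = 298.15   # absolute temperature, K

H = U + pV
A = helmholtz(U, T, S)
G = gibbs(H, T, S)
print(f"H = {H:.1f} J, A = {A:.1f} J, G = {G:.1f} J")
# G - A = p*V: the Gibbs function additionally subtracts the work needed
# to "make space" against the surrounding pressure.
print(f"G - A = {G - A:.1f} J (equals pV = {pV:.1f} J)")
```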
The differentials for processes at uniform pressure and temperature are (assuming only pV work): dA = −S dT − p dV + Σi μi dNi and dG = −S dT + V dp + Σi μi dNi, where μi is the chemical potential for the i-th component in the system. The second relation is especially useful at constant T and p, conditions which are easy to achieve experimentally, and which approximately characterize living creatures. Under these conditions, it simplifies to dG = Σi μi dNi. Any decrease in the Gibbs function of a system is the upper limit for any isothermal, isobaric work that can be captured in the surroundings, or it may simply be dissipated, appearing as T times a corresponding increase in the entropy of the system and/or its surroundings. An example is surface free energy, the increase in free energy per unit increase in the system's surface area. The path integral Monte Carlo method is a numerical approach for determining the values of free energies, based on quantum dynamical principles. Work and free energy change For a reversible isothermal process, ΔS = qrev/T and therefore the definition of A results in ΔA = wrev (at constant temperature). This tells us that the change in free energy equals the reversible or maximum work for a process performed at constant temperature. Under other conditions, free-energy change is not equal to work; for instance, for a reversible adiabatic expansion of an ideal gas, ΔA = wrev − SΔT. Importantly, for a heat engine, including the Carnot cycle, the free-energy change after a full cycle is zero, while the engine produces nonzero work. It is important to note that for heat engines and other thermal systems, the free energies do not offer convenient characterizations; internal energy and enthalpy are the preferred potentials for characterizing thermal systems. Free energy change and spontaneous processes According to the second law of thermodynamics, for any process that occurs in a closed system, the inequality of Clausius, ΔS > q/Tsurr, applies. For a process at constant temperature and pressure without non-pV work, this inequality transforms into ΔG < 0. Similarly, for a process at constant temperature and volume, ΔA < 0. Thus, a negative value of the change in free energy is a necessary condition for a process to be spontaneous; this is the most useful form of the second law of thermodynamics in chemistry. In chemical equilibrium at constant T and p without electrical work, dG = 0. History The quantity called "free energy" is a more advanced and accurate replacement for the outdated term affinity, which was used by chemists in previous years to describe the force that caused chemical reactions. The term affinity, as used in chemical relation, dates back to at least the time of Albertus Magnus. From the 1998 textbook Modern Thermodynamics by Nobel Laureate and chemistry professor Ilya Prigogine we find: "As motion was explained by the Newtonian concept of force, chemists wanted a similar concept of 'driving force' for chemical change. Why do chemical reactions occur, and why do they stop at certain points? Chemists called the 'force' that caused chemical reactions affinity, but it lacked a clear definition." During the entire 18th century, the dominant view with regard to heat and light was that put forth by Isaac Newton, called the Newtonian hypothesis, which states that light and heat are forms of matter attracted or repelled by other forms of matter, with forces analogous to gravitation or to chemical affinity. In the 19th century, the French chemist Marcellin Berthelot and the Danish chemist Julius Thomsen had attempted to quantify affinity using heats of reaction. 
In 1875, after quantifying the heats of reaction for a large number of compounds, Berthelot proposed the principle of maximum work, in which all chemical changes occurring without intervention of outside energy tend toward the production of bodies or of a system of bodies which liberate heat. In addition to this, in 1780 Antoine Lavoisier and Pierre-Simon Laplace laid the foundations of thermochemistry by showing that the heat given out in a reaction is equal to the heat absorbed in the reverse reaction. They also investigated the specific heat and latent heat of a number of substances, and amounts of heat given out in combustion. In a similar manner, in 1840 Swiss chemist Germain Hess formulated the principle that the evolution of heat in a reaction is the same whether the process is accomplished in one step or in a number of stages. This is known as Hess's law. With the advent of the mechanical theory of heat in the early 19th century, Hess's law came to be viewed as a consequence of the law of conservation of energy. Based on these and other ideas, Berthelot and Thomsen, as well as others, considered the heat given out in the formation of a compound as a measure of the affinity, or the work done by the chemical forces. This view, however, was not entirely correct. In 1847, the English physicist James Joule showed that he could raise the temperature of water by turning a paddle wheel in it, thus showing that heat and mechanical work were equivalent or proportional to each other, i.e., approximately, dW ∝ dQ. This statement came to be known as the mechanical equivalent of heat and was a precursory form of the first law of thermodynamics. By 1865, the German physicist Rudolf Clausius had shown that this equivalence principle needed amendment. That is, one can use the heat derived from a combustion reaction in a coal furnace to boil water, and then use the enhanced high-pressure energy of the resulting steam to push a piston. Thus, we might naively reason that one can entirely convert the initial combustion heat of the chemical reaction into the work of pushing the piston. Clausius showed, however, that we must take into account the work that the molecules of the working body, i.e., the water molecules in the cylinder, do on each other as they pass or transform from one step or state of the engine cycle to the next, e.g., from (P1, V1) to (P2, V2). Clausius originally called this the "transformation content" of the body, and then later changed the name to entropy. Thus, the heat used to transform the working body of molecules from one state to the next cannot be used to do external work, e.g., to push the piston. Clausius defined this transformation heat as dQ = T dS. In 1873, Willard Gibbs published A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces, in which he introduced the preliminary outline of the principles of his new equation able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e., bodies, being in composition part solid, part liquid, and part vapor, and by using a three-dimensional volume-entropy-internal energy graph, Gibbs was able to determine three states of equilibrium, i.e., "necessarily stable", "neutral", and "unstable", and whether or not changes will ensue. 
In 1876, Gibbs built on this framework by introducing the concept of chemical potential so as to take into account chemical reactions and states of bodies that are chemically different from each other. In his own words, to summarize his results in 1873, Gibbs states: "If we wish to express in a single equation the necessary and sufficient condition of thermodynamic equilibrium for a substance when surrounded by a medium of constant pressure p and temperature T, this equation may be written: δ(ε − Tη + pν) = 0 when δ refers to the variation produced by any variations in the state of the parts of the body, and (when different parts of the body are in different states) in the proportion in which the body is divided between the different states. The condition of stable equilibrium is that the value of the expression in the parenthesis shall be a minimum." In this description, as used by Gibbs, ε refers to the internal energy of the body, η refers to the entropy of the body, and ν is the volume of the body. Hence, in 1882, after the introduction of these arguments by Clausius and Gibbs, the German scientist Hermann von Helmholtz stated, in opposition to Berthelot and Thomsen's hypothesis that chemical affinity is a measure of the heat of reaction, as based on the principle of maximum work, that affinity is not the heat given out in the formation of a compound but rather it is the largest quantity of work which can be gained when the reaction is carried out in a reversible manner, e.g., electrical work in a reversible cell. The maximum work is thus regarded as the diminution of the free, or available, energy of the system (Gibbs free energy G at T = constant, P = constant, or Helmholtz free energy A at T = constant, V = constant), whilst the heat given out is usually a measure of the diminution of the total energy of the system (internal energy). Thus, G or A is the amount of energy "free" for work under the given conditions. Up until this point, the general view had been such that: "all chemical reactions drive the system to a state of equilibrium in which the affinities of the reactions vanish". Over the next 60 years, the term affinity came to be replaced with the term free energy. According to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Reactions by Gilbert N. Lewis and Merle Randall led to the replacement of the term "affinity" by the term "free energy" in much of the English-speaking world.
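As a worked illustration of the spontaneity criterion from the "Free energy change and spontaneous processes" section above (ΔG = ΔH − TΔS < 0 at constant temperature and pressure), the following Python sketch applies rounded literature values for the fusion of ice, assuming ΔH and ΔS are approximately temperature-independent near the melting point.

```python
# Spontaneity criterion at constant T and p: a process can occur
# spontaneously only if dG = dH - T*dS < 0. Example: melting of ice,
# with rounded literature values and dH, dS taken as constant near 273 K.

dH = 6.01e3   # enthalpy of fusion of water, J/mol
dS = 22.0     # entropy of fusion of water, J/(mol K)

for T in (263.15, 273.15, 283.15):   # -10 C, 0 C, +10 C
    dG = dH - T * dS
    if dG < -50:
        verdict = "spontaneous"
    elif dG > 50:
        verdict = "non-spontaneous"
    else:
        verdict = "near equilibrium"
    print(f"T = {T:6.2f} K: dG = {dG:+7.1f} J/mol -> {verdict}")
# dG > 0 below 273 K, dG ~ 0 at the melting point, dG < 0 above it.
```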
Physical sciences
Thermodynamics
null
39248
https://en.wikipedia.org/wiki/Limit%20%28category%20theory%29
Limit (category theory)
In category theory, a branch of mathematics, the abstract notion of a limit captures the essential properties of universal constructions such as products, pullbacks and inverse limits. The dual notion of a colimit generalizes constructions such as disjoint unions, direct sums, coproducts, pushouts and direct limits. Limits and colimits, like the strongly related notions of universal properties and adjoint functors, exist at a high level of abstraction. In order to understand them, it is helpful to first study the specific examples these concepts are meant to generalize. Definition Limits and colimits in a category C are defined by means of diagrams in C. Formally, a diagram of shape J in C is a functor from J to C: F : J → C. The category J is thought of as an index category, and the diagram F is thought of as indexing a collection of objects and morphisms in C patterned on J. One is most often interested in the case where the category J is a small or even finite category. A diagram is said to be small or finite whenever J is. Limits Let F : J → C be a diagram of shape J in a category C. A cone to F is an object N of C together with a family of morphisms ψX : N → F(X) indexed by the objects X of J, such that for every morphism f : X → Y in J, we have F(f) ∘ ψX = ψY. A limit of the diagram F : J → C is a cone (L, φ) to F such that for every cone (N, ψ) to F there exists a unique morphism u : N → L such that φX ∘ u = ψX for all X in J. One says that the cone (N, ψ) factors through the cone (L, φ) with the unique factorization u. The morphism u is sometimes called the mediating morphism. Limits are also referred to as universal cones, since they are characterized by a universal property (see below for more information). As with every universal property, the above definition describes a balanced state of generality: The limit object L has to be general enough to allow any cone to factor through it; on the other hand, L has to be sufficiently specific, so that only one such factorization is possible for every cone. Limits may also be characterized as terminal objects in the category of cones to F. It is possible that a diagram does not have a limit at all. However, if a diagram does have a limit then this limit is essentially unique: it is unique up to a unique isomorphism. For this reason one often speaks of the limit of F. Colimits The dual notions of limits and cones are colimits and co-cones. Although it is straightforward to obtain the definitions of these by inverting all morphisms in the above definitions, we will explicitly state them here: A co-cone of a diagram F : J → C is an object N of C together with a family of morphisms ψX : F(X) → N for every object X of J, such that for every morphism f : X → Y in J, we have ψY ∘ F(f) = ψX. A colimit of a diagram F : J → C is a co-cone (L, φ) of F such that for any other co-cone (N, ψ) of F there exists a unique morphism u : L → N such that u ∘ φX = ψX for all X in J. Colimits are also referred to as universal co-cones. They can be characterized as initial objects in the category of co-cones from F. As with limits, if a diagram F has a colimit then this colimit is unique up to a unique isomorphism. Variations Limits and colimits can also be defined for collections of objects and morphisms without the use of diagrams. The definitions are the same (note that in the definitions above we never needed to use composition of morphisms in J). This variation, however, adds no new information. Any collection of objects and morphisms defines a (possibly large) directed graph G. If we let J be the free category generated by G, there is a universal diagram F : J → C whose image contains G. The limit (or colimit) of this diagram is the same as the limit (or colimit) of the original collection of objects and morphisms. 
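The commuting condition in the definition of a cone can be checked mechanically in the category of finite sets. The following Python sketch does so for a hypothetical one-arrow diagram; the encoding of diagrams and all names are illustrative, not a standard library interface.

```python
# Check the cone condition from the definition above, in the category of
# finite sets: F(f) . psi_X = psi_Y for every arrow f : X -> Y.

def is_cone(psi, morphisms, apex):
    """psi: dict mapping object name -> function apex -> F(object);
    morphisms: list of (dom, cod, function) triples of the diagram."""
    return all(
        all(f(psi[dom](n)) == psi[cod](n) for n in apex)
        for dom, cod, f in morphisms
    )

# Diagram with one arrow f : X -> Y given by f(x) = x + 1, apex N = {0, 1}.
morphisms = [("X", "Y", lambda x: x + 1)]
apex = {0, 1}
psi_good = {"X": lambda n: n, "Y": lambda n: n + 1}  # commutes: f(n) = n + 1
psi_bad = {"X": lambda n: n, "Y": lambda n: 0}       # fails at n = 0: f(0) = 1 != 0
print(is_cone(psi_good, morphisms, apex))  # True
print(is_cone(psi_bad, morphisms, apex))   # False
```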
Weak limits and weak colimits are defined like limits and colimits, except that the uniqueness property of the mediating morphism is dropped. Examples Limits The definition of limits is general enough to subsume several constructions useful in practical settings. In the following we will consider the limit (L, φ) of a diagram F : J → C. Terminal objects. If J is the empty category there is only one diagram of shape J: the empty one (similar to the empty function in set theory). A cone to the empty diagram is essentially just an object of C. The limit of F is any object that is uniquely factored through by every other object. This is just the definition of a terminal object. Products. If J is a discrete category then a diagram F is essentially nothing but a family of objects of C, indexed by J. The limit L of F is called the product of these objects. The cone φ consists of a family of morphisms φX : L → F(X) called the projections of the product. In the category of sets, for instance, the products are given by Cartesian products and the projections are just the natural projections onto the various factors. Powers. A special case of a product is when the diagram F is a constant functor to an object X of C. The limit of this diagram is called the Jth power of X and denoted XJ. Equalizers. If J is a category with two objects and two parallel morphisms from one object to the other, then a diagram of shape J is a pair of parallel morphisms in C. The limit L of such a diagram is called an equalizer of those morphisms. Kernels. A kernel is a special case of an equalizer where one of the morphisms is a zero morphism. Pullbacks. Let F be a diagram that picks out three objects X, Y, and Z in C, where the only non-identity morphisms are f : X → Z and g : Y → Z. The limit L of F is called a pullback or a fiber product. It can nicely be visualized as a commutative square. Inverse limits. Let J be a directed set (considered as a small category by adding arrows i → j if and only if i ≥ j) and let F : Jop → C be a diagram. The limit of F is called an inverse limit or projective limit. If J = 1, the category with a single object and morphism, then a diagram of shape J is essentially just an object X of C. A cone to an object X is just a morphism with codomain X. A morphism f : Y → X is a limit of the diagram X if and only if f is an isomorphism. More generally, if J is any category with an initial object i, then any diagram of shape J has a limit, namely any object isomorphic to F(i). Such an isomorphism uniquely determines a universal cone to F. Topological limits. Limits of functions are a special case of limits of filters, which are related to categorical limits as follows. Given a topological space X, denote by F the set of filters on X, x ∈ X a point, V(x) ∈ F the neighborhood filter of x, A ∈ F a particular filter and Fx,A the set of filters finer than A and that converge to x. The filters F are given a small and thin category structure by adding an arrow A → B if and only if A ⊆ B. The injection ix,A : Fx,A → F becomes a functor and the following equivalence holds: x is a topological limit of A if and only if A is a categorical limit of ix,A. Colimits Examples of colimits are given by the dual versions of the examples above: Initial objects are colimits of empty diagrams. Coproducts are colimits of diagrams indexed by discrete categories. Copowers are colimits of constant diagrams from discrete categories. Coequalizers are colimits of a parallel pair of morphisms. 
Cokernels are coequalizers of a morphism and a parallel zero morphism. Pushouts are colimits of a pair of morphisms with common domain. Direct limits are colimits of diagrams indexed by directed sets. Properties Existence of limits A given diagram F : J → C may or may not have a limit (or colimit) in C. Indeed, there may not even be a cone to F, let alone a universal cone. A category C is said to have limits of shape J if every diagram of shape J has a limit in C. Specifically, a category C is said to have products if it has limits of shape J for every small discrete category J (it need not have large products), have equalizers if it has limits of shape • ⇉ • (i.e. every parallel pair of morphisms has an equalizer), have pullbacks if it has limits of shape • → • ← • (i.e. every pair of morphisms with common codomain has a pullback). A complete category is a category that has all small limits (i.e. all limits of shape J for every small category J). One can also make the dual definitions. A category has colimits of shape J if every diagram of shape J has a colimit in C. A cocomplete category is one that has all small colimits. The existence theorem for limits states that if a category C has equalizers and all products indexed by the classes Ob(J) and Hom(J), then C has all limits of shape J. In this case, the limit of a diagram F : J → C can be constructed as the equalizer of the two morphisms s, t : ∏X∈Ob(J) F(X) → ∏f∈Hom(J) F(cod(f)) given (in component form) by sf = F(f) ∘ πdom(f) and tf = πcod(f). There is a dual existence theorem for colimits in terms of coequalizers and coproducts. Both of these theorems give sufficient but not necessary conditions for the existence of all (co)limits of shape J. Universal property Limits and colimits are important special cases of universal constructions. Let C be a category and let J be a small index category. The functor category CJ may be thought of as the category of all diagrams of shape J in C. The diagonal functor Δ : C → CJ is the functor that maps each object N in C to the constant functor Δ(N) : J → C to N. That is, Δ(N)(X) = N for each object X in J and Δ(N)(f) = idN for each morphism f in J. Given a diagram F : J → C (thought of as an object in CJ), a natural transformation ψ : Δ(N) → F (which is just a morphism in the category CJ) is the same thing as a cone from N to F. To see this, first note that Δ(N)(X) = N for all X implies that the components of ψ are morphisms ψX : N → F(X), which all share the domain N. Moreover, the requirement that the cone's diagrams commute is true simply because this ψ is a natural transformation. (Dually, a natural transformation ψ : F → Δ(N) is the same thing as a co-cone from F to N.) Therefore, the definitions of limits and colimits can then be restated in the form: A limit of F is a universal morphism from Δ to F. A colimit of F is a universal morphism from F to Δ. Adjunctions Like all universal constructions, the formation of limits and colimits is functorial in nature. In other words, if every diagram of shape J has a limit in C (for J small) there exists a limit functor lim : CJ → C which assigns each diagram its limit and each natural transformation η : F → G the unique morphism lim η : lim F → lim G commuting with the corresponding universal cones. This functor is right adjoint to the diagonal functor Δ : C → CJ. This adjunction gives a bijection Hom(N, lim F) ≅ Cone(N, F) between the set of all morphisms from N to lim F and the set of all cones from N to F, which is natural in the variables N and F. The counit of this adjunction is simply the universal cone from lim F to F. 
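The construction in the existence theorem can be carried out directly in Set. The following Python sketch computes the limit of a finite diagram of finite sets as the subset of the product of all objects on which the two comparison morphisms agree, i.e. the set of compatible tuples; the diagram encoding and names are illustrative. The example diagram has the pullback shape X → Z ← Y.

```python
# Limit of a finite diagram of finite sets, realized as the equalizer of
# the two product maps of the existence theorem: keep exactly the tuples
# in the product of all objects that are compatible with every arrow.
from itertools import product

def limit_in_set(objects, morphisms):
    """objects: dict name -> finite set; morphisms: list of (dom, cod, f).
    Returns (ordered object names, list of compatible tuples)."""
    names = sorted(objects)
    index = {n: i for i, n in enumerate(names)}
    cone = []
    for tup in product(*(sorted(objects[n]) for n in names)):
        # keep the tuple iff every arrow f : X -> Y satisfies f(x_X) == x_Y
        if all(f(tup[index[dom]]) == tup[index[cod]] for dom, cod, f in morphisms):
            cone.append(tup)
    return names, cone

# Pullback-shaped diagram X -f-> Z <-g- Y with f(x) = x % 3 and g(y) = y % 3.
objects = {"X": {0, 1, 2, 3}, "Y": {0, 1, 2}, "Z": {0, 1, 2}}
morphisms = [("X", "Z", lambda x: x % 3), ("Y", "Z", lambda y: y % 3)]
names, cone = limit_in_set(objects, morphisms)
print(names)  # ['X', 'Y', 'Z']
print(cone)   # tuples (x, y, z) with x % 3 == z and y % 3 == z
```

The projections of the product, restricted to the compatible tuples, form the limiting cone; the fiber-product description of the pullback appears as the X and Y coordinates of these tuples.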
If the index category J is connected (and nonempty) then the unit of the adjunction is an isomorphism, so that lim is a left inverse of Δ. This fails if J is not connected. For example, if J is a discrete category, the components of the unit are the diagonal morphisms δ : N → NJ. Dually, if every diagram of shape J has a colimit in C (for J small) there exists a colimit functor colim : CJ → C which assigns each diagram its colimit. This functor is left adjoint to the diagonal functor Δ : C → CJ, and one has a natural isomorphism Hom(colim F, N) ≅ Hom(F, Δ(N)). The unit of this adjunction is the universal cocone from F to colim F. If J is connected (and nonempty) then the counit is an isomorphism, so that colim is a left inverse of Δ. Note that both the limit and the colimit functors are covariant functors. As representations of functors One can use Hom functors to relate limits and colimits in a category C to limits in Set, the category of sets. This follows, in part, from the fact that the covariant Hom functor Hom(N, –) : C → Set preserves all limits in C. By duality, the contravariant Hom functor must take colimits to limits. If a diagram F : J → C has a limit in C, denoted by lim F, there is a canonical isomorphism Hom(N, lim F) ≅ lim Hom(N, F–) which is natural in the variable N. Here the functor Hom(N, F–) is the composition of the Hom functor Hom(N, –) with F. This isomorphism is the unique one which respects the limiting cones. One can use the above relationship to define the limit of F in C. The first step is to observe that the limit of the functor Hom(N, F–) can be identified with the set of all cones from N to F: lim Hom(N, F–) = Cone(N, F). The limiting cone is given by the family of maps πX : Cone(N, F) → Hom(N, FX) where πX(ψ) = ψX. If one is given an object L of C together with a natural isomorphism Φ : Hom(–, L) → Cone(–, F), the object L will be a limit of F with the limiting cone given by ΦL(idL). In fancy language, this amounts to saying that a limit of F is a representation of the functor Cone(–, F) : Cop → Set. Dually, if a diagram F : J → C has a colimit in C, denoted colim F, there is a unique canonical isomorphism Hom(colim F, N) ≅ lim Hom(F–, N) which is natural in the variable N and respects the colimiting cones. Identifying the limit of Hom(F–, N) with the set Cocone(F, N), this relationship can be used to define the colimit of the diagram F as a representation of the functor Cocone(F, –). Interchange of limits and colimits of sets Let I be a finite category and J be a small filtered category. For any bifunctor F : I × J → Set there is a natural isomorphism colimJ limI F(i, j) ≅ limI colimJ F(i, j). In words, filtered colimits in Set commute with finite limits. In general, however, small colimits need not commute with small limits. Functors and limits If F : J → C is a diagram in C and G : C → D is a functor then by composition (recall that a diagram is just a functor) one obtains a diagram GF : J → D. A natural question is then: "How are the limits of GF related to those of F?" Preservation of limits A functor G : C → D induces a map from Cone(F) to Cone(GF): if Ψ is a cone from N to F then GΨ is a cone from GN to GF. The functor G is said to preserve the limits of F if (GL, Gφ) is a limit of GF whenever (L, φ) is a limit of F. (Note that if the limit of F does not exist, then G vacuously preserves the limits of F.) A functor G is said to preserve all limits of shape J if it preserves the limits of all diagrams F : J → C. For example, one can say that G preserves products, equalizers, pullbacks, etc. A continuous functor is one that preserves all small limits. One can make analogous definitions for colimits. 
For instance, a functor G preserves the colimits of F if (GL, Gφ) is a colimit of GF whenever (L, φ) is a colimit of F. A cocontinuous functor is one that preserves all small colimits. If C is a complete category, then, by the above existence theorem for limits, a functor G : C → D is continuous if and only if it preserves (small) products and equalizers. Dually, G is cocontinuous if and only if it preserves (small) coproducts and coequalizers. An important property of adjoint functors is that every right adjoint functor is continuous and every left adjoint functor is cocontinuous. Since adjoint functors exist in abundance, this gives numerous examples of continuous and cocontinuous functors. For a given diagram F : J → C and functor G : C → D, if both F and GF have specified limits there is a unique canonical morphism τF : G lim F → lim GF which respects the corresponding limit cones. The functor G preserves the limits of F if and only if this map is an isomorphism. If the categories C and D have all limits of shape J then lim is a functor and the morphisms τF form the components of a natural transformation τ : G ∘ lim → lim ∘ GJ. The functor G preserves all limits of shape J if and only if τ is a natural isomorphism. In this sense, the functor G can be said to commute with limits (up to a canonical natural isomorphism). Preservation of limits and colimits is a concept that only applies to covariant functors. For contravariant functors the corresponding notions would be a functor that takes colimits to limits, or one that takes limits to colimits. Lifting of limits A functor G : C → D is said to lift limits for a diagram F : J → C if whenever (L, φ) is a limit of GF there exists a limit (L′, φ′) of F such that G(L′, φ′) = (L, φ). A functor G lifts limits of shape J if it lifts limits for all diagrams of shape J. One can therefore talk about lifting products, equalizers, pullbacks, etc. Finally, one says that G lifts limits if it lifts all limits. There are dual definitions for the lifting of colimits. A functor G lifts limits uniquely for a diagram F if there is a unique preimage cone (L′, φ′) such that (L′, φ′) is a limit of F and G(L′, φ′) = (L, φ). One can show that G lifts limits uniquely if and only if it lifts limits and is amnestic. Lifting of limits is clearly related to preservation of limits. If G lifts limits for a diagram F and GF has a limit, then F also has a limit and G preserves the limits of F. It follows that: If G lifts all limits of shape J and D has all limits of shape J, then C also has all limits of shape J and G preserves these limits. If G lifts all small limits and D is complete, then C is also complete and G is continuous. The dual statements for colimits are equally valid. Creation and reflection of limits Let F : J → C be a diagram. A functor G : C → D is said to create limits for F if whenever (L, φ) is a limit of GF there exists a unique cone (L′, φ′) to F such that G(L′, φ′) = (L, φ), and furthermore, this cone is a limit of F. A functor G is said to reflect limits for F if each cone to F whose image under G is a limit of GF is already a limit of F. Dually, one can define creation and reflection of colimits. The following statements are easily seen to be equivalent: The functor G creates limits. The functor G lifts limits uniquely and reflects limits. There are examples of functors which lift limits uniquely but neither create nor reflect them. Examples Every representable functor C → Set preserves limits (but not necessarily colimits). 
In particular, for any object A of C, this is true of the covariant Hom functor Hom(A, –) : C → Set. The forgetful functor U : Grp → Set creates (and preserves) all small limits and filtered colimits; however, U does not preserve coproducts. This situation is typical of algebraic forgetful functors. The free functor F : Set → Grp (which assigns to every set S the free group over S) is left adjoint to the forgetful functor U and is, therefore, cocontinuous. This explains why the free product of two free groups G and H is the free group generated by the disjoint union of the generators of G and H. The inclusion functor Ab → Grp creates limits but does not preserve coproducts (the coproduct of two abelian groups being the direct sum). The forgetful functor Top → Set lifts limits and colimits uniquely but creates neither. Let Metc be the category of metric spaces with continuous functions for morphisms. The forgetful functor Metc → Set lifts finite limits but does not lift them uniquely. A note on terminology Older terminology referred to limits as "inverse limits" or "projective limits", and to colimits as "direct limits" or "inductive limits". This has been the source of a lot of confusion. There are several ways to remember the modern terminology. First of all, cokernels, coproducts, coequalizers, and codomains are types of colimits, whereas kernels, products, equalizers, and domains are types of limits. Second, the prefix "co" implies "first variable of the Hom". Terms like "cohomology" and "cofibration" all have a slightly stronger association with the first variable, i.e., the contravariant variable, of the bifunctor Hom(–, –).
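As a small finite illustration of the first example, the following Python sketch verifies that Hom(A, –) : Set → Set preserves a binary product: the canonical map h ↦ (π1 ∘ h, π2 ∘ h) from Hom(A, X × Y) to Hom(A, X) × Hom(A, Y) is a bijection. The sets and encodings are illustrative.

```python
# Finite check that Hom(A, -) preserves binary products in Set: the map
# h |-> (pi1 . h, pi2 . h) from Hom(A, X x Y) to Hom(A, X) x Hom(A, Y)
# is a bijection, so the hom-sets have matching cardinalities.
from itertools import product

def homset(A, B):
    """All functions A -> B, each encoded as a sorted tuple of (a, f(a)) pairs."""
    A = sorted(A)
    return [tuple(zip(A, values)) for values in product(sorted(B), repeat=len(A))]

A, X, Y = {0, 1}, {"a", "b"}, {0, 1, 2}
XxY = set(product(sorted(X), sorted(Y)))  # the product object X x Y

images = set()
for h in homset(A, XxY):
    p1h = tuple((a, xy[0]) for a, xy in h)  # compose with the first projection
    p2h = tuple((a, xy[1]) for a, xy in h)  # compose with the second projection
    images.add((p1h, p2h))

# Bijectivity: as many distinct images as there are pairs of maps (A->X, A->Y).
assert len(images) == len(homset(A, X)) * len(homset(A, Y)) == len(homset(A, XxY))
print(len(images))  # 36
```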
Mathematics
Category theory
null
39316
https://en.wikipedia.org/wiki/Compass
Compass
A compass is a device that shows the cardinal directions used for navigation and geographic orientation. It commonly consists of a magnetized needle or other element, such as a compass card or compass rose, which can pivot to align itself with magnetic north. Other methods may be used, including gyroscopes, magnetometers, and GPS receivers. Compasses often show angles in degrees: north corresponds to 0°, and the angles increase clockwise, so east is 90°, south is 180°, and west is 270°. These numbers allow the compass to show azimuths or bearings which are commonly stated in degrees. If local variation between magnetic north and true north is known, then direction of magnetic north also gives direction of true north. Among the Four Great Inventions, the magnetic compass was first invented as a device for divination as early as the Chinese Han dynasty (since c. 206 BC), and later adopted for navigation by the Song dynasty Chinese during the 11th century. The first usage of a compass recorded in Western Europe and the Islamic world occurred around 1190. The magnetic compass is the most familiar compass type. It functions as a pointer to "magnetic north", the local magnetic meridian, because the magnetized needle at its heart aligns itself with the horizontal component of the Earth's magnetic field. The magnetic field exerts a torque on the needle, pulling the North end or pole of the needle approximately toward the Earth's North magnetic pole, and pulling the other toward the Earth's South magnetic pole. The needle is mounted on a low-friction pivot point, in better compasses a jewel bearing, so it can turn easily. When the compass is held level, the needle turns until, after a few seconds to allow oscillations to die out, it settles into its equilibrium orientation. In navigation, directions on maps are usually expressed with reference to geographical or true north, the direction toward the Geographical North Pole, the rotation axis of the Earth. Depending on where the compass is located on the surface of the Earth the angle between true north and magnetic north, called magnetic declination can vary widely with geographic location. The local magnetic declination is given on most maps, to allow the map to be oriented with a compass parallel to true north. The locations of the Earth's magnetic poles slowly change with time, which is referred to as geomagnetic secular variation. The effect of this means a map with the latest declination information should be used. Some magnetic compasses include means to manually compensate for the magnetic declination, so that the compass shows true directions. History Natural magnet One of the earliest known references to lodestone's magnetic properties was made by 6th century BC Greek philosopher Thales of Miletus, whom the ancient Greeks credited with discovering lodestone's attraction to iron and other lodestones. The name magnet may come from lodestones found in Magnesia, Anatolia. The ancient Indian medical text Sushruta Samhita describes using magnetic properties of the lodestone to remove arrows embedded in a person's body. The earliest Chinese literary reference to magnetism occurs in the 4th-century BC Book of the Devil Valley Master (Guiguzi). In the chronicle Lüshi Chunqiu, from the 2nd century BC, it is explicitly stated that "the lodestone makes iron come or it attracts it." Artificial compass Some claims state that the first compasses in ancient Han dynasty China were made of lodestone, a naturally magnetized ore of iron. 
The earliest mention of a needle's attraction appears in a work composed between 20 and 100 AD, the Lunheng (Balanced Inquiries): "A lodestone attracts a needle." In the 2nd century BC, Chinese geomancers were experimenting with the magnetic properties of lodestone to make a "south-pointing spoon" for divination. When placed on a smooth bronze plate, the spoon would invariably rotate to a north–south axis. While this has been shown to work, archaeologists have yet to discover an actual spoon made of magnetite in a Han tomb. The wet compass reached Southern India in the 4th century AD. Later compasses were made of iron needles, magnetized by striking them with a lodestone; these appeared in China by 1088 during the Song dynasty, as described by Shen Kuo. Dry compasses began to appear around 1300 in Medieval Europe and the Islamic world. These were supplanted in the early 20th century by the liquid-filled magnetic compass. 
The military forces of a few nations, notably the United States Army, continue to issue field compasses with magnetized compass dials or cards instead of needles. A magnetic card compass is usually equipped with an optical, lensatic, or prismatic sight, which allows the user to read the bearing or azimuth off the compass card while simultaneously aligning the compass with the objective (see photo). Magnetic card compass designs normally require a separate protractor tool in order to take bearings directly from a map. The U.S. M-1950 military lensatic compass does not use a liquid-filled capsule as a damping mechanism, but rather electromagnetic induction to control oscillation of its magnetized card. A "deep-well" design is used to allow the compass to be used globally with a card tilt of up to 8 degrees without impairing accuracy. As induction forces provide less damping than fluid-filled designs, a needle lock is fitted to the compass to reduce wear, operated by the folding action of the rear sight/lens holder. The use of air-filled induction compasses has declined over the years, as they may become inoperative or inaccurate in freezing temperatures or extremely humid environments due to condensation or water ingress. Some military compasses, like the U.S. M-1950 (Cammenga 3H) military lensatic compass, the Silva 4b Militaire, and the Suunto M-5N(T), contain the radioactive material tritium (3H) and a combination of phosphors. The U.S. M-1950 equipped with self-luminous lighting contains 120 mCi (millicuries) of tritium. The purpose of the tritium and phosphors is to provide illumination for the compass, via radioluminescent tritium illumination, which does not require the compass to be "recharged" by sunlight or artificial light. However, tritium has a half-life of only about 12 years, so a compass that contains 120 mCi of tritium when new will contain only 60 mCi when it is 12 years old, 30 mCi when it is 24 years old, and so on. Consequently, the illumination of the display will fade. Mariners' compasses can have two or more magnets permanently attached to a compass card, which moves freely on a pivot. A lubber line, which can be a marking on the compass bowl or a small fixed needle, indicates the ship's heading on the compass card. Traditionally the card is divided into thirty-two points (known as rhumbs), although modern compasses are marked in degrees rather than cardinal points. The glass-covered box (or bowl) contains a suspended gimbal within a binnacle. This preserves the horizontal position. The magnetic compass is very reliable at moderate latitudes, but in geographic regions near the Earth's magnetic poles it becomes unusable. As the compass is moved closer to one of the magnetic poles, the magnetic declination, the difference between the direction to geographical north and magnetic north, becomes greater and greater. At some point close to the magnetic pole the compass will not indicate any particular direction but will begin to drift. Also, the needle starts to point up or down when getting closer to the poles, because of the so-called magnetic inclination. Cheap compasses with bad bearings may get stuck because of this and therefore indicate a wrong direction. Magnetic compasses are influenced by any fields other than Earth's. Local environments may contain magnetic mineral deposits and artificial sources such as MRIs, large iron or steel bodies, electrical engines or strong permanent magnets. 
Any electrically conductive body produces its own magnetic field when it is carrying an electric current. Magnetic compasses are prone to errors in the neighborhood of such bodies. Some compasses include magnets which can be adjusted to compensate for external magnetic fields, making the compass more reliable and accurate. A compass is also subject to errors when it is accelerated or decelerated in an airplane or automobile. Depending on which of the Earth's hemispheres the compass is located in, and on whether the force is an acceleration or a deceleration, the compass will increase or decrease the indicated heading. Compasses that include compensating magnets are especially prone to these errors, since accelerations tilt the needle, bringing it closer to or further from the magnets. Another error of the mechanical compass is the turning error. When one turns from a heading of east or west the compass will lag behind the turn or lead ahead of the turn. Magnetometers, and substitutes such as gyrocompasses, are more stable in such situations. Variants A thumb compass is a type of compass commonly used in orienteering, a sport in which map reading and terrain association are paramount. Consequently, most thumb compasses have minimal or no degree markings at all, and are normally used only to orient the map to magnetic north. An oversized rectangular needle or north indicator aids visibility. Thumb compasses are also often transparent so that an orienteer can hold a map in the hand with the compass and see the map through the compass. The best models use rare-earth magnets to reduce needle settling time to 1 second or less. The earth inductor compass (or "induction compass") determines directions using the principle of electromagnetic induction, with the Earth's magnetic field acting as the induction field for an electric generator, the measurable output of which varies depending on orientation. A vertical card magnetic compass installed in an airplane can eliminate some magnetic dipping errors while making the compass less confusing to read in the cockpit. The compass dial is driven by a set of gears controlled by a magnet mounted on a shaft. Eddy current induced into a damping cup also helps mitigate magnet oscillation. Small electronic compasses (eCompasses) found in clocks, mobile phones, and other electronic devices are solid-state microelectromechanical systems (MEMS) compasses, usually built out of two or three magnetic field sensors that provide data for a microprocessor. Often, the device is a discrete component which outputs either a digital or analog signal proportional to its orientation. This signal is interpreted by a controller or microprocessor and either used internally, or sent to a display unit. The sensor uses highly calibrated internal electronics to measure the response of the device to the Earth's magnetic field. Apart from navigational compasses, other specialty compasses have also been designed to accommodate specific uses. These include: The Qibla compass, which is used by Muslims to show the direction to Mecca for prayers. The optical or prismatic compass, most often used by surveyors, but also by cave explorers, foresters, and geologists. These compasses generally use a liquid-damped capsule and magnetized floating compass dial with an integral optical sight, often fitted with built-in photoluminescent or battery-powered illumination. Using the optical sight, such compasses can be read with extreme accuracy when taking bearings to an object, often to fractions of a degree. 
Most of these compasses are designed for heavy-duty use, with high-quality needles and jeweled bearings, and many are fitted for tripod mounting for additional accuracy. The trough compass, mounted in a rectangular box whose length was often several times its width, dates back several centuries. It was used for land surveying, particularly with plane tables. The luopan, a compass used by feng shui practitioners. Construction A magnetic rod is required when constructing a compass. This can be created by aligning an iron or steel rod with Earth's magnetic field and then tempering or striking it. However, this method produces only a weak magnet so other methods are preferred. For example, a magnetised rod can be created by repeatedly rubbing an iron rod with a magnetic lodestone. This magnetised rod (or magnetic needle) is then placed on a low-friction surface to allow it to freely pivot to align itself with the magnetic field. It is then labeled so the user can distinguish the north-pointing from the south-pointing end; in modern convention the north end is typically marked in some way. If a needle is rubbed on a lodestone or other magnet, the needle becomes magnetized. When it is inserted in a cork or piece of wood, and placed in a bowl of water it becomes a compass. Such devices were universally used as compasses until the invention of the box-like compass with a "dry" pivoting needle, sometime around 1300. Originally, many compasses were marked only as to the direction of magnetic north, or to the four cardinal points (north, south, east, west). Later, these were divided, in China into 24, and in Europe into 32 equally spaced points around the compass card. For a table of the thirty-two points, see compass points. In the modern era, the 360-degree system took hold. This system is still in use today for civilian navigators. The degree system spaces 360 equidistant points located clockwise around the compass dial. In the 19th century some European nations adopted the "grad" (also called grade or gon) system instead, where a right angle is 100 grads, to give a circle of 400 grads. Dividing grads into tenths to give a circle of 4000 decigrades has also been used in armies. Most military forces have adopted the French "millième" system. This is an approximation of a milliradian (6283 per circle), in which the compass dial is spaced into 6400 units or "mils" for additional precision when measuring angles, laying artillery, etc. The value to the military is that one angular mil subtends approximately one metre at a distance of one kilometre. Imperial Russia used a system derived by dividing the circumference of a circle into chords of the same length as the radius. Each of these was divided into 100 spaces, giving a circle of 600. The Soviet Union divided these into tenths to give a circle of 6000 units, usually translated as "mils". This system was adopted by the former Warsaw Pact countries, e.g., the Soviet Union, East Germany, etc., often counterclockwise (see picture of wrist compass). This is still in use in Russia. Because the Earth's magnetic field's inclination and intensity vary at different latitudes, compasses are often balanced during manufacture so that the dial or needle will be level, eliminating needle drag. Most manufacturers balance their compass needles for one of five zones, ranging from zone 1, covering most of the Northern Hemisphere, to zone 5 covering Australia and the southern oceans. 
This individual zone balancing prevents excessive dipping of one end of the needle, which can cause the compass card to stick and give false readings. Some compasses feature a special needle balancing system that will accurately indicate magnetic north regardless of the particular magnetic zone. Other magnetic compasses have a small sliding counterweight installed on the needle. This sliding counterweight, called a "rider", can be used for counterbalancing the needle against the dip caused by inclination if the compass is taken to a zone with a higher or lower dip. Like any magnetic device, compasses are affected by nearby ferrous materials, as well as by strong local electromagnetic forces. Compasses used for wilderness land navigation should not be used in proximity to ferrous metal objects or electromagnetic fields (car electrical systems, automobile engines, steel pitons, etc.) as that can affect their accuracy. Compasses are particularly difficult to use accurately in or near trucks, cars or other mechanized vehicles even when corrected for deviation by the use of built-in magnets or other devices. Large amounts of ferrous metal combined with the on-and-off electrical fields caused by the vehicle's ignition and charging systems generally result in significant compass errors. At sea, a ship's compass must also be corrected for errors, called deviation, caused by iron and steel in its structure and equipment. The ship is swung, that is rotated about a fixed point while its heading is noted by alignment with fixed points on the shore. A compass deviation card is prepared so that the navigator can convert between compass and magnetic headings. The compass can be corrected in three ways. First the lubber line can be adjusted so that it is aligned with the direction in which the ship travels, then the effects of permanent magnets can be corrected for by small magnets fitted within the case of the compass. The effect of ferromagnetic materials in the compass's environment can be corrected by two iron balls mounted on either side of the compass binnacle in concert with permanent magnets and a Flinders bar. When the deviation is expressed as a Fourier series in the heading, the constant coefficient a0 represents the error in the lubber line, while the coefficients a1, b1 represent the ferromagnetic effects and a2, b2 the non-ferromagnetic component. A similar process is used to calibrate the compass in light general aviation aircraft, with the compass deviation card often mounted permanently just above or below the magnetic compass on the instrument panel. Fluxgate electronic compasses can be calibrated automatically, and can also be programmed with the correct local compass variation so as to indicate the true heading. Use A magnetic compass points to the magnetic north pole, which is approximately 1,000 miles from the true geographic North Pole. A magnetic compass's user can determine true North by finding the magnetic north and then correcting for variation and deviation. Variation is defined as the angle between the direction of true (geographic) north and the direction of the meridian between the magnetic poles. Variation values for most of the oceans had been calculated and published by 1914. Deviation refers to the response of the compass to local magnetic fields caused by the presence of iron and electric currents; one can partly compensate for these by careful location of the compass and the placement of compensating magnets under the compass itself. 
Mariners have long known that these measures do not completely cancel deviation; hence, they performed an additional step by measuring the compass bearing of a landmark with a known magnetic bearing. They then pointed their ship to the next compass point and measured again, graphing their results. In this way, correction tables could be created, which would be consulted when compasses were used when traveling in those locations. Mariners are concerned about very accurate measurements; however, casual users need not be concerned with differences between magnetic and true North. Except in areas of extreme magnetic declination variance (20 degrees or more), this is enough to protect from walking in a substantially different direction than expected over short distances, provided the terrain is fairly flat and visibility is not impaired. By carefully recording distances (time or paces) and magnetic bearings traveled, one can plot a course and return to one's starting point using the compass alone. Compass navigation in conjunction with a map (terrain association) requires a different method. To take a map bearing or true bearing (a bearing taken in reference to true, not magnetic north) to a destination with a protractor compass, the edge of the compass is placed on the map so that it connects the current location with the desired destination (some sources recommend physically drawing a line). The orienting lines in the base of the compass dial are then rotated to align with actual or true north by aligning them with a marked line of longitude (or the vertical margin of the map), ignoring the compass needle entirely. The resulting true bearing or map bearing may then be read at the degree indicator or direction-of-travel (DOT) line, which may be followed as an azimuth (course) to the destination. If a magnetic north bearing or compass bearing is desired, the compass must be adjusted by the amount of magnetic declination before using the bearing so that both map and compass are in agreement. In the given example, the large mountain in the second photo was selected as the target destination on the map. Some compasses allow the scale to be adjusted to compensate for the local magnetic declination; if adjusted correctly, the compass will give the true bearing instead of the magnetic bearing. The modern hand-held protractor compass always has an additional direction-of-travel (DOT) arrow or indicator inscribed on the baseplate. To check one's progress along a course or azimuth, or to ensure that the object in view is indeed the destination, a new compass reading may be taken to the target if visible (here, the large mountain). After pointing the DOT arrow on the baseplate at the target, the compass is oriented so that the needle is superimposed over the orienting arrow in the capsule. The resulting bearing indicated is the magnetic bearing to the target. Again, if one is using "true" or map bearings, and the compass does not have preset, pre-adjusted declination, one must additionally add or subtract magnetic declination to convert the magnetic bearing into a true bearing. The exact value of the magnetic declination is place-dependent and varies over time, though declination is frequently given on the map itself or obtainable on-line from various sites. If the hiker has been following the correct path, the compass' corrected (true) indicated bearing should closely correspond to the true bearing previously obtained from the map. 
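The bearing arithmetic described above can be summarized in a short Python sketch, using the common sign convention that easterly variation (declination) and deviation are positive and westerly ones negative; the numerical values are illustrative.

```python
# Conversion between compass, magnetic, and true bearings:
# true = compass + deviation + variation (east positive, west negative).

def normalize(deg):
    """Wrap a bearing into [0, 360)."""
    return deg % 360.0

def compass_to_true(compass_bearing, variation, deviation=0.0):
    return normalize(compass_bearing + deviation + variation)

def true_to_compass(true_bearing, variation, deviation=0.0):
    return normalize(true_bearing - variation - deviation)

# Example: compass reads 037 degrees, local variation 10 degrees W (-10),
# deviation from the ship's card 2 degrees E (+2).
print(compass_to_true(37.0, variation=-10.0, deviation=2.0))  # 29.0 (true)
print(true_to_compass(29.0, variation=-10.0, deviation=2.0))  # 37.0 (compass)
```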
A compass should be laid down on a level surface so that the needle rests only on the pivot bearing fused to the compass casing; if used at a tilt, the needle might touch the casing and not move freely, hence failing to point to magnetic north accurately and giving a faulty reading. To check that the needle is well leveled, look closely at it and tilt the compass slightly to see whether the needle sways freely from side to side without contacting the casing. If the needle tilts to one direction, tilt the compass slightly and gently in the opposing direction until the needle is horizontal, lengthwise.

Items to avoid around compasses are magnets of any kind and any electronics. Magnetic fields from electronics can easily disrupt the needle, preventing it from aligning with the Earth's magnetic field and causing inaccurate readings. The Earth's magnetic field is quite weak, measuring about 0.5 gauss, and magnetic fields from household electronics can easily exceed it, overpowering the compass needle. Exposure to strong magnets or magnetic interference can sometimes cause the magnetic poles of the compass needle to drift or even reverse. Avoid iron-rich deposits when using a compass, for example certain rocks which contain magnetic minerals such as magnetite. Such rocks often have a dark surface with a metallic luster, though not all rocks bearing magnetic minerals show this indication. To see whether a rock or an area is causing interference, step out of the area and watch whether the needle moves. If it does, the area or rock previously near the compass is causing interference and should be avoided.

Non-magnetic compasses
There are other ways to find north than the use of magnetism; from a navigational point of view, a total of seven possible ways exist (where magnetism is one of the seven). Two sensors that use two of the remaining six principles are often also called compasses: the gyrocompass and the GPS-compass. A gyrocompass is similar to a gyroscope. It is a non-magnetic compass that finds true north by using an (electrically powered) fast-spinning wheel and friction forces in order to exploit the rotation of the Earth. Gyrocompasses are widely used on ships. They have two main advantages over magnetic compasses: they find true north, i.e., the direction of Earth's rotational axis, as opposed to magnetic north, and they are not affected by ferromagnetic metal (including iron, steel, cobalt, nickel, and various alloys) in a ship's hull. (No compass is affected by nonferromagnetic metal, although a magnetic compass will be affected by any kind of wires with electric current passing through them.) Large ships typically rely on a gyrocompass, using the magnetic compass only as a backup. Increasingly, electronic fluxgate compasses are used on smaller vessels. However, magnetic compasses are still widely in use as they can be small, use simple reliable technology, are comparatively cheap, are often easier to use than GPS, require no energy supply, and, unlike GPS, are not affected by objects, e.g. trees, that can block the reception of electronic signals. GPS receivers using two or more antennae mounted separately and blending the data with an inertial motion unit (IMU) can now achieve 0.02° heading accuracy and have startup times of seconds rather than the hours required by gyrocompass systems.
The devices accurately determine the positions (latitude, longitude, and altitude) of the antennae on the Earth, from which the cardinal directions can be calculated. Manufactured primarily for maritime and aviation applications, they can also detect the pitch and roll of ships. Small, portable GPS receivers with only a single antenna can also determine directions if they are being moved, even if only at walking pace. By accurately determining its position on the Earth at times a few seconds apart, the device can calculate its speed and the true bearing (relative to true north) of its direction of motion. Frequently, it is preferable to measure the direction in which a vehicle is actually moving, rather than its heading, i.e. the direction in which its nose is pointing. These directions may be different if there is a crosswind or tidal current. GPS compasses share the main advantages of gyrocompasses. They determine true north, as opposed to magnetic north, and they are unaffected by perturbations of the Earth's magnetic field. Additionally, compared with gyrocompasses, they are much cheaper, they work better in polar regions, they are less prone to be affected by mechanical vibration, and they can be initialized far more quickly. However, they depend on the functioning of, and communication with, the GPS satellites, which might be disrupted by an electronic attack or by the effects of a severe solar storm. Gyrocompasses remain in use for military purposes (especially in submarines, where magnetic and GPS compasses are useless), but have been largely superseded by GPS compasses, with magnetic backups, in civilian contexts.
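The bearing computed from two position fixes (whether from a moving single-antenna receiver or from a dual-antenna unit) follows the standard great-circle forward-azimuth formula. A minimal sketch, using hypothetical coordinates a few metres apart:

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing, in degrees clockwise from true north,
    from point 1 to point 2 (coordinates in decimal degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

# Two hypothetical antenna fixes; the result approximates the vessel's true heading.
print(f"{initial_bearing(51.000000, 0.000000, 51.000080, 0.000050):.1f}°")
```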
Congruence (geometry)
In geometry, two figures or objects are congruent if they have the same shape and size, or if one has the same shape and size as the mirror image of the other. More formally, two sets of points are called congruent if, and only if, one can be transformed into the other by an isometry, i.e., a combination of rigid motions, namely a translation, a rotation, and a reflection. This means that either object can be repositioned and reflected (but not resized) so as to coincide precisely with the other object. Therefore, two distinct plane figures on a piece of paper are congruent if they can be cut out and then matched up completely. Turning the paper over is permitted.

In elementary geometry the word congruent is often used as follows. The word equal is often used in place of congruent for these objects. Two line segments are congruent if they have the same length. Two angles are congruent if they have the same measure. Two circles are congruent if they have the same diameter. In this sense, the sentence "two plane figures are congruent" implies that their corresponding characteristics are congruent (or equal), including not just their corresponding sides and angles, but also their corresponding diagonals, perimeters, and areas. The related concept of similarity applies if the objects have the same shape but do not necessarily have the same size. (Most definitions consider congruence to be a form of similarity, although a minority require that the objects have different sizes in order to qualify as similar.)

Determining congruence of polygons
For two polygons to be congruent, they must have an equal number of sides (and hence an equal number of vertices). Two polygons with n sides are congruent if and only if they each have numerically identical sequences (even if clockwise for one polygon and counterclockwise for the other) side-angle-side-angle-... for n sides and n angles. Congruence of polygons can be established graphically as follows: First, match and label the corresponding vertices of the two figures. Second, draw a vector from one of the vertices of one of the figures to the corresponding vertex of the other figure. Translate the first figure by this vector so that these two vertices match. Third, rotate the translated figure about the matched vertex until one pair of corresponding sides matches. Fourth, reflect the rotated figure about this matched side until the figures match. If at any time the step cannot be completed, the polygons are not congruent.
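A minimal sketch of the side-angle sequence test just described, assuming each polygon is supplied as parallel lists of side lengths and interior angles (the conventions and names here are illustrative):

```python
def congruent_polygons(sides_a, angles_a, sides_b, angles_b, tol=1e-6):
    """Side-angle sequence test: angles[i] is the interior angle between
    sides[i] and sides[i+1] (indices mod n). The polygons are congruent iff
    the (side, angle) sequences match under some cyclic rotation, traversing
    the second polygon in either direction."""
    n = len(sides_a)
    if n != len(sides_b):
        return False
    target = list(zip(sides_a, angles_a))
    forward = list(zip(sides_b, angles_b))
    # Reversing the traversal direction reverses the side order and re-pairs
    # each side with the angle at its other end.
    backward = [(sides_b[-i - 1], angles_b[(-i - 2) % n]) for i in range(n)]
    for cand in (forward, backward):
        for k in range(n):
            rotated = cand[k:] + cand[:k]
            if all(abs(s1 - s2) <= tol and abs(a1 - a2) <= tol
                   for (s1, a1), (s2, a2) in zip(target, rotated)):
                return True
    return False

# A 3-4-5 right triangle listed from two different starting vertices:
print(congruent_polygons([3, 4, 5], [90.0, 36.87, 53.13],
                         [4, 5, 3], [36.87, 53.13, 90.0], tol=0.01))  # True
```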
Congruence of triangles
Two triangles are congruent if their corresponding sides are equal in length, and their corresponding angles are equal in measure. Symbolically, for two triangles ABC and DEF, congruence and incongruence are written △ABC ≅ △DEF and △ABC ≇ △DEF. In many cases it is sufficient to establish the equality of three corresponding parts and use one of the following results to deduce the congruence of the two triangles.

Determining congruence
Sufficient evidence for congruence between two triangles in Euclidean space can be shown through the following comparisons:
SAS (side-angle-side): If two pairs of sides of two triangles are equal in length, and the included angles are equal in measurement, then the triangles are congruent.
SSS (side-side-side): If three pairs of sides of two triangles are equal in length, then the triangles are congruent.
ASA (angle-side-angle): If two pairs of angles of two triangles are equal in measurement, and the included sides are equal in length, then the triangles are congruent.
The ASA postulate is attributed to Thales of Miletus. In most systems of axioms, the three criteria, SAS, SSS and ASA, are established as theorems. In the School Mathematics Study Group system SAS is taken as one (#15) of 22 postulates.
AAS (angle-angle-side): If two pairs of angles of two triangles are equal in measurement, and a pair of corresponding non-included sides are equal in length, then the triangles are congruent. AAS is equivalent to an ASA condition, by the fact that if any two angles are given, so is the third angle, since their sum must be 180°. ASA and AAS are sometimes combined into a single condition, AAcorrS: any two angles and a corresponding side.
RHS (right-angle-hypotenuse-side), also known as HL (hypotenuse-leg): If two right-angled triangles have their hypotenuses equal in length, and a pair of other sides are equal in length, then the triangles are congruent.

Side-side-angle
The SSA condition (side-side-angle), which specifies two sides and a non-included angle (also known as ASS, or angle-side-side), does not by itself prove congruence. In order to show congruence, additional information is required, such as the measure of the corresponding angles and in some cases the lengths of the two pairs of corresponding sides. There are a few possible cases:
If two triangles satisfy the SSA condition and the length of the side opposite the angle is greater than or equal to the length of the adjacent side (SSA, or long side-short side-angle), then the two triangles are congruent. The opposite side is sometimes longer when the corresponding angles are acute, but it is always longer when the corresponding angles are right or obtuse. Where the angle is a right angle, also known as the hypotenuse-leg (HL) postulate or the right-angle-hypotenuse-side (RHS) condition, the third side can be calculated using the Pythagorean theorem, thus allowing the SSS postulate to be applied.
If two triangles satisfy the SSA condition and the corresponding angles are acute and the length of the side opposite the angle is equal to the length of the adjacent side multiplied by the sine of the angle, then the two triangles are congruent.
If two triangles satisfy the SSA condition and the corresponding angles are acute and the length of the side opposite the angle is greater than the length of the adjacent side multiplied by the sine of the angle (but less than the length of the adjacent side), then the two triangles cannot be shown to be congruent. This is the ambiguous case, and two different triangles can be formed from the given information, but further information distinguishing them can lead to a proof of congruence.
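The ambiguous case can be made concrete with the law of sines; a minimal sketch (the function name and sample values are illustrative) that returns the possible measures of the angle opposite the second side:

```python
import math

def ssa_solutions(a, b, angle_a_deg):
    """Given sides a and b and the non-included angle A (opposite side a,
    with side b adjacent), use the law of sines, sin B = b*sin A / a, to list
    the possible measures of angle B: zero, one, or two triangles may exist."""
    sin_b = b * math.sin(math.radians(angle_a_deg)) / a
    if sin_b > 1:
        return []                      # no triangle at all
    b1 = math.degrees(math.asin(sin_b))
    solutions = [b1]
    b2 = 180.0 - b1
    # The obtuse alternative counts only if the angle sum still fits a triangle.
    if abs(b2 - b1) > 1e-9 and angle_a_deg + b2 < 180.0:
        solutions.append(b2)
    return solutions

print(ssa_solutions(7, 10, 30))   # two solutions: the ambiguous case (a < b)
print(ssa_solutions(12, 10, 30))  # one solution: opposite side >= adjacent side
```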
Angle-angle-angle
In Euclidean geometry, AAA (angle-angle-angle) (or just AA, since in Euclidean geometry the angles of a triangle add up to 180°) does not provide information regarding the size of the two triangles and hence proves only similarity and not congruence in Euclidean space. However, in spherical geometry and hyperbolic geometry (where the sum of the angles of a triangle varies with size) AAA is sufficient for congruence on a given curvature of surface.

CPCTC
This acronym stands for Corresponding Parts of Congruent Triangles are Congruent, which is an abbreviated version of the definition of congruent triangles. In more detail, it is a succinct way to say that if triangles ABC and DEF are congruent, that is, △ABC ≅ △DEF, with corresponding pairs of angles at vertices A and D, B and E, and C and F, and with corresponding pairs of sides AB and DE, BC and EF, and CA and FD, then the following statements are true: ∠A ≅ ∠D, ∠B ≅ ∠E, ∠C ≅ ∠F, AB ≅ DE, BC ≅ EF, and CA ≅ FD. The statement is often used as a justification in elementary geometry proofs when a conclusion of the congruence of parts of two triangles is needed after the congruence of the triangles has been established. For example, if two triangles have been shown to be congruent by the SSS criteria and a statement that corresponding angles are congruent is needed in a proof, then CPCTC may be used as a justification of this statement. A related theorem is CPCFC, in which "triangles" is replaced with "figures" so that the theorem applies to any pair of polygons or polyhedrons that are congruent.

Definition of congruence in analytic geometry
In a Euclidean system, congruence is fundamental; it is the counterpart of equality for numbers. In analytic geometry, congruence may be defined intuitively thus: two mappings of figures onto one Cartesian coordinate system are congruent if and only if, for any two points in the first mapping, the Euclidean distance between them is equal to the Euclidean distance between the corresponding points in the second mapping. A more formal definition states that two subsets A and B of Euclidean space Rn are called congruent if there exists an isometry f : Rn → Rn (an element of the Euclidean group E(n)) with f(A) = B. Congruence is an equivalence relation.
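This distance-based definition translates directly into a check on labelled point sets; a minimal sketch, assuming the correspondence between points is given by their order:

```python
import itertools, math

def congruent_point_sets(A, B, tol=1e-9):
    """Two equally sized tuples of points (with a given correspondence) are
    congruent iff every pairwise Euclidean distance in A equals the distance
    between the corresponding pair in B."""
    if len(A) != len(B):
        return False
    return all(abs(math.dist(p, q) - math.dist(r, s)) <= tol
               for (p, q), (r, s) in zip(itertools.combinations(A, 2),
                                         itertools.combinations(B, 2)))

# A 3-4-5 triangle and a reflected copy: congruent, since reflection is an isometry.
triangle = [(0, 0), (3, 0), (3, 4)]
mirrored = [(0, 0), (-3, 0), (-3, 4)]
print(congruent_point_sets(triangle, mirrored))  # True
```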
Congruent conic sections
Two conic sections are congruent if their eccentricities and one other distinct parameter characterizing them are equal. Their eccentricities establish their shapes, equality of which is sufficient to establish similarity, and the second parameter then establishes size. Since two circles, parabolas, or rectangular hyperbolas always have the same eccentricity (specifically 0 in the case of circles, 1 in the case of parabolas, and √2 in the case of rectangular hyperbolas), two circles, parabolas, or rectangular hyperbolas need to have only one other common parameter value, establishing their size, for them to be congruent.

Congruent polyhedra
For two polyhedra with the same combinatorial type (that is, the same number E of edges, the same number of faces, and the same number of sides on corresponding faces), there exists a set of E measurements that can establish whether or not the polyhedra are congruent. The number is tight, meaning that fewer than E measurements are not enough if the polyhedra are generic among their combinatorial type. But fewer measurements can work for special cases. For example, cubes have 12 edges, but 9 measurements are enough to decide if a polyhedron of that combinatorial type is congruent to a given regular cube.

Congruent triangles on a sphere
As with plane triangles, on a sphere two triangles sharing the same sequence of angle-side-angle (ASA) are necessarily congruent (that is, they have three identical sides and three identical angles). This can be seen as follows: One can situate one of the vertices with a given angle at the south pole and run the side with given length up the prime meridian. Knowing both angles at either end of the segment of fixed length ensures that the other two sides emanate with a uniquely determined trajectory, and thus will meet each other at a uniquely determined point; thus ASA is valid. The congruence theorems side-angle-side (SAS) and side-side-side (SSS) also hold on a sphere; in addition, if two spherical triangles have an identical angle-angle-angle (AAA) sequence, they are congruent (unlike for plane triangles). The plane-triangle congruence theorem angle-angle-side (AAS) does not hold for spherical triangles. As in plane geometry, side-side-angle (SSA) does not imply congruence.

Notation
A symbol commonly used for congruence is an equals symbol with a tilde above it, ≅, corresponding to the Unicode character 'approximately equal to' (U+2245). In the UK, the three-bar equal sign ≡ (U+2261) is sometimes used.
Uracil
Uracil (symbol U or Ura) is one of the four nucleobases in the nucleic acid RNA. The others are adenine (A), cytosine (C), and guanine (G). In RNA, uracil binds to adenine via two hydrogen bonds. In DNA, the uracil nucleobase is replaced by thymine (T). Uracil is a demethylated form of thymine.

Uracil is a common and naturally occurring pyrimidine derivative. The name "uracil" was coined in 1885 by the German chemist Robert Behrend, who was attempting to synthesize derivatives of uric acid. Originally discovered in 1900 by Alberto Ascoli, it was isolated by hydrolysis of yeast nuclein; it was also found in bovine thymus and spleen, herring sperm, and wheat germ. It is a planar, unsaturated compound that has the ability to absorb light.

Uracil that was formed extraterrestrially has been detected in the Murchison meteorite, in near-Earth asteroid Ryugu, and possibly on the surface of the moon Titan. It has been synthesized under cold laboratory conditions similar to outer space, from pyrimidine embedded in water ice and exposed to ultraviolet light.

Properties
In RNA, uracil base-pairs with adenine and replaces thymine during DNA transcription. Methylation of uracil produces thymine. In DNA, the evolutionary substitution of thymine for uracil may have increased DNA stability and improved the efficiency of DNA replication (discussed below). Uracil pairs with adenine through hydrogen bonding. When base pairing with adenine, uracil acts as both a hydrogen bond acceptor and a hydrogen bond donor. In RNA, uracil binds with a ribose sugar to form the ribonucleoside uridine. When a phosphate attaches to uridine, uridine 5′-monophosphate is produced.

Uracil undergoes amide-imidic acid tautomeric shifts because any nuclear instability the molecule may have from the lack of formal aromaticity is compensated by the cyclic-amidic stability. The amide tautomer is referred to as the lactam structure, while the imidic acid tautomer is referred to as the lactim structure. These tautomeric forms are predominant at pH 7. The lactam structure is the most common form of uracil.

Uracil also recycles itself to form nucleotides by undergoing a series of phosphoribosyltransferase reactions. Degradation of uracil produces the substrates β-alanine, carbon dioxide, and ammonia:

uracil → β-alanine + CO2 + NH3

Oxidative degradation of uracil produces urea and maleic acid in the presence of H2O2 and Fe2+, or in the presence of diatomic oxygen and Fe2+.

Uracil is a weak acid. The first site of ionization of uracil is not known. The negative charge is placed on the oxygen anion and produces a pKa of less than or equal to 12. The basic pKa = −3.4, while the acidic pKa = 9.389. In the gas phase, uracil has four sites that are more acidic than water.
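Given the acidic pKa of 9.389 quoted above, the Henderson–Hasselbalch relation gives the ionized fraction at a given pH; a small illustrative calculation:

```python
def fraction_deprotonated(ph, pka):
    """Henderson-Hasselbalch: [A-]/[HA] = 10**(pH - pKa); returns the
    fraction of molecules in the deprotonated (anionic) form."""
    ratio = 10 ** (ph - pka)
    return ratio / (1 + ratio)

# At physiological pH 7.4, with pKa = 9.389, only about 1% of uracil is ionized:
print(f"{fraction_deprotonated(7.4, 9.389):.2%}")
```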
In DNA
Uracil is rarely found in DNA, and this may have been an evolutionary change to increase genetic stability. This is because cytosine can deaminate spontaneously to produce uracil through hydrolytic deamination. Therefore, if there were an organism that used uracil in its DNA, the deamination of cytosine (which undergoes base pairing with guanine) would lead to formation of uracil (which would base pair with adenine) during DNA synthesis. Uracil-DNA glycosylase excises uracil bases from double-stranded DNA. This enzyme would therefore recognize and cut out both types of uracil, the one incorporated naturally and the one formed due to cytosine deamination, which would trigger unnecessary and inappropriate repair processes.

This problem is believed to have been solved in terms of evolution, that is, by "tagging" (methylating) uracil. Methylated uracil is identical to thymine. Hence the hypothesis that, over time, thymine became standard in DNA instead of uracil. So cells continue to use uracil in RNA, and not in DNA, because RNA is shorter-lived than DNA, and any potential uracil-related errors do not lead to lasting damage. Apparently, either there was no evolutionary pressure to replace uracil in RNA with the more complex thymine, or uracil has some chemical property that is useful in RNA, which thymine lacks. Uracil-containing DNA still exists, for example in the DNA of several phages, in endopterygote development, and in hypermutations during the synthesis of vertebrate antibodies.

Synthesis
Biological
Organisms synthesize uracil, in the form of uridine monophosphate (UMP), by decarboxylating orotidine 5'-monophosphate (orotidylic acid). In humans this decarboxylation is achieved by the enzyme UMP synthase. In contrast to the purine nucleotides, the pyrimidine ring that leads to uracil (orotidylic acid) is synthesized first and then linked to ribose phosphate, forming UMP.

Laboratory
There are many laboratory syntheses of uracil available. The first reaction is the simplest of the syntheses, adding water to cytosine to produce uracil and ammonia:

C4H5N3O + H2O → C4H4N2O2 + NH3

The most common way to synthesize uracil is by the condensation of malic acid with urea in fuming sulfuric acid:

C4H6O5 + CH4N2O → C4H4N2O2 + 2 H2O + CO

Uracil can also be synthesized by a double decomposition of thiouracil in aqueous chloroacetic acid. Photodehydrogenation of 5,6-dihydrouracil, which is synthesized by beta-alanine reacting with urea, produces uracil.

Prebiotic
In 2009, NASA scientists reported having produced uracil from pyrimidine and water ice by exposing it to ultraviolet light under space-like conditions. This suggests a possible natural origin for uracil. In 2014, NASA scientists reported that additional complex DNA and RNA organic compounds of life, including uracil, cytosine and thymine, had been formed in the laboratory under outer space conditions, starting with ice, pyrimidine, ammonia, and methanol, which are compounds found in astrophysical environments. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), a carbon-rich chemical found in the Universe, may have been formed in red giants or in interstellar dust and gas clouds. Based on 12C/13C isotopic ratios of organic compounds found in the Murchison meteorite, it is believed that uracil, xanthine, and related molecules can also be formed extraterrestrially. Data from the Cassini mission, orbiting in the Saturn system, suggest that uracil is present on the surface of the moon Titan. In 2023, uracil was observed in a sample from 162173 Ryugu, a near-Earth asteroid, with no exposure to Earth's biosphere, giving further evidence for synthesis in space.

Reactions
Uracil readily undergoes regular reactions including oxidation, nitration, and alkylation. While in the presence of phenol (PhOH) and sodium hypochlorite (NaOCl), uracil can be visualized in ultraviolet light. Uracil also has the capability to react with elemental halogens because of the presence of more than one strongly electron-donating group. Uracil readily undergoes addition to ribose sugars and phosphates to partake in synthesis and further reactions in the body. Uracil becomes uridine, uridine monophosphate (UMP), uridine diphosphate (UDP), uridine triphosphate (UTP), and uridine diphosphate glucose (UDP-glucose).
Each one of these molecules is synthesized in the body and has specific functions. When uracil reacts with anhydrous hydrazine, a first-order kinetic reaction occurs and the uracil ring opens up. If the pH of the reaction increases to above 10.5, the uracil anion forms, making the reaction go much more slowly. The same slowing of the reaction occurs if the pH decreases, because of the protonation of the hydrazine. The reactivity of uracil remains unchanged even if the temperature changes.

Uses
In the body, uracil helps carry out the synthesis of many enzymes necessary for cell function through bonding with riboses and phosphates. Uracil serves as an allosteric regulator and a coenzyme for reactions in animals and in plants. UMP controls the activity of carbamoyl phosphate synthetase and aspartate transcarbamoylase in plants, while UDP and UTP regulate CPSase II activity in animals. UDP-glucose regulates the conversion of glucose to galactose in the liver and other tissues in the process of carbohydrate metabolism. Uracil is also involved in the biosynthesis of polysaccharides and the transportation of sugars containing aldehydes. Uracil is important for the detoxification of many carcinogens, for instance those found in tobacco smoke, and is also required to detoxify many drugs such as cannabinoids (THC) and morphine (opioids). It can also slightly increase the risk for cancer in unusual cases in which the body is extremely deficient in folate. Deficiency in folate leads to an increased ratio of deoxyuridine monophosphates (dUMP) to deoxythymidine monophosphates (dTMP), misincorporation of uracil into DNA, and eventually low production of DNA.

Uracil can be used for drug delivery and as a pharmaceutical. When elemental fluorine reacts with uracil, it produces 5-fluorouracil. 5-Fluorouracil is an anticancer drug (antimetabolite) that masquerades as uracil during the nucleic acid replication process. Because 5-fluorouracil is similar in shape to, but does not undergo the same chemistry as, uracil, the drug inhibits RNA transcription enzymes, thereby blocking RNA synthesis and stopping the growth of cancerous cells. Uracil can also be used in the synthesis of caffeine. Uracil has also shown potential as an HIV viral capsid inhibitor. Uracil derivatives have antiviral, anti-tubercular and anti-leishmanial activity.

Uracil can be used to determine microbial contamination of tomatoes; the presence of uracil indicates lactic acid bacteria contamination of the fruit. Uracil derivatives containing a diazine ring are used in pesticides. Uracil derivatives are more often used as antiphotosynthetic herbicides, destroying weeds in cotton, sugar beet, turnip, soya, pea, and sunflower crops, as well as vineyards, berry plantations, and orchards. Uracil derivatives can enhance the activity of antimicrobial polysaccharides such as chitosan. In yeast, uracil concentrations are inversely proportional to uracil permease.

Mixtures containing uracil are also commonly used to test reversed-phase HPLC columns. As uracil is essentially unretained by the non-polar stationary phase, it can be used to determine the dwell time (and subsequently the dwell volume, given a known flow rate) of the system.
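The dwell-volume arithmetic mentioned above is a single multiplication; a tiny sketch with hypothetical chromatographic values:

```python
def dwell_volume_ml(dwell_time_min, flow_rate_ml_per_min):
    """Dwell volume = dwell time x flow rate, measured with an unretained
    marker such as uracil on a reversed-phase column."""
    return dwell_time_min * flow_rate_ml_per_min

# Hypothetical numbers: uracil elutes at 1.2 min at a flow rate of 1.0 mL/min.
print(dwell_volume_ml(1.2, 1.0))  # 1.2 (mL)
```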
Open set
In mathematics, an open set is a generalization of an open interval in the real line. In a metric space (a set with a distance defined between every two points), an open set is a set that, with every point P in it, contains all points of the metric space that are sufficiently near to P (that is, all points whose distance to P is less than some value depending on P).

More generally, an open set is a member of a given collection of subsets of a given set, a collection that has the property of containing every union of its members, every finite intersection of its members, the empty set, and the whole set itself. A set in which such a collection is given is called a topological space, and the collection is called a topology. These conditions are very loose, and allow enormous flexibility in the choice of open sets. For example, every subset can be open (the discrete topology), or no subset can be open except the space itself and the empty set (the indiscrete topology).

In practice, however, open sets are usually chosen to provide a notion of nearness that is similar to that of metric spaces, without having a notion of distance defined. In particular, a topology allows defining properties such as continuity, connectedness, and compactness, which were originally defined by means of a distance. The most common case of a topology without any distance is given by manifolds, which are topological spaces that, near each point, resemble an open set of a Euclidean space, but on which no distance is defined in general. Less intuitive topologies are used in other branches of mathematics; for example, the Zariski topology, which is fundamental in algebraic geometry and scheme theory.

Motivation
Intuitively, an open set provides a method to distinguish two points. For example, if about one of two points in a topological space, there exists an open set not containing the other (distinct) point, the two points are referred to as topologically distinguishable. In this manner, one may speak of whether two points, or more generally two subsets, of a topological space are "near" without concretely defining a distance. Therefore, topological spaces may be seen as a generalization of spaces equipped with a notion of distance, which are called metric spaces.

In the set of all real numbers, one has the natural Euclidean metric; that is, a function which measures the distance between two real numbers: d(x, y) = |x − y|. Therefore, given a real number x, one can speak of the set of all points close to that real number; that is, within ε of x. In essence, points within ε of x approximate x to an accuracy of degree ε. Note that ε > 0 always holds, but as ε becomes smaller and smaller, one obtains points that approximate x to a higher and higher degree of accuracy. For example, if x = 0 and ε = 1, the points within ε of x are precisely the points of the interval (−1, 1); that is, the set of all real numbers between −1 and 1. However, with ε = 0.5, the points within ε of x are precisely the points of (−0.5, 0.5). Clearly, these points approximate x to a greater degree of accuracy than when ε = 1.

The previous discussion shows, for the case x = 0, that one may approximate x to higher and higher degrees of accuracy by defining ε to be smaller and smaller. In particular, sets of the form (−ε, ε) give us a lot of information about points close to x = 0. Thus, rather than speaking of a concrete Euclidean metric, one may use sets to describe points close to x.
This innovative idea has far-reaching consequences; in particular, by defining different collections of sets containing 0 (distinct from the sets (−ε, ε)), one may find different results regarding the distance between 0 and other real numbers. For example, if we were to define R as the only such set for "measuring distance", all points are close to 0, since there is only one possible degree of accuracy one may achieve in approximating 0: being a member of R. Thus, we find that in some sense, every real number is distance 0 away from 0. It may help in this case to think of the measure as being a binary condition: all things in R are equally close to 0, while any item that is not in R is not close to 0.

In general, one refers to the family of sets containing 0, used to approximate 0, as a neighborhood basis; a member of this neighborhood basis is referred to as an open set. In fact, one may generalize these notions to an arbitrary set X, rather than just the real numbers. In this case, given a point x of that set, one may define a collection of sets "around" (that is, containing) x, used to approximate x. Of course, this collection would have to satisfy certain properties (known as axioms), for otherwise we may not have a well-defined method to measure distance. For example, every point in X should approximate x to some degree of accuracy. Thus X should be in this family. Once we begin to define "smaller" sets containing x, we tend to approximate x to a greater degree of accuracy. Bearing this in mind, one may define the remaining axioms that the family of sets about x is required to satisfy.

Definitions
Several definitions are given here, in an increasing order of technicality. Each one is a special case of the next one.

Euclidean space
A subset U of the Euclidean n-space Rn is open if, for every point x in U, there exists a positive real number ε (depending on x) such that any point in Rn whose Euclidean distance from x is smaller than ε belongs to U. Equivalently, a subset U of Rn is open if every point in U is the center of an open ball contained in U. An example of a subset of R that is not open is the closed interval [0, 1], since neither 0 − ε nor 1 + ε belongs to [0, 1] for any ε > 0, no matter how small.

Metric space
A subset U of a metric space (M, d) is called open if, for any point x in U, there exists a real number ε > 0 such that any point y in M satisfying d(x, y) < ε belongs to U. Equivalently, U is open if every point in U has a neighborhood contained in U. This generalizes the Euclidean space example, since Euclidean space with the Euclidean distance is a metric space.

Topological space
A topology τ on a set X is a collection of subsets of X with the properties below. Each member of τ is called an open set.
X ∈ τ and ∅ ∈ τ
Any union of sets in τ belongs to τ: if {Ui : i ∈ I} ⊆ τ, then ⋃i∈I Ui ∈ τ
Any finite intersection of sets in τ belongs to τ: if U1, ..., Un ∈ τ, then U1 ∩ ... ∩ Un ∈ τ
X together with τ is called a topological space.

Infinite intersections of open sets need not be open. For example, the intersection of all intervals of the form (−1/n, 1/n), where n is a positive integer, is the set {0}, which is not open in the real line. A metric space is a topological space, whose topology consists of the collection of all subsets that are unions of open balls. There are, however, topological spaces that are not metric spaces.

Properties
The union of any number of open sets, even infinitely many, is open. The intersection of a finite number of open sets is open.
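These axioms and properties can be checked mechanically for a finite example; a minimal sketch (the example topology is illustrative) verifying closure under unions and intersections:

```python
from itertools import chain, combinations

def is_topology(X, tau):
    """Check the open-set axioms for a finite candidate topology: tau must
    contain the empty set and X, every union of its members, and every
    intersection of its members (necessarily finite here, as tau is finite)."""
    sets = set(tau)
    if frozenset() not in sets or frozenset(X) not in sets:
        return False
    for r in range(2, len(tau) + 1):
        for combo in combinations(tau, r):
            union = frozenset(chain.from_iterable(combo))
            inter = frozenset.intersection(*combo)
            if union not in sets or inter not in sets:
                return False
    return True

X = {1, 2, 3}
tau = [frozenset(), frozenset({1}), frozenset({1, 2}), frozenset(X)]
print(is_topology(X, tau))  # True: a nested chain of open sets on X
```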
A complement of an open set (relative to the space that the topology is defined on) is called a closed set. A set may be both open and closed (a clopen set). The empty set and the full space are examples of sets that are both open and closed.

A set can never be considered as open by itself. This notion is relative to a containing set and a specific topology on it. Whether a set is open depends on the topology under consideration. Having opted for greater brevity over greater clarity, we refer to a set X endowed with a topology τ as "the topological space X" rather than "the topological space (X, τ)", despite the fact that all the topological data is contained in τ. If there are two topologies on the same set, a set U that is open in the first topology might fail to be open in the second topology.

For example, if X is any topological space and Y is any subset of X, the set Y can be given its own topology (called the 'subspace topology') defined by "a set U is open in the subspace topology on Y if and only if U is the intersection of Y with an open set from the original topology on X." This potentially introduces new open sets: if V is open in the original topology on X, but V ∩ Y isn't open in the original topology on X, then V ∩ Y is open in the subspace topology on Y.

As a concrete example of this, if U is defined as the set of rational numbers in the interval (0, 1), then U is an open subset of the rational numbers, but not of the real numbers. This is because when the surrounding space is the rational numbers, for every point x in U, there exists a positive number a such that all rational points within distance a of x are also in U. On the other hand, when the surrounding space is the reals, then for every point x in U there is no positive a such that all real points within distance a of x are in U (because U contains no non-rational numbers).
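The subspace construction is a one-line comprehension over the ambient topology; a minimal sketch on a small finite example (the sets are illustrative):

```python
def subspace_topology(tau, Y):
    """Form {V ∩ Y : V open in X}, the subspace topology on Y."""
    return {frozenset(V & Y) for V in tau}

X = frozenset({1, 2, 3, 4})
tau = [frozenset(), frozenset({1, 2}), frozenset({3, 4}), X]
Y = frozenset({2, 3})
for S in sorted(subspace_topology(tau, Y), key=lambda s: (len(s), sorted(s))):
    print(set(S))
# prints set(), {2}, {3}, {2, 3} -- {2} is open in Y although it is not open in X
```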
Uses
Open sets have a fundamental importance in topology. The concept is required to define and make sense of topological space and other topological structures that deal with the notions of closeness and convergence for spaces such as metric spaces and uniform spaces.

Every subset A of a topological space X contains a (possibly empty) open set; the maximum (ordered under inclusion) such open set is called the interior of A. It can be constructed by taking the union of all the open sets contained in A. A function f : X → Y between two topological spaces X and Y is continuous if the preimage of every open set in Y is open in X. The function f is called open if the image of every open set in X is open in Y. An open set on the real line has the characteristic property that it is a countable union of disjoint open intervals.

Special types of open sets
Clopen sets and non-open and/or non-closed sets
A set might be open, closed, both, or neither. In particular, open and closed sets are not mutually exclusive, meaning that it is in general possible for a subset of a topological space to simultaneously be both an open subset and a closed subset. Such subsets are known as clopen sets. Explicitly, a subset S of a topological space X is called clopen if both S and its complement X ∖ S are open subsets of X; equivalently, S is clopen if it is both open and closed in X.

In every topological space X, the empty set and the set X itself are always clopen. These two sets are the most well-known examples of clopen subsets, and they show that clopen subsets exist in every topological space. To see this, it suffices to remark that, by definition of a topology, X and ∅ are both open, and that they are also closed, since each is the complement of the other.

The open sets of the usual Euclidean topology of the real line R are the empty set, the open intervals and every union of open intervals. The interval I = (0, 1) is open in R by definition of the Euclidean topology. It is not closed, since its complement in R is (−∞, 0] ∪ [1, ∞), which is not open; indeed, an open interval contained in this complement cannot contain 0, and it follows that the complement cannot be a union of open intervals. Hence, I is an example of a set that is open but not closed. By a similar argument, the interval [0, 1] is a closed subset but not an open subset. Finally, neither the half-open interval [0, 1) nor its complement (−∞, 0) ∪ [1, ∞) is open (because neither can be written as a union of open intervals); this means that [0, 1) is neither open nor closed.

If a topological space X is endowed with the discrete topology (so that by definition, every subset of X is open) then every subset of X is a clopen subset. For a more advanced example reminiscent of the discrete topology, suppose that 𝒰 is an ultrafilter on a non-empty set X. Then the union τ = 𝒰 ∪ {∅} is a topology on X with the property that every non-empty proper subset S of X is either an open subset or else a closed subset, but never both; that is, if ∅ ≠ S ⊊ X, then exactly one of the following two statements is true: either (1) S ∈ τ or else (2) X ∖ S ∈ τ. Said differently, every subset is open or closed, but the only subsets that are both (i.e. that are clopen) are ∅ and X.

Regular open sets
A subset S of a topological space X is called a regular open set if Int(cl S) = S, where Int S and cl S denote, respectively, the interior and closure of S in X. A topological space for which there exists a base consisting of regular open sets is called a semiregular space. A subset of X is a regular open set if and only if its complement in X is a regular closed set, where by definition a subset S of X is called a regular closed set if cl(Int S) = S. Every regular open set (resp. regular closed set) is an open subset (resp. a closed subset), although in general the converses are not true.

Generalizations of open sets
Throughout, (X, τ) will be a topological space. A subset A of a topological space X is called:
α-open if A ⊆ Int(cl(Int A)); the complement of such a set is called α-closed.
preopen, nearly open, or locally dense if it satisfies any of the following equivalent conditions: (1) A ⊆ Int(cl A); (2) there exists an open subset U of X such that A ⊆ U and A is a dense subset of U. The complement of a preopen set is called preclosed.
b-open if A ⊆ Int(cl A) ∪ cl(Int A). The complement of a b-open set is called b-closed.
β-open or semi-preopen if it satisfies any of the following equivalent conditions: (1) A ⊆ cl(Int(cl A)); (2) cl A is a regular closed subset of X; (3) there exists a preopen subset U of X such that U ⊆ A ⊆ cl U. The complement of a β-open set is called β-closed.
sequentially open if it satisfies any of the following equivalent conditions: (1) whenever a sequence in X converges to some point of A, then that sequence is eventually in A; explicitly, this means that if x1, x2, ... is a sequence in X and there exists some a ∈ A such that xn → a in X, then the sequence is eventually in A (that is, there exists some integer N such that xn ∈ A whenever n ≥ N); (2) A is equal to its sequential interior in X. The complement of a sequentially open set is called sequentially closed. A subset S ⊆ X is sequentially closed in X if and only if S is equal to its sequential closure, which by definition is the set consisting of all x ∈ X for which there exists a sequence in S that converges to x (in X).
almost open, or said to have the Baire property, if there exists an open subset U ⊆ X such that the symmetric difference A △ U is a meager subset. The subset A is said to have the Baire property in the restricted sense if for every subset E of X the intersection A ∩ E has the Baire property relative to E.
semi-open if A ⊆ cl(Int A) or, equivalently, cl A = cl(Int A). The complement in X of a semi-open set is called a semi-closed set. The semi-closure (in X) of a subset A ⊆ X, denoted by sCl A, is the intersection of all semi-closed subsets of X that contain A as a subset.
semi-θ-open if for each x ∈ A there exists some semi-open subset U of X such that x ∈ U ⊆ sCl U ⊆ A.
θ-open (resp. δ-open) if its complement in X is a θ-closed (resp. δ-closed) set, where by definition, a subset of X is called θ-closed (resp. δ-closed
) if it is equal to the set of all of its θ-cluster points (resp. δ-cluster points). A point x ∈ X is called a θ-cluster point (resp. a δ-cluster point) of a subset B ⊆ X if for every open neighborhood U of x in X, the intersection B ∩ cl U is not empty (resp. B ∩ Int(cl U) is not empty).

Using the fact that A ⊆ cl A and Int A ⊆ A for any subset A, and that cl A ⊆ cl B and Int A ⊆ Int B whenever A ⊆ B, the following may be deduced:
Every α-open subset is semi-open, semi-preopen, preopen, and b-open.
Every b-open set is semi-preopen (i.e. β-open).
Every preopen set is b-open and semi-preopen.
Every semi-open set is b-open and semi-preopen.
Moreover, a subset is a regular open set if and only if it is preopen and semi-closed. The intersection of an α-open set and a semi-preopen (resp. semi-open, preopen, b-open) set is a semi-preopen (resp. semi-open, preopen, b-open) set. Preopen sets need not be semi-open, and semi-open sets need not be preopen.

Arbitrary unions of preopen (resp. α-open, b-open, semi-preopen) sets are once again preopen (resp. α-open, b-open, semi-preopen). However, finite intersections of preopen sets need not be preopen. The set of all α-open subsets of a space (X, τ) forms a topology on X that is finer than τ.

A topological space X is Hausdorff if and only if every compact subspace of X is θ-closed. A space X is totally disconnected if and only if every regular closed subset is preopen or, equivalently, if every semi-open subset is preopen. Moreover, the space is totally disconnected if and only if the closure of every preopen subset is open.
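For a finite topological space, the interior and closure operators behind all of these variants are directly computable, so the inclusions above can be tested concretely; a minimal sketch with an illustrative three-point space:

```python
def interior(tau, S):
    """Union of all open sets contained in S (the largest such open set)."""
    out = frozenset()
    for U in tau:
        if U <= S:
            out |= U
    return out

def closure(tau, X, S):
    """Complement of the interior of the complement of S."""
    return X - interior(tau, X - S)

def classify(tau, X, S):
    i, c = interior(tau, S), closure(tau, X, S)
    return {
        "open":       S == i,
        "alpha-open": S <= interior(tau, closure(tau, X, i)),
        "preopen":    S <= interior(tau, c),
        "semi-open":  S <= closure(tau, X, i),
        "b-open":     S <= interior(tau, c) | closure(tau, X, i),
        "beta-open":  S <= closure(tau, X, interior(tau, c)),
    }

X = frozenset({1, 2, 3})
tau = [frozenset(), frozenset({1}), frozenset({1, 2}), X]
print(classify(tau, X, frozenset({1, 3})))  # not open, yet e.g. preopen and semi-open
```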
Large Magellanic Cloud
The Large Magellanic Cloud (LMC) is a dwarf galaxy and satellite galaxy of the Milky Way. At a distance of around 50 kiloparsecs (163,000 light-years), the LMC is the second- or third-closest galaxy to the Milky Way, after the Sagittarius Dwarf Spheroidal (about 16 kpc away) and the possible dwarf irregular galaxy called the Canis Major Overdensity. Based on the D25 isophote at the B-band (445 nm wavelength of light), the Large Magellanic Cloud is about 9.86 kiloparsecs (32,200 light-years) across. It is roughly one-hundredth the mass of the Milky Way and is the fourth-largest galaxy in the Local Group, after the Andromeda Galaxy (M31), the Milky Way, and the Triangulum Galaxy (M33).

The LMC is classified as a Magellanic spiral. It contains a stellar bar that is geometrically off-center, suggesting that it was once a barred dwarf spiral galaxy before its spiral arms were disrupted, likely by tidal interactions from the nearby Small Magellanic Cloud (SMC) and the Milky Way's gravity. The LMC is predicted to merge with the Milky Way in approximately 2.4 billion years.

With a declination of about −70°, the LMC is visible as a faint "cloud" from the southern hemisphere of the Earth and from as far north as 20° N. It straddles the constellations Dorado and Mensa and has an apparent length of about 10° to the naked eye, 20 times the Moon's diameter, from dark sites away from light pollution.

History of observation
Both the Large and Small Magellanic Clouds have been easily visible to southern nighttime observers well back into prehistory. It has been claimed that the first known written mention of the Large Magellanic Cloud was by the Persian astronomer 'Abd al-Rahman al-Sufi Shirazi (later known in Europe as "Azophi"), who referred to it as Al Bakr, the White Ox, in his Book of Fixed Stars around 964 AD. However, this seems to be a misunderstanding of a reference to some stars south of Canopus which he admits he has not seen. The first confirmed recorded observation was in 1503–1504 by Amerigo Vespucci in a letter about his third voyage. He mentioned "three Canopes, two bright and one obscure"; "bright" refers to the two Magellanic Clouds, and "obscure" refers to the Coalsack. Ferdinand Magellan sighted the LMC on his voyage in 1519, and his writings brought it into common Western knowledge. The galaxy now bears his name.

The galaxy and the southern end of Dorado are, in the current epoch, at opposition on about 5 December, when they are visible from sunset to sunrise from equatorial points such as Ecuador, the Congos, Uganda, Kenya and Indonesia, and for part of the night in nearby months. Above about 28° south, such as most of Australia and South Africa, the galaxy is always sufficiently above the horizon to be considered properly circumpolar; thus during spring and autumn the cloud is also visible much of the night, and the height of winter in June nearly coincides with closest proximity to the Sun's apparent position.

Measurements with the Hubble Space Telescope, announced in 2006, suggest the Large and Small Magellanic Clouds may be moving too quickly to be orbiting the Milky Way. Astronomers discovered a new black hole inside the Large Magellanic Cloud in November 2021 using the European Southern Observatory's Very Large Telescope in Chile. Astronomers report that its gravity influences a nearby star, which is about five times the mass of the Sun.

Geometry
The Large Magellanic Cloud has a prominent central bar and spiral arm. The central bar seems to be warped so that the east and west ends are nearer the Milky Way than the middle.
In 2014, measurements from the Hubble Space Telescope made it possible to determine a rotation period of 250 million years.

The LMC was long considered to be a planar galaxy that could be assumed to lie at a single distance from the Solar System. However, in 1986, Caldwell and Coulson found that field Cepheid variables in the northeast lie closer to the Milky Way than those in the southwest. From 2001 to 2002 this inclined geometry was confirmed by the same means, by core helium-burning red clump stars, and by the tip of the red giant branch. All three papers find an inclination of 35°, where a face-on galaxy has an inclination of 0°. Further work on the structure of the LMC using the kinematics of carbon stars showed that the LMC's disk is both thick and flared, likely due to interactions with the SMC. Regarding the distribution of star clusters in the LMC, Schommer et al. measured velocities for 80 clusters and found that the LMC's cluster system has kinematics consistent with the clusters moving in a disk-like distribution. These results were confirmed by Grocholski et al., who calculated distances to a sample of clusters and showed that the cluster system is distributed in the same plane as the field stars.

Distance
The distance to the LMC has been calculated using standard candles; Cepheid variables are one of the most popular. These have been shown to have a relationship between their absolute luminosity and the period over which their brightness varies. However, metallicity may also need to be taken into account, as the consensus is that it likely affects period-luminosity relations. Unfortunately, the Cepheids in the Milky Way typically used to calibrate the relation are more metal-rich than those found in the LMC.

Modern 8-meter-class optical telescopes have discovered eclipsing binaries throughout the Local Group. Parameters of these systems can be measured without mass or compositional assumptions. The light echoes of supernova 1987A are also geometric measurements, without any stellar models or assumptions.

In 2006, the Cepheid absolute luminosity was re-calibrated using Cepheid variables in the galaxy Messier 106 that cover a range of metallicities. Using this improved calibration, they find an absolute distance modulus of 18.41, or about 48 kpc. This distance has been confirmed by other authors. By cross-correlating different measurement methods, one can bound the distance; the residual errors are now less than the estimated size parameters of the LMC. The results of a study using late-type eclipsing binaries to determine the distance more accurately were published in the scientific journal Nature in March 2013. A distance of 49.97 kpc (about 163,000 light-years) with an accuracy of 2.2% was obtained.
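Distance moduli and physical distances are related by the standard formula μ = 5 log10(d/10 pc); a small sketch converting the moduli discussed above into distances:

```python
def modulus_to_distance_kpc(mu):
    """Invert mu = 5*log10(d_pc) - 5 to get the distance in kiloparsecs."""
    return 10 ** ((mu + 5) / 5) / 1000.0

print(f"{modulus_to_distance_kpc(18.49):.1f} kpc")  # ~49.9 kpc, near the 2013 result
print(f"{modulus_to_distance_kpc(18.41):.1f} kpc")  # ~48.1 kpc, the 2006 recalibration
```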
Features
Like many irregular galaxies, the LMC is rich in gas and dust, and is currently undergoing vigorous star formation activity. It holds the Tarantula Nebula, the most active star-forming region in the Local Group. The LMC has a wide range of galactic objects and phenomena that make it known as an "astronomical treasure-house, a great celestial laboratory for the study of the growth and evolution of the stars", per Robert Burnham Jr. Surveys of the galaxy have found roughly 60 globular clusters, 400 planetary nebulae and 700 open clusters, along with hundreds of thousands of giant and supergiant stars. Supernova 1987A, the nearest supernova in recent years, was in the Large Magellanic Cloud. The Lionel-Murphy SNR (N86), a nitrogen-abundant supernova remnant, was named by astronomers at the Australian National University's Mount Stromlo Observatory, acknowledging Australian High Court Justice Lionel Murphy's interest in science and its perceived resemblance to his large nose.

A bridge of gas connects the Small Magellanic Cloud (SMC) with the LMC, which evinces tidal interaction between the galaxies. The Magellanic Clouds have a common envelope of neutral hydrogen, indicating that they have been gravitationally bound for a long time. This bridge of gas is a star-forming site.

X-ray sources
No X-rays above background were detected from either cloud during the September 20, 1966, Nike-Tomahawk rocket flight, nor during that of two days later. The second took off from Johnston Atoll at 17:13 UTC, with spin-stabilization at 5.6 rps; the LMC was not detected in the X-ray range 8–80 keV. Another was launched from the same atoll at 11:32 UTC on October 29, 1968, to scan the LMC for X-rays. The first discrete X-ray source in Dorado turned out to be the Large Magellanic Cloud itself: the source extended over about 12° and is consistent with the Cloud, and its emission rate between 1.5 and 10.5 keV was estimated assuming a distance of 50 kpc. An X-ray astronomy instrument was carried aboard a Thor missile launched from the same atoll on September 24, 1970, at 12:54 UTC, to search for the Small Magellanic Cloud and to extend observation of the LMC. The source in the LMC appeared extended and contained the star ε Dor; its X-ray luminosity (Lx) over the range 1.5–12 keV was also measured.

The Large Magellanic Cloud (LMC) appears in the constellations Mensa and Dorado. LMC X-1, the first X-ray source identified in the LMC, is a high-mass X-ray binary (HMXB) star system. Of the first five luminous LMC X-ray binaries, LMC X-1, X-2, X-3, X-4 and A 0538–66 (detected by Ariel 5), LMC X-2 is the one that is a bright low-mass X-ray binary system (LMXB) in the LMC.

DEM L316 in the Cloud consists of two supernova remnants. Chandra X-ray spectra show that the hot gas shell on the upper left has an abundance of iron. This implies that the upper-left SNR is the product of a Type Ia supernova; the much lower iron abundance in the lower remnant points instead to a Type II supernova. A 16 ms X-ray pulsar is associated with SNR 0538-69.1, and SNR 0540-697 was resolved using ROSAT.
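Quoting an emission rate "for a distance of 50 kpc" implicitly applies the inverse-square law, L = 4πd²F; a small sketch with a purely hypothetical measured flux:

```python
import math

KPC_IN_CM = 3.0857e21  # centimetres per kiloparsec

def luminosity_erg_per_s(flux_erg_cm2_s, distance_kpc):
    """Convert an observed flux into a luminosity via L = 4*pi*d^2*F."""
    d_cm = distance_kpc * KPC_IN_CM
    return 4.0 * math.pi * d_cm ** 2 * flux_erg_cm2_s

# A hypothetical flux of 1e-9 erg/cm^2/s at the LMC's 50 kpc distance:
print(f"{luminosity_erg_per_s(1e-9, 50):.2e} erg/s")  # ~3e38 erg/s
```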
Space suit
A space suit (or spacesuit) is an environmental suit used for protection from the harsh environment of outer space, mainly from its vacuum, as a highly specialized pressure suit, but also from its temperature extremes, as well as radiation and micrometeoroids. Basic space suits are worn as a safety precaution inside spacecraft in case of loss of cabin pressure. For extravehicular activity (EVA), more complex space suits are worn, featuring a portable life support system.

Pressure suits are in general needed in low-pressure environments above the Armstrong limit, at around 19,000 m (62,000 ft) above Earth. Space suits augment pressure suits with a complex system of equipment and environmental systems designed to keep the wearer comfortable, and to minimize the effort required to bend the limbs, resisting a soft pressure garment's natural tendency to stiffen against the vacuum. A self-contained oxygen supply and environmental control system is frequently employed to allow complete freedom of movement, independent of the spacecraft.

Three types of space suits exist for different purposes: IVA (intravehicular activity), EVA (extravehicular activity), and IEVA (intra/extravehicular activity). IVA suits are meant to be worn inside a pressurized spacecraft, and are therefore lighter and more comfortable. IEVA suits are meant for use inside and outside the spacecraft, such as the Gemini G4C suit. They include more protection from the harsh conditions of space, such as protection from micrometeoroids and extreme temperature change. EVA suits, such as the EMU, are used outside spacecraft, for either planetary exploration or spacewalks. They must protect the wearer against all conditions of space, as well as provide mobility and functionality.

The first full-pressure suits for use at extreme altitudes were designed by individual inventors as early as the 1930s. The first space suit worn by a human in space was the Soviet SK-1 suit worn by Yuri Gagarin in 1961. Since then, space suits have been worn in Earth orbit, en route to the Moon, and on the lunar surface.

Requirements
A space suit must perform several functions to allow its occupant to work safely and comfortably, inside or outside a spacecraft. It must provide:
A stable internal pressure. This can be less than Earth's atmosphere, as there is usually no need for the space suit to carry nitrogen (which comprises about 78% of Earth's atmosphere and is not used by the body). Lower pressure allows for greater mobility, but requires the suit occupant to breathe pure oxygen for a time before going into this lower pressure, to avoid decompression sickness.
Mobility. Movement is typically opposed by the pressure of the suit; mobility is achieved by careful joint design. See the Design concepts section.
Supply of breathable oxygen and elimination of carbon dioxide; these gases are exchanged with the spacecraft or a Portable Life Support System (PLSS).
Temperature regulation. Unlike on Earth, where heat can be transferred by convection to the atmosphere, in space heat can be lost only by thermal radiation or by conduction to objects in physical contact with the exterior of the suit. Since the temperature on the outside of the suit varies greatly between sunlight and shadow, the suit is heavily insulated, and air temperature is maintained at a comfortable level.
A communication system, with external electrical connection to the spacecraft or PLSS.
Means of collecting and containing solid and liquid bodily waste (such as a Maximum Absorbency Garment).

Secondary requirements
Advanced suits better regulate the astronaut's temperature with a Liquid Cooling and Ventilation Garment (LCVG) in contact with the astronaut's skin, from which the heat is dumped into space through an external radiator in the PLSS. Additional requirements for EVA include:
Shielding against ultraviolet radiation.
Limited shielding against particle radiation.
Means to maneuver, dock, release, and tether onto a spacecraft.
Protection against small micrometeoroids, some traveling at up to 27,000 kilometers per hour, provided by a puncture-resistant Thermal Micrometeoroid Garment, which is the outermost layer of the suit. Experience has shown the greatest chance of exposure occurs near the gravitational field of a moon or planet, so these were first employed on the Apollo lunar EVA suits (see United States suit models below).

As part of astronautical hygiene control (i.e., protecting astronauts from extremes of temperature, radiation, etc.), a space suit is essential for extravehicular activity. The Apollo/Skylab A7L suit included eleven layers in all: an inner liner, an LCVG, a pressure bladder, a restraint layer, another liner, and a Thermal Micrometeoroid Garment consisting of five aluminized insulation layers and an external layer of white Ortho-Fabric. This space suit is capable of protecting the astronaut from temperatures ranging from −156 °C (−249 °F) to 121 °C (250 °F).

During exploration of the Moon or Mars, there will be the potential for lunar or Martian dust to be retained on the space suit. When the space suit is removed on return to the spacecraft, there will be the potential for the dust to contaminate surfaces and increase the risks of inhalation and skin exposure. Astronautical hygienists are testing materials with reduced dust retention times and the potential to control the dust exposure risks during planetary exploration. Novel ingress and egress approaches, such as suitports, are being explored as well.

In NASA space suits, communications are provided via a cap worn over the head, which includes earphones and a microphone. Due to the coloration of the version used for Apollo and Skylab, which resembled the coloration of the comic strip character Snoopy, these caps became known as "Snoopy caps".

Operating pressure
Generally, to supply enough oxygen for respiration, a space suit using pure oxygen must have a pressure of about 32.4 kPa (4.7 psi), equal to the 20.7 kPa (3.0 psi) partial pressure of oxygen in the Earth's atmosphere at sea level, plus 5.31 kPa (0.77 psi) CO2 and 6.28 kPa (0.91 psi) water vapor pressure, both of which must be subtracted from the alveolar pressure to get the alveolar oxygen partial pressure in 100% oxygen atmospheres, by the alveolar gas equation. The latter two figures add to 11.6 kPa (1.7 psi), which is why many modern space suits do not use 32.4 kPa but 20.7 kPa (this is a slight overcorrection, as alveolar partial pressures at sea level are slightly less than the former). In space suits that use 20.7 kPa, the astronaut gets only 20.7 kPa − 11.6 kPa = 9.1 kPa of oxygen, which is about the alveolar oxygen partial pressure attained at an altitude of 1,860 m (6,100 ft) above sea level. This is about 42% of the normal partial pressure of oxygen at sea level, about the same as the pressure in a commercial passenger jet aircraft, and is the realistic lower limit for safe ordinary space suit pressurization which allows reasonable capacity for work.
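The arithmetic in that paragraph is easy to reproduce; a small sketch subtracting the CO2 and water-vapor terms from the suit pressure:

```python
CO2_KPA = 5.31   # alveolar carbon dioxide partial pressure
H2O_KPA = 6.28   # alveolar water vapor pressure

def alveolar_o2_kpa(suit_pressure_kpa):
    """Approximate alveolar oxygen partial pressure in a pure-oxygen suit:
    suit pressure minus the CO2 and water-vapor terms discussed above."""
    return suit_pressure_kpa - CO2_KPA - H2O_KPA

for suit in (32.4, 20.7):
    print(f"suit {suit:4.1f} kPa -> alveolar O2 ~ {alveolar_o2_kpa(suit):4.1f} kPa")
# 32.4 kPa recovers the sea-level oxygen figure (~20.8 kPa);
# 20.7 kPa leaves only ~9.1 kPa, as noted above.
```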
Oxygen prebreathing When space suits below a specific operating pressure are used from craft that are pressurized to normal atmospheric pressure (such as the Space Shuttle), astronauts must "pre-breathe" (breathe pure oxygen for a period) before donning their suits and depressurizing in the air lock. This procedure purges the body of dissolved nitrogen, so as to avoid decompression sickness due to rapid depressurization from a nitrogen-containing atmosphere. On the US Space Shuttle, cabin pressure was reduced from normal atmospheric pressure to 70 kPa (equivalent to an altitude of about 3,000 m) for 24 hours before an EVA; after donning the suit, astronauts then pre-breathed pure oxygen for 45 minutes before decompressing to the EMU working pressure of 30 kPa. On the ISS there is no cabin pressure reduction; instead, a 4-hour oxygen pre-breathe at normal cabin pressure is used to desaturate nitrogen to an acceptable level. US studies show that a rapid decompression from 101 kPa to 55 kPa carries an acceptable risk, and Russian studies show that direct decompression from 101 kPa to 40 kPa after 30 minutes of oxygen pre-breathing, roughly the time required for pre-EVA suit checks, is acceptable. Physiological effects of unprotected space exposure The human body can briefly survive the hard vacuum of space unprotected, despite contrary depictions in some popular science fiction. Consciousness is retained for up to 15 seconds as the effects of oxygen starvation set in. No snap freeze effect occurs because all heat must be lost through thermal radiation or the evaporation of liquids, and the blood does not boil because it remains pressurized within the body, but human flesh expands to up to about twice its volume due to ebullism in such conditions, giving the visual effect of a body builder rather than an overfilled balloon. In space, there are highly energized subatomic particles that can cause radiation damage by disrupting essential biological processes. Exposure to radiation can create problems via two routes: the particles can react with water in the human body to produce free radicals that break DNA molecules apart, or they can break the DNA molecules directly. Temperature in space can vary extremely depending on the exposure to radiant energy sources: temperatures can reach extreme highs under direct solar radiation and extreme lows in its absence. Because of this, space suits must provide sufficient insulation and cooling for the conditions in which they will be used. The vacuum environment of space has no pressure, so gases will expand and exposed liquids may evaporate. Some solids may sublimate. It is necessary to wear a suit that provides sufficient internal body pressure in space. The most immediate hazard is in attempting to hold one's breath during explosive decompression, as the expansion of gas can damage the lungs by overexpansion rupture. These effects have been confirmed through various accidents (including in very-high-altitude conditions, outer space, and training vacuum chambers). Human skin does not need to be protected from vacuum and is gas-tight by itself. It only needs to be mechanically restrained to retain its normal shape and the internal tissues to retain their volume. This can be accomplished with a tight-fitting elastic body suit and a helmet for containing breathing gases, known as a space activity suit (SAS). Design concepts A space suit should allow its user natural unencumbered movement. Nearly all designs try to maintain a constant volume no matter what movements the wearer makes.
This is because mechanical work is needed to change the volume of a constant-pressure system. If flexing a joint reduces the volume of the space suit, then the astronaut must do extra work every time they bend that joint, and they have to maintain a force to keep the joint bent. Even if this force is very small, it can be seriously fatiguing to constantly fight against one's suit. It also makes delicate movements very difficult. The work required to bend a joint is given by the formula W = P(Vi − Vf), where Vi and Vf are respectively the initial and final volume of the joint, P is the pressure in the suit, and W is the resultant work. It is generally true that all suits are more mobile at lower pressures. However, because a minimum internal pressure is dictated by life support requirements, the only means of further reducing work is to minimize the change in volume. All space suit designs try to minimize or eliminate this problem. The most common solution is to form the suit out of multiple layers. The bladder layer is a rubbery, airtight layer much like a balloon. The restraint layer goes outside the bladder, and provides a specific shape for the suit. Since the bladder layer is larger than the restraint layer, the restraint takes all of the stresses caused by the pressure inside the suit. Since the bladder is not under pressure, it will not "pop" like a balloon, even if punctured. The restraint layer is shaped in such a way that bending a joint causes pockets of fabric, called "gores", to open up on the outside of the joint, while folds called "convolutes" fold up on the inside of the joint. The gores make up for the volume lost on the inside of the joint, and keep the suit at a nearly constant volume. However, once the gores are opened all the way, the joint cannot be bent any further without a considerable amount of work. In some Russian space suits, strips of cloth were wrapped tightly around the cosmonaut's arms and legs outside the space suit to stop the space suit from ballooning when in space. The outermost layer of a space suit, the Thermal Micrometeoroid Garment, provides thermal insulation, protection from micrometeoroids, and shielding from harmful solar radiation. There are four main conceptual approaches to suit design: Soft suits Soft suits typically are made mostly of fabrics. All soft suits have some hard parts; some even have hard joint bearings. Intra-vehicular activity and early EVA suits were soft suits. Hard-shell suits Hard-shell suits are usually made of metal or composite materials and do not use fabric for joints. Hard-suit joints use ball bearings and wedge-ring segments similar to an adjustable elbow of a stove pipe to allow a wide range of movement with the arms and legs. The joints maintain a constant volume of air internally and exert no counter-force, so the astronaut does not need to exert effort to hold the suit in any position. Hard suits can also operate at higher pressures, which would eliminate the need for an astronaut to pre-breathe oxygen before an EVA from a spacecraft cabin. The joints may get into a restricted or locked position, requiring the astronaut to manipulate or program the joint. The NASA Ames Research Center experimental AX-5 hard-shell space suit had a flexibility rating of 95%: the wearer could move into 95% of the positions they could without the suit on. Hybrid suits Hybrid suits have hard-shell parts and fabric parts. NASA's Extravehicular Mobility Unit (EMU) uses a fiberglass Hard Upper Torso (HUT) and fabric limbs.
ILC Dover's I-Suit replaces the HUT with a fabric soft upper torso to save weight, restricting the use of hard components to the joint bearings, helmet, waist seal, and rear entry hatch. Virtually all workable space suit designs incorporate hard components, particularly at interfaces such as the waist seal, bearings, and, in the case of rear-entry suits, the back hatch, where all-soft alternatives are not viable. Skintight suits Skintight suits, also known as mechanical counterpressure suits or space activity suits, are a proposed design which would use a heavy elastic body stocking to compress the body. The head is in a pressurized helmet, but the rest of the body is pressurized only by the elastic effect of the suit. This mitigates the constant-volume problem, reduces the possibility of a space suit depressurization, and gives a very lightweight suit. When not worn, the elastic garments may look like clothing for a small child. These suits may be very difficult to put on and face problems with providing a uniform pressure. Most proposals use the body's natural perspiration to keep cool. Sweat evaporates readily in vacuum and may desublime or deposit on objects nearby: optics, sensors, the astronaut's visor, and other surfaces. The icy film and sweat residue may contaminate sensitive surfaces and affect optical performance. Contributing technologies Related preceding technologies include the stratonautical space suit, the gas mask used in World War II, the oxygen mask used by pilots of high-flying bombers in World War II, the high-altitude or vacuum suit required by pilots of the Lockheed U-2 and SR-71 Blackbird, the diving suit, rebreather, scuba diving gear, and many others. Many space suit designs are taken from U.S. Air Force suits, which are designed to work in "high-altitude aircraft pressure[s]", such as the Mercury IVA suit, the Gemini G4C, or the Advanced Crew Escape Suits. Glove technology The Mercury IVA, the first U.S. space suit design, included lights at the tips of the gloves in order to provide visual aid. As the need for extravehicular activity grew, suits such as the Apollo A7L included gloves made of a metal fabric called Chromel-R in order to prevent punctures. In order to retain a better sense of touch for the astronauts, the fingertips of the gloves were made of silicone. With the Shuttle program, it became necessary to be able to operate spacecraft modules, so the ACES suits featured gripping on the gloves. EMU gloves, which are used for spacewalks, are heated to keep the astronaut's hands warm. The Phase VI gloves, meant for use with the Mark III suit, are the first gloves to be designed with "laser scanning technology, 3D computer modeling, stereo lithography, laser cutting technology and CNC machining". This allows for cheaper, more accurate production, as well as increased detail in joint mobility and flexibility. Life support technology Prior to the Apollo missions, life support in space suits was connected to the space capsule via an umbilical cable. With the Apollo missions, however, life support was packaged into a removable backpack called the Portable Life Support System, which allowed the astronaut to explore the Moon without having to be attached to the spacecraft. The EMU space suit, used for spacewalks, allows the astronaut to manually control the internal environment of the suit. The Mark III suit has a backpack containing about 12 pounds of liquid air for breathing, pressurization, and heat exchange.
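Returning to the joint-work formula W = P(Vi − Vf) from the Design concepts section above, the relationship is easy to illustrate numerically. The following is a minimal sketch, not engineering data: the function name and the example volumes are invented, and the pressure is the EMU's approximate 29.6 kPa operating pressure.

```python
def joint_work(pressure_pa, v_initial_m3, v_final_m3):
    """Work in joules done against suit pressure when a joint's internal
    volume shrinks from v_initial to v_final (W = P * (Vi - Vf))."""
    return pressure_pa * (v_initial_m3 - v_final_m3)

suit_pressure_pa = 29_600   # ~29.6 kPa, approximate EMU operating pressure
# Hypothetical example: bending an elbow squeezes 50 mL out of the joint.
print(joint_work(suit_pressure_pa, 5.0e-4, 4.5e-4))   # ~1.48 J per bend
```

Even a joule or two per bend, repeated thousands of times over an EVA, explains why designs work so hard to keep joint volume constant.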
Helmet technology The development of the spheroidal dome helmet was key in balancing the need for field of view, pressure compensation, and low weight. One inconvenience with some space suits is the head being fixed facing forwards and being unable to turn to look sideways. Astronauts call this effect "alligator head". High-altitude suits Evgeniy Chertovsky created his full-pressure suit or high-altitude "skafandr" (скафандр, which also means "diving suit") in 1931. Emilio Herrera designed and built a full-pressure "stratonautical space suit" in 1935, which was to have been used during an open-basket balloon stratospheric flight scheduled for early 1936. Wiley Post experimented with a number of pressure suits for record-breaking flights. Russell Colley created the space suits worn by the Project Mercury astronauts, including fitting Alan Shepard for his ride as America's first man in space on May 5, 1961. List of space suit models Soviet and Russian suit models SK series (СК): the spacesuit used for the Vostok program (1961–1963). Worn by Yuri Gagarin on the first crewed space flight. No pressure suits were worn aboard Voskhod 1. Berkut (Беркут, meaning "golden eagle"): a modified SK-1 used by the crew of Voskhod 2, which included Alexei Leonov on the first spacewalk (1965). From Soyuz 1 to Soyuz 11 (1967–1971) no pressure suits were worn during launch and reentry. Yastreb (Ястреб, meaning "hawk"): extravehicular activity spacesuit used during a crew exchange between Soyuz 4 and Soyuz 5 (1969). Krechet-94 (Кречет, meaning "gyrfalcon"): designed for the canceled Soviet crewed Moon landing. Strizh (Стриж, meaning "swift (bird)"): developed for pilots of Buran-class orbiters. Sokol (Сокол, meaning "falcon"): suits worn by Soyuz crew members during launch and reentry. They were first worn on Soyuz 12 and have been used from 1973 to the present. Orlan (Орлан, meaning "sea-eagle" or "bald eagle"): suits for extravehicular activity, originally developed for the Soviet lunar program as a lunar orbit EVA suit. It is Russia's current EVA suit, used from 1977 to the present. United States suit models In the early 1950s, Siegfried Hansen and colleagues at Litton Industries designed and built a working hard-shell suit, which was used inside vacuum chambers and was the predecessor of space suits used in NASA missions. Navy Mark IV high-altitude/vacuum suit: used for Project Mercury (1961–1963). Gemini space suits (1965–1966): there were three main variants developed: G3C, designed for intra-vehicle use; G4C, specially designed for EVA and intra-vehicle use; and a special G5C suit worn by the Gemini 7 crew for 14 days inside the spacecraft. Manned Orbiting Laboratory MH-7 space suits: for the canceled MOL program. Apollo Block I A1C suit (1966–1967): a derivative of the Gemini suit, worn by primary and backup crews in training for two early Apollo missions. The nylon pressure garment melted and burned through in the Apollo 1 cabin fire. This suit became obsolete when crewed Block I Apollo flights were discontinued after the fire. Apollo/Skylab A7L EVA and Moon suits: the Block II Apollo suit was the primary pressure suit worn for eleven Apollo flights, three Skylab flights, and by the US astronauts on the Apollo–Soyuz Test Project between 1968 and 1975. The pressure garment's nylon outer layer was replaced with fireproof Beta cloth after the Apollo 1 fire. This suit was the first to employ a liquid-cooled inner garment and an outer micrometeoroid garment.
Beginning with the Apollo 13 mission, it also introduced "commander's stripes" so that a pair of spacewalkers would not appear identical on camera. Shuttle Ejection Escape Suit: used from STS-1 (1981) to STS-4 (1982), worn by the two-man crews in conjunction with the then-installed ejection seats. Derived from a USAF model. These were removed once the Shuttle became certified. From STS-5 (1982) to STS-51-L (1986) no pressure suits were worn during launch and reentry; the crew would wear only a blue flight suit with an oxygen helmet. Launch Entry Suit: first used on STS-26 (1988), the first flight after the Challenger disaster. It was a partial pressure suit derived from a USAF model, used from 1988 to 1998. Advanced Crew Escape Suit: used on the Space Shuttle starting in 1994. The Advanced Crew Escape Suit, or ACES suit, is a full-pressure suit worn by all Space Shuttle crews for the ascent and entry portions of flight. The suit is a direct descendant of the United States Air Force high-altitude pressure suits worn by SR-71 Blackbird and U-2 spy plane pilots, North American X-15 and Gemini pilot-astronauts, and of the Launch Entry Suits worn by NASA astronauts starting on the STS-26 flight. Extravehicular Mobility Unit (EMU): used on both the Space Shuttle and International Space Station (ISS). The EMU is an independent anthropomorphic system that provides environmental protection, mobility, life support, and communications for a Space Shuttle or ISS crew member to perform an EVA in Earth orbit. Used from 1982 to the present, but only available in limited sizing as of 2019. Aerospace company SpaceX developed an IVA suit which has been worn by astronauts on Commercial Crew Program missions operated by SpaceX since the Demo-2 mission. As a continuation of this suit design, SpaceX developed an EVA suit in 2024. The EVA version of the suit was used during the Polaris Dawn private space mission for the first-ever commercial spacewalk. Orion Crew Survival System (OCSS): will be used during launch and re-entry on the Orion MPCV. It is derived from the Advanced Crew Escape Suit but is able to operate at a higher pressure and has improved mobility in the shoulders. SpaceX suit ("Starman suit") In February 2015, SpaceX began developing a space suit for astronauts to wear within the Dragon 2 space capsule. Its appearance was jointly designed by Jose Fernandez, a Hollywood costume designer known for his work on superhero and science fiction films, and SpaceX founder and CEO Elon Musk. The first images of the suit were revealed in September 2017. A mannequin, called "Starman" (after David Bowie's song of the same name), wore the SpaceX space suit during the maiden launch of the Falcon Heavy in February 2018. For this exhibition launch, the suit was not pressurized and carried no sensors. The suit, which is suitable for vacuum, offers protection against cabin depressurization through a single tether at the astronaut's thigh that feeds air and electronic connections. The helmets, which are 3D-printed, contain microphones and speakers. As the suits need the tether connection and do not offer protection against radiation, they are not used for extra-vehicular activities. The suits are custom-made for each astronaut. In 2018, NASA commercial crew astronauts Bob Behnken and Doug Hurley tested the spacesuit inside the Dragon 2 spacecraft in order to familiarize themselves with the suit. They wore it on the Crew Dragon Demo-2 flight launched on 30 May 2020.
The suit is worn by astronauts involved in Commercial Crew Program missions flown by SpaceX. On 4 May 2024, SpaceX unveiled a spacesuit designed for extravehicular activity, based on the IVA suit, for the Polaris Dawn mission in the Polaris program. As with the IVA suit, the helmets are 3D-printed, though the EVA helmet incorporates a camera and a heads-up display providing information on suit metrics during operation. It is more mobile, includes new thermal insulation fabrics, and uses materials from Falcon's interstage and Crew Dragon's external unpressurized trunk. Future NASA contracted suits On 1 June 2022, NASA announced it had selected the competing Axiom Space and Collins Aerospace to develop and provide astronauts with next-generation spacesuit and spacewalk systems, to first test and later use outside the International Space Station, as well as on the lunar surface for the crewed Artemis missions, and to prepare for human missions to Mars. Chinese suit models Shuguang space suit: first-generation EVA space suit developed by China for the canceled 1967 Project 714 crewed space program. It has an orange colour and is made of high-resistance multi-layer polyester fabric. The astronaut could use it inside the cabin and conduct an EVA as well. Project 863 space suit: cancelled project for a second-generation Chinese EVA space suit. Shenzhou IVA (神舟) space suit: first worn by Yang Liwei on Shenzhou 5, the first crewed Chinese space flight; it closely resembles a Sokol-KV2 suit, but it is believed to be a Chinese-made version rather than an actual Russian suit. Pictures show that the suits on Shenzhou 6 differ in detail from the earlier suit; they are also reported to be lighter. Haiying (海鹰号航天服) EVA space suit: the imported Russian Orlan-M EVA suit, called Haiying. Used on Shenzhou 7. Feitian (飞天号航天服) EVA space suit: indigenously developed Chinese-made EVA space suit, also used for the Shenzhou 7 mission. The suit was designed for a spacewalk mission of up to seven hours. Chinese astronauts had been training in the out-of-capsule space suits since July 2007; movements in the heavy suits are seriously restricted. A new generation of Feitian space suit has been used since 2021, as construction of the Tiangong Space Station began. Emerging technologies Several companies and universities are developing technologies and prototypes which represent improvements over current space suits. Additive manufacturing 3D printing (additive manufacturing) can be used to reduce the mass of hard-shell space suits while retaining the high mobility they provide. This fabrication method also allows for in-situ fabrication and repair of suits, a capability which is not currently available but will likely be necessary for Martian exploration. The University of Maryland began development of a prototype 3D-printed hard suit in 2016, based on the kinematics of the AX-5. The prototype arm segment is designed to be evaluated in the Space Systems Laboratory glovebox to compare its mobility to traditional soft suits. Initial research has focused on the feasibility of printing rigid suit elements, bearing races, ball bearings, seals, and sealing surfaces. Astronaut Glove Challenge There are certain difficulties in designing a dexterous space suit glove, and there are limitations to the current designs. For this reason, the Centennial Astronaut Glove Challenge was created to build a better glove.
Competitions have been held in 2007 and 2009, and another is planned. The 2009 contest required the glove to be covered with a micrometeoroid protection layer. Aouda.X Since 2009, the Austrian Space Forum has been developing "Aouda.X", an experimental Mars analogue space suit focusing on an advanced human–machine interface and an on-board computing network to increase situational awareness. The suit is designed to study contamination vectors in planetary exploration analogue environments and to create limitations depending on the pressure regime chosen for a simulation. Since 2012, for the Mars2013 analogue mission by the Austrian Space Forum to Erfoud, Morocco, the Aouda.X analogue space suit has had a sister in the form of Aouda.S. This is a slightly less sophisticated suit meant primarily to assist Aouda.X operations and to allow study of the interactions between two (analogue) astronauts in similar suits. The Aouda.X and Aouda.S space suits are named after the fictional princess from Jules Verne's 1873 novel Around the World in Eighty Days. A public display mock-up of Aouda.X (called Aouda.D) is currently on display at the Dachstein Ice Cave in Obertraun, Austria, after the experiments done there in 2012. Axiom Space and Prada In 2024, at the International Astronautical Congress in Milan, Italy, Axiom Space and Prada showed the results of an ongoing collaboration to develop a spacesuit for NASA's Artemis III mission. Bio-Suit Bio-Suit is a space activity suit under development at the Massachusetts Institute of Technology; early work produced several lower-leg prototypes. The Bio-Suit is custom-fitted to each wearer, using laser body scanning. Constellation Space Suit system On August 2, 2006, NASA indicated plans to issue a Request for Proposal (RFP) for the design, development, certification, production, and sustaining engineering of the Constellation Space Suit to meet the needs of the Constellation Program. NASA foresaw a single suit capable of supporting: survivability during launch, entry and abort; zero-gravity EVA; lunar surface EVA; and Mars surface EVA. On June 11, 2008, NASA awarded a US$745 million contract to Oceaneering International to create the new space suit. Final Frontier Design IVA Space Suit Final Frontier Design (FFD) is developing a commercial full IVA space suit, with its first suit completed in 2010. FFD's suits are intended as lightweight, highly mobile, and inexpensive commercial space suits. Since 2011, FFD has upgraded its IVA suit designs, hardware, processes, and capabilities. FFD has built a total of seven IVA space suit assemblies (as of 2016) for various institutions and customers since its founding, and has conducted high-fidelity human testing in simulators, aircraft, microgravity, and hypobaric chambers. FFD has a Space Act Agreement with NASA's Commercial Space Capabilities Office to develop and execute a Human Rating Plan for its IVA suit. FFD categorizes its IVA suits according to their mission: Terra for Earth-based testing, Stratos for high-altitude flights, and Exos for orbital space flights. Each suit category has different requirements for manufacturing controls, validations, and materials, but all share a similar architecture. I-Suit The I-Suit is a space suit prototype also constructed by ILC Dover, which incorporates several design improvements over the EMU, including a weight-saving soft upper torso.
Both the Mark III and the I-Suit have taken part in NASA's annual Desert Research and Technology Studies (D-RATS) field trials, during which suit occupants interact with one another, and with rovers and other equipment. Mark III The Mark III is a NASA prototype, constructed by ILC Dover, which incorporates a hard lower torso section and a mix of soft and hard components. The Mark III is markedly more mobile than previous suits, despite its high operating pressure (57 kPa, or 8.3 psi), which makes it a "zero-prebreathe" suit, meaning that astronauts would be able to transition directly from a one-atmosphere, mixed-gas space station environment, such as that on the International Space Station, to the suit, without risking decompression sickness, which can occur with rapid depressurization from an atmosphere containing nitrogen or another inert gas. MX-2 The MX-2 is a space suit analogue constructed at the University of Maryland's Space Systems Laboratory. The MX-2 is used for crewed neutral buoyancy testing at the Space Systems Lab's Neutral Buoyancy Research Facility. By approximating the work envelope of a real EVA suit, without meeting the requirements of a flight-rated suit, the MX-2 provides an inexpensive platform for EVA research, compared to using EMU suits at facilities like NASA's Neutral Buoyancy Laboratory. The MX-2 has an operating pressure of 2.5–4 psi. It is a rear-entry suit, featuring a fiberglass HUT. Air, LCVG cooling water, and power are open-loop systems, provided through an umbilical. The suit contains a Mac Mini computer to capture sensor data, such as suit pressure, inlet and outlet air temperatures, and heart rate. Resizable suit elements and adjustable ballast allow the suit to accommodate subjects across a wide range of heights and weights. North Dakota suit Beginning in May 2006, five North Dakota colleges collaborated on a new space suit prototype, funded by a US$100,000 grant from NASA, to demonstrate technologies which could be incorporated into a planetary suit. The suit was tested in the Theodore Roosevelt National Park badlands of western North Dakota. The suit is light without a life support backpack, and costs only a fraction of the standard US$12,000,000 cost for a flight-rated NASA space suit. The suit was developed in just over a year by students from the University of North Dakota, North Dakota State, Dickinson State, the North Dakota State College of Science, and Turtle Mountain Community College. The mobility of the North Dakota suit can be attributed to its low operating pressure; while the North Dakota suit was field-tested at a low differential pressure, NASA's EMU suit operates at a pressure of 29.6 kPa (4.3 psi), a pressure designed to supply approximately sea-level oxygen partial pressure for respiration (see discussion above). PXS NASA's Prototype eXploration Suit (PXS), like the Z-series, is a rear-entry suit compatible with suitports. The suit has components which could be 3D-printed during missions to a range of specifications, to fit different individuals or changing mobility requirements. Suitports A suitport is a theoretical alternative to an airlock, designed for use in hazardous environments and in human spaceflight, especially planetary surface exploration. In a suitport system, a rear-entry space suit is attached and sealed against the outside of a spacecraft, such that an astronaut can enter and seal up the suit, then go on EVA, without the need for an airlock or depressurizing the spacecraft cabin.
Suitports require less mass and volume than airlocks, provide dust mitigation, and prevent cross-contamination of the inside and outside environments. Patents for suitport designs were filed in 1996 by Philip Culbertson Jr. of NASA's Ames Research Center and in 2003 by Joerg Boettcher, Stephen Ransom, and Frank Steinsiek. Z-series In 2012, NASA introduced the Z-1 space suit, the first in the Z-series of space suit prototypes designed by NASA specifically for planetary extravehicular activity. The Z-1 space suit places an emphasis on mobility and protection for space missions. It features a soft torso, versus the hard torsos seen in previous NASA EVA space suits, which reduces mass. It has been labeled the "Buzz Lightyear suit" due to the green streaks of its design. In 2014, NASA released the design for the Z-2 prototype, the next model in the Z-series. NASA conducted a poll asking the public to decide on a design for the Z-2 space suit. The designs, created by fashion students from Philadelphia University, were "Technology", "Trends in Society", and "Biomimicry". The design "Technology" won, and the prototype is built with technologies like 3D printing. The Z-2 suit also differs from the Z-1 suit in that the torso reverts to the hard shell, as seen in NASA's EMU suit. In fiction The earliest space fiction ignored the problems of traveling through a vacuum, and launched its heroes through space without any special protection. In the later 19th century, however, a more realistic brand of space fiction emerged, in which authors tried to describe or depict the space suits worn by their characters. These fictional suits vary in appearance and technology, and range from the highly authentic to the utterly improbable. A very early fictional account of space suits can be seen in Garrett P. Serviss's novel Edison's Conquest of Mars (1898). Later comic book series such as Buck Rogers (1930s) and Dan Dare (1950s) also featured their own takes on space suit design. Science fiction authors such as Robert A. Heinlein contributed to the development of fictional space suit concepts.
Technology
Basics_6
39377
https://en.wikipedia.org/wiki/Similarity%20%28geometry%29
Similarity (geometry)
In Euclidean geometry, two objects are similar if they have the same shape, or if one has the same shape as the mirror image of the other. More precisely, one can be obtained from the other by uniformly scaling (enlarging or reducing), possibly with additional translation, rotation and reflection. This means that either object can be rescaled, repositioned, and reflected, so as to coincide precisely with the other object. If two objects are similar, each is congruent to the result of a particular uniform scaling of the other. For example, all circles are similar to each other, all squares are similar to each other, and all equilateral triangles are similar to each other. On the other hand, ellipses are not all similar to each other, rectangles are not all similar to each other, and isosceles triangles are not all similar to each other. This is because two ellipses can have different width-to-height ratios, two rectangles can have different length-to-breadth ratios, and two isosceles triangles can have different base angles. If two angles of a triangle have measures equal to the measures of two angles of another triangle, then the triangles are similar. Corresponding sides of similar polygons are in proportion, and corresponding angles of similar polygons have the same measure. Two congruent shapes are similar, with a scale factor of 1. However, some school textbooks specifically exclude congruent triangles from their definition of similar triangles by insisting that the sizes must be different if the triangles are to qualify as similar. Similar triangles Two triangles, △ABC and △A′B′C′, are similar if and only if corresponding angles have the same measure: this implies that they are similar if and only if the lengths of corresponding sides are proportional. It can be shown that two triangles having congruent angles (equiangular triangles) are similar, that is, the corresponding sides can be proved to be proportional. This is known as the AAA similarity theorem. Note that the "AAA" is a mnemonic: each one of the three A's refers to an "angle". Due to this theorem, several authors simplify the definition of similar triangles to only require that the corresponding three angles are congruent. There are several criteria each of which is necessary and sufficient for two triangles to be similar: Any two pairs of angles are congruent, which in Euclidean geometry implies that all three angles are congruent: if ∠BAC is equal in measure to ∠B′A′C′, and ∠ABC is equal in measure to ∠A′B′C′, then this implies that ∠ACB is equal in measure to ∠A′C′B′ and the triangles are similar. All the corresponding sides are proportional: this is equivalent to saying that one triangle (or its mirror image) is an enlargement of the other. Any two pairs of sides are proportional, and the angles included between these sides are congruent: this is known as the SAS similarity criterion. The "SAS" is a mnemonic: each one of the two S's refers to a "side"; the A refers to an "angle" between the two sides. Symbolically, we write the similarity and dissimilarity of two triangles △ABC and △A′B′C′ as follows: △ABC ∼ △A′B′C′ and △ABC ≁ △A′B′C′. There are several elementary results concerning similar triangles in Euclidean geometry: Any two equilateral triangles are similar. Two triangles, both similar to a third triangle, are similar to each other (transitivity of similarity of triangles). Corresponding altitudes of similar triangles have the same ratio as the corresponding sides. Two right triangles are similar if the hypotenuse and one other side have lengths in the same ratio.
There are several equivalent conditions in this case, such as the right triangles having an acute angle of the same measure, or having the lengths of the legs (sides) in the same proportion. Given a triangle △ABC and a line segment DE, one can, with a ruler and compass, find a point F such that △ABC ∼ △DEF. The statement that a point F satisfying this condition exists is Wallis's postulate and is logically equivalent to Euclid's parallel postulate. In hyperbolic geometry (where Wallis's postulate is false) similar triangles are congruent. In the axiomatic treatment of Euclidean geometry given by George David Birkhoff (see Birkhoff's axioms) the SAS similarity criterion given above was used to replace both Euclid's parallel postulate and the SAS axiom, which enabled the dramatic shortening of Hilbert's axioms. Similar triangles provide the basis for many synthetic (without the use of coordinates) proofs in Euclidean geometry. Among the elementary results that can be proved this way are: the angle bisector theorem, the geometric mean theorem, Ceva's theorem, Menelaus's theorem and the Pythagorean theorem. Similar triangles also provide the foundations for right triangle trigonometry. Other similar polygons The concept of similarity extends to polygons with more than three sides. Given any two similar polygons, corresponding sides taken in the same sequence (even if clockwise for one polygon and counterclockwise for the other) are proportional and corresponding angles taken in the same sequence are equal in measure. However, proportionality of corresponding sides is not by itself sufficient to prove similarity for polygons beyond triangles (otherwise, for example, all rhombi would be similar). Likewise, equality of all angles in sequence is not sufficient to guarantee similarity (otherwise all rectangles would be similar). A sufficient condition for similarity of polygons is that corresponding sides and diagonals are proportional. For given n, all regular n-gons are similar. Similar curves Several types of curves have the property that all examples of that type are similar to each other. These include: Lines (any two lines are even congruent) Line segments Circles Parabolas Hyperbolas of a specific eccentricity Ellipses of a specific eccentricity Catenaries Graphs of the logarithm function for different bases Graphs of the exponential function for different bases Logarithmic spirals are self-similar In Euclidean space A similarity (also called a similarity transformation or similitude) of a Euclidean space is a bijection f from the space onto itself that multiplies all distances by the same positive real number r, so that for any two points x and y we have d(f(x), f(y)) = r d(x, y), where d(x, y) is the Euclidean distance from x to y. The scalar r has many names in the literature, including the ratio of similarity, the stretching factor and the similarity coefficient. When r = 1, a similarity is called an isometry (rigid transformation). Two sets are called similar if one is the image of the other under a similarity. As a map, a similarity of ratio r takes the form f(x) = rAx + t, where A is an orthogonal matrix and t is a translation vector. Similarities preserve planes, lines, perpendicularity, parallelism, midpoints, inequalities between distances and line segments. Similarities preserve angles but do not necessarily preserve orientation: direct similitudes preserve orientation and opposite similitudes change it. The similarities of Euclidean space form a group under the operation of composition, called the similarities group S.
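The map form f(x) = rAx + t just described can be checked numerically. Below is a minimal Python/NumPy sketch (the helper make_similarity is hypothetical, not a library function) showing that a direct similitude of ratio r multiplies distances by exactly r.

```python
import numpy as np

def make_similarity(r, theta, t):
    """Return f(x) = r * A @ x + t with A a rotation (orthogonal) matrix,
    i.e. a direct similitude of the plane with ratio r."""
    A = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = np.asarray(t, dtype=float)
    return lambda x: r * A @ np.asarray(x, dtype=float) + t

f = make_similarity(r=2.0, theta=np.pi / 4, t=[1.0, -3.0])
p, q = np.array([0.0, 0.0]), np.array([3.0, 4.0])
print(np.linalg.norm(q - p))        # 5.0
print(np.linalg.norm(f(q) - f(p)))  # 10.0: distances scale by exactly r
```

The translation t cancels when differencing two images, and the orthogonal matrix A preserves lengths, so only the factor r remains, which is precisely the defining property above.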
The direct similitudes form a normal subgroup of S, and the Euclidean group E(n) of isometries also forms a normal subgroup. The similarities group S is itself a subgroup of the affine group, so every similarity is an affine transformation. One can view the Euclidean plane as the complex plane, that is, as a 2-dimensional space over the reals. The 2D similarity transformations can then be expressed in terms of complex arithmetic and are given by f(z) = az + b (direct similitudes) and f(z) = az̄ + b (opposite similitudes), where a and b are complex numbers, a ≠ 0. When |a| = 1, these similarities are isometries. Area ratio and volume ratio The ratio between the areas of similar figures is equal to the square of the ratio of corresponding lengths of those figures (for example, when the side of a square or the radius of a circle is multiplied by three, its area is multiplied by nine, i.e. by three squared). The altitudes of similar triangles are in the same ratio as corresponding sides. If a triangle has a side of length b and an altitude drawn to that side of length h, then a similar triangle with corresponding side of length kb will have an altitude drawn to that side of length kh. The area of the first triangle is A = bh/2, while the area of the similar triangle will be A′ = (kb)(kh)/2 = k²A. Similar figures which can be decomposed into similar triangles will have areas related in the same way. The relationship holds for figures that are not rectifiable as well. The ratio between the volumes of similar figures is equal to the cube of the ratio of corresponding lengths of those figures (for example, when the edge of a cube or the radius of a sphere is multiplied by three, its volume is multiplied by 27, i.e. by three cubed). Galileo's square–cube law concerns similar solids. If the ratio of similitude (ratio of corresponding sides) between the solids is k, then the ratio of surface areas of the solids will be k², while the ratio of volumes will be k³. Similarity with a center If a similarity has exactly one invariant point, a point that the similarity keeps unchanged, then this only point is called the "center" of the similarity. On the first image below the title, on the left, one or another similarity shrinks a regular polygon into a concentric one, the vertices of which are each on a side of the previous polygon. This rotational reduction is repeated, so the initial polygon is extended into an abyss of regular polygons. The center of the similarity is the common center of the successive polygons. A red segment joins a vertex of the initial polygon to its image under the similarity, followed by a red segment going to the following image of the vertex, and so on to form a spiral. Actually we can see more than three direct similarities on this first image, because every regular polygon is invariant under certain direct similarities, more precisely certain rotations the center of which is the center of the polygon, and a composition of direct similarities is also a direct similarity. For example, we see the image of the initial regular pentagon under a homothety of negative ratio, which is a similarity of ±180° angle with a positive ratio. Below the title on the right, the second image shows a similarity decomposed into a rotation and a homothety. The similarity and the rotation have the same angle of +135 degrees modulo 360 degrees. The similarity and the homothety have the same ratio, the multiplicative inverse of the ratio (square root of 2) of the inverse similarity. One point is the common center of the three transformations: rotation, homothety and similarity.
For example, one marked point is the image of another under the rotation, and of a third under the homothety; more briefly, the previous rotation, homothety and similarity can be given short names, with "D" like "Direct" for the direct similarity. This direct similarity, which transforms the first triangle into the second, can be decomposed into a rotation and a homothety of the same center in several manners; the last such decomposition is the only one represented on the image. To get the similarity we can also compose, in any order, a rotation of −45° angle and a homothety of suitable ratio. With "M" like "Mirror" and "I" like "Indirect", if M is the reflection with respect to the marked line, then the composition of M with the direct similarity is the indirect similarity I, which transforms the first segment into the second, but transforms one marked point into another and one point into itself. The second square is the image of the first under this similarity. One point is the center of this similarity, because any point invariant under the similarity must coincide with its own image, a condition only that point fulfills. How can the center of the direct similarity be constructed from the squares, i.e. how can we find the point that is the center of a rotation of +135° angle transforming one ray into the other? This is an inscribed angle problem plus a question of orientation. The set of points from which a given segment subtends a given angle is an arc of a circle joining the segment's endpoints, of which the two radii leading to the endpoints form a corresponding central angle. This set of points is the blue quarter-circle inside the first square. In the same manner, the center is a member of the blue quarter-circle inside the second square. So the center is the intersection point of these two quarter-circles. In general metric spaces In a general metric space (X, d), an exact similitude is a function f from the metric space X into itself that multiplies all distances by the same positive scalar r, called f's contraction factor, so that for any two points x and y we have d(f(x), f(y)) = r d(x, y). Weaker versions of similarity would for instance have f be a bi-Lipschitz function and the scalar r a limit: r = lim d(f(x), f(y)) / d(x, y). This weaker version applies when the metric is an effective resistance on a topologically self-similar set. A self-similar subset of a metric space (X, d) is a set K for which there exists a finite set of similitudes {γ_s} with contraction factors 0 ≤ r_s < 1 such that K is the unique compact subset of X for which K = ⋃_s γ_s(K). These self-similar sets have a self-similar measure with dimension D given by the formula ∑_s (r_s)^D = 1, which is often (but not always) equal to the set's Hausdorff dimension and packing dimension. If the overlaps between the γ_s(K) are "small", we have the following simple formula for the measure: μ(γ_(s1) ∘ ⋯ ∘ γ_(sn)(K)) = (r_(s1) ⋯ r_(sn))^D. Topology In topology, a metric space can be constructed by defining a similarity instead of a distance. The similarity is a function such that its value is greater when two points are closer (contrary to the distance, which is a measure of dissimilarity: the closer the points, the lesser the distance). The definition of the similarity can vary among authors, depending on which properties are desired. The basic common properties are: positive definedness, S(a, b) ≥ 0; and majoration by the similarity of one element on itself (auto-similarity), S(a, b) ≤ S(a, a). More properties can be invoked, such as: reflectivity, S(a, b) = S(a, a) if and only if a = b; and finiteness, S(a, b) < ∞ for all a, b. The upper value is often set at 1 (creating a possibility for a probabilistic interpretation of the similitude). Note that, in the topological sense used here, a similarity is a kind of measure. This usage is not the same as the similarity transformations of the Euclidean space and general metric space sections of this article. Self-similarity Self-similarity means that a pattern is non-trivially similar to itself, e.g., the set of numbers of the form 2^i where i ranges over all integers.
When this set is plotted on a logarithmic scale it has one-dimensional translational symmetry: adding or subtracting the logarithm of two to the logarithm of one of these numbers produces the logarithm of another of these numbers. In the given set of numbers themselves, this corresponds to a similarity transformation in which the numbers are multiplied or divided by two. Psychology The intuition for the notion of geometric similarity already appears in human children, as can be seen in their drawings.
Mathematics
Geometry: General
39378
https://en.wikipedia.org/wiki/Distance
Distance
Distance is a numerical or occasionally qualitative measurement of how far apart objects, points, people, or ideas are. In physics or everyday usage, distance may refer to a physical length or an estimation based on other criteria (e.g. "two counties over"). The term is also frequently used metaphorically to mean a measurement of the amount of difference between two similar objects (such as statistical distance between probability distributions or edit distance between strings of text) or a degree of separation (as exemplified by distance between people in a social network). Most such notions of distance, both physical and metaphorical, are formalized in mathematics using the notion of a metric space. In the social sciences, distance can refer to a qualitative measurement of separation, such as social distance or psychological distance. Distances in physics and geometry The distance between physical locations can be defined in different ways in different contexts. Straight-line or Euclidean distance The distance between two points in physical space is the length of a straight line between them, which is the shortest possible path. This is the usual meaning of distance in classical physics, including Newtonian mechanics. Straight-line distance is formalized mathematically as the Euclidean distance in two- and three-dimensional space. In Euclidean geometry, the distance between two points A and B is often denoted |AB|. In coordinate geometry, Euclidean distance is computed using the Pythagorean theorem. The distance between points (x1, y1) and (x2, y2) in the plane is given by: d = √((x2 − x1)² + (y2 − y1)²). Similarly, given points (x1, y1, z1) and (x2, y2, z2) in three-dimensional space, the distance between them is: d = √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²). This idea generalizes to higher-dimensional Euclidean spaces. Measurement There are many ways of measuring straight-line distances. For example, it can be done directly using a ruler, or indirectly with a radar (for long distances) or interferometry (for very short distances). The cosmic distance ladder is a set of ways of measuring extremely long distances. Shortest-path distance on a curved surface The straight-line distance between two points on the surface of the Earth is not very useful for most purposes, since we cannot tunnel straight through the Earth's mantle. Instead, one typically measures the shortest path along the surface of the Earth, as the crow flies. This is approximated mathematically by the great-circle distance on a sphere. More generally, the shortest path between two points along a curved surface is known as a geodesic. The arc length of geodesics gives a way of measuring distance from the perspective of an ant or other flightless creature living on that surface. Effects of relativity In the theory of relativity, because of phenomena such as length contraction and the relativity of simultaneity, distances between objects depend on a choice of inertial frame of reference. On galactic and larger scales, the measurement of distance is also affected by the expansion of the universe. In practice, a number of distance measures are used in cosmology to quantify such distances. Other spatial distances Unusual definitions of distance can be helpful to model certain physical situations, but are also used in theoretical mathematics: In practice, one is often interested in the travel distance between two points along roads, rather than as the crow flies.
In a grid plan, the travel distance between street corners is given by the Manhattan distance: the number of east–west and north–south blocks one must traverse to get between those two points. Chessboard distance, formalized as Chebyshev distance, is the minimum number of moves a king must make on a chessboard in order to travel between two squares. Metaphorical distances Many abstract notions of distance used in mathematics, science and engineering represent a degree of difference or separation between similar objects. This page gives a few examples. Statistical distances In statistics and information geometry, statistical distances measure the degree of difference between two probability distributions. There are many kinds of statistical distances, typically formalized as divergences; these allow a set of probability distributions to be understood as a geometrical object called a statistical manifold. The most elementary is the squared Euclidean distance, which is minimized by the least squares method; this is the most basic Bregman divergence. The most important in information theory is the relative entropy (Kullback–Leibler divergence), which allows one to analogously study maximum likelihood estimation geometrically; this is an example of both an f-divergence and a Bregman divergence (and in fact the only example which is both). Statistical manifolds corresponding to Bregman divergences are flat manifolds in the corresponding geometry, allowing an analog of the Pythagorean theorem (which holds for squared Euclidean distance) to be used for linear inverse problems in inference by optimization theory. Other important statistical distances include the Mahalanobis distance and the energy distance. Edit distances In computer science, an edit distance or string metric between two strings measures how different they are. For example, the words "dog" and "dot", which differ by just one letter, are closer than "dog" and "cat", which have no letters in common. This idea is used in spell checkers and in coding theory, and is mathematically formalized in a number of different ways, including Levenshtein distance, Hamming distance, Lee distance, and Jaro–Winkler distance. Distance in graph theory In a graph, the distance between two vertices is measured by the length of the shortest edge path between them. For example, if the graph represents a social network, then the idea of six degrees of separation can be interpreted mathematically as saying that the distance between any two vertices is at most six. Similarly, the Erdős number and the Bacon number—the number of collaborative relationships away a person is from prolific mathematician Paul Erdős and actor Kevin Bacon, respectively—are distances in the graphs whose edges represent mathematical or artistic collaborations. In the social sciences In psychology, human geography, and the social sciences, distance is often theorized not as an objective numerical measurement, but as a qualitative description of a subjective experience. For example, psychological distance is "the different ways in which an object might be removed from" the self along dimensions such as "time, space, social distance, and hypotheticality". In sociology, social distance describes the separation between individuals or social groups in society along dimensions such as social class, race/ethnicity, gender or sexuality. Mathematical formalization Most of the notions of distance between two points or objects described above are examples of the mathematical idea of a metric. 
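Several of the distances surveyed above admit one-line implementations. The following sketch is illustrative, with hypothetical helper names; it computes the Euclidean (straight-line), Manhattan (grid-plan), Chebyshev (chessboard), and Hamming (edit) distances for the small examples discussed in this section.

```python
import math

def euclidean(p, q):                 # straight-line distance (Pythagorean theorem)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def manhattan(p, q):                 # grid-plan travel distance
    return sum(abs(a - b) for a, b in zip(p, q))

def chebyshev(p, q):                 # chessboard (king-move) distance
    return max(abs(a - b) for a, b in zip(p, q))

def hamming(s, t):                   # simple edit distance for equal-length strings
    return sum(a != b for a, b in zip(s, t))

print(euclidean((0, 0), (3, 4)))     # 5.0
print(manhattan((0, 0), (3, 4)))     # 7
print(chebyshev((0, 0), (3, 4)))     # 4
print(hamming("dog", "dot"))         # 1
```

Each of these functions satisfies the metric axioms formalized next.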
A metric or distance function is a function d which takes pairs of points or objects to real numbers and satisfies the following rules: The distance between an object and itself is always zero. The distance between distinct objects is always positive. Distance is symmetric: the distance from x to y is always the same as the distance from y to x. Distance satisfies the triangle inequality: if x, y, and z are three objects, then d(x, z) ≤ d(x, y) + d(y, z). This condition can be described informally as "intermediate stops can't speed you up." As an exception, many of the divergences used in statistics are not metrics. Distance between sets There are multiple ways of measuring the physical distance between objects that consist of more than one point: One may measure the distance between representative points such as the center of mass; this is used for astronomical distances such as the Earth–Moon distance. One may measure the distance between the closest points of the two objects; in this sense, the altitude of an airplane or spacecraft is its distance from the Earth. The same sense of distance is used in Euclidean geometry to define distance from a point to a line, distance from a point to a plane, or, more generally, perpendicular distance between affine subspaces. Even more generally, this idea can be used to define the distance between two subsets of a metric space. The distance between sets A and B is the infimum of the distances between any two of their respective points: d(A, B) = inf { d(x, y) : x ∈ A, y ∈ B }. This does not define a metric on the set of such subsets: the distance between overlapping sets is zero, and this distance does not satisfy the triangle inequality for any metric space with two or more points (consider the triple of sets consisting of two distinct singletons and their union). The Hausdorff distance between two subsets of a metric space can be thought of as measuring how far they are from perfectly overlapping. Somewhat more precisely, the Hausdorff distance between A and B is either the distance from A to the farthest point of B, or the distance from B to the farthest point of A, whichever is larger. (Here "farthest point" must be interpreted as a supremum.) The Hausdorff distance defines a metric on the set of compact subsets of a metric space. Related ideas The word distance is also used for related concepts that are not encompassed by the description "a numerical measurement of how far apart points or objects are". Distance travelled The distance travelled by an object is the length of a specific path travelled between two points, such as the distance walked while navigating a maze. This can even be a closed distance along a closed curve which starts and ends at the same point, such as a ball thrown straight up, or the Earth when it completes one orbit. This is formalized mathematically as the arc length of the curve. The distance travelled may also be signed: a "forward" distance is positive and a "backward" distance is negative. Circular distance is the distance traveled by a point on the circumference of a wheel, which can be useful to consider when designing vehicles or mechanical gears (see also odometry). The circumference of the wheel is 2π times the radius; if the radius is 1, each revolution of the wheel causes a vehicle to travel 2π radians. Displacement and directed distance The displacement in classical physics measures the change in position of an object during an interval of time. While distance is a scalar quantity, or a magnitude, displacement is a vector quantity with both magnitude and direction.
In general, the vector measuring the difference between two locations (the relative position) is sometimes called the directed distance. For example, the directed distance from the New York City Main Library flag pole to the Statue of Liberty flag pole has: a starting point (the library flag pole); an ending point (the statue flag pole); a direction (−38°); and a distance (8.72 km). Signed distance
Mathematics
Geometry
39382
https://en.wikipedia.org/wiki/Infimum%20and%20supremum
Infimum and supremum
In mathematics, the infimum (abbreviated inf; plural: infima) of a subset S of a partially ordered set P is the greatest element in P that is less than or equal to each element of S, if such an element exists. If the infimum of S exists, it is unique, and if b is a lower bound of S, then b is less than or equal to the infimum of S. Consequently, the term greatest lower bound (abbreviated as GLB) is also commonly used. The supremum (abbreviated sup; plural: suprema) of a subset S of a partially ordered set P is the least element in P that is greater than or equal to each element of S, if such an element exists. If the supremum of S exists, it is unique, and if b is an upper bound of S, then the supremum of S is less than or equal to b. Consequently, the supremum is also referred to as the least upper bound (or LUB). The infimum is, in a precise sense, dual to the concept of a supremum. Infima and suprema of real numbers are common special cases that are important in analysis, and especially in Lebesgue integration. However, the general definitions remain valid in the more abstract setting of order theory where arbitrary partially ordered sets are considered. The concepts of infimum and supremum are close to minimum and maximum, but are more useful in analysis because they better characterize special sets which may have no minimum or maximum. For instance, the set of positive real numbers (not including 0) does not have a minimum, because any given element could simply be divided in half resulting in a smaller number that is still a positive real number. There is, however, exactly one infimum of the positive real numbers relative to the real numbers: 0, which is smaller than all the positive real numbers and greater than any other real number which could be used as a lower bound. An infimum of a set is always and only defined relative to a superset of the set in question. For example, there is no infimum of the positive real numbers inside the positive real numbers (as their own superset), nor any infimum of the positive real numbers inside the complex numbers with positive real part. Formal definition A lower bound of a subset S of a partially ordered set (P, ≤) is an element l of P such that l ≤ x for all x in S. A lower bound l of S is called an infimum (or greatest lower bound, or meet) of S if, for all lower bounds y of S in P, y ≤ l (l is larger than any other lower bound). Similarly, an upper bound of a subset S of a partially ordered set (P, ≤) is an element u of P such that u ≥ x for all x in S. An upper bound u of S is called a supremum (or least upper bound, or join) of S if, for all upper bounds z of S in P, z ≥ u (u is less than any other upper bound). Existence and uniqueness Infima and suprema do not necessarily exist. Existence of an infimum of a subset S of P can fail if S has no lower bound at all, or if the set of lower bounds does not contain a greatest element. (An example of this is the subset {x ∈ ℚ : x² < 2} of the rational numbers ℚ. It has upper bounds, such as 1.5, but no supremum in ℚ.) Consequently, partially ordered sets for which certain infima are known to exist become especially interesting. For instance, a lattice is a partially ordered set in which all nonempty finite subsets have both a supremum and an infimum, and a complete lattice is a partially ordered set in which all subsets have both a supremum and an infimum. More information on the various classes of partially ordered sets that arise from such considerations is found in the article on completeness properties. If the supremum of a subset S exists, it is unique. If S contains a greatest element, then that element is the supremum; otherwise, the supremum does not belong to S (or does not exist). Likewise, if the infimum exists, it is unique.
Relation to maximum and minimum elements

The infimum of a subset $S$ of a partially ordered set $P$, assuming it exists, does not necessarily belong to $S$. If it does, it is a minimum or least element of $S$. Similarly, if the supremum of $S$ belongs to $S$, it is a maximum or greatest element of $S$. For example, consider the set of negative real numbers (excluding zero). This set has no greatest element, since for every element of the set, there is another, larger, element. For instance, for any negative real number $x$, there is another negative real number $\tfrac{x}{2}$, which is greater. On the other hand, every real number greater than or equal to zero is certainly an upper bound on this set. Hence, $0$ is the least upper bound of the negative reals, so the supremum is $0$. This set has a supremum but no greatest element. However, the definition of maximal and minimal elements is more general. In particular, a set can have many maximal and minimal elements, whereas infima and suprema are unique. Whereas maxima and minima must be members of the subset that is under consideration, the infimum and supremum of a subset need not be members of that subset themselves.

Minimal upper bounds

Finally, a partially ordered set may have many minimal upper bounds without having a least upper bound. Minimal upper bounds are those upper bounds for which there is no strictly smaller element that also is an upper bound. This does not say that each minimal upper bound is smaller than all other upper bounds; it merely is not greater. The distinction between "minimal" and "least" is only possible when the given order is not a total one. In a totally ordered set, like the real numbers, the concepts are the same. As an example, let $S$ be the set of all finite subsets of natural numbers and consider the partially ordered set obtained by taking all sets from $S$ together with the set of integers $\mathbb{Z}$ and the set of positive real numbers $\mathbb{R}^+$, ordered by subset inclusion. Then clearly both $\mathbb{Z}$ and $\mathbb{R}^+$ are greater than all finite sets of natural numbers. Yet, neither is $\mathbb{R}^+$ smaller than $\mathbb{Z}$ nor is the converse true: both sets are minimal upper bounds but none is a supremum.

Least-upper-bound property

The least-upper-bound property is an example of the aforementioned completeness properties which is typical for the set of real numbers. This property is sometimes called Dedekind completeness. If an ordered set $S$ has the property that every nonempty subset of $S$ having an upper bound also has a least upper bound, then $S$ is said to have the least-upper-bound property. As noted above, the set $\mathbb{R}$ of all real numbers has the least-upper-bound property. Similarly, the set $\mathbb{Z}$ of integers has the least-upper-bound property; if $S$ is a nonempty subset of $\mathbb{Z}$ and there is some number $n$ such that every element $s \in S$ is less than or equal to $n$, then there is a least upper bound $u$ for $S$: an integer that is an upper bound for $S$ and is less than or equal to every other upper bound for $S$. A well-ordered set also has the least-upper-bound property, and the empty subset has also a least upper bound: the minimum of the whole set. An example of a set that lacks the least-upper-bound property is the set of rational numbers $\mathbb{Q}$. Let $S$ be the set of all rational numbers $q$ such that $q^2 < 2$. Then $S$ has an upper bound ($1000$, for example, or $6$) but no least upper bound in $\mathbb{Q}$: if we suppose $p \in \mathbb{Q}$ is the least upper bound, a contradiction is immediately deduced, because between any two reals $x$ and $y$ (including $\sqrt{2}$ and $p$) there exists some rational $r$, which itself would have to be the least upper bound (if $p > \sqrt{2}$) or a member of $S$ greater than $p$ (if $p < \sqrt{2}$). Another example is the hyperreals; there is no least upper bound of the set of positive infinitesimals.
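The argument above is constructive in the following sense: from any rational upper bound of $S$ one can always produce a strictly smaller rational upper bound. A minimal sketch with exact rational arithmetic (the Babylonian-step helper is an illustrative choice, not from the article):

```python
# For S = {q in Q : q^2 < 2}, any rational upper bound r (so r > 0, r*r > 2)
# can be strictly improved; hence no rational least upper bound exists.

from fractions import Fraction

def improve(r: Fraction) -> Fraction:
    """Return a rational upper bound of S strictly smaller than r."""
    assert r > 0 and r * r > 2
    # Babylonian step toward sqrt(2): for r > sqrt(2) the result stays
    # strictly between sqrt(2) and r, so it still bounds every q with q^2 < 2.
    r2 = (r + 2 / r) / 2
    assert r2 < r and r2 * r2 > 2
    return r2

r = Fraction(3, 2)              # 1.5 is an upper bound, as noted above
for _ in range(5):
    r = improve(r)
    print(r, float(r))          # strictly decreasing upper bounds -> sqrt(2)
```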
There is a corresponding greatest-lower-bound property; an ordered set possesses the greatest-lower-bound property if and only if it also possesses the least-upper-bound property; the least-upper-bound of the set of lower bounds of a set is the greatest-lower-bound, and the greatest-lower-bound of the set of upper bounds of a set is the least-upper-bound of the set.

If in a partially ordered set $P$ every bounded subset has a supremum, this applies also, for any set $X$, in the function space containing all functions from $X$ to $P$, where $f \leq g$ if and only if $f(x) \leq g(x)$ for all $x \in X$. For example, it applies for real functions, and, since these can be considered special cases of functions, for real $n$-tuples and sequences of real numbers. The least-upper-bound property is an indicator of the suprema.

Infima and suprema of real numbers

In analysis, infima and suprema of subsets $S$ of the real numbers are particularly important. For instance, the negative real numbers do not have a greatest element, and their supremum is $0$ (which is not a negative real number). The completeness of the real numbers implies (and is equivalent to) that any bounded nonempty subset $S$ of the real numbers has an infimum and a supremum. If $S$ is not bounded below, one often formally writes $\inf S = -\infty$. If $S$ is empty, one writes $\inf S = +\infty$.

Properties

If $A$ is any set of real numbers then $A \neq \varnothing$ if and only if $\sup A \geq \inf A$, and otherwise $-\infty = \sup \varnothing < \inf \varnothing = \infty$.

Set inclusion: If $A \subseteq B$ are sets of real numbers then $\inf A \geq \inf B$ (if $A = \varnothing$ this reads as $\infty \geq \inf B$) and $\sup A \leq \sup B$.

Image under functions: If $f$ is a nonincreasing function, then $\sup f(A) \leq f(\inf A)$ and $\inf f(A) \geq f(\sup A)$, where the image is defined as $f(A) := \{f(a) : a \in A\}$.

Identifying infima and suprema: If the infimum of $A$ exists (that is, $\inf A$ is a real number) and if $p$ is any real number, then $p = \inf A$ if and only if $p$ is a lower bound and for every $\epsilon > 0$ there is an $a_\epsilon \in A$ with $a_\epsilon < p + \epsilon$. Similarly, if $\sup A$ is a real number and if $p$ is any real number, then $p = \sup A$ if and only if $p$ is an upper bound and if for every $\epsilon > 0$ there is an $a_\epsilon \in A$ with $a_\epsilon > p - \epsilon$.

Relation to limits of sequences: If $S \neq \varnothing$ is any non-empty set of real numbers, then there always exists a non-decreasing sequence $s_1 \leq s_2 \leq \cdots$ in $S$ such that $\lim_{n\to\infty} s_n = \sup S$. Similarly, there will exist a (possibly different) non-increasing sequence in $S$ converging to $\inf S$. In particular, the infimum and supremum of a set belong to its closure: if $\sup S < \infty$ then $\sup S \in \overline{S}$, and if $\inf S > -\infty$ then $\inf S \in \overline{S}$. Expressing the infimum and supremum as a limit of such a sequence allows theorems from various branches of mathematics to be applied. Consider for example the well-known fact from topology that if $f$ is a continuous function and $s_1, s_2, \ldots$ is a sequence of points in its domain that converges to a point $p$, then $f(s_1), f(s_2), \ldots$ necessarily converges to $f(p)$. It implies that if $\lim_{n\to\infty} s_n = \sup S$ is a real number (where all $s_n$ are in $S$) and if $f$ is a continuous function whose domain contains $S$ and $\sup S$, then $f(\sup S) = \lim_{n\to\infty} f(s_n)$, which (for instance) guarantees that $f(\sup S)$ is an adherent point of the set $f(S) := \{f(s) : s \in S\}$. If, in addition to what has been assumed, the continuous function $f$ is also an increasing or non-decreasing function, then it is even possible to conclude that $\sup f(S) = f(\sup S)$. This may be applied, for instance, to conclude that whenever $g$ is a real (or complex) valued function with domain $\Omega \neq \varnothing$ whose sup norm $\|g\|_\infty := \sup_{x\in\Omega} |g(x)|$ is finite, then for every non-negative real number $q$, $\|g\|_\infty^q = \sup_{x\in\Omega} |g(x)|^q$, since the map $f : [0,\infty) \to \mathbb{R}$ defined by $f(x) = x^q$ is a continuous non-decreasing function whose domain $[0,\infty)$ always contains $S := \{|g(x)| : x \in \Omega\}$ and $\sup S = \|g\|_\infty$. Although this discussion focused on $\sup$, similar conclusions can be reached for $\inf$, with appropriate changes (such as requiring that $f$ be non-increasing rather than non-decreasing). Other norms defined in terms of $\sup$ or $\inf$ include the weak $L^{p,w}$ space norms (for $1 \leq p < \infty$), the norm on Lebesgue space $L^\infty(\Omega, \mu)$, and operator norms. Monotone sequences in $S$ that converge to $\sup S$ (or to $\inf S$) can also be used to help prove many of the formulas given below, since addition and multiplication of real numbers are continuous operations.
Arithmetic operations on sets

The following formulas depend on a notation that conveniently generalizes arithmetic operations on sets. Throughout, $A, B \subseteq \mathbb{R}$ are sets of real numbers.

Sum of sets: The Minkowski sum of two sets $A$ and $B$ of real numbers is the set $A + B := \{a + b : a \in A,\, b \in B\}$ consisting of all possible arithmetic sums of pairs of numbers, one from each set. The infimum and supremum of the Minkowski sum satisfy, if $A \neq \varnothing$ and $B \neq \varnothing$, $\inf(A + B) = (\inf A) + (\inf B)$ and $\sup(A + B) = (\sup A) + (\sup B)$.

Product of sets: The multiplication of two sets $A$ and $B$ of real numbers is defined similarly to their Minkowski sum: $A \cdot B := \{a \cdot b : a \in A,\, b \in B\}$. If $A$ and $B$ are nonempty sets of positive real numbers then $\inf(A \cdot B) = (\inf A)(\inf B)$, and similarly for suprema, $\sup(A \cdot B) = (\sup A)(\sup B)$.

Scalar product of a set: The product of a real number $r$ and a set $B$ of real numbers is the set $rB := \{rb : b \in B\}$. If $r \geq 0$ then $\inf(rB) = r(\inf B)$ and $\sup(rB) = r(\sup B)$, while if $r \leq 0$ then $\inf(rB) = r(\sup B)$ and $\sup(rB) = r(\inf B)$. In the case $r = 0$, one has, if $B \neq \varnothing$, $rB = \{0\}$. Using $r = -1$ and the notation $-B := (-1)B = \{-b : b \in B\}$, it follows that $\inf(-B) = -\sup B$ and $\sup(-B) = -\inf B$.

Multiplicative inverse of a set: For any set $S$ that does not contain $0$, let $\tfrac{1}{S} := \{\tfrac{1}{s} : s \in S\}$. If $S \subseteq (0, \infty)$ is non-empty, then $\tfrac{1}{\sup S} = \inf \tfrac{1}{S}$, where this equation also holds when $\sup S = \infty$ if the definition $\tfrac{1}{\infty} := 0$ is used. This equality may alternatively be written as $\tfrac{1}{\sup_{s \in S} s} = \inf_{s \in S} \tfrac{1}{s}$. Moreover, $\inf S = 0$ if and only if $\sup \tfrac{1}{S} = \infty$, where if $\inf S > 0$, then $\tfrac{1}{\inf S} = \sup \tfrac{1}{S}$.

Duality

If one denotes by $P^{\mathrm{op}}$ the partially-ordered set $P$ with the opposite order relation, that is, for all $x$ and $y$ declare $x \leq y$ in $P^{\mathrm{op}}$ if and only if $x \geq y$ in $P$, then the infimum of a subset $S$ in $P$ equals the supremum of $S$ in $P^{\mathrm{op}}$ and vice versa. For subsets of the real numbers, another kind of duality holds: $\inf S = -\sup(-S)$, where $-S := \{-s : s \in S\}$.

Examples

Infima: The infimum of the set of numbers $\{2, 3, 4\}$ is $2$. The number $1$ is a lower bound, but not the greatest lower bound, and hence not the infimum. More generally, if a set has a smallest element, then the smallest element is the infimum for the set. In this case, it is also called the minimum of the set. If $(x_n)_{n=1}^\infty$ is a decreasing sequence with limit $x$, then $\inf x_n = x$.

Suprema: The supremum of the set of numbers $\{1, 2, 3\}$ is $3$. The number $4$ is an upper bound, but it is not the least upper bound, and hence is not the supremum. The supremum of $\{x \in \mathbb{Q} : x^2 < 2\}$, regarded as a subset of the real numbers, is $\sqrt{2}$. In the last example, the supremum of a set of rationals is irrational, which means that the rationals are incomplete. One basic property of the supremum is $\sup\{f(t) + g(t) : t \in A\} \leq \sup\{f(t) : t \in A\} + \sup\{g(t) : t \in A\}$ for any functionals $f$ and $g$. The supremum of a subset $S$ of $(\mathbb{N}, \mid)$, where $\mid$ denotes "divides", is the lowest common multiple of the elements of $S$. The supremum of a set $S$ containing subsets of some set $X$ is the union of the subsets when considering the partially ordered set $(P(X), \subseteq)$, where $P(X)$ is the power set of $X$ and $\subseteq$ is subset inclusion.
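Since sup = max and inf = min on nonempty finite sets of reals, the arithmetic identities above can be sanity-checked numerically. A small sketch (the example sets are arbitrary choices):

```python
# Numeric check of the set-arithmetic identities on finite sets of reals.

A = {0.5, 2.0, 3.5}
B = {1.0, 4.0}

sum_set  = {a + b for a in A for b in B}          # Minkowski sum A + B
prod_set = {a * b for a in A for b in B}          # product set A * B (A, B > 0)
neg_A    = {-a for a in A}                        # -A

assert min(sum_set) == min(A) + min(B)            # inf(A+B) = inf A + inf B
assert max(sum_set) == max(A) + max(B)            # sup(A+B) = sup A + sup B
assert min(prod_set) == min(A) * min(B)           # positive sets only
assert max(prod_set) == max(A) * max(B)
assert min(neg_A) == -max(A) and max(neg_A) == -min(A)   # duality
assert min({1 / a for a in A}) == 1 / max(A)      # inverse: inf(1/A) = 1/sup A
print("all identities verified on this example")
```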
Mathematics
Order theory
null
39389
https://en.wikipedia.org/wiki/Pine
Pine
A pine is any conifer tree or shrub in the genus Pinus of the family Pinaceae. Pinus is the sole genus in the subfamily Pinoideae. World Flora Online accepts 134 species-rank taxa (119 species and 15 nothospecies) of pines as current, with additional synonyms, and Plants of the World Online 126 species-rank taxa (113 species and 13 nothospecies), making it the largest genus among the conifers. The highest species diversity of pines is found in Mexico. Pines are widely distributed in the Northern Hemisphere; they occupy large areas of boreal forest, but are found in many habitats, including the Mediterranean Basin, and dry tropical forests in southeast Asia and Central America. Wood from pine trees is one of the most extensively used types of timber, and some pines are widely used as Christmas trees.

Description

Pine trees are evergreen, coniferous resinous trees (or, rarely, shrubs) growing 3–80 m (10–260 ft) tall, with the majority of species reaching 15–45 m (50–150 ft) tall. The smallest are Siberian dwarf pine and Potosi pinyon, and the tallest is an 81.8 m (268 ft) tall sugar pine located in Yosemite National Park. Pines are long lived and typically reach ages of 100–1,000 years, some even more. The longest-lived is the Great Basin bristlecone pine (P. longaeva). One individual of this species, dubbed "Methuselah", is one of the world's oldest living organisms at around 4,800 years old. This tree can be found in the White Mountains of California. An older tree, now cut down, was dated at 4,900 years old. It was discovered in a grove beneath Wheeler Peak and it is now known as "Prometheus" after the Greek immortal. The spiral growth of branches, needles, and cone scales is arranged in Fibonacci number ratios.

Bark

The bark of most pines is thick and scaly, but some species have thin, flaky bark. The branches are produced in "pseudo-whorls", actually a very tight spiral but appearing like a ring of branches arising from the same point. Many pines are uninodal, producing just one such whorl of branches each year, from buds at the tip of the year's new shoot, but others are multinodal, producing two or more whorls of branches per year.

Foliage

Pines have four types of leaf. Seed leaves (cotyledons) on seedlings are borne in a whorl of 4–24. Juvenile leaves, which follow immediately on seedlings and young plants, are 2–6 cm long, single, green or often blue-green, and arranged spirally on the shoot; these are produced for six months to five years, rarely longer. Scale leaves, similar to bud scales, are small, brown and not photosynthetic, and arranged spirally like the juvenile leaves. Needles, the adult leaves, are green (photosynthetic) and bundled in clusters called fascicles. The needles can number from one to seven per fascicle, but generally number from two to five. Each fascicle is produced from a small bud on a dwarf shoot in the axil of a scale leaf. These bud scales often remain on the fascicle as a basal sheath. The needles persist for 1.5–40 years, depending on species. If a shoot's growing tip is damaged (e.g. eaten by an animal), the needle fascicles just below the damage will generate a stem-producing bud, which can then replace the lost growth tip.

Cones

Pines are monoecious, having the male and female cones on the same tree. The male cones are small, typically 1–5 cm long, and only present for a short period (usually in spring, though autumn in a few pines), falling as soon as they have shed their pollen.
The female cones take 1.5–3 years (depending on species) to mature after pollination, with actual fertilisation delayed one year. At maturity the female cones are 3–60 cm long. Each cone has numerous spirally arranged scales, with two seeds on each fertile scale; the scales at the base and tip of the cone are small and sterile, without seeds. The seeds are mostly small and winged, and are anemochorous (wind-dispersed), but some are larger and have only a vestigial wing, and are bird-dispersed. Female cones are woody and sometimes armed to protect developing seeds from foragers. At maturity, the cones usually open to release the seeds. In some of the bird-dispersed species, for example whitebark pine, the seeds are only released by the bird breaking the cones open. In others, the seeds are stored in closed cones for many years until an environmental cue triggers the cones to open, releasing the seeds. This is called serotiny. The most common form of serotiny is pyriscence, in which resin binds the cones shut until melted by a forest fire, for example in P. radiata and P. muricata; the seeds are then released after the fire to colonise the burnt ground with minimal competition from other plants.

Naming

Etymology

The modern English name "pine" derives from Latin pinus, traced to the Indo-European base *pīt- 'resin'. Before the 19th century, pines were often called firs, a name now applied to another genus, Abies. In some European languages, Germanic cognates of the Old Norse name are still in use for pines, as in Danish fyr, Swedish fura/furu, and German Föhre.

Taxonomic history

The genus Pinus was named by Carl Linnaeus in 1753. Pinus sylvestris, the Scots pine, was later chosen as the type species.

Evolution

Taxonomy

Pines are gymnosperms. The genus is divided into two subgenera based on the number of fibrovascular bundles in the needle, and the presence or absence of a resin seal on the scales of the mature cones before opening. The subgenera can be distinguished by cone, seed, and leaf characters:

Pinus subg. Pinus, the yellow, or hard pine group, has cones with a resin seal on the scales and generally harder wood; the needle fascicles mostly have a persistent sheath (two exceptions, Pinus leiophylla and Pinus lumholtzii, have deciduous sheaths). The subgenus has also been called diploxylon, on account of its two fibrovascular bundles.

Pinus subg. Strobus (syn. Pinus subg. Ducampopinus), the white, or soft pine, and pinyon pine groups, has cones without a resin seal on the scales and usually softer wood; the needle fascicles mostly have a deciduous sheath (one exception, Pinus nelsonii, has a persistent sheath). The subgenus has also been called haploxylon, on account of its single fibrovascular bundle.

Phylogenetic evidence indicates that both subgenera have a very ancient divergence from one another. Each subgenus is further divided into sections and subsections. Many of the smaller groups of Pinus are composed of closely related species with recent divergence and history of hybridisation. This results in low morphological and genetic differences. This, coupled with low sampling and underdeveloped genetic techniques, has made taxonomy difficult to determine. Recent research using large genetic datasets has clarified these relationships into the groupings often accepted today.

Phylogeny

Pinus is the largest genus of the Pinaceae, the pine family, which first appeared in the Jurassic period.
Based on recent transcriptome analysis, Pinus is most closely related to the genus Cathaya, which in turn is closely related to the genus Picea, the spruces. These genera, with firs and larches, form the pinoid clade of the Pinaceae. Pines first appeared during the Early Cretaceous, with the oldest verified fossil of the genus being Pinus yorkshirensis from the Hauterivian-Barremian boundary (~130-125 million years ago) from the Speeton Clay, England. However, there are possible records from the Jurassic. The evolutionary history of the genus Pinus has been complicated by hybridisation. Pines are prone to inter-specific breeding. Wind pollination, long life spans, overlapping generations, large population size, and weak reproductive isolation make breeding across species more likely. As the pines have diversified, gene transfer between different species has created a complex history of genetic relatedness. Two recent phylogenies have been published; the differences between them, and between other published phylogenies, demonstrate these complications.

Distribution and habitat

Pines are native to the Northern Hemisphere, and to a few parts from the tropics to temperate regions in the Southern Hemisphere. Most regions of the Northern Hemisphere host some native species of pines; they occupy large areas of boreal forest, and are found all around the Mediterranean Basin. The northernmost is Scots pine, reaching just north of 70° N in Stabbursdalen National Park in Norway; Google Maps shows geolocated images with pines at 70° 09' N. One species (Sumatran pine) crosses the equator in Sumatra to 2°S. In North America, various species occur in regions at latitudes from as far north as 66° N to as far south as 12° N. Pines may be found in a very large variety of environments, ranging from semi-arid desert to rainforests and from sea level up to high mountain elevations, from the coldest to the hottest environments on Earth. They often occur in mountainous areas with favourable soils and at least some water. Various species have been introduced to temperate and subtropical regions of both hemispheres, where they are grown as timber or cultivated as ornamental plants in parks and gardens. A number of such introduced species have become naturalised, and some species are considered invasive in some areas and threaten native ecosystems.

Ecology

Pines grow well in acid soils, some also on calcareous soils; most require good soil drainage, preferring sandy soils, but a few (e.g. lodgepole pine) can tolerate poorly drained wet soils. A few are able to sprout after forest fires (e.g. Canary Island pine). Some species of pines (e.g. Bishop pine) need fire to regenerate, and their populations slowly decline under fire suppression regimens. Pine trees are beneficial to the environment since they can remove carbon dioxide from the atmosphere. However, several studies have indicated that, after the establishment of pine plantations in grasslands, there is an alteration of carbon pools, including a decrease of the soil organic carbon pool. Several species are adapted to extreme conditions imposed by elevation and latitude (e.g. Siberian dwarf pine, mountain pine, whitebark pine, and the bristlecone pines). The pinyon pines and a number of others, notably Turkish pine and gray pine, are particularly well adapted to growth in hot, dry semidesert climates. Pine pollen may play an important role in the functioning of detrital food webs.
Nutrients from pollen aid detritivores in development, growth, and maturation, and may enable fungi to decompose nutritionally scarce litter. Pine pollen is also involved in moving plant matter between terrestrial and aquatic ecosystems.

Wildlife

Pine needles serve as food for various Lepidoptera (butterfly and moth) species; some of these species, many of them moths, specialise in feeding on only one or sometimes several species of pine. Several species of pine are attacked by nematodes, causing pine wilt disease, which can kill some quickly. Besides these, many species of birds and mammals shelter in pine habitat or feed on pine nuts. The seeds are commonly eaten by birds, such as grouse, crossbills, jays, nuthatches, siskins, and woodpeckers, and by squirrels. Some birds, notably nutcrackers and pinyon jays, are of major importance in distributing pine seeds to new areas. Pine needles are sometimes eaten by symphytan species such as the pine sawfly, and by goats.

Uses

Timber and construction

Pines are among the most commercially important tree species, valued for their timber and wood pulp throughout the world. In temperate and tropical regions, they are fast-growing softwoods that grow in relatively dense stands. Commercial pines are grown in plantations for timber that is denser and therefore more durable than spruce (Picea). Pine wood is widely used in high-value carpentry items such as furniture, window frames, panelling, floors, and roofing. Turpentine is extracted from the wood of some species of pine. As pine wood has no insect- or decay-resistant qualities after logging, in its untreated state it is generally recommended for indoor construction purposes only (indoor drywall framing, for example). It is commonly used in Canadian Lumber Standard graded wood. For outside use, pine needs to be treated with copper azole, chromated copper arsenate or other suitable chemical preservative.

Ornamental uses

Many pine species make attractive ornamental plantings for parks and larger gardens, with a variety of dwarf cultivars being suitable for smaller spaces. There are currently 818 named cultivars (or trinomials) recognised by the American Conifer Society (ACS). Pines are also commercially grown and harvested for Christmas trees. Pine cones, among the largest and most durable of all conifer cones, are craft favourites. Pine boughs, appreciated especially in wintertime for their pleasant smell and greenery, are popularly cut for decorations. Pine needles are also used for making decorative articles such as baskets, trays, and pots, and during the U.S. Civil War the needles of the longleaf pine ("Georgia pine") were widely employed for this purpose. This originally Native American skill is now being replicated across the world. Pine needle handicrafts are made in the US, Canada, Mexico, Nicaragua, and India. Pine needles are also versatile and have been used by Latvian designer Tamara Orjola to create different biodegradable products, including paper, furniture, textiles and dye.

Forestry

When grown for sawlogs, pine plantations can be harvested after 25 years, with some stands being allowed to grow up to 50 or more years (the wood value increases more quickly as the trees age). In colder and drier climates, growth is slower, and harvesting can be at much older ages. Imperfect trees (such as those with bent trunks or forks, smaller trees, or diseased trees) are removed in a "thinning" operation every 5–10 years.
Thinning allows the best trees to grow faster, because it prevents weaker trees from competing for sunlight, water, and nutrients. Young trees removed during thinning are used for pulpwood or are left in the forest, while most older ones are good enough for saw timber. A 30-year-old commercial pine tree grown in good conditions in Arkansas will be about in diameter and about high. After 50 years, the same tree will be about in diameter and high, and its wood will be worth about seven times as much as the 30-year-old tree. This, however, depends on the region, species and silvicultural techniques. In New Zealand, a plantation's maximum value is reached after around 28 years, with height being as high as and diameter , with maximum wood production after around 35 years (again depending on factors such as site, stocking and genetics). Trees are normally planted 3–4 m apart, or about 1,000 per hectare (100,000 per square kilometre).

Food and nutrients

The seeds (pine nuts) are generally edible; the young male cones can be cooked and eaten, as can the bark of young twigs. Some species have large pine nuts, which are harvested and sold for cooking and baking. They are an ingredient of pesto alla genovese. The soft, moist, white inner bark (cambium) beneath the woody outer bark is edible and very high in vitamins A and C. It can be eaten raw in slices as a snack or dried and ground up into a powder for use as an ersatz flour or thickener in stews, soups, and other foods, such as bark bread. Adirondack Indians got their name from the Mohawk Indian word atirú:taks, meaning "tree eaters". A tea is made by steeping young, green pine needles in boiling water (known as tallstrunt in Sweden). In eastern Asia, pine and other conifers are accepted among consumers as a beverage product, and used in teas, as well as wine. In Greece, the wine retsina is flavoured with Aleppo pine resin. Pine needles from Pinus densiflora were found to contain 30.54 mg/g of proanthocyanidins when extracted with hot water. In traditional Chinese medicine, pine resin is used for burns, wounds and skin complaints.

Culture

Pines have been a frequently mentioned tree throughout history, including in literature, art, and religious texts. The pine is a particular motif in Chinese art and literature, which sometimes combines painting and poetry in the same work. Some of the main symbolic attributes of pines in Chinese art and literature are longevity and steadfastness: the pine retains its green needles through all the seasons. Sometimes the pine and cypress are paired. At other times the pine, plum, and bamboo are considered as the "Three Friends of Winter".

Literature

Writers of various nationalities and ethnicities have written of pines, among them John Muir, Dora Sigerson Shorter, Eugene Field, Bai Juyi, Theodore Winthrop, and Rev. George Allan D.D.

Art

Pines are often featured in art, whether painting and fine art, drawing, photography, or folk art.

Religious texts

Trees which may be pines or other conifers are mentioned in some verses of the Bible. In the Book of Nehemiah 8:15, the King James Version translates Hebrew "עץ שמן" (etz shman), 'oil tree', as pine, and similarly the unknown type of tree of Hebrew "תדהר" in Isaiah 60:13. Some botanical authorities believe that the Hebrew word "ברוש" (bərōsh), "cypress", which is used many times in the Bible, properly designates Pinus halepensis, the Aleppo or Jerusalem pine, or in Hosea 14:8, which refers to fruit, Pinus pinea, the stone pine.
The word used in modern Hebrew for pine is "אֹ֖רֶן" (oren), which occurs only in Isaiah 44:14, but two manuscripts have "ארז" (cedar), a much more common word.
Biology and health sciences
Gymnosperms
null
39406
https://en.wikipedia.org/wiki/Central%20limit%20theorem
Central limit theorem
In probability theory, the central limit theorem (CLT) states that, under appropriate conditions, the distribution of a normalized version of the sample mean converges to a standard normal distribution. This holds even if the original variables themselves are not normally distributed. There are several versions of the CLT, each applying in the context of different conditions. The theorem is a key concept in probability theory because it implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions. This theorem has seen many changes during the formal development of probability theory. Previous versions of the theorem date back to 1811, but in its modern form it was only precisely stated as late as 1920.

In statistics, the CLT can be stated as follows: let $X_1, X_2, \ldots, X_n$ denote a statistical sample of size $n$ from a population with expected value (average) $\mu$ and finite positive variance $\sigma^2$, and let $\bar{X}_n$ denote the sample mean (which is itself a random variable). Then the limit as $n \to \infty$ of the distribution of $\sqrt{n}(\bar{X}_n - \mu)$ is a normal distribution with mean $0$ and variance $\sigma^2$. In other words, suppose that a large sample of observations is obtained, each observation being randomly produced in a way that does not depend on the values of the other observations, and the average (arithmetic mean) of the observed values is computed. If this procedure is performed many times, resulting in a collection of observed averages, the central limit theorem says that if the sample size is large enough, the probability distribution of these averages will closely approximate a normal distribution.

The central limit theorem has several variants. In its common form, the random variables must be independent and identically distributed (i.i.d.). This requirement can be weakened; convergence of the mean to the normal distribution also occurs for non-identical distributions or for non-independent observations if they comply with certain conditions. The earliest version of this theorem, that the normal distribution may be used as an approximation to the binomial distribution, is the de Moivre–Laplace theorem.

Independent sequences

Classical CLT

Let $X_1, X_2, \ldots$ be a sequence of i.i.d. random variables having a distribution with expected value given by $\mu$ and finite variance given by $\sigma^2$. Suppose we are interested in the sample average $\bar{X}_n := \tfrac{X_1 + \cdots + X_n}{n}$. By the law of large numbers, the sample average converges almost surely (and therefore also converges in probability) to the expected value $\mu$ as $n \to \infty$. The classical central limit theorem describes the size and the distributional form of the stochastic fluctuations around the deterministic number $\mu$ during this convergence. More precisely, it states that as $n$ gets larger, the distribution of the normalized mean $\sqrt{n}(\bar{X}_n - \mu)$, i.e. the difference between the sample average and its limit scaled by the factor $\sqrt{n}$, approaches the normal distribution with mean $0$ and variance $\sigma^2$. For large enough $n$, the distribution of $\bar{X}_n$ gets arbitrarily close to the normal distribution with mean $\mu$ and variance $\sigma^2/n$. The usefulness of the theorem is that the distribution of $\sqrt{n}(\bar{X}_n - \mu)$ approaches normality regardless of the shape of the distribution of the individual $X_i$. Formally, the theorem can be stated as follows: $$\sqrt{n}\left(\bar{X}_n - \mu\right) \xrightarrow{d} \mathcal{N}\left(0, \sigma^2\right) \quad \text{as } n \to \infty.$$ In the case $\sigma > 0$, convergence in distribution means that the cumulative distribution functions of $\sqrt{n}(\bar{X}_n - \mu)$ converge pointwise to the cdf of the $\mathcal{N}(0, \sigma^2)$ distribution: for every real number $z$, $$\lim_{n\to\infty} \mathbb{P}\left[\sqrt{n}\left(\bar{X}_n - \mu\right) \leq z\right] = \Phi\!\left(\frac{z}{\sigma}\right),$$ where $\Phi$ is the standard normal cdf evaluated at $z/\sigma$. The convergence is uniform in $z$ in the sense that $$\lim_{n\to\infty} \sup_{z\in\mathbb{R}} \left|\mathbb{P}\left[\sqrt{n}\left(\bar{X}_n - \mu\right) \leq z\right] - \Phi\!\left(\frac{z}{\sigma}\right)\right| = 0,$$ where $\sup$ denotes the least upper bound (or supremum) of the set.
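A minimal simulation of the classical statement (a sketch, not from the article; the exponential population and sample sizes are illustrative assumptions): the empirical moments of the standardized mean approach those of N(0, 1).

```python
# Standardized means of i.i.d. exponential draws approach N(0, 1).

import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 1.0                   # exponential(rate=1): mean 1, sd 1

for n in (2, 10, 100, 1000):
    samples = rng.exponential(scale=1.0, size=(10_000, n))
    z = np.sqrt(n) * (samples.mean(axis=1) - mu) / sigma   # sqrt(n)(mean - mu)/sigma
    # The exponential has skewness 2; for z it decays like 2/sqrt(n).
    print(f"n={n:5d}  mean={z.mean():+.3f}  var={z.var():.3f}  "
          f"skew={np.mean(z**3):+.3f}")
# mean -> 0, var -> 1, skew -> 0: the distribution of z approaches N(0, 1).
```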
Lyapunov CLT

In this variant of the central limit theorem the random variables $X_i$ have to be independent, but not necessarily identically distributed. The theorem also requires that random variables $|X_i|$ have moments of some order $2 + \delta$, and that the rate of growth of these moments is limited by the Lyapunov condition given below. Suppose $X_1, X_2, \ldots$ is a sequence of independent random variables, each with finite expected value $\mu_i$ and variance $\sigma_i^2$, and define $s_n^2 = \sum_{i=1}^n \sigma_i^2$. If for some $\delta > 0$ Lyapunov's condition $$\lim_{n\to\infty} \frac{1}{s_n^{2+\delta}} \sum_{i=1}^n \mathbb{E}\left[\left|X_i - \mu_i\right|^{2+\delta}\right] = 0$$ is satisfied, then $\frac{1}{s_n}\sum_{i=1}^n (X_i - \mu_i)$ converges in distribution to a standard normal random variable as $n \to \infty$. In practice it is usually easiest to check Lyapunov's condition for $\delta = 1$. If a sequence of random variables satisfies Lyapunov's condition, then it also satisfies Lindeberg's condition. The converse implication, however, does not hold.

Lindeberg (-Feller) CLT

In the same setting and with the same notation as above, the Lyapunov condition can be replaced with the following weaker one (from Lindeberg in 1920). Suppose that for every $\varepsilon > 0$, $$\lim_{n\to\infty} \frac{1}{s_n^2} \sum_{i=1}^n \mathbb{E}\left[(X_i - \mu_i)^2 \cdot \mathbf{1}_{\{|X_i - \mu_i| > \varepsilon s_n\}}\right] = 0,$$ where $\mathbf{1}_{\{\cdot\}}$ is the indicator function. Then the distribution of the standardized sums $\frac{1}{s_n}\sum_{i=1}^n (X_i - \mu_i)$ converges towards the standard normal distribution $\mathcal{N}(0, 1)$.

CLT for the sum of a random number of random variables

Rather than summing an integer number $n$ of random variables and taking $n \to \infty$, the sum can be of a random number $N$ of random variables, with conditions on $N$.

Multidimensional CLT

Proofs that use characteristic functions can be extended to cases where each individual $\mathbf{X}_i$ is a random vector in $\mathbb{R}^k$, with mean vector $\boldsymbol{\mu} = \mathbb{E}[\mathbf{X}_i]$ and covariance matrix $\boldsymbol{\Sigma}$ (among the components of the vector), and these random vectors are independent and identically distributed. The multidimensional central limit theorem states that when scaled, sums converge to a multivariate normal distribution. Summation of these vectors is done component-wise. For $i = 1, 2, 3, \ldots,$ let $\mathbf{X}_i$ be independent random vectors. The sum of the random vectors is $\mathbf{S}_n = \mathbf{X}_1 + \cdots + \mathbf{X}_n$ and their average is $\bar{\mathbf{X}}_n = \tfrac{1}{n}\mathbf{S}_n$. Therefore, the multivariate central limit theorem states that $$\sqrt{n}\left(\bar{\mathbf{X}}_n - \boldsymbol{\mu}\right) \xrightarrow{d} \mathcal{N}_k\left(\mathbf{0}, \boldsymbol{\Sigma}\right),$$ where the covariance matrix $\boldsymbol{\Sigma}$ is equal to $\operatorname{Var}(\mathbf{X}_1)$. The multivariate central limit theorem can be proved using the Cramér–Wold theorem. The rate of convergence is given by the following Berry–Esseen type result: if $\mathbf{X}_1, \ldots, \mathbf{X}_n$ are independent $\mathbb{R}^d$-valued random vectors with mean zero, $\mathbf{S} = \sum_{i=1}^n \mathbf{X}_i$ has invertible covariance matrix $\boldsymbol{\Sigma}$, and $\mathbf{Z} \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma})$, then $$\sup_{U} \left|\mathbb{P}(\mathbf{S} \in U) - \mathbb{P}(\mathbf{Z} \in U)\right| \leq C\, d^{1/4} \gamma,$$ the supremum running over all convex sets $U \subseteq \mathbb{R}^d$, where $\gamma = \sum_{i=1}^n \mathbb{E}\big[\|\boldsymbol{\Sigma}^{-1/2}\mathbf{X}_i\|^3\big]$ and $C$ is a universal constant. It is unknown whether the factor $d^{1/4}$ is necessary.

The Generalized Central Limit Theorem

The Generalized Central Limit Theorem (GCLT) was an effort of multiple mathematicians (Bernstein, Lindeberg, Lévy, Feller, Kolmogorov, and others) over the period from 1920 to 1937. The first published complete proof of the GCLT was in 1937 by Paul Lévy in French. An English language version of the complete proof of the GCLT is available in the translation of Gnedenko and Kolmogorov's 1954 book. The statement of the GCLT is as follows: A non-degenerate random variable $Z$ is $\alpha$-stable for some $0 < \alpha \leq 2$ if and only if there is an independent, identically distributed sequence of random variables $X_1, X_2, X_3, \ldots$ and constants $a_n > 0$, $b_n \in \mathbb{R}$ with $$a_n (X_1 + \cdots + X_n) - b_n \to Z.$$ Here $\to$ means the sequence of random variable sums converges in distribution; i.e., the corresponding distributions satisfy $F_n(y) \to F(y)$ at all continuity points of $F$. In other words, if sums of independent, identically distributed random variables converge in distribution to some $Z$, then $Z$ must be a stable distribution.

Dependent processes

CLT under weak dependence

A useful generalization of a sequence of independent, identically distributed random variables is a mixing random process in discrete time; "mixing" means, roughly, that random variables temporally far apart from one another are nearly independent. Several kinds of mixing are used in ergodic theory and probability theory. See especially strong mixing (also called α-mixing), defined by $\alpha(n) \to 0$, where $\alpha(n)$ is the so-called strong mixing coefficient.
A simplified formulation of the central limit theorem under strong mixing is: suppose that $X_1, X_2, \ldots$ is stationary and $\alpha$-mixing with $\alpha_n = O(n^{-5})$, and that $\mathbb{E}[X_n] = 0$ and $\mathbb{E}[X_n^{12}] < \infty$; write $S_n = X_1 + \cdots + X_n$; then the limit $\sigma^2 = \lim_{n\to\infty} \tfrac{\mathbb{E}[S_n^2]}{n}$ exists, and if $\sigma \neq 0$ then $\tfrac{S_n}{\sigma\sqrt{n}}$ converges in distribution to $\mathcal{N}(0, 1)$. In fact, $$\sigma^2 = \mathbb{E}[X_1^2] + 2\sum_{k=1}^{\infty} \mathbb{E}[X_1 X_{1+k}],$$ where the series converges absolutely. The assumption $\sigma \neq 0$ cannot be omitted, since the asymptotic normality fails for $X_n = Y_n - Y_{n-1}$ where $Y_n$ are another stationary sequence. There is a stronger version of the theorem: the assumption $\mathbb{E}[X_n^{12}] < \infty$ is replaced with $\mathbb{E}[|X_n|^{2+\delta}] < \infty$, and the assumption $\alpha_n = O(n^{-5})$ is replaced with $\sum_n \alpha_n^{\frac{\delta}{2+\delta}} < \infty$. Existence of such $\delta > 0$ ensures the conclusion. For encyclopedic treatment of limit theorems under mixing conditions see Bradley (2007).

Martingale difference CLT

Remarks

Proof of classical CLT

The central limit theorem has a proof using characteristic functions. It is similar to the proof of the (weak) law of large numbers. Assume $\{X_1, \ldots, X_n\}$ are independent and identically distributed random variables, each with mean $\mu$ and finite variance $\sigma^2$. The sum $X_1 + \cdots + X_n$ has mean $n\mu$ and variance $n\sigma^2$. Consider the random variable $$Z_n = \frac{X_1 + \cdots + X_n - n\mu}{\sqrt{n\sigma^2}} = \sum_{i=1}^n \frac{X_i - \mu}{\sqrt{n\sigma^2}} = \sum_{i=1}^n \frac{1}{\sqrt{n}} Y_i,$$ where in the last step we defined the new random variables $Y_i = \tfrac{X_i - \mu}{\sigma}$, each with zero mean and unit variance. The characteristic function of $Z_n$ is given by $$\varphi_{Z_n}(t) = \left[\varphi_{Y_1}\!\left(\frac{t}{\sqrt{n}}\right)\right]^n,$$ where in the last step we used the fact that all of the $Y_i$ are identically distributed. The characteristic function of $Y_1$ is, by Taylor's theorem, $$\varphi_{Y_1}\!\left(\frac{t}{\sqrt{n}}\right) = 1 - \frac{t^2}{2n} + o\!\left(\frac{t^2}{n}\right), \qquad \frac{t}{\sqrt{n}} \to 0,$$ where $o(t^2/n)$ is "little $o$ notation" for some function of $t$ that goes to zero more rapidly than $t^2/n$. By the limit of the exponential function, $e^x = \lim_{n\to\infty}\left(1 + \tfrac{x}{n}\right)^n$, the characteristic function of $Z_n$ equals $$\varphi_{Z_n}(t) = \left(1 - \frac{t^2}{2n} + o\!\left(\frac{t^2}{n}\right)\right)^n \to e^{-t^2/2}, \qquad n \to \infty.$$ All of the higher order terms vanish in the limit $n \to \infty$. The right hand side equals the characteristic function of a standard normal distribution $\mathcal{N}(0, 1)$, which implies through Lévy's continuity theorem that the distribution of $Z_n$ will approach $\mathcal{N}(0, 1)$ as $n \to \infty$. Therefore, the sample average $\bar{X}_n = \tfrac{X_1 + \cdots + X_n}{n}$ is such that $\tfrac{\sqrt{n}}{\sigma}(\bar{X}_n - \mu)$ converges to the normal distribution $\mathcal{N}(0, 1)$, from which the central limit theorem follows.

Convergence to the limit

The central limit theorem gives only an asymptotic distribution. As an approximation for a finite number of observations, it provides a reasonable approximation only when close to the peak of the normal distribution; it requires a very large number of observations to stretch into the tails. The convergence in the central limit theorem is uniform because the limiting cumulative distribution function is continuous. If the third central moment $\mathbb{E}\left[(X_1 - \mu)^3\right]$ exists and is finite, then the speed of convergence is at least on the order of $1/\sqrt{n}$ (see Berry–Esseen theorem). Stein's method can be used not only to prove the central limit theorem, but also to provide bounds on the rates of convergence for selected metrics. The convergence to the normal distribution is monotonic, in the sense that the entropy of $Z_n$ increases monotonically to that of the normal distribution. The central limit theorem applies in particular to sums of independent and identically distributed discrete random variables. A sum of discrete random variables is still a discrete random variable, so that we are confronted with a sequence of discrete random variables whose cumulative probability distribution function converges towards a cumulative probability distribution function corresponding to a continuous variable (namely that of the normal distribution). This means that if we build a histogram of the realizations of the sum of $n$ independent identical discrete variables, the piecewise-linear curve that joins the centers of the upper faces of the rectangles forming the histogram converges toward a Gaussian curve as $n$ approaches infinity; this relation is known as de Moivre–Laplace theorem. The binomial distribution article details such an application of the central limit theorem in the simple case of a discrete variable taking only two possible values.
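The proof above suggests a direct numerical check: the empirical characteristic function of $Z_n$ should approach $e^{-t^2/2}$. A sketch under the assumption of centered exponential summands (names and sample sizes are illustrative):

```python
# Empirical characteristic function of Z_n versus the N(0,1) limit e^{-t^2/2}.

import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(-3, 3, 7)

for n in (5, 50, 500):
    X = rng.exponential(size=(20_000, n))
    Z = (X.sum(axis=1) - n) / np.sqrt(n)                 # here mu = sigma = 1
    phi_emp = np.exp(1j * np.outer(t, Z)).mean(axis=1)   # estimate of phi_Zn(t)
    err = np.abs(phi_emp - np.exp(-t**2 / 2)).max()
    print(f"n={n:4d}  max |phi_Zn(t) - exp(-t^2/2)| ~ {err:.3f}")
# The gap shrinks as n grows, matching phi_Zn(t) -> e^{-t^2/2}.
```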
Common misconceptions

Studies have shown that the central limit theorem is subject to several common but serious misconceptions, some of which appear in widely used textbooks. These include: The misconceived belief that the theorem applies to random sampling of any variable, rather than to the mean values (or sums) of iid random variables extracted from a population by repeated sampling. That is, the theorem assumes the random sampling produces a sampling distribution formed from different values of means (or sums) of such random variables. The misconceived belief that the theorem ensures that random sampling leads to the emergence of a normal distribution for sufficiently large samples of any random variable, regardless of the population distribution. In reality, such sampling asymptotically reproduces the properties of the population, an intuitive result underpinned by the Glivenko-Cantelli theorem. The misconceived belief that the theorem leads to a good approximation of a normal distribution for sample sizes greater than around 30, allowing reliable inferences regardless of the nature of the population. In reality, this empirical rule of thumb has no valid justification, and can lead to seriously flawed inferences. See Z-test for where the approximation holds.

Relation to the law of large numbers

The law of large numbers as well as the central limit theorem are partial solutions to a general problem: "What is the limiting behavior of $S_n$ as $n$ approaches infinity?" In mathematical analysis, asymptotic series are one of the most popular tools employed to approach such questions. Suppose we have an asymptotic expansion of $f(n)$: $$f(n) = a_1 \varphi_1(n) + a_2 \varphi_2(n) + O\big(\varphi_3(n)\big) \qquad (n \to \infty).$$ Dividing both parts by $\varphi_1(n)$ and taking the limit will produce $a_1$, the coefficient of the highest-order term in the expansion, which represents the rate at which $f(n)$ changes in its leading term. Informally, one can say: "$f(n)$ grows approximately as $a_1 \varphi_1(n)$". Taking the difference between $f(n)$ and its approximation and then dividing by the next term in the expansion, we arrive at a more refined statement about $f(n)$: $$\frac{f(n) - a_1 \varphi_1(n)}{\varphi_2(n)} \to a_2 \qquad (n \to \infty).$$ Here one can say that the difference between the function and its approximation grows approximately as $a_2 \varphi_2(n)$. The idea is that dividing the function by appropriate normalizing functions, and looking at the limiting behavior of the result, can tell us much about the limiting behavior of the original function itself. Informally, something along these lines happens when the sum, $S_n$, of independent identically distributed random variables, $X_1, \ldots, X_n$, is studied in classical probability theory. If each $X_i$ has finite mean $\mu$, then by the law of large numbers, $\tfrac{S_n}{n} \to \mu$. If in addition each $X_i$ has finite variance $\sigma^2$, then by the central limit theorem, $$\frac{S_n - n\mu}{\sqrt{n}} \to \xi,$$ where $\xi$ is distributed as $\mathcal{N}(0, \sigma^2)$. This provides values of the first two constants in the informal expansion $$S_n \approx \mu n + \xi \sqrt{n}.$$ In the case where the $X_i$ do not have finite mean or variance, convergence of the shifted and rescaled sum can also occur with different centering and scaling factors: $$\frac{S_n - a_n}{b_n} \to \Xi,$$ or informally $$S_n \approx a_n + \Xi\, b_n.$$ Distributions $\Xi$ which can arise in this way are called stable. Clearly, the normal distribution is stable, but there are also other stable distributions, such as the Cauchy distribution, for which the mean or variance are not defined. The scaling factor $b_n$ may be proportional to $n^c$, for any $c \geq \tfrac{1}{2}$; it may also be multiplied by a slowly varying function of $n$. The law of the iterated logarithm specifies what is happening "in between" the law of large numbers and the central limit theorem.
Specifically it says that the normalizing function $\sqrt{n \log\log n}$, intermediate in size between $n$ of the law of large numbers and $\sqrt{n}$ of the central limit theorem, provides a non-trivial limiting behavior.

Alternative statements of the theorem

Density functions

The density of the sum of two or more independent variables is the convolution of their densities (if these densities exist). Thus the central limit theorem can be interpreted as a statement about the properties of density functions under convolution: the convolution of a number of density functions tends to the normal density as the number of density functions increases without bound. These theorems require stronger hypotheses than the forms of the central limit theorem given above. Theorems of this type are often called local limit theorems. See Petrov for a particular local limit theorem for sums of independent and identically distributed random variables.

Characteristic functions

Since the characteristic function of a convolution is the product of the characteristic functions of the densities involved, the central limit theorem has yet another restatement: the product of the characteristic functions of a number of density functions becomes close to the characteristic function of the normal density as the number of density functions increases without bound, under the conditions stated above. Specifically, an appropriate scaling factor needs to be applied to the argument of the characteristic function. An equivalent statement can be made about Fourier transforms, since the characteristic function is essentially a Fourier transform.

Calculating the variance

Let $S_n$ be the sum of $n$ random variables. Many central limit theorems provide conditions such that $S_n/\sqrt{\operatorname{Var}(S_n)}$ converges in distribution to $\mathcal{N}(0, 1)$ (the normal distribution with mean 0, variance 1) as $n \to \infty$. In some cases, it is possible to find a constant $\sigma^2$ and function $f(n)$ such that $S_n/\big(\sigma\sqrt{f(n)}\big)$ converges in distribution to $\mathcal{N}(0, 1)$ as $n \to \infty$.

Extensions

Products of positive random variables

The logarithm of a product is simply the sum of the logarithms of the factors. Therefore, when the logarithm of a product of random variables that take only positive values approaches a normal distribution, the product itself approaches a log-normal distribution. Many physical quantities (especially mass or length, which are a matter of scale and cannot be negative) are the products of different random factors, so they follow a log-normal distribution. This multiplicative version of the central limit theorem is sometimes called Gibrat's law. Whereas the central limit theorem for sums of random variables requires the condition of finite variance, the corresponding theorem for products requires the corresponding condition that the density function be square-integrable.

Beyond the classical framework

Asymptotic normality, that is, convergence to the normal distribution after appropriate shift and rescaling, is a phenomenon much more general than the classical framework treated above, namely, sums of independent random variables (or vectors). New frameworks are revealed from time to time; no single unifying framework is available for now.

Convex body

These two $\varepsilon$-close distributions have densities (in fact, log-concave densities); thus, the total variation distance between them is the integral of the absolute value of the difference between the densities. Convergence in total variation is stronger than weak convergence.
An important example of a log-concave density is a function constant inside a given convex body and vanishing outside; it corresponds to the uniform distribution on the convex body, which explains the term "central limit theorem for convex bodies". Another example: $f(x_1, \ldots, x_n) = \mathrm{const} \cdot \exp\!\big(-(|x_1|^\alpha + \cdots + |x_n|^\alpha)^\beta\big)$ where $\alpha > 1$ and $\alpha\beta > 1$. If $\beta = 1$ then $f$ factorizes into $\mathrm{const} \cdot \exp(-|x_1|^\alpha) \cdots \exp(-|x_n|^\alpha)$, which means $X_1, \ldots, X_n$ are independent. In general, however, they are dependent. The condition $f(x_1, \ldots, x_n) = f(|x_1|, \ldots, |x_n|)$ ensures that $X_1, \ldots, X_n$ are of zero mean and uncorrelated; still, they need not be independent, nor even pairwise independent. By the way, pairwise independence cannot replace independence in the classical central limit theorem. Here is a Berry–Esseen type result. The distribution of $X_1$ need not be approximately normal (in fact, it can be uniform). However, the distribution of $c_1 X_1 + \cdots + c_n X_n$ is close to $\mathcal{N}(0, 1)$ (in the total variation distance) for most vectors $(c_1, \ldots, c_n)$ according to the uniform distribution on the sphere $c_1^2 + \cdots + c_n^2 = 1$.

Lacunary trigonometric series

Gaussian polytopes

The same also holds in all dimensions greater than 2. The polytope $K_n$ is called a Gaussian random polytope. A similar result holds for the number of vertices (of the Gaussian polytope), the number of edges, and in fact, faces of all dimensions.

Linear functions of orthogonal matrices

A linear function of a matrix $M$ is a linear combination of its elements (with given coefficients), $M \mapsto \operatorname{tr}(AM)$, where $A$ is the matrix of the coefficients; see Trace (linear algebra)#Inner product. A random orthogonal matrix is said to be distributed uniformly, if its distribution is the normalized Haar measure on the orthogonal group $O(n, \mathbb{R})$; see Rotation matrix#Uniform random rotation matrices.

Subsequences

Random walk on a crystal lattice

The central limit theorem may be established for the simple random walk on a crystal lattice (an infinite-fold abelian covering graph over a finite graph), and is used for design of crystal structures.

Applications and examples

A simple example of the central limit theorem is rolling many identical, unbiased dice. The distribution of the sum (or average) of the rolled numbers will be well approximated by a normal distribution. Since real-world quantities are often the balanced sum of many unobserved random events, the central limit theorem also provides a partial explanation for the prevalence of the normal probability distribution. It also justifies the approximation of large-sample statistics to the normal distribution in controlled experiments.

Regression

Regression analysis, and in particular ordinary least squares, specifies that a dependent variable depends according to some function upon one or more independent variables, with an additive error term. Various types of statistical inference on the regression assume that the error term is normally distributed. This assumption can be justified by assuming that the error term is actually the sum of many independent error terms; even if the individual error terms are not normally distributed, by the central limit theorem their sum can be well approximated by a normal distribution.

Other illustrations

Given its importance to statistics, a number of papers and computer packages are available that demonstrate the convergence involved in the central limit theorem.

History

Dutch mathematician Henk Tijms writes: Sir Francis Galton described the Central Limit Theorem in this way: The actual term "central limit theorem" (in German: "zentraler Grenzwertsatz") was first used by George Pólya in 1920 in the title of a paper. Pólya referred to the theorem as "central" due to its importance in probability theory.
According to Le Cam, the French school of probability interprets the word central in the sense that "it describes the behaviour of the centre of the distribution as opposed to its tails". The abstract of the paper On the central limit theorem of calculus of probability and the problem of moments by Pólya in 1920 translates as follows. A thorough account of the theorem's history, detailing Laplace's foundational work, as well as Cauchy's, Bessel's and Poisson's contributions, is provided by Hald. Two historical accounts, one covering the development from Laplace to Cauchy, the second the contributions by von Mises, Pólya, Lindeberg, Lévy, and Cramér during the 1920s, are given by Hans Fischer. Le Cam describes a period around 1935. Bernstein presents a historical discussion focusing on the work of Pafnuty Chebyshev and his students Andrey Markov and Aleksandr Lyapunov that led to the first proofs of the CLT in a general setting. A curious footnote to the history of the Central Limit Theorem is that a proof of a result similar to the 1922 Lindeberg CLT was the subject of Alan Turing's 1934 Fellowship Dissertation for King's College at the University of Cambridge. Only after submitting the work did Turing learn it had already been proved. Consequently, Turing's dissertation was not published.
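To close, a short simulation of the dice example from the applications section above (a sketch; the number of dice, the replication count, and the inspected value 360 are arbitrary illustrative choices):

```python
# Sums of many fair dice are approximately normal with mean 3.5n, var 35n/12.

import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(5)
n = 100                                        # dice per throw
sums = rng.integers(1, 7, size=(100_000, n)).sum(axis=1)

mean_th, var_th = 3.5 * n, 35.0 * n / 12.0     # exact one-die mean and variance
print(f"sample mean {sums.mean():.2f} vs {mean_th:.2f}")
print(f"sample var  {sums.var():.2f} vs {var_th:.2f}")

# Normal approximation of P(sum = 360), with continuity correction.
z_lo = (359.5 - mean_th) / sqrt(var_th)
z_hi = (360.5 - mean_th) / sqrt(var_th)
approx = 0.5 * (erf(z_hi / sqrt(2)) - erf(z_lo / sqrt(2)))
print(f"P(sum=360): simulated {np.mean(sums == 360):.4f}, "
      f"normal approx {approx:.4f}")
```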
Mathematics
Statistics and probability
null
39407
https://en.wikipedia.org/wiki/Dirac%20equation
Dirac equation
In particle physics, the Dirac equation is a relativistic wave equation derived by British physicist Paul Dirac in 1928. In its free form, or including electromagnetic interactions, it describes all spin-1/2 massive particles, called "Dirac particles", such as electrons and quarks for which parity is a symmetry. It is consistent with both the principles of quantum mechanics and the theory of special relativity, and was the first theory to account fully for special relativity in the context of quantum mechanics. It was validated by accounting for the fine structure of the hydrogen spectrum in a completely rigorous way. It has become vital in the building of the Standard Model. The equation also implied the existence of a new form of matter, antimatter, previously unsuspected and unobserved and which was experimentally confirmed several years later. It also provided a theoretical justification for the introduction of several component wave functions in Pauli's phenomenological theory of spin. The wave functions in the Dirac theory are vectors of four complex numbers (known as bispinors), two of which resemble the Pauli wavefunction in the non-relativistic limit, in contrast to the Schrödinger equation which described wave functions of only one complex value. Moreover, in the limit of zero mass, the Dirac equation reduces to the Weyl equation. In the context of quantum field theory, the Dirac equation is reinterpreted to describe quantum fields corresponding to spin-1/2 particles.

Dirac did not fully appreciate the importance of his results; however, the entailed explanation of spin as a consequence of the union of quantum mechanics and relativity—and the eventual discovery of the positron—represents one of the great triumphs of theoretical physics. This accomplishment has been described as fully on a par with the works of Newton, Maxwell, and Einstein before him. The equation has been deemed by some physicists to be the "real seed of modern physics". The equation has also been described as the "centerpiece of relativistic quantum mechanics", with it also stated that "the equation is perhaps the most important one in all of quantum mechanics". The Dirac equation is inscribed upon a plaque on the floor of Westminster Abbey. Unveiled on 13 November 1995, the plaque commemorates Dirac's life.

History

The Dirac equation in the form originally proposed by Dirac is: $$\left(\beta mc^2 + c \sum_{n=1}^{3} \alpha_n p_n\right)\psi(x, t) = i\hbar \frac{\partial \psi(x, t)}{\partial t},$$ where $\psi = \psi(x, t)$ is the wave function for an electron of rest mass $m$ with spacetime coordinates $x, t$. The $p_1, p_2, p_3$ are the components of the momentum, understood to be the momentum operator in the Schrödinger equation. $c$ is the speed of light, and $\hbar$ is the reduced Planck constant; these fundamental physical constants reflect special relativity and quantum mechanics, respectively. Dirac's purpose in casting this equation was to explain the behavior of the relativistically moving electron, thus allowing the atom to be treated in a manner consistent with relativity. He hoped that the corrections introduced this way might have a bearing on the problem of atomic spectra. Up until that time, attempts to make the old quantum theory of the atom compatible with the theory of relativity—which were based on discretizing the angular momentum stored in the electron's possibly non-circular orbit of the atomic nucleus—had failed, and the new quantum mechanics of Heisenberg, Pauli, Jordan, Schrödinger, and Dirac himself had not developed sufficiently to treat this problem.
Although Dirac's original intentions were satisfied, his equation had far deeper implications for the structure of matter and introduced new mathematical classes of objects that are now essential elements of fundamental physics. The new elements in this equation are the four matrices $\alpha_1, \alpha_2, \alpha_3$ and $\beta$, and the four-component wave function $\psi$. There are four components in $\psi$ because the evaluation of it at any given point in configuration space is a bispinor. It is interpreted as a superposition of a spin-up electron, a spin-down electron, a spin-up positron, and a spin-down positron. The matrices $\alpha_k$ and $\beta$ are all Hermitian and are involutory: $$\alpha_1^2 = \alpha_2^2 = \alpha_3^2 = \beta^2 = I_4,$$ and they all mutually anti-commute: $$\alpha_i \alpha_j + \alpha_j \alpha_i = 0 \quad (i \neq j), \qquad \alpha_i \beta + \beta \alpha_i = 0.$$ These matrices and the form of the wave function have a deep mathematical significance. The algebraic structure represented by the gamma matrices had been created some 50 years earlier by the English mathematician W. K. Clifford. In turn, Clifford's ideas had emerged from the mid-19th-century work of German mathematician Hermann Grassmann in his Lineare Ausdehnungslehre (Theory of Linear Expansion).
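The algebra just stated is easy to verify numerically in the standard (Dirac–Pauli) representation, where each $\alpha_k$ carries the Pauli matrix $\sigma_k$ on its off-diagonal blocks and $\beta$ is block-diagonal. A sketch (this particular matrix realization is one standard choice, one among several possible representations):

```python
# Check that alpha_1..3 and beta are Hermitian, involutory, and anticommuting.

import numpy as np

I2 = np.eye(2)
Z2 = np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

alphas = [np.block([[Z2, s], [s, Z2]]) for s in (sx, sy, sz)]
beta = np.block([[I2, Z2], [Z2, -I2]])
mats = alphas + [beta]

for M in mats:
    assert np.allclose(M, M.conj().T)             # Hermitian
    assert np.allclose(M @ M, np.eye(4))          # involutory: M^2 = I_4
for i, A in enumerate(mats):
    for B in mats[i + 1:]:
        assert np.allclose(A @ B + B @ A, 0)      # mutual anticommutation
print("alpha_1, alpha_2, alpha_3, beta satisfy the stated relations")
```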
Making the Schrödinger equation relativistic

The Dirac equation is superficially similar to the Schrödinger equation for a massive free particle: $$-\frac{\hbar^2}{2m} \nabla^2 \phi = i\hbar \frac{\partial}{\partial t} \phi.$$ The left side represents the square of the momentum operator divided by twice the mass, which is the non-relativistic kinetic energy. Because relativity treats space and time as a whole, a relativistic generalization of this equation requires that space and time derivatives must enter symmetrically as they do in the Maxwell equations that govern the behavior of light — the equations must be differentially of the same order in space and time. In relativity, the momentum and the energies are the space and time parts of a spacetime vector, the four-momentum, and they are related by the relativistically invariant relation $$\frac{E^2}{c^2} - p^2 = m^2 c^2,$$ which says that the length of this four-vector is proportional to the rest mass $m$. Substituting the operator equivalents of the energy and momentum from the Schrödinger theory produces the Klein–Gordon equation describing the propagation of waves, constructed from relativistically invariant objects, $$\left(\frac{1}{c^2} \frac{\partial^2}{\partial t^2} - \nabla^2 + \frac{m^2 c^2}{\hbar^2}\right)\phi = 0,$$ with the wave function $\phi$ being a relativistic scalar: a complex number which has the same numerical value in all frames of reference. Space and time derivatives both enter to second order. This has a telling consequence for the interpretation of the equation. Because the equation is second order in the time derivative, one must specify initial values both of the wave function itself and of its first time-derivative in order to solve definite problems. Since both may be specified more or less arbitrarily, the wave function cannot maintain its former role of determining the probability density of finding the electron in a given state of motion. In the Schrödinger theory, the probability density is given by the positive definite expression $$\rho = \phi^* \phi,$$ and this density is convected according to the probability current vector $$\mathbf{J} = -\frac{i\hbar}{2m}\left(\phi^* \nabla \phi - \phi \nabla \phi^*\right),$$ with the conservation of probability current and density following from the continuity equation: $$\nabla \cdot \mathbf{J} + \frac{\partial \rho}{\partial t} = 0.$$ The fact that the density is positive definite and convected according to this continuity equation implies that one may integrate the density over a certain domain and set the total to 1, and this condition will be maintained by the conservation law. A proper relativistic theory with a probability density current must also share this feature.

To maintain the notion of a convected density, one must generalize the Schrödinger expression of the density and current so that space and time derivatives again enter symmetrically in relation to the scalar wave function. The Schrödinger expression can be kept for the current, but the probability density must be replaced by the symmetrically formed expression $$\rho = \frac{i\hbar}{2mc^2}\left(\psi^* \partial_t \psi - \psi\, \partial_t \psi^*\right),$$ which now becomes the 4th component of a spacetime vector, and the entire probability 4-current density has the relativistically covariant expression $$J^\mu = \frac{i\hbar}{2m}\left(\psi^* \partial^\mu \psi - \psi\, \partial^\mu \psi^*\right).$$ The continuity equation is as before. Everything is compatible with relativity now, but the expression for the density is no longer positive definite; the initial values of both $\psi$ and $\partial_t \psi$ may be freely chosen, and the density may thus become negative, something that is impossible for a legitimate probability density. Thus, one cannot get a simple generalization of the Schrödinger equation under the naive assumption that the wave function is a relativistic scalar, and the equation it satisfies, second order in time. Although it is not a successful relativistic generalization of the Schrödinger equation, this equation is resurrected in the context of quantum field theory, where it is known as the Klein–Gordon equation, and describes a spinless particle field (e.g. pi meson or Higgs boson). Historically, Schrödinger himself arrived at this equation before the one that bears his name but soon discarded it. In the context of quantum field theory, the indefinite density is understood to correspond to the charge density, which can be positive or negative, and not the probability density.

Dirac's coup

Dirac thus thought to try an equation that was first order in both space and time. He postulated an equation of the form $$E\psi = \left(c\,\boldsymbol{\alpha} \cdot \mathbf{p} + \beta mc^2\right)\psi,$$ where the operators $\boldsymbol{\alpha}$ and $\beta$ must be independent of $\mathbf{p}$ for linearity and independent of $\mathbf{x}$ and $t$ for space-time homogeneity. These constraints implied additional dynamical variables that the operators will depend upon; from this requirement Dirac concluded that the operators would depend upon 4×4 matrices, related to the Pauli matrices. One could, for example, formally (i.e. by abuse of notation, since it is not straightforward to take a functional square root of the sum of two differential operators) take the relativistic expression for the energy $$E = c\sqrt{p^2 + m^2 c^2},$$ replace $p$ by its operator equivalent, expand the square root in an infinite series of derivative operators, set up an eigenvalue problem, then solve the equation formally by iterations. Most physicists had little faith in such a process, even if it were technically possible. As the story goes, Dirac was staring into the fireplace at Cambridge, pondering this problem, when he hit upon the idea of taking the square root of the wave operator (see also half derivative) thus: $$\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2} = \left(A\,\partial_x + B\,\partial_y + C\,\partial_z + \frac{i}{c} D\,\partial_t\right)\left(A\,\partial_x + B\,\partial_y + C\,\partial_z + \frac{i}{c} D\,\partial_t\right).$$ On multiplying out the right side it is apparent that, in order to get all the cross-terms such as $\partial_x \partial_y$ to vanish, one must assume $$AB + BA = 0, \;\ldots\;, \qquad A^2 = B^2 = C^2 = D^2 = 1.$$ Dirac, who had just then been intensely involved with working out the foundations of Heisenberg's matrix mechanics, immediately understood that these conditions could be met if $A$, $B$, $C$ and $D$ are matrices, with the implication that the wave function has multiple components. This immediately explained the appearance of two-component wave functions in Pauli's phenomenological theory of spin, something that up until then had been regarded as mysterious, even to Pauli himself. However, one needs at least 4 × 4 matrices to set up a system with the properties required — so the wave function had four components, not two, as in the Pauli theory, or one, as in the bare Schrödinger theory.
The four-component wave function represents a new class of mathematical object in physical theories that makes its first appearance here. Given the factorization in terms of these matrices, one can now write down immediately an equation with to be determined. Applying again the matrix operator on both sides yields Taking shows that all the components of the wave function individually satisfy the relativistic energy–momentum relation. Thus the sought-for equation that is first-order in both space and time is Setting and because , the Dirac equation is produced as written above. Covariant form and relativistic invariance To demonstrate the relativistic invariance of the equation, it is advantageous to cast it into a form in which the space and time derivatives appear on an equal footing. New matrices are introduced as follows: and the equation takes the form (remembering the definition of the covariant components of the 4-gradient and especially that ) where there is an implied summation over the values of the twice-repeated index , and is the 4-gradient. In practice one often writes the gamma matrices in terms of 2 × 2 sub-matrices taken from the Pauli matrices and the 2 × 2 identity matrix. Explicitly the standard representation is The complete system is summarized using the Minkowski metric on spacetime in the form where the bracket expression denotes the anticommutator. These are the defining relations of a Clifford algebra over a pseudo-orthogonal 4-dimensional space with metric signature . The specific Clifford algebra employed in the Dirac equation is known today as the Dirac algebra. Although not recognized as such by Dirac at the time the equation was formulated, in hindsight the introduction of this geometric algebra represents an enormous stride forward in the development of quantum theory. The Dirac equation may now be interpreted as an eigenvalue equation, where the rest mass is proportional to an eigenvalue of the 4-momentum operator, the proportionality constant being the speed of light: Using ( is pronounced "d-slash"), according to Feynman slash notation, the Dirac equation becomes: In practice, physicists often use units of measure such that , known as natural units. The equation then takes the simple form A foundational theorem states that if two distinct sets of matrices are given that both satisfy the Clifford relations, then they are connected to each other by a similarity transform: If in addition the matrices are all unitary, as are the Dirac set, then itself is unitary; The transformation is unique up to a multiplicative factor of absolute value 1. Let us now imagine a Lorentz transformation to have been performed on the space and time coordinates, and on the derivative operators, which form a covariant vector. For the operator to remain invariant, the gammas must transform among themselves as a contravariant vector with respect to their spacetime index. These new gammas will themselves satisfy the Clifford relations, because of the orthogonality of the Lorentz transformation. By the previously mentioned foundational theorem, one may replace the new set by the old set subject to a unitary transformation. 
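Again as a hedged reconstruction of the lost displays: in one common convention (metric signature (+, −, −, −) assumed), the covariant equation, the defining Clifford relations, and the similarity-transform statement of the foundational theorem (Pauli's fundamental theorem) read

```latex
\bigl(i\hbar\,\gamma^{\mu}\partial_{\mu} - mc\bigr)\,\psi = 0,
\qquad
\{\gamma^{\mu},\gamma^{\nu}\} \;=\; \gamma^{\mu}\gamma^{\nu}+\gamma^{\nu}\gamma^{\mu}
 \;=\; 2\,\eta^{\mu\nu}\,\mathbb{1},
% and, for any second set gamma'^mu satisfying the same relations,
\gamma'^{\mu} \;=\; S^{-1}\,\gamma^{\mu}\,S
% for some nonsingular matrix S (unitary, if both sets are unitary);
% the placement of S versus S^{-1} varies between textbooks.
```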
In the new frame, remembering that the rest mass is a relativistic scalar, the Dirac equation will then take the form If the transformed spinor is defined as then the transformed Dirac equation is produced in a way that demonstrates manifest relativistic invariance: Thus, settling on any unitary representation of the gammas is final, provided the spinor is transformed according to the unitary transformation that corresponds to the given Lorentz transformation. The various representations of the Dirac matrices employed will bring into focus particular aspects of the physical content in the Dirac wave function. The representation shown here is known as the standard representation – in it, the wave function's upper two components go over into Pauli's 2-spinor wave function in the limit of low energies and small velocities in comparison to light. The considerations above reveal the origin of the gammas in geometry, hearkening back to Grassmann's original motivation; they represent a fixed basis of unit vectors in spacetime. Similarly, products of the gammas such as represent oriented surface elements, and so on. With this in mind, one can find the form of the unit volume element on spacetime in terms of the gammas as follows. By definition, it is For this to be an invariant, the epsilon symbol must be a tensor, and so must contain a factor of , where is the determinant of the metric tensor. Since this is negative, that factor is imaginary. Thus This matrix is given the special symbol , owing to its importance when one is considering improper transformations of space-time, that is, those that change the orientation of the basis vectors. In the standard representation, it is This matrix will also be found to anticommute with the other four Dirac matrices: It takes a leading role when questions of parity arise because the volume element as a directed magnitude changes sign under a space-time reflection. Taking the positive square root above thus amounts to choosing a handedness convention on spacetime. Comparison with related theories Pauli theory The necessity of introducing half-integer spin goes back experimentally to the results of the Stern–Gerlach experiment. A beam of atoms is run through a strong inhomogeneous magnetic field, which then splits into parts depending on the intrinsic angular momentum of the atoms. It was found that for silver atoms, the beam was split in two; the angular momentum of the ground state therefore could not be integral, because even if the intrinsic angular momentum of the atoms were as small as possible, 1, the beam would be split into three parts, corresponding to atoms with . The conclusion is that silver atoms have net intrinsic angular momentum of . Pauli set up a theory which explained this splitting by introducing a two-component wave function and a corresponding correction term in the Hamiltonian, representing a semi-classical coupling of this wave function to an applied magnetic field, as follows, in SI units: (Note that boldfaced characters imply Euclidean vectors in 3 dimensions, whereas the Minkowski four-vector can be defined as ) Here and represent the components of the electromagnetic four-potential in their standard SI units, and the three sigmas are the Pauli matrices.
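For concreteness, here is a hedged reconstruction of Pauli's Hamiltonian in SI units; the charge convention (e for the particle's charge) is assumed here, and conventions differ between sources.

```latex
H \;=\; \frac{1}{2m}\,\bigl[\boldsymbol{\sigma}\cdot(\mathbf{p}-e\mathbf{A})\bigr]^{2} \;+\; e\,\phi .
% "Squaring out the first term" (next paragraph) uses the identity
% (sigma.a)(sigma.b) = a.b + i sigma.(a x b), giving
H \;=\; \frac{(\mathbf{p}-e\mathbf{A})^{2}}{2m}
 \;-\; \frac{e\hbar}{2m}\,\boldsymbol{\sigma}\cdot\mathbf{B} \;+\; e\,\phi ,
% i.e. the classical minimally coupled Hamiltonian plus the
% spin-magnetic-field interaction term.
```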
On squaring out the first term, a residual interaction with the magnetic field is found, along with the usual classical Hamiltonian of a charged particle interacting with an applied field in SI units: This Hamiltonian is now a matrix, so the Schrödinger equation based on it must use a two-component wave function. On introducing the external electromagnetic 4-vector potential into the Dirac equation in a similar way, known as minimal coupling, it takes the form: A second application of the Dirac operator will now reproduce the Pauli term exactly as before, because the spatial Dirac matrices multiplied by , have the same squaring and commutation properties as the Pauli matrices. What is more, the value of the gyromagnetic ratio of the electron, standing in front of Pauli's new term, is explained from first principles. This was a major achievement of the Dirac equation and gave physicists great faith in its overall correctness. There is more however. The Pauli theory may be seen as the low energy limit of the Dirac theory in the following manner. First the equation is written in the form of coupled equations for 2-spinors with the SI units restored: so Assuming the field is weak and the motion of the electron non-relativistic, the total energy of the electron is approximately equal to its rest energy, and the momentum going over to the classical value, and so the second equation may be written which is of order Thus, at typical energies and velocities, the bottom components of the Dirac spinor in the standard representation are much suppressed in comparison to the top components. Substituting this expression into the first equation gives after some rearrangement The operator on the left represents the particle's total energy reduced by its rest energy, which is just its classical kinetic energy, so one can recover Pauli's theory upon identifying his 2-spinor with the top components of the Dirac spinor in the non-relativistic approximation. A further approximation gives the Schrödinger equation as the limit of the Pauli theory. Thus, the Schrödinger equation may be seen as the far non-relativistic approximation of the Dirac equation when one may neglect spin and work only at low energies and velocities. This also was a great triumph for the new equation, as it traced the mysterious that appears in it, and the necessity of a complex wave function, back to the geometry of spacetime through the Dirac algebra. It also highlights why the Schrödinger equation, although ostensibly in the form of a diffusion equation, actually represents wave propagation. It should be strongly emphasized that the entire Dirac spinor represents an irreducible whole. The separation, done here, of the Dirac spinor into large and small components depends on the low-energy approximation being valid. The components that were neglected above, to show that the Pauli theory can be recovered by a low-velocity approximation of Dirac's equation, are necessary to produce new phenomena observed in the relativistic regime – among them antimatter, and the creation and annihilation of particles. Weyl theory In the massless case , the Dirac equation reduces to the Weyl equation, which describes relativistic massless spin- particles. The theory acquires a second symmetry: see below. Physical interpretation Identification of observables The critical physical question in a quantum theory is this: what are the physically observable quantities defined by the theory? 
According to the postulates of quantum mechanics, such quantities are defined by self-adjoint operators that act on the Hilbert space of possible states of a system. The eigenvalues of these operators are then the possible results of measuring the corresponding physical quantity. In the Schrödinger theory, the simplest such object is the overall Hamiltonian, which represents the total energy of the system. To maintain this interpretation on passing to the Dirac theory, the Hamiltonian must be taken to be where, as always, there is an implied summation over the twice-repeated index . This looks promising, because one can see by inspection the rest energy of the particle and, in the case of , the energy of a charge placed in an electric potential . What about the term involving the vector potential? In classical electrodynamics, the energy of a charge moving in an applied potential is Thus, the Dirac Hamiltonian is fundamentally distinguished from its classical counterpart, and one must take great care to correctly identify what is observable in this theory. Much of the apparently paradoxical behavior implied by the Dirac equation amounts to a misidentification of these observables. Hole theory The negative solutions to the equation are problematic, for it was assumed that the particle has a positive energy. Mathematically speaking, however, there seems to be no reason for us to reject the negative-energy solutions. Since they exist, they cannot simply be ignored, for once the interaction between the electron and the electromagnetic field is included, any electron placed in a positive-energy eigenstate would decay into negative-energy eigenstates of successively lower energy. Real electrons obviously do not behave in this way, or they would disappear by emitting energy in the form of photons. To cope with this problem, Dirac introduced the hypothesis, known as hole theory, that the vacuum is the many-body quantum state in which all the negative-energy electron eigenstates are occupied. This description of the vacuum as a "sea" of electrons is called the Dirac sea. Since the Pauli exclusion principle forbids electrons from occupying the same state, any additional electron would be forced to occupy a positive-energy eigenstate, and positive-energy electrons would be forbidden from decaying into negative-energy eigenstates. Dirac further reasoned that if the negative-energy eigenstates are incompletely filled, each unoccupied eigenstate – called a hole – would behave like a positively charged particle. The hole possesses a positive energy because energy is required to create a particle–hole pair from the vacuum. As noted above, Dirac initially thought that the hole might be the proton, but Hermann Weyl pointed out that the hole should behave as if it had the same mass as an electron, whereas the proton is over 1800 times heavier. The hole was eventually identified as the positron, experimentally discovered by Carl Anderson in 1932. It is not entirely satisfactory to describe the "vacuum" using an infinite sea of negative-energy electrons. The infinitely negative contributions from the sea of negative-energy electrons have to be canceled by an infinite positive "bare" energy and the contribution to the charge density and current coming from the sea of negative-energy electrons is exactly canceled by an infinite positive "jellium" background so that the net electric charge density of the vacuum is zero. 
In quantum field theory, a Bogoliubov transformation on the creation and annihilation operators (turning an occupied negative-energy electron state into an unoccupied positive-energy positron state and an unoccupied negative-energy electron state into an occupied positive-energy positron state) allows us to bypass the Dirac sea formalism even though, formally, it is equivalent to it. In certain applications of condensed matter physics, however, the underlying concepts of "hole theory" are valid. The sea of conduction electrons in an electrical conductor, called a Fermi sea, contains electrons with energies up to the chemical potential of the system. An unfilled state in the Fermi sea behaves like a positively charged electron, and although it too is referred to as an "electron hole", it is distinct from a positron. The negative charge of the Fermi sea is balanced by the positively charged ionic lattice of the material. In quantum field theory In quantum field theories such as quantum electrodynamics, the Dirac field is subject to a process of second quantization, which resolves some of the paradoxical features of the equation. Mathematical formulation In its modern formulation for field theory, the Dirac equation is written in terms of a Dirac spinor field taking values in a complex vector space described concretely as , defined on flat spacetime (Minkowski space) . Its expression also contains gamma matrices and a parameter interpreted as the mass, as well as other physical constants. Dirac first obtained his equation through a factorization of Einstein's energy–momentum–mass equivalence relation, assuming a scalar product of momentum vectors determined by the metric tensor, and quantized the resulting relation by associating momenta to their respective operators. In terms of a field , the Dirac equation is then and in natural units, with Feynman slash notation, The gamma matrices are a set of four complex matrices (elements of ) which satisfy the defining anti-commutation relations: where is the Minkowski metric element, and the indices run over 0, 1, 2, and 3. These matrices can be realized explicitly under a choice of representation. Two common choices are the Dirac representation and the chiral representation. The Dirac representation is where are the Pauli matrices. For the chiral representation the are the same, but The slash notation is a compact notation for where is a four-vector (often it is the four-vector differential operator ). The summation over the index is implied. Alternatively, the four coupled linear first-order partial differential equations for the four quantities that make up the wave function can be written as a vector. In Planck units this becomes: which makes it clearer that it is a set of four partial differential equations with four unknown functions. (Note that the term is not preceded by because is imaginary.) Dirac adjoint and the adjoint equation The Dirac adjoint of the spinor field is defined as Using the property of gamma matrices (which follows straightforwardly from Hermiticity properties of the ) that one can derive the adjoint Dirac equation by taking the Hermitian conjugate of the Dirac equation and multiplying on the right by : where the partial derivative acts from the right on : written in the usual way in terms of a left action of the derivative, we have Klein–Gordon equation Applying to the Dirac equation gives That is, each component of the Dirac spinor field satisfies the Klein–Gordon equation.
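The Klein–Gordon property stated at the end of the passage can be made explicit; a short reconstruction in natural units, with metric signature (+, −, −, −) assumed:

```latex
% Apply the conjugate first-order operator to the Dirac equation:
(i\gamma^{\mu}\partial_{\mu} + m)\,(i\gamma^{\nu}\partial_{\nu} - m)\,\psi
 \;=\; -\bigl(\gamma^{\mu}\gamma^{\nu}\,\partial_{\mu}\partial_{\nu} + m^{2}\bigr)\,\psi
 \;=\; -\bigl(\partial^{\mu}\partial_{\mu} + m^{2}\bigr)\,\psi \;=\; 0,
% where gamma^mu gamma^nu d_mu d_nu reduces to eta^{mu nu} d_mu d_nu by the
% anticommutation relations, so each of the four components separately
% obeys the Klein-Gordon equation.
```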
Conserved current A conserved current of the theory is Another approach to derive this expression is by variational methods, applying Noether's theorem for the global symmetry to derive the conserved current Solutions Since the Dirac operator acts on 4-tuples of square-integrable functions, its solutions should be members of the same Hilbert space. The fact that the energies of the solutions do not have a lower bound is unexpected. Plane-wave solutions Plane-wave solutions are those arising from an ansatz which models a particle with definite 4-momentum where For this ansatz, the Dirac equation becomes an equation for : After picking a representation for the gamma matrices , solving this is a matter of solving a system of linear equations. It is a representation-free property of gamma matrices that the solution space is two-dimensional (see here). For example, in the chiral representation for , the solution space is parametrised by a vector , with where and is the Hermitian matrix square-root. These plane-wave solutions provide a starting point for canonical quantization. Lagrangian formulation Both the Dirac equation and the Adjoint Dirac equation can be obtained from (varying) the action with a specific Lagrangian density that is given by: If one varies this with respect to one gets the adjoint Dirac equation. Meanwhile, if one varies this with respect to one gets the Dirac equation. In natural units and with the slash notation, the action is then For this action, the conserved current above arises as the conserved current corresponding to the global symmetry through Noether's theorem for field theory. Gauging this field theory by changing the symmetry to a local, spacetime point dependent one gives gauge symmetry (really, gauge redundancy). The resultant theory is quantum electrodynamics or QED. See below for a more detailed discussion. Lorentz invariance The Dirac equation is invariant under Lorentz transformations, that is, under the action of the Lorentz group or strictly , the component connected to the identity. For a Dirac spinor viewed concretely as taking values in , the transformation under a Lorentz transformation is given by a complex matrix . There are some subtleties in defining the corresponding , as well as a standard abuse of notation. Most treatments occur at the Lie algebra level. For a more detailed treatment see here. The Lorentz group of real matrices acting on is generated by a set of six matrices with components When both the indices are raised or lowered, these are simply the 'standard basis' of antisymmetric matrices. These satisfy the Lorentz algebra commutation relations In the article on the Dirac algebra, it is also found that the spin generators satisfy the Lorentz algebra commutation relations. A Lorentz transformation can be written as where the components are antisymmetric in . The corresponding transformation on spin space is This is an abuse of notation, but a standard one. The reason is is not a well-defined function of , since there are two different sets of components (up to equivalence) which give the same but different . In practice we implicitly pick one of these and then is well defined in terms of Under a Lorentz transformation, the Dirac equation becomes Associated to Lorentz invariance is a conserved Noether current, or rather a tensor of conserved Noether currents . 
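For reference, the Lagrangian density and the conserved current that this section refers to can be written out; a hedged reconstruction in natural units:

```latex
\mathcal{L} \;=\; \overline{\psi}\,\bigl(i\gamma^{\mu}\partial_{\mu} - m\bigr)\,\psi,
\qquad
J^{\mu} \;=\; \overline{\psi}\,\gamma^{\mu}\,\psi,
\qquad
\partial_{\mu}J^{\mu} = 0,
% with the adjoint spinor defined, as usual, by
\overline{\psi} \;=\; \psi^{\dagger}\gamma^{0} .
% J^mu is the Noether current of the global U(1) phase symmetry.
```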
Similarly, since the equation is invariant under translations, there is a tensor of conserved Noether currents , which can be identified as the stress-energy tensor of the theory. The Lorentz current can be written in terms of the stress-energy tensor in addition to a tensor representing internal angular momentum. Further discussion of Lorentz covariance of the Dirac equation The Dirac equation is Lorentz covariant. Articulating this helps illuminate not only the Dirac equation, but also the Majorana spinor and Elko spinor, which although closely related, have subtle and important differences. Understanding Lorentz covariance is simplified by keeping in mind the geometric character of the process. Let be a single, fixed point in the spacetime manifold. Its location can be expressed in multiple coordinate systems. In the physics literature, these are written as and , with the understanding that both and describe the same point , but in different local frames of reference (a frame of reference over a small extended patch of spacetime). One can imagine as having a fiber of different coordinate frames above it. In geometric terms, one says that spacetime can be characterized as a fiber bundle, and specifically, the frame bundle. The difference between two points and in the same fiber is a combination of rotations and Lorentz boosts. A choice of coordinate frame is a (local) section through that bundle. Coupled to the frame bundle is a second bundle, the spinor bundle. A section through the spinor bundle is just the particle field (the Dirac spinor, in the present case). Different points in the spinor fiber correspond to the same physical object (the fermion) but expressed in different Lorentz frames. Clearly, the frame bundle and the spinor bundle must be tied together in a consistent fashion to get consistent results; formally, one says that the spinor bundle is the associated bundle; it is associated to a principal bundle, which in the present case is the frame bundle. Differences between points on the fiber correspond to the symmetries of the system. The spinor bundle has two distinct generators of its symmetries: the total angular momentum and the intrinsic angular momentum. Both correspond to Lorentz transformations, but in different ways. The presentation here follows that of Itzykson and Zuber. It is very nearly identical to that of Bjorken and Drell. A similar derivation in a general relativistic setting can be found in Weinberg. Here we fix our spacetime to be flat, that is, our spacetime is Minkowski space. Under a Lorentz transformation, the Dirac spinor is to transform as It can be shown that an explicit expression for is given by where parameterizes the Lorentz transformation, and are the six 4×4 matrices satisfying: This matrix can be interpreted as the intrinsic angular momentum of the Dirac field. That it deserves this interpretation can be seen by contrasting it to the generator of Lorentz transformations, which has the form This can be interpreted as the total angular momentum. It acts on the spinor field as Note that the above does not have a prime on it: the above is obtained by transforming , obtaining the change to , and then returning to the original coordinate system . The geometrical interpretation of the above is that the frame field is affine, having no preferred origin.
The generator generates the symmetries of this space: it provides a relabelling of a fixed point The generator generates a movement from one point in the fiber to another: a movement from with both and still corresponding to the same spacetime point These perhaps obtuse remarks can be elucidated with explicit algebra. Let be a Lorentz transformation. The Dirac equation is If the Dirac equation is to be covariant, then it should have exactly the same form in all Lorentz frames: The two spinors and should both describe the same physical field, and so should be related by a transformation that does not change any physical observables (charge, current, mass, etc.) The transformation should encode only the change of coordinate frame. It can be shown that such a transformation is a 4×4 unitary matrix. Thus, one may presume that the relation between the two frames can be written as Inserting this into the transformed equation, the result is The coordinates related by Lorentz transformation satisfy: The original Dirac equation is then regained if An explicit expression for (equal to the expression given above) can be obtained by considering a Lorentz transformation of infinitesimal rotation near the identity transformation: where is the metric tensor : and is symmetric while is antisymmetric. After plugging and chugging, one obtains which is the (infinitesimal) form for above and yields the relation . To obtain the affine relabelling, write After properly antisymmetrizing, one obtains the generator of symmetries given earlier. Thus, both and can be said to be the "generators of Lorentz transformations", but with a subtle distinction: the first corresponds to a relabelling of points on the affine frame bundle, which forces a translation along the fiber of the spinor on the spin bundle, while the second corresponds to translations along the fiber of the spin bundle (taken as a movement along the frame bundle, as well as a movement along the fiber of the spin bundle.) Weinberg provides additional arguments for the physical interpretation of these as total and intrinsic angular momentum. Other formulations The Dirac equation can be formulated in a number of other ways. Curved spacetime This article has developed the Dirac equation in flat spacetime according to special relativity. It is possible to formulate the Dirac equation in curved spacetime. The algebra of physical space This article developed the Dirac equation using four-vectors and Schrödinger operators. The Dirac equation in the algebra of physical space uses a Clifford algebra over the real numbers, a type of geometric algebra. Coupled Weyl Spinors As mentioned above, the massless Dirac equation immediately reduces to the homogeneous Weyl equation. By using the chiral representation of the gamma matrices, the nonzero-mass equation can also be decomposed into a pair of coupled inhomogeneous Weyl equations acting on the first and last pairs of indices of the original four-component spinor, i.e. , where and are each two-component Weyl spinors. This is because the skew block form of the chiral gamma matrices means that they swap the and and apply the two-by-two Pauli matrices to each: . So the Dirac equation becomes which in turn is equivalent to a pair of inhomogeneous Weyl equations for massless left- and right-helicity spinors, where the coupling strength is proportional to the mass: . 
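In the chiral basis, the pair of coupled inhomogeneous Weyl equations described above takes the following form; this is a standard-convention sketch, with σ^μ = (1, σ) and σ̄^μ = (1, −σ) assumed:

```latex
i\,\bar{\sigma}^{\mu}\partial_{\mu}\,\psi_{L} \;=\; m\,\psi_{R},
\qquad
i\,\sigma^{\mu}\partial_{\mu}\,\psi_{R} \;=\; m\,\psi_{L}.
% For m = 0 the two helicities decouple into independent Weyl equations;
% for m != 0 the mass term feeds each chirality into the other, which is
% the "coupling strength proportional to the mass" referred to above.
```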
This has been proposed as an intuitive explanation of Zitterbewegung, as these massless components would propagate at the speed of light and move in opposite directions, since the helicity is the projection of the spin onto the direction of motion. Here the role of the "mass" is not to make the velocity less than the speed of light, but instead controls the average rate at which these reversals occur; specifically, the reversals can be modeled as a Poisson process. U(1) symmetry Natural units are used in this section. The coupling constant is labelled by convention with : this parameter can also be viewed as modelling the electron charge. Vector symmetry The Dirac equation and action admits a symmetry where the fields transform as This is a global symmetry, known as the vector symmetry (as opposed to the axial symmetry: see below). By Noether's theorem there is a corresponding conserved current: this has been mentioned previously as Gauging the symmetry If we 'promote' the global symmetry, parametrised by the constant , to a local symmetry, parametrised by a function , or equivalently the Dirac equation is no longer invariant: there is a residual derivative of . The fix proceeds as in scalar electrodynamics: the partial derivative is promoted to a covariant derivative The covariant derivative depends on the field being acted on. The newly introduced is the 4-vector potential from electrodynamics, but also can be viewed as a gauge field (which, mathematically, is defined as a connection). The transformation law under gauge transformations for is then the usual but can also be derived by asking that covariant derivatives transform under a gauge transformation as We then obtain a gauge-invariant Dirac action by promoting the partial derivative to a covariant one: The final step needed to write down a gauge-invariant Lagrangian is to add a Maxwell Lagrangian term, Putting these together gives Expanding out the covariant derivative allows the action to be written in a second useful form: Axial symmetry Massless Dirac fermions, that is, fields satisfying the Dirac equation with , admit a second, inequivalent symmetry. This is seen most easily by writing the four-component Dirac fermion as a pair of two-component vector fields, and adopting the chiral representation for the gamma matrices, so that may be written where has components and has components . The Dirac action then takes the form That is, it decouples into a theory of two Weyl spinors or Weyl fermions. The earlier vector symmetry is still present, where and rotate identically. This form of the action makes the second inequivalent symmetry manifest: This can also be expressed at the level of the Dirac fermion as where is the exponential map for matrices. This isn't the only symmetry possible, but it is conventional. Any 'linear combination' of the vector and axial symmetries is also a symmetry. Classically, the axial symmetry admits a well-formulated gauge theory. But at the quantum level, there is an anomaly, that is, an obstruction to gauging. Extension to color symmetry We can extend this discussion from an abelian symmetry to a general non-abelian symmetry under a gauge group , the group of color symmetries for a theory. For concreteness, we fix , the special unitary group of matrices acting on . Before this section, could be viewed as a spinor field on Minkowski space, in other words a function , and its components in are labelled by spin indices, conventionally Greek indices taken from the start of the alphabet . 
Promoting the theory to a gauge theory, informally acquires a part transforming like , and these are labelled by color indices, conventionally Latin indices . In total, has components, given in indices by . The 'spinor' labels only how the field transforms under spacetime transformations. Formally, is valued in a tensor product, that is, it is a function Gauging proceeds similarly to the abelian case, with a few differences. Under a gauge transformation the spinor fields transform as The matrix-valued gauge field or connection transforms as and the covariant derivatives defined transform as Writing down a gauge-invariant action proceeds exactly as with the case, replacing the Maxwell Lagrangian with the Yang–Mills Lagrangian where the Yang–Mills field strength or curvature is defined here as and is the matrix commutator. The action is then Physical applications For physical applications, the case describes the quark sector of the Standard Model which models strong interactions. Quarks are modelled as Dirac spinors; the gauge field is the gluon field. The case describes part of the electroweak sector of the Standard Model. Leptons such as electrons and neutrinos are the Dirac spinors; the gauge field is the gauge boson. Generalisations This expression can be generalised to arbitrary Lie group with connection and a representation , where the colour part of is valued in . Formally, the Dirac field is a function Then transforms under a gauge transformation as and the covariant derivative is defined where here we view as a Lie algebra representation of the Lie algebra associated to . This theory can be generalised to curved spacetime, but there are subtleties which arise in gauge theory on a general spacetime (or more generally still, a manifold) which, on flat spacetime, can be ignored. This is ultimately due to the contractibility of flat spacetime which allows us to view a gauge field and gauge transformations as defined globally on .
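The gauging recipe of the last two sections can be summarized side by side. This is a hedged sketch in natural units; the coupling names e and g, the generators T^a, and the normalization of the Yang–Mills term are conventional assumptions, not fixed by the text.

```latex
% Abelian (QED) case:
D_{\mu}\psi = \bigl(\partial_{\mu} + ieA_{\mu}\bigr)\psi,
\qquad
F_{\mu\nu} = \partial_{\mu}A_{\nu} - \partial_{\nu}A_{\mu}.
% Non-abelian (SU(N)) case, with matrix-valued field A_mu = A_mu^a T^a:
D_{\mu}\psi = \bigl(\partial_{\mu} + igA_{\mu}\bigr)\psi,
\qquad
F_{\mu\nu} = \partial_{\mu}A_{\nu} - \partial_{\nu}A_{\mu} + ig\,[A_{\mu},A_{\nu}].
% In both cases the gauge-invariant Lagrangian has the same shape
% (the color index a is absent in the abelian case):
\mathcal{L} \;=\; \overline{\psi}\,\bigl(i\gamma^{\mu}D_{\mu} - m\bigr)\,\psi
 \;-\; \tfrac{1}{4}\,F^{a}_{\mu\nu}F^{a\,\mu\nu}.
```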
Physical sciences
Quantum mechanics
Physics
39418
https://en.wikipedia.org/wiki/Moore%27s%20law
Moore's law
Moore's law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years. Moore's law is an observation and projection of a historical trend. Rather than a law of physics, it is an empirical relationship. It is an experience-curve law, a type of law quantifying efficiency gains from experience in production. The observation is named after Gordon Moore, the co-founder of Fairchild Semiconductor and Intel and former CEO of the latter, who in 1965 noted that the number of components per integrated circuit had been doubling every year, and projected this rate of growth would continue for at least another decade. In 1975, looking forward to the next decade, he revised the forecast to doubling every two years, a compound annual growth rate (CAGR) of 41%. Moore's empirical evidence did not directly imply that the historical trend would continue, nevertheless his prediction has held since 1975 and has since become known as a "law". Moore's prediction has been used in the semiconductor industry to guide long-term planning and to set targets for research and development, thus functioning to some extent as a self-fulfilling prophecy. Advancements in digital electronics, such as the reduction in quality-adjusted microprocessor prices, the increase in memory capacity (RAM and flash), the improvement of sensors, and even the number and size of pixels in digital cameras, are strongly linked to Moore's law. These ongoing changes in digital electronics have been a driving force of technological and social change, productivity, and economic growth. Industry experts have not reached a consensus on exactly when Moore's law will cease to apply. Microprocessor architects report that semiconductor advancement has slowed industry-wide since around 2010, slightly below the pace predicted by Moore's law. In September 2022, Nvidia CEO Jensen Huang considered Moore's law dead, while Intel CEO Pat Gelsinger was of the opposite view. History In 1959, Douglas Engelbart studied the projected downscaling of integrated circuit (IC) size, publishing his results in the article "Microelectronics, and the Art of Similitude". Engelbart presented his findings at the 1960 International Solid-State Circuits Conference, where Moore was present in the audience. In 1965, Gordon Moore, who at the time was working as the director of research and development at Fairchild Semiconductor, was asked to contribute to the thirty-fifth anniversary issue of Electronics magazine with a prediction on the future of the semiconductor components industry over the next ten years. His response was a brief article entitled "Cramming more components onto integrated circuits". Within his editorial, he speculated that by 1975 it would be possible to contain as many as components on a single quarter-square-inch (~ ) semiconductor. The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. Moore posited a log–linear relationship between device complexity (higher circuit density at reduced cost) and time. In a 2015 interview, Moore noted of the 1965 article: "... I just did a wild extrapolation saying it's going to continue to double every year for the next 10 years." 
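The 41% CAGR quoted for the 1975 revision is just the annualized rate implied by a two-year doubling period; a quick illustrative check in Python (not from the source):

```python
def annual_growth(doubling_period_years: float) -> float:
    """Compound annual growth rate implied by a given doubling period."""
    return 2 ** (1 / doubling_period_years) - 1

print(f"doubling every year    -> CAGR = {annual_growth(1):.0%}")   # 100%
print(f"doubling every 2 years -> CAGR = {annual_growth(2):.1%}")   # ~41.4%
```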
One historian of the law cites Stigler's law of eponymy, to introduce the fact that the regular doubling of components was known to many working in the field. In 1974, Robert H. Dennard at IBM recognized the rapid MOSFET scaling technology and formulated what became known as Dennard scaling, which describes that as MOS transistors get smaller, their power density stays constant such that the power use remains in proportion with area. Evidence from the semiconductor industry shows that this inverse relationship between power density and areal density broke down in the mid-2000s. At the 1975 IEEE International Electron Devices Meeting, Moore revised his forecast rate, predicting semiconductor complexity would continue to double annually until about 1980, after which it would decrease to a rate of doubling approximately every two years. He outlined several contributing factors for this exponential behavior: The advent of metal–oxide–semiconductor (MOS) technology The exponential rate of increase in die sizes, coupled with a decrease in defective densities, with the result that semiconductor manufacturers could work with larger areas without losing reduction yields Finer minimum dimensions What Moore called "circuit and device cleverness" Shortly after 1975, Caltech professor Carver Mead popularized the term "Moore's law". Moore's law eventually came to be widely accepted as a goal for the semiconductor industry, and it was cited by competitive semiconductor manufacturers as they strove to increase processing power. Moore viewed his eponymous law as surprising and optimistic: "Moore's law is a violation of Murphy's law. Everything gets better and better." The observation was even seen as a self-fulfilling prophecy. The doubling period is often misquoted as 18 months because of a separate prediction by Moore's colleague, Intel executive David House. In 1975, House noted that Moore's revised law of doubling transistor count every 2 years in turn implied that computer chip performance would roughly double every 18 months, with no increase in power consumption. Mathematically, Moore's law predicted that transistor count would double every 2 years due to shrinking transistor dimensions and other improvements. As a consequence of shrinking dimensions, Dennard scaling predicted that power consumption per unit area would remain constant. Combining these effects, David House deduced that computer chip performance would roughly double every 18 months. Also due to Dennard scaling, this increased performance would not be accompanied by increased power, i.e., the energy-efficiency of silicon-based computer chips roughly doubles every 18 months. Dennard scaling ended in the 2000s. Koomey later showed that a similar rate of efficiency improvement predated silicon chips and Moore's law, for technologies such as vacuum tubes. Microprocessor architects report that since around 2010, semiconductor advancement has slowed industry-wide below the pace predicted by Moore's law. Brian Krzanich, the former CEO of Intel, cited Moore's 1975 revision as a precedent for the current deceleration, which results from technical challenges and is "a natural part of the history of Moore's law". The rate of improvement in physical dimensions known as Dennard scaling also ended in the mid-2000s. As a result, much of the semiconductor industry has shifted its focus to the needs of major computing applications rather than semiconductor scaling. 
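House's 18-month figure can be sanity-checked with the two ingredients named above: transistor count doubling every two years, plus the roughly 1.4× frequency gain per generation that Dennard scaling allowed. The numbers below are illustrative assumptions, not data from the source:

```python
import math

transistors_per_gen = 2.0   # Moore's law: 2x per ~24-month generation
frequency_per_gen = 1.4     # Dennard scaling: ~40% faster clocks per generation

perf_per_gen = transistors_per_gen * frequency_per_gen        # ~2.8x per 24 months
doubling_months = 24 * math.log(2) / math.log(perf_per_gen)

print(f"implied performance doubling time: {doubling_months:.0f} months")
# ~16 months, the same order as House's 18, since in practice not every
# extra transistor translates into single-task performance.
```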
Nevertheless, leading semiconductor manufacturers TSMC and Samsung Electronics have claimed to keep pace with Moore's law with 10, 7, and 5 nm nodes in mass production. Moore's second law As the cost of computer power to the consumer falls, the cost for producers to fulfill Moore's law follows an opposite trend: R&D, manufacturing, and test costs have increased steadily with each new generation of chips. The cost of the tools, principally EUVL (Extreme ultraviolet lithography), used to manufacture chips doubles every 4 years. Rising manufacturing costs are an important consideration for the sustaining of Moore's law. This led to the formulation of Moore's second law, also called Rock's law (named after Arthur Rock), which is that the capital cost of a semiconductor fabrication plant also increases exponentially over time. Major enabling factors Numerous innovations by scientists and engineers have sustained Moore's law since the beginning of the IC era. Some of the key innovations are listed below, as examples of breakthroughs that have advanced integrated circuit and semiconductor device fabrication technology, allowing transistor counts to grow by more than seven orders of magnitude in less than five decades. Integrated circuit (IC): The raison d'être for Moore's law. The germanium hybrid IC was invented by Jack Kilby at Texas Instruments in 1958, followed by the invention of the silicon monolithic IC chip by Robert Noyce at Fairchild Semiconductor in 1959. Complementary metal–oxide–semiconductor (CMOS): The CMOS process was invented by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963. Dynamic random-access memory (DRAM): DRAM was developed by Robert H. Dennard at IBM in 1967. Chemically amplified photoresist: Invented by Hiroshi Ito, C. Grant Willson and J. M. J. Fréchet at IBM circa 1980, which was 5–10 times more sensitive to ultraviolet light. IBM introduced chemically amplified photoresist for DRAM production in the mid-1980s. Deep UV excimer laser photolithography: Invented by Kanti Jain at IBM circa 1980. Prior to this, excimer lasers had been mainly used as research devices since their development in the 1970s. From a broader scientific perspective, the invention of excimer laser lithography has been highlighted as one of the major milestones in the 50-year history of the laser. Interconnect innovations: Interconnect innovations of the late 1990s, including chemical-mechanical polishing or chemical mechanical planarization (CMP), trench isolation, and copper interconnects—although not directly a factor in creating smaller transistors—have enabled improved wafer yield, additional layers of metal wires, closer spacing of devices, and lower electrical resistance. Computer industry technology road maps predicted in 2001 that Moore's law would continue for several generations of semiconductor chips. Recent trends One of the key technical challenges of engineering future nanoscale transistors is the design of gates. As device dimensions shrink, controlling the current flow in the thin channel becomes more difficult. Modern nanoscale transistors typically take the form of multi-gate MOSFETs, with the FinFET being the most common nanoscale transistor. The FinFET has gate dielectric on three sides of the channel. In comparison, the gate-all-around MOSFET (GAAFET) structure has even better gate control. 
A gate-all-around MOSFET (GAAFET) was first demonstrated in 1988 by a Toshiba research team led by Fujio Masuoka, who built a vertical nanowire GAAFET that he called a "surrounding gate transistor" (SGT). Masuoka, best known as the inventor of flash memory, later left Toshiba and founded Unisantis Electronics in 2004 to research surrounding-gate technology along with Tohoku University. In 2006, a team of Korean researchers from the Korea Advanced Institute of Science and Technology (KAIST) and the National Nano Fab Center developed a 3 nm transistor, the world's smallest nanoelectronic device at the time, based on FinFET technology. In 2010, researchers at the Tyndall National Institute in Cork, Ireland, announced a junctionless transistor. A control gate wrapped around a silicon nanowire can control the passage of electrons without the use of junctions or doping. They claim these may be produced at 10 nm scale using existing fabrication techniques. In 2011, researchers at the University of Pittsburgh announced the development of a single-electron transistor, 1.5 nm in diameter, made out of oxide-based materials. Three "wires" converge on a central "island" that can house one or two electrons. Electrons tunnel from one wire to another through the island. Conditions on the third wire result in distinct conductive properties including the ability of the transistor to act as a solid-state memory. Nanowire transistors could spur the creation of microscopic computers. In 2012, a research team at the University of New South Wales announced the development of the first working transistor consisting of a single atom placed precisely in a silicon crystal (not just picked from a large sample of random transistors). Moore's law predicted this milestone to be reached for ICs in the lab by 2020. In 2015, IBM demonstrated 7 nm node chips with silicon–germanium transistors produced using EUVL. The company believed this transistor density would be four times that of the then-current 14 nm chips. Samsung and TSMC planned to manufacture 3 nm GAAFET nodes by 2021–2022. Note that node names, such as 3 nm, have no relation to the physical size of device elements (transistors). A Toshiba research team including T. Imoto, M. Matsui and C. Takubo developed a "System Block Module" wafer bonding process for manufacturing three-dimensional integrated circuit (3D IC) packages in 2001. In April 2007, Toshiba introduced an eight-layer 3D IC, the 16 GB THGAM embedded NAND flash memory chip that was manufactured with eight stacked 2 GB NAND flash chips. In September 2007, Hynix introduced a 24-layer 3D IC, a 16 GB flash memory chip that was manufactured with 24 stacked NAND flash chips using a wafer bonding process. V-NAND, also known as 3D NAND, allows flash memory cells to be stacked vertically using charge trap flash technology originally presented by John Szedon in 1967, significantly increasing the number of transistors on a flash memory chip. 3D NAND was first announced by Toshiba in 2007. V-NAND was first commercially manufactured by Samsung Electronics in 2013. In 2008, researchers at HP Labs announced a working memristor, a fourth basic passive circuit element whose existence had only been theorized previously. The memristor's unique properties permit the creation of smaller and better-performing electronic devices. In 2014, bioengineers at Stanford University developed a circuit modeled on the human brain.
Sixteen "Neurocore" chips simulate one million neurons and billions of synaptic connections, claimed to be times faster as well as more energy efficient than a typical PC. In 2015, Intel and Micron announced 3D XPoint, a non-volatile memory claimed to be significantly faster with similar density compared to NAND. Production scheduled to begin in 2016 was delayed until the second half of 2017. In 2017, Samsung combined its V-NAND technology with eUFS 3D IC stacking to produce a 512GB flash memory chip, with eight stacked 64-layer V-NAND dies. In 2019, Samsung produced a 1TB flash chip with eight stacked 96-layer V-NAND dies, along with quad-level cell (QLC) technology (4-bit per transistor), equivalent to 2trillion transistors, the highest transistor count of any IC chip. In 2020, Samsung Electronics planned to produce the 5 nm node, using FinFET and EUV technology. In May 2021, IBM announced the creation of the first 2 nm computer chip, with parts supposedly being smaller than human DNA. Microprocessor architects report that semiconductor advancement has slowed industry-wide since around 2010, below the pace predicted by Moore's law. Brian Krzanich, the former CEO of Intel, announced, "Our cadence today is closer to two and a half years than two." Intel stated in 2015 that improvements in MOSFET devices have slowed, starting at the 22 nm feature width around 2012, and continuing at 14 nm. Pat Gelsinger, Intel CEO, stated at the end of 2023 that "we're no longer in the golden era of Moore's Law, it's much, much harder now, so we're probably doubling effectively closer to every three years now, so we've definitely seen a slowing." The physical limits to transistor scaling have been reached due to source-to-drain leakage, limited gate metals and limited options for channel material. Other approaches are being investigated, which do not rely on physical scaling. These include the spin state of electron spintronics, tunnel junctions, and advanced confinement of channel materials via nano-wire geometry. Spin-based logic and memory options are being developed actively in labs. Alternative materials research The vast majority of current transistors on ICs are composed principally of doped silicon and its alloys. As silicon is fabricated into single nanometer transistors, short-channel effects adversely change desired material properties of silicon as a functional transistor. Below are several non-silicon substitutes in the fabrication of small nanometer transistors. One proposed material is indium gallium arsenide, or InGaAs. Compared to their silicon and germanium counterparts, InGaAs transistors are more promising for future high-speed, low-power logic applications. Because of intrinsic characteristics of III–V compound semiconductors, quantum well and tunnel effect transistors based on InGaAs have been proposed as alternatives to more traditional MOSFET designs. In the early 2000s, the atomic layer deposition high-κ film and pitch double-patterning processes were invented by Gurtej Singh Sandhu at Micron Technology, extending Moore's law for planar CMOS technology to 30 nm class and smaller. In 2009, Intel announced the development of 80 nm InGaAs quantum well transistors. Quantum well devices contain a material sandwiched between two layers of material with a wider band gap. Despite being double the size of leading pure silicon transistors at the time, the company reported that they performed equally as well while consuming less power. 
In 2011, researchers at Intel demonstrated 3-D tri-gate InGaAs transistors with improved leakage characteristics compared to traditional planar designs. The company claims that their design achieved the best electrostatics of any III–V compound semiconductor transistor. At the 2015 International Solid-State Circuits Conference, Intel mentioned the use of III–V compounds based on such an architecture for their 7 nm node. In 2011, researchers at the University of Texas at Austin developed an InGaAs tunneling field-effect transistor capable of higher operating currents than previous designs. The first III–V TFET designs were demonstrated in 2009 by a joint team from Cornell University and Pennsylvania State University. In 2012, a team in MIT's Microsystems Technology Laboratories developed a 22 nm transistor based on InGaAs that, at the time, was the smallest non-silicon transistor ever built. The team used techniques used in silicon device fabrication and aimed for better electrical performance and a reduction to 10-nanometer scale. Biological computing research shows that biological material has superior information density and energy efficiency compared to silicon-based computing. Various forms of graphene are being studied for graphene electronics, e.g. graphene nanoribbon transistors, which have shown promise since their appearance in publications in 2008. (Bulk graphene has a band gap of zero and thus cannot be used in transistors because of its constant conductivity, an inability to turn off. The zigzag edges of the nanoribbons introduce localized energy states in the conduction and valence bands and thus a bandgap that enables switching when fabricated as a transistor. As an example, a typical GNR of width 10 nm has a desirable bandgap energy of 0.4 eV.) More research will need to be performed, however, on sub-50 nm graphene layers, as their resistivity increases and electron mobility thus decreases.
The primary driving force of economic growth is the growth of productivity, which Moore's law factors into. Moore (1995) expected that "the rate of technological progress is going to be controlled from financial realities". The reverse could and did occur around the late 1990s, however, with economists reporting that "Productivity growth is the key economic indicator of innovation." Moore's law describes a driving force of technological and social change, productivity, and economic growth. An acceleration in the rate of semiconductor progress contributed to a surge in U.S. productivity growth, which reached 3.4% per year in 1997–2004, outpacing the 1.6% per year during both 1972–1996 and 2005–2013. As economist Richard G. Anderson notes, "Numerous studies have traced the cause of the productivity acceleration to technological innovations in the production of semiconductors that sharply reduced the prices of such components and of the products that contain them (as well as expanding the capabilities of such products)." The primary negative implication of Moore's law is that obsolescence pushes society up against the Limits to Growth. As technologies continue to rapidly "improve", they render predecessor technologies obsolete. In situations in which security and survivability of hardware or data are paramount, or in which resources are limited, rapid obsolescence often poses obstacles to smooth or continued operations. Other formulations and similar observations Several measures of digital technology are improving at exponential rates related to Moore's law, including the size, cost, density, and speed of components. Moore wrote only about the density of components, "a component being a transistor, resistor, diode or capacitor", at minimum cost. Transistors per integrated circuit – The most popular formulation is of the doubling of the number of transistors on ICs every two years. At the end of the 1970s, Moore's law became known as the limit for the number of transistors on the most complex chips. This trend continues to hold. The commercially available processor possessing one of the highest numbers of transistors is the AD102 graphics processor, with more than 76.3 billion transistors. Density at minimum cost per transistor – This is the formulation given in Moore's 1965 paper. It is not just about the density of transistors that can be achieved, but about the density of transistors at which the cost per transistor is the lowest. As more transistors are put on a chip, the cost to make each transistor decreases, but the chance that the chip will not work due to a defect increases. In 1965, Moore examined the density of transistors at which cost is minimized, and observed that, as transistors were made smaller through advances in photolithography, this number would increase at "a rate of roughly a factor of two per year". Dennard scaling – This posits that power usage would decrease in proportion to the area of transistors (both voltage and current being proportional to length). Combined with Moore's law, performance per watt would grow at roughly the same rate as transistor density, doubling every 1–2 years. According to Dennard scaling, transistor dimensions would be scaled by 30% (0.7×) every technology generation, thus reducing their area by 50%. This would reduce the delay by 30% (0.7×) and therefore increase operating frequency by about 40% (1.4×).
Finally, to keep the electric field constant, voltage would be reduced by 30%, reducing energy by 65% and power (at 1.4× frequency) by 50%. Therefore, in every technology generation transistor density would double, circuits become 40% faster, while power consumption (with twice the number of transistors) stays the same. Dennard scaling ended in 2005–2010, due to leakage currents. The exponential processor transistor growth predicted by Moore does not always translate into exponentially greater practical CPU performance. Since around 2005–2007, Dennard scaling has ended, so even though Moore's law continued after that, it has not yielded proportional dividends in improved performance. The primary reason cited for the breakdown is that at small sizes, current leakage poses greater challenges and also causes the chip to heat up, which creates a threat of thermal runaway and therefore further increases energy costs. The breakdown of Dennard scaling prompted a greater focus on multicore processors, but the gains offered by switching to more cores are lower than the gains that would be achieved had Dennard scaling continued. In another departure from Dennard scaling, Intel microprocessors adopted a non-planar tri-gate FinFET at 22 nm in 2012 that is faster and consumes less power than a conventional planar transistor. The rate of performance improvement for single-core microprocessors has slowed significantly. Single-core performance was improving by 52% per year in 1986–2003 and 23% per year in 2003–2011, but slowed to just seven percent per year in 2011–2018. Quality-adjusted price of IT equipment – The price of information technology (IT), computers and peripheral equipment, adjusted for quality and inflation, declined 16% per year on average over the five decades from 1959 to 2009. The pace accelerated, however, to 23% per year in 1995–1999, triggered by faster IT innovation, and later slowed to 2% per year in 2010–2013. While quality-adjusted microprocessor price improvement continues, the rate of improvement likewise varies, and is not linear on a log scale. Microprocessor price improvement accelerated during the late 1990s, reaching 60% per year (halving every nine months) versus the typical 30% improvement rate (halving every two years) during the years earlier and later. Laptop microprocessors in particular improved 25–35% per year in 2004–2010, and slowed to 15–25% per year in 2010–2013. The number of transistors per chip cannot explain quality-adjusted microprocessor prices fully. Moore's 1995 paper does not limit Moore's law to strict linearity or to transistor count: "The definition of 'Moore's Law' has come to refer to almost anything related to the semiconductor industry that on a semi-log plot approximates a straight line. I hesitate to review its origins and by doing so restrict its definition." Hard disk drive areal density – A similar prediction (sometimes called Kryder's law) was made in 2005 for hard disk drive areal density. The prediction was later viewed as over-optimistic. Several decades of rapid progress in areal density slowed around 2010, from 30 to 100% per year to 10–15% per year, because of noise related to smaller grain size of the disk media, thermal stability, and writability using available magnetic fields. Fiber-optic capacity – The number of bits per second that can be sent down an optical fiber increases exponentially, faster than Moore's law. This has become known as Keck's law, in honor of Donald Keck.
Network capacity – According to Gerald Butters, the former head of Lucent's Optical Networking Group at Bell Labs, there is another version, called Butters' Law of Photonics, a formulation that deliberately parallels Moore's law. Butters' law says that the amount of data coming out of an optical fiber is doubling every nine months. Thus, the cost of transmitting a bit over an optical network decreases by half every nine months. The availability of wavelength-division multiplexing (sometimes called WDM) increased the capacity that could be placed on a single fiber by as much as a factor of 100. Optical networking and dense wavelength-division multiplexing (DWDM) are rapidly bringing down the cost of networking, and further progress seems assured. As a result, the wholesale price of data traffic collapsed in the dot-com bubble. Nielsen's law says that the bandwidth available to users increases by 50% annually. Pixels per dollar – Similarly, Barry Hendy of Kodak Australia has plotted pixels per dollar as a basic measure of value for a digital camera, demonstrating the historical linearity (on a log scale) of this market and the opportunity to predict the future trend of digital camera prices, LCD and LED screens, and resolution. The great Moore's law compensator (TGMLC), also known as Wirth's law, is generally referred to as software bloat: the principle that successive generations of computer software increase in size and complexity, thereby offsetting the performance gains predicted by Moore's law. In a 2008 article in InfoWorld, Randall C. Kennedy, formerly of Intel, introduced this term using successive versions of Microsoft Office between the year 2000 and 2007 as his premise. Despite the gains in computational performance during this time period according to Moore's law, Office 2007 performed the same task at half the speed on a prototypical year 2007 computer as compared to Office 2000 on a year 2000 computer. Library expansion – Library capacity was calculated in 1945 by Fremont Rider to double every 16 years, if sufficient space were made available. He advocated replacing bulky, decaying printed works with miniaturized microform analog photographs, which could be duplicated on-demand for library patrons or other institutions. He did not foresee the digital technology that would follow decades later to replace analog microform with digital imaging, storage, and transmission media. Automated, potentially lossless digital technologies allowed vast increases in the rapidity of information growth in an era that now sometimes is called the Information Age. Carlson curve – A term coined by The Economist to describe the biotechnological equivalent of Moore's law, named after author Rob Carlson. Carlson accurately predicted that the doubling time of DNA sequencing technologies (measured by cost and performance) would be at least as fast as Moore's law. Carlson curves illustrate the rapid (in some cases hyperexponential) decreases in cost, and increases in performance, of a variety of technologies, including DNA sequencing, DNA synthesis, and a range of physical and computational tools used in protein expression and in determining protein structures. Eroom's law – A pharmaceutical drug development observation that was deliberately written as Moore's law spelled backwards in order to contrast it with the exponential advancements of other forms of technology (such as transistors) over time. It states that the cost of developing a new drug roughly doubles every nine years.
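Since the observations above are stated variously as doubling periods (Moore, Butters) or annual rates (Nielsen), a short Python sketch that converts both to a common footing shows how different the implied decade-scale gains are. The periods and rates are the ones quoted above; the comparison itself is illustrative:

def growth_over(years, doubling_months=None, annual_rate=None):
    # Growth factor from either a doubling period in months or a fractional annual rate.
    if doubling_months is not None:
        return 2.0 ** (years * 12.0 / doubling_months)
    return (1.0 + annual_rate) ** years

print(growth_over(10, doubling_months=24))   # Moore's law: ~32x in a decade
print(growth_over(10, doubling_months=9))    # Butters' law: ~10,000x in a decade
print(growth_over(10, annual_rate=0.50))     # Nielsen's law: ~58x in a decade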
Experience curve effects – Each doubling of the cumulative production of virtually any product or service is accompanied by an approximately constant percentage reduction in the unit cost. The first acknowledged documented qualitative description of this dates from 1885. A power curve was used to describe this phenomenon in a 1936 discussion of the cost of airplanes. Edholm's law – Phil Edholm observed that the bandwidth of telecommunication networks (including the Internet) is doubling every 18 months. The bandwidths of online communication networks have risen from bits per second to terabits per second. The rapid rise in online bandwidth is largely due to the same MOSFET scaling that enabled Moore's law, as telecommunications networks are built from MOSFETs. Haitz's law predicts that the brightness of LEDs increases as their manufacturing cost goes down. Swanson's law is the observation that the price of solar photovoltaic modules tends to drop 20 percent for every doubling of cumulative shipped volume. At present rates, costs go down 75% about every 10 years.
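Swanson's law is an instance of the experience curve just described, and its two quoted numbers (20% per doubling, roughly 75% per decade) are mutually consistent if cumulative shipments grow at roughly 55% per year. A minimal Python sketch; the base cost, base volume, and the assumed 1.55× annual volume growth are illustrative figures, not sourced data:

import math

def unit_cost(cum_volume, base_volume, base_cost, reduction_per_doubling=0.20):
    # Experience curve: cost falls by a fixed fraction per doubling of cumulative volume.
    doublings = math.log2(cum_volume / base_volume)
    return base_cost * (1.0 - reduction_per_doubling) ** doublings

# If cumulative volume grows ~1.55x per year for a decade:
volume_after_decade = 1.55 ** 10
print(unit_cost(volume_after_decade, 1.0, 1.0))   # ~0.24, i.e. roughly a 75% cost drop

Under these assumptions the model lands close to the 75%-per-decade figure quoted above.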
Technology
Computer hardware
null
39420
https://en.wikipedia.org/wiki/Right%20triangle
Right triangle
A right triangle or right-angled triangle, sometimes called an orthogonal triangle or rectangular triangle, is a triangle in which two sides are perpendicular, forming a right angle (a quarter turn, or 90 degrees). The side opposite the right angle is called the hypotenuse (side c in the figure). The sides adjacent to the right angle are called legs (or catheti, singular: cathetus). Side a may be identified as the side adjacent to angle B and opposite (or opposed to) angle A, while side b is the side adjacent to angle A and opposite angle B. Every right triangle is half of a rectangle which has been divided along its diagonal. When the rectangle is a square, its right-triangular half is isosceles, with two congruent sides and two congruent angles. When the rectangle is not a square, its right-triangular half is scalene. Every triangle whose base is the diameter of a circle and whose apex lies on the circle is a right triangle, with the right angle at the apex and the hypotenuse as the base; conversely, the circumcircle of any right triangle has the hypotenuse as its diameter. This is Thales' theorem. The legs and hypotenuse of a right triangle satisfy the Pythagorean theorem: the sum of the areas of the squares on the two legs is the area of the square on the hypotenuse, a² + b² = c². If the lengths of all three sides of a right triangle are integers, the triangle is called a Pythagorean triangle and its side lengths are collectively known as a Pythagorean triple. The relations between the sides and angles of a right triangle provide one way of defining and understanding trigonometry, the study of the metrical relationships between lengths and angles. Principal properties Sides The three sides of a right triangle are related by the Pythagorean theorem, which in modern algebraic notation can be written a² + b² = c², where c is the length of the hypotenuse (side opposite the right angle), and a and b are the lengths of the legs (remaining two sides). Pythagorean triples are integer values of a, b, c satisfying this equation. This theorem was proven in antiquity, and is proposition I.47 in Euclid's Elements: "In right-angled triangles the square on the side subtending the right angle is equal to the squares on the sides containing the right angle." Area As with any triangle, the area is equal to one half the base multiplied by the corresponding height. In a right triangle, if one leg is taken as the base then the other is the height, so the area of a right triangle is one half the product of the two legs. As a formula the area is T = ab/2, where a and b are the legs of the triangle. If the incircle is tangent to the hypotenuse AB at point P, then letting the semi-perimeter be s = (a + b + c)/2, we have PA = s − a and PB = s − b, and the area is given by T = PA · PB = (s − a)(s − b). This formula only applies to right triangles. Altitudes If an altitude is drawn from the vertex with the right angle to the hypotenuse, then the triangle is divided into two smaller triangles; these are both similar to the original, and therefore similar to each other. From this: The altitude to the hypotenuse is the geometric mean (mean proportional) of the two segments of the hypotenuse. Each leg of the triangle is the mean proportional of the hypotenuse and the segment of the hypotenuse that is adjacent to the leg. In equations, f² = de, a² = cd, and b² = ce (this is sometimes known as the right triangle altitude theorem), where f is the altitude to the hypotenuse, and d and e are the segments of the hypotenuse adjacent to the legs a and b respectively, with c = d + e. Thus f = ab/c. Moreover, the altitude to the hypotenuse is related to the legs of the right triangle by 1/a² + 1/b² = 1/f². For solutions of this equation in integer values of a, b, f, and c, see here. The altitude from either leg coincides with the other leg.
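These altitude and area relations are easy to confirm numerically. A quick Python check on a 3-4-5 triangle:

import math

a, b = 3.0, 4.0
c = math.hypot(a, b)            # hypotenuse: 5.0
f = a * b / c                   # altitude to the hypotenuse
d, e = a**2 / c, b**2 / c       # hypotenuse segments adjacent to a and b
s = (a + b + c) / 2             # semi-perimeter

assert math.isclose(f * f, d * e)                  # f² = de
assert math.isclose(1/a**2 + 1/b**2, 1/f**2)       # altitude-to-legs relation
assert math.isclose(a * b / 2, (s - a) * (s - b))  # area as (s − a)(s − b)
print("altitude and area identities verified")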
Since these intersect at the right-angled vertex, the right triangle's orthocenter—the intersection of its three altitudes—coincides with the right-angled vertex. Inradius and circumradius The radius of the incircle of a right triangle with legs a and b and hypotenuse c is r = (a + b − c)/2. The radius of the circumcircle is half the length of the hypotenuse, R = c/2. Thus the sum of the circumradius and the inradius is half the sum of the legs: R + r = (a + b)/2. One of the legs can be expressed in terms of the inradius and the other leg as a = 2r(b − r)/(b − 2r). Characterizations A triangle with sides a, b, c, semiperimeter s, area T, altitude h opposite the longest side, circumradius R, inradius r, exradii r_a, r_b, r_c tangent to a, b, c respectively, and medians m_a, m_b, m_c is a right triangle if and only if any one of the statements in the following six categories is true. Each of them is thus also a property of any right triangle. Sides and semiperimeter: a² + b² = c², (s − a)(s − b) = s(s − c), and s = 2R + r. Angles: A and B are complementary. Area: T = ab/2, and T = PA · PB, where P is the tangency point of the incircle at the longest side AB. Inradius and exradii: r = s − c, r_a = s − b, r_b = s − a, and r_c = s. Altitude and medians: The length of one median is equal to the circumradius. The shortest altitude (the one from the vertex with the biggest angle) is the geometric mean of the line segments it divides the opposite (longest) side into. This is the right triangle altitude theorem. Circumcircle and incircle: The triangle can be inscribed in a semicircle, with one side coinciding with the entirety of the diameter (Thales' theorem). The circumcenter is the midpoint of the longest side. The longest side is a diameter of the circumcircle. The circumcircle is tangent to the nine-point circle. The orthocenter lies on the circumcircle. The distance between the incenter and the orthocenter is equal to r√2. Trigonometric ratios The trigonometric functions for acute angles can be defined as ratios of the sides of a right triangle. For a given angle, a right triangle may be constructed with this angle, and the sides labeled opposite, adjacent and hypotenuse with reference to this angle according to the definitions above. These ratios of the sides do not depend on the particular right triangle chosen, but only on the given angle, since all triangles constructed this way are similar. If, for a given angle α, the opposite side, adjacent side and hypotenuse are labeled O, A and H respectively, then the trigonometric functions are sin α = O/H, cos α = A/H, tan α = O/A, sec α = H/A, cot α = A/O, and csc α = H/O. For the expression of hyperbolic functions as ratios of the sides of a right triangle, see the hyperbolic triangle of a hyperbolic sector. Special right triangles The values of the trigonometric functions can be evaluated exactly for certain angles using right triangles with special angles. These include the 30-60-90 triangle, which can be used to evaluate the trigonometric functions for any multiple of π/6, and the isosceles right triangle or 45-45-90 triangle, which can be used to evaluate the trigonometric functions for any multiple of π/4. Kepler triangle Let H, G, and A be the harmonic mean, the geometric mean, and the arithmetic mean of two positive numbers a and b with a > b. If a right triangle has legs H and G and hypotenuse A, then A/H = φ, where φ is the golden ratio. Since the sides of this right triangle are in geometric progression (in the ratio 1 : √φ : φ), this is the Kepler triangle. Thales' theorem Thales' theorem states that if BC is the diameter of a circle and A is any other point on the circle, then ABC is a right triangle with a right angle at A. The converse states that the hypotenuse of a right triangle is the diameter of its circumcircle.
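The inradius and circumradius relations, and several of the characterizations above, can likewise be verified numerically. A small Python check on the 3-4-5 triangle:

import math

a, b = 3.0, 4.0
c = math.hypot(a, b)
s = (a + b + c) / 2            # semiperimeter
T = a * b / 2                  # area

r = (a + b - c) / 2            # inradius
R = c / 2                      # circumradius

assert math.isclose(R + r, (a + b) / 2)                 # R + r = (a + b)/2
assert math.isclose(r, s - c)                           # r = s − c
assert math.isclose(s, 2 * R + r)                       # s = 2R + r
assert math.isclose((s - a) * (s - b), s * (s - c))     # (s − a)(s − b) = s(s − c)
assert math.isclose(a, 2 * r * (b - r) / (b - 2 * r))   # leg from inradius and other leg
print("inradius and circumradius identities verified")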
As a corollary, the circumcircle has its center at the midpoint of the diameter, so the median through the right-angled vertex is a radius, and the circumradius is half the length of the hypotenuse. Medians The following formulas hold for the medians of a right triangle: m_a² = b² + a²/4 and m_b² = a² + b²/4, where a and b are the legs. The median on the hypotenuse of a right triangle divides the triangle into two isosceles triangles, because the median equals one-half the hypotenuse, m_c = c/2. The medians m_a and m_b from the legs satisfy m_a² + m_b² = 5m_c² = (5/4)c². Euler line In a right triangle, the Euler line contains the median on the hypotenuse—that is, it goes through both the right-angled vertex and the midpoint of the side opposite that vertex. This is because the right triangle's orthocenter, the intersection of its altitudes, falls on the right-angled vertex while its circumcenter, the intersection of its perpendicular bisectors of sides, falls on the midpoint of the hypotenuse. Inequalities In any right triangle the diameter of the incircle is less than half the hypotenuse, and more strongly it is less than or equal to the hypotenuse times √2 − 1. In a right triangle with legs a, b and hypotenuse c, c ≥ (a + b)/√2, with equality only in the isosceles case. If the altitude from the hypotenuse is denoted h_c, then h_c ≤ c/2, with equality only in the isosceles case. Other properties If segments of lengths p and q emanating from vertex C trisect the hypotenuse into segments of length c/3, then p² + q² = 5(c/3)². The right triangle is the only triangle having two, rather than one or three, distinct inscribed squares. If h and k (with h > k) are the sides of the two distinct inscribed squares in a right triangle with hypotenuse c, then h, k, and c satisfy 1/k² = 1/c² + 1/h². These sides and the incircle radius r are related by a similar formula: 1/r = 1/h + 1/k − 1/c. The perimeter of a right triangle equals the sum of the radii of the incircle and the three excircles: a + b + c = r + r_a + r_b + r_c.
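The median and inscribed-square identities can be checked the same way. In the sketch below, the standard side-length formulas h = ab/(a + b) (square with two sides on the legs) and k = abc/(ab + c²) (square with a side on the hypotenuse) are assumed for the two inscribed squares; they are not stated in the text above:

import math

a, b = 3.0, 4.0
c = math.hypot(a, b)

# medians from the two legs and from the hypotenuse
m_a = math.sqrt(b**2 + (a / 2)**2)
m_b = math.sqrt(a**2 + (b / 2)**2)
m_c = c / 2
assert math.isclose(m_a**2 + m_b**2, 5 * m_c**2)

# sides of the two distinct inscribed squares, and the inradius
h = a * b / (a + b)
k = a * b * c / (a * b + c**2)
r = (a + b - c) / 2
assert math.isclose(1 / k**2, 1 / c**2 + 1 / h**2)
assert math.isclose(1 / r, 1 / h + 1 / k - 1 / c)
print("median and inscribed-square identities verified")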
Mathematics
Two-dimensional space
null
39562
https://en.wikipedia.org/wiki/Geochemistry
Geochemistry
Geochemistry is the science that uses the tools and principles of chemistry to explain the mechanisms behind major geological systems such as the Earth's crust and its oceans. The realm of geochemistry extends beyond the Earth, encompassing the entire Solar System, and has made important contributions to the understanding of a number of processes including mantle convection, the formation of planets and the origins of granite and basalt. It is an integrated field of chemistry and geology. History The term geochemistry was first used by the Swiss-German chemist Christian Friedrich Schönbein in 1838: "a comparative geochemistry ought to be launched, before geognosy can become geology, and before the mystery of the genesis of our planets and their inorganic matter may be revealed." However, for the rest of the century the more common term was "chemical geology", and there was little contact between geologists and chemists. Geochemistry emerged as a separate discipline after major laboratories were established, starting with the United States Geological Survey (USGS) in 1884, which began systematic surveys of the chemistry of rocks and minerals. The chief USGS chemist, Frank Wigglesworth Clarke, noted that the elements generally decrease in abundance as their atomic weights increase, and summarized the work on elemental abundance in The Data of Geochemistry. The composition of meteorites was investigated and compared to terrestrial rocks as early as 1850. In 1901, Oliver C. Farrington hypothesised that, although there were differences, the relative abundances should still be the same. This was the beginning of the field of cosmochemistry and has contributed much of what we know about the formation of the Earth and the Solar System. In the early 20th century, Max von Laue and William L. Bragg showed that X-ray scattering could be used to determine the structures of crystals. In the 1920s and 1930s, Victor Goldschmidt and associates at the University of Oslo applied these methods to many common minerals and formulated a set of rules for how elements are grouped. Goldschmidt published this work in the series Geochemische Verteilungsgesetze der Elemente [Geochemical Laws of the Distribution of Elements]. The research of Manfred Schidlowski from the 1960s to around the year 2002 was concerned with the biochemistry of the early Earth, with a focus on isotope biogeochemistry and the evidence of the earliest life processes in the Precambrian. Subfields Some subfields of geochemistry are: Aqueous geochemistry studies the role of various elements in watersheds, including copper, sulfur, mercury, and how elemental fluxes are exchanged through atmospheric-terrestrial-aquatic interactions. Biogeochemistry is the field of study focusing on the effect of life on the chemistry of the Earth. Cosmochemistry includes the analysis of the distribution of elements and their isotopes in the cosmos. Isotope geochemistry involves the determination of the relative and absolute concentrations of the elements and their isotopes in the Earth and on Earth's surface. Organic geochemistry is the study of the role of processes and compounds that are derived from living or once-living organisms. Photogeochemistry is the study of light-induced chemical reactions that occur or may occur among natural components of the Earth's surface. Regional geochemistry includes applications to environmental, hydrological and mineral exploration studies. Chemical elements The building blocks of materials are the chemical elements.
These can be identified by their atomic number Z, which is the number of protons in the nucleus. An element can have more than one value for N, the number of neutrons in the nucleus. The sum of these is the mass number, which is roughly equal to the atomic mass. Atoms with the same atomic number but different neutron numbers are called isotopes. A given isotope is identified by a letter for the element preceded by a superscript for the mass number. For example, two common isotopes of chlorine are 35Cl and 37Cl. There are about 1700 known combinations of Z and N, of which only about 260 are stable. However, most of the unstable isotopes do not occur in nature. In geochemistry, stable isotopes are used to trace chemical pathways and reactions, while radioactive isotopes are primarily used to date samples. The chemical behavior of an atom – its affinity for other elements and the type of bonds it forms – is determined by the arrangement of electrons in orbitals, particularly the outermost (valence) electrons. These arrangements are reflected in the position of elements in the periodic table. Based on position, the elements fall into the broad groups of alkali metals, alkaline earth metals, transition metals, semi-metals (also known as metalloids), halogens, noble gases, lanthanides and actinides. Another useful classification scheme for geochemistry is the Goldschmidt classification, which places the elements into four main groups. Lithophiles combine easily with oxygen. These elements, which include Na, K, Si, Al, Ti, Mg and Ca, dominate in the Earth's crust, forming silicates and other oxides. Siderophile elements (Fe, Co, Ni, Pt, Re, Os) have an affinity for iron and tend to concentrate in the core. Chalcophile elements (Cu, Ag, Zn, Pb, S) form sulfides; and atmophile elements (O, N, H and noble gases) dominate the atmosphere. Within each group, some elements are refractory, remaining stable at high temperatures, while others are volatile, evaporating more easily, so heating can separate them. Differentiation and mixing The chemical composition of the Earth and other bodies is determined by two opposing processes: differentiation and mixing. In the Earth's mantle, differentiation occurs at mid-ocean ridges through partial melting, with more refractory materials remaining at the base of the lithosphere while the remainder rises to form basalt. After an oceanic plate descends into the mantle, convection eventually mixes the two parts together. Erosion differentiates granite, separating it into clay on the ocean floor, sandstone on the edge of the continent, and dissolved minerals in ocean waters. Metamorphism and anatexis (partial melting of crustal rocks) can mix these elements together again. In the ocean, biological organisms can cause chemical differentiation, while dissolution of the organisms and their wastes can mix the materials again. Fractionation A major source of differentiation is fractionation, an unequal distribution of elements and isotopes. This can be the result of chemical reactions, phase changes, kinetic effects, or radioactivity. On the largest scale, planetary differentiation is a physical and chemical separation of a planet into chemically distinct regions. For example, the terrestrial planets formed iron-rich cores and silicate-rich mantles and crusts. In the Earth's mantle, the primary source of chemical differentiation is partial melting, particularly near mid-ocean ridges. 
This can occur when the solid is heterogeneous or a solid solution, and part of the melt is separated from the solid. The process is known as equilibrium or batch melting if the solid and melt remain in equilibrium until the moment that the melt is removed, and fractional or Rayleigh melting if it is removed continuously. Isotopic fractionation can have mass-dependent and mass-independent forms. Molecules with heavier isotopes have lower ground state energies and are therefore more stable. As a result, chemical reactions show a small isotope dependence, with heavier isotopes preferring species or compounds with a higher oxidation state; and in phase changes, heavier isotopes tend to concentrate in the heavier phases. Mass-dependent fractionation is largest in light elements because the difference in masses is a larger fraction of the total mass. Ratios between isotopes are generally compared to a standard. For example, sulfur has four stable isotopes, of which the two most common are 32S and 34S. The ratio of their concentrations, R = 34S/32S, is reported as δ34S = R/Rs − 1, where Rs is the same ratio for a standard. Because the differences are small, the ratio is multiplied by 1000 to make it parts per thousand (referred to as parts per mil). This is represented by the symbol ‰. Equilibrium Equilibrium fractionation occurs between chemicals or phases that are in equilibrium with each other. In equilibrium fractionation between phases, heavier phases prefer the heavier isotopes. For two phases A and B, the effect can be represented by the factor α(A−B) = R(A)/R(B), the ratio of the isotope ratios in the two phases. In the liquid-vapor phase transition for water, α at 20 degrees Celsius is 1.0098 for 18O and 1.084 for 2H. In general, fractionation is greater at lower temperatures: at 0 °C, the factors are 1.0117 and 1.111. Kinetic When there is no equilibrium between phases or chemical compounds, kinetic fractionation can occur. For example, at interfaces between liquid water and air, the forward reaction is enhanced if the humidity of the air is less than 100% or the water vapor is moved by a wind. Kinetic fractionation generally is enhanced compared to equilibrium fractionation and depends on factors such as reaction rate, reaction pathway and bond energy. Since lighter isotopes generally have weaker bonds, they tend to react faster and enrich the reaction products. Biological fractionation is a form of kinetic fractionation since reactions tend to be in one direction. Biological organisms prefer lighter isotopes because there is a lower energy cost in breaking their bonds. In addition to the previously mentioned factors, the environment and species of the organism can have a large effect on the fractionation. Cycles Through a variety of physical and chemical processes, chemical elements change in concentration and move around in what are called geochemical cycles. An understanding of these changes requires both detailed observation and theoretical models. Each chemical compound, element or isotope has a concentration that is a function of position and time, but it is impractical to model the full variability. Instead, in an approach borrowed from chemical engineering, geochemists average the concentration over regions of the Earth called geochemical reservoirs. The choice of reservoir depends on the problem; for example, the ocean may be a single reservoir or be split into multiple reservoirs. In a type of model called a box model, a reservoir is represented by a box with inputs and outputs. Geochemical models generally involve feedback.
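The δ-notation and fractionation factor introduced above are simple ratio arithmetic, which a short Python sketch makes concrete. The standard ratio used here is a made-up placeholder, not a recommended reference value:

# delta-notation: deviation of a sample isotope ratio from a standard, in per mil
R_STANDARD = 0.0450    # assumed 34S/32S ratio of the chosen standard (illustrative)

def delta34S_permil(r_sample, r_standard=R_STANDARD):
    return (r_sample / r_standard - 1.0) * 1000.0

def alpha(r_phase_a, r_phase_b):
    # equilibrium fractionation factor between phases A and B
    return r_phase_a / r_phase_b

print(delta34S_permil(0.0459))   # +20 per mil: sample enriched in the heavy isotope
print(alpha(0.0459, 0.0450))     # 1.02, analogous to the alpha values quoted above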
In the simplest case of a linear cycle, either the input or the output from a reservoir is proportional to the concentration. For example, salt is removed from the ocean by formation of evaporites, and given a constant rate of evaporation in evaporite basins, the rate of removal of salt should be proportional to its concentration. For a given component Q, if the input to a reservoir is a constant a and the output is kQ for some constant k, then the mass balance equation is dQ/dt = a − kQ. This expresses the fact that any change in mass must be balanced by changes in the input or output. On a time scale of 1/k, the system approaches a steady state in which Q(steady) = a/k. The residence time is defined as τ = Q(steady)/I = Q(steady)/O, where I and O are the input and output rates. In the above example, the steady-state input and output rates are both equal to a, so τ = 1/k. If the input and output rates are nonlinear functions of Q, they may still be closely balanced over time scales much greater than the residence time; otherwise, there will be large fluctuations in Q. In that case, the system is always close to a steady state, and the lowest-order expansion of the mass balance equation will lead to a linear equation like the one above. In most systems, one or both of the input and output depend on Q, resulting in feedback that tends to maintain the steady state. If an external forcing perturbs the system, it will return to the steady state on a time scale of 1/k. Abundance of elements Solar System The composition of the Solar System is similar to that of many other stars, and aside from small anomalies it can be assumed to have formed from a solar nebula that had a uniform composition, and the composition of the Sun's photosphere is similar to that of the rest of the Solar System. The composition of the photosphere is determined by fitting the absorption lines in its spectrum to models of the Sun's atmosphere. By far the largest two elements by fraction of total mass are hydrogen (74.9%) and helium (23.8%), with all the remaining elements contributing just 1.3%. There is a general trend of exponential decrease in abundance with increasing atomic number, although elements with even atomic number are more common than their odd-numbered neighbors (the Oddo–Harkins rule). Compared to the overall trend, lithium, boron and beryllium are depleted and iron is anomalously enriched. The pattern of elemental abundance is mainly due to two factors. The hydrogen, helium, and some of the lithium were formed within about 20 minutes after the Big Bang, while the rest were created in the interiors of stars. Meteorites Meteorites come in a variety of compositions, but chemical analysis can determine whether they were once in planetesimals that melted or differentiated. Chondrites are undifferentiated and have round mineral inclusions called chondrules. With ages of about 4.56 billion years, they date to the early Solar System. A particular kind, the CI chondrite, has a composition that closely matches that of the Sun's photosphere, except for depletion of some volatiles (H, He, C, N, O) and a group of elements (Li, B, Be) that are destroyed by nucleosynthesis in the Sun. Because of the latter group, CI chondrites are considered a better match for the composition of the early Solar System. Moreover, the chemical analysis of CI chondrites is more accurate than for the photosphere, so it is generally used as the source for chemical abundances, despite their rarity (only five have been recovered on Earth).
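Returning to the linear box model described at the start of this passage, a minimal Python sketch integrates dQ/dt = a − kQ and shows the reservoir relaxing to the steady state a/k on the 1/k time scale. The input rate and rate constant are arbitrary illustrative values:

# Linear box model: dQ/dt = a - k*Q, integrated with simple Euler steps.
a_in = 2.0      # constant input rate (mass per unit time), illustrative
k = 0.5         # output rate constant (1/time), so the residence time is 1/k = 2
Q = 0.0         # initial reservoir mass
dt = 0.01

for _ in range(int(20 / dt)):     # integrate for 20 time units (10 residence times)
    Q += (a_in - k * Q) * dt

print(Q, a_in / k)   # Q has relaxed to the steady state a/k = 4.0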
Giant planets The planets of the Solar System are divided into two groups: the four inner planets are the terrestrial planets (Mercury, Venus, Earth and Mars), with relatively small sizes and rocky surfaces. The four outer planets are the giant planets, which are dominated by hydrogen and helium and have lower mean densities. These can be further subdivided into the gas giants (Jupiter and Saturn) and the ice giants (Uranus and Neptune) that have large icy cores. Most of our direct information on the composition of the giant planets is from spectroscopy. By the 1930s, Jupiter was known to contain hydrogen, methane and ammonia. In the 1960s, interferometry greatly increased the resolution and sensitivity of spectral analysis, allowing the identification of a much greater collection of molecules including ethane, acetylene, water and carbon monoxide. However, Earth-based spectroscopy becomes increasingly difficult with more remote planets, since the reflected light of the Sun is much dimmer; and spectroscopic analysis of light from the planets can only be used to detect vibrations of molecules, which are in the infrared frequency range. This constrains the abundances of the elements H, C and N. Two other elements are detected: phosphorus in the gas phosphine (PH3) and germanium in germane (GeH4). The helium atom has transitions in the ultraviolet range, which is strongly absorbed by the atmospheres of the outer planets and Earth. Thus, despite its abundance, helium was only detected once spacecraft were sent to the outer planets, and then only indirectly through collision-induced absorption in hydrogen molecules. Further information on Jupiter was obtained from the Galileo probe when it was sent into the atmosphere in 1995; and the final mission of the Cassini probe in 2017 was to enter the atmosphere of Saturn. In the atmosphere of Jupiter, helium was found to be depleted by a factor of 2 compared to solar composition and neon by a factor of 10, a surprising result since the other noble gases and the elements C, N and S were enhanced by factors of 2 to 4 (oxygen was also depleted, but this was attributed to the unusually dry region that Galileo sampled). Spectroscopic methods only penetrate the atmospheres of Jupiter and Saturn to depths where the pressure is about equal to 1 bar, approximately Earth's atmospheric pressure at sea level. The Galileo probe penetrated to 22 bars. This is a small fraction of the planet, which is expected to reach pressures of over 40 Mbar. To constrain the composition in the interior, thermodynamic models are constructed using the information on temperature from infrared emission spectra and equations of state for the likely compositions. High-pressure experiments predict that hydrogen will be a metallic liquid in the interior of Jupiter and Saturn, while in Uranus and Neptune it remains in the molecular state. Estimates also depend on models for the formation of the planets. Condensation of the presolar nebula would result in a gaseous planet with the same composition as the Sun, but the planets could also have formed when a solid core captured nebular gas. In current models, the four giant planets have cores of rock and ice that are roughly the same size, but the proportion of hydrogen and helium decreases from about 300 Earth masses in Jupiter to 75 in Saturn and just a few in Uranus and Neptune.
Thus, while the gas giants are primarily composed of hydrogen and helium, the ice giants are primarily composed of heavier elements (O, C, N, S), primarily in the form of water, methane, and ammonia. The surfaces are cold enough for molecular hydrogen to be liquid, so much of each planet is likely a hydrogen ocean overlaying one of heavier compounds. Outside the core, Jupiter has a mantle of liquid metallic hydrogen and an atmosphere of molecular hydrogen and helium. Metallic hydrogen does not mix well with helium, and in Saturn the helium may form a separate layer below the metallic hydrogen. Terrestrial planets Terrestrial planets are believed to have come from the same nebular material as the giant planets, but they have lost most of the lighter elements and have different histories. Planets closer to the Sun might be expected to have a higher fraction of refractory elements, but if their later stages of formation involved collisions of large objects with orbits that sampled different parts of the Solar System, there could be little systematic dependence on position. Direct information on Mars, Venus and Mercury largely comes from spacecraft missions. Using gamma-ray spectrometers, the composition of the crust of Mars has been measured by the Mars Odyssey orbiter, the crust of Venus by some of the Venera missions, and the crust of Mercury by the MESSENGER spacecraft. Additional information on Mars comes from meteorites that have landed on Earth (the Shergottites, Nakhlites, and Chassignites, collectively known as SNC meteorites). Abundances are also constrained by the masses of the planets, while the internal distribution of elements is constrained by their moments of inertia. The planets condensed from the solar nebula, and many of the details of their composition were determined by fractionation as they cooled. The phases that condense fall into five groups. First to condense are materials rich in refractory elements such as Ca and Al. These are followed by nickel and iron, then magnesium silicates. Below about 700 K, FeS and volatile-rich metals and silicates form a fourth group, and in the fifth group FeO enters the magnesium silicates. The compositions of the planets and the Moon are chondritic, meaning that within each group the ratios between elements are the same as in carbonaceous chondrites. The estimates of planetary compositions depend on the model used. In the equilibrium condensation model, each planet was formed from a feeding zone in which the compositions of solids were determined by the temperature in that zone. Thus, Mercury formed at 1400 K, where iron remained in a pure metallic form and there was little magnesium or silicon in solid form; Venus at 900 K, so all the magnesium and silicon condensed; Earth at 600 K, so it contains FeS and silicates; and Mars at 450 K, so FeO was incorporated into magnesium silicates. The greatest problem with this theory is that volatiles would not condense, so the planets would have no atmospheres and Earth no ocean. In chondritic mixing models, the compositions of chondrites are used to estimate planetary compositions. For example, one model mixes two components, one with the composition of C1 chondrites and one with just the refractory components of C1 chondrites. In another model, the abundances of the five fractionation groups are estimated using an index element for each group.
For the most refractory group, uranium is used; iron for the second; the ratios of potassium and thallium to uranium for the next two; and the molar ratio FeO/(FeO+MgO) for the last. Using thermal and seismic models along with heat flow and density, Fe can be constrained to within 10 percent on Earth, Venus, and Mercury. U can be constrained within about 30% on Earth, but its abundance on other planets is based on "educated guesses". One difficulty with this model is that there may be significant errors in its prediction of volatile abundances because some volatiles are only partially condensed. Earth's crust The more common rock constituents are nearly all oxides; chlorides, sulfides and fluorides are the only important exceptions to this and their total amount in any rock is usually much less than 1%. By 1911, F. W. Clarke had calculated that a little more than 47% of the Earth's crust consists of oxygen. It occurs principally in combination as oxides, of which the chief are silica, alumina, iron oxides, and various carbonates (calcium carbonate, magnesium carbonate, sodium carbonate, and potassium carbonate). The silica functions principally as an acid, forming silicates, and all the commonest minerals of igneous rocks are of this nature. From a computation based on 1672 analyses of numerous kinds of rocks, Clarke arrived at the following as the average percentage composition of the Earth's crust: SiO2=59.71, Al2O3=15.41, Fe2O3=2.63, FeO=3.52, MgO=4.36, CaO=4.90, Na2O=3.55, K2O=2.80, H2O=1.52, TiO2=0.60, P2O5=0.22 (total 99.22%). All the other constituents occur only in very small quantities, usually much less than 1%. These oxides combine in a haphazard way. For example, potash (potassium oxide, K2O) and soda (sodium oxide, Na2O) combine to produce feldspars. In some cases, they may take other forms, such as nepheline, leucite, and muscovite, but in the great majority of instances they are found as feldspar. Phosphoric acid with lime (calcium oxide, CaO) forms apatite. Titanium dioxide with ferrous oxide gives rise to ilmenite. Part of the lime forms lime feldspar. Magnesia and iron oxides with silica crystallize as olivine or enstatite, or with alumina and lime form the complex ferromagnesian silicates of which the pyroxenes, amphiboles, and biotites are the chief. Any excess of silica above what is required to neutralize the bases will separate out as quartz; excess of alumina crystallizes as corundum. These must be regarded only as general tendencies. It is possible, by rock analysis, to say approximately what minerals the rock contains, but there are numerous exceptions to any rule. Mineral constitution Except in acid or siliceous igneous rocks containing greater than 66% of silica, known as felsic rocks, quartz is not abundant in igneous rocks. In basic rocks (containing 20% of silica or less), referred to as mafic rocks, it is rare for quartz to be present, as they rarely contain that much silica. If magnesium and iron are above average while silica is low, olivine may be expected; where silica is present in greater quantity, ferromagnesian minerals such as augite, hornblende, enstatite or biotite occur rather than olivine. Unless potash is high and silica relatively low, leucite will not be present, for leucite does not occur with free quartz. Nepheline, likewise, is usually found in rocks with much soda and comparatively little silica. With high alkalis, soda-bearing pyroxenes and amphiboles may be present.
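Clarke's average composition is internally consistent, which can be confirmed by summing the oxide percentages. A trivial Python sketch:

# Clarke's average crustal oxide percentages, as quoted above.
clarke = {
    "SiO2": 59.71, "Al2O3": 15.41, "Fe2O3": 2.63, "FeO": 3.52,
    "MgO": 4.36, "CaO": 4.90, "Na2O": 3.55, "K2O": 2.80,
    "H2O": 1.52, "TiO2": 0.60, "P2O5": 0.22,
}
print(round(sum(clarke.values()), 2))   # 99.22, matching the quoted total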
The lower the percentage of silica and alkalis, the greater is the prevalence of plagioclase feldspar as contrasted with soda or potash feldspar. Earth's crust is composed of 90% silicate minerals and their abundance in the Earth is as follows: plagioclase feldspar (39%), alkali feldspar (12%), quartz (12%), pyroxene (11%), amphiboles (5%), micas (5%), clay minerals (5%); the remaining silicate minerals make up another 3% of Earth's crust. Only 8% of the Earth is composed of non-silicate minerals such as carbonates, oxides, and sulfides. The other determining factor, namely the physical conditions attending consolidation, plays, on the whole, a smaller part, yet is by no means negligible. Certain minerals are practically confined to deep-seated intrusive rocks, e.g., microcline, muscovite, diallage. Leucite is very rare in plutonic masses; many minerals have special peculiarities in microscopic character according to whether they crystallized at depth or near the surface, e.g., hypersthene, orthoclase, quartz. There are some curious instances of rocks having the same chemical composition, but consisting of entirely different minerals, e.g., the hornblendite of Gran, in Norway, which contains only hornblende, has the same composition as some of the camptonites of the same locality that contain feldspar and hornblende of a different variety. In this connection, we may repeat what has been said above about the corrosion of porphyritic minerals in igneous rocks. In rhyolites and trachytes, early crystals of hornblende and biotite may be found in great numbers partially converted into augite and magnetite. Hornblende and biotite were stable under the pressures and other conditions below the surface, but unstable at higher levels. In the ground-mass of these rocks, augite is almost universally present. But the plutonic representatives of the same magma, granite and syenite, contain biotite and hornblende far more commonly than augite. Felsic, intermediate and mafic igneous rocks Those rocks that contain the most silica, and on crystallizing yield free quartz, form a group generally designated the "felsic" rocks. Those again that contain the least silica and most magnesia and iron, so that quartz is absent while olivine is usually abundant, form the "mafic" group. The "intermediate" rocks include those characterized by the general absence of both quartz and olivine. An important subdivision of these contains a very high percentage of alkalis, especially soda, and consequently has minerals such as nepheline and leucite not common in other rocks. It is often separated from the others as the "alkali" or "soda" rocks, and there is a corresponding series of mafic rocks. Lastly, a small sub-group rich in olivine and without feldspar has been called the "ultramafic" rocks. They have very low percentages of silica but much iron and magnesia. Except for these last, practically all rocks contain feldspars or feldspathoid minerals. In the acid rocks, the common feldspars are orthoclase, perthite, microcline, and oligoclase—all having much silica and alkalis. In the mafic rocks labradorite, anorthite, and bytownite prevail, being rich in lime and poor in silica, potash, and soda. Augite is the most common ferromagnesian mineral in mafic rocks, but biotite and hornblende are on the whole more frequent in felsic rocks. Rocks that contain leucite or nepheline, either partly or wholly replacing feldspar, are not included in this scheme. They are essentially of intermediate or of mafic character.
We might in consequence regard them as varieties of syenite, diorite, gabbro, etc., in which feldspathoid minerals occur, and indeed there are many transitions between syenites of ordinary type and nepheline- or leucite-syenite, and between gabbro or dolerite and theralite or essexite. But, as many minerals develop in these "alkali" rocks that are uncommon elsewhere, it is convenient in a purely formal classification like that outlined here to treat the whole assemblage as a distinct series. This classification is based essentially on the mineralogical constitution of the igneous rocks. Any chemical distinctions between the different groups, though implied, are relegated to a subordinate position. It is admittedly artificial, but it has grown up with the growth of the science and is still adopted as the basis on which more minute subdivisions are erected. The subdivisions are by no means of equal value. The syenites, for example, and the peridotites, are far less important than the granites, diorites, and gabbros. Moreover, the effusive andesites do not always correspond to the plutonic diorites but partly also to the gabbros. As the different kinds of rock, regarded as aggregates of minerals, pass gradually into one another, transitional types are very common and are often so important as to receive special names. The quartz-syenites and nordmarkites may be interposed between granite and syenite, the tonalites and adamellites between granite and diorite, the monzonites between syenite and diorite, norites and hyperites between diorite and gabbro, and so on. Trace metals in the ocean Trace metals readily form complexes with major ions in the ocean, including hydroxide, carbonate, and chloride, and their chemical speciation changes depending on whether the environment is oxidized or reduced. Benjamin (2002) defines complexes of metals with more than one type of ligand, other than water, as mixed-ligand complexes. In some cases, a ligand contains more than one donor atom, forming very strong complexes, also called chelates (the ligand is the chelator). One of the most common chelators is EDTA (ethylenediaminetetraacetic acid), which can replace six molecules of water and form strong bonds with metals that have a 2+ charge. With stronger complexation, lower activity of the free metal ion is observed. One consequence of the lower reactivity of complexed metals compared to the same concentration of free metal is that the chelation tends to stabilize metals in the aqueous solution instead of in solids. Concentrations of the trace metals cadmium, copper, molybdenum, manganese, rhenium, uranium and vanadium in sediments record the redox history of the oceans. Within aquatic environments, cadmium(II) can be in the form CdCl⁺(aq) in oxic waters or CdS(s) in a reduced environment. Thus, higher concentrations of Cd in marine sediments may indicate low redox potential conditions in the past. For copper(II), a prevalent form is CuCl⁺(aq) within oxic environments and CuS(s) and Cu₂S within reduced environments. The reduced seawater environment leads to two possible oxidation states of copper, Cu(I) and Cu(II). Molybdenum is present in the Mo(VI) oxidation state as MoO₄²⁻(aq) in oxic environments. Mo(V) and Mo(IV) are present in reduced environments in the forms MoO₂⁺(aq) and MoS₂(s). Rhenium is present in the Re(VII) oxidation state as ReO₄⁻ within oxic conditions, but is reduced to Re(IV), which may form ReO₂ or ReS₂.
Uranium is in oxidation state VI in UO₂(CO₃)₃⁴⁻(aq) and is found in the reduced form UO₂(s). Vanadium is in several forms in oxidation state V(V): HVO₄²⁻ and H₂VO₄⁻. Its reduced forms can include VO²⁺, VO(OH)₃⁻, and V(OH)₃. The relative dominance of these species depends on pH. In the water column of the ocean or deep lakes, vertical profiles of dissolved trace metals are characterized as following conservative-type, nutrient-type, or scavenged-type distributions. Across these three distributions, trace metals have different residence times and are used to varying extents by planktonic microorganisms. Trace metals with conservative-type distributions have high concentrations relative to their biological use. One example of a trace metal with a conservative-type distribution is molybdenum. It has a residence time within the oceans of around 8 × 10⁵ years and is generally present as the molybdate anion (MoO₄²⁻). Molybdenum interacts weakly with particles and displays an almost uniform vertical profile in the ocean. Relative to the abundance of molybdenum in the ocean, the amount required as a metal cofactor for enzymes in marine phytoplankton is negligible. Trace metals with nutrient-type distributions are strongly associated with the internal cycles of particulate organic matter, especially the assimilation by plankton. The lowest dissolved concentrations of these metals are at the surface of the ocean, where they are assimilated by plankton. As dissolution and decomposition occur at greater depths, concentrations of these trace metals increase. Residence times of these metals, such as zinc, are several thousand to one hundred thousand years. Finally, an example of a scavenged-type trace metal is aluminium, which has strong interactions with particles as well as a short residence time in the ocean. The residence times of scavenged-type trace metals are around 100 to 1000 years. The concentrations of these metals are highest around bottom sediments, hydrothermal vents, and rivers. For aluminium, atmospheric dust provides the greatest source of external inputs into the ocean. Iron and copper show hybrid distributions in the ocean. They are influenced by recycling and intense scavenging. Iron is a limiting nutrient in vast areas of the oceans and is found in high abundance along with manganese near hydrothermal vents. Here, many iron precipitates are found, mostly in the forms of iron sulfides and oxidized iron oxyhydroxide compounds. Concentrations of iron near hydrothermal vents can be up to one million times the concentrations found in the open ocean. Using electrochemical techniques, it is possible to show that bioactive trace metals (zinc, cobalt, cadmium, iron, and copper) are bound by organic ligands in surface seawater. These ligand complexes serve to lower the bioavailability of trace metals within the ocean. For example, copper, which may be toxic to open ocean phytoplankton and bacteria, can form organic complexes. The formation of these complexes reduces the concentrations of bioavailable inorganic complexes of copper that could be toxic to sea life at high concentrations. Unlike copper, zinc toxicity in marine phytoplankton is low and there is no advantage to increasing the organic binding of Zn²⁺. In high-nutrient, low-chlorophyll regions, iron is the limiting nutrient, with the dominant species being strong organic complexes of Fe(III).
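The residence times quoted in this passage follow from the same box-model definition used earlier: residence time equals the reservoir inventory divided by the steady-state input (or removal) flux. A minimal Python sketch; the inventories and fluxes below are round illustrative numbers, not measured ocean data:

def residence_time_years(inventory_mol, flux_mol_per_year):
    # Steady state assumed: input flux equals removal flux.
    return inventory_mol / flux_mol_per_year

# A conservative-type metal: large inventory relative to its throughput.
print(residence_time_years(1.6e14, 2.0e8))   # 8e5 yr, molybdenum-like

# A scavenged-type metal: rapidly removed onto sinking particles.
print(residence_time_years(2.0e10, 5.0e7))   # 400 yr, aluminium-like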
Physical sciences
Geochemistry
Earth science
39595
https://en.wikipedia.org/wiki/Human%20leg
Human leg
The leg is the entire lower limb of the human body, including the foot, thigh or sometimes even the hip or buttock region. The major bones of the leg are the femur (thigh bone), tibia (shin bone), and adjacent fibula. There are 30 bones in each leg. The thigh is located in between the hip and knee. The calf (rear) and shin (front), or shank, are located between the knee and ankle. Legs are used for standing, many forms of human movement, recreation such as dancing, and constitute a significant portion of a person's mass. Evolution has led to the human leg's development into a mechanism specifically adapted for efficient bipedal gait. While the capacity to walk upright is not unique to humans, other primates can only achieve this for short periods and at a great expenditure of energy. In humans, female legs generally have greater hip anteversion and tibiofemoral angles, while male legs have longer femoral and tibial lengths. In humans, each lower limb is divided into the hip, thigh, knee, leg, ankle and foot. In anatomy, arm refers to the upper arm and leg refers to the lower leg. Structure In human anatomy, the lower leg is the part of the lower limb that lies between the knee and the ankle. Anatomists restrict the term leg to this use, rather than to the entire lower limb. The thigh is between the hip and knee and makes up the rest of the lower limb. The term lower limb or lower extremity is commonly used to describe all of the leg. The leg from the knee to the ankle is called the crus. The calf is the back portion, and the tibia or shinbone together with the smaller fibula make up the shin, the front of the lower leg. Evolution has provided the human body with two distinct features: the specialization of the upper limb for visually guided manipulation and the lower limb's development into a mechanism specifically adapted for efficient bipedal gait. While the capacity to walk upright is not unique to humans, other primates can only achieve this for short periods and at a great expenditure of energy. The human adaptation to bipedalism has also affected the location of the body's center of gravity, the reorganization of internal organs, and the form and biomechanics of the trunk. In humans, the double S-shaped vertebral column acts as a great shock-absorber which shifts the weight from the trunk over the load-bearing surface of the feet. The human legs are exceptionally long and powerful as a result of their exclusive specialization for support and locomotion—in orangutans the leg length is 111% of the trunk; in chimpanzees 128%, and in humans 171%. Many of the leg's muscles are also adapted to bipedalism, most substantially the gluteal muscles, the extensors of the knee joint, and the calf muscles. Skeleton The major bones of the leg are the femur (thigh bone), tibia (shin bone), and adjacent fibula, and these are all long bones. The patella (kneecap) is the sesamoid bone in front of the knee. Most of the leg skeleton has bony prominences and margins that can be palpated, and some serve as anatomical landmarks that define the extent of the leg. These landmarks are the anterior superior iliac spine, the greater trochanter, the superior margin of the medial condyle of the tibia, and the medial malleolus. Notable exceptions to palpation are the hip joint, and the neck and body, or shaft, of the femur. Usually, the large joints of the lower limb are aligned in a straight line, which represents the mechanical longitudinal axis of the leg, the Mikulicz line.
This line stretches from the hip joint (or more precisely the head of the femur), through the knee joint (the intercondylar eminence of the tibia), and down to the center of the ankle (the ankle mortise, the fork-like grip between the medial and lateral malleoli). In the tibial shaft, the mechanical and anatomical axes coincide, but in the femoral shaft they diverge 6°, resulting in the femorotibial angle of 174° in a leg with normal axial alignment. A leg is considered straight when, with the feet brought together, both the medial malleoli of the ankle and the medial condyles of the knee are touching. Divergence from the normal femorotibial angle is called genu varum if the center of the knee joint is lateral to the mechanical axis (intercondylar distance exceeds 3 cm), and genu valgum if it is medial to the mechanical axis (intermalleolar distance exceeds 5 cm). These conditions impose unbalanced loads on the joints and stretching of either the thigh's adductors or abductors. The angle of inclination formed between the neck and shaft of the femur (collodiaphysial angle) varies with age—about 150° in the newborn, it gradually decreases to 126–128° in adults, to reach 120° in old age. Pathological changes in this angle result in abnormal posture of the leg: a small angle produces coxa vara and a large angle coxa valga; the latter is usually combined with genu varum, and coxa vara leads to genu valgum. Additionally, a line drawn through the femoral neck superimposed on a line drawn through the femoral condyles forms an angle, the torsion angle, which makes it possible for flexion movements of the hip joint to be transposed into rotary movements of the femoral head. Abnormally increased torsion angles result in a limb turned inward, and a decreased angle in a limb turned outward; both cases result in a reduced range of a person's mobility. Muscles Hip There are several ways of classifying the muscles of the hip: By location or innervation (ventral and dorsal divisions of the plexus layer); By development on the basis of their points of insertion (a posterior group in two layers and an anterior group); and By function (i.e. extensors, flexors, adductors, and abductors). Some hip muscles also act either on the knee joint or on vertebral joints. Additionally, because the areas of origin and insertion of many of these muscles are very extensive, these muscles are often involved in several very different movements. In the hip joint, lateral and medial rotation occur along the axis of the limb; extension (also called dorsiflexion or retroversion) and flexion (anteflexion or anteversion) occur along a transverse axis; and abduction and adduction occur about a sagittal axis. The anterior dorsal hip muscles are the iliopsoas, a group of two or three muscles with a shared insertion on the lesser trochanter of the femur. The psoas major originates from the last vertebra and along the lumbar spine to stretch down into the pelvis. The iliacus originates on the iliac fossa on the interior side of the pelvis. The two muscles unite to form the iliopsoas muscle, which is inserted on the lesser trochanter of the femur. The psoas minor, only present in about 50 per cent of subjects, originates above the psoas major to stretch obliquely down to its insertion on the interior side of the major muscle. The posterior dorsal hip muscles are inserted on or directly below the greater trochanter of the femur.
The tensor fasciae latae, stretching from the anterior superior iliac spine down into the iliotibial tract, presses the head of the femur into the acetabulum but also flexes, rotates medially, and abducts the hip joint. The piriformis originates on the anterior pelvic surface of the sacrum, passes through the greater sciatic foramen, and inserts on the posterior aspect of the tip of the greater trochanter. In a standing posture it is a lateral rotator, but it also assists in extending the thigh. The gluteus maximus has its origin between (and around) the iliac crest and the coccyx, from where one part radiates into the iliotibial tract and the other stretches down to the gluteal tuberosity under the greater trochanter. The gluteus maximus is primarily an extensor and lateral rotator of the hip joint, and it comes into action when climbing stairs or rising from a sitting to a standing posture. Furthermore, the part inserted into the fascia lata abducts and the part inserted into the gluteal tuberosity adducts the hip. The two deep glutei muscles, the gluteus medius and minimus, originate on the lateral side of the pelvis. The medius muscle is shaped like a cap. Its anterior fibers act as a medial rotator and flexor; the posterior fibers as a lateral rotator and extensor; and the entire muscle abducts the hip. The minimus has similar functions, and both muscles are inserted onto the greater trochanter. The ventral hip muscles function as lateral rotators and play an important role in the control of the body's balance. Because they are stronger than the medial rotators, in the normal position of the leg the apex of the foot is pointing outward to achieve better support. The obturator internus originates on the pelvis on the obturator foramen and its membrane, passes through the lesser sciatic foramen, and is inserted on the trochanteric fossa of the femur. "Bent" over the lesser sciatic notch, which acts as a fulcrum, the muscle forms, together with the gluteus maximus and quadratus femoris, the strongest lateral rotators of the hip. When sitting with the knees flexed it acts as an abductor. The obturator externus has a parallel course, with its origin located on the posterior border of the obturator foramen. It is covered by several muscles and acts as a lateral rotator and a weak adductor. The inferior and superior gemelli muscles represent marginal heads of the obturator internus and assist this muscle. These three muscles form a three-headed muscle (tricipital) known as the triceps coxae. The quadratus femoris originates at the ischial tuberosity and is inserted onto the intertrochanteric crest between the trochanters. This flattened muscle acts as a strong lateral rotator and adductor of the thigh. The adductor muscles of the thigh are innervated by the obturator nerve, with the exception of the pectineus, which receives fibers from the femoral nerve, and the adductor magnus, which receives fibers from the tibial nerve. The gracilis arises from near the pubic symphysis and is unique among the adductors in that it reaches past the knee to attach on the medial side of the shaft of the tibia, thus acting on two joints. It shares its distal insertion with the sartorius and semitendinosus, all three muscles forming the pes anserinus. It is the most medial muscle of the adductors, and with the thigh abducted its origin can be clearly seen arching under the skin. With the knee extended, it adducts the thigh and flexes the hip.
The pectineus has its origin on the iliopubic eminence laterally to the gracilis and, rectangular in shape, extends obliquely to attach immediately behind the lesser trochanter and down the pectineal line and the proximal part of the linea aspera on the femur. It is a flexor of the hip joint, and an adductor and a weak medial rotator of the thigh. The adductor brevis originates on the inferior ramus of the pubis below the gracilis and stretches obliquely below the pectineus down to the upper third of the linea aspera. Besides being an adductor, it is a lateral rotator and weak flexor of the hip joint. The adductor longus has its origin at the superior ramus of the pubis and inserts medially on the middle third of the linea aspera. Primarily an adductor, it is also responsible for some flexion. The adductor magnus has its origin just behind the longus and lies deep to it. Its wide belly divides into two parts: one is inserted into the linea aspera, and the tendon of the other reaches down to the adductor tubercle on the medial side of the femur's distal end, where it forms an intermuscular septum that separates the flexors from the extensors. The magnus is a powerful adductor, especially active when crossing the legs. Its superior part is a lateral rotator, but the inferior part acts as a medial rotator on the flexed leg when rotated outward, and also extends the hip joint. The adductor minimus is an incompletely separated subdivision of the adductor magnus. Its origin forms an anterior part of the magnus, and distally it is inserted on the linea aspera above the magnus. It acts to adduct and laterally rotate the femur.

Thigh

The muscles of the thigh can be classified into three groups according to their location: anterior and posterior muscles and the adductors (on the medial side). All the adductors except the gracilis insert on the femur and act on the hip joint, and so functionally qualify as hip muscles. The majority of the thigh muscles, the "true" thigh muscles, insert on the leg (either the tibia or the fibula) and act primarily on the knee joint. Generally, the extensors lie on the anterior side of the thigh and the flexors on the posterior side. Even though the sartorius flexes the knee, it is ontogenetically considered an extensor since its displacement is secondary. Of the anterior thigh muscles the largest are the four muscles of the quadriceps femoris: the central rectus femoris, which is surrounded by the three vasti, the vastus intermedius, medialis, and lateralis. The rectus femoris is attached to the pelvis with two tendons, while the vasti are inserted on the femur. All four muscles unite in a common tendon inserted into the patella, from where the patellar ligament extends down to the tibial tuberosity. Fibers from the medial and lateral vasti form two retinacula that stretch past the patella on either side down to the condyles of the tibia. The quadriceps is the knee extensor, but the rectus femoris additionally flexes the hip joint, and the articular muscle of the knee protects the articular capsule of the knee joint from being nipped during extension. The sartorius runs superficially and obliquely down the anterior side of the thigh, from the anterior superior iliac spine to the pes anserinus on the medial side of the knee, from where it is further extended into the crural fascia.
The sartorius acts as a flexor on both the hip and knee but, due to its oblique course, also contributes to medial rotation of the leg as one of the pes anserinus muscles (with the knee flexed), and to lateral rotation of the hip joint. There are four posterior thigh muscles. The biceps femoris has two heads: the long head has its origin on the ischial tuberosity together with the semitendinosus and acts on two joints; the short head originates from the middle third of the linea aspera on the shaft of the femur and the lateral intermuscular septum of the thigh, and acts on only one joint. The two heads unite to form the biceps, which inserts on the head of the fibula. The biceps flexes the knee joint and rotates the flexed leg laterally; it is the only lateral rotator of the knee and thus has to oppose all the medial rotators. Additionally, the long head extends the hip joint. The semitendinosus and the semimembranosus share their origin with the long head of the biceps, and both attach on the medial side of the proximal head of the tibia, together with the gracilis and sartorius, to form the pes anserinus. The semitendinosus acts on two joints: extension of the hip, flexion of the knee, and medial rotation of the leg. Distally, the semimembranosus' tendon is divided into three parts referred to as the pes anserinus profundus. Functionally, the semimembranosus is similar to the semitendinosus, and thus produces extension at the hip joint and flexion and medial rotation at the knee. Posteriorly below the knee joint, the popliteus stretches obliquely from the lateral femoral epicondyle down to the posterior surface of the tibia. The subpopliteal bursa is located deep to the muscle. The popliteus flexes the knee joint and medially rotates the leg.

Lower leg and foot

With the popliteus (see above) as the single exception, all muscles in the leg are attached to the foot and, based on location, can be classified into an anterior and a posterior group separated from each other by the tibia, the fibula, and the interosseous membrane. In turn, these two groups can be subdivided into subgroups or layers: the anterior group consists of the extensors and the peroneals, and the posterior group of a superficial and a deep layer. Functionally, the muscles of the leg are either extensors, responsible for the dorsiflexion of the foot, or flexors, responsible for plantar flexion. These muscles can also be classified by innervation: muscles supplied by the anterior subdivision of the plexus and those supplied by the posterior subdivision. The leg muscles acting on the foot are called the extrinsic foot muscles, whilst the muscles located in the foot itself are called intrinsic. Dorsiflexion (extension) and plantar flexion occur around the transverse axis running through the ankle joint from the tip of the medial malleolus to the tip of the lateral malleolus. Pronation (eversion) and supination (inversion) occur along the oblique axis of the ankle joint.

Extrinsic

Three of the anterior muscles are extensors. From its origin on the lateral surface of the tibia and the interosseus membrane, the three-sided belly of the tibialis anterior extends down below the superior and inferior extensor retinacula to its insertion on the plantar side of the medial cuneiform bone and the first metatarsal bone. In the non-weight-bearing leg, the tibialis anterior dorsiflexes the foot and lifts its medial edge. In the weight-bearing leg, it pulls the leg towards the foot.
The extensor digitorum longus has a wide origin stretching from the lateral condyle of the tibia down along the anterior side of the fibula and the interosseus membrane. At the ankle, the tendon divides into four that stretch across the foot to the dorsal aponeuroses of the last phalanges of the four lateral toes. In the non-weight-bearing leg, the muscle extends the digits and dorsiflexes the foot, and in the weight-bearing leg it acts similarly to the tibialis anterior. The extensor hallucis longus has its origin on the fibula and the interosseus membrane between the two other extensors and is, similarly to the extensor digitorum, inserted on the last phalanx of the big toe ("hallux"). The muscle dorsiflexes the hallux, and in the weight-bearing leg acts similarly to the tibialis anterior. Two muscles on the lateral side of the leg form the fibular (peroneal) group. The fibularis (peroneus) longus and fibularis (peroneus) brevis both have their origins on the fibula, and they both pass behind the lateral malleolus, where their tendons pass under the fibular retinacula. Under the foot, the fibularis longus stretches from the lateral to the medial side in a groove, thus bracing the transverse arch of the foot. The fibularis brevis is attached on the lateral side to the tuberosity of the fifth metatarsal. Together, these two fibularis muscles form the strongest pronators of the foot. The fibularis muscles are highly variable, and several variants can occasionally be present. Of the posterior muscles, three are in the superficial layer. The major plantar flexors, commonly referred to as the triceps surae, are the soleus, which arises on the proximal side of both leg bones, and the gastrocnemius, the two heads of which arise on the distal end of the femur. These muscles unite in a large terminal tendon, the Achilles tendon, which is attached to the posterior tubercle of the calcaneus. The plantaris closely follows the lateral head of the gastrocnemius. Its tendon runs between those of the soleus and gastrocnemius and is embedded in the medial end of the calcaneal tendon. In the deep layer, the tibialis posterior has its origin on the interosseus membrane and the neighbouring bone areas and runs down behind the medial malleolus. Under the foot it splits into a thick medial part attached to the navicular bone and a slightly weaker lateral part inserted on the three cuneiform bones. The muscle produces simultaneous plantar flexion and supination in the non-weight-bearing leg, and approximates the heel to the calf of the leg. The flexor hallucis longus arises distally on the fibula and on the interosseus membrane, from where its relatively thick muscle belly extends far distally. Its tendon extends beneath the flexor retinaculum to the sole of the foot and finally attaches on the base of the last phalanx of the hallux. It plantarflexes the hallux and assists in supination. The flexor digitorum longus, finally, has its origin on the upper part of the tibia. Its tendon runs to the sole of the foot, where it forks into four terminal tendons attached to the last phalanges of the four lateral toes. It crosses the tendon of the tibialis posterior distally on the tibia, and the tendon of the flexor hallucis longus in the sole. Distal to its division, the quadratus plantae radiates into it, and near the middle phalanges its tendons penetrate the tendons of the flexor digitorum brevis. In the non-weight-bearing leg, it plantar flexes the toes and foot and supinates.
In the weight-bearing leg it supports the plantar arch. (For the popliteus, see above.)

Intrinsic

The intrinsic muscles of the foot, muscles whose bellies are located in the foot proper, are either dorsal (top) or plantar (sole). On the dorsal side, two long extrinsic extensor muscles are superficial to the intrinsic muscles, and their tendons form the dorsal aponeuroses of the toes. The short intrinsic extensors and the plantar and dorsal interossei radiate into these aponeuroses. The extensor digitorum brevis and extensor hallucis brevis have a common origin on the anterior side of the calcaneus, from where their tendons extend into the dorsal aponeuroses of digits 1–4. They act to dorsiflex these digits. The plantar muscles can be subdivided into three groups associated with three regions: those of the big digit, those of the little digit, and the region between the two. All these muscles are covered by the thick and dense plantar aponeurosis which, together with two tough septa, forms the spaces of the three groups. These muscles and their fatty tissue function as cushions that transmit the weight of the body downward. As a whole, the foot is a functional entity. The abductor hallucis stretches along the medial edge of the foot, from the calcaneus to the base of the first phalanx of the first digit and the medial sesamoid bone. It is an abductor and a weak flexor, and also helps maintain the arch of the foot. Lateral to the abductor hallucis is the flexor hallucis brevis, which originates from the medial cuneiform bone and from the tendon of the tibialis posterior. The flexor hallucis brevis has a medial and a lateral head, inserted laterally to the abductor hallucis. It is an important plantar flexor which comes into prominent use in classical ballet (e.g. for pointe work). The adductor hallucis has two heads: a stronger oblique head, which arises from the cuboid and lateral cuneiform bones and the bases of the second and third metatarsals, and a transverse head, which arises from the distal ends of the third to fifth metatarsals. Both heads are inserted on the lateral sesamoid bone of the first digit. The muscle acts as a tensor of the arches of the foot, but can also adduct the first digit and plantar flex its first phalanx. The opponens digiti minimi originates from the long plantar ligament and the plantar tendinous sheath of the fibularis (peroneus) longus and is inserted on the fifth metatarsal. When present, it acts to plantar flex the fifth digit and support the plantar arch. The flexor digiti minimi arises from the region of the base of the fifth metatarsal and is inserted onto the base of the first phalanx of the fifth digit, where it is usually merged with the abductor of the fifth digit. It acts to plantar flex the last digit. The largest and longest muscle of the little toe is the abductor digiti minimi. Stretching from the lateral process of the calcaneus, with a second attachment on the base of the fifth metatarsal, to the base of the fifth digit's first phalanx, the muscle forms the lateral edge of the sole. Besides supporting the arch, it plantar flexes the little toe and also acts as an abductor. The four lumbricales have their origin on the tendons of the flexor digitorum longus, from where they extend to the medial side of the bases of the first phalanges of digits two to five. Besides reinforcing the plantar arch, they contribute to plantar flexion and move the four digits toward the big toe.
They are, in contrast to the lumbricales of the hand, rather variable: sometimes absent and sometimes more than four are present. The quadratus plantae arises with two slips from the margins of the plantar surface of the calcaneus and is inserted into the tendon(s) of the flexor digitorum longus; it is known as the "plantar head" of this latter muscle. The three plantar interossei arise with their single heads on the medial side of the third to fifth metatarsals and are inserted on the bases of the first phalanges of these digits. The two heads of the four dorsal interossei arise on two adjacent metatarsals and merge in the intermediary spaces. Their distal attachment is on the bases of the proximal phalanges of the second to fourth digits. The interossei are organized with the second digit as a longitudinal axis: the plantars act as adductors and pull digits 3–5 towards the second digit, while the dorsals act as abductors. Additionally, the interossei act as plantar flexors at the metatarsophalangeal joints. Lastly, the flexor digitorum brevis arises from underneath the calcaneus to insert its tendons on the middle phalanges of digits 2–4. Because the tendons of the flexor digitorum longus run between these tendons, the brevis is sometimes called perforatus. The tendons of these two muscles are surrounded by a tendinous sheath. The brevis acts to plantar flex the middle phalanges.

Flexibility

Flexibility can be simply defined as the available range of motion (ROM) provided by a specific joint or group of joints. For the most part, exercises that increase flexibility are performed with the intention of boosting overall muscle length, reducing the risk of injury, and potentially improving muscular performance in physical activity. Stretching the muscles after engaging in any physical activity can improve muscular strength, increase flexibility, and reduce muscle soreness. If limited movement is present within a joint, the "insufficient extensibility" of the muscle, or muscle group, could be restricting the activity of the affected joint.

Stretching

Stretching prior to strenuous physical activity has been thought to increase muscular performance by extending the soft tissue past its attainable length in order to increase the range of motion. Many physically active individuals practice these techniques as a "warm-up" in order to achieve a certain level of muscular preparation for specific exercise movements. When stretching, muscles should feel somewhat uncomfortable but not physically agonizing. Plantar flexion: One of the most popular lower leg stretches is the standing heel raise performed on a step, which mainly involves the gastrocnemius, the soleus, and the Achilles tendon. Standing heel raises allow the individual to activate the calf muscles by standing on a step on the toes and forefoot, leaving the heel hanging off the step, and plantar flexing the ankle joint by raising the heel. This exercise is easily modified by holding on to a nearby rail for balance and is generally repeated 5–10 times. Dorsiflexion: In order to stretch the anterior muscles of the lower leg, crossover shin stretches work well. This motion stretches the dorsiflexion muscles, mainly the tibialis anterior, extensor hallucis longus, and extensor digitorum longus, by slowly lengthening them as body weight is leaned onto the ankle joint, using the floor as resistance against the top of the foot.
Crossover shin stretches can vary in intensity depending on the amount of body weight applied to the ankle joint as the individual bends at the knee. This stretch is typically held for 15–30 seconds. Eversion and inversion: Stretching the eversion and inversion muscles allows for a better range of motion at the ankle joint. Seated ankle elevations and depressions stretch the fibularis (peroneus) and tibialis muscles that are associated with these movements as they lengthen. The eversion muscles are stretched when the ankle is depressed from the starting position; in like manner, the inversion muscles are stretched when the ankle joint is elevated. Throughout this seated stretch, the ankle joint should remain supported while depressed and elevated with the ipsilateral (same-side) hand in order to sustain the stretch for 10–15 seconds. This stretch increases the overall length of the eversion and inversion muscle groups and provides more flexibility in the ankle joint for a larger range of motion during activity.

Blood supply

The arteries of the leg are divided into a series of segments. In the pelvic area, at the level of the last lumbar vertebra, the abdominal aorta, a continuation of the descending aorta, splits into a pair of common iliac arteries. These immediately split into the internal and external iliac arteries, the latter of which descends along the medial border of the psoas major to exit the pelvic area through the vascular lacuna under the inguinal ligament. The artery enters the thigh as the femoral artery, which descends on the medial side of the thigh to the adductor canal. The canal passes from the anterior to the posterior side of the limb, where the artery leaves through the adductor hiatus and becomes the popliteal artery. On the back of the knee the popliteal artery runs through the popliteal fossa to the popliteal muscle, where it divides into the anterior and posterior tibial arteries. In the lower leg, the anterior tibial artery enters the extensor compartment near the upper border of the interosseus membrane to descend between the tibialis anterior and the extensor hallucis longus. Distal to the superior and inferior extensor retinacula of the foot, it becomes the dorsal artery of the foot. The posterior tibial artery forms a direct continuation of the popliteal artery; it enters the flexor compartment of the lower leg, gives rise to the fibular artery, and descends behind the medial malleolus, where it divides into the medial and lateral plantar arteries. For practical reasons the lower limb is subdivided into somewhat arbitrary regions: The regions of the hip are all located in the thigh: anteriorly, the subinguinal region is bounded by the inguinal ligament, the sartorius, and the pectineus and forms part of the femoral triangle, which extends distally to the adductor longus. Posteriorly, the gluteal region corresponds to the gluteus maximus. The anterior region of the thigh extends distally from the femoral triangle to the region of the knee and laterally to the tensor fasciae latae. The posterior region ends distally before the popliteal fossa. The anterior and posterior regions of the knee extend from the proximal regions down to the level of the tuberosity of the tibia. In the lower leg the anterior and posterior regions extend down to the malleoli. Behind the malleoli are the lateral and medial retromalleolar regions, and behind these is the region of the heel. Finally, the foot is subdivided into a dorsal region superiorly and a plantar region inferiorly.
Veins

The veins are subdivided into three systems. The deep veins return approximately 85 percent of the blood and the superficial veins approximately 15 percent. A series of perforator veins interconnect the superficial and deep systems. In the standing posture, the veins of the leg have to handle an exceptional load, as they act against gravity when they return the blood to the heart. The venous valves assist in maintaining the superficial-to-deep direction of the blood flow.

Superficial veins:
Great saphenous vein
Small saphenous vein

Deep veins:
Femoral vein, whose proximal segment is the common femoral vein
Popliteal vein
Anterior tibial vein
Posterior tibial vein
Fibular vein

Nerve supply

The sensory and motor innervation to the lower limb is supplied by the lumbosacral plexus, which is formed by the ventral rami of the lumbar and sacral spinal nerves, with additional contributions from the subcostal nerve (T12) and the coccygeal nerve (Co1). Based on distribution and topography, the lumbosacral plexus is subdivided into the lumbar plexus (T12-L4) and the sacral plexus (L5-S4); the latter is often further subdivided into the sciatic and pudendal plexuses: The lumbar plexus is formed lateral to the intervertebral foramina by the ventral rami of the first four lumbar spinal nerves (L1-L4), which all pass through the psoas major. The larger branches of the plexus exit the muscle to pass sharply downward to reach the abdominal wall and the thigh (under the inguinal ligament), with the exception of the obturator nerve, which passes through the lesser pelvis to reach the medial part of the thigh through the obturator foramen. The nerves of the lumbar plexus pass in front of the hip joint and mainly supply the anterior part of the thigh. The iliohypogastric (T12-L1) and ilioinguinal (L1) nerves emerge from the psoas major near the muscle's origin, from where they run laterally downward to pass anteriorly above the iliac crest between the transversus abdominis and the abdominal internal oblique, and then run above the inguinal ligament. Both nerves give off muscular branches to these two muscles. The iliohypogastric nerve supplies sensory branches to the skin of the lateral hip region, and its terminal branch finally pierces the aponeurosis of the abdominal external oblique above the inguinal ring to supply sensory branches to the skin there. The ilioinguinal nerve exits through the inguinal ring and supplies sensory branches to the skin above the pubic symphysis and the lateral portion of the scrotum. The genitofemoral nerve (L1, L2) leaves the psoas major below the two former nerves and immediately divides into two branches that descend along the muscle's anterior side. The sensory femoral branch supplies the skin below the inguinal ligament, while the mixed genital branch supplies the skin and muscles around the sex organ. The lateral femoral cutaneous nerve (L2, L3) leaves the psoas major laterally below the previous nerve, runs obliquely and laterally downward above the iliacus, exits the pelvic area near the iliac spine, and supplies the skin of the anterior thigh. The obturator nerve (L2-L4) passes medially behind the psoas major to exit the pelvis through the obturator canal, after which it gives off branches to the obturator externus and divides into two branches, passing behind and in front of the adductor brevis, to supply motor innervation to all the other adductor muscles. The anterior branch also supplies sensory nerves to the skin on a small area on the distal medial aspect of the thigh.
The femoral nerve (L2-L4) is the largest and longest of the nerves of the lumbar plexus. It supplies motor innervation to the iliopsoas, pectineus, sartorius, and quadriceps, and sensory branches to the anterior thigh, medial lower leg, and posterior foot. The nerves of the sacral plexus pass behind the hip joint to innervate the posterior part of the thigh, most of the lower leg, and the foot. The superior (L4-S1) and inferior gluteal nerves (L5-S2) innervate the gluteus muscles and the tensor fasciae latae. The posterior femoral cutaneous nerve (S1-S3) contributes sensory branches to the skin on the posterior thigh. The sciatic nerve (L4-S3), the largest and longest nerve in the human body, leaves the pelvis through the greater sciatic foramen. In the posterior thigh it first gives off branches to the short head of the biceps femoris and then divides into the tibial (L4-S3) and common fibular (L4-S2) nerves. The fibular nerve continues down on the medial side of the biceps femoris, winds around the fibular neck, and enters the front of the lower leg. There it divides into a deep and a superficial terminal branch. The superficial branch supplies the fibularis muscles, and the deep branch enters the extensor compartment; both branches reach into the dorsal foot. In the thigh, the tibial nerve gives off branches to the semitendinosus, the semimembranosus, the adductor magnus, and the long head of the biceps femoris. The nerve then runs straight down the back of the leg, through the popliteal fossa, to supply the ankle flexors on the back of the lower leg, and then continues down to supply all the muscles in the sole of the foot. The pudendal (S2-S4) and coccygeal (S5-Co2) nerves supply the muscles of the pelvic floor and the surrounding skin. The lumbosacral trunk is a communicating branch passing between the sacral and lumbar plexuses, containing ventral fibers from L4. The coccygeal nerve, the last spinal nerve, emerges from the sacral hiatus, unites with the ventral rami of the two last sacral nerves, and forms the coccygeal plexus.

Lower leg and foot

The lower leg and ankle need to be kept exercised and moving well, as they are the base of the whole body. The lower extremities must be strong in order to balance the weight of the rest of the body, and the gastrocnemius muscles assist much of the blood circulation by pumping venous blood back toward the heart.

Exercises

Isometric and standard

There are a number of exercises that can be done to strengthen the lower leg. For example, in order to activate the deep plantar flexors, one can sit on the floor with the hips flexed, the ankle neutral, and the knees fully extended, alternately pushing each foot against a wall or platform. This kind of exercise is beneficial as it hardly causes any fatigue. Another form of isometric exercise for the calf muscles is the seated calf raise, which can be done with or without equipment: one sits at a table with the feet flat on the ground and then plantar flexes both ankles so that the heels are raised off the floor and the calf muscles contract. An alternative movement is the heel drop exercise, with the toes propped on an elevated surface; as an opposing movement this improves the range of motion. One-legged toe raises for the gastrocnemius muscle can be performed by holding a dumbbell in one hand while using the other for balance, and then standing with one foot on a plate. The next step is to plantar flex while keeping the knee joint straight or slightly flexed. The triceps surae is contracted during this exercise.
Stabilization exercises like the BOSU ball squat are also important, especially as the ankles have to adjust to the ball's form in order to balance. Strengthening the lower leg is essential for improving overall leg stability, balance, and injury prevention. Several effective exercises target the muscles in the lower leg, including the calves, the tibialis anterior, and other supporting muscles. Calf raises are a foundational exercise: standing with the feet hip-width apart, the heels are raised off the ground and lowered back down, effectively strengthening the gastrocnemius and soleus muscles. Seated calf raises, performed while sitting with a weight on the knees, focus specifically on the soleus muscle, which is crucial for endurance activities. To target the tibialis anterior, toe raises are highly effective: standing with the feet flat, the toes are lifted off the ground while the heels stay planted, then lowered back down. For improved ankle mobility, ankle circles (rotating the ankle clockwise and counterclockwise while seated or standing) are beneficial. Similarly, heel walks, walking on the heels with the toes lifted, strengthen the tibialis anterior and enhance balance. Equipment like resistance bands can add versatility to a routine; for example, looping a band around the foot and pulling it toward the body strengthens various lower leg muscles. Jumping rope is another excellent option, enhancing calf strength, coordination, and cardiovascular fitness. Finally, box jumps, jumping onto a sturdy box or platform, develop explosive strength in the calves and lower legs. Incorporating these exercises into a workout routine can significantly improve lower leg strength and stability. A proper warm-up and a gradual increase in intensity help prevent injury; those with specific fitness goals or medical conditions should consult a fitness professional or physical therapist.

Clinical significance

Lower leg injury

Lower leg injuries are common while running or playing sports. About 10% of all injuries in athletes involve the lower extremities. The majority of athletes sprain their ankles; this is mainly caused by the increased loads on the feet when they land in a plantar flexed (foot-down) or inverted (outer-ankle) position. All areas of the foot (the forefoot, midfoot, and rearfoot) absorb various forces while running, and this can also lead to injuries. Running and various other activities can cause stress fractures, tendinitis, musculotendinous injuries, or chronic pain in the lower extremities, such as in the tibia.

Types of activities

Injuries to the quadriceps or hamstrings are caused by constant impact loads on the legs during activities such as kicking a ball; during this type of motion, 85% of the shock is absorbed by the hamstrings, which can strain those muscles.
Jumping – another risk, because if the legs do not land properly after an initial jump, there may be damage to the meniscus in the knees, a sprain of the ankle from everting or inverting the foot, or damage to the Achilles tendon and gastrocnemius if there is too much force during plantar flexion.
Weight lifting – such as an improperly performed deep squat, is also dangerous to the lower limbs, because the exercise can lead to overextension, or overstretching, of the knee ligaments and can cause pain over time.
Running – the most common activity associated with lower leg injury.
Running places constant pressure and stress on the feet, knees, and legs through gravitational force. Muscle tears in the legs or pain in various areas of the feet can be a result of poor running biomechanics.

Running

The most common injuries in running involve the knees and the feet. Various studies have focused on the initial causes of these running-related injuries and found that many factors correlate with them. Female distance runners with a history of stress fractures had higher vertical impact forces than non-injured subjects. The large forces on the lower legs were associated with gravitational forces, and these correlated with patellofemoral pain or potential knee injuries. Researchers have also found that these running-related injuries affect the feet as well, because runners with previous injuries showed more foot eversion and over-pronation while running than non-injured runners. This places greater loads and forces on the medial side of the foot, causing more stress on the tendons of the foot and ankle. Most of these running injuries are caused by overuse: running longer weekly distances for a long duration is a risk factor for injuring the lower legs.

Prevention tools

Voluntary leg stretches, such as the wall stretch, condition the hamstrings and calf muscles for various movements before vigorously working them. The environment and surroundings, such as uneven terrain, can force the feet into unnatural positions, so wearing shoes that can absorb the forces of the ground's impact and help stabilize the feet can also prevent some injuries while running. Shoes should be structured to allow friction-traction at the shoe surface, space for different foot-strike stresses, and comfortable, regular support for the arches of the feet.

Summary

The chance of damaging the lower extremities is reduced by knowledge of the activities associated with lower leg injury and by developing a correct running form, such as not over-pronating the foot or overusing the legs. Preventative measures, such as various stretches and wearing appropriate footwear, also reduce injuries.

Fracture

A fracture of the leg can be classified according to the involved bone into:
Femoral fracture (in the upper leg)
Crus fracture (in the lower leg)

Pain management

Lower leg and foot pain management is critical in reducing the progression of further injuries and uncomfortable sensations, and in limiting alterations to walking and running. Most individuals suffer from various pains in the lower leg and foot due to different factors. Muscle inflammation, strain, tenderness, swelling, and muscle tears from overuse or incorrect movement are several conditions often experienced by athletes and the general public during and after high-impact physical activities. Suggested pain-management approaches are therefore provided to reduce pain and prevent the progression of injury.

Plantar fasciitis

A plantar fasciitis foot stretch is one of the recommended methods to reduce pain caused by plantar fasciitis. To perform the plantar fascia stretch, while sitting in a chair, place the ankle on the opposite knee, hold the toes of the affected foot, and slowly pull them back. The stretch should be held for approximately ten seconds, three times per day.

Medial tibial stress syndrome (shin splints)

Several methods can be utilized to help control the pain caused by shin splints. Placing ice on the affected area before and after running will aid in reducing the pain.
In addition, wearing orthotic devices, such as a neoprene sleeve, and appropriate footwear, such as shoes with arch support, can help to eliminate the condition. Stretching and strengthening the muscles along the anterior or medial tibia through plantar flexor and dorsiflexor exercises, such as calf stretches, can also help ease the pain.

Achilles tendinopathy

There are numerous appropriate approaches to handling pain resulting from Achilles tendinitis. The primary action is to rest. Activities that do not place additional stress on the affected tendon are also recommended. Wearing orthotics or prostheses provides cushioning and prevents the affected Achilles tendon from experiencing further stress when walking and when performing therapeutic stretches. A few stretching modalities or eccentric exercises, such as toe extension and flexion and calf and heel stretches, are beneficial in lowering pain in patients with Achilles tendinopathy.

Society and culture

In Norse mythology, the race of Jotuns was born from the legs of Ymir. In Finnic mythology, the Earth was created from the shards of the egg of a goldeneye that fell from the knees of Ilmatar. While this story is not found in other Finno-Ugric mythologies, Pavel Melnikov-Pechersky noted several times that the beauty of legs is commonly mentioned in Mordvin mythology as a characteristic of both female mythological characters and real Erzyan and Mokshan women. In medieval Europe, showing the legs was one of the biggest taboos for women, especially those of high social status. In Victorian England several centuries later, legs were not to be mentioned at all (not only human legs, but even those of a table or a piano) and were referred to as "limbs" instead. Miniskirts and other clothing that reveal the legs first became popular in mid-20th-century science fiction; such clothing has since become mainstream in Western cultures, with female legs frequently being focused on in films, TV advertisements, music videos, dance shows, and various kinds of sports (e.g. ice skating or women's gymnastics). Many men who are attracted to female legs tend to regard them aesthetically almost as much as sexually, perceiving legs as more elegant, suggestive, sensual, or seductive (especially with clothing that allows the legs to be easily revealed and concealed), whereas female breasts or buttocks are viewed as much more "in your face" sexual. That said, the legs (especially the inside of the upper leg, which has the most sensitive and delicate skin) are considered one of the most sexualized parts of a woman's body, especially in Hollywood movies. Both men and women generally consider long legs attractive, which may explain the preference for tall fashion models. Men also tend to favor women with a higher leg-length-to-body ratio, but the opposite is true of women's preferences in men. Adolescent and adult women in many Western cultures often remove the hair from their legs. Toned, tanned, shaved legs are sometimes perceived as a sign of youthfulness and are often considered attractive in these cultures. Men generally do not shave their legs in any culture; however, leg-shaving is a generally accepted practice in modeling. It is also fairly common in sports where hair removal makes the athlete appreciably faster by reducing drag, the most common case being competitive swimming.
Symbiosis
Symbiosis (from Ancient Greek symbíōsis "living with, companionship", from sýn "together" and bíōsis "living") is any type of close and long-term biological interaction between two organisms of different species. The two organisms, termed symbionts, can be in a mutualistic, a commensalistic, or a parasitic relationship. In 1879, Heinrich Anton de Bary defined symbiosis as "the living together of unlike organisms". The term is sometimes used in a more restricted, mutualistic sense, where both symbionts contribute to each other's subsistence. Symbiosis can be obligatory, which means that one or both of the symbionts depend on each other for survival, or facultative (optional), when they can also subsist independently. Symbiosis is also classified by physical attachment: symbionts forming a single body live in conjunctive symbiosis, while all other arrangements are called disjunctive symbiosis. When one organism lives on the surface of another, such as head lice on humans, it is called ectosymbiosis; when one partner lives inside the tissues of another, such as Symbiodinium within coral, it is termed endosymbiosis.

Definition

The definition of symbiosis was a matter of debate for 130 years. In 1877, Albert Bernhard Frank used the term symbiosis to describe the mutualistic relationship in lichens. In 1879, the German mycologist Heinrich Anton de Bary defined it as "the living together of unlike organisms". The definition has varied among scientists, with some advocating that it should only refer to persistent mutualisms, while others thought it should apply to all persistent biological interactions (in other words, to mutualism, commensalism, and parasitism, but excluding brief interactions such as predation). In the 21st century, the latter has become the definition widely accepted by biologists. In 1949, Edward Haskell proposed an integrative approach with a classification of "co-actions", later adopted by biologists as "interactions".

Types

Obligate versus facultative

Relationships can be obligate, meaning that one or both of the symbionts entirely depend on each other for survival. For example, in lichens, which consist of fungal and photosynthetic symbionts, the fungal partners cannot live on their own. The algal or cyanobacterial symbionts in lichens, such as Trentepohlia, can generally live independently, and their part of the relationship is therefore described as facultative (optional), or non-obligate. When one of the participants in a symbiotic relationship is capable of photosynthesis, as with lichens, the relationship is called photosymbiosis.

Ectosymbiosis versus endosymbiosis

Ectosymbiosis is any symbiotic relationship in which the symbiont lives on the body surface of the host, including the inner surface of the digestive tract or the ducts of exocrine glands. Examples include ectoparasites such as lice; commensal ectosymbionts such as barnacles, which attach themselves to the jaws of baleen whales; and mutualist ectosymbionts such as cleaner fish. In contrast, endosymbiosis is any symbiotic relationship in which one symbiont lives within the tissues of the other, either within the cells or extracellularly. Examples include diverse microbiomes: rhizobia, nitrogen-fixing bacteria that live in root nodules on legume roots; actinomycetes, nitrogen-fixing bacteria such as Frankia, which live in alder root nodules; single-celled algae inside reef-building corals; and bacterial endosymbionts that provide essential nutrients to about 10–15% of insects.
In endosymbiosis, the host cell lacks some of the nutrients which the endosymbiont provides. As a result, the host favors the endosymbiont's growth processes within itself by producing specialized cells. These cells affect the genetic composition of the host in order to regulate the increasing population of the endosymbionts and to ensure that these genetic changes are passed on to the offspring via vertical transmission (heredity). As the endosymbiont adapts to the host's lifestyle, it changes dramatically. There is a drastic reduction in its genome size: many genes involved in metabolism, DNA repair, and recombination are lost, while important genes participating in DNA-to-RNA transcription, protein translation, and DNA/RNA replication are retained. The decrease in genome size is due to the loss of protein-coding genes and not to a shrinking of intergenic regions or open reading frame (ORF) size. Naturally evolving species with such reduced genomes show an increased number of noticeable differences between them, indicating changed evolutionary rates. When endosymbiotic bacteria associated with insects are passed on to the offspring strictly via vertical genetic transmission, the intracellular bacteria face many bottlenecks during the process, resulting in a decrease in effective population size compared with free-living bacteria. The inability of the endosymbiotic bacteria to reinstate their wild-type phenotype via recombination is called Muller's ratchet. Muller's ratchet, together with smaller effective population sizes, leads to an accumulation of deleterious mutations in the non-essential genes of the intracellular bacteria. This can be due to the lack of selection mechanisms prevailing in the relatively "rich" host environment.

Competition

Competition can be defined as an interaction between organisms or species in which the fitness of one is lowered by the presence of another. A limited supply of at least one resource (such as food, water, or territory) used by both usually facilitates this type of interaction, although the competition can also be over other resources.

Amensalism

Amensalism is a non-symbiotic, asymmetric interaction in which one species is harmed or killed by the other, while the other is unaffected. There are two types of amensalism: competition and antagonism (or antibiosis). Competition is where a larger or stronger organism deprives a smaller or weaker one of a resource. Antagonism occurs when one organism is damaged or killed by another through a chemical secretion. An example of competition is a sapling growing under the shadow of a mature tree. The mature tree can rob the sapling of necessary sunlight and, if the mature tree is very large, it can take up rainwater and deplete soil nutrients. Throughout the process, the mature tree is unaffected by the sapling. Indeed, if the sapling dies, the mature tree gains nutrients from the decaying sapling. An example of antagonism is Juglans nigra (black walnut), which secretes juglone, a substance that destroys many herbaceous plants within its root zone. The term amensalism is often used to describe strongly asymmetrical competitive interactions, such as between the Spanish ibex and weevils of the genus Timarcha, which feed upon the same type of shrub.
Whilst the presence of the weevil has almost no influence on food availability, the presence of ibex has an enormous detrimental effect on weevil numbers, as they consume significant quantities of plant matter and incidentally ingest the weevils upon it.

Commensalism

Commensalism describes a relationship between two living organisms where one benefits and the other is not significantly harmed or helped. It is derived from the English word commensal, used of human social interaction, which in turn derives from a medieval Latin word meaning "sharing food", formed from com- (with) and mensa (table). Commensal relationships may involve one organism using another for transportation (phoresy) or for housing (inquilinism), or they may involve one organism using something another created after its death (metabiosis). Examples of metabiosis are hermit crabs using gastropod shells to protect their bodies and spiders building their webs on plants.

Mutualism

Mutualism or interspecies reciprocal altruism is a long-term relationship between individuals of different species where both individuals benefit. Mutualistic relationships may be either obligate for both species, obligate for one but facultative for the other, or facultative for both. Many herbivores have mutualistic gut flora to help them digest plant matter, which is more difficult to digest than animal prey. This gut flora comprises cellulose-digesting protozoans or bacteria living in the herbivores' intestines. Coral reefs result from mutualism between coral organisms and various algae living inside them. Most land plants and land ecosystems rely on mutualism between the plants, which fix carbon from the air, and mycorrhizal fungi, which help in extracting water and minerals from the ground. An example of mutualism is the relationship between the ocellaris clownfish that dwells among the tentacles of Ritteri sea anemones. The territorial fish protects the anemone from anemone-eating fish, and in turn, the anemone's stinging tentacles protect the clownfish from its predators. A special mucus on the clownfish protects it from the stinging tentacles. A further example is the goby, a fish which sometimes lives together with a shrimp. The shrimp digs and cleans up a burrow in the sand in which both the shrimp and the goby live. The shrimp is almost blind, leaving it vulnerable to predators when outside its burrow. In case of danger, the goby touches the shrimp with its tail to warn it, and both quickly retreat into the burrow. Different species of gobies (Elacatinus spp.) also clean up ectoparasites on other fish, possibly another kind of mutualism. A spectacular example of obligate mutualism is the relationship between the siboglinid tube worms and symbiotic bacteria that live at hydrothermal vents and cold seeps. The worm has no digestive tract and is wholly reliant on its internal symbionts for nutrition. The bacteria oxidize either hydrogen sulfide or methane, which the host supplies to them. These worms were discovered in the late 1980s at the hydrothermal vents near the Galapagos Islands and have since been found at deep-sea hydrothermal vents and cold seeps in all of the world's oceans. Mutualism improves both organisms' competitive ability, and individuals with the symbiont will outcompete members of the same species that lack it. A facultative symbiosis is seen in encrusting bryozoans and hermit crabs.
The bryozoan colony (Acanthodesia commensale) develops a circumrotatory growth and offers the crab (Pseudopagurus granulimanus) a helicospiral-tubular extension of its living chamber that was initially situated within a gastropod shell.

Parasitism

In a parasitic relationship, the parasite benefits while the host is harmed. Parasitism takes many forms, from endoparasites that live within the host's body to ectoparasites and parasitic castrators that live on its surface, and micropredators like mosquitoes that visit intermittently. Parasitism is an extremely successful mode of life; about 40% of all animal species are parasites, and the average mammal species is host to 4 nematodes, 2 cestodes, and 2 trematodes.

Mimicry

Mimicry is a form of symbiosis in which a species adopts distinct characteristics of another species to alter its relationship dynamic with the species being mimicked, to its own advantage. Among the many types of mimicry are Batesian and Müllerian, the first involving one-sided exploitation, the second providing mutual benefit. Batesian mimicry is an exploitative three-party interaction where one species, the mimic, has evolved to mimic another, the model, to deceive a third, the dupe. In terms of signalling theory, the mimic and model have evolved to send a signal; the dupe has evolved to receive it from the model. This is to the advantage of the mimic but to the detriment of both the model, whose protective signals are effectively weakened, and of the dupe, which is deprived of edible prey. For example, a wasp is a strongly defended model, which signals with its conspicuous black and yellow coloration that it is an unprofitable prey to predators such as birds, which hunt by sight; many hoverflies are Batesian mimics of wasps, and any bird that avoids these hoverflies is a dupe. In contrast, Müllerian mimicry is mutually beneficial, as all participants are both models and mimics. For example, different species of bumblebee mimic each other, with similar warning coloration in combinations of black, white, red, and yellow, and all of them benefit from the relationship.

Cleaning symbiosis

Cleaning symbiosis is an association between individuals of two species, where one (the cleaner) removes and eats parasites and other materials from the surface of the other (the client). It is putatively mutually beneficial, but biologists have long debated whether it is mutual selfishness or simply exploitative. Cleaning symbiosis is well known among marine fish, where some small species of cleaner fish – notably wrasses, but also species in other genera – are specialized to feed almost exclusively by cleaning larger fish and other marine animals. Typically, the host species displays itself at a designated location known as a "cleaning station". Cleaner fish play an essential role in the reduction of parasitism on marine animals. Some shark species participate in cleaning symbiosis, where cleaner fish remove ectoparasites from the body of the shark. A study by Raymond Keyes addresses the atypical behavior of a few shark species when exposed to cleaner fish. In this experiment, cleaner wrasse (Labroides dimidiatus) and various shark species were placed in a tank together and observed. The different shark species exhibited different responses and behaviors around the wrasse. For example, Atlantic and Pacific lemon sharks consistently react to the wrasse in a distinctive way.
During the interaction, the shark remains passive and the wrasse swims to it. The wrasse begins to scan the shark's body, sometimes stopping to inspect specific areas; commonly it inspects the gills, labial regions, and skin. When the wrasse makes its way to the mouth of the shark, the shark often ceases breathing for up to two and a half minutes so that the fish is able to scan the mouth. The fish then passes further into the mouth to examine the gills, specifically the buccopharyngeal area, which typically holds the most parasites. When the shark begins to close its mouth, the wrasse finishes its examination and goes elsewhere. Male bull sharks exhibit slightly different behavior at cleaning stations: as the shark swims into a colony of wrasse, it drastically slows its speed to allow the cleaners to do their job. After approximately one minute, the shark returns to its normal swimming speed.

Role in evolution

Symbiosis is increasingly recognized as an important selective force behind evolution; many species have a long history of interdependent co-evolution. Although symbiosis was once discounted as an anecdotal evolutionary phenomenon, evidence is now overwhelming that obligate or facultative associations among microorganisms, and between microorganisms and multicellular hosts, had crucial consequences in many landmark events in evolution and in the generation of phenotypic diversity and complex phenotypes able to colonise new environments. Mutualistic symbiosis can sometimes evolve from parasitism or commensalism: fungi's relationship to plants, in the form of mycelium, evolved from parasitism and commensalism, and under certain conditions fungal species previously in a state of mutualism can turn parasitic on weak or dying plants. Likewise, the symbiotic relationship of clownfish and sea anemones emerged from a commensalist relationship.

Hologenome development and evolution

According to this view, evolutionary change originates in development, with variations within species being selected for or against because of the symbionts involved. The hologenome theory treats the genomes of the holobiont (the host) and its symbionts together as a whole. Microbes live everywhere in and on every multicellular organism. Many organisms rely on their symbionts in order to develop properly; this is known as co-development. In cases of co-development, the symbionts send signals to their host which determine developmental processes. Co-development is commonly seen in both arthropods and vertebrates.

Symbiogenesis

One hypothesis for the origin of the nucleus in eukaryotes (plants, animals, fungi, and protists) is that it developed from a symbiogenesis between bacteria and archaea. It is hypothesized that the symbiosis originated when ancient archaea, similar to modern methanogenic archaea, invaded and lived within bacteria similar to modern myxobacteria, eventually forming the early nucleus. This theory is analogous to the accepted theory for the origin of eukaryotic mitochondria and chloroplasts, which are thought to have developed from a similar endosymbiotic relationship between proto-eukaryotes and aerobic bacteria. Evidence for this includes the fact that mitochondria and chloroplasts divide independently of the cell, and that these organelles have their own genomes. The biologist Lynn Margulis, famous for her work on endosymbiosis, contended that symbiosis is a major driving force behind evolution.
She considered Darwin's notion of evolution, driven by competition, to be incomplete and claimed that evolution is strongly based on co-operation, interaction, and mutual dependence among organisms. According to Margulis and her son Dorion Sagan, "Life did not take over the globe by combat, but by networking."

Major examples of co-evolutionary relationships

Mycorrhiza

About 80% of vascular plants worldwide form symbiotic relationships with fungi, in particular in arbuscular mycorrhizas.

Pollination

Flowering plants and the animals that pollinate them have co-evolved. Many plants that are pollinated by insects (in entomophily), bats, or birds (in ornithophily) have highly specialized flowers modified to promote pollination by a specific pollinator that is correspondingly adapted. The first flowering plants in the fossil record had relatively simple flowers. Adaptive speciation quickly gave rise to many diverse groups of plants, and, at the same time, corresponding speciation occurred in certain insect groups. Some groups of plants developed nectar and large sticky pollen, while insects evolved more specialized morphologies to access and collect these rich food sources. In some taxa of plants and insects, the relationship has become dependent, where the plant species can only be pollinated by one species of insect.

Acacia ants and acacias

The acacia ant (Pseudomyrmex ferruginea) is an obligate plant ant that protects at least five species of "Acacia" (Vachellia) from preying insects and from other plants competing for sunlight, and the tree provides nourishment and shelter for the ant and its larvae.

Seed dispersal

Seed dispersal is the movement, spread, or transport of seeds away from the parent plant. Plants have limited mobility and rely upon a variety of dispersal vectors to transport their propagules, including both abiotic vectors such as the wind and living (biotic) vectors such as birds. In order to attract animals, these plants evolved a set of morphological characters, such as fruit colour, mass, and persistence, correlated to particular seed-dispersal agents. For example, plants may evolve conspicuous fruit colours to attract avian frugivores, and birds may learn to associate such colours with a food resource.

Rhizobia

Lichens
Dissociative identity disorder
Dissociative identity disorder (DID), previously known as multiple personality disorder (MPD), is one of multiple dissociative disorders in the DSM-5, ICD-11, and Merck Manual. It has a history of extreme controversy. Dissociative identity disorder is characterized by the presence of at least two distinct and relatively enduring personality states. The disorder is accompanied by memory gaps more severe than could be explained by ordinary forgetfulness. According to the DSM-5-TR, early childhood trauma, typically starting before 5–6 years of age, places someone at risk of developing dissociative identity disorder. Across diverse geographic regions, 90% of people diagnosed with dissociative identity disorder report experiencing multiple forms of childhood abuse, such as rape, violence, neglect, or severe bullying. Other traumatic childhood experiences that have been reported include painful medical and surgical procedures, war, terrorism, attachment disturbance, natural disaster, cult and occult abuse, loss of a loved one or loved ones, human trafficking, and dysfunctional family dynamics. There is no medication to treat DID directly, but medications can be used for comorbid disorders or targeted symptom relief – for example, antidepressants for anxiety and depression or sedative-hypnotics to improve sleep. Treatment generally involves supportive care and psychotherapy. The condition generally does not remit without treatment, and many patients have a lifelong course. Lifetime prevalence was found to be 1.1–1.5% of the general population (based on multiple epidemiological studies) and 3.9% of those admitted to psychiatric hospitals in Europe and North America. DID is diagnosed 6–9 times more often in women than in men, particularly in adult clinical settings; pediatric settings show a nearly 1:1 ratio of girls to boys. The number of recorded cases increased significantly in the latter half of the 20th century, along with the number of identities reported by those affected, but it is unclear whether increased rates of diagnosis are due to better recognition or to sociocultural factors such as mass media portrayals. The typical presenting symptoms in different regions of the world may also vary depending on culture, such as alter identities taking the form of possessing spirits, deities, ghosts, or mythical creatures in cultures where possession states are normative. Definitions Critics argue that dissociation, the term that underlies dissociative disorders, lacks a precise, empirical, and generally agreed-upon definition. Many diverse experiences have been termed dissociative, ranging from normal failures in attention to the breakdowns in memory processes characterized by the dissociative disorders. It is therefore unknown whether there is a commonality among all dissociative experiences, or whether the range of mild to severe symptoms is a result of different etiologies and biological structures. Other terms used in the literature, including personality, personality state, identity, ego state, and amnesia, also lack agreed-upon definitions. Multiple competing models exist that incorporate some non-dissociative symptoms while excluding dissociative ones. Due to the lack of consensus about terminology in the study of DID, several terms have been proposed. One is ego state (behaviors and experiences possessing permeable boundaries with other such states but united by a common sense of self).
Another is alters (each of which may have a separate autobiographical memory, independent initiative and a sense of ownership over individual behavior). Signs and symptoms The full presentation of dissociative identity disorder can emerge at any age, although symptoms typically begin by ages 5–10. DID is generally a childhood-onset disorder. According to the fifth edition [text revision] of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5-TR), symptoms of DID include "the presence of two or more distinct personality states" accompanied by the inability to recall personal information beyond what is expected through normal memory issues. Other DSM-5 symptoms include a loss of identity as related to individual distinct personality states, loss of one's subjective experience of the passage of time, and degradation of a sense of self and consciousness. In each individual, the clinical presentation varies and the level of functioning can change from severe impairment to minimal impairment. The symptoms of dissociative amnesia are subsumed under a DID diagnosis, and thus should not be diagnosed separately if DID criteria are met. Individuals with DID may experience distress from both the symptoms of DID (hearing voices, intrusive thoughts/emotions/impulses) and the consequences of the accompanying symptoms (inability to remember specific information or periods of time). The large majority of patients with DID report repeated childhood sexual and/or physical abuse, usually by caregivers, as well as organized abuse. Amnesia between identities may be asymmetrical; identities may or may not be aware of what is known by another. Individuals with DID may be reluctant to discuss symptoms due to associations with abuse, shame, and fear. DID patients may also frequently and intensely experience time disturbances, both from amnesia and derealization. Around half of people with DID have fewer than 10 identities and most have fewer than 100, although as many as 4,500 were reported by Richard Kluft in 1988. The average number of identities has increased over the past few decades, from two or three to an average of approximately 16 today. However, it is unclear whether this is due to an actual increase in identities, or simply that the psychiatric community has become more accepting of a high number of compartmentalized memory components. Comorbid disorders The psychiatric history frequently contains multiple previous diagnoses of various disorders and treatment failures. The most common presenting complaint of DID is depression (90%), which is often treatment-resistant, with headaches and non-epileptic seizures being common neurologic symptoms. Comorbid disorders include post-traumatic stress disorder (PTSD), substance use disorders, eating disorders, anxiety disorders, personality disorders, and autism spectrum disorder. 30–70% of those diagnosed with DID have a history of borderline personality disorder. Presentations of dissociation in people with schizophrenia differ from those in people with DID in that they are not rooted in trauma, and this distinction can be tested effectively, although both conditions share a high rate of auditory hallucinations in the form of voices. Other disorders found to be comorbid with DID are somatization disorders and major depressive disorder, as well as a history of past suicide attempts, compared with those without a DID diagnosis. 70–75% of DID patients attempt suicide, and multiple attempts are common.
Disturbed and altered sleep has also been suggested as having a role in dissociative disorders in general, and in DID specifically; changes in environment also strongly affect the DID patient. Individuals diagnosed with DID demonstrate the highest hypnotizability of any clinical population. Although DID has high comorbidity and its development is related to trauma, abundant empirical evidence suggests that DID is a separate condition from other disorders like PTSD. Causes General There are two competing theories on what causes dissociative identity disorder to develop. The trauma-related model suggests that complex trauma or severe adversity in childhood, also known as developmental trauma, increases the risk of someone developing dissociative identity disorder. The non-trauma-related model, also referred to as the sociogenic or fantasy model, suggests that dissociative identity disorder is developed through high fantasy-proneness or suggestibility, roleplaying, or sociocultural influences. The DSM-5-TR states that "early life trauma (e.g., neglect and physical, sexual, and emotional abuse, usually before ages 5-6 years) represents a major risk factor for dissociative identity disorder." Other risk factors reported include painful medical procedures, war, terrorism, or being trafficked in childhood. Dissociative disorders frequently occur after trauma, and the DSM-5-TR places them after the chapter on trauma- and stressor-related disorders to reflect this close relationship between complex trauma and dissociation. Traumagenic model Dissociative identity disorder is often conceptualized as "the most severe form of a childhood-onset post-traumatic stress disorder." According to many researchers, the etiology of dissociative identity is multifactorial, involving a complex interaction between developmental trauma, sociocultural influences, and biological factors. People diagnosed with dissociative identity disorder often report that they have experienced physical or sexual abuse during childhood (although the accuracy of these reports has been disputed); others report overwhelming stress, serious medical illness, or other traumatic events during childhood. They also report more historical psychological trauma than those diagnosed with any other mental illness. Severe sexual, physical, or psychological trauma in childhood has been proposed as an explanation for its development; awareness, memories, and emotions of harmful actions or events caused by the trauma are sequestered away from consciousness, and alternate parts form with differing memories, emotions, beliefs, temperament and behavior. Dissociative identity disorder is also attributed to extremes of stress and disturbances of attachment to caregivers in early life. What may result in complex post-traumatic stress disorder (PTSD) in adults may become dissociative identity disorder when occurring in children, possibly due to their greater use of imagination as a form of coping as well as lack of developmental integration in childhood. Possibly due to developmental changes and a more coherent sense of self past ages 6–9 years, the experience of extreme trauma may result in different, though also complex, dissociative symptoms, identity disturbances and trauma-related disorders. Relationships between childhood abuse, disorganized attachment, and lack of social support are thought to be common risk factors leading to dissociative identity disorder.
Although the role of a child's biological capacity to dissociate remains unclear, some evidence indicates a neurobiological impact of developmental stress. Moreover, children are not born with an integrated personality. Delinking early trauma from the etiology of dissociation has been explicitly rejected by those supporting the early trauma model. However, a 2012 review article supports the hypothesis that current or recent trauma may affect an individual's assessment of the more distant past, changing the experience of the past and resulting in dissociative states. Giesbrecht et al. have suggested there is no actual empirical evidence linking early trauma to dissociation, and instead suggest that problems with neuropsychological functioning, such as increased distractibility in response to certain emotions and contexts, account for dissociative features. A middle position hypothesizes that trauma, in some situations, alters neuronal mechanisms related to memory. Evidence is increasing that dissociative disorders are related both to a trauma history and to "specific neural mechanisms". It has also been suggested that there may be a genuine but more modest link between trauma and dissociative identity disorder, with early trauma causing increased fantasy-proneness, which may in turn render individuals more vulnerable to socio-cognitive influences surrounding the development of dissociative identity disorder. Another suggestion, made by Hart, is that there are triggers in the brain that can act as catalysts for different self-states, and that victims of trauma are more susceptible to these triggers than non-victims; these triggers are said to be related to dissociative identity disorder. Paris states that the trauma model of dissociative identity disorder increased the appeal of the diagnosis among health care providers, patients and the public, as it validated the idea that child abuse had lifelong, serious effects. Paris asserts that there is very little experimental evidence supporting the trauma-dissociation hypothesis, and no research showing that dissociation consistently links to long-term memory disruption. Neuroimaging studies have reported a consistently smaller volume of the hippocampus in DID patients, supporting the trauma model. Sociogenic model Symptoms of dissociative identity disorder may be created by therapists using techniques to "recover" memories (such as the use of hypnosis to "access" alter identities, facilitate age regression or retrieve memories) on suggestible individuals. Referred to as the non-trauma-related model, the sociocognitive model, or the fantasy model, it proposes that dissociative identity disorder is due to a person consciously or unconsciously behaving in certain ways promoted by cultural stereotypes, with unwitting therapists providing cues through improper therapeutic techniques. This model posits that the behavior is enhanced by media portrayals of dissociative identity disorder. Proponents of the non-trauma-related model note that the dissociative symptoms are rarely present before intensive therapy by specialists in the treatment of dissociative identity disorder who, through the process of eliciting, conversing with, and identifying alters, shape or possibly create the diagnosis.
While proponents note that dissociative identity disorder is accompanied by genuine suffering and distressing symptoms, and can be diagnosed reliably using the DSM criteria, they are skeptical of the trauma-related etiology suggested by proponents of the trauma-related model. Proponents of non-trauma-related dissociative identity disorder are concerned about the possibility of hypnotizability, suggestibility, frequent fantasization and mental absorption predisposing individuals to dissociation. They note that a small subset of doctors are responsible for diagnosing the majority of individuals with dissociative identity disorder. Psychologist Nicholas Spanos and others have suggested that in addition to therapy-caused cases, dissociative identity disorder may be the result of role-playing, though others disagree, pointing to a lack of incentive to manufacture or maintain separate identities and to the claimed histories of abuse. Other arguments that therapy can cause dissociative identity disorder include the lack of children diagnosed with DID, the sudden spike in rates of diagnosis after 1980 (although dissociative identity disorder was not a diagnosis until DSM-IV, published in 1994), the absence of evidence of increased rates of child abuse, the appearance of the disorder almost exclusively in individuals undergoing psychotherapy, particularly involving hypnosis, the presence of bizarre alternate identities (such as those claiming to be animals or mythological creatures) and an increase in the number of alternate identities over time (as well as an initial increase in their number as psychotherapy begins in DID-oriented therapy). These various cultural and therapeutic causes occur within a context of pre-existing psychopathology, notably borderline personality disorder, which is commonly comorbid with dissociative identity disorder. In addition, presentations can vary across cultures, such as Indian patients who only switch alters after a period of sleep – which is commonly how dissociative identity disorder is presented by the media within that country. Proponents of non-trauma-related dissociative identity disorder state that the disorder is strongly linked to (possibly suggestive) psychotherapy, often involving recovered memories (memories that the person previously had amnesia for) or false memories, and that such therapy could cause additional identities. Such memories could be used to make an allegation of child sexual abuse. There is little agreement between those who see therapy as a cause and those who see trauma as a cause. Supporters of therapy as a cause of dissociative identity disorder suggest that a small number of clinicians diagnosing a disproportionate number of cases would provide evidence for their position, though it has also been claimed that higher rates of diagnosis in specific countries like the United States may be due to greater awareness of DID. Lower rates in other countries may be due to artificially low recognition of the diagnosis. However, false memory syndrome per se is not regarded by mental health experts as a valid diagnosis and has been described as "a non-psychological term originated by a private foundation whose stated purpose is to support accused parents". Critics argue that the concept has no empirical support, and further describe the False Memory Syndrome Foundation as an advocacy group that has distorted and misrepresented memory research.
Children The rarity of DID diagnoses in children is cited as a reason to doubt the validity of the disorder, and proponents of both etiologies believe that the discovery of dissociative identity disorder in a child who had never undergone treatment would critically undermine the non-trauma-related model. Conversely, if children are found to develop dissociative identity disorder only after undergoing treatment, it would challenge the trauma-related model. To date, approximately 250 cases of dissociative identity disorder in children have been identified, though the data does not offer unequivocal support for either theory. While children have been diagnosed with dissociative identity disorder before therapy, several were presented to clinicians by parents who were themselves diagnosed with dissociative identity disorder; others were influenced by the appearance of dissociative identity disorder in popular culture or by a diagnosis of psychosis resulting from hearing voices – a symptom also found in dissociative identity disorder. No studies have looked for children with dissociative identity disorder in the general population, and the single study that attempted to look for children with dissociative identity disorder not already in therapy did so by examining siblings of those already in therapy for dissociative identity disorder. In an analysis of diagnoses of children reported in scientific publications, 44 case studies of single patients were found to be evenly distributed (i.e., each case study was reported by a different author), but in articles regarding groups of patients, four researchers were responsible for the majority of the reports. The initial theoretical description of dissociative identity disorder was that dissociative symptoms were a means of coping with extreme stress (particularly childhood sexual and physical abuse), but this belief has been challenged by the data of multiple research studies. Proponents of the trauma-related model claim the high correlation of child sexual and physical abuse reported by adults with dissociative identity disorder corroborates the link between trauma and dissociative identity disorder. However, the link between dissociative identity disorder and maltreatment has been questioned for several reasons. The studies reporting the links often rely on self-report rather than independent corroboration, and these results may be distorted by selection and referral bias. Most studies of trauma and dissociation are cross-sectional rather than longitudinal, which means researchers cannot attribute causation, and studies avoiding recall bias have failed to corroborate such a causal link. In addition, studies rarely control for the many disorders comorbid with dissociative identity disorder, or for family maladjustment (which is itself highly correlated with dissociative identity disorder). The popular association of dissociative identity disorder with childhood abuse is relatively recent, occurring only after the publication of Sybil in 1973. Most previous examples of "multiples" such as Chris Costner Sizemore, whose life was depicted in the book and film The Three Faces of Eve, reported no memory of childhood trauma.
Pathophysiology Despite research on DID including structural and functional magnetic resonance imaging, positron emission tomography, single-photon emission computed tomography, event-related potentials, and electroencephalography, no convergent neuroimaging findings have been identified regarding DID, with the exception of smaller hippocampal volume in DID patients. In addition, many of the studies that do exist were performed from an explicitly trauma-based position. There is no research to date regarding the neuroimaging and introduction of false memories in DID patients, though there is evidence of changes in visual parameters and support for amnesia between alters. DID patients also appear to show deficiencies in tests of conscious control of attention and memorization (which also showed signs of compartmentalization for implicit memory between alters but no such compartmentalization for verbal memory) and increased and persistent vigilance and startle responses to sound. DID patients may also demonstrate altered neuroanatomy. Diagnosis General The text revision of the fifth edition of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM-5-TR) diagnoses DID according to the diagnostic criteria found under code 300.14 (dissociative disorders). DID is often initially misdiagnosed because clinicians receive little training about dissociative disorders or DID, and often use standard diagnostic interviews that do not include questions about trauma, dissociation, or post-traumatic symptoms. This contributes to difficulties diagnosing the disorder and to clinician bias. DID is rarely diagnosed in children, despite the average age of appearance of the first alter being three years old. The criteria require that an individual be recurrently controlled by two or more discrete identities or personality states, accompanied by memory lapses for important information that are not caused by alcohol, drugs or medications or by other medical conditions such as complex partial seizures. In children, the symptoms must not be better explained by "imaginary playmates or other fantasy play". Diagnosis is normally performed by a clinically trained mental health professional such as a psychiatrist or psychologist through clinical evaluation, interviews with family and friends, and consideration of other ancillary material. Specially designed interviews (such as the SCID-D) and personality assessment tools may be used in the evaluation as well. Since most of the symptoms depend on self-report and are not concrete and observable, there is a degree of subjectivity in making the diagnosis. People are often disinclined to seek treatment, especially since their symptoms may not be taken seriously; thus dissociative disorders have been referred to as "diseases of hiddenness". The diagnosis has been criticized by supporters of the therapy-as-cause, or sociocognitive, hypothesis, who believe it is a culture-bound and often health-care-induced condition. The social cues involved in diagnosis may be instrumental in shaping patient behavior or attribution, such that symptoms within one context may be linked to DID, while in another time or place the diagnosis could have been something other than DID.
Other researchers disagree and argue that the existence of the condition and its inclusion in the DSM is supported by multiple lines of reliable evidence, with diagnostic criteria allowing it to be clearly discriminated from conditions it is often mistaken for (schizophrenia, borderline personality disorder, and seizure disorder). That a large proportion of cases are diagnosed by specific health care providers, and that symptoms have been created in nonclinical research subjects given appropriate cueing, has been suggested as evidence that a small number of clinicians who specialize in DID are responsible for the creation of alters through therapy. The condition is greatly under-diagnosed due to skepticism and lack of awareness among mental health professionals without education and training in dissociation. Differential diagnoses Patients with DID are diagnosed with 5–7 comorbid disorders on average – a higher rate than in other mental conditions. Misdiagnoses (e.g. schizophrenia, bipolar disorder) are very common among patients with DID. Due to overlapping symptoms, the differential diagnosis includes schizophrenia, normal and rapid-cycling bipolar disorder, epilepsy, borderline personality disorder, and autism spectrum disorder. Delusions or auditory hallucinations can be mistaken for speech by other personalities. Persistence and consistency of identities and behavior, amnesia, measures of dissociation or hypnotizability, and reports from family members or other associates indicating a history of such changes can help distinguish DID from other conditions. A diagnosis of DID takes precedence over any other dissociative disorder. Distinguishing DID from malingering is a concern when financial or legal gains are an issue, and factitious disorder may also be considered if the person has a history of help- or attention-seeking. Individuals who state that their symptoms are due to external spirits or entities entering their bodies are generally diagnosed with dissociative disorder not otherwise specified rather than DID, due to the lack of identities or personality states. Most individuals who enter an emergency department and are unaware of their names are generally in a psychotic state. Although auditory hallucinations are common in DID, complex visual hallucinations may also occur. Those with DID generally have adequate reality testing. People with DID may have more positive and fewer negative Schneiderian symptoms of schizophrenia. One study described how a trained physician differentiated DID from schizophrenia using the Positive and Negative Syndrome Scale during an interview. Subjects with dissociative identity disorder reported more delusions, hallucinations, and persecution, which has been attributed predominantly to an overabundance of dopamine. Furthermore, subjects with dissociative identity disorder reported fewer negative symptoms (score ≤ 3), attributed to limited dopamine transmission. People with DID perceive any voices heard as coming from inside their heads, whereas people with schizophrenia perceive them as external. In addition, individuals with psychosis are much less susceptible to hypnosis than those with DID. Difficulties in differential diagnosis are increased in children. However, the frequency of childhood-onset schizophrenia (COS) is 1 in 40,000, which is exceptionally rare.
DID must be distinguished from, or determined if comorbid with, a variety of disorders including mood disorders, psychosis, anxiety disorders, PTSD, personality disorders, cognitive disorders, neurological disorders, epilepsy, somatoform disorder, factitious disorder, malingering, other dissociative disorders, and trance states. An additional aspect of the controversy of diagnosis is that there are many forms of dissociation and memory lapses, which can be common in both stressful and nonstressful situations and can be attributed to much less controversial diagnoses. A relationship between DID and borderline personality disorder has been posited, with various clinicians noting overlap between symptoms and behaviors, and it has been suggested that some cases of DID may arise "from a substrate of borderline traits". Reviews of DID patients and their medical records concluded that 30–70% of those diagnosed with DID have comorbid borderline personality disorder. The DSM-5 elaborates on cultural background as an influence for some presentations of DID. Controversy and criticism of validity DID is among the most controversial of the dissociative disorders and among the most controversial disorders found in the DSM-5-TR. The primary dispute is between those who believe DID is caused by traumatic stresses that split the mind into multiple identities, each with a separate set of memories, and those who believe that the symptoms of DID are produced artificially by certain psychotherapeutic practices or by patients playing a role they believe appropriate for a person with DID. The debate between the two positions is characterized by intense disagreement. Research has been characterized by poor methodology. Psychiatrist Joel Paris notes that the idea that a personality is capable of splitting into independent alters is an unproven assertion at odds with research in cognitive psychology. Some, such as Russell A. Powell and Travis L. Gee, believe that DID is iatrogenic, i.e., caused by health care itself, with the symptoms of DID created by therapists via hypnosis. This implies that those with DID are especially susceptible to manipulation by hypnosis and suggestion. The iatrogenic model also sometimes states that treatment for DID is harmful. According to Brand, Loewenstein, and Spiegel, "claims that DID treatment is harmful are based on anecdotal cases, opinion pieces, reports of damage that are not substantiated in the scientific literature, misrepresentations of the data, and misunderstandings about DID treatment and the phenomenology of DID". Their position is supported by the fact that only 5%–10% of people receiving treatment initially worsen in their symptoms. Psychiatrists August Piper and Harold Merskey have challenged the trauma hypothesis, arguing that correlation does not imply causation – the fact that people with DID report childhood trauma does not mean trauma causes DID – and point to the rarity of the diagnosis before 1980 as well as a failure to find DID as an outcome in longitudinal studies of traumatized children. They assert that DID cannot be accurately diagnosed because of vague and unclear diagnostic criteria in the DSM and undefined concepts such as "personality state" and "identities", and question the evidence for childhood abuse beyond self-reports, the lack of a defined threshold of abuse sufficient to induce DID, and the extremely small number of cases of children diagnosed with DID despite an average age of appearance of the first alter of three years.
Psychiatrist Colin Ross disagrees with Piper and Merskey's conclusion that DID cannot be accurately diagnosed, pointing to internal consistency between different structured dissociative disorder interviews (including the Dissociative Experiences Scale, Dissociative Disorders Interview Schedule, and Structured Clinical Interview for Dissociative Disorders) in the internal validity range of widely accepted mental illnesses such as schizophrenia and major depressive disorder. In his opinion, Piper and Merskey set the standard of proof higher than it is for other diagnoses. He also asserts that Piper and Merskey have cherry-picked data and not incorporated all relevant scientific literature, such as independent corroborating evidence of trauma. A paper published in 2022 in the journal Comprehensive Psychiatry described how prolonged social media use, especially on video-sharing platforms such as TikTok, has exposed young people – largely adolescent females, a core user group of TikTok – to a growing number of content creators making videos about their self-diagnosed disorders. "An increasing number of reports from the US, UK, Germany, Canada, and Australia have noted an increase in functional tic-like behaviors prior to and during the COVID-19 pandemic, coinciding with an increase in social media content related to[…]dissociative identity disorder." The paper concluded that there "is an urgent need for focused empirical research investigation into this concerning phenomenon that is related to the broader research and discourse examining social media influences on mental health". Treatment Treatment under the sociogenic model Proponents of the sociogenic model dispute that dissociative identity disorder is an organic response to trauma, instead viewing it as a socially constructed behavior and a form of psychic contagion. McHugh says that the disorder is "sustained in large part by the attention that doctors tend to pay to it. This means that it is not a mental condition that derives from nature, such as panic anxiety or major depression. It exists in the world as an artificial product of human devising". According to McHugh, at Johns Hopkins Hospital doctors should ignore the displays from "alters" and instead focus on treatment for the other psychiatric problems patients present with. This method of treatment is reportedly successful: McHugh believes that proponents of dissociative identity disorder inadvertently worsen patients' condition by validating the behavior and providing attention. Treatments under the trauma model The International Society for the Study of Trauma and Dissociation, a proponent of the trauma model, has published guidelines for phase-oriented treatment in adults as well as children and adolescents that are widely and successfully used in the field of DID treatment. The guidelines state that "a desirable treatment outcome is a workable form of integration or harmony among alternate identities". Some experts in treating people with DID use the techniques recommended in the 2011 treatment guidelines. The empirical research includes the longitudinal TOP DD treatment study, which found that patients showed "statistically significant reductions in dissociation, PTSD, distress, depression, hospitalisations, suicide attempts, self-harm, dangerous behaviours, drug use, and physical pain" and improved overall functioning. Treatment effects have been studied for over thirty years, with some studies having a follow-up of ten years.
Adult and child treatment guidelines exist that suggest a three-phased approach. Common treatment methods include an eclectic mix of psychotherapy techniques, including cognitive behavioral therapy (CBT), insight-oriented therapy, dialectical behavioral therapy (DBT), hypnotherapy, and eye movement desensitization and reprocessing (EMDR). Because of its dangers, hypnosis should be considered carefully when choosing both a treatment and a practitioner. For example, hypnosis can sometimes lead to false memories and false accusations of abuse by family, loved ones, friends, providers, and community members. Those who suffer from dissociative identity disorder have commonly been subject to actual abuse (sexual, physical, emotional, financial) by therapists, family, friends, loved ones, and community members. Some behavior therapists initially use behavioral treatments such as responding only to a single identity, and then use more traditional therapy once a consistent response is established. Brief treatment due to managed care may be difficult, as individuals diagnosed with DID may have unusual difficulty trusting a therapist and may take a prolonged period to form a comfortable therapeutic alliance. Regular contact (at least weekly) is recommended, and treatment generally lasts years – not weeks or months. Sleep hygiene has been suggested as a treatment option, but has not been tested. In general, there are very few clinical trials on the treatment of DID, none of which were randomized controlled trials. Therapy for DID is generally phase-oriented. Different alters may appear based on their greater ability to deal with specific situational stresses or threats. While some patients may initially present with a large number of alters, this number may reduce during treatment – though it is considered important for the therapist to become familiar with at least the more prominent personality states, as the "host" personality may not be the "true" identity of the patient. Specific alters may react negatively to therapy, fearing the therapist's goal is to eliminate the alter (particularly those associated with illegal or violent activities). A more realistic and appropriate goal of treatment is to integrate adaptive responses to abuse, injury, or other threats into the overall personality structure. The first phase of therapy focuses on symptoms and relieving the distressing aspects of the condition, ensuring the safety of the individual, improving the patient's capacity to form and maintain healthy relationships, and improving general daily life functioning. Comorbid disorders such as substance use disorder and eating disorders are addressed in this phase of treatment. The second phase focuses on stepwise exposure to traumatic memories and prevention of re-dissociation. The final phase focuses on reconnecting the identities of disparate alters into a single functioning identity with all its memories and experiences intact. Prognosis Little is known about the prognosis of untreated DID. Symptoms commonly wax and wane over time. Patients with mainly dissociative and post-traumatic symptoms face a better prognosis than those with comorbid disorders or those still in contact with abusers, and the latter groups often face a lengthier and more difficult treatment course. Suicidal ideation, suicide attempts, and self-harm are common in the DID population.
Duration of treatment can vary depending on patient goals, which can range from merely improving inter-alter communication and cooperation, to reducing inter-alter amnesia, to integration and fusion of all alters, but this last goal generally takes years with trained and experienced psychotherapists. Epidemiology General According to the American Psychiatric Association, the 12-month prevalence of DID among adults in the US is 1.5%, with similar prevalence between women and men. Population prevalence estimates vary widely, with some estimates of DID in inpatient settings suggesting 1–9.6%. Reported rates in the community vary from 1% to 3%, with higher rates among psychiatric patients. As of 2017, evidence suggested a prevalence of DID of 2–5% among psychiatric inpatients, 2–3% among outpatients, and 1% in the general population, with rates reported as high as 16.4% for teenagers in psychiatric outpatient services. Dissociative disorders in general have a lifetime prevalence of 9.1%–18.3% in the general population. As of 2012, DID was diagnosed 5 to 9 times more commonly in women than in men during young adulthood, although this may have been due to selection bias, as men meeting DID diagnostic criteria were suspected to end up in the criminal justice system rather than hospitals. In children, rates among girls and boys are approximately the same (5:4). DID diagnoses are extremely rare in children; much of the research on childhood DID occurred in the 1980s and 1990s and does not address ongoing controversies surrounding the diagnosis. DID occurs more commonly in young adults and declines in prevalence with age. There is poor awareness of DID in clinical settings and among the general public. Poor clinical education (or the lack thereof) regarding DID and other dissociative disorders has been described in the literature: "most clinicians have been taught (or assume) that DID is a rare disorder with a florid, dramatic presentation." Symptoms in patients are often not easily visible, which complicates diagnosis. DID has a high correlation with, and has been described as a form of, complex post-traumatic stress disorder. There is a significant overlap of symptoms between borderline personality disorder and DID, although the symptoms are understood to originate from different underlying causes. Historical prevalence Rates of diagnosed DID increased in the late 20th century, peaking at approximately 40,000 cases by the end of the century, up from fewer than 200 diagnoses before 1970. Initially, DID and the other dissociative disorders were considered the rarest of psychological conditions, diagnosed in fewer than 100 people by 1944, with only one further case reported in the next two decades. In the late 1970s and '80s, the number of diagnoses rose sharply. An estimate from the 1980s placed the incidence at 0.01%. Accompanying this rise was an increase in the number of alters, rising from only the primary and one alter personality in most cases to an average of 13 in the mid-1980s (the increases in both the number of cases and the number of alters within each case are factors in professional skepticism regarding the diagnosis). Others explain the increase as being due to the use of inappropriate therapeutic techniques in highly suggestible individuals, though this is itself controversial, while proponents of DID claim the increase in incidence is due to increased recognition of, and ability to identify, the disorder.
Figures from psychiatric populations (inpatients and outpatients) show a wide diversity from different countries. A 1996 essay suggested three possible causes for the sudden increase of DID diagnoses, of which the author suspected the first to be the most likely: The result of therapist suggestions to suggestible people, much as Charcot's hysterics acted in accordance with his expectations. Psychiatrists' past failure to recognize dissociation being redressed by new training and knowledge. Dissociative phenomena actually increasing, but the increase representing only a new form of an old and protean entity: "hysteria". Dissociative disorders were excluded from the Epidemiological Catchment Area Project. North America DID continues to be considered a controversial diagnosis; it was once regarded as a phenomenon confined to North America, though studies have since been published from DID populations across six continents. Although research has appeared discussing the appearance of DID in other countries and cultures, and the condition has been described in non-English-speaking nations and non-Western cultures, these reports all occur in English-language journals authored by international researchers who cite Western scientific literature. Etzel Cardeña and David Gleaves believed the greater representation of DID in North America was the result of increased awareness of and training about the condition. History Early references In the 19th century, "dédoublement", or "double consciousness", the historical precursor to DID, was frequently described as a state of sleepwalking, with scholars hypothesizing that the patients were switching between a normal consciousness and a "somnambulistic state". An intense interest in spiritualism, parapsychology and hypnosis continued throughout the 19th and early 20th centuries, running in parallel with John Locke's view that there was an association of ideas requiring the coexistence of feelings with awareness of the feelings. Hypnosis, which was pioneered in the late 18th century by Franz Mesmer and Armand-Marie Jacques de Chastenet, Marquis de Puységur, challenged Locke's association of ideas. Hypnotists reported what they thought were second personalities emerging during hypnosis and wondered how two minds could coexist. In the 19th century, there were a number of reported cases of multiple personalities, which Rieber estimated to be close to 100. Epilepsy was seen as a factor in some cases, and discussion of this connection continues into the present era. By the late 19th century, there was a general acceptance that emotionally traumatic experiences could cause long-term disorders which might display a variety of symptoms. These conversion disorders were found to occur in even the most resilient individuals, but with profound effect in someone with emotional instability like Louis Vivet (1863–?), who had a traumatic experience as a 17-year-old when he encountered a viper. Vivet was the subject of countless medical papers and became the most studied case of dissociation in the 19th century. Between 1880 and 1920, various international medical conferences devoted time to sessions on dissociation. It was in this climate that Jean-Martin Charcot introduced his ideas of the impact of nervous shocks as a cause for a variety of neurological conditions. One of Charcot's students, Pierre Janet, took these ideas and went on to develop his own theories of dissociation.
One of the first individuals diagnosed with multiple personalities to be scientifically studied was Clara Norton Fowler, under the pseudonym Christine Beauchamp; American neurologist Morton Prince studied Fowler between 1898 and 1904, describing her case study in his 1906 monograph, Dissociation of a Personality. 20th century In the early 20th century, interest in dissociation and multiple personalities waned for several reasons. After Charcot's death in 1893, many of his so-called hysterical patients were exposed as frauds, and Janet's association with Charcot tarnished his theories of dissociation. Sigmund Freud recanted his earlier emphasis on dissociation and childhood trauma. In 1908, Eugen Bleuler introduced the term "schizophrenia" to represent a revised disease concept for Emil Kraepelin's dementia praecox. Whereas Kraepelin's natural disease entity was anchored in the metaphor of progressive deterioration and mental weakness and defect, Bleuler offered a reinterpretation based on dissociation or "splitting" (Spaltung) and widely broadened the inclusion criteria for the diagnosis. A review of the Index medicus from 1903 through 1978 showed a dramatic decline in the number of reports of multiple personality after the diagnosis of schizophrenia became popular, especially in the United States. The rise of the broad diagnostic category of dementia praecox has also been posited as a factor in the disappearance of "hysteria" (the usual diagnostic designation for cases of multiple personalities) by 1910. A number of factors helped create a broad climate of skepticism and disbelief; paralleling the increased suspicion of DID was the decline of interest in dissociation as a laboratory and clinical phenomenon. Starting in about 1927, there was a large increase in the number of reported cases of schizophrenia, which was matched by an equally large decrease in the number of multiple personality reports. With the rise of a uniquely American reframing of dementia praecox/schizophrenia as a functional disorder or "reaction" to psychobiological stressors – a theory first put forth by Adolf Meyer in 1906 – many trauma-induced conditions associated with dissociation, including "shell shock" or "war neuroses" during World War I, were subsumed under these diagnoses. It was argued in the 1980s that DID patients were often misdiagnosed with schizophrenia. The public, however, was exposed to psychological ideas which caught its interest. Mary Shelley's Frankenstein, Robert Louis Stevenson's Strange Case of Dr Jekyll and Mr Hyde, and many short stories by Edgar Allan Poe had a formidable impact. The Three Faces of Eve In 1957, with the publication of the bestselling book The Three Faces of Eve by psychiatrists Corbett H. Thigpen and Hervey M. Cleckley, based on a case study of their patient Chris Costner Sizemore, and the subsequent popular movie of the same name, the American public's interest in multiple personality was revived. More cases of dissociative identity disorder were diagnosed in the following years. The cause of the sudden increase of cases is unclear; it may be attributable to increased awareness, which revealed previously undiagnosed cases, or new cases may have been induced by the influence of the media on the behavior of individuals and the judgement of therapists. During the 1970s, an initially small number of clinicians campaigned to have it considered a legitimate diagnosis. History in the DSM The DSM-II used the term hysterical neurosis, dissociative type.
It described the possible occurrence of alterations in the patient's state of consciousness or identity, and included the symptoms of "amnesia, somnambulism, fugue, and multiple personality". The DSM-III grouped the diagnosis with the other four major dissociative disorders using the term "multiple personality disorder". The DSM-IV made more changes to DID than to any other dissociative disorder, and renamed it DID. The name was changed for two reasons. First, the change emphasizes that the main problem is not a multitude of personalities, but rather a lack of a single, unified identity, with an emphasis on "the identities as centers of information processing". Second, the term "personality" is used to refer to "characteristic patterns of thoughts, feelings, moods, and behaviors of the whole individual", while for a patient with DID, the switches between identities and behavior patterns are the personality. It is for this reason that the DSM-IV-TR referred to "distinct identities or personality states" instead of personalities. The diagnostic criteria also changed to indicate that while the patient may name and personalize alters, they lack independent, objective existence. The changes also included the addition of amnesia as a symptom, which was not included in the DSM-III-R because, despite being a core symptom of the condition, patients may experience "amnesia for the amnesia" and fail to report it. Amnesia was reinstated when it became clear that the risk of false-negative diagnoses was low because amnesia is central to DID. The ICD-10 places the diagnosis in the category of "dissociative disorders", within the subcategory of "other dissociative (conversion) disorders", but continues to list the condition as multiple personality disorder. The DSM-IV-TR criteria for DID have been criticized for failing to capture the clinical complexity of DID, lacking usefulness in diagnosing individuals with DID (for instance, by focusing on the two least frequent and most subtle symptoms of DID), producing a high rate of false negatives and an excessive number of DDNOS diagnoses, excluding possession (seen as a cross-cultural form of DID), and including only two "core" symptoms of DID (amnesia and self-alteration) while failing to discuss hallucinations, trance-like states, somatoform symptoms, depersonalization, and derealization. Arguments have been made for allowing diagnosis through the presence of some, but not all, of the characteristics of DID rather than the current exclusive focus on the two least common and least noticeable features. The DSM-IV-TR criteria have also been criticized for being tautological, using imprecise and undefined language, and for the use of instruments that give a false sense of validity and empirical certainty to the diagnosis. The DSM-5 updated the definition of DID in 2013. Between 1968 and 1980, the term used for dissociative identity disorder was "hysterical neurosis, dissociative type". The APA wrote in the second edition of the DSM: "In the dissociative type, alterations may occur in the patient's state of consciousness or in his identity, to produce such symptoms as amnesia, somnambulism, fugue, and multiple personality." The number of cases sharply increased in the late 1970s and throughout the '80s, and the first scholarly monographs on the topic appeared in 1986. Book and film Sybil In 1974, the highly influential book Sybil was published, and later made into a miniseries in 1976 and again in 2007.
Describing what Robert Rieber called "the third most famous of multiple personality cases," it presented a detailed discussion of the problems of treatment of "Sybil Isabel Dorsett", a pseudonym for Shirley Ardell Mason. Though the book and subsequent films helped popularize the diagnosis and trigger an epidemic of the diagnosis, later analysis of the case suggested different interpretations, ranging from Mason's problems having been caused by the therapeutic methods and sodium pentothal injections used by her psychiatrist, C. B. Wilbur, to an inadvertent hoax due in part to the lucrative publishing rights, though this conclusion has itself been challenged. David Spiegel, a Stanford psychiatrist whose father treated Shirley Ardell Mason on occasion, says that his father described Mason as "a brilliant hysteric. He felt that Wilbur tended to pressure her to exaggerate on the dissociation she already had." As media attention on DID increased, so too did the controversy surrounding the diagnosis. Re-classifications The DSM-III intentionally omitted the terms "hysteria" and "neurosis", renaming the relevant conditions Dissociative Disorders, which included Multiple Personality Disorder, and also added Post-traumatic Stress Disorder to the Anxiety Disorders section. In the opinion of McGill University psychiatrist Joel Paris, this inadvertently legitimized them by forcing textbooks, which mimicked the structure of the DSM, to include a separate chapter on them, and resulted in an increase in the diagnosis of dissociative conditions. Once a rarely occurring spontaneous phenomenon (research in 1944 showed only 76 cases), the diagnosis became "an artifact of bad (or naïve) psychotherapy" as patients capable of dissociating were accidentally encouraged to express their symptoms by "overly fascinated" therapists. In a 1986 book chapter (later reprinted in another volume), philosopher of science Ian Hacking focused on multiple personality disorder as an example of "making up people" through the untoward effects on individuals of the "dynamic nominalism" in medicine and psychiatry. With the invention of new terms, entire new categories of "natural kinds" of people are assumed to be created, and those thus diagnosed respond by re-creating their identity in light of the new cultural, medical, scientific, political and moral expectations. Hacking argued that the process of "making up people" is historically contingent, hence it is not surprising to find the rise, fall, and resurrection of such categories over time. Hacking revisited his concept of "making up people" in 2006. "Interpersonality amnesia" was removed as a diagnostic feature in the 1987 revision of the DSM-III (the DSM-III-R), which may have contributed to the increasing frequency of the diagnosis. There were 200 reported cases of DID as of 1980, and 20,000 from 1980 to 1990. Joan Acocella reports that 40,000 cases were diagnosed from 1985 to 1995. Scientific publications regarding DID peaked in the mid-1990s and then rapidly declined. There were several contributing factors to the rapid decline of reports of multiple personality disorder/dissociative identity disorder. One was the discontinuation in December 1997 of Dissociation: Progress in the Dissociative Disorders, the journal of The International Society for the Study of Multiple Personality and Dissociation.
The society and its journal were perceived as uncritical sources of legitimacy for the extraordinary claims of the existence of intergenerational satanic cults responsible for a "hidden holocaust" of Satanic ritual abuse that was linked to the rise of MPD reports. In an effort to distance itself from the increasing skepticism regarding the clinical validity of MPD, the organization dropped "multiple personality" from its official name in 1993, and then in 1997 changed its name again to the International Society for the Study of Trauma and Dissociation. In 1994, the fourth edition of the DSM replaced the criteria again and changed the name of the condition from "multiple personality disorder" to the current "dissociative identity disorder" to emphasize the importance of changes to consciousness and identity rather than personality. The inclusion of interpersonality amnesia helped to distinguish DID from dissociative disorder not otherwise specified (DDNOS), but the condition retains an inherent subjectivity due to difficulty in defining terms such as personality, identity, ego state, and even amnesia. The ICD-10 classified DID as a "Dissociative [conversion] disorder" and used the name "multiple personality disorder" with the classification number F44.81. In the ICD-11, the World Health Organization has classified DID under the name "dissociative identity disorder" (code 6B64), and most cases formerly diagnosed as DDNOS are classified as "partial dissociative identity disorder" (code 6B65). 21st century A 2006 study compared scholarly research and publications on DID and dissociative amnesia to those on other mental health conditions, such as anorexia nervosa, alcohol use disorder, and schizophrenia, from 1984 to 2003. The results were found to be unusually distributed, with a very low level of publications in the 1980s followed by a significant rise that peaked in the mid-1990s and then declined rapidly in the following decade. Compared to 25 other diagnoses, the mid-1990s "bubble" of publications regarding DID was unique. In the opinion of the authors of the review, the publication results suggest a period of "fashion" that waned, and that the two diagnoses "[did] not command widespread scientific acceptance." Society and culture In popular culture The public's long fascination with DID has led to a number of different books and films, with many representations described as increasing stigma by perpetuating the myth that people with mental illness are usually dangerous. Movies about DID have also been criticized for poor representation of both DID and its treatment, including "greatly overrepresenting" the role of hypnosis in therapy, showing a significantly smaller number of personalities than many people with DID have, and misrepresenting people with DID as having theatrical and blatant switches between very conspicuous and different alters. Some movies parody and ridicule DID – for instance, Me, Myself & Irene, which incorrectly states that DID is schizophrenia. In some stories, DID is used as a plot device, e.g. in Fight Club, and in whodunnit stories like Secret Window. United States of Tara was reported to be the first US television series with DID as its focus, and a professional commentary on each episode was published by the International Society for the Study of Trauma and Dissociation.
A number of people with DID have publicly spoken about their experiences, including comedian and talk show host Roseanne Barr, who interviewed Truddi Chase, author of When Rabbit Howls; Chris Costner Sizemore, the subject of The Three Faces of Eve; Cameron West, author of First Person Plural: My life as a multiple; and NFL player Herschel Walker, author of Breaking Free: My life with dissociative identity disorder. In The Three Faces of Eve (1957), hypnosis is used to identify a childhood trauma, which then allows Sizemore's three identities to fuse into one. However, Sizemore's own books I'm Eve and A Mind of My Own revealed that this did not last; she later attempted suicide, sought further treatment, and actually had twenty-two personalities rather than three. Sizemore re-entered therapy and by 1974 had achieved a lasting recovery. Voices Within: The Lives of Truddi Chase portrays many of the 92 personalities Chase described in her book When Rabbit Howls, and is unusual in breaking away from the typical ending of integration into one identity. Frankie & Alice (2010), starring Halle Berry, was based on a real person with DID. In popular culture dissociative identity disorder is often confused with schizophrenia, and some movies advertised as representing dissociative identity disorder may be more representative of psychosis or schizophrenia, for example Psycho (1960). In his book The C.I.A. Doctors: Human Rights Violations by American Psychiatrists, psychiatrist Colin A. Ross states that, based on documents obtained through freedom of information legislation, a psychiatrist linked to Project MKULTRA reported being able to deliberately induce dissociative identity disorder using a variety of highly aversive and abusive techniques, creating a Manchurian Candidate for military purposes. In the USA Network television production Mr. Robot, the protagonist Elliot Alderson was created drawing on the anecdotal DID experiences of friends of the show's creator. Sam Esmail said he consulted with a psychologist who "concretized" the character's mental health conditions, especially his plurality. In M. Night Shyamalan's Unbreakable superhero film series (specifically, the films Split and Glass), Kevin Wendell Crumb is diagnosed with DID, and some of his personalities are depicted as having superhuman powers. Experts and advocates say the films are a negative portrayal of DID and promote the stigmatization of the disorder. The 1993 Malayalam film Manichitrathazhu featured a central character, played by Shobana, affected by DID, referred to in the film as multiple personality disorder. The Bollywood remake of Manichitrathazhu, Bhool Bhulaiyaa (2007), featured Vidya Balan as Avni, an individual diagnosed with DID who identified herself with Manjulika, a deceased dancer in a royal palace. Although the movie was criticised for being insensitive, it was also lauded for spreading awareness about DID and contributing towards removing stigma around mental health. Shankar Shanmugam's 2005 Tamil film Anniyan centers on a disillusioned everyman whose frustration at what he sees as increasing social apathy and public negligence leads to a split personality that attempts to improve the system. Its central character, Ambi, is an idealistic, law-abiding lawyer who has DID and develops two other identities: a suave fashion model named Remo and a murderous vigilante named Anniyan.
In the 1997 Japanese role-playing game Final Fantasy VII, the protagonist Cloud Strife is shown to have an identity disorder involving false memories as a result of post-traumatic stress disorder (PTSD). Sharon Packer has identified Cloud as having DID. In Marvel Comics, the character of Moon Knight is shown to have DID. In the TV series Moon Knight, based on the comic book character, protagonist Marc Spector is depicted with DID; the website for the National Alliance on Mental Illness appears in the series' end credits. Another Marvel character, Legion, has DID in the comics, although he has schizophrenia in the TV show version, highlighting the general public's confusion between the two distinct and separate disorders. Legal issues People with dissociative identity disorder may be involved in legal cases as a witness, defendant, or as the victim/injured party. Claims of DID have been used only rarely to argue criminal insanity in court. In the United States, dissociative identity disorder has previously been found to meet the Frye test as a generally accepted medical condition, as well as the newer Daubert standard. Within legal circles, DID has been described as one of the most disputed psychiatric diagnoses, and careful forensic assessment is needed. For defendants whose defense states that they have a diagnosis of DID, courts must distinguish between those who genuinely have DID and those who are malingering to avoid responsibility. Expert witnesses are typically used to assess defendants in such cases, although some standard assessments, such as the MMPI-2, were not developed for people with a trauma history, and their validity scales may incorrectly suggest malingering. The Multiscale Dissociation Inventory (Briere, 2002) is well suited to assessing malingering and dissociative disorders, unlike the self-report Dissociative Experiences Scale. In DID cases, evidence about altered states of consciousness, actions of alter identities, and episodes of amnesia may be excluded from a court if it is not considered relevant, although different countries and regions have different laws. A diagnosis of DID may be used to support a defense of not guilty by reason of insanity, though this very rarely succeeds, or of diminished capacity, which may reduce the length of a sentence. DID may also affect competency to stand trial. A not guilty by reason of insanity plea was first used successfully in an American court in 1978, in the State of Ohio v. Milligan case. However, a DID diagnosis is not automatically considered a justification for an insanity verdict, and since Milligan the few cases claiming insanity have largely been unsuccessful. Bennett G. Braun was an American psychiatrist known for his promotion of the concept of multiple personality disorder (now called "dissociative identity disorder") and involvement in promoting the "Satanic Panic", a moral panic around a discredited conspiracy theory that led to thousands of people being wrongfully medically treated or investigated for nonexistent crimes. Online subculture A DID community exists on social media, including YouTube, Reddit, Discord, and TikTok. In those contexts, the experience of dissociative identities has been called multiplicity. High-profile members of this community have been criticized for faking their condition for views, or for portraying the disorder lightheartedly.
Psychologist Naomi Torres-Mackie, head of research at The Mental Health Coalition, has stated: "All of a sudden, all of my adolescent patients think that they have this, and they don't ... Folks start attaching clinical meaning and feeling like, 'I should be diagnosed with this. I need medication for this', when actually a lot of these experiences are normative and don't need to be pathologized or treated." However, online communities for DID can be beneficial. Aubrey Bakker, a neuropsychologist, says, "Dissociative Identity Disorder can be extremely isolating... and [p]articipating in TikTok's DID community can remedy some of that isolation." Advocacy Some advocates consider DID to be a form of neurodiversity, leading to advocacy for recognizing 'positive plurality' and the use of plural pronouns such as "we" and "our". Advocates also challenge the necessity of integration. Timothy Baynes argues that alters have full moral status, just as their host does. He states that, as integration may entail the (involuntary) elimination of such an entity, forcing people to undergo it as a therapeutic treatment is "seriously immoral". In 2011, author Lance Lippert wrote that most people with DID downplayed or minimized their symptoms rather than seeking fame, often due to shame or fear of the effects of stigma. Therapists may discourage people with DID from media work due to concerns that they may feel exploited or traumatized, for example as a result of demonstrating switching between personality states to entertain others. A DID (or Dissociative Identities) Awareness Day takes place on March 5 annually, and a multicolored awareness ribbon is used, based on the idea of a "crazy quilt".
Kepler's Supernova
SN 1604, also known as Kepler's Supernova, Kepler's Nova or Kepler's Star, was a Type Ia supernova that occurred in the Milky Way, in the constellation Ophiuchus. Appearing in 1604, it is the most recent supernova in the Milky Way galaxy to have been unquestionably observed by the naked eye, occurring no farther than 6 kiloparsecs (20,000 light-years) from Earth. Before the adoption of the current naming system for supernovae, it was named for Johannes Kepler, the German astronomer who described it in De Stella Nova. Observation Visible to the naked eye, Kepler's Star was brighter at its peak than any other star in the night sky, with an apparent magnitude of −2.5. It was visible during the day for over three weeks. Records of its sighting exist in European, Chinese, Korean, and Arabic sources. It was the second supernova to be observed in a generation (after SN 1572, seen by Tycho Brahe in Cassiopeia). No further supernovae have since been observed with certainty in the Milky Way, though many others outside the galaxy have been seen since S Andromedae in 1885. SN 1987A in the Large Magellanic Cloud was visible to the naked eye at night. Evidence exists for two Milky Way supernovae whose electromagnetic radiation would have reached Earth around 1680 and 1870: Cassiopeia A and G1.9+0.3, respectively. There is no historical record of either having been detected in those years, likely because absorption by interstellar dust obscured their visible light. The remnant of Kepler's supernova is considered to be one of the prototypical objects of its kind and is still an object of much study in astronomy. Controversies Astronomers of the time (including Kepler) were concerned with observing the conjunction of Mars and Jupiter, which they regarded as auspicious and linked to the Star of Bethlehem. However, cloudy weather prevented Kepler from making observations. Wilhelm Fabry, Michael Maestlin, and Helisaeus Roeslin were able to make observations on 9 October, but did not record the supernova. The first recorded observation in Europe was by Lodovico delle Colombe in northern Italy on 9 October 1604. Kepler was only able to begin his observations on 17 October while working at the imperial court in Prague for Emperor Rudolf II. The supernova was subsequently named after him, even though he was not its first observer, because his observations tracked the object for an entire year. These observations were described in his book De Stella nova in pede Serpentarii ("On the new star in Ophiuchus's foot", Prague 1606). Delle Colombe–Galileo controversy In 1606, delle Colombe published the Discourse of Lodovico delle Colombe, in which he argued that the "Star Newly Appeared in October 1604 is neither a Comet nor a New Star" and defended an Aristotelian view of cosmology, after Galileo Galilei had used the occasion of the supernova to challenge the Aristotelian system. Galileo's claims have been described as follows: Galileo explained the meaning and relevance of parallax, reported that the nova displayed none, and concluded, as a certainty, that it lay beyond the moon. Here he might have stopped, having dispatched his single arrow. Instead he sketched a theory that ruined the Aristotelian cosmos: the nova very probably consisted of a large quantity of airy material that issued from the earth and shone by reflected sunlight, like Aristotelian comets. Unlike them, however, it could rise beyond the moon.
It not only brought change to the heavens, but did so provocatively by importing corruptible earthy elements into the pure quintessence. That raised heaven-shattering possibilities. The interstellar space might be filled with something similar to our atmosphere, as in the physics of the Stoics, to which Tycho had referred in his lengthy account of the nova of 1572. And if the material of the firmament resembled that of bodies here below, a theory of motion built on experience with objects within our reach might apply also to the celestial regions. "But I am not so bold as to think that things cannot take place differently from the way I have specified." Kepler–Roeslin controversy In his De Stella Nova (1606), Kepler criticized Roeslin concerning this supernova. Kepler argued that in his astrological prognostications, Roeslin had picked out just two comets, the Great Comets of 1556 and 1580. Roeslin responded in 1609 that this was indeed what he had done. When Kepler replied later that year, he simply observed that by including a broader range of data Roeslin could have made a better argument. Supernova remnant The remnant of SN 1604, Kepler's Star, was discovered in 1941 at the Mount Wilson Observatory as a dim nebula with an apparent magnitude of 19. Only filaments can be seen in visible light, but it is a strong radio and X-ray source. Its diameter is 4 arcminutes. Distance estimates place it between 3 and more than 7 kiloparsecs (10,000 to 23,000 light-years), with the consensus value, as of 2021, falling within this range. The available evidence supports a type Ia supernova as the source of this remnant, that is, the result of a carbon-oxygen white dwarf interacting with a companion star. The integrated X-ray spectrum resembles that of Tycho's supernova remnant, a type Ia supernova. The abundance of oxygen relative to iron in the remnant of SN 1604 is roughly solar, whereas a core-collapse scenario should produce a much higher abundance of oxygen. No surviving central source has been identified, which is consistent with a type Ia event. Finally, the historical records for the brightness of this event are consistent with type Ia supernovae. There is evidence for interaction of the supernova ejecta with circumstellar matter from the progenitor star, which is unexpected for type Ia but has been observed in some cases. A bow shock located to the north of this system is believed to have been created by mass loss prior to the explosion. Observations of the remnant are consistent with the interaction of a supernova with a bipolar planetary nebula that belonged to one or both of the progenitor stars. The remnant is not spherically symmetric, which is likely due to the progenitor being a runaway star system. The bow shock is caused by the interaction of the advancing stellar wind with the interstellar medium. A remnant rich in nitrogen and silicon indicates that the system consisted of a white dwarf with an evolved companion that had likely already passed through the asymptotic giant branch stage.
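As a rough illustration of how these distance estimates relate to the recorded peak brightness, the sketch below applies the standard distance modulus M = m − 5·log10(d / 10 pc) to the peak apparent magnitude of −2.5 quoted in the Observation section. This is a back-of-the-envelope check, not a calculation from the sources above, and it deliberately ignores interstellar extinction along this line of sight.

```python
import math

def absolute_magnitude(apparent_mag: float, distance_pc: float) -> float:
    """Distance modulus: M = m - 5*log10(d / 10 pc), extinction ignored."""
    return apparent_mag - 5 * math.log10(distance_pc / 10.0)

# Observed peak apparent magnitude of SN 1604, over the 3-7 kpc range quoted above.
m_peak = -2.5
for d_kpc in (3, 5, 7):
    M = absolute_magnitude(m_peak, d_kpc * 1000)
    print(f"d = {d_kpc} kpc -> M ~ {M:+.1f}")
# d = 3 kpc -> M ~ -14.9
# d = 5 kpc -> M ~ -16.0
# d = 7 kpc -> M ~ -16.7
```

Uncorrected for dust, these values fall a few magnitudes short of the canonical type Ia peak of roughly −19; the gap is what one would expect from significant extinction toward Ophiuchus, which is consistent with the historical brightness records being judged compatible with a type Ia event.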
Supernova remnant
A supernova remnant (SNR) is the structure resulting from the explosion of a star in a supernova. The supernova remnant is bounded by an expanding shock wave, and consists of ejected material expanding from the explosion, and the interstellar material it sweeps up and shocks along the way. There are two common routes to a supernova: either a massive star may run out of fuel, ceasing to generate fusion energy in its core, and collapsing inward under the force of its own gravity to form a neutron star or a black hole; or a white dwarf star may accrete material from a companion star until it reaches a critical mass and undergoes a thermonuclear explosion. In either case, the resulting supernova explosion expels much or all of the stellar material with velocities as high as 10% of the speed of light (approximately 30,000 km/s), and a strong shock wave forms ahead of the ejecta, heating the swept-up plasma to temperatures of many millions of kelvins. The shock continuously slows down over time as it sweeps up the ambient medium, but it can expand over hundreds or thousands of years and over tens of parsecs before its speed falls below the local sound speed. One of the best observed young supernova remnants was formed by SN 1987A, a supernova in the Large Magellanic Cloud that was observed in February 1987. Other well-known supernova remnants include the Crab Nebula; Tycho, the remnant of SN 1572, named after Tycho Brahe, who recorded the brightness of its original explosion; and Kepler, the remnant of SN 1604, named after Johannes Kepler. The youngest known remnant in the Milky Way is G1.9+0.3, discovered near the Galactic Center. Stages An SNR passes through the following stages as it expands: free expansion of the ejecta, until they sweep up their own weight in circumstellar or interstellar medium, which can last tens to a few hundred years depending on the density of the surrounding gas; sweeping up of a shell of shocked circumstellar and interstellar gas, which begins the Sedov-Taylor phase, well modeled by a self-similar analytic solution (see blast wave), during which strong X-ray emission traces the strong shock waves and hot shocked gas; cooling of the shell, forming a thin (< 1 pc), dense (1 to 100 million atoms per cubic metre) shell surrounding the hot (few million kelvin) interior, the pressure-driven snowplow phase, in which the shell can be clearly seen in optical emission from recombining ionized hydrogen and ionized oxygen atoms; cooling of the interior, during which the dense shell continues to expand from its own momentum, a stage best seen in the radio emission from neutral hydrogen atoms; and finally merging with the surrounding interstellar medium, when the supernova remnant slows to the speed of the random velocities in the surrounding medium, after roughly 30,000 years, and merges into the general turbulent flow, contributing its remaining kinetic energy to the turbulence. Types of supernova remnant There are three types of supernova remnant: shell-like, such as Cassiopeia A; composite, in which a shell contains a central pulsar wind nebula, such as G11.2-0.3 or G21.5-0.9; and mixed-morphology (also called "thermal composite") remnants, in which central thermal X-ray emission is seen, enclosed by a radio shell. The thermal X-rays are primarily from swept-up interstellar material, rather than supernova ejecta. Examples of this class include the SNRs W28 and W44.
(Confusingly, W44 additionally contains a pulsar and pulsar wind nebula, so it is simultaneously both a "classic" composite and a thermal composite.) Remnants that could only be created by significantly higher ejection energies than a standard supernova are called hypernova remnants, after the high-energy hypernova explosion that is assumed to have created them. Origin of cosmic rays Supernova remnants are considered the major source of galactic cosmic rays. The connection between cosmic rays and supernovae was first suggested by Walter Baade and Fritz Zwicky in 1934. Vitaly Ginzburg and Sergei Syrovatskii remarked in 1964 that if the efficiency of cosmic ray acceleration in supernova remnants is about 10 percent, the cosmic ray losses of the Milky Way are compensated. This hypothesis is supported by a specific mechanism called "shock wave acceleration" based on Enrico Fermi's ideas, which is still under development. In 1949, Fermi proposed a model for the acceleration of cosmic rays through particle collisions with magnetic clouds in the interstellar medium. This process, known as the "second-order Fermi mechanism", increases particle energy during head-on collisions, resulting in a steady gain in energy. A later model of Fermi acceleration involves a powerful shock front moving through space; particles that repeatedly cross the front of the shock can gain significant increases in energy. This became known as the "first-order Fermi mechanism". Supernova remnants can provide the energetic shock fronts required to generate ultra-high-energy cosmic rays. X-ray observations of the SN 1006 remnant have shown synchrotron emission consistent with its being a source of cosmic rays. However, for energies higher than about 10^18 eV a different mechanism is required, as supernova remnants cannot provide sufficient energy. It is still unclear whether supernova remnants accelerate cosmic rays up to PeV energies. The future Cherenkov Telescope Array (CTA) will help to answer this question.
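To make the Sedov-Taylor phase mentioned in the stages above concrete, the following minimal sketch evaluates the standard self-similar blast-wave scaling R(t) ≈ 1.15 (E t² / ρ)^(1/5). The fiducial values (explosion energy of 10^51 erg, ambient density of one hydrogen atom per cm³) are conventional assumptions chosen for illustration, not figures taken from this article.

```python
M_P = 1.67e-24    # proton mass, grams
YEAR_S = 3.156e7  # seconds per year
PC_CM = 3.086e18  # centimetres per parsec

def sedov_radius_pc(E_erg: float, n_cm3: float, t_yr: float) -> float:
    """Sedov-Taylor blast-wave radius R = 1.15 * (E * t^2 / rho)^(1/5)
    for energy E expanding into a uniform medium of number density n."""
    rho = n_cm3 * M_P                       # ambient mass density, g/cm^3
    t = t_yr * YEAR_S                       # age in seconds
    r_cm = 1.15 * (E_erg * t * t / rho) ** 0.2
    return r_cm / PC_CM

# Assumed fiducial supernova: E = 1e51 erg into n = 1 H atom/cm^3.
for age in (400, 1000, 10000):
    print(f"t = {age} yr -> R ~ {sedov_radius_pc(1e51, 1.0, age):.1f} pc")
# Prints roughly 3.7 pc, 5.3 pc and 13.4 pc respectively.
```

The few-parsec radii at ages of centuries to millennia, growing as t^(2/5), match the qualitative description above of remnants expanding over tens of parsecs in thousands of years before merging with the interstellar medium.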
Dengue fever
Dengue fever is a mosquito-borne disease caused by dengue virus, prevalent in tropical and subtropical areas. It is frequently asymptomatic; if symptoms appear, they typically begin 3 to 14 days after infection. These may include a high fever, headache, vomiting, muscle and joint pains, and characteristic skin itching and skin rash. Recovery generally takes two to seven days. In a small proportion of cases, the disease develops into severe dengue (previously known as dengue hemorrhagic fever or dengue shock syndrome) with bleeding, low levels of blood platelets, blood plasma leakage, and dangerously low blood pressure. Dengue virus has four confirmed serotypes; infection with one type usually gives lifelong immunity to that type, but only short-term immunity to the others. Subsequent infection with a different type increases the risk of severe complications. The symptoms of dengue resemble those of many other diseases, including malaria, influenza, and Zika. Blood tests are available to confirm the diagnosis, including tests detecting viral RNA or antibodies to the virus. There is no specific treatment for dengue fever. In mild cases, treatment is focused on relieving symptoms such as pain. Severe cases of dengue require hospitalisation; treatment of acute dengue is supportive and includes giving fluid either by mouth or intravenously. Dengue is spread by several species of female mosquitoes of the Aedes genus, principally Aedes aegypti. Infection can be prevented by mosquito elimination and the prevention of bites. Two types of dengue vaccine have been approved and are commercially available. Dengvaxia became available in 2016, but it is only recommended to prevent re-infection in individuals who have been previously infected. The second vaccine, Qdenga, became available in 2022 and is suitable for adults, adolescents and children from four years of age. The earliest descriptions of a dengue outbreak date from 1779; its viral cause and spread were understood by the early 20th century. Already endemic in more than one hundred countries, dengue is spreading from tropical and subtropical regions to the Iberian Peninsula and the southern states of the US, partly attributed to climate change. It is classified as a neglected tropical disease. During 2023, more than 5 million infections were reported, with more than 5,000 dengue-related deaths. As most cases are asymptomatic or mild, the actual numbers of dengue cases and deaths are under-reported. Signs and symptoms Typically, people infected with dengue virus are asymptomatic (80%) or have only mild symptoms such as an uncomplicated fever. Others have more severe illness (5%), and in a small proportion it is life-threatening. The incubation period (time between exposure and onset of symptoms) ranges from 3 to 14 days, but most often it is 4 to 7 days. The characteristic symptoms of mild dengue are sudden-onset fever, headache (typically located behind the eyes), muscle and joint pains, nausea, vomiting, swollen glands and a rash. If this progresses to severe dengue, the symptoms are severe abdominal pain, persistent vomiting, rapid breathing, bleeding gums or nose, fatigue, restlessness, blood in vomit or stool, extreme thirst, pale and cold skin, and feelings of weakness. Clinical course The course of infection is divided into three phases: febrile, critical, and recovery. The febrile phase involves high fever (up to 40 °C/104 °F) and is associated with generalized pain and a headache; this usually lasts two to seven days.
There may also be nausea, vomiting, a rash, and pains in the muscles and joints. Most people recover within a week or so. In about 5% of cases, symptoms worsen and can become life-threatening. This is called severe dengue (formerly called dengue hemorrhagic fever or dengue shock syndrome). Severe dengue can lead to shock, internal bleeding, organ failure and even death. Warning signs include severe stomach pain, vomiting, difficulty breathing, and blood in the nose, gums, vomit or stools. During this period, there is leakage of plasma from the blood vessels, together with a reduction in platelets. This may result in fluid accumulation in the chest and abdominal cavity, as well as depletion of fluid from the circulation and decreased blood supply to vital organs. The recovery phase usually lasts two to three days. The improvement is often striking, and can be accompanied by severe itching and a slow heart rate. Complications and sequelae Complications following severe dengue include fatigue, somnolence, headache, concentration impairment and memory impairment. A pregnant woman who develops dengue is at higher risk of miscarriage, low birth weight, and premature birth. Children and older individuals are at higher risk of developing complications from dengue fever than other age groups; young children typically suffer from more intense symptoms. Concurrent infections with tropical diseases such as Zika virus can worsen symptoms and make recovery more challenging. Cause Virology Dengue virus (DENV) is an RNA virus of the family Flaviviridae, genus Flavivirus. Other members of the same genus include yellow fever virus, West Nile virus, and Zika virus. The dengue virus genome (genetic material) contains about 11,000 nucleotide bases, which code for the three structural protein molecules (C, prM and E) that form the virus particle and seven other protein molecules that are required for replication of the virus. There are four confirmed strains of the virus, called serotypes, referred to as DENV-1, DENV-2, DENV-3 and DENV-4. The distinctions between the serotypes are based on their antigenicity. Transmission Dengue virus is most frequently transmitted by the bite of mosquitoes in the Aedes genus, particularly A. aegypti. These mosquitoes prefer to feed at dusk and dawn, but they may bite, and thus spread infection, at any time of day. Other Aedes species that may transmit the disease include A. albopictus, A. polynesiensis and A. scutellaris. Humans are the primary host of the virus, but it also circulates in nonhuman primates and can infect other mammals. An infection can be acquired via a single bite. For 2 to 10 days after becoming newly infected, a person's bloodstream will contain a high level of virus particles (the viremic period). A female mosquito that takes a blood meal from the infected host then propagates the virus in the cells lining its gut. Over the next few days, the virus spreads to other tissues, including the mosquito's salivary glands, and is released into its saliva. The next time the mosquito feeds, the infectious saliva is injected into the bloodstream of its victim, thus spreading the disease. The virus seems to have no detrimental effect on the mosquito, which remains infected for life. Dengue can also be transmitted via infected blood products and through organ donation. Vertical transmission (from mother to child) during pregnancy or at birth has been reported. Risk factors The principal risk for infection with dengue is the bite of an infected mosquito.
This is more probable in areas where the disease is endemic, especially where there is high population density, poor sanitation, and standing water where mosquitoes can breed. It can be mitigated by taking steps to avoid bites, such as wearing clothing that fully covers the skin, using mosquito netting while resting, and/or applying insect repellent (DEET being the most effective). Chronic diseases – such as asthma, sickle cell anemia, and diabetes mellitus – increase the risk of developing a severe form of the disease. Other risk factors for severe disease include female sex and high body mass index. Infection with one serotype is thought to produce lifelong immunity to that type, but only short-term protection against the other three. Subsequent re-infection with a different serotype increases the risk of severe complications due to a phenomenon known as antibody-dependent enhancement (ADE). The exact mechanism of ADE is not fully understood. It appears that ADE occurs when the antibodies generated during an immune response recognize and bind to a pathogen, but fail to neutralize it. Instead, the antibody-virus complex has an enhanced ability to bind to the Fcγ receptors of the target immune cells, enabling the virus to infect the cell and reproduce itself. Mechanism of infection When a mosquito carrying dengue virus bites a person, the virus enters the skin together with the mosquito's saliva. The virus infects nearby skin cells called keratinocytes, as well as specialized immune cells located in the skin, called Langerhans cells. The Langerhans cells migrate to the lymph nodes, where the infection spreads to white blood cells; the virus reproduces inside these cells while they move throughout the body. The white blood cells respond by producing several signaling proteins, such as cytokines and interferons, which are responsible for many of the symptoms, such as fever, flu-like symptoms, and severe pains. In severe infection, virus production inside the body is greatly increased, and many more organs (such as the liver and the bone marrow) can be affected. Fluid from the bloodstream leaks through the walls of small blood vessels into body cavities due to increased capillary permeability. As a result, blood volume decreases, and the blood pressure becomes so low that it cannot supply sufficient blood to vital organs. The spread of the virus to the bone marrow leads to reduced numbers of platelets, which are necessary for effective blood clotting; this increases the risk of bleeding, the other major complication of dengue fever. Prevention Vector control As described under risk factors above, the principal means of prevention is avoiding the bite of an infected mosquito, through covering clothing, mosquito netting, and insect repellent (DEET being the most effective); it is also advisable to treat clothing, nets and tents with 0.5% permethrin. Protection of the home can be achieved with door and window screens, by using air conditioning, and by regularly emptying and cleaning all receptacles, both indoors and outdoors, which may accumulate water (such as buckets, planters, pools or trashcans). The primary method of controlling A. aegypti is by eliminating its habitats.
This is done by eliminating open sources of water, or, if this is not possible, by adding insecticides or biological control agents to these areas. Generalized spraying with organophosphate or pyrethroid insecticides, while sometimes done, is not thought to be effective. Reducing open collections of water through environmental modification is the preferred method of control, given the concerns of negative health effects from insecticides and the greater logistical difficulties with control agents. Ideally, mosquito control would be a community activity, e.g. when all members of a community clear blocked gutters and street drains and keep their yards free of containers with standing water. If residences have direct water connections, this eliminates the need for wells or street pumps and water-carrying containers. Vaccine As of March 2024, there are two vaccines to protect against dengue infection: Dengvaxia and Qdenga. Dengvaxia (formerly CYD-TDV) was first approved in late 2015 and became available in 2016, and is approved for use in the US, EU and some Asian and Latin American countries. It is a live attenuated vaccine, is suitable for individuals aged 6–45 years, and protects against all four serotypes of dengue. Due to safety concerns about antibody-dependent enhancement (ADE), it should only be given to individuals who have previously been infected with dengue, in order to protect them from reinfection. It is given subcutaneously as three doses at six-month intervals. Qdenga (formerly TAK-003) completed clinical trials in 2022 and was approved for use in the European Union in December 2022; it has been approved by a number of other countries, including Indonesia and Brazil, and has been recommended by the SAGE committee of the World Health Organization. It is indicated for the prevention of dengue disease in individuals four years of age and older, and can be administered to people who have not been previously infected with dengue. It is a live attenuated vaccine containing the four serotypes of dengue virus, administered subcutaneously as two doses three months apart. Severe disease The World Health Organization's International Classification of Diseases divides dengue fever into two classes: uncomplicated and severe. Severe dengue is defined as dengue associated with severe bleeding, severe organ dysfunction, or severe plasma leakage. Severe dengue can develop suddenly, sometimes after a few days as the fever subsides. Leakage of plasma from the capillaries results in extremely low blood pressure and hypovolemic shock; patients with severe plasma leakage may have fluid accumulation in the lungs or abdomen, insufficient protein in the blood, or thickening of the blood. Severe dengue is a medical emergency which can cause damage to organs, leading to multiple organ failure and death. Diagnosis Mild cases of dengue fever can easily be confused with several common diseases, including influenza, measles, chikungunya, and Zika. Dengue, chikungunya and Zika share the same mode of transmission (Aedes mosquitoes) and are often endemic in the same regions, so it is possible to be infected simultaneously by more than one disease. For travellers, dengue fever should be considered in anyone who develops a fever within two weeks of being in the tropics or subtropics. Warning symptoms of severe dengue include abdominal pain, persistent vomiting, oedema, bleeding, lethargy, and liver enlargement. Once again, these symptoms can be confused with other diseases such as malaria, gastroenteritis, leptospirosis, and typhus.
Blood tests can be used to confirm a diagnosis of dengue. During the first few days of infection, an enzyme-linked immunosorbent assay (ELISA) can be used to detect the NS1 antigen; however, this antigen is produced by all flaviviruses. Four or five days into the infection, it is possible to reliably detect anti-dengue IgM antibodies, but this does not determine the serotype. Nucleic acid amplification tests provide the most reliable method of diagnosis. Treatment As of July 2024, there is no specific antiviral treatment available for dengue fever. Most cases of dengue fever have mild symptoms, and recovery takes place in a few days. No treatment is required for these cases. Acetaminophen (paracetamol, Tylenol) may be used to relieve mild fever or pain. Other common pain relievers, including aspirin, ibuprofen (Advil, Motrin IB, others) and naproxen sodium (Aleve), should be avoided as they can increase the risk of bleeding complications. For moderate illness, those who can drink, are passing urine, have no warning signs and are otherwise reasonably healthy can be monitored carefully at home. Supportive care with analgesics, fluid replacement, and bed rest is recommended. Severe dengue is a life-threatening emergency, requiring hospitalization and potentially intensive care. Warning signs include dehydration, decreasing platelets and increasing hematocrit. Treatment modes include intravenous fluids and transfusion with platelets or plasma. Prognosis Most people with dengue recover without any ongoing problems. The risk of death among those with severe dengue is 0.8–2.5%, and with adequate treatment this is less than 1%. However, those who develop significantly low blood pressure may have a fatality rate of up to 26%. The risk of death among children less than five years old is four times greater than among those over the age of 10. Elderly people are also at higher risk of a poor outcome. Epidemiology As of March 2023, dengue is endemic in more than 100 countries, with cases reported on every continent except Antarctica. The Americas, Southeast Asia and the Western Pacific regions are the most seriously affected. It is difficult to estimate the full extent of the disease, as many cases are mild and not correctly diagnosed. The WHO currently estimates that 3.9 billion people are at risk of dengue infection. In 2013, it was estimated that 390 million dengue infections occur every year, with 500,000 of these developing severe symptoms and 25,000 resulting in death. Generally, areas where dengue is endemic have only one serotype of the virus in circulation. The disease is said to be hyperendemic in areas where more than one serotype is circulating; this increases the risk of severe disease on a second or subsequent infection. Infections are most commonly acquired in urban environments, where the virus is primarily transmitted by the mosquito species Aedes aegypti. This species has adapted to the urban environment, is generally found close to human habitation, prefers humans as its host, and takes advantage of small bodies of standing water (such as tanks and buckets) in which to breed. In rural settings the virus is transmitted to humans by A. aegypti and related mosquitoes such as Aedes albopictus. Both these species have expanding ranges. There are two subspecies of Aedes aegypti: Aedes aegypti formosus, found in natural habitats such as forests, and Aedes aegypti aegypti, which has adapted to urban domestic habitats.
Dengue has increased in incidence in recent decades, with the WHO recording a tenfold increase between 2010 and 2019 (from 500,000 to 5 million recorded cases). This increase is tied closely to the increasing range of Aedes mosquitoes, which is attributed to a combination of urbanization, population growth, and an increasingly warm climate. In endemic areas, dengue infections peak when rainfall is optimal for mosquito breeding. In October 2023, the first confirmed symptomatic case of locally acquired dengue (i.e. not acquired while travelling) in California was identified. The disease infects all races, sexes, and ages equally. In endemic areas, the infection is most commonly seen in children, who then acquire a lifelong partial immunity. History The first historical record of a case of probable dengue fever is in a Chinese medical encyclopedia from the Jin dynasty (266–420), which referred to a "water poison" associated with flying insects. The principal mosquito vector of dengue, Aedes aegypti, spread out of Africa in the 15th to 19th centuries due to the slave trade and the consequent expansion of international trading. There have been descriptions of epidemics of dengue-like illness in the 17th century, and it is likely that epidemics in Jakarta, Cairo, and Philadelphia during the 18th century were caused by dengue. It is assumed that dengue was constantly present in many tropical urban centres throughout the 19th and early 20th centuries, even though significant outbreaks were infrequent. The marked spread of dengue during and after the Second World War has been attributed partly to disruption caused by the war, and partly to subsequent urbanisation in south-east Asia. As novel serotypes were introduced to regions already endemic with dengue, outbreaks of severe disease followed. The severe hemorrhagic form of the disease was first reported in the Philippines in 1953; by the 1970s, it had become recognised as a major cause of child mortality in Southeast Asia. In Central and South America, the Aedes mosquito had been eradicated in the 1950s; however, the eradication program was discontinued in the 1970s, and the disease re-established itself in the region during the 1980s, becoming hyperendemic and causing significant epidemics. Dengue has continued to increase in prevalence during the 21st century, as the mosquito vector continues to expand its range. This is attributed partly to continuing urbanisation, and partly to the impact of a warmer climate. Etymology The name came into English in the early 19th century from West Indian Spanish, which borrowed it from the Kiswahili term dinga / denga, meaning "cramp-like seizure" – the full term for the condition being ki-dinga pepo: "a sort of cramp-like seizure (caused by) an evil spirit". The borrowed term changed to dengue in Spanish because this word already existed in Spanish with the meaning "fastidiousness", a folk etymology referring to the dislike of movement by affected patients. Slaves in the West Indies who had contracted dengue were said to have the posture and gait of a dandy, and the disease was known as "dandy fever". The term break-bone fever was applied by physician and United States Founding Father Benjamin Rush in a 1789 report of the 1780 epidemic in Philadelphia, due to the associated muscle and joint pains. In the report title he used the more formal term "bilious remitting fever". The term dengue fever came into general use only after 1828. Other historical terms include "breakheart fever" and "la dengue".
Terms for severe disease include "infectious thrombocytopenic purpura" and "Philippine", "Thai", or "Singapore hemorrhagic fever". Research Research directions include dengue pathogenesis (the process by which the disease develops in humans), as well as the biology, ecology and behaviour of the mosquito vector. Improved diagnostics would enable faster and more appropriate treatment. Attempts are ongoing to develop an antiviral medicine targeting the NS3 or NS5 proteins. In addition to the two vaccines which are already available, several vaccine candidates are in development. Society and culture Blood donation Outbreaks of dengue fever increase the need for blood products while decreasing the number of potential blood donors due to potential infection with the virus. Someone who has a dengue infection is typically not allowed to donate blood for at least the next six months. Public awareness International Anti-Dengue Day is observed every year on 15 June in a number of countries. The idea was first agreed upon in 2010, with the first event held in Jakarta, Indonesia, in 2011. Further events were held in 2012 in Yangon, Myanmar, and in 2013 in Vietnam. The goals are to increase public awareness about dengue, mobilize resources for its prevention and control, and demonstrate the Southeast Asian region's commitment to tackling the disease. Efforts are ongoing, as of 2019, to make it a global event. The Philippines has held an awareness month in June since 1998. A National Dengue Day is held in India annually on 16 May. Economic burden A study estimated that the global burden of dengue in 2013 amounted to US$8.9 billion.
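As a compact restatement of the diagnostic windows given in the Diagnosis section above, the sketch below encodes the approximate day thresholds from the prose. It is an illustrative summary only; the function and its cut-offs are hypothetical simplifications, not clinical guidance.

```python
def informative_dengue_tests(days_since_onset: int) -> list[str]:
    """Which tests the text above describes as informative at a given stage.
    Day thresholds are rough approximations taken from the prose."""
    tests = []
    if days_since_onset <= 5:
        # NS1 antigen ELISA works during the first few days of infection,
        # but NS1 is produced by all flaviviruses, so it is not dengue-specific.
        tests.append("NS1 antigen ELISA (not specific to dengue)")
        # Nucleic acid amplification detects viral RNA during the viremic
        # period and is described above as the most reliable method.
        tests.append("nucleic acid amplification (viral RNA)")
    if days_since_onset >= 4:
        # Anti-dengue IgM becomes reliably detectable four or five days in,
        # though it does not determine the serotype.
        tests.append("anti-dengue IgM antibodies (serotype not determined)")
    return tests

print(informative_dengue_tests(2))  # early infection: NS1 and RNA tests
print(informative_dengue_tests(6))  # later: IgM serology
```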
Finger millet
Finger millet (Eleusine coracana) is an annual herbaceous plant widely grown as a cereal crop in the arid and semiarid areas of Africa and Asia. It is a tetraploid and self-pollinating species that probably evolved from its wild relative Eleusine africana. Finger millet is native to the Ethiopian and Ugandan highlands. Notable crop characteristics of finger millet are its ability to withstand cultivation at high altitudes, its high drought tolerance, and the long storage life of its grains. History Finger millet originated in East Africa (the Ethiopian and Ugandan highlands). It was claimed to have been found in an Indian archaeological site dated to 1800 BCE (Late Bronze Age); however, this find was subsequently shown to consist of incorrectly identified cleaned grains of hulled millets. The oldest record of finger millet comes from an archaeological site in Africa dating to the 3rd millennium BCE. By 1996, cultivation of finger millet in Africa was declining rapidly because of the large amount of labor it required, with farmers preferring to grow nutritionally inferior but less labor-intensive crops such as maize, sorghum, and cassava. Such a decline was not seen in Asia, however. Taxonomy and botanical description Finger millet belongs to the genus Eleusine Gaertn. Growing regions Main cultivation areas are parts of eastern and southern Africa (particularly Uganda, Kenya, the Democratic Republic of the Congo, Zimbabwe, Zambia, Malawi, and Tanzania) and parts of India and Nepal. It is also grown in southern Sudan and as far south in Africa as Mozambique. Climate requirements Finger millet is a short-day plant with a growing optimum of 12 hours of daylight for most varieties. Its main growing area ranges from 20°N to 20°S, meaning mainly the semiarid to arid tropics. Nevertheless, finger millet is also grown at 30°N in the Himalayan region (India and Nepal). It is generally considered a drought-tolerant crop, but compared with other millets, such as pearl millet and sorghum, it prefers moderate rainfall. The majority of finger millet farmers worldwide grow it rainfed, although yields can often be significantly improved when irrigation is applied. In India, finger millet is a typical rabi (dry-winter season) crop. The heat tolerance of finger millet is high. For Ugandan finger millet varieties, for instance, the optimal average growth temperature is about 27 °C, while the minimum temperature should not be lower than 18 °C. Relative to other species (pearl millet and sorghum), finger millet has a higher tolerance of cool temperatures. It is grown up to high elevations (e.g. in the Himalayan region); hence, it can be cultivated at higher elevations than most tropical crops. Finger millet can grow on various soils, including highly weathered tropical lateritic soils. It thrives in free-draining soils with steady moisture levels. Furthermore, it can tolerate soil salinity to a certain extent. Its ability to tolerate waterlogging is limited, so good drainage and moderate water-holding capacity of the soil are optimal. Finger millet can tolerate moderately acidic soils (pH 5), but also moderately alkaline soils (pH 8.2). Cropping systems Finger millet monocrops grown under rainfed conditions are most common in the drier areas of Eastern Africa. In addition, intercropping with legumes, such as cowpea or pigeon pea, is also quite common in East Africa.
Tropical Central Africa supports scattered regions of finger millet intercropping, mostly with legumes, but also with cassava, plantain, and vegetables. The most common finger millet intercropping systems in South India are as follows:
With legumes: finger millet/dolichos, finger millet/pigeonpea, finger millet/black gram, finger millet/castor
With cereals: finger millet/maize, finger millet/foxtail millet, finger millet/jowar, finger millet/little millet
With other species: finger millet/brassicas, finger millet/mustard
Weeds Weeds are the major biotic stress on finger millet cultivation. Its seeds are very small, which leads to relatively slow development in the early growing stages. This makes finger millet a weak competitor for light, water, and nutrients compared with weeds. In East and Southern Africa, the closely related species Eleusine indica (common name Indian goose grass) is a severe weed competitor of finger millet. Especially in the early growing stages of the crop and the weed, and when broadcast seeding is used instead of row seeding (as is often the case in East Africa), the two species are very difficult to distinguish. Besides Eleusine indica, the animal-dispersed species Xanthium strumarium and the stolon-forming species Cyperus rotundus and Cynodon dactylon are important finger millet weeds. Measures to control weeds include cultural, physical, and chemical methods. Cultural methods include sowing in rows instead of broadcast sowing, to make the distinction between finger millet seedlings and E. indica easier when hand weeding. ICRISAT promotes cover crops and crop rotations to disrupt the growing cycle of the weeds. Physical weed control in communities with limited financial resources growing finger millet is mainly hand weeding or weeding with a hand hoe. Diseases and pests Finger millet is generally seen as not very prone to diseases and pests. Nonetheless, finger millet blast, caused by the fungal pathogen Magnaporthe grisea (anamorph Pyricularia grisea), can locally cause severe damage, especially when untreated. In Uganda, yield losses of up to 80% were reported in bad years. The pathogen leads to drying out of leaves, neck rots, and ear rots. These symptoms can drastically impair photosynthesis, translocation of photosynthetic assimilates, and grain filling, thus reducing yield and grain quality. Finger millet blast can also infest finger millet weeds such as the closely related E. indica, E. africana, Digitaria spp., Setaria spp., and Dactyloctenium spp. Finger millet blast can be controlled with cultural measures, chemical treatments, and the use of resistant varieties. Researchers in Kenya have screened wild relatives of finger millet and landraces for resistance to blast. Cultural measures to control finger millet blast suggested by ICRISAT for Eastern Africa include crop rotations with nonhost crops such as legumes, deep ploughing to bury finger millet straw on infected fields, washing of field tools after use to prevent dissemination of the pathogen to uninfected fields, weed control to reduce infections by weed hosts, and avoiding high plant densities to impede dispersal of the pathogen from plant to plant. Chemical measures can include direct spraying of systemic fungicides, with active ingredients such as pyroquilon, or seed dressings with fungicides such as tricyclazole. Striga, a parasitic weed which occurs naturally in parts of Africa, Asia, and Australia, can severely affect the crop, reducing yields in finger millet and other cereals by 20 to 80%.
Striga can be controlled with limited success by hand weeding, herbicide application, crop rotation, improved soil fertility, intercropping, and biological control. The most economically feasible and environmentally friendly control measure would be the development and use of Striga-resistant cultivars. Striga-resistance genes have not yet been identified in cultivated finger millet, but might be found in its crop wild relatives. Another pathogen in finger millet cultivation is the fungus Helminthosporium nodulosum, which causes leaf blight. Vertebrate pests of finger millet include grain-eating birds, such as quelea in East Africa. Insects The pink stem borer (Sesamia inferens) and the finger millet shoot fly (Atherigona miliaceae) are considered the most important insect pests in finger millet cultivation. Measures to control Sesamia inferens include uprooting infected plants, destroying stubble, crop rotation, chemical control with insecticides, biological measures such as pheromone traps, and biological pest control using antagonistic organisms (e.g. Sturmiopsis inferens). Other insect pests include:
Root feeders: the root aphid Tetraneura nigriabdominalis
Shoot and stem feeders: Atherigona miliaceae and Atherigona soccata; Sesamia inferens; the stem weevil Listronotus bonariensis
Leaf feeders: hairy caterpillars (Amsacta albistriga, Amsacta transiens, and Amsacta moorei); cutworms (Agrotis ipsilon); armyworm larvae of Spodoptera exempta, Spodoptera mauritia, and Mythimna separata; leaf-folder (Cnaphalocrocis medinalis) larvae; skipper (Pelopidas mathias) larvae; grasshoppers (Chrotogonus hemipterus, Nomadacris septemfasciata, and Locusta migratoria); beetle grubs of Chnootriba similis; the thrip Heliothrips indicus
Sucking pests: aphids (Hysteroneura setariae, Metopolophium dirhodum, Rhopalosiphum maidis, and Sitobion miscanthi); the mealy bug Brevennia rehi; the leaf hoppers Cicadulina bipunctella and Cicadulina chinai
Propagation and sowing Propagation in finger millet farming is done mainly by seed. In rainfed cropping, four sowing methods are used (a worked plant-density example follows this list):
Broadcasting: Seeds are sown directly in the field. This is the most common method, because it is the easiest and no special machinery is required. Organic weed management is a problem with this method, because it is difficult to distinguish between weed and crop.
Line sowing: Improved sowing compared with broadcasting, which facilitates organic weed management due to the better distinction between weed and crop. In this method, a spacing of 22 cm to 30 cm between lines and 8 cm to 10 cm within lines should be maintained. The seeds should be sown about 3 cm deep in the soil.
Drilling in rows: Seeds are sown directly into the untreated soil using a direct-seed drill. This method is used in conservation agriculture.
Transplanting the seedlings: Seedlings are raised in nursery beds and transplanted to the main field. Leveling and watering of beds is required during transplanting. Seedlings should be transplanted to the field at four weeks of age. For the early rabi and kharif seasons, seedlings should be transplanted at 25 cm x 10 cm spacing, and for the late kharif season at 30 cm x 10 cm. Planting should be done at a depth of 3 cm in the soil.
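To put the line-sowing and transplanting spacings above in perspective, here is a small, hypothetical back-of-the-envelope calculation of the theoretical plant populations they imply. This is illustrative arithmetic only; real field populations depend on germination rate and thinning practice.

```python
def plants_per_hectare(between_rows_cm: float, within_row_cm: float) -> int:
    """Theoretical plant population on a regular grid: one hectare is
    10,000 m^2, and each plant occupies (row spacing x in-row spacing)."""
    area_per_plant_m2 = (between_rows_cm / 100.0) * (within_row_cm / 100.0)
    return round(10_000 / area_per_plant_m2)

# Line sowing: 22-30 cm between lines, 8-10 cm within lines (quoted above).
print(plants_per_hectare(22, 8))   # ~568,000 plants/ha at the densest spacing
print(plants_per_hectare(30, 10))  # ~333,000 plants/ha at the widest spacing

# Transplanting: 25 cm x 10 cm (early rabi/kharif), 30 cm x 10 cm (late kharif).
print(plants_per_hectare(25, 10))  # ~400,000 seedlings/ha
```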
Harvest The crop does not mature uniformly, and hence the harvest is taken up in two stages. When the earhead on the main shoot and 50% of the earheads on the crop have turned brown, the crop is ready for the first harvest. At the first harvest, all earheads that have turned brown should be cut; this is followed by drying, threshing, and cleaning the grains by winnowing. The second harvest is around seven days after the first. All earheads, including the green ones, should be cut. The grains should then be cured to maturity by heaping the harvested earheads in the shade for one day without drying, so that the humidity and temperature increase and the grains cure. This is followed by drying, threshing, and cleaning, as after the first harvest. Storage Once harvested, the seeds keep extremely well and are seldom attacked by insects or moulds. Finger millet can be kept for up to 10 years when unthreshed. Some sources report a storage duration of up to 50 years under good storage conditions. This long storage capacity makes finger millet an important crop in risk-avoidance strategies, as a famine crop for farming communities. Processing Milling As a first step of processing, finger millet can be milled to produce flour. However, finger millet is difficult to mill due to the small size of the seeds and because the bran is bound very tightly to the endosperm. Furthermore, the delicate seed can get crushed during milling. The development of commercial mechanical milling systems for finger millet is challenging; therefore, the main product of finger millet is whole grain flour. This has disadvantages, such as the reduced storage time of the flour due to its high oil content, and the industrial use of whole grain finger millet flour is limited. Moistening the millet seeds prior to grinding helps to remove the bran mechanically without causing damage to the rest of the seed. The mini millet mill can also be used to process other grains such as wheat and sorghum. Malting Another method of processing the finger millet grain is germinating the seed. This process is also called malting and is very common in the production of brewed beverages such as beer. When finger millet is germinated, enzymes are activated which convert starches into other carbohydrates such as sugars. Finger millet has good malting activity. Malted finger millet can be used as a substrate to produce, for example, gluten-free beer or easily digestible food for infants. Nutrition Finger millet is 11% water, 7% protein, 54% carbohydrates, and 2% fat. In a 100 gram (3.5 oz) reference amount, finger millet supplies 305 calories and is a rich source (20% or more of the Daily Value, DV) of dietary fiber and several dietary minerals, especially iron at 87% DV. Growing finger millet to improve nutrition The International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), a member of the CGIAR consortium, partners with farmers, governments, researchers and NGOs to help farmers grow nutritious crops, including finger millet. This helps their communities have more balanced diets and become more resilient to pests and drought. For example, the Harnessing Opportunities for Productivity Enhancement of Sorghum and Millets in Sub-Saharan Africa and South Asia (HOPE) project is increasing yields of finger millet in Tanzania by encouraging farmers to grow improved varieties. Use Finger millet can be ground into a flour and cooked into cakes, puddings or porridge. The flour is made into a fermented drink (or beer) in Nepal and in many parts of Africa. The straw from finger millet is used as animal fodder. In India Finger millet is a staple grain in many parts of India, especially Karnataka, where it is known as ragi (from Kannada ರಾಗಿ rāgi). It is malted and its grain is ground into flour. There are numerous ways to prepare finger millet, including dosa, idli, and laddu.
In southern India, on pediatricians' recommendation, finger millet is used in preparing baby food because of its high nutritional content, especially iron and calcium. Satva, pole (dosa), bhakri, ambil (a sour porridge), and pappad are common dishes made using finger millet. In Karnataka, finger millet is generally consumed in the form of a porridge called ragi mudde in Kannada, the staple diet of many residents of South Karnataka. Mudde is prepared by cooking the ragi flour with water to achieve a dough-like consistency. This is then rolled into balls of the desired size and consumed with sambar (huli), saaru (ಸಾರು), or curries. Ragi is also used to make roti, idli, dosa and conjee. In the Malnad region of Karnataka, the whole ragi grain is soaked and the milk is extracted to make a dessert known as keelsa. A type of flat bread (called ragi rotti in Kannada) is prepared from finger millet flour in the northern districts of Karnataka. In Tamil Nadu, ragi is called kezhvaragu (கேழ்வரகு) and also has other names such as keppai, ragi, and ariyam. Ragi is dried, powdered, and boiled to form a thick mass that is allowed to cool; this is the famed kali or keppai kali, which is shaped into large balls to quantify the intake. It is taken with sambar or kuzhambu. For children, ragi is also fed with milk and sugar (malt). It is also made in the form of pancakes with chopped onions and tomatoes. Kezhvaragu is used to make puttu with jaggery or sugar. Ragi porridge, called koozh, is a staple in farming communities, eaten along with raw onions and green chillies. In Andhra Pradesh, ragi sankati or ragi muddha – ragi balls – are eaten in the morning with chilli, onions, and sambar. In Kerala, puttu, a traditional breakfast dish, can be made with ragi flour and grated coconut, which is then steamed in a cylindrical steamer. In the tribal and western hilly regions of Odisha, ragi or mandiaa is a staple food. In the Garhwal and Kumaon regions of Uttarakhand, koda or maduwa is made into thick rotis (served with ghee), and also into badi, which is similar to halwa but without sugar. In the Kumaon region, ragi is traditionally fed to women after childbirth, and in some parts the flour is used to make snacks such as namkeen sev, mathri and chips. Ragi flour To make the flour, ragi is graded and washed, allowed to dry naturally in sunlight for 5 to 8 hours, and then powdered. Ragi porridge, ragi halwa, ragi ela ada, and ragi kozhukatta can be made with ragi flour. All-purpose flour can be replaced with ragi flour during baking; ragi cake and ragi biscuits can be prepared. The flour is consumed with milk, boiled water, or yogurt, and is made into flatbreads, including thin, leavened dosa and thicker, unleavened roti. In South and Far East Asia In Nepal, a thick dough (ḍhĩḍo) made of millet flour (kōdō) is cooked and eaten by hand. Alternatively, the dough can be made into a thick bread (rotee) by spreading it over a flat utensil and heating it. Fermented millet is used to make a beer (chhaang), and the mash is distilled to make a liquor (rakśi). Whole grain millet is fermented to make tongba. Its use in holy Hindu practices is barred, especially by upper castes. In Nepal, the National Plant Genetic Resource Centre at Khumaltar maintains 877 accessions (samples) of Nepalese finger millet (kodo).
In Sri Lanka, finger millet is called kurakkan and is made into kurakkan roti – an earthy brown thick roti with coconut – and thallapa – a thick dough made by boiling ragi with water and some salt until it forms a dough-like ball. It is then eaten with a spicy meat curry and is usually swallowed in small balls rather than chewed. It is also eaten as a porridge (kurrakan kenda) and as a sweet called 'Halape'. In northwest Vietnam, finger millet is used as a medicine for women at childbirth. A minority use finger millet flour to make alcohol. As beverage Ragi malt porridge is made from finger millet that is soaked and shadow-dried, then roasted and ground. This preparation is boiled in water and used as a substitute for milk powder-based beverages.
Biology and health sciences
Grains
Plants
39674
https://en.wikipedia.org/wiki/Planetary%20nebula
Planetary nebula
A planetary nebula is a type of emission nebula consisting of an expanding, glowing shell of ionized gas ejected from red giant stars late in their lives. The term "planetary nebula" is a misnomer because they are unrelated to planets. The term originates from the planet-like round shape of these nebulae observed by astronomers through early telescopes. The first usage may have occurred during the 1780s with the English astronomer William Herschel who described these nebulae as resembling planets; however, as early as January 1779, the French astronomer Antoine Darquier de Pellepoix described in his observations of the Ring Nebula, "very dim but perfectly outlined; it is as large as Jupiter and resembles a fading planet". Though the modern interpretation is different, the old term is still used. All planetary nebulae form at the end of the life of a star of intermediate mass, about 1-8 solar masses. It is expected that the Sun will form a planetary nebula at the end of its life cycle. They are relatively short-lived phenomena, lasting perhaps a few tens of millennia, compared to considerably longer phases of stellar evolution. Once all of the red giant's atmosphere has been dissipated, energetic ultraviolet radiation from the exposed hot luminous core, called a planetary nebula nucleus (P.N.N.), ionizes the ejected material. Absorbed ultraviolet light then energizes the shell of nebulous gas around the central star, causing it to appear as a brightly coloured planetary nebula. Planetary nebulae probably play a crucial role in the chemical evolution of the Milky Way by expelling elements into the interstellar medium from stars where those elements were created. Planetary nebulae are observed in more distant galaxies, yielding useful information about their chemical abundances. Starting from the 1990s, Hubble Space Telescope images revealed that many planetary nebulae have extremely complex and varied morphologies. About one-fifth are roughly spherical, but the majority are not spherically symmetric. The mechanisms that produce such a wide variety of shapes and features are not yet well understood, but binary central stars, stellar winds and magnetic fields may play a role. Observations Discovery The first planetary nebula discovered (though not yet termed as such) was the Dumbbell Nebula in the constellation of Vulpecula. It was observed by Charles Messier on July 12, 1764 and listed as M27 in his catalogue of nebulous objects. To early observers with low-resolution telescopes, M27 and subsequently discovered planetary nebulae resembled the giant planets like Uranus. As early as January 1779, the French astronomer Antoine Darquier de Pellepoix described in his observations of the Ring Nebula, "a very dull nebula, but perfectly outlined; as large as Jupiter and looks like a fading planet". The nature of these objects remained unclear. In 1782, William Herschel, discoverer of Uranus, found the Saturn Nebula (NGC 7009) and described it as "A curious nebula, or what else to call it I do not know". He later described these objects as seeming to be planets "of the starry kind". As noted by Darquier before him, Herschel found that the disk resembled a planet but it was too faint to be one. In 1785, Herschel wrote to Jérôme Lalande: These are celestial bodies of which as yet we have no clear idea and which are perhaps of a type quite different from those that we are familiar with in the heavens. I have already found four that have a visible diameter of between 15 and 30 seconds. 
These bodies appear to have a disk that is rather like a planet, that is to say, of equal brightness all over, round or somewhat oval, and about as well defined in outline as the disk of the planets, of a light strong enough to be visible with an ordinary telescope of only one foot, yet they have only the appearance of a star of about ninth magnitude. He assigned these to Class IV of his catalogue of "nebulae", eventually listing 78 "planetary nebulae", most of which are in fact galaxies. Herschel used the term "planetary nebulae" for these objects; the origin of the term is not known. The label "planetary nebula" became ingrained in the terminology used by astronomers to categorize these types of nebulae, and is still in use by astronomers today. Spectra The nature of planetary nebulae remained unknown until the first spectroscopic observations were made in the mid-19th century. Using a prism to disperse their light, William Huggins was one of the earliest astronomers to study the optical spectra of astronomical objects. On August 29, 1864, Huggins was the first to analyze the spectrum of a planetary nebula when he observed the Cat's Eye Nebula. His observations of stars had shown that their spectra consisted of a continuum of radiation with many dark lines superimposed. He found that many nebulous objects such as the Andromeda Nebula (as it was then known) had spectra that were quite similar. However, when Huggins looked at the Cat's Eye Nebula, he found a very different spectrum. Rather than a strong continuum with absorption lines superimposed, the Cat's Eye Nebula and other similar objects showed a number of emission lines. The brightest of these was at a wavelength of 500.7 nanometres, which did not correspond with a line of any known element. At first, it was hypothesized that the line might be due to an unknown element, which was named nebulium. A similar idea had led to the discovery of helium through analysis of the Sun's spectrum in 1868. While helium was isolated on Earth soon after its discovery in the spectrum of the Sun, "nebulium" was not. In the early 20th century, Henry Norris Russell proposed that, rather than being a new element, the line at 500.7 nm was due to a familiar element in unfamiliar conditions. Physicists showed in the 1920s that in gas at extremely low densities, electrons can occupy excited metastable energy levels in atoms and ions that would otherwise be de-excited by collisions at higher densities. Electron transitions from these levels in nitrogen and oxygen ions (N+, O+, and O2+) give rise to the 500.7 nm emission line and others. These spectral lines, which can only be seen in very low-density gases, are called forbidden lines. Spectroscopic observations thus showed that nebulae were made of extremely rarefied gas. Central stars The central stars of planetary nebulae are very hot. Only when a star has exhausted most of its nuclear fuel can it collapse to a small size. Planetary nebulae are understood as a final stage of stellar evolution. Spectroscopic observations show that all planetary nebulae are expanding. This led to the idea that planetary nebulae were caused by a star's outer layers being thrown into space at the end of its life. Modern observations Towards the end of the 20th century, technological improvements helped to further the study of planetary nebulae. Space telescopes allowed astronomers to study light wavelengths outside those that the Earth's atmosphere transmits.
The first ultraviolet observations of a planetary nebula (IC 2149) were performed from space with the Orion 2 Space Observatory (see Orion 1 and Orion 2 Space Observatories) on board the Soyuz 13 spacecraft in December 1973; two-photon emission from nebulae was detected for the first time. Infrared and ultraviolet studies of planetary nebulae allowed much more accurate determinations of nebular temperatures, densities and elemental abundances. Charge-coupled device technology allowed much fainter spectral lines to be measured accurately than had previously been possible. The Hubble Space Telescope also showed that while many nebulae appear to have simple and regular structures when observed from the ground, the very high optical resolution achievable by telescopes above the Earth's atmosphere reveals extremely complex structures. Under the Morgan-Keenan spectral classification scheme, planetary nebulae are classified as Type-P, although this notation is seldom used in practice. Origins Stars greater than 8 solar masses (M⊙) will probably end their lives in dramatic supernova explosions, while planetary nebulae seemingly only occur at the end of the lives of intermediate- and low-mass stars of between 0.8 and 8.0 M⊙. Progenitor stars that form planetary nebulae will spend most of their lifetimes converting their hydrogen into helium in the star's core by nuclear fusion at about 15 million K. This generates energy in the core, which creates outward pressure that balances the crushing inward pressure of gravity. This state of equilibrium is known as the main sequence, which can last for tens of millions to billions of years, depending on the mass. When the hydrogen in the core starts to run out, nuclear fusion generates less energy and gravity starts compressing the core, causing a rise in temperature to about 100 million K. Such high core temperatures then make the star's cooler outer layers expand to create much larger red giant stars. This end phase causes a dramatic rise in stellar luminosity, with the released energy distributed over a much larger surface area, so that the average surface temperature is in fact lower. In stellar evolution terms, stars undergoing such increases in luminosity are known as asymptotic giant branch (AGB) stars. During this phase, the star can lose 50–70% of its total mass through its stellar wind. For the more massive asymptotic giant branch stars that form planetary nebulae, whose progenitors exceed about 0.6 M⊙, their cores will continue to contract. When temperatures reach about 100 million K, the available helium nuclei fuse into carbon and oxygen, so that the star again resumes radiating energy, temporarily stopping the core's contraction. This new helium-burning phase (fusion of helium nuclei) forms a growing inner core of inert carbon and oxygen. Above it is a thin helium-burning shell, surrounded in turn by a hydrogen-burning shell. However, this new phase lasts only 20,000 years or so, a very short period compared to the entire lifetime of the star. The venting of atmosphere continues unabated into interstellar space, but when the outer surface of the exposed core reaches temperatures exceeding about 30,000 K, there are enough emitted ultraviolet photons to ionize the ejected atmosphere, causing the gas to shine as a planetary nebula. Lifetime After a star passes through the asymptotic giant branch (AGB) phase, the short planetary nebula phase of stellar evolution begins as gases blow away from the central star at speeds of a few kilometers per second.
The central star is the remnant of its AGB progenitor, an electron-degenerate carbon-oxygen core that has lost most of its hydrogen envelope due to mass loss on the AGB. As the gases expand, the central star undergoes a two-stage evolution, first growing hotter as it continues to contract while hydrogen fusion reactions occur in the shell around the core, and then slowly cooling once the hydrogen shell is exhausted through fusion and mass loss. In the second phase, it radiates away its energy and fusion reactions cease, as the central star is not heavy enough to generate the core temperatures required for carbon and oxygen to fuse. During the first phase, the central star maintains constant luminosity, while at the same time it grows ever hotter, eventually reaching temperatures around 100,000 K. In the second phase, it cools so much that it does not give off enough ultraviolet radiation to ionize the increasingly distant gas cloud. The star becomes a white dwarf, and the expanding gas cloud becomes invisible to us, ending the planetary nebula phase of evolution. For a typical planetary nebula, about 10,000 years pass between its formation and the recombination of the resulting plasma. Role in galactic enrichment Planetary nebulae may play a very important role in galactic evolution. Newly born stars consist almost entirely of hydrogen and helium, but as stars evolve through the asymptotic giant branch phase, they create heavier elements via nuclear fusion which are eventually expelled by strong stellar winds. Planetary nebulae usually contain larger proportions of elements such as carbon, nitrogen and oxygen, and these are recycled into the interstellar medium via these powerful winds. In this way, planetary nebulae greatly enrich the Milky Way and its nebulae with these heavier elements – collectively known by astronomers as metals and specifically referred to by the metallicity parameter Z. Subsequent generations of stars formed from such nebulae also tend to have higher metallicities. Although these metals are present in stars in relatively tiny amounts, they have marked effects on stellar evolution and fusion reactions. Stars that formed earlier in the universe theoretically contain smaller quantities of heavier elements; known examples are the metal-poor Population II stars. (See Stellar population.) Stellar metallicity is determined by spectroscopy. Characteristics Physical characteristics A typical planetary nebula is roughly one light year across, and consists of extremely rarefied gas, with a density generally from 100 to 10,000 particles per cubic centimetre. (The Earth's atmosphere, by comparison, contains 2.5×10¹⁹ particles per cubic centimetre.) Young planetary nebulae have the highest densities, sometimes as high as 10⁶ particles per cubic centimetre. As nebulae age, their expansion causes their density to decrease. The masses of planetary nebulae range from 0.1 to 1 solar masses. Radiation from the central star heats the gases to temperatures of about 10,000 K. The gas temperature in central regions is usually much higher than at the periphery, reaching 16,000–25,000 K. The volume in the vicinity of the central star is often filled with very hot (coronal) gas at a temperature of about 1,000,000 K. This gas originates from the surface of the central star in the form of a fast stellar wind. Nebulae may be described as matter bounded or radiation bounded. In the former case, there is not enough matter in the nebula to absorb all the UV photons emitted by the star, and the visible nebula is fully ionized.
In the latter case, there are not enough UV photons being emitted by the central star to ionize all the surrounding gas, and an ionization front propagates outward into the circumstellar envelope of neutral atoms. Numbers and distribution About 3000 planetary nebulae are now known to exist in our galaxy, out of 200 billion stars. Their very short lifetime compared to the total stellar lifetime accounts for their rarity. They are found mostly near the plane of the Milky Way, with the greatest concentration near the Galactic Center. Morphology Only about 20% of planetary nebulae are spherically symmetric (for example, see Abell 39). A wide variety of shapes exists, including some very complex forms. Planetary nebulae are classified by different authors into stellar, disk, ring, irregular, helical, bipolar, quadrupolar, and other types, although the majority of them belong to just three types: spherical, elliptical and bipolar. Bipolar nebulae are concentrated in the galactic plane, probably produced by relatively young massive progenitor stars; and bipolars in the galactic bulge appear to prefer orienting their orbital axes parallel to the galactic plane. On the other hand, spherical nebulae are probably produced by old stars similar to the Sun. The huge variety of shapes is partially a projection effect – the same nebula viewed from different angles will appear different. Nevertheless, the reason for the huge variety of physical shapes is not fully understood. Gravitational interactions with companion stars, if the central stars are binary stars, may be one cause. Another possibility is that planets disrupt the flow of material away from the star as the nebula forms. It has been determined that more massive stars produce more irregularly shaped nebulae. In January 2005, astronomers announced the first detection of magnetic fields around the central stars of two planetary nebulae, and hypothesized that the fields might be partly or wholly responsible for their remarkable shapes. Membership in clusters Planetary nebulae have been detected as members in four Galactic globular clusters: Messier 15, Messier 22, NGC 6441 and Palomar 6. Evidence also points to the potential discovery of planetary nebulae in globular clusters in the galaxy M31. However, there is currently only one case of a planetary nebula discovered in an open cluster that is agreed upon by independent researchers. That case pertains to the planetary nebula PHR 1315-6555 and the open cluster Andrews-Lindsay 1. Indeed, through cluster membership, PHR 1315-6555 possesses among the most precise distances established for a planetary nebula (i.e., a 4% distance solution). The cases of NGC 2818 and NGC 2438 (in Messier 46) exhibit mismatched velocities between the planetary nebulae and the clusters, which indicates they are line-of-sight coincidences. A subsample of tentative cases that may potentially be cluster/PN pairs includes Abell 8 and Bica 6, and He 2-86 and NGC 4463. Theoretical models predict that planetary nebulae can form from main-sequence stars of between one and eight solar masses, which puts the progenitor star's age at greater than 40 million years. Although there are a few hundred known open clusters within that age range, a variety of reasons limit the chances of finding a planetary nebula within them. For one, the planetary nebula phase for more massive stars is on the order of millennia, which is a blink of the eye in astronomical terms.
Also, partly because of their small total mass, open clusters have relatively poor gravitational cohesion and tend to disperse after a relatively short time, typically from 100 to 600 million years. Current issues in planetary nebula studies The distances to planetary nebulae are generally poorly determined, but the Gaia mission is now measuring direct parallactic distances to their central stars. It is also possible to determine distances to nearby planetary nebulae by measuring their expansion rates. High-resolution observations taken several years apart will show the expansion of the nebula perpendicular to the line of sight, while spectroscopic observations of the Doppler shift will reveal the velocity of expansion along the line of sight. Comparing the angular expansion with the derived expansion velocity then yields the distance to the nebula; a minimal worked example of this calculation is sketched below. The issue of how such a diverse range of nebular shapes can be produced remains a matter of debate. It is theorised that interactions between material moving away from the star at different speeds give rise to most observed shapes. However, some astronomers postulate that close binary central stars might be responsible for the more complex and extreme planetary nebulae. Several have been shown to exhibit strong magnetic fields, and their interactions with ionized gas could explain some planetary nebula shapes. There are two main methods of determining metal abundances in nebulae, relying on recombination lines and collisionally excited lines respectively. Large discrepancies are sometimes seen between the results derived from the two methods. This may be explained by the presence of small temperature fluctuations within planetary nebulae. The discrepancies may be too large to be caused by temperature effects, and some hypothesize the existence of cold knots containing very little hydrogen to explain the observations. However, such knots have yet to be observed.
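The expansion-parallax calculation just described reduces to a single formula: distance equals the expansion velocity divided by the angular expansion rate. The short Python sketch below works through the unit conversions; the input values are purely illustrative assumptions, not measurements of any particular nebula.

import math

# Expansion-parallax distance estimate (illustrative values only).
# Spectroscopy gives the expansion speed along the line of sight;
# images taken years apart give the angular growth rate on the sky.
# Assuming roughly spherical expansion: distance = speed / angular rate.

v_exp_km_s = 20.0          # assumed Doppler expansion velocity (km/s)
ang_rate_arcsec_yr = 0.01  # assumed angular expansion rate (arcsec/year)

SECONDS_PER_YEAR = 3.156e7
KM_PER_PARSEC = 3.086e13
ARCSEC_PER_RADIAN = (180.0 / math.pi) * 3600.0

# Convert the angular rate to radians per second.
ang_rate_rad_s = ang_rate_arcsec_yr / ARCSEC_PER_RADIAN / SECONDS_PER_YEAR

# d = v / (dtheta/dt), in km, then converted to parsecs.
distance_km = v_exp_km_s / ang_rate_rad_s
distance_pc = distance_km / KM_PER_PARSEC

print(f"Estimated distance: {distance_pc:.0f} pc")  # about 420 pc here

In practice the expansion is rarely exactly spherical, so the tangential and radial velocities need not match; real analyses correct for the nebula's three-dimensional geometry before trusting such a distance.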
Physical sciences
Stellar astronomy
null
39683
https://en.wikipedia.org/wiki/Banksia
Banksia
Banksia is a genus of around 170 species of flowering plants in the family Proteaceae. These Australian wildflowers and popular garden plants are easily recognised by their characteristic flower spikes, and woody fruiting "cones" and heads. Banksias range in size from prostrate woody shrubs to trees up to 30 metres (100 ft) tall. They are found in a wide variety of landscapes: sclerophyll forest, (occasionally) rainforest, shrubland, and some more arid landscapes, though not in Australia's deserts. Heavy producers of nectar, banksias are a vital part of the food chain in the Australian bush. They are an important food source for nectarivorous animals, including birds, bats, rats, possums, stingless bees and a host of invertebrates. Further, they are of economic importance to Australia's nursery and cut flower industries. However, these plants are threatened by a number of processes including land clearing, frequent burning and disease, and a number of species are rare and endangered. Description Banksias grow as trees or woody shrubs. Trees of the largest species, B. integrifolia (coast banksia) and B. seminuda (river banksia), often grow over 15 metres tall, and some reach 30 metres. Banksia species that grow as shrubs are usually erect, but several are prostrate, with branches that grow on or below the soil. The leaves of Banksia vary greatly between species. Sizes vary from the narrow, needle-like leaves of B. ericifolia (heath-leaved banksia), about 1–1.5 centimetres long, to the very large leaves of B. grandis (bull banksia), which may be up to 45 centimetres long. The leaves of most species have serrated edges, but a few, such as B. integrifolia, do not. Leaves are usually arranged along the branches in irregular spirals, but in some species they are crowded together in whorls. Many species have differing juvenile and adult leaves (e.g., Banksia integrifolia has large serrated juvenile leaves). The flowers are arranged in flower spikes or capitate flower heads. The character most commonly associated with Banksia is the flower spike, an elongated inflorescence consisting of a woody axis covered in tightly packed pairs of flowers attached at right angles. A single flower spike generally contains hundreds or even thousands of flowers; the most recorded is around 6000, on inflorescences of B. grandis. Not all Banksia have an elongate flower spike, however: the members of the small Isostylis complex have long been recognised as banksias in which the flower spike is reduced to a head; and recently the large genus Dryandra was found to have arisen from within the ranks of Banksia, and was sunk into it as B. ser. Dryandra. Its members similarly have capitate flower heads rather than spikes. Banksia flowers are usually a shade of yellow, but orange, red, pink and even violet flowers also occur. The colour of the flowers is determined by the colour of the perianth parts and often the style. The style is much longer than the perianth, and is initially trapped by the upper perianth parts. These are gradually released over a period of days, either from top to bottom or from bottom to top. When the styles and perianth parts are different colours, the visual effect is of a colour change sweeping along the spike. This can be most spectacular in B. prionotes (acorn banksia) and related species, in which the white inflorescence in bud becomes a brilliant orange. In most cases, the individual flowers are tall, thin and saccate (sack-shaped).
Occasionally, multiple flower spikes can form; this is most often seen in Banksia marginata and B. ericifolia. As the flower spikes or heads age, the flower parts dry up and may turn shades of orange, tan or dark brown, before fading to grey over a period of years. In some species, old flower parts are lost, revealing the axis; in others, the old flower parts may persist for many years, giving the fruiting structure a hairy appearance. Old flower spikes are commonly referred to as "cones", although they are not technically cones according to the botanical definition of the term: cones only occur in conifers and cycads. Despite the large number of flowers per inflorescence, only a few of them ever develop fruit, and in some species a flower spike will set no fruit at all. The fruit of Banksia is a woody follicle embedded in the axis of the inflorescence. In many species, the follicles are embedded in a massive woody structure commonly called a cone. Each follicle consists of two horizontal valves that tightly enclose the seeds. The follicle opens to release the seed by splitting along the suture; in some species, each valve also splits. In some species the follicles open as soon as the seed is mature, but in most species most follicles open only after being stimulated to do so by bushfire. Each follicle usually contains one or two small seeds, each with a wedge-shaped papery wing that causes it to spin as it falls to the ground. Taxonomy Specimens of Banksia were first collected by Sir Joseph Banks and Daniel Solander, naturalists on the Endeavour during Lieutenant (later Captain) James Cook's first voyage to the Pacific Ocean. Cook landed on Australian soil for the first time on 29 April 1770, at a place that he later named Botany Bay in recognition of "the great quantity of plants Mr Banks and Dr Solander found in this place". Over the next seven weeks, Banks and Solander collected thousands of plant specimens, including the first specimens of a new genus that would later be named Banksia in Banks' honour. Four species were present in this first collection: B. serrata (Saw Banksia), B. integrifolia (Coast Banksia), B. ericifolia (Heath-leaved Banksia) and B. robur (Swamp Banksia). In June the ship was careened at Endeavour River, where specimens of B. dentata (Tropical Banksia) were collected. The genus Banksia was finally described and named by Carolus Linnaeus the Younger in his April 1782 publication Supplementum Plantarum; hence the full name for the genus is "Banksia L.f.". Linnaeus placed the genus in class Tetrandra, order Monogynia of his father's classification, and named it in honour of Banks. The name Banksia had in fact already been published in 1775 as Banksia J.R.Forst & G.Forst, referring to some New Zealand species that the Forsters had collected during Cook's second voyage. However, Linnaeus incorrectly attributed the Forsters' specimens to the genus Passerina, and therefore considered the name Banksia available for use. By the time Joseph Gaertner corrected Banks' error in 1788, Banksia L.f. was widely known and accepted, so Gaertner renamed Banksia J.R.Forst & G.Forst to Pimelea, a name previously chosen for the genus by Banks and Solander. The first specimens of a Dryandra were collected by Archibald Menzies, surgeon and naturalist to the Vancouver Expedition. At the request of Joseph Banks, Menzies collected natural history specimens wherever possible during the voyage.
During September and October 1791, while the expedition was anchored at King George Sound, he collected numerous plant specimens, including the first specimens of Dryandra (now Banksia) sessilis (Parrotbush) and D. (now Banksia) pellaeifolia. Upon Menzies' return to England, he turned his specimens over to Banks; as with most other specimens in Banks' library, they remained undescribed for many years. Robert Brown named the new genus Dryandra in an 1809 lecture; however, Joseph Knight published the name Josephia before Brown's paper describing Dryandra appeared. Brown ignored Knight's name, as did subsequent botanists. In 1891, Otto Kuntze, strictly applying the principle of priority, argued that Pimelea should revert to the name Banksia J.R.Forst & G.Forst. He proposed the new genus Sirmuellera to replace Banksia L.f. and transferred its species to the new genus. This arrangement was largely ignored by Kuntze's contemporaries. Banksia L.f. was formally conserved and Sirmuellera rejected in 1940. Banksia belongs to the family Proteaceae, subfamily Grevilleoideae, and tribe Banksieae. There are around 170 species. The closest relatives of Banksia are two genera of rainforest trees in North Queensland (Musgravea and Austromuellera). Subgeneric arrangement Alex George arranged the genus into two subgenera – subgenus Isostylis (containing B. ilicifolia, B. oligantha and B. cuneata) and subgenus Banksia (containing all other species except those he considered dryandras) – in his 1981 monograph and his 1999 treatment for the Flora of Australia series. He held that flower morphology was the key to relationships in the genus. Austin Mast and Kevin Thiele published the official merging of Dryandra within Banksia in 2007, dividing the genus into subgenus Banksia and subgenus Spathulatae. Distribution and habitat All but one of the living Banksia species are endemic to Australia. The exception is B. dentata (tropical banksia), which occurs throughout northern Australia, and on islands to the north including New Guinea and the Aru Islands. An extinct species, B. novae-zelandiae, was found in New Zealand. The other species occur in two distinct geographical regions: southwest Western Australia and eastern Australia. Southwest Western Australia is the main centre of biodiversity; over 90% of all Banksia species occur only there, from Exmouth in the north, south and east to beyond Esperance on the south coast. Eastern Australia has far fewer species, but these include some of the best known and most widely distributed species, including B. integrifolia (coast banksia) and B. spinulosa (hairpin banksia). Here they occur from the Eyre Peninsula in South Australia right around the east coast up to Cape York in Queensland. The vast majority of Banksia are found in sandy or gravelly soils, though some populations of B. marginata (silver banksia) and B. spinulosa do occur on heavier, more clay-like soil. B. seminuda is exceptional for its preference for rich loams along watercourses. Most occur in heathlands or low woodlands; of the eastern species, B. integrifolia and B. marginata occur in forests; many south-western species such as B. grandis, B. sphaerocarpa, B. sessilis, B. nobilis and B. dallanneyi grow as understorey plants in jarrah (Eucalyptus marginata), wandoo (E. wandoo) and karri (E. diversicolor) forests, with B. seminuda being one of the forest trees in suitable habitat.
Most species do not grow well near the coast, notable exceptions being the southern Western Australian species B. speciosa, B. praemorsa and B. repens. Only a few species, such as B. rosserae and B. elderiana (swordfish banksia), occur in arid areas. Most of the eastern Australian species survive in uplands, but only a few of the Western Australian species native to the Stirling Ranges – B. solandri, B. oreophila, B. brownii and B. montana – survive at high elevations. Studies of the south-western species have found the distribution of Banksia species to be primarily constrained by rainfall. With the exception of B. rosserae, no species tolerates annual rainfall of less than 200 millimetres, despite many species surviving in areas that receive less than 400 millimetres. Banksia species are present throughout the region of suitable rainfall, with greatest speciation in cooler, wetter areas. Hotter, drier regions around the edges of its range tend to have fewer species with larger distributions. The greatest species richness occurs in association with uplands, especially the Stirling Range. Evolutionary history and fossil record There are many fossils of Banksia. The oldest of these are fossil pollen between 65 and 59 million years old. There are fossil leaves between 59 and 56 million years old found in southern New South Wales. The oldest fossil cones are between 47.8 and 41.2 million years old, found in Western Australia. Although Banksia is now only native to Australia and New Guinea, there are fossils from New Zealand, between 21 and 25 million years old. Evolutionary scientists Marcell Cardillo and Renae Pratt have proposed a southwest Australian origin for banksias despite their closest relatives being north Queensland rainforest species. Ecology Banksias are heavy producers of nectar, making them an important source of food for nectivorous animals, including honeyeaters and small mammals such as rodents, antechinus, honey possums, pygmy possums, gliders and bats. Many of these animals play a role in pollination of Banksia. Various studies have shown mammals and birds to be important pollinators. In 1978, Carpenter observed that some banksias had a stronger odour at night, possibly to attract nocturnal mammal pollinators. Other associated fauna include the larvae of moths (such as the Dryandra Moth) and weevils, which burrow into the "cones" to eat the seeds and pupate in the follicles; and birds such as cockatoos, who break off the "cones" to eat both the seeds and the insect larvae. A number of Banksia species are considered rare or endangered. These include B. brownii (feather-leaved banksia), B. cuneata (matchstick banksia), B. goodii (Good's banksia), B. oligantha (Wagin banksia), B. tricuspis (pine banksia), and B. verticillata (granite banksia). Response to fire Banksia plants are naturally adapted to the presence of regular bushfires in the Australian landscape. About half of Banksia species are killed by bushfire, but these regenerate quickly from seed, as fire also stimulates the opening of seed-bearing follicles and the germination of seed in the ground. The remaining species usually survive bushfire, either by resprouting from a woody base known as a lignotuber or, more rarely, epicormic buds protected by thick bark. In Western Australia, banksias of the first group are known as 'seeders' and the second group as 'sprouters'. Infrequent bushfires at expected intervals pose no threat, and are in fact beneficial for regeneration of banksia populations. 
However, too frequent bushfires can seriously reduce or even eliminate populations from certain areas, by killing seedlings and young plants before they reach fruiting age. Many fires near urban areas are caused by arson, so fire frequency is often much higher than it would have been prior to human habitation. Furthermore, residents who live in areas near bushland may pressure local councils to burn nearby areas more frequently, to reduce the fuel load in the bush and thus the ferocity of future fires. The burning frequencies sought by such residents often conflict with those advocated by conservation groups. Dieback Another threat to Banksia is the water mould Phytophthora cinnamomi, commonly known as "dieback". Dieback attacks the roots of plants, destroying the structure of the root tissues, "rotting" the root, and preventing the plant from absorbing water and nutrients. Banksia's proteoid roots, which help it to survive in low-nutrient soils, make it highly susceptible to this disease. All Western Australian species are vulnerable, although most eastern species are fairly resistant. Vulnerable plants typically die within a few years of infection. In southwest Western Australia, where dieback infestation is widespread, infested areas of Banksia forest typically have less than 30% of the cover of uninfested areas. Plant deaths in such large proportions can have a profound influence on the makeup of plant communities. For example, in southwestern Australia Banksia often occurs as an understorey to forests of jarrah (Eucalyptus marginata), another species highly vulnerable to dieback. Infestation kills both the jarrah overstorey and the original Banksia understorey, and over time these may be replaced by a more open woodland consisting of an overstorey of the resistant marri (Corymbia calophylla) and an understorey of the somewhat resistant Banksia sessilis (parrot bush). A number of species of Banksia are threatened by dieback. Nearly every known wild population of B. brownii shows some signs of dieback infection, which could possibly wipe it out within years. Other vulnerable species include B. cuneata and B. verticillata. Dieback is notoriously difficult to treat, although there has been some success with phosphite and phosphorous acid, which are currently used to inoculate wild B. brownii populations. However, this is not without potential problems, as it alters the soil composition by adding phosphorus, and some evidence suggests that phosphorous acid may inhibit proteoid root formation. Because dieback thrives in moist soil conditions, it can be a severe problem for banksias that are watered, such as in the cut flower industry and urban gardens. Uses Gardening Most Banksia species are shrubs; only a few grow as trees, and these are popular in cultivation because of their size. The tallest species are B. integrifolia (whose subspecies B. integrifolia subsp. monticola is notable for reaching the greatest size in the genus and for being the most frost-tolerant), B. seminuda, B. littoralis and B. serrata; species that can grow as small trees or large shrubs include B. grandis, B. prionotes, B. marginata, B. coccinea, B. speciosa and B. menziesii. Because of their size, these species are often planted in parks, gardens and streets; the remaining species are shrubs only. Banksias are popular garden plants in Australia because of their large, showy flower heads, and because the large amounts of nectar they produce attract birds and small mammals.
Popular garden species include B. spinulosa, B. ericifolia, B. aemula (Wallum Banksia), B. serrata (Saw Banksia), Banksia media (Southern Plains Banksia) and the cultivar Banksia 'Giant Candles'. Banksia species are primarily propagated by seed in the home garden, as cuttings can be difficult to strike. However, commercial nurserymen extensively utilise the latter method (indeed, cultivars by nature must be vegetatively propagated by cuttings or grafting). Dwarf cultivars and prostrate species are becoming more popular as urban gardens grow ever smaller. These include miniature forms under 50 cm high of B. spinulosa and B. media, as well as prostrate species such as B. petiolaris and B. blechnifolia. Banksias possibly require more maintenance than other Australian natives, though they are fairly hardy if the right conditions are provided (a sunny aspect and well-drained sandy soil). They may need extra water during dry spells until established, which can take up to two years. If fertilised, only slow-release, low-phosphorus fertiliser should be used, as the proteoid roots may be damaged by high nutrient levels in the soil. All respond well to some form of pruning. Within the Australian horticultural community there is an active subculture of Banksia enthusiasts who seek out interesting flower variants, breed and propagate cultivars, exchange materials and undertake research into cultivation problems and challenges. The main forum for exchange of information within this group is ASGAP's Banksia Study Group. Cut flower industry With the exception of the nursery industry, Banksia have limited commercial use. Some species, principally B. coccinea (scarlet banksia), B. baxteri, B. hookeriana (Hooker's banksia), B. sceptrum (sceptre banksia), and B. prionotes (acorn banksia), and less commonly B. speciosa (showy banksia), B. menziesii (Menzies' banksia), B. burdettii and B. ashbyi, are grown on farms in Western and Southern Australia, as well as Israel and Hawaii, and the flower heads are harvested for the cut flower trade. Eastern species, such as B. ericifolia, B. robur and B. plagiocarpa, are sometimes cultivated for this purpose. The nectar is also sought by beekeepers, not for the quality of the dark-coloured honey, which is often poor, but because the trees provide an abundant and reliable source of nectar at times when other sources provide little. Woodworking Banksia wood is reddish in colour with an attractive grain, but it is rarely used as it warps badly on drying. It is occasionally used for ornamental purposes in wood turning and cabinet panelling. It has also been used to make keels for small boats. Historically, the wood of certain species such as B. serrata was used for yokes and boat parts. The large "cones" or seed pods of B. grandis are used for woodturning projects. They are also sliced up and sold as drink coasters, generally marketed as souvenirs for international tourists. Woodturners throughout the world value Banksia pods for making ornamental objects. Indigenous uses The Indigenous people of south-western Australia would suck on the flower spikes to obtain the nectar; they also soaked the flower spikes in water to make a sweet drink. The Noongar people of southwest Western Australia also used infusions of the flower spikes to relieve coughs and sore throats. The Girai wurrung peoples of the western district of Victoria used the spent flower cones to strain water, by placing the cones in their mouths and using them like straws.
Banksia trees are a reliable source of insect larvae, which are extracted as food. Cultural references Field guides and other technical resources A number of field guides and other semi-technical books on the genus have been published. These include: Field Guide to Banksias Written by Ivan Holliday and Geoffrey Watton and first published in 1975, this book contained descriptions and colour photographs of the species known at the time. It was largely outdated by the publication of Alex George's classic 1981 monograph, but a revised and updated second edition was released in 1990. The Banksias This three-volume monograph contains watercolour paintings of every Banksia species by renowned botanical illustrator Celia Rosser, with accompanying text by Alex George. Its publication represents the first time that such a large genus has been entirely painted. Published by Academic Press in association with Monash University, the three volumes appeared in 1981, 1988 and 2000 respectively. The Banksia Book Begun by Australian photographer Fred Humphreys and Charles Gardner, both of whom died before its completion, The Banksia Book was eventually completed by Alex George and first published in 1984. It included every species known at the time, with a second edition appearing in 1987 and a third in 1996. The Banksia Atlas In 1983 the Australian Biological Resources Study (ABRS) decided to pilot an Australia-wide distribution study of a significant plant genus. Banksia was chosen because it was a high-profile, widely distributed genus that was easily identified, but for which distribution and habitat were poorly known. The study mobilised over 400 volunteers, collecting over 25,000 field observations over a two-year period. Outcomes included the discovery of two new species, as well as new varieties and some rare colour variants, and discoveries of previously unknown populations of rare and threatened species. The collated data were used to create The Banksia Atlas, which was first published in 1988. Banksias, Waratahs and Grevilleas and all other plants in the Australian Proteaceae family Written by J. W. Wrigley and M. Fagg, this was published by Collins Publishers in 1989. A comprehensive text on all the Proteaceae genera with good historical notes and an overview of the 1975 Johnson & Briggs classification. It is out of print and hard to find. May Gibbs' "Banksia men" Perhaps the best known cultural reference to Banksia is the "big bad Banksia men" of May Gibbs' children's book Snugglepot and Cuddlepie. Gibbs' "Banksia men" are modelled on the appearance of aged Banksia "cones", with follicles for eyes and other facial features. There is some contention over which species actually provided the inspiration for the "Banksia men": the drawings most resemble the old cones of B. aemula or B. serrata, but B. attenuata (slender banksia) has also been cited, as this was the species that Gibbs saw as a child in Western Australia. Other cultural references In 1989, the Banksia Environmental Foundation was created to support and recognise people and organisations that make a positive contribution to the environment. The Foundation launched the annual Banksia Environmental Awards in the same year. Announced in June 2023, the exoplanet WASP-19b was named "Banksia" in the third NameExoWorlds competition. The approved name was proposed by a team from Brandon Park Primary School in Wheelers Hill (Melbourne, Australia), led by scientist Lance Kelly and teacher David Maierhofer, after various types of Banksia plants.
Selected species B. archaeocarpa†, B. attenuata, B. integrifolia, B. seminuda, B. ericifolia, B. grandis, B. marginata, B. prionotes, B. dentata, B. novae-zelandiae†, B. spinulosa, B. sphaerocarpa, B. sessilis, B. nobilis, B. dallanneyi, B. praemorsa, B. repens, B. rosserae, B. elderiana, B. solandri, B. oreophila, B. brownii, B. montana, B. goodii (Good's Banksia), B. tricuspis (Pine Banksia), B. verticillata (Granite Banksia); Isostylis: B. cuneata (Matchstick Banksia), B. ilicifolia (Holly-leaved Banksia), B. oligantha (Wagin Banksia)
Biology and health sciences
Proteales
Plants
39696
https://en.wikipedia.org/wiki/Oak
Oak
An oak is a hardwood tree or shrub in the genus Quercus of the beech family. They have spirally arranged leaves, often with lobed edges, and a nut called an acorn, borne within a cup. The genus is widely distributed in the Northern Hemisphere; it includes some 500 species, both deciduous and evergreen. Fossil oaks date back to the Middle Eocene. Molecular phylogeny shows that the genus is divided into Old World and New World clades, but many oak species hybridise freely, making the genus's history difficult to resolve. Ecologically, oaks are keystone species in habitats from Mediterranean semi-desert to subtropical rainforest. They live in association with many kinds of fungi, including truffles. Oaks support more than 950 species of caterpillar; many kinds of gall wasp, which form distinctive galls (roundish woody lumps such as the oak apple); and a large number of pests and diseases. Oak leaves and acorns contain enough tannin to be toxic to cattle, but pigs are able to digest them safely. Oak timber is strong and hard, and has found many uses in construction and furniture-making. The bark was traditionally used for tanning leather. Wine barrels are made of oak; these are used for aging alcoholic beverages such as sherry and whisky, giving them a range of flavours, colours, and aromas. The spongy bark of the cork oak is used to make traditional wine bottle corks. Almost a third of oak species are threatened with extinction due to climate change, invasive pests, and habitat loss. In culture, the oak tree is a symbol of strength and serves as the national tree of many countries. In Indo-European and related religions, the oak is associated with thunder gods. Individual oak trees of cultural significance include the Royal Oak in Britain, the Charter Oak in the United States, and the Guernica Oak in the Basque Country. Etymology The generic name Quercus is Latin for "oak", derived from Proto-Indo-European *kwerkwu-, "oak", which is also the origin of the name "fir", another important or sacred tree in Indo-European culture. The word "cork", for the bark of the cork oak, similarly derives from Quercus. The common name "oak" is from Old English ac (seen in placenames such as Acton, from ac + tun, "oak village"), which in turn is from Proto-Germanic *aiks, "oak". Description Oaks are hardwood (dicotyledonous) trees, deciduous or evergreen, with spirally arranged leaves that often have lobate margins; some have serrated leaves or entire leaves with smooth margins. Many deciduous species are marcescent, not dropping their dead leaves until spring. In spring, a single oak tree produces both male and female flowers. The staminate (male) flowers are arranged in catkins, while the small pistillate (female) flowers produce an acorn (a kind of nut) contained in a cupule. Each acorn usually contains one seed and takes 6–18 months to mature, depending on the species. The acorns and leaves contain tannic acid, which helps to guard against fungi and insects. There are some 500 extant species of oaks. Trees in the genus are often large and slow-growing; Q. alba can reach an age of 600 years and grow to a great diameter and height. The Granit oak in Bulgaria, a Q. robur specimen, has an estimated age of 1,637 years, making it the oldest oak in Europe. The Wi'aaSal tree, a live oak in the reservation of the Pechanga Band of Indians, California, is at least 1,000 years old, and might be as much as 2,000 years old, which would make it the oldest oak in the US. Among the smallest oaks is Q. acuta, the Japanese evergreen oak.
It forms a bush or small tree. Distribution The genus Quercus is native to the Northern Hemisphere and includes deciduous and evergreen species extending from cool temperate to tropical latitudes in the Americas, Asia, Europe, and North Africa. North America has the largest number of oak species, with approximately 160 species in Mexico, of which 109 are endemic, and about 90 in the United States. The second greatest area of oak diversity is China, with approximately 100 species. In the Americas, Quercus is widespread from Vancouver and Nova Scotia in southern Canada, south to Mexico and across the whole of the eastern United States. It is present in a small area of the west of Cuba; in Mesoamerica it occurs mainly at higher elevations. The genus crossed the Isthmus of Panama when the northern and southern continents came together, and is present as one species, Q. humboldtii, above 1,000 metres in Colombia. The oaks of North America belong to many sections (Protobalanus, Lobatae, Ponticae, Quercus, and Virentes), along with related genera such as Notholithocarpus. In the Old World, oaks of section Quercus extend across the whole of Europe, including European Russia apart from the far north, and North Africa (north of the Sahara) from Morocco to Libya. In Mediterranean Europe, they are joined by oaks of the sections Cerris and Ilex, which extend across Turkey, the Middle East, Iran, Afghanistan and Pakistan, while section Ponticae is endemic to the western Caucasus in Turkey and Georgia. Oaks of section Cyclobalanopsis extend in a narrow belt along the Himalayas to cover mainland and island Southeast Asia as far as Sumatra, Java, Borneo, and Palawan. Finally, oaks of multiple sections (Cyclobalanopsis, Ilex, Cerris, Quercus and related genera like Lithocarpus and Castanopsis) extend across East Asia, including China, Korea, and Japan. Evolution Fossil history Potential records of Quercus have been reported from Late Cretaceous deposits in North America and East Asia. These are not considered definitive, as macrofossils older than the Paleogene, and possibly from before the Eocene, are mostly poorly preserved and lack the critical features needed for certain identification. Amongst the oldest unequivocal records of Quercus are pollen from Austria, dating to the Paleocene-Eocene boundary, around 55 million years ago. The oldest records of Quercus in North America are from Oregon, dating to the Middle Eocene, around 44 million years ago, with the oldest records in Asia from the Middle Eocene of Japan; both forms have affinities to the Cyclobalanopsis group. External phylogeny Quercus forms part, or rather two parts, of the Quercoideae subfamily of the Fagaceae, the beech family, and modern molecular phylogenetics has resolved its relationships to the other genera of that subfamily. Internal phylogeny Molecular techniques for phylogenetic analysis show that the genus Quercus consists of Old World and New World clades. The entire genome of Quercus robur (the pedunculate oak) has been sequenced, revealing an array of mutations that may underlie the evolution of longevity and disease resistance in oaks. In addition, hundreds of oak species have been compared (at RAD-seq loci), allowing a detailed phylogeny to be constructed. However, the high signal of introgressive hybridization (the transfer of genetic material by repeated backcrossing with hybrid offspring) in the genus has made it difficult to resolve an unambiguous, unitary history of oaks. A detailed phylogeny of the genus was published by Hipp et al. in 2019.
Taxonomy Taxonomic history The genus Quercus was circumscribed by Carl Linnaeus in the first edition of his 1753 Species Plantarum. He described 15 species within the new genus, providing type specimens for 10 of these, and giving names but no types for Q. cerris, Q. coccifera, Q. ilex, Q. smilax, and Q. suber. He chose Q. robur, the pedunculate oak, as the type species for the genus. A 2017 classification of Quercus, based on multiple molecular phylogenetic studies, divided the genus into two subgenera and eight sections:
Subgenus Quercus – the New World clade (or high-latitude clade), mostly native to North America
Section Lobatae Loudon – North American red oaks
Section Protobalanus (Trelease) O.Schwarz – North American intermediate oaks
Section Ponticae Stef. – with a disjunct distribution between western Eurasia and western North America
Section Virentes Loudon – American southern live oaks
Section Quercus – white oaks from North America and Eurasia
Subgenus Cerris Oerst. – the Old World clade (or mid-latitude clade), exclusively native to Eurasia
Section Cyclobalanopsis Oerst. – cycle-cup oaks of East Asia
Section Cerris Dumort. – cerris oaks of subtropical and temperate Eurasia and North Africa
Section Ilex Loudon – ilex oaks of tropical and subtropical Eurasia and North Africa
The subgenus division reflects the evolutionary diversification of oaks into two distinct clades: the Old World clade (subgenus Cerris), including oaks that diversified in Eurasia; and the New World clade (subgenus Quercus), oaks that diversified mainly in the Americas. Subgenus Quercus Sect. Lobatae (synonym Erythrobalanus), the red oaks of North America, Central America and northern South America. Styles are long; the acorns mature in 18 months and taste very bitter. The inside of the acorn shell appears woolly. The actual nut is encased in a thin, clinging, papery skin. The leaves typically have sharp lobe tips, with spiny bristles at the lobes. Sect. Protobalanus, the canyon live oak and its relatives, in the southwestern United States and northwest Mexico. Styles are short; the acorns mature in 18 months and taste very bitter. The inside of the acorn shell appears woolly. The leaves typically have sharp lobe tips, with bristles at the lobe tip. Sect. Ponticae, a disjunct section containing just two species. Styles are short, and the acorns mature in 12 months. The leaves have large stipules and prominent secondary veins, and are highly toothed. Sect. Virentes, the southern live oaks of the Americas. Styles are short, and the acorns mature in 12 months. The leaves are evergreen or subevergreen. Sect. Quercus (synonyms Lepidobalanus and Leucobalanus), the white oaks of Europe, Asia and North America. These are trees or shrubs that produce nuts, specifically acorns, as fruits. Acorns mature in one year for annual species and two years for biannual species. The acorn is encapsulated by a spiny cupule, as is characteristic of the family Fagaceae. Flowers are borne one per node, with three or six styles and, correspondingly, three or six ovaries. The leaves mostly lack a bristle on their lobe tips, which are usually rounded. The type species is Quercus robur. Subgenus Cerris The type species is Quercus cerris. Sect. Cyclobalanopsis, the ring-cupped oaks of eastern and southeastern Asia. These are evergreen trees growing tall.
They are distinct from subgenus Quercus in that they have acorns with distinctive cups bearing concrescent rings of scales; they commonly also have densely clustered acorns, though this does not apply to all of the species. Species of Cyclobalanopsis are common in the evergreen subtropical laurel forests, which extend from southern Japan, southern Korea, and Taiwan across southern China and northern Indochina to the eastern Himalayas, in association with trees of the genus Castanopsis and the laurel family (Lauraceae). Sect. Cerris, the Turkey oak and its relatives of Europe and Asia. Styles are long; acorns mature in 18 months and taste very bitter. The inside of the acorn's shell is hairless. Its leaves typically have sharp lobe tips, with bristles at the lobe tip. Sect. Ilex, the Ilex oak and its relatives of Eurasia and northern Africa. Styles are medium-long; acorns mature in 12–24 months and appear hairy on the inside. The leaves are evergreen, with bristle-like extensions on the teeth. Ecology Oaks are keystone species in a wide range of habitats from Mediterranean semi-desert to subtropical rainforest. They are important components of hardwood forests; some species grow in associations with members of the Ericaceae in oak–heath forests. Several kinds of truffles, including two well-known varieties – the black Périgord truffle and the white Piedmont truffle – have symbiotic relationships with oak trees. Similarly, many other fungi, such as Ramaria flavosaponaria, associate with oaks. Oaks support more than 950 species of caterpillars, an important food source for many birds. Mature oak trees shed widely varying numbers of acorns (known collectively as mast) annually, with large quantities in mast years. This may be a predator satiation strategy, increasing the chance that some acorns will survive to germination. Animals including squirrels and jays – Eurasian jays in the Old World, blue jays in North America – feed on acorns, and are important agents of seed dispersal as they carry the acorns away and bury many of them as food stores. However, some species of squirrel selectively excise the embryos from the acorns that they store, meaning that the food store lasts longer and that the acorns will never germinate. Hybridisation Interspecific hybridization is quite common among oaks, but usually between species within the same section only, and most common in the white oak group. White oaks cannot discriminate against pollination by other species in the same section. Because they are wind pollinated and have weak internal barriers to hybridization, crosses readily produce functional seeds and fertile hybrid offspring. Ecological stresses, especially near habitat margins, can also cause a breakdown of mate recognition as well as a reduction of male function (pollen quantity and quality) in one parent species. Frequent hybridization among oaks has consequences for oak populations around the world; most notably, hybridization has produced large populations of hybrids with much introgression and the evolution of new species. Introgression has caused different species in the same populations to share up to 50% of their genetic information. As a result, genetic data often does not differentiate between clearly morphologically distinct species, but instead differentiates populations. The maintenance of particular loci for adaptation to ecological niches may explain the retention of species identity despite significant gene flow.
The Fagaceae, or beech family, to which the oaks belong, is a slowly evolving clade compared to other angiosperms, and the patterns of hybridization and introgression in Quercus pose a significant challenge to the concept of a species as a group of "actually or potentially interbreeding populations which are reproductively isolated from other such groups." By this definition, many species of Quercus would be lumped together according to their geographic and ecological habitat, despite clear distinctions in morphology and genetic data. Diseases and pests Oaks are affected by a large number of pests and diseases. For instance, Q. robur and Q. petraea in Britain host 423 insect species. This diversity includes 106 macro-moths, 83 micro-moths, 67 beetles, 53 cynipoidean wasps, 38 heteropteran bugs, 21 auchenorrhynchan bugs, 17 sawflies, and 15 aphids. The insect numbers are seasonal: in spring, chewing insects such as caterpillars become numerous, followed by insects with sucking mouthparts such as aphids, then by leaf miners, and finally by gall wasps such as Neuroterus. Several powdery mildews affect oak species. In Europe, the species Erysiphe alphitoides is the most common. It reduces the ability of leaves to photosynthesize, and infected leaves are shed early. Another significant threat, the oak processionary moth (Thaumetopoea processionea), has spread in the UK since 2006. The caterpillars of this species defoliate the trees and are hazardous to human health; their bodies are covered with poisonous hairs which can cause rashes and respiratory problems. A little-understood disease of mature oaks, acute oak decline, has affected the UK since 2009. In California, the goldspotted oak borer (Agrilus auroguttatus) has destroyed many oak trees, while sudden oak death, caused by the oomycete pathogen Phytophthora ramorum, has devastated oaks in California and Oregon, and is present in Europe. Japanese oak wilt, caused by the fungus Raffaelea quercivora, has rapidly killed trees across Japan. Gall communities Many galls are found on oak leaves, buds, flowers, and roots. Examples are oak artichoke gall, oak marble gall, oak apple gall, knopper gall, and spangle gall. These galls are produced by tiny wasps of the family Cynipidae. In a complex ecological relationship, the gall wasps are themselves hosts to parasitoid wasps, primarily of the superfamily Chalcidoidea, which lay their eggs inside the gall wasp larvae, ultimately killing them. Additionally, inquilines live commensally within the galls without harming the gall wasps. Toxicity The leaves and acorns of oaks are poisonous to livestock, including cattle and horses, if eaten in large amounts, due to the toxin tannic acid, which causes kidney damage and gastroenteritis. An exception is the domestic pig, which, under the right conditions, may be fed entirely on acorns, and has traditionally been pastured in oak woodlands (such as the Spanish dehesa and the English system of pannage). Humans can eat acorns after leaching out the tannins in water. Uses Timber Oak timber is a strong and hard wood with many uses, such as for furniture, floors, building frames, and veneers. The wood of Quercus cerris (the Turkey oak) has better mechanical properties than those of the white oaks Q. petraea and Q. robur; its heartwood and sapwood have similar mechanical properties. Of the North American red oaks, the northern red oak, Quercus rubra, is highly prized for lumber. The wood is resistant to insect and fungal attack. Wood from Q. robur and Q. petraea was used in Europe for shipbuilding, especially of naval men-of-war, until the 19th century.
In Indian hill states such as Uttarakhand, oak wood is used for fuelwood, timber, and agricultural implements, while the leaves serve as fodder for livestock during lean periods. Other traditional products Oak bark, with its high tannin content, was traditionally used in the Old World for tanning leather. Oak galls, harvested at a specific time of year, were used for centuries as a main ingredient in iron gall ink for manuscripts. In Korea, sawtooth oak bark is used to make shingles for traditional roof construction. The dried bark of the white oak was used in traditional medical preparations; its tannic acid content made it astringent and antiseptic. Acorns have been ground to make flour, or roasted to make acorn coffee. Culinary Barrels for aging wines, sherry, and spirits such as brandy and Scotch whisky are made from oak, with single barrel malt whiskies fetching a premium. The use of oak in wine adds a range of flavours. Oak barrels, which may be charred before use, contribute to their contents' colour, taste, and aroma, imparting a desirable oaky vanillin flavour. A dilemma for wine producers is choosing between French and American oak. French oaks (Quercus robur, Q. petraea) give greater refinement and are chosen for the best, most expensive wines. American oak contributes greater texture and resistance to ageing, but produces a more powerful bouquet. Oak wood chips are used for smoking foods such as fish, meat, and cheese. In Japan, Children's Day is celebrated with rice cakes filled with sweet red bean paste and wrapped in an oak leaf. The bark of the cork oak is used to produce cork stoppers for wine bottles. This species grows around the Mediterranean Sea; Portugal, Spain, Algeria, and Morocco produce most of the world's supply. Acorns of various oak species have been used as food for millennia, in Asia, Europe, the Middle East, North Africa, and among the native peoples of North America. In North Africa, acorns have been pressed to make acorn oil: the oil content can be as high as 30%. Oaks have also been used as fodder, both leaves and acorns being fed to livestock such as pigs. Given their high tannin content, acorns have often been leached to remove tannins before use as fodder. Conservation An estimated 31% of the world's oak species are threatened with extinction, while 41% of oak species are considered to be of conservation concern. The countries with the highest numbers of threatened oak species (as of 2020) are China with 36 species, Mexico with 32 species, Vietnam with 20 species, and the US with 16 species. Leading causes are climate change and invasive pests in the US, and deforestation and urbanization in Asia. In the Himalayan region of India, oak forests are being invaded by pine trees due to global warming; the species associated with pine forest may expand their ranges and establish themselves in the oak forests. Over the past 200 years, large areas of oak forest in the highlands of Mexico, Central America, and the northern Andes have been cleared for coffee plantations and cattle ranching. There is a continuing threat to these forests from exploitation for timber, fuelwood, and charcoal. In the US, entire oak ecosystems have declined due to a combination of factors thought to include fire suppression, increased consumption of acorns by growing mammal populations, herbivory of seedlings, and introduced pests.
However, disturbance-tolerant oaks may have benefited from grazers like bison, and suffered when the bison were removed following European colonization. Culture Symbols The oak is a widely used symbol of strength and endurance. It is the national tree of many countries, including the US, Bulgaria, Croatia, Cyprus (golden oak), Estonia, France, Germany, Moldova, Jordan, Latvia, Lithuania, Poland, Romania, Serbia, and Wales. Ireland's fifth-largest city, Derry, is named for the tree, from the Irish doire, meaning "oak grove". Oak branches are displayed on some German coins, both of the former Deutsche Mark and the euro. Oak leaves symbolize rank in armed forces, including those of the United States. Arrangements of oak leaves, acorns, and sprigs indicate the different staff corps of United States Navy officers. The oak tree is used as a symbol by several political parties and organisations. It is the symbol of the Conservative Party in the United Kingdom, and formerly of the Progressive Democrats in Ireland. Religion The prehistoric Indo-European tribes worshiped the oak and connected it with a thunder god, and this tradition descended to many classical cultures. In Greek mythology, the oak is the tree sacred to Zeus, king of the gods. At Zeus's oracle at Dodona, Epirus, the sacred oak was the centerpiece of the precinct, and the priests would divine the pronouncements of the god by interpreting the rustling of the oak's leaves. Mortals who destroyed such trees were said to be punished by the gods, since the ancient Greeks believed beings called hamadryads inhabited them. In Norse and Baltic mythology, the oak was sacred to the thunder gods Thor and Perkūnas respectively. In Celtic polytheism, the name druid, the Celtic term for priest, is connected to Proto-Indo-European *deru, meaning oak or tree. Veneration of the oak survives in Serbian Orthodox Church tradition. Christmas celebrations include the badnjak, a branch taken from a young and straight oak ceremonially felled early on Christmas Eve morning, similar to a yule log. History Several oak trees hold cultural importance, such as the Royal Oak in Britain, the Charter Oak in the United States, and the Guernica oak in the Basque Country. "The Proscribed Royalist, 1651", a famous painting by John Everett Millais, depicts a Royalist hiding in an oak tree while fleeing from Cromwell's forces. In the Roman Republic, a crown of oak leaves was given to those who had saved the life of a citizen in battle; it was called the "Civic Crown". In his 17th-century poem The Garden, Andrew Marvell critiqued the desire to be awarded such a leafy crown: "How vainly men themselves amaze / To win the palm, the oak, or bays; / And their uncessant labors see / Crowned from some single herb or tree, ..."
Binomial nomenclature
In taxonomy, binomial nomenclature ("two-term naming system"), also called binary nomenclature, is a formal system of naming species of living things by giving each a name composed of two parts, both of which use Latin grammatical forms, although they can be based on words from other languages. Such a name is called a binomial name (often shortened to just "binomial"), a binomen, a binominal name, or a scientific name; more informally, it is also called a Latin name. In the International Code of Zoological Nomenclature (ICZN), the system is also called binominal nomenclature, with an "n" before the "al" in "binominal", which is not a typographic error; it means "two-name naming system". The first part of the name – the generic name – identifies the genus to which the species belongs, whereas the second part – the specific name or specific epithet – distinguishes the species within the genus. For example, modern humans belong to the genus Homo and within this genus to the species Homo sapiens. Tyrannosaurus rex is likely the most widely known binomial. The formal introduction of this system of naming species is credited to Carl Linnaeus, effectively beginning with his work Species Plantarum in 1753. But as early as 1622, Gaspard Bauhin had introduced, in his book Pinax theatri botanici (English: Illustrated exposition of plants), many names of genera that were later adopted by Linnaeus. Binomial nomenclature was introduced in order to provide succinct, relatively stable and verifiable names that could be used and understood internationally, unlike common names, which are usually different in every language. The application of binomial nomenclature is now governed by various internationally agreed codes of rules, of which the two most important are the International Code of Zoological Nomenclature (ICZN) for animals and the International Code of Nomenclature for algae, fungi, and plants (ICNafp or ICN). Although the general principles underlying binomial nomenclature are common to these two codes, there are some differences in the terminology they use and their particular rules. In modern usage, the first letter of the generic name is always capitalized in writing, while that of the specific epithet is not, even when derived from a proper noun such as the name of a person or place. Similarly, both parts are italicized in normal text (or underlined in handwriting). Thus the binomial name of the annual phlox (named after botanist Thomas Drummond) is now written as Phlox drummondii. Often, after a species name is introduced in a text, the generic name is abbreviated to its first letter in subsequent mentions (e.g., P. drummondii). In scientific works, the authority for a binomial name is usually given, at least when it is first mentioned, and the year of publication may be specified. In zoology: "Patella vulgata Linnaeus, 1758". The name "Linnaeus" tells the reader who published the name and description for this species; 1758 is the year the name and original description were published (in this case, in the 10th edition of the book Systema Naturae). "Passer domesticus (Linnaeus, 1758)". The original name given by Linnaeus was Fringilla domestica; the parentheses indicate that the species is now placed in a different genus. The ICZN does not require that the name of the person who changed the genus be given, nor the date on which the change was made, although nomenclatorial catalogs usually include such information. In botany: "Amaranthus retroflexus L." – "L." is the standard abbreviation used for "Linnaeus".
"Hyacinthoides italica (L.) Rothm." – Linnaeus first named this bluebell species Scilla italica; Rothmaler transferred it to the genus Hyacinthoides; the ICNafp does not require that the dates of either publication be specified. Etymology The word binomial is composed of two elements: (Latin prefix meaning 'two') and (the adjective form of , Latin for 'name'). In Medieval Latin, the related word was used to signify one term in a binomial expression in mathematics. In fact, the Latin word may validly refer to either of the epithets in the binomial name, which can equally be referred to as a (pl. ). History Prior to the adoption of the modern binomial system of naming species, a scientific name consisted of a generic name combined with a specific name that was from one to several words long. Together they formed a system of polynomial nomenclature. These names had two separate functions. First, to designate or label the species, and second, to be a diagnosis or description; however, these two goals were eventually found to be incompatible. In a simple genus, containing only two species, it was easy to tell them apart with a one-word genus and a one-word specific name; but as more species were discovered, the names necessarily became longer and unwieldy, for instance, Plantago foliis ovato-lanceolatus pubescentibus, spica cylindrica, scapo tereti ("plantain with pubescent ovate-lanceolate leaves, a cylindric spike and a terete scape"), which we know today as Plantago media. Such "polynomial names" may sometimes look like binomials, but are significantly different. For example, Gerard's herbal (as amended by Johnson) describes various kinds of spiderwort: "The first is called Phalangium ramosum, Branched Spiderwort; the second, Phalangium non ramosum, Unbranched Spiderwort. The other ... is aptly termed Phalangium Ephemerum Virginianum, Soon-Fading Spiderwort of Virginia". The Latin phrases are short descriptions, rather than identifying labels. The Bauhins, in particular Caspar Bauhin (1560–1624), took some important steps towards the binomial system by pruning the Latin descriptions, in many cases to two words. The adoption by biologists of a system of strictly binomial nomenclature is due to Swedish botanist and physician Carl Linnaeus (1707–1778). It was in Linnaeus's 1753 Species Plantarum that he began consistently using a one-word trivial name () after a generic name (genus name) in a system of binomial nomenclature. Trivial names had already appeared in his Critica Botanica (1737) and Philosophia Botanica (1751). This trivial name is what is now known as a specific epithet (ICNafp) or specific name (ICZN). The Bauhins' genus names were retained in many of these, but the descriptive part was reduced to a single word. Linnaeus's trivial names introduced the important new idea that the function of a name could simply be to give a species a unique label, meaning that the name no longer needed to be descriptive. Both parts could, for example, be derived from the names of people. Thus Gerard's Phalangium ephemerum virginianum became Tradescantia virginiana, where the genus name honoured John Tradescant the Younger, an English botanist and gardener. A bird in the parrot family was named Psittacus alexandri, meaning "Alexander's parrot", after Alexander the Great, whose armies introduced eastern parakeets to Greece. Linnaeus's trivial names were much easier to remember and use than the parallel polynomial names, and eventually replaced them. 
Value The value of the binomial nomenclature system derives primarily from its economy, its widespread use, and the uniqueness and stability of names provided by the codes of zoological, botanical, bacterial, and viral nomenclature: Economy. Compared to the polynomial system which it replaced, a binomial name is shorter and easier to remember. It corresponds to the noun-adjective form many vernacular names take to indicate a species within a group (for example, 'brown bear' to refer to a particular type of bear), as well as the widespread system of family name plus given name(s) used to name people in many cultures. Widespread use. The binomial system of nomenclature is governed by international codes and is used by biologists worldwide. A few binomials have also entered common speech, such as Homo sapiens, E. coli, Boa constrictor, Tyrannosaurus rex, and Aloe vera. Uniqueness. Provided that taxonomists agree as to the limits of a species, it can have only one name that is correct under the appropriate nomenclature code, generally the earliest published if two or more names are accidentally assigned to a species. This means the species a binomial name refers to can be clearly identified, as compared to the common names of species, which are usually different in every language. However, establishing that two names actually refer to the same species and then determining which has priority can sometimes be difficult, particularly if the species was named by biologists from different countries. Therefore, a species may have more than one regularly used name; all but one of these names are "synonyms". Furthermore, within zoology or botany, each species name applies to only one species. If a name is used more than once, it is called a homonym. Stability. Although stability is far from absolute, the procedures associated with establishing binomial names, such as the principle of priority, tend to favor stability. For example, when species are transferred between genera (as not uncommonly happens as a result of new knowledge), the second part of the binomial is kept the same (unless it becomes a homonym). For instance, there is disagreement among botanists as to whether the genera Chionodoxa and Scilla are sufficiently different for them to be kept separate. Those who keep them separate give the plant commonly grown in gardens in Europe the name Chionodoxa siehei; those who do not give it the name Scilla siehei. The siehei element is constant. Similarly, if what were previously thought to be two distinct species are demoted to a lower rank, such as subspecies, the second part of the binomial name is retained as a trinomen (the third part of the new name). Thus the Tenerife robin may be treated as a different species from the European robin, in which case its name is Erithacus superbus, or as only a subspecies, in which case its name is Erithacus rubecula superbus. The superbus element of the name is constant, as are its authorship and year of publication.
Some biologists have argued for the combination of the genus name and specific epithet into a single unambiguous name, or for the use of uninomials (as used in nomenclature of ranks above species). Because genus names are unique only within a nomenclature code, it is possible for two or more species to share a genus name – and even a full binomial – if they occur in different kingdoms. At least 1,258 instances of genus name duplication occur (mainly between zoology and botany). Relationship to classification and taxonomy Nomenclature (including binomial nomenclature) is not the same as classification, although the two are related. Classification is the ordering of items into groups based on similarities or differences; in biological classification, species are one of the kinds of item to be classified. In principle, the names given to species could be completely independent of their classification. This is not the case for binomial names, since the first part of a binomial is the name of the genus into which the species is placed. Above the rank of genus, binomial nomenclature and classification are partly independent; for example, a species retains its binomial name if it is moved from one family to another or from one order to another, unless it better fits a different genus in the same or a different family, or it is split from its old genus and placed in a newly created genus. The independence is only partial, since the names of families and other higher taxa are usually based on genera. Taxonomy includes both nomenclature and classification. Its first stages (sometimes called "alpha taxonomy") are concerned with finding, describing and naming species of living or fossil organisms. Binomial nomenclature is thus an important part of taxonomy, as it is the system by which species are named. Taxonomists are also concerned with classification, including its principles, procedures and rules. Derivation of binomial names A complete binomial name is always treated grammatically as if it were a phrase in the Latin language (hence the common use of the term "Latin name" for a binomial name). However, the two parts of a binomial name can each be derived from a number of sources, of which Latin is only one. These include: Latin, from any period, whether classical, medieval or modern. Thus, both parts of the name Homo sapiens are Latin words, meaning "wise" (sapiens) "human/man" (Homo). Classical Greek. The genus Rhododendron was named by Linnaeus from the Greek word rhododendron, itself derived from rhodon, "rose", and dendron, "tree". Greek words are often converted to a Latinized form. Thus coca (the plant from which cocaine is obtained) has the name Erythroxylum coca. Erythroxylum is derived from the Greek words erythros, red, and xylon, wood. The Greek ending -on, when it is neuter, is often converted to the Latin neuter ending -um. Other languages. The second part of the name Erythroxylum coca is derived from kuka, the name of the plant in Aymara and Quechua. Since many dinosaur fossils were found in Mongolia, their names often use Mongolian words, e.g. Tarchia from tarkhi, meaning "brain", or Saichania, meaning "beautiful one". Names of people (often naturalists or biologists). The name Magnolia campbellii commemorates two people: Pierre Magnol, a French botanist, and Archibald Campbell, a doctor in British India. Names of places. The lone star tick, Amblyomma americanum, is widespread in the United States. Other sources.
Some binomial names have been constructed from taxonomic anagrams or other re-orderings of existing names. Thus the name of the genus Muilla is derived by reversing the name Allium. Names may also be derived from jokes or puns. For example, Neal Evenhuis described a number of species of flies in a genus he named Pieza, including Pieza pi, Pieza rhea, Pieza kake, and Pieza deresistans. The first part of the name, which identifies the genus, must be a word that can be treated as a Latin singular noun in the nominative case. It must be unique within the purview of each nomenclatural code, but can be repeated between them. Thus Huia recurvata is an extinct species of plant, found as fossils in Yunnan, China, whereas Huia masonii is a species of frog found in Java, Indonesia. The second part of the name, which identifies the species within the genus, is also treated grammatically as a Latin word. It can have one of a number of forms: The second part of a binomial may be an adjective. If so, the form of the adjective must agree with the genus name in gender. Latin nouns can have three genders – masculine, feminine and neuter – and many Latin adjectives have two or three different endings, depending upon the gender of the noun they refer to. The house sparrow has the binomial name Passer domesticus. Here domesticus ("domestic") simply means "associated with the house". The sacred bamboo is Nandina domestica rather than Nandina domesticus, since Nandina is feminine whereas Passer is masculine. The tropical fruit langsat is a product of the plant Lansium parasiticum, since Lansium is neuter. Some common endings for Latin adjectives in the three genders (masculine, feminine, neuter) are -us, -a, -um (as in the previous example of domesticus); -is, -is, -e (e.g., tristis, meaning "sad"); and -or, -or, -us (e.g., minor, meaning "smaller"). For further information, see Latin declension: Adjectives. The second part of a binomial may be a noun in the nominative case. An example is the binomial name of the lion, which is Panthera leo. Grammatically the noun is said to be in apposition to the genus name, and the two nouns do not have to agree in gender; in this case, Panthera is feminine and leo is masculine. The second part of a binomial may be a noun in the genitive (possessive) case. The genitive case is constructed in a number of ways in Latin, depending on the declension of the noun. Common endings for masculine and neuter nouns are -ii or -i in the singular and -orum in the plural, and for feminine nouns -ae in the singular and -arum in the plural. The noun may be part of a person's name, often the surname, as in the Tibetan antelope (Pantholops hodgsonii), the shrub Magnolia hodgsonii, or the olive-backed pipit (Anthus hodgsoni). The meaning is "of the person named", so Magnolia hodgsonii means "Hodgson's magnolia". The -ii or -i endings show that in each case Hodgson was a man (not the same one); had Hodgson been a woman, the corresponding feminine form (hodgsoniae or hodgsonae) would have been used. The person commemorated in the binomial name is not usually (if ever) the person who created the name; for example, Anthus hodgsoni was named by Charles Wallace Richmond, in honour of Hodgson. Rather than a person, the noun may be related to a place, as with Latimeria chalumnae, meaning "of the Chalumna River". Another use of genitive nouns is in, for example, the name of the bacterium Escherichia coli, where coli means "of the colon". This formation is common in parasites, as in Xenos vesparum, where vesparum means "of the wasps", since Xenos vesparum is a parasite of wasps. Whereas the first part of a binomial name must be unique within the purview of each nomenclatural code, the second part is quite commonly used in two or more genera (as is shown by the examples of hodgsonii above), but cannot be used more than once within a single genus. The full binomial name must be unique within each code.
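The gender-agreement rule for adjectival epithets can be sketched in a few lines of Python. This is a minimal illustration only, covering just the -us/-a/-um adjective class mentioned above; the gender of each genus is an assumption supplied by the caller, and real Latin adjectives have further declension classes and irregularities.

# Sketch: gender agreement for -us/-a/-um Latin adjectives.
# The gender of a genus is an input here, not something the code infers.
GENDER_ENDINGS = {"masculine": "us", "feminine": "a", "neuter": "um"}

def agree(stem, genus_gender):
    # Return the epithet form of an adjective stem that agrees
    # with the grammatical gender of the genus.
    return stem + GENDER_ENDINGS[genus_gender]

agree("domestic", "masculine")   # 'domesticus'  (masculine Passer)
agree("domestic", "feminine")    # 'domestica'   (feminine Nandina)
agree("parasitic", "neuter")     # 'parasiticum' (neuter Lansium)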
Codes From the early 19th century onwards it became ever more apparent that a body of rules was necessary to govern scientific names. In the course of time these became nomenclature codes. The International Code of Zoological Nomenclature (ICZN) governs the naming of animals, the International Code of Nomenclature for algae, fungi, and plants (ICNafp) that of plants (including cyanobacteria), and the International Code of Nomenclature of Bacteria (ICNB) that of bacteria (including Archaea). Virus names are governed by the International Committee on Taxonomy of Viruses (ICTV), a taxonomic code, which determines taxa as well as names. These codes differ in certain ways, e.g.: "Binomial nomenclature" is the correct term for botany, although it is also used by zoologists. Since 1961, "binominal nomenclature" has been the technically correct term in zoology. A binomial name is also called a binomen (plural binomina) or binominal name. Both codes consider the first part of the two-part name for a species to be the "generic name". In the zoological code (ICZN), the second part of the name is a "specific name"; in the botanical code (ICNafp), it is a "specific epithet". Together, these two parts are referred to as a "species name" or "binomen" in the zoological code, and as a "species name", "binomial", or "binary combination" in the botanical code. "Species name" is the only term common to the two codes. The ICNafp, the plant code, does not allow the two parts of a binomial name to be the same (such a name is called a tautonym), whereas the ICZN, the animal code, does. Thus the American bison has the binomen Bison bison; a name of this kind would not be allowed for a plant. The starting points, the time from which these codes are in effect (retroactively), vary from group to group. In botany the starting point will often be 1753 (the year Carl Linnaeus first published Species Plantarum). In zoology the starting point is 1758 (1 January 1758 is considered the date of the publication of Linnaeus's Systema Naturae, 10th edition, and also of Clerck's Aranei Svecici). Bacteriology started anew, with a starting point on 1 January 1980. Unifying the different codes into a single code, the "BioCode", has been suggested, although implementation is not in sight. (There is also a published code for a different system of biotic nomenclature, which does not use ranks above species, but instead names clades. This is called the PhyloCode.) Differences in handling personal names As noted above, there are some differences between the codes in how binomials can be formed; for example, the ICZN allows both parts to be the same, while the ICNafp does not. Another difference is in how personal names are used in forming specific names or epithets. The ICNafp sets out precise rules by which a personal name is to be converted to a specific epithet. In particular, names ending in a consonant (but not "er") are treated as first being converted into Latin by adding "-ius" (for a man) or "-ia" (for a woman), and then being made genitive (i.e. meaning "of that person or persons"). This produces specific epithets like lecardii for Lecard (male), wilsoniae for Wilson (female), and brauniarum for the Braun sisters. By contrast, the ICZN does not require the intermediate creation of a Latin form of a personal name, allowing the genitive ending to be added directly to the personal name. This explains the difference between the names of the plant Magnolia hodgsonii and the bird Anthus hodgsoni. A sketch of the ICNafp conversion is given below.
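The ICNafp conversion just described is mechanical enough to express in code. The following Python sketch implements only the cases named above – consonant-final names latinized with -ius/-ia, and vowel- or "er"-final names taking the genitive ending directly – and is an illustration of the recommendation, not a complete implementation of the code's rules.

# Sketch: forming a genitive epithet from a personal name, ICNafp-style.
def icnafp_genitive_epithet(surname, gender="masculine", plural=False):
    s = surname.lower()
    if s[-1] not in "aeiouy" and not s.endswith("er"):
        s += "i"  # latinize consonant-final names (-ius/-ia share the stem in -i)
    endings = {(False, "masculine"): "i", (False, "feminine"): "ae",
               (True, "masculine"): "orum", (True, "feminine"): "arum"}
    return s + endings[(plural, gender)]

icnafp_genitive_epithet("Lecard")                          # 'lecardii'
icnafp_genitive_epithet("Wilson", "feminine")              # 'wilsoniae'
icnafp_genitive_epithet("Braun", "feminine", plural=True)  # 'brauniarum'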
Furthermore, the ICNafp requires names not published in the form required by the code to be corrected to conform to it, whereas the ICZN is more protective of the form used by the original author. Writing binomial names By tradition, the binomial names of species are usually typeset in italics; for example, Homo sapiens. Generally, the binomial should be printed in a font style different from that used in the normal text; for example, "Several more Homo sapiens fossils were discovered." When handwritten, a binomial name should be underlined; for example, Homo sapiens. The first part of the binomial, the genus name, is always written with an initial capital letter. Older sources, particularly botanical works published before the 1950s, used a different convention: if the second part of the name was derived from a proper noun, e.g., the name of a person or place, a capital letter was used. Thus, the modern form Berberis darwinii was written as Berberis Darwinii. A capital was also used when the name is formed by two nouns in apposition, e.g., Panthera Leo or Centaurea Cyanus. In current usage, the second part is never written with an initial capital. When used with a common name, the scientific name often follows in parentheses, although this varies with publication. For example, "The house sparrow (Passer domesticus) is decreasing in Europe." The binomial name should generally be written in full. The exception to this is when several species from the same genus are being listed or discussed in the same paper or report, or the same species is mentioned repeatedly; in which case the genus is written in full when it is first used, but may then be abbreviated to an initial (and a period/full stop). For example, a list of members of the genus Canis might be written as "Canis lupus, C. aureus, C. simensis". In rare cases, this abbreviated form has spread to more general use; for example, the bacterium Escherichia coli is often referred to as just E. coli, and Tyrannosaurus rex is perhaps even better known simply as T. rex, these two both often appearing in this form in popular writing even where the full genus name has not already been given. The abbreviation "sp." is used when the actual specific name cannot or need not be specified. The abbreviation "spp." (plural) indicates "several species". These abbreviations are not italicised (or underlined). For example: "Canis sp." means "an unspecified species of the genus Canis", while "Canis spp." means "two or more species of the genus Canis". (These abbreviations should not be confused with the abbreviations "ssp." (zoology) or "subsp." (botany), plurals "sspp." or "subspp.", referring to one or more subspecies. See trinomen (zoology) and infraspecific name.) The abbreviation "cf." (i.e., confer in Latin) is used to compare individuals/taxa with known/described species. Conventions for use of the "cf." qualifier vary. In paleontology, it is typically used when the identification is not confirmed. For example, "Corvus cf. nasicus" was used to indicate "a fossil bird similar to the Cuban crow but not certainly identified as this species". In molecular systematics papers, "cf." may be used to indicate one or more undescribed species assumed to be related to a described species. 
For example, in a paper describing the phylogeny of small benthic freshwater fish called darters, five undescribed putative species (Ozark, Sheltowee, Wildcat, Ihiyo, and Mamequit darters), notable for brightly colored nuptial males with distinctive color patterns, were referred to as "Etheostoma cf. spectabile" because they had been viewed as related to, but distinct from, Etheostoma spectabile (orangethroat darter). This view was supported to varying degrees by DNA analysis. The somewhat informal use of taxa names with qualifying abbreviations is referred to as open nomenclature and it is not subject to strict usage codes. In some contexts, the dagger symbol ("†") may be used before or after the binomial name to indicate that the species is extinct. Authority In scholarly texts, at least the first or main use of the binomial name is usually followed by the "authority" – a way of designating the scientist(s) who first published the name. The authority is written in slightly different ways in zoology and botany. For names governed by the ICZN the surname is usually written in full together with the date (normally only the year) of publication. One example of author citation of scientific name is: "Amabela Möschler, 1880." The ICZN recommends that the "original author and date of a name should be cited at least once in each work dealing with the taxon denoted by that name." For names governed by the ICNafp the name is generally reduced to a standard abbreviation and the date omitted. The International Plant Names Index maintains an approved list of botanical author abbreviations. Historically, abbreviations were used in zoology too. When the original name is changed, e.g., the species is moved to a different genus, both codes use parentheses around the original authority; the ICNafp also requires the person who made the change to be given. In the ICNafp, the original name is then called the basionym. Some examples: (Plant) Amaranthus retroflexus L. – "L." is the standard abbreviation for "Linnaeus"; the absence of parentheses shows that this is his original name. (Plant) Hyacinthoides italica (L.) Rothm. – Linnaeus first named the Italian bluebell Scilla italica; that is the basionym. Rothmaler later transferred it to the genus Hyacinthoides. (Animal) Passer domesticus (Linnaeus, 1758) – the original name given by Linnaeus was Fringilla domestica; unlike the ICNafp, the ICZN does not require the name of the person who changed the genus (Mathurin Jacques Brisson) to be given. Other ranks Binomial nomenclature, as described here, is a system for naming species. Implicitly, it includes a system for naming genera, since the first part of the name of the species is a genus name. In a classification system based on ranks, there are also ways of naming ranks above the level of genus and below the level of species. Ranks above genus (e.g., family, order, class) receive one-part names, which are conventionally not written in italics. Thus, the house sparrow, Passer domesticus, belongs to the family Passeridae. Family names are normally based on genus names, although the endings used differ between zoology and botany. Ranks below species receive three-part names, conventionally written in italics like the names of species. There are significant differences between the ICZN and the ICNafp. In zoology, the only formal rank below species is subspecies and the name is written simply as three parts (a trinomen). Thus, one of the subspecies of the olive-backed pipit is Anthus hodgsoni berezowskii. 
Informally, in some circumstances, a form name may be appended. For example, Harmonia axyridis f. spectabilis is the harlequin ladybird in its black or melanic form, which has four large orange or red spots. In botany, there are many ranks below species and although the name itself is written in three parts, a "connecting term" (not part of the name) is needed to show the rank. Thus, the American black elder is Sambucus nigra subsp. canadensis; the white-flowered form of the ivy-leaved cyclamen is Cyclamen hederifolium f. albiflorum.
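The writing conventions described above – capitalized genus, lowercase epithet, the abbreviated genus used after a first full mention, and the open-nomenclature qualifiers "sp.", "spp." and "cf." – are regular enough to check mechanically. The Python sketch below validates a few of these written forms; it is a rough illustration only, not a parser for the full grammar of either nomenclature code.

import re

# Patterns for a few of the written forms discussed above.
FULL   = re.compile(r"^[A-Z][a-z-]+ (?:cf\. )?[a-z-]+$")  # Passer domesticus, Corvus cf. nasicus
ABBREV = re.compile(r"^[A-Z]\. [a-z-]+$")                 # P. domesticus (after first full mention)
OPEN   = re.compile(r"^[A-Z][a-z-]+ spp?\.$")             # Canis sp., Canis spp.

def is_well_formed(name):
    # True if the string matches one of the sketched written forms.
    return any(p.match(name) for p in (FULL, ABBREV, OPEN))

is_well_formed("Passer domesticus")  # True
is_well_formed("P. domesticus")      # True
is_well_formed("Canis spp.")         # True
is_well_formed("passer Domesticus")  # False: genus is capitalized, epithet is not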
Stomach
The stomach is a muscular, hollow organ in the upper gastrointestinal tract of humans and many other animals, including several invertebrates. The stomach has a dilated structure and functions as a vital organ in the digestive system. The stomach is involved in the gastric phase of digestion, following the cephalic phase in which the sight and smell of food and the act of chewing are stimuli. In the stomach a chemical breakdown of food takes place by means of secreted digestive enzymes and gastric acid. The stomach is located between the esophagus and the small intestine. The pyloric sphincter controls the passage of partially digested food (chyme) from the stomach into the duodenum, the first and shortest part of the small intestine, where peristalsis takes over to move this through the rest of the intestines. Structure In the human digestive system, the stomach lies between the esophagus and the duodenum (the first part of the small intestine). It is in the left upper quadrant of the abdominal cavity. The top of the stomach lies against the diaphragm. Lying behind the stomach is the pancreas. A large double fold of visceral peritoneum called the greater omentum hangs down from the greater curvature of the stomach. Two sphincters keep the contents of the stomach contained: the lower esophageal sphincter (found in the cardiac region), at the junction of the esophagus and stomach, and the pyloric sphincter at the junction of the stomach with the duodenum. The stomach is surrounded by parasympathetic (stimulant) and sympathetic (inhibitor) plexuses (networks of blood vessels and nerves: the anterior gastric, posterior gastric, superior and inferior, celiac, and myenteric plexuses), which regulate both the secretory activity of the stomach and the motor (motion) activity of its muscles. The stomach is distensible, and can normally expand to hold about one litre of food. In a newborn human baby the stomach will only be able to hold about 30 millilitres. The maximum stomach volume in adults is between 2 and 4 litres, although volumes of up to 15 litres have been observed in extreme circumstances. Sections The human stomach can be divided into four sections, beginning at the cardia, followed by the fundus, the body, and the pylorus. The gastric cardia is where the contents of the esophagus empty from the gastroesophageal sphincter into the cardiac orifice, the opening into the gastric cardia. A cardiac notch at the left of the cardiac orifice marks the beginning of the greater curvature of the stomach. A horizontal line across from the cardiac notch gives the dome-shaped region called the fundus. The cardia is a very small region of the stomach that surrounds the esophageal opening. The fundus is formed in the upper curved part. The body or corpus is the main, central region of the stomach, and opens into the pylorus. The pylorus connects the stomach to the duodenum at the pyloric sphincter. The cardia is defined as the region following the "z-line" of the gastroesophageal junction, the point at which the epithelium changes from stratified squamous to columnar. Near the cardia is the lower esophageal sphincter. Anatomical proximity The stomach bed refers to the structures upon which the stomach rests in mammals. These include the tail of the pancreas, the splenic artery, the left kidney, the left suprarenal gland, the transverse colon and its mesocolon, the left crus of the diaphragm, and the left colic flexure.
The term was introduced around 1896 by Philip Polson of the Catholic University School of Medicine, Dublin. However, it was brought into disrepute by the surgeon anatomist J. Massey. Blood supply The lesser curvature of the human stomach is supplied by the right gastric artery inferiorly and the left gastric artery superiorly, which also supplies the cardiac region. The greater curvature is supplied by the right gastroepiploic artery inferiorly and the left gastroepiploic artery superiorly. The fundus of the stomach, and also the upper portion of the greater curvature, is supplied by the short gastric arteries, which arise from the splenic artery. Lymphatic drainage The two sets of gastric lymph nodes drain the stomach's tissue fluid into the lymphatic system. Microanatomy Wall Like the other parts of the gastrointestinal wall, the human stomach wall, from inner to outer, consists of a mucosa, submucosa, muscular layer, subserosa and serosa. The inner part of the stomach wall is the gastric mucosa, a mucous membrane that forms the lining of the stomach. The membrane consists of an outer layer of columnar epithelium, a lamina propria, and a thin layer of smooth muscle called the muscularis mucosae. Beneath the mucosa lies the submucosa, consisting of fibrous connective tissue. Meissner's plexus is in this layer, interior to the oblique muscle layer. Outside of the submucosa lies the muscular layer. It consists of three layers of muscular fibres, with fibres lying at angles to each other. These are the inner oblique, middle circular, and outer longitudinal layers. The presence of the inner oblique layer is distinct from other parts of the gastrointestinal tract, which do not possess this layer. The stomach contains the thickest muscular layer of the gastrointestinal tract, consisting of three layers, and thus maximum peristalsis occurs here. The inner oblique layer: This layer is responsible for creating the motion that churns and physically breaks down the food. It is the only layer of the three which is not seen in other parts of the digestive system. The antrum has thicker walls and performs more forceful contractions than the fundus. The middle circular layer: At this layer, the pylorus is surrounded by a thick circular muscular wall, which is normally tonically constricted, forming a functional (if not anatomically discrete) pyloric sphincter, which controls the movement of chyme into the duodenum. This layer is concentric to the longitudinal axis of the stomach. The myenteric plexus (Auerbach's plexus) is found between the outer longitudinal and the middle circular layer and is responsible for the innervation of both (causing peristalsis and mixing). The outer longitudinal layer is responsible for moving the semi-digested food towards the pylorus of the stomach through muscular shortening. To the outside of the muscular layer lies a serosa, consisting of layers of connective tissue continuous with the peritoneum. Smooth mucosa along the inside of the lesser curvature forms a passageway, the gastric canal, that fast-tracks liquids entering the stomach to the pylorus. Glands The mucosa lining the stomach is lined with gastric pits, which receive gastric juice secreted by between 2 and 7 gastric glands. Gastric juice is an acidic fluid containing hydrochloric acid and digestive enzymes. The glands contain a number of cells, with the function of the glands changing depending on their position within the stomach. Within the body and fundus of the stomach lie the fundic glands.
In general, these glands are lined by column-shaped cells that secrete a protective layer of mucus and bicarbonate. Additional cells present include parietal cells that secrete hydrochloric acid and intrinsic factor, chief cells that secrete pepsinogen (a precursor to pepsin; the highly acidic environment converts the pepsinogen to pepsin), and neuroendocrine cells that secrete serotonin. Glands differ where the stomach meets the esophagus and near the pylorus. Near the gastroesophageal junction lie cardiac glands, which primarily secrete mucus. They are fewer in number than the other gastric glands and are more shallowly positioned in the mucosa. There are two kinds: simple tubular glands with short ducts, and compound racemose glands resembling the duodenal Brunner's glands. Near the pylorus lie pyloric glands, located in the antrum of the pylorus. They secrete mucus, as well as gastrin produced by their G cells. Gene and protein expression About 20,000 protein-coding genes are expressed in human cells, and nearly 70% of these genes are expressed in the normal stomach. Just over 150 of these genes are more specifically expressed in the stomach compared to other organs, with only some 20 genes being highly specific. The corresponding specific proteins expressed in the stomach are mainly involved in creating a suitable environment for handling the digestion of food for uptake of nutrients. Highly stomach-specific proteins include gastrokine-1, expressed in the mucosa; pepsinogen and gastric lipase, expressed in gastric chief cells; and a gastric ATPase and gastric intrinsic factor, expressed in parietal cells. Development In the early part of the development of the human embryo, the ventral part of the embryo abuts the yolk sac. During the third week of development, as the embryo grows, it begins to surround parts of the yolk sac. The enveloped portions form the basis for the adult gastrointestinal tract. The sac is surrounded by a network of vitelline arteries and veins. Over time, these arteries consolidate into the three main arteries that supply the developing gastrointestinal tract: the celiac artery, superior mesenteric artery, and inferior mesenteric artery. The areas supplied by these arteries are used to define the foregut, midgut, and hindgut. The surrounded sac becomes the primitive gut. Sections of this gut begin to differentiate into the organs of the gastrointestinal tract, and the esophagus and stomach form from the foregut. As the stomach rotates during early development, the dorsal and ventral mesentery rotate with it; this rotation produces a space anterior to the expanding stomach called the greater sac, and a space posterior to the stomach called the lesser sac. After this rotation the dorsal mesentery thins and forms the greater omentum, which is attached to the greater curvature of the stomach. The ventral mesentery forms the lesser omentum, and is attached to the developing liver. In the adult, these connective structures of omentum and mesentery form the peritoneum, and act as an insulating and protective layer while also supplying organs with blood and lymph vessels as well as nerves. Arterial supply to all these structures is from the celiac trunk, and venous drainage is by the portal venous system. Lymph from these organs is drained to the prevertebral celiac nodes at the origin of the celiac artery from the aorta.
Function Digestion In the human digestive system, a bolus (a small rounded mass of chewed-up food) enters the stomach through the esophagus via the lower esophageal sphincter. The stomach releases proteases (protein-digesting enzymes such as pepsin) and hydrochloric acid, which kills or inhibits bacteria and provides the acidic pH of 2 for the proteases to work. Food is churned by the stomach through peristaltic muscular contractions of the wall, reducing the volume of the bolus and looping it around the fundus and the body of the stomach as the boluses are converted into chyme (partially digested food). Chyme slowly passes through the pyloric sphincter and into the duodenum of the small intestine, where the extraction of nutrients begins. Gastric juice in the stomach also contains pepsinogen. Hydrochloric acid activates this inactive form of the enzyme into the active form, pepsin. Pepsin breaks down proteins into polypeptides. Mechanical digestion Within a few moments after food enters the stomach, mixing waves begin to occur at intervals of approximately 20 seconds. A mixing wave is a unique type of peristalsis that mixes and softens the food with gastric juices to create chyme. The initial mixing waves are relatively gentle, but these are followed by more intense waves, starting at the body of the stomach and increasing in force as they reach the pylorus. The pylorus, which holds around 30 mL of chyme, acts as a filter, permitting only liquids and small food particles to pass through the mostly, but not fully, closed pyloric sphincter. In a process called gastric emptying, rhythmic mixing waves force about 3 mL of chyme at a time through the pyloric sphincter and into the duodenum. Release of a greater amount of chyme at one time would overwhelm the capacity of the small intestine to handle it. The rest of the chyme is pushed back into the body of the stomach, where it continues mixing. This process is repeated when the next mixing waves force more chyme into the duodenum. Gastric emptying is regulated by both the stomach and the duodenum. The presence of chyme in the duodenum activates receptors that inhibit gastric secretion. This prevents additional chyme from being released by the stomach before the duodenum is ready to process it. Chemical digestion The fundus stores both undigested food and gases that are released during the process of chemical digestion. Food may sit in the fundus of the stomach for a while before being mixed with the chyme. While the food is in the fundus, the digestive activities of salivary amylase continue until the food begins mixing with the acidic chyme. Ultimately, mixing waves incorporate this food with the chyme, the acidity of which inactivates salivary amylase and activates lingual lipase. Lingual lipase then begins breaking down triglycerides into free fatty acids, and mono- and diglycerides. The breakdown of protein begins in the stomach through the actions of hydrochloric acid and the enzyme pepsin. The stomach can also produce gastric lipase, which helps digest fat. The contents of the stomach are completely emptied into the duodenum within two to four hours after the meal is eaten. Different types of food take different amounts of time to process. Foods heavy in carbohydrates empty fastest, followed by high-protein foods. Meals with a high triglyceride content remain in the stomach the longest.
Since enzymes in the small intestine digest fats slowly, food can stay in the stomach for 6 hours or longer when the duodenum is processing fatty chyme. However, this is still a fraction of the 24 to 72 hours that full digestion typically takes from start to finish. Absorption Although absorption in the human digestive system is mainly a function of the small intestine, some absorption of certain small molecules nevertheless does occur in the stomach through its lining. This includes: water, if the body is dehydrated; medication, such as aspirin; amino acids; 10–20% of ingested ethanol (e.g. from alcoholic beverages); caffeine; and, to a small extent, water-soluble vitamins (most are absorbed in the small intestine). The parietal cells of the human stomach are responsible for producing intrinsic factor, which is necessary for the absorption of vitamin B12. B12 is used in cellular metabolism and is necessary for the production of red blood cells and the functioning of the nervous system. Control of secretion and motility Chyme from the stomach is slowly released into the duodenum through coordinated peristalsis and opening of the pyloric sphincter. The movement and the flow of chemicals into the stomach are controlled by both the autonomic nervous system and the various digestive hormones of the digestive system. Other than gastrin, these hormones all act to turn off stomach action, in response to food products in the liver and gall bladder that have not yet been absorbed. The stomach needs to push food into the small intestine only when the intestine is not busy. While the intestine is full and still digesting food, the stomach acts as storage for food. Other Effects of EGF Epidermal growth factor (EGF) results in cellular proliferation, differentiation, and survival. EGF is a low-molecular-weight polypeptide first purified from the mouse submandibular gland, but since then found in many human tissues, including the submandibular gland and the parotid gland. Salivary EGF, which also seems to be regulated by dietary inorganic iodine, plays an important physiological role in the maintenance of oro-esophageal and gastric tissue integrity. The biological effects of salivary EGF include healing of oral and gastroesophageal ulcers, inhibition of gastric acid secretion, stimulation of DNA synthesis, and mucosal protection from intraluminal injurious factors such as gastric acid, bile acids, pepsin, and trypsin, and from physical, chemical, and bacterial agents. Stomach as nutrition sensor The human stomach has receptors responsive to sodium glutamate, and this information is passed to the lateral hypothalamus and limbic system in the brain as a palatability signal through the vagus nerve. The stomach can also sense, independently of the tongue and oral taste receptors, glucose, carbohydrates, proteins, and fats. This allows the brain to link the nutritional value of foods to their tastes. Thyrogastric syndrome This syndrome defines the association between thyroid disease and chronic gastritis, which was first described in the 1960s. The term was coined also to indicate the presence of thyroid autoantibodies or autoimmune thyroid disease in patients with pernicious anemia, a late clinical stage of atrophic gastritis.
In 1993, a more complete investigation of the stomach and thyroid was published, reporting that the thyroid is, embryogenetically and phylogenetically, derived from a primitive stomach, and that the thyroid cells, like primitive gastroenteric cells, migrated and specialized in the uptake of iodide and in the storage and elaboration of iodine compounds during vertebrate evolution. In fact, the stomach and thyroid share iodine-concentrating ability and many morphological and functional similarities, such as cell polarity and apical microvilli, similar organ-specific antigens and associated autoimmune diseases, secretion of glycoproteins (thyroglobulin and mucin) and peptide hormones, the ability to digest and reabsorb, and lastly, a similar ability to form iodotyrosines by peroxidase activity, where iodide acts as an electron donor in the presence of H2O2. In the following years, many researchers published reviews about this syndrome. Clinical significance Diseases A series of radiographs can be used to examine the stomach for various disorders. This will often include the use of a barium swallow. Another method of examining the stomach is the use of an endoscope. A gastric emptying study is considered the gold standard for assessing the gastric emptying rate. A large number of studies have indicated that most cases of peptic ulcers and gastritis in humans are caused by Helicobacter pylori infection, and an association has been seen with the development of stomach cancer. A stomach rumble is actually noise from the intestines. Surgery In humans, many bariatric surgery procedures involve the stomach in order to achieve weight loss. A gastric band may be placed around the cardia area, which can adjust to limit intake. The anatomy of the stomach may be modified, or the stomach may be bypassed entirely. Surgical removal of the stomach is called a gastrectomy, and removal of the cardia area is called a cardiectomy. "Cardiectomy" is a term that is also used to describe the removal of the heart. A gastrectomy may be carried out because of gastric cancer or severe perforation of the stomach wall. Fundoplication is stomach surgery in which the fundus is wrapped around the lower esophagus and stitched into place. It is used to treat gastroesophageal reflux disease (GERD). Etymology The word stomach is derived from Greek stomachos (στόμαχος), ultimately from stoma (στόμα) 'mouth'. Gastro- and gastric (meaning 'related to the stomach') are both derived from Greek gaster (γαστήρ) 'belly'. Other animals Although the precise shape and size of the stomach varies widely among different vertebrates, the relative positions of the esophageal and duodenal openings remain relatively constant. As a result, the organ always curves somewhat to the left before curving back to meet the pyloric sphincter. However, lampreys, hagfishes, chimaeras, lungfishes, and some teleost fish have no stomach at all, with the esophagus opening directly into the intestine. These animals all consume diets that require little storage of food, no predigestion with gastric juices, or both. The gastric lining is usually divided into two regions, an anterior portion lined by fundic glands and a posterior portion lined with pyloric glands. Cardiac glands are unique to mammals, and even then are absent in a number of species. The distributions of these glands vary between species, and do not always correspond with the same regions as in humans.
Furthermore, in many non-human mammals, a portion of the stomach anterior to the cardiac glands is lined with epithelium essentially identical to that of the esophagus. Ruminants, in particular, have a complex four-chambered stomach. The first three chambers (rumen, reticulum, and omasum) are all lined with esophageal mucosa, while the final chamber, the abomasum, functions like a monogastric stomach. In birds and crocodilians, the stomach is divided into two regions. Anteriorly there is a narrow tubular region, the proventriculus, lined by fundic glands and connecting the true stomach to the crop. Beyond lies the powerful muscular gizzard, lined by pyloric glands and, in some species, containing stones that the animal swallows to help grind up food. In insects, there is also a crop. The insect stomach is called the midgut. Information about the stomach in echinoderms or molluscs can be found under the respective articles.
Biology and health sciences
Digestive system
null
39776
https://en.wikipedia.org/wiki/Denial-of-service%20attack
Denial-of-service attack
In computing, a denial-of-service attack (DoS attack) is a cyber-attack in which the perpetrator seeks to make a machine or network resource unavailable to its intended users by temporarily or indefinitely disrupting services of a host connected to a network. Denial of service is typically accomplished by flooding the targeted machine or resource with superfluous requests in an attempt to overload systems and prevent some or all legitimate requests from being fulfilled. The range of attacks varies widely, from inundating a server with millions of requests to slow its performance, to overwhelming a server with a substantial amount of invalid data, to submitting requests with an illegitimate IP address. In a distributed denial-of-service attack (DDoS attack), the incoming traffic flooding the victim originates from many different sources. More sophisticated strategies are required to mitigate this type of attack; simply attempting to block a single source is insufficient because there are multiple sources. A DDoS attack is analogous to a group of people crowding the entry door of a shop, making it hard for legitimate customers to enter and thus disrupting trade and losing the business money. Criminal perpetrators of DDoS attacks often target sites or services hosted on high-profile web servers such as banks or credit card payment gateways. Revenge and blackmail, as well as hacktivism, can motivate these attacks. History Panix, the third-oldest ISP in the world, was the target of what is thought to be the first DoS attack. On September 6, 1996, Panix was subject to a SYN flood attack, which brought down its services for several days while hardware vendors, notably Cisco, figured out a proper defense. Another early demonstration of the DoS attack was made by Khan C. Smith in 1997 during a DEF CON event, disrupting Internet access to the Las Vegas Strip for over an hour. The release of sample code during the event led to the online attack of Sprint, EarthLink, E-Trade, and other major corporations in the year to follow. The largest DDoS attack to date happened in September 2017, when Google Cloud experienced an attack with a peak volume of , revealed by Google on October 17, 2020. The record holder was thought to be an attack executed by an unnamed customer of the US-based service provider Arbor Networks, reaching a peak of about . In February 2020, Amazon Web Services experienced an attack with a peak volume of . In July 2021, CDN provider Cloudflare reported protecting a client from a DDoS attack launched by a global Mirai botnet that reached up to 17.2 million requests per second. Russian DDoS-prevention provider Yandex said it blocked an HTTP pipelining DDoS attack on September 5, 2021, that originated from unpatched MikroTik networking gear. In the first half of 2022, the Russian invasion of Ukraine significantly shaped the cyberthreat landscape, with an increase in cyberattacks attributed to both state-sponsored actors and global hacktivist activities. The most notable event was a DDoS attack in February, the largest Ukraine has encountered, disrupting government and financial sector services. This wave of cyber aggression extended to Western allies like the UK, the US, and Germany. In particular, the UK's financial sector saw an increase in DDoS attacks from nation-state actors and hacktivists, aimed at undermining Ukraine's allies. In February 2023, Cloudflare faced a 71 million requests per second attack, which it claimed was the largest HTTP DDoS attack at the time.
HTTP DDoS attacks are measured by HTTP requests per second instead of packets per second or bits per second. On July 10, 2023, the fanfiction platform Archive of Our Own (AO3) faced DDoS attacks, disrupting services. Anonymous Sudan claimed the attack, citing religious and political reasons, but the claim was viewed skeptically by AO3 and experts. Flashpoint, a threat intelligence vendor, noted the group's past activities but doubted their stated motives. AO3, supported by the non-profit Organization for Transformative Works (OTW) and reliant on donations, was unlikely to meet the $30,000 Bitcoin ransom. In August 2023, the hacktivist group NoName057 targeted several Italian financial institutions through the execution of slow DoS attacks. On 14 January 2024, they executed a DDoS attack on Swiss federal websites, prompted by President Zelensky's attendance at the Davos World Economic Forum. Switzerland's National Cyber Security Centre quickly mitigated the attack, ensuring core federal services remained secure, despite temporary accessibility issues on some websites. In October 2023, exploitation of a new vulnerability in the HTTP/2 protocol resulted in the record for the largest HTTP DDoS attack being broken twice, once with a 201 million requests per second attack observed by Cloudflare, and again with a 398 million requests per second attack observed by Google. In August 2024, Global Secure Layer observed and reported on a record-breaking packet DDoS at 3.15 billion packets per second, which targeted an undisclosed number of unofficial Minecraft game servers. In October 2024, the Internet Archive faced two severe DDoS attacks that brought the site completely offline, immediately following a previous attack that leaked records of over 31 million of the site's users. The hacktivist group SN_Blackmeta claimed the DDoS attack as retribution for American involvement in the Israel–Hamas war, despite the Internet Archive being unaffiliated with the United States government; however, the group's link with the preceding data leak remains unclear. Types Denial-of-service attacks are characterized by an explicit attempt by attackers to prevent legitimate use of a service. There are two general forms of DoS attacks: those that crash services and those that flood services. The most serious attacks are distributed. A distributed denial-of-service (DDoS) attack occurs when multiple systems flood the bandwidth or resources of a targeted system, usually one or more web servers. A DDoS attack uses more than one unique IP address or machine, often drawing on thousands of hosts infected with malware. A distributed denial-of-service attack typically involves more than around 3–5 nodes on different networks; fewer nodes may qualify as a DoS attack but not as a DDoS attack. Multiple attack machines can generate more attack traffic than a single machine and are harder to disable, and the behavior of each attack machine can be stealthier, making the attack harder to track and shut down. Since the incoming traffic flooding the victim originates from different sources, it may be impossible to stop the attack simply by using ingress filtering. Spreading the traffic across multiple points of origin also makes it difficult to distinguish legitimate user traffic from attack traffic. As an alternative or an augmentation of a DDoS, attacks may involve forging of IP sender addresses (IP address spoofing), further complicating the identification and defeat of the attack. These attacker advantages cause challenges for defense mechanisms.
For example, merely purchasing more incoming bandwidth than the current volume of the attack might not help, because the attacker might be able to simply add more attack machines. The scale of DDoS attacks has continued to rise over recent years, by 2016 exceeding a terabit per second. Some common examples of DDoS attacks are UDP flooding, SYN flooding and DNS amplification. Yo-yo attack A yo-yo attack is a specific type of DoS/DDoS aimed at cloud-hosted applications which use autoscaling. The attacker generates a flood of traffic until a cloud-hosted service scales outwards to handle the increase in traffic, then halts the attack, leaving the victim with over-provisioned resources. When the victim scales back down, the attack resumes, causing resources to scale back up again. This can result in a reduced quality of service during the periods of scaling up and down and in a financial drain on resources during periods of over-provisioning, while costing the attacker less than a normal DDoS attack, since traffic only needs to be generated for a portion of the attack period. Application layer attacks An application layer DDoS attack (sometimes referred to as a layer 7 DDoS attack) is a form of DDoS attack where attackers target application-layer processes. The attack over-exercises specific functions or features of a website with the intention to disable those functions or features. This application-layer attack is different from an entire network attack, and is often used against financial institutions to distract IT and security personnel from security breaches. In 2013, application-layer DDoS attacks represented 20% of all DDoS attacks. According to research by Akamai Technologies, there were "51 percent more application layer attacks" from Q4 2013 to Q4 2014 and "16 percent more" from Q3 2014 to Q4 2014. In November 2017, Junade Ali, an engineer at Cloudflare, noted that while network-level attacks continued to be of high capacity, they were occurring less frequently. Ali further noted that although network-level attacks were becoming less frequent, data from Cloudflare demonstrated that application-layer attacks were still showing no sign of slowing down. Application layer The OSI model (ISO/IEC 7498-1) is a conceptual model that characterizes and standardizes the internal functions of a communication system by partitioning it into abstraction layers. The model is a product of the Open Systems Interconnection project at the International Organization for Standardization (ISO). The model groups similar communication functions into one of seven logical layers. A layer serves the layer above it and is served by the layer below it. For example, a layer that provides error-free communications across a network provides the communications path needed by applications above it, while it calls the next lower layer to send and receive packets that traverse that path. In the OSI model, the definition of its application layer is narrower in scope than is often implemented. The OSI model defines the application layer as being the user interface. The OSI application layer is responsible for displaying data and images to the user in a human-recognizable format and for interfacing with the presentation layer below it. In an implementation, the application and presentation layers are frequently combined.
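One simple way to make the idea of an application-layer flood concrete is the per-source request-rate counter that a basic layer-7 defense might keep. The sketch below is only illustrative; the 10-second window and 100-request limit are assumed thresholds, not values taken from any standard or from the sources above:

```python
from collections import defaultdict, deque
import time

WINDOW = 10.0  # seconds; illustrative sliding-window length
LIMIT = 100    # illustrative per-IP request cap within the window

hits = defaultdict(deque)  # source IP -> timestamps of recent requests

def is_suspect(ip, now=None):
    """Record one request from `ip`; return True once it exceeds the cap."""
    now = time.monotonic() if now is None else now
    q = hits[ip]
    q.append(now)
    while q and now - q[0] > WINDOW:  # evict timestamps outside the window
        q.popleft()
    return len(q) > LIMIT

# A burst of 150 requests in 1.5 simulated seconds from one address trips it.
print(any(is_suspect("203.0.113.7", now=i * 0.01) for i in range(150)))  # True
```

Real mitigations are far more involved, weighing URIs, sessions, and request behavior (as the key-completion-indicator discussion later in this article describes), but the window-and-threshold core is the same.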
Method of attack The simplest DoS attack relies primarily on brute force, flooding the target with an overwhelming flux of packets, oversaturating its connection bandwidth or depleting the target's system resources. Bandwidth-saturating floods rely on the attacker's ability to generate the overwhelming flux of packets. A common way of achieving this today is via distributed denial-of-service, employing a botnet. An application layer DDoS attack is mounted mainly for specific targeted purposes, including disrupting transactions and access to databases. It requires fewer resources than network layer attacks but often accompanies them. An attack may be disguised to look like legitimate traffic, except it targets specific application packets or functions. The attack on the application layer can disrupt services such as the retrieval of information or search functions on a website. Advanced persistent DoS An advanced persistent DoS (APDoS) is associated with an advanced persistent threat and requires specialized DDoS mitigation. These attacks can persist for weeks; the longest continuous period noted so far lasted 38 days. This attack involved approximately 50+ petabits (50,000+ terabits) of malicious traffic. Attackers in this scenario may tactically switch between several targets to create a diversion to evade defensive DDoS countermeasures, all the while eventually concentrating the main thrust of the attack onto a single victim. In this scenario, attackers with continuous access to several very powerful network resources are capable of sustaining a prolonged campaign generating enormous levels of unamplified DDoS traffic. APDoS attacks are characterized by: advanced reconnaissance (pre-attack OSINT and extensive decoyed scanning crafted to evade detection over long periods); tactical execution (attack with both primary and secondary victims, but the focus is on primary); explicit motivation (a calculated end game/goal target); large computing capacity (access to substantial computer power and network bandwidth); simultaneous multi-threaded OSI layer attacks (sophisticated tools operating at layers 3 through 7); and persistence over extended periods (combining all the above into a concerted, well-managed attack across a range of targets). Denial-of-service as a service Some vendors provide so-called booter or stresser services, which have simple web-based front ends, and accept payment over the web. Marketed and promoted as stress-testing tools, they can be used to perform unauthorized denial-of-service attacks, and allow technically unsophisticated attackers access to sophisticated attack tools. Usually powered by a botnet, the traffic produced by a consumer stresser can range anywhere from 5–50 Gbit/s, which can, in most cases, deny the average home user internet access. Markov-modulated denial-of-service attack A Markov-modulated denial-of-service attack occurs when the attacker disrupts control packets using a hidden Markov model. A setting in which Markov-model based attacks are prevalent is online gaming, as the disruption of control packets undermines game play and system functionality. Symptoms The United States Computer Emergency Readiness Team (US-CERT) has identified symptoms of a denial-of-service attack to include: unusually slow network performance (opening files or accessing websites), unavailability of a particular website, or inability to access any website.
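These symptoms can be checked crudely from the outside with a timing probe. The following sketch is an assumption-laden illustration (the target URL, the 5-second timeout, and the 2-second "slow" threshold are all arbitrary choices), not a diagnostic tool from US-CERT:

```python
import time
import urllib.request

def probe(url, timeout=5.0, slow_after=2.0):
    """Fetch url once and classify the response as ok, slow, or unavailable."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read(1024)  # read a little to confirm data actually flows
    except Exception as exc:
        return f"unavailable ({type(exc).__name__})"
    elapsed = time.monotonic() - start
    return "slow" if elapsed > slow_after else f"ok ({elapsed:.2f}s)"

print(probe("https://example.com"))  # illustrative target
```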
Attack techniques Attack tools In cases such as MyDoom and Slowloris, the tools are embedded in malware and launch their attacks without the knowledge of the system owner. Stacheldraht is a classic example of a DDoS tool. It uses a layered structure where the attacker uses a client program to connect to handlers, which are compromised systems that issue commands to the zombie agents, which in turn facilitate the DDoS attack. Agents are compromised via the handlers by the attacker, using automated routines to exploit vulnerabilities in programs that accept remote connections running on the targeted remote hosts. Each handler can control up to a thousand agents. In other cases a machine may become part of a DDoS attack with the owner's consent, for example, in Operation Payback organized by the group Anonymous. The Low Orbit Ion Cannon has typically been used in this way. Along with the High Orbit Ion Cannon, a wide variety of DDoS tools are available today, including paid and free versions, with different features available. There is an underground market for these in hacker-related forums and IRC channels. Application-layer attacks Application-layer attacks employ DoS-causing exploits and can cause server-running software to fill the disk space or consume all available memory or CPU time. Attacks may use specific packet types or connection requests to saturate finite resources by, for example, occupying the maximum number of open connections or filling the victim's disk space with logs. An attacker with shell-level access to a victim's computer may slow it until it is unusable or crash it by using a fork bomb. Another kind of application-level DoS attack is XDoS (or XML DoS), which can be controlled by modern web application firewalls (WAFs). All attacks belonging to the category of timeout-exploiting slow DoS attacks implement an application-layer attack. Examples of such threats are Slowloris, which establishes pending connections with the victim, and SlowDroid, an attack running on mobile devices. Another target of DDoS attacks may be to produce added costs for the application operator, when the latter uses resources based on cloud computing. In this case, normally application-used resources are tied to a needed quality of service (QoS) level (e.g. responses should be less than 200 ms), and this rule is usually linked to automated software (e.g. Amazon CloudWatch) to raise more virtual resources from the provider to meet the defined QoS levels for the increased requests. The main incentive behind such attacks may be to drive the application owner to raise the elasticity levels to handle the increased application traffic, to cause financial losses, or to force them to become less competitive. A banana attack is another particular type of DoS. It involves redirecting outgoing messages from the client back onto the client, preventing outside access, as well as flooding the client with the sent packets. A LAND attack is of this type. Degradation-of-service attacks Pulsing zombies are compromised computers that are directed to launch intermittent and short-lived floodings of victim websites with the intent of merely slowing them rather than crashing them. This type of attack, referred to as degradation-of-service, can be more difficult to detect and can disrupt and hamper connection to websites for prolonged periods of time, potentially causing more overall disruption than a denial-of-service attack.
Exposure of degradation-of-service attacks is complicated further by the matter of discerning whether the server is really being attacked or is experiencing higher than normal legitimate traffic loads. Distributed DoS attack If an attacker mounts an attack from a single host, it would be classified as a DoS attack. Any attack against availability would be classed as a denial-of-service attack. On the other hand, if an attacker uses many systems to simultaneously launch attacks against a remote host, this would be classified as a DDoS attack. Malware can carry DDoS attack mechanisms; one of the better-known examples of this was MyDoom. Its DoS mechanism was triggered on a specific date and time. This type of DDoS involved hardcoding the target IP address before releasing the malware, and no further interaction was necessary to launch the attack. A system may also be compromised with a trojan containing a zombie agent. Attackers can also break into systems using automated tools that exploit flaws in programs that listen for connections from remote hosts. This scenario primarily concerns systems acting as servers on the web. These attacks can use different types of internet packets, such as TCP, UDP, ICMP, etc. Such collections of compromised systems are known as botnets. DDoS tools like Stacheldraht still use classic DoS attack methods centered on IP spoofing and amplification, like smurf attacks and fraggle attacks (types of bandwidth consumption attacks). SYN floods (a resource starvation attack) may also be used. Newer tools can use DNS servers for DoS purposes. Unlike MyDoom's DDoS mechanism, botnets can be turned against any IP address. Script kiddies use them to deny the availability of well-known websites to legitimate users. More sophisticated attackers use DDoS tools for the purposes of extortion, including against their business rivals. There have been reports of new attacks from internet of things (IoT) devices that have been involved in denial-of-service attacks. One noted attack peaked at around 20,000 requests per second and came from around 900 CCTV cameras. UK's GCHQ has tools built for DDoS, named PREDATORS FACE and ROLLING THUNDER. Simple attacks such as SYN floods may appear with a wide range of source IP addresses, giving the appearance of a distributed DoS. These flood attacks do not require completion of the TCP three-way handshake and attempt to exhaust the destination SYN queue or the server bandwidth. Because the source IP addresses can be trivially spoofed, an attack could come from a limited set of sources, or may even originate from a single host. Stack enhancements such as SYN cookies may be effective mitigation against SYN queue flooding but do not address bandwidth exhaustion. In 2022, TCP attacks were the leading method in DDoS incidents, accounting for 63% of all DDoS activity. This includes tactics like TCP SYN, TCP ACK, and TCP floods.
Since TCP is the most widespread networking protocol, attacks on it are expected to remain prevalent in the DDoS threat scene. DDoS extortion In 2015, DDoS botnets such as DD4BC grew in prominence, taking aim at financial institutions. Cyber-extortionists typically begin with a low-level attack and a warning that a larger attack will be carried out if a ransom is not paid in bitcoin. Security experts recommend that targeted websites not pay the ransom. The attackers tend to get into an extended extortion scheme once they recognize that the target is ready to pay. HTTP slow POST DoS attack First discovered in 2009, the HTTP slow POST attack sends a complete, legitimate HTTP POST header, which includes a Content-Length field to specify the size of the message body to follow. However, the attacker then proceeds to send the actual message body at an extremely slow rate (e.g. 1 byte/110 seconds). Due to the entire message being correct and complete, the target server will attempt to obey the Content-Length field in the header and wait for the entire body of the message to be transmitted, which can take a very long time. The attacker establishes hundreds or even thousands of such connections until all resources for incoming connections on the victim server are exhausted, making any further connections impossible until all data has been sent. It is notable that unlike many other DoS or DDoS attacks, which try to subdue the server by overloading its network or CPU, an HTTP slow POST attack targets the logical resources of the victim, which means the victim would still have enough network bandwidth and processing power to operate. Combined with the fact that the Apache HTTP Server will, by default, accept requests up to 2 GB in size, this attack can be particularly powerful. HTTP slow POST attacks are difficult to differentiate from legitimate connections and are therefore able to bypass some protection systems. OWASP, an open source web application security project, released a tool to test the security of servers against this type of attack. Challenge Collapsar (CC) attack A Challenge Collapsar (CC) attack is an attack where standard HTTP requests are sent to a targeted web server frequently. The Uniform Resource Identifiers (URIs) in the requests require complicated time-consuming algorithms or database operations which may exhaust the resources of the targeted web server. In 2004, a Chinese hacker nicknamed KiKi invented a hacking tool to send these kinds of requests to attack an NSFOCUS firewall named Collapsar, and thus the hacking tool was known as Challenge Collapsar, or CC for short. Consequently, this type of attack got the name CC attack. Internet Control Message Protocol (ICMP) flood A smurf attack relies on misconfigured network devices that allow packets to be sent to all computer hosts on a particular network via the broadcast address of the network, rather than a specific machine. The attacker will send large numbers of IP packets with the source address faked to appear to be the address of the victim. Most devices on a network will, by default, respond to this by sending a reply to the source IP address. If the number of machines on the network that receive and respond to these packets is very large, the victim's computer will be flooded with traffic. This overloads the victim's computer and can even make it unusable during such an attack. Ping flood is based on sending the victim an overwhelming number of ping packets, usually using the ping command from Unix-like hosts.
It is very simple to launch, the primary requirement being access to greater bandwidth than the victim. Ping of death is based on sending the victim a malformed ping packet, which will lead to a system crash on a vulnerable system. The BlackNurse attack is an example of an attack taking advantage of the required Destination Port Unreachable ICMP packets. Nuke A nuke is an old-fashioned denial-of-service attack against computer networks consisting of fragmented or otherwise invalid ICMP packets sent to the target, achieved by using a modified ping utility to repeatedly send this corrupt data, thus slowing down the affected computer until it comes to a complete stop. A specific example of a nuke attack that gained some prominence is the WinNuke, which exploited the vulnerability in the NetBIOS handler in Windows 95. A string of out-of-band data was sent to TCP port 139 of the victim's machine, causing it to lock up and display a Blue Screen of Death. Peer-to-peer attacks Attackers have found a way to exploit a number of bugs in peer-to-peer servers to initiate DDoS attacks. The most aggressive of these peer-to-peer DDoS attacks exploits DC++. With peer-to-peer there is no botnet and the attacker does not have to communicate with the clients it subverts. Instead, the attacker acts as a puppet master, instructing clients of large peer-to-peer file sharing hubs to disconnect from their peer-to-peer network and to connect to the victim's website instead. Permanent denial-of-service attacks Permanent denial-of-service (PDoS), also known loosely as phlashing, is an attack that damages a system so badly that it requires replacement or reinstallation of hardware. Unlike the distributed denial-of-service attack, a PDoS attack exploits security flaws which allow remote administration on the management interfaces of the victim's hardware, such as routers, printers, or other networking hardware. The attacker uses these vulnerabilities to replace a device's firmware with a modified, corrupt, or defective firmware image, a process which when done legitimately is known as flashing. The intent is to brick the device, rendering it unusable for its original purpose until it can be repaired or replaced. The PDoS is a pure hardware-targeted attack that can be much faster and requires fewer resources than using a botnet in a DDoS attack. Because of these features, and the potential and high probability of security exploits on network-enabled embedded devices, this technique has come to the attention of numerous hacking communities. BrickerBot, a piece of malware that targeted IoT devices, used PDoS attacks to disable its targets. PhlashDance is a tool created by Rich Smith (an employee of Hewlett-Packard's Systems Security Lab) used to detect and demonstrate PDoS vulnerabilities at the 2008 EUSecWest Applied Security Conference in London, UK. Reflected attack A distributed denial-of-service attack may involve sending forged requests of some type to a very large number of computers that will reply to the requests. Using Internet Protocol address spoofing, the source address is set to that of the targeted victim, which means all the replies will go to (and flood) the target. This reflected attack form is sometimes called a distributed reflective denial-of-service (DRDoS) attack.
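Some back-of-the-envelope arithmetic shows why reflection is attractive to attackers. All figures in this sketch are hypothetical; the point is only that small spoofed requests, multiplied by the reply-to-request size ratio, yield much larger traffic at the victim:

```python
# Hypothetical reflector scenario: each 60-byte spoofed request elicits a
# 3,000-byte reply, and the attacker sustains 10,000 requests per second.
request_rate = 10_000  # spoofed requests per second
request_size = 60      # bytes per request
reply_size = 3_000     # bytes per reply sent to the victim

amplification = reply_size / request_size
victim_bps = request_rate * reply_size * 8  # bits per second at the victim

print(f"amplification factor: {amplification:.0f}x")        # 50x
print(f"traffic at victim: {victim_bps / 1e6:.0f} Mbit/s")  # 240 Mbit/s
```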
ICMP echo request attacks (Smurf attacks) can be considered one form of reflected attack, as the flooding hosts send Echo Requests to the broadcast addresses of mis-configured networks, thereby enticing hosts to send Echo Reply packets to the victim. Some early DDoS programs implemented a distributed form of this attack. Amplification Amplification attacks are used to magnify the bandwidth that is sent to a victim. Many services can be exploited to act as reflectors, some harder to block than others. US-CERT has observed that different services may result in different amplification factors. DNS amplification attacks involve an attacker sending a DNS name lookup request to one or more public DNS servers, spoofing the source IP address of the targeted victim. The attacker tries to request as much information as possible, thus amplifying the DNS response that is sent to the targeted victim. Since the size of the request is significantly smaller than the response, the attacker is easily able to increase the amount of traffic directed at the target. Simple Network Management Protocol (SNMP) and Network Time Protocol (NTP) can also be exploited as reflectors in an amplification attack. An example of an amplified DDoS attack through NTP is the monlist command, which sends the details of the last 600 hosts that have requested the time from the NTP server back to the requester. A small request to this time server can be sent using a spoofed source IP address of some victim, which results in a response 556.9 times the size of the request being sent to the victim. This becomes amplified when using botnets that all send requests with the same spoofed IP source, which will result in a massive amount of data being sent back to the victim. It is very difficult to defend against these types of attacks because the response data is coming from legitimate servers. These attack requests are also sent through UDP, which does not require a connection to the server. This means that the source IP is not verified when a request is received by the server. To bring awareness to these vulnerabilities, campaigns have been started that are dedicated to finding amplification vectors, which has led to people fixing their resolvers or having the resolvers shut down completely. Mirai botnet The Mirai botnet works by using a computer worm to infect hundreds of thousands of IoT devices across the internet. The worm propagates through networks and systems, taking control of poorly protected IoT devices such as thermostats, Wi-Fi-enabled clocks, and washing machines. The owner or user will usually have no immediate indication of when the device becomes infected. The IoT device itself is not the direct target of the attack; it is used as a part of a larger attack. Once the hacker has enslaved the desired number of devices, they instruct the devices to try to contact the target. In October 2016, a Mirai botnet attacked Dyn, the DNS provider for sites such as Twitter, Netflix, etc. As soon as this occurred, these websites were all unreachable for several hours. R-U-Dead-Yet? (RUDY) RUDY attack targets web applications by starvation of available sessions on the web server. Much like Slowloris, RUDY keeps sessions halted by using never-ending POST transmissions and sending an arbitrarily large Content-Length header value.
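A common server-side counter to RUDY-style slow transmissions is to enforce a minimum transfer rate per connection. The sketch below illustrates the idea only; the 100 bytes-per-second floor, the 5-second grace period, and the `read_body` helper are assumptions, not part of any particular server:

```python
import time

MIN_RATE = 100.0  # bytes/s; assumed floor below which a body is "too slow"
GRACE = 5.0       # seconds before the rate check starts applying

def read_body(conn, content_length, chunk=4096):
    """Read a request body from a socket, dropping clients that trickle it."""
    received, start = 0, time.monotonic()
    while received < content_length:
        data = conn.recv(min(chunk, content_length - received))
        if not data:
            raise ConnectionError("client closed the connection early")
        received += len(data)
        elapsed = time.monotonic() - start
        if elapsed > GRACE and received / elapsed < MIN_RATE:
            raise TimeoutError("body arriving too slowly; dropping connection")
    return received
```

Production web servers commonly expose equivalent knobs as configurable request-body timeouts and minimum-rate settings.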
SACK Panic Manipulating the maximum segment size and selective acknowledgement (SACK) may be used by a remote peer to cause a denial of service by an integer overflow in the Linux kernel, potentially causing a kernel panic. Jonathan Looney discovered the vulnerability on June 17, 2019. Shrew attack The shrew attack is a denial-of-service attack on the Transmission Control Protocol where the attacker employs man-in-the-middle techniques. It exploits a weakness in TCP's re-transmission timeout mechanism, using short synchronized bursts of traffic to disrupt TCP connections on the same link. Slow read attack A slow read attack sends legitimate application layer requests but reads responses very slowly, keeping connections open longer in the hope of exhausting the server's connection pool. The slow read is achieved by advertising a very small number for the TCP Receive Window size and at the same time emptying the client's TCP receive buffer slowly, which causes a very low data flow rate. Sophisticated low-bandwidth Distributed Denial-of-Service Attack A sophisticated low-bandwidth DDoS attack is a form of DoS that uses less traffic and increases its effectiveness by aiming at a weak point in the victim's system design, i.e., the attacker sends traffic consisting of complicated requests to the system. Essentially, a sophisticated DDoS attack is lower in cost due to its use of less traffic, is smaller in size making it more difficult to identify, and has the ability to hurt systems which are protected by flow control mechanisms. SYN flood A SYN flood occurs when a host sends a flood of TCP/SYN packets, often with a forged sender address. Each of these packets is handled like a connection request, causing the server to spawn a half-open connection, send back a TCP/SYN-ACK packet, and wait for a packet in response from the sender address. However, because the sender's address is forged, the response never comes. These half-open connections exhaust the available connections the server can make, keeping it from responding to legitimate requests until after the attack ends. Teardrop attacks A teardrop attack involves sending mangled IP fragments with overlapping, oversized payloads to the target machine. This can crash various operating systems because of a bug in their TCP/IP fragmentation re-assembly code. Windows 3.1x, Windows 95 and Windows NT operating systems, as well as versions of Linux prior to versions 2.0.32 and 2.1.63, are vulnerable to this attack. One of the fields in an IP header is the fragment offset field, indicating the starting position, or offset, of the data contained in a fragmented packet relative to the data in the original packet. If the sum of the offset and size of one fragmented packet differs from that of the next fragmented packet, the packets overlap. When this happens, a server vulnerable to teardrop attacks is unable to reassemble the packets, resulting in a denial-of-service condition. Telephony denial-of-service Voice over IP has made abusive origination of large numbers of telephone voice calls inexpensive and easily automated, while permitting call origins to be misrepresented through caller ID spoofing. According to the US Federal Bureau of Investigation, telephony denial-of-service (TDoS) has appeared as part of various fraudulent schemes: A scammer contacts the victim's banker or broker, impersonating the victim to request a funds transfer.
The banker's attempt to contact the victim for verification of the transfer fails as the victim's telephone lines are being flooded with bogus calls, rendering the victim unreachable. A scammer contacts consumers with a bogus claim to collect an outstanding payday loan for thousands of dollars. When the consumer objects, the scammer retaliates by flooding the victim's employer with automated calls. In some cases, the displayed caller ID is spoofed to impersonate police or law enforcement agencies. Swatting: A scammer contacts consumers with a bogus debt collection demand and threatens to send police; when the victim balks, the scammer floods local police numbers with calls on which caller ID is spoofed to display the victim's number. Police soon arrive at the victim's residence attempting to find the origin of the calls. TDoS can exist even without Internet telephony. In the 2002 New Hampshire Senate election phone jamming scandal, telemarketers were used to flood political opponents with spurious calls to jam phone banks on election day. Widespread publication of a number can also flood it with enough calls to render it unusable, as happened by accident in 1981, when subscribers of 867-5309 numbers in multiple area codes were inundated by hundreds of calls daily in response to the song "867-5309/Jenny". TDoS differs from other telephone harassment (such as prank calls and obscene phone calls) by the number of calls originated. By occupying lines continuously with repeated automated calls, the victim is prevented from making or receiving both routine and emergency telephone calls. Related exploits include SMS flooding attacks and black fax or continuous fax transmission by using a loop of paper at the sender. TTL expiry attack It takes more router resources to drop a packet with a TTL value of 1 or less than it does to forward a packet with a higher TTL value. When a packet is dropped due to TTL expiry, the router CPU must generate and send an ICMP time exceeded response. Generating many of these responses can overload the router's CPU. UPnP attack A UPnP attack uses an existing vulnerability in the Universal Plug and Play (UPnP) protocol to get past network security and flood a target's network and servers. The attack is based on a DNS amplification technique, but the attack mechanism is a UPnP router that forwards requests from one outer source to another. The UPnP router returns the data on an unexpected UDP port from a bogus IP address, making it harder to take simple action to shut down the traffic flood. According to Imperva researchers, the most effective way to stop this attack is for companies to lock down UPnP routers. SSDP reflection attack In 2014, it was discovered that the Simple Service Discovery Protocol (SSDP) was being used in DDoS attacks known as an SSDP reflection attack with amplification. Many devices, including some residential routers, have a vulnerability in the UPnP software that allows an attacker to get replies from UDP port 1900 to a destination address of their choice. With a botnet of thousands of devices, the attackers can generate sufficient packet rates and occupy bandwidth to saturate links, causing the denial of services. Because of this weakness, the network company Cloudflare has described SSDP as the "Stupidly Simple DDoS Protocol".
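Administrators can audit their own equipment for this exposure by sending a standard SSDP M-SEARCH probe and seeing whether the device answers on UDP port 1900. A minimal sketch, assuming a device on one's own network (the address is illustrative):

```python
import socket

MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 1\r\n"
    "ST: ssdp:all\r\n"
    "\r\n"
).encode()

def answers_ssdp(host, timeout=2.0):
    """Return True if host replies to an SSDP M-SEARCH on UDP port 1900."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(MSEARCH, (host, 1900))
        try:
            s.recvfrom(4096)
            return True
        except socket.timeout:
            return False

print(answers_ssdp("192.168.1.1"))  # audit a router on one's own network
```

A device that answers such probes from the open Internet is a candidate reflector; as noted under defense techniques below, blocking incoming UDP traffic on port 1900 at the network edge removes that exposure.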
ARP spoofing ARP spoofing is a common DoS attack that involves a vulnerability in the ARP protocol that allows an attacker to associate their MAC address with the IP address of another computer or gateway, causing traffic intended for the original authentic IP to be re-routed to that of the attacker, causing a denial of service. Defense techniques Defensive responses to denial-of-service attacks typically involve the use of a combination of attack detection, traffic classification and response tools, aiming to block traffic the tools identify as illegitimate and allow traffic that they identify as legitimate. Response tools include the following. Upstream filtering All traffic destined for the victim is diverted to pass through a cleaning center or a scrubbing center via various methods such as: changing the victim IP address in the DNS system, tunneling methods (GRE/VRF, MPLS, SDN), proxies, digital cross connects, or even direct circuits. The cleaning center separates bad traffic (DDoS and also other common internet attacks) and only passes good legitimate traffic to the victim server. The victim needs central connectivity to the Internet to use this kind of service unless they happen to be located within the same facility as the cleaning center. DDoS attacks can overwhelm any type of hardware firewall, so passing malicious traffic through large and mature networks becomes more and more effective and economically sustainable against DDoS. Application front end hardware Application front-end hardware is intelligent hardware placed on the network before traffic reaches the servers. It can be used on networks in conjunction with routers and switches and as part of bandwidth management. Application front-end hardware analyzes data packets as they enter the network, and identifies and drops dangerous or suspicious flows. Application level key completion indicators Approaches to detection of DDoS attacks against cloud-based applications may be based on an application layer analysis, indicating whether incoming bulk traffic is legitimate. These approaches mainly rely on an identified path of value inside the application and monitor the progress of requests on this path, through markers called key completion indicators. In essence, these techniques are statistical methods of assessing the behavior of incoming requests to detect if something unusual or abnormal is going on. An analogy is to a brick-and-mortar department store where customers spend, on average, a known percentage of their time on different activities such as picking up items and examining them, putting them back, filling a basket, waiting to pay, paying, and leaving. If a mob of customers arrived in the store and spent all their time picking up items and putting them back, but never made any purchases, this could be flagged as unusual behavior. Blackholing and sinkholing With blackhole routing, all the traffic to the attacked DNS or IP address is sent to a black hole (null interface or a non-existent server). To be more efficient and avoid affecting network connectivity, it can be managed by the ISP. A DNS sinkhole routes traffic to a valid IP address which analyzes traffic and rejects bad packets. Sinkholing may not be efficient for severe attacks. IPS based prevention Intrusion prevention systems (IPS) are effective if the attacks have signatures associated with them. However, the trend among attacks is to have legitimate content but bad intent.
Intrusion-prevention systems that work on content recognition cannot block behavior-based DoS attacks. An ASIC-based IPS may detect and block denial-of-service attacks because it has the processing power and the granularity to analyze the attacks and act like a circuit breaker in an automated way. DDS based defense More focused on the problem than IPS, a DoS defense system (DDS) can block connection-based DoS attacks and those with legitimate content but bad intent. A DDS can also address both protocol attacks (such as teardrop and ping of death) and rate-based attacks (such as ICMP floods and SYN floods). A DDS is a purpose-built system that can identify and obstruct denial-of-service attacks at greater speed than a software-based system. Firewalls In the case of a simple attack, a firewall can be adjusted to deny all incoming traffic from the attackers, based on protocols, ports, or the originating IP addresses. More complex attacks, however, will be hard to block with simple rules: for example, if there is an ongoing attack on port 80 (web service), it is not possible to drop all incoming traffic on this port because doing so will prevent the server from receiving and serving legitimate traffic. Additionally, firewalls may be too deep in the network hierarchy, with routers being adversely affected before the traffic gets to the firewall. Also, many security tools still do not support IPv6 or may not be configured properly, so the firewalls may be bypassed during the attacks. Routers Similar to switches, routers have some rate-limiting and ACL capabilities. They, too, are manually set. Most routers can be easily overwhelmed under a DoS attack. Nokia SR-OS using FP4 or FP5 processors offers DDoS protection, and also uses the big data analytics-based Nokia Deepfield Defender for DDoS protection. Cisco IOS has optional features that can reduce the impact of flooding. Switches Most switches have some rate-limiting and ACL capability. Some switches provide automatic or system-wide rate limiting, traffic shaping, delayed binding (TCP splicing), deep packet inspection and bogon filtering (bogus IP filtering) to detect and remediate DoS attacks through automatic rate filtering and WAN link failover and balancing. These schemes will work as long as the DoS attack is of a type that they can prevent. For example, a SYN flood can be prevented using delayed binding or TCP splicing. Similarly, content-based DoS may be prevented using deep packet inspection. Attacks using Martian packets can be prevented using bogon filtering. Automatic rate filtering can work as long as rate thresholds have been set correctly. WAN-link failover will work as long as both links have a DoS prevention mechanism. Blocking vulnerable ports Threats may be associated with specific TCP or UDP port numbers. Blocking these ports at the firewall can mitigate an attack. For example, in an SSDP reflection attack, the key mitigation is to block incoming UDP traffic on port 1900. Unintentional denial-of-service An unintentional denial-of-service can occur when a system ends up denied, not due to a deliberate attack by a single individual or group of individuals, but simply due to a sudden enormous spike in popularity. This can happen when an extremely popular website posts a prominent link to a second, less well-prepared site, for example, as part of a news story.
The result is that a significant proportion of the primary site's regular users, potentially hundreds of thousands of people, click that link in the space of a few hours, having the same effect on the target website as a DDoS attack. A VIPDoS is the same, but specifically when the link was posted by a celebrity. When Michael Jackson died in 2009, websites such as Google and Twitter slowed down or even crashed. Many sites' servers thought the requests were from a virus or spyware trying to cause a denial-of-service attack, warning users that their queries looked like "automated requests from a computer virus or spyware application". News sites and link sites, sites whose primary function is to provide links to interesting content elsewhere on the Internet, are most likely to cause this phenomenon. The canonical example is the Slashdot effect when receiving traffic from Slashdot. It is also known as "the Reddit hug of death" and "the Digg effect". Similar unintentional denial-of-service can also occur via other media, e.g. when a URL is mentioned on television. In March 2014, after Malaysia Airlines Flight 370 went missing, DigitalGlobe launched a crowdsourcing service on which users could help search for the missing jet in satellite images. The response overwhelmed the company's servers. An unintentional denial-of-service may also result from a prescheduled event created by the website itself, as was the case with the Australian census in 2016. Legal action has been taken in at least one such case. In 2006, Universal Tube & Rollform Equipment Corporation sued YouTube: massive numbers of would-be YouTube.com users accidentally typed the tube company's URL, utube.com. As a result, the tube company ended up having to spend large amounts of money on upgrading its bandwidth. The company appears to have taken advantage of the situation, with utube.com now containing ads and receiving advertisement revenue. Routers have also been known to create unintentional DoS attacks, as both D-Link and Netgear routers have overloaded NTP servers by flooding them without respecting the restrictions of client types or geographical limitations. Side effects of attacks Backscatter In computer network security, backscatter is a side-effect of a spoofed denial-of-service attack. In this kind of attack, the attacker spoofs the source address in IP packets sent to the victim. In general, the victim machine cannot distinguish between the spoofed packets and legitimate packets, so the victim responds to the spoofed packets as it normally would. These response packets are known as backscatter. If the attacker is spoofing source addresses randomly, the backscatter response packets from the victim will be sent back to random destinations. This effect can be used by network telescopes as indirect evidence of such attacks. The term backscatter analysis refers to observing backscatter packets arriving at a statistically significant portion of the IP address space to determine the characteristics of DoS attacks and victims. Legality Many jurisdictions have laws under which denial-of-service attacks are illegal. UNCTAD highlights that 156 countries, or 80% globally, have enacted cybercrime laws to combat its widespread impact. Adoption rates vary by region, with Europe at a 91% rate and Africa at 72%. In the US, denial-of-service attacks may be considered a federal crime under the Computer Fraud and Abuse Act, with penalties that include years of imprisonment.
The Computer Crime and Intellectual Property Section of the US Department of Justice handles cases of DoS and DDoS. In one example, in July 2019, Austin Thompson, aka DerpTrolling, was sentenced by a federal court to 27 months in prison and $95,000 in restitution for conducting multiple DDoS attacks on major video gaming companies, disrupting their systems from hours to days. In European countries, committing criminal denial-of-service attacks may, as a minimum, lead to arrest. The United Kingdom is unusual in that it specifically outlawed denial-of-service attacks and set a maximum penalty of 10 years in prison with the Police and Justice Act 2006, which amended Section 3 of the Computer Misuse Act 1990. In January 2019, Europol announced that "actions are currently underway worldwide to track down the users" of Webstresser.org, a former DDoS marketplace that was shut down in April 2018 as part of Operation Power Off. Europol said UK police were conducting a number of "live operations" targeting over 250 users of Webstresser and other DDoS services. On January 7, 2013, Anonymous posted a petition on the whitehouse.gov site asking that DDoS be recognized as a legal form of protest similar to the Occupy movement, the claim being that the two are similar in purpose.
Technology
Computer security
null
39789
https://en.wikipedia.org/wiki/Rotation
Rotation
Rotation or rotational motion is the circular movement of an object around a central line, known as an axis of rotation. A plane figure can rotate in either a clockwise or counterclockwise sense around a perpendicular axis intersecting anywhere inside or outside the figure at a center of rotation. A solid figure has an infinite number of possible axes and angles of rotation, including chaotic rotation (between arbitrary orientations), in contrast to rotation around a fixed axis. The special case of a rotation with an internal axis passing through the body's own center of mass is known as a spin (or autorotation). In that case, the surface intersection of the internal spin axis can be called a pole; for example, Earth's rotation defines the geographical poles. A rotation around an axis completely external to the moving body is called a revolution (or orbit), e.g. Earth's orbit around the Sun. The ends of the external axis of revolution can be called the orbital poles. Either type of rotation is involved in a corresponding type of angular velocity (spin angular velocity and orbital angular velocity) and angular momentum (spin angular momentum and orbital angular momentum). Mathematics Mathematically, a rotation is a rigid body movement which, unlike a translation, keeps at least one point fixed. This definition applies to rotations in two dimensions (in a plane), in which exactly one point is kept fixed, and also in three dimensions (in space), in which additional points may be kept fixed (as in rotation around a fixed axis, an infinite line). All rigid body movements are rotations, translations, or combinations of the two. A rotation is simply a progressive radial orientation to a common point. That common point lies within the axis of that motion. The axis is perpendicular to the plane of the motion. If a rotation around a point or axis is followed by a second rotation around the same point/axis, a third rotation results. The reverse (inverse) of a rotation is also a rotation. Thus, the rotations around a point/axis form a group. However, a rotation around a point or axis and a rotation around a different point/axis may result in something other than a rotation, e.g. a translation. Rotations around the x, y and z axes are called principal rotations. Rotation around any axis can be performed by taking a rotation around the x axis, followed by a rotation around the y axis, and followed by a rotation around the z axis. That is to say, any spatial rotation can be decomposed into a combination of principal rotations. Fixed axis vs. fixed point The combination of any sequence of rotations of an object in three dimensions about a fixed point is always equivalent to a rotation about an axis (which may be considered to be a rotation in the plane that is perpendicular to that axis). Similarly, the rotation rate of an object in three dimensions at any instant is about some axis, although this axis may be changing over time. In other than three dimensions, it does not make sense to describe a rotation as being around an axis, since more than one axis through the object may be kept fixed; instead, simple rotations are described as being in a plane. In four or more dimensions, a combination of two or more rotations about a plane is not in general a rotation in a single plane. Axis of 2-dimensional rotations 2-dimensional rotations, unlike the 3-dimensional ones, possess no axis of rotation, only a point about which the rotation occurs.
This is equivalent, for linear transformations, with saying that there is no direction in the plane which is kept unchanged by a 2-dimensional rotation, except, of course, the identity. The question of the existence of such a direction is the question of existence of an eigenvector for the matrix A representing the rotation. Every 2D rotation around the origin through an angle $\theta$ in the counterclockwise direction can be quite simply represented by the following matrix: $A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$. A standard eigenvalue determination leads to the characteristic equation $\lambda^2 - 2\lambda\cos\theta + 1 = 0$, which has $\cos\theta \pm i\sin\theta$ as its eigenvalues. Therefore, there is no real eigenvalue whenever $\sin\theta \neq 0$, meaning that no real vector in the plane is kept unchanged by A. Rotation angle and axis in 3 dimensions Knowing that the trace is an invariant, the rotation angle $\theta$ for a proper orthogonal 3×3 rotation matrix $A$ is found by $\theta = \cos^{-1}\left(\frac{\operatorname{tr}(A) - 1}{2}\right)$. Using the principal arc-cosine, this formula gives a rotation angle satisfying $0° \le \theta \le 180°$. The corresponding rotation axis must be defined to point in a direction that limits the rotation angle to not exceed 180 degrees. (This can always be done because any rotation of more than 180 degrees about an axis $\mathbf{m}$ can always be written as a rotation having $0° \le \theta \le 180°$ if the axis is replaced with $\mathbf{n} = -\mathbf{m}$.) Every proper rotation $A$ in 3D space has an axis of rotation, which is defined such that any vector $\mathbf{v}$ that is aligned with the rotation axis will not be affected by rotation. Accordingly, $A\mathbf{v} = \mathbf{v}$, and the rotation axis therefore corresponds to an eigenvector of the rotation matrix associated with an eigenvalue of 1. As long as the rotation angle $\theta$ is nonzero (i.e., the rotation is not the identity tensor), there is one and only one such direction. Because A has only real components, there is at least one real eigenvalue, and the remaining two eigenvalues must be complex conjugates of each other (see eigenvalues and the characteristic polynomial). Knowing that 1 is an eigenvalue, it follows that the remaining two eigenvalues are complex conjugates of each other, but this does not imply that they are complex; they could be real with double multiplicity. In the degenerate case of a rotation angle $\theta = 180°$, the remaining two eigenvalues are both equal to −1. In the degenerate case of a zero rotation angle, the rotation matrix is the identity, and all three eigenvalues are 1 (which is the only case for which the rotation axis is arbitrary). A spectral analysis is not required to find the rotation axis. If $\mathbf{n}$ denotes the unit eigenvector aligned with the rotation axis, and if $\theta$ denotes the rotation angle, then it can be shown that $2\sin(\theta)\,\mathbf{n} = (A_{32} - A_{23},\ A_{13} - A_{31},\ A_{21} - A_{12})$. Consequently, the expense of an eigenvalue analysis can be avoided by simply normalizing this vector if it has a nonzero magnitude. On the other hand, if this vector has a zero magnitude, it means that $\sin\theta = 0$. In other words, this vector will be zero if and only if the rotation angle is 0 or 180 degrees, and the rotation axis may be assigned in this case by normalizing any column of $A + I$ that has a nonzero magnitude. This discussion applies to a proper rotation, and hence $\det A = +1$. Any improper orthogonal 3x3 matrix $B$ may be written as $B = -A$, in which $A$ is proper orthogonal. That is, any improper orthogonal 3x3 matrix may be decomposed as a proper rotation (from which an axis of rotation can be found as described above) followed by an inversion (multiplication by −1). It follows that the rotation axis of $A$ is also the eigenvector of $B$ corresponding to an eigenvalue of −1.
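The trace formula and the antisymmetric-part trick above lend themselves to direct computation. The following is a minimal NumPy sketch (the function name and tolerance are illustrative choices, not part of any standard library) that recovers the angle and axis of a proper rotation matrix, including the degenerate fallback to a column of A + I:

```python
import numpy as np

def rotation_angle_axis(A, tol=1e-9):
    """Recover the rotation angle and axis of a proper orthogonal 3x3 matrix A.

    Uses theta = arccos((tr(A) - 1) / 2) and the off-diagonal differences
    2*sin(theta)*n = (A32 - A23, A13 - A31, A21 - A12), falling back to a
    column of A + I in the degenerate 0/180-degree cases.
    """
    theta = np.arccos(np.clip((np.trace(A) - 1.0) / 2.0, -1.0, 1.0))
    v = np.array([A[2, 1] - A[1, 2], A[0, 2] - A[2, 0], A[1, 0] - A[0, 1]])
    if np.linalg.norm(v) > tol:            # generic case: 0 < theta < 180 deg
        axis = v / np.linalg.norm(v)
    else:                                  # theta is 0 or 180 deg
        B = A + np.eye(3)
        col = B[:, np.argmax(np.linalg.norm(B, axis=0))]
        axis = col / np.linalg.norm(col)   # for theta = 0 any axis is valid
    return theta, axis

# Example: a 90-degree rotation about the z axis.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
theta, axis = rotation_angle_axis(Rz)
print(np.degrees(theta), axis)  # ~90.0 [0. 0. 1.]
```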
Rotation plane Just as every three-dimensional rotation has a rotation axis, every three-dimensional rotation also has a plane, which is perpendicular to the rotation axis, and which is left invariant by the rotation. The rotation, restricted to this plane, is an ordinary 2D rotation. The proof proceeds similarly to the above discussion. First, suppose that all eigenvalues of the 3D rotation matrix A are real. This means that there is an orthogonal basis, made by the corresponding eigenvectors (which are necessarily orthogonal), over which the effect of the rotation matrix is just a stretching. If we write A in this basis, it is diagonal; but a diagonal orthogonal matrix is made of just +1s and −1s in the diagonal entries. Therefore, we do not have a proper rotation, but either the identity or the result of a sequence of reflections. It follows, then, that a proper rotation has some complex eigenvalue. Let $v$ be the corresponding eigenvector. Then, as we showed in the previous topic, $\bar{v}$ is also an eigenvector, and the vectors $v + \bar{v}$ and $i(v - \bar{v})$ are such that their scalar product vanishes, because, since $\bar{v}^{\mathsf T}\bar{v}$ is real, it equals its complex conjugate $v^{\mathsf T}v$, and $\bar{v}^{\mathsf T}v$ and $v^{\mathsf T}\bar{v}$ are both representations of the same scalar product between $v$ and $\bar{v}$. This means $v + \bar{v}$ and $i(v - \bar{v})$ are orthogonal vectors. Also, they are both real vectors by construction. These vectors span the same subspace as $v$ and $\bar{v}$, which is an invariant subspace under the application of A. Therefore, they span an invariant plane. This plane is orthogonal to the invariant axis, which corresponds to the remaining eigenvector of A, with eigenvalue 1, because of the orthogonality of the eigenvectors of A. Rotation of vectors A vector is said to be rotating if it changes its orientation. This occurs only when the vector's rate-of-change vector has a non-zero component perpendicular to the original vector. This can be shown by considering a vector $\mathbf{a}$ parameterized by some variable $t$, for which $\frac{d}{dt}(\mathbf{a}\cdot\mathbf{a}) = 2\,\mathbf{a}\cdot\frac{d\mathbf{a}}{dt} = \frac{d|\mathbf{a}|^2}{dt} = 2|\mathbf{a}|\frac{d|\mathbf{a}|}{dt}$. This also gives a relation for the rate of change of a unit vector: taking $\hat{\mathbf{a}} = \mathbf{a}/|\mathbf{a}|$, whose magnitude is constant, the identity yields $\hat{\mathbf{a}}\cdot\frac{d\hat{\mathbf{a}}}{dt} = 0$, showing that the vector $\frac{d\hat{\mathbf{a}}}{dt}$ is perpendicular to the vector $\hat{\mathbf{a}}$. From $\frac{d\mathbf{a}}{dt} = \frac{d|\mathbf{a}|}{dt}\hat{\mathbf{a}} + |\mathbf{a}|\frac{d\hat{\mathbf{a}}}{dt}$, since the first term is parallel to $\mathbf{a}$ and the second perpendicular to it, we can conclude in general that the parallel and perpendicular components of the rate of change of a vector independently influence only the magnitude or the orientation of the vector, respectively. Hence, a rotating vector always has a non-zero perpendicular component of its rate-of-change vector against the vector itself. In higher dimensions As dimensions increase, the number of rotation vectors increases. In a four-dimensional space (a hypervolume), rotations occur along the x, y, z, and w axes. An object rotated on a w axis intersects various volumes, where each intersection is equal to a self-contained volume at an angle. This gives way to a new axis of rotation in a 4D hypervolume, where a 3D object can be rotated perpendicular to the z axis. Physics The speed of rotation is given by the angular frequency (rad/s) or frequency (turns per time), or period (seconds, days, etc.). The time-rate of change of angular frequency is angular acceleration (rad/s²), caused by torque. The ratio of torque to the angular acceleration is given by the moment of inertia: $I = \tau/\alpha$. The angular velocity vector (an axial vector) also describes the direction of the axis of rotation. Similarly, the torque is an axial vector. The physics of the rotation around a fixed axis is mathematically described with the axis–angle representation of rotations.
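The decomposition just described is easy to check numerically. Below is a small sketch under assumed example values (the vectors are arbitrary choices): the component of the rate-of-change vector parallel to the vector changes only its magnitude, while the perpendicular component alone drives the change of orientation, at the rate given by the perpendicular component divided by the vector's magnitude.

```python
import numpy as np

a = np.array([3.0, 4.0, 0.0])          # current vector (arbitrary)
dadt = np.array([1.0, 2.0, 2.0])       # its rate of change (arbitrary)

a_hat = a / np.linalg.norm(a)
parallel = (dadt @ a_hat) * a_hat      # changes only the magnitude of a
perpendicular = dadt - parallel        # changes only the orientation of a

dt = 1e-6
a_next = a + dadt * dt
a_next_hat = a_next / np.linalg.norm(a_next)

# The change of direction per unit time matches perpendicular / |a|.
print(np.allclose((a_next_hat - a_hat) / dt,
                  perpendicular / np.linalg.norm(a), atol=1e-4))  # True
```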
According to the right-hand rule, the direction away from the observer is associated with clockwise rotation and the direction towards the observer with counterclockwise rotation, like a screw. Circular motion It is possible for objects to have periodic circular trajectories without changing their orientation. These types of motion are treated under circular motion instead of rotation, more specifically as a curvilinear translation. Since translation involves displacement of rigid bodies while preserving the orientation of the body, in the case of curvilinear translation all the points have the same instantaneous velocity, whereas relative motion can only be observed in motions involving rotation. In rotation, the orientation of the object changes, and the change in orientation is independent of the observers whose frames of reference have constant relative orientation over time. By Euler's theorem, any change in orientation can be described by rotation about an axis through a chosen reference point. Hence, the distinction between rotation and circular motion can be made by requiring an instantaneous axis for rotation, a line passing through the instantaneous center of the circle and perpendicular to the plane of motion. In the example depicting curvilinear translation, the centers of the circles of the motion lie on a straight line, but this line is parallel to the plane of motion and hence does not resolve to an axis of rotation. In contrast, a rotating body will always have its instantaneous axis of zero velocity, perpendicular to the plane of motion. More generally, due to Chasles' theorem, any motion of rigid bodies can be treated as a composition of rotation and translation, called general plane motion. A simple example of pure rotation is considered in rotation around a fixed axis. Cosmological principle The laws of physics are currently believed to be invariant under any fixed rotation. (Although they do appear to change when viewed from a rotating viewpoint: see rotating frame of reference.) In modern physical cosmology, the cosmological principle is the notion that the distribution of matter in the universe is homogeneous and isotropic when viewed on a large enough scale, since the forces are expected to act uniformly throughout the universe and have no preferred direction, and should, therefore, produce no observable irregularities in the large-scale structuring over the course of evolution of the matter field that was initially laid down by the Big Bang. In particular, for a system which behaves the same regardless of how it is oriented in space, its Lagrangian is rotationally invariant. According to Noether's theorem, if the action (the integral over time of its Lagrangian) of a physical system is invariant under rotation, then angular momentum is conserved. Euler rotations Euler rotations provide an alternative description of a rotation. A rotation is described as a composition of three rotations, each defined as the movement obtained by changing one of the Euler angles while leaving the other two constant. Euler rotations are never expressed in terms of the external frame alone, or in terms of the co-moving rotated body frame alone, but in a mixture. They constitute a mixed system of axes of rotation, where the first angle moves the line of nodes around the external axis z, the second rotates around the line of nodes, and the third one is an intrinsic rotation around an axis fixed in the body that moves. These rotations are called precession, nutation, and intrinsic rotation.
Astronomy In astronomy, rotation is a commonly observed phenomenon; it includes both spin (auto-rotation) and orbital revolution. Spin Stars, planets and similar bodies may spin around on their axes. The rotation rate of planets in the solar system was first measured by tracking visual features. Stellar rotation is measured through Doppler shift or by tracking active surface features. An example is sunspots, which rotate with the Sun at the same velocity as the outer gases that make up the Sun. Under some circumstances orbiting bodies may lock their spin rotation to their orbital rotation around a larger body. This effect is called tidal locking; the Moon is tidally locked to the Earth. This rotation induces a centrifugal acceleration in the reference frame of the Earth which slightly counteracts the effect of gravitation the closer one is to the equator. Earth's gravity combines both mass effects such that an object weighs slightly less at the equator than at the poles. Another consequence is that over time the Earth is slightly deformed into an oblate spheroid; a similar equatorial bulge develops for other planets. Further consequences of the rotation of a planet are the phenomena of precession and nutation. As with a gyroscope, the overall effect is a slight "wobble" in the movement of the axis of a planet. Currently the tilt of the Earth's axis to its orbital plane (obliquity of the ecliptic) is 23.44 degrees, but this angle changes slowly (over thousands of years).
Physical sciences
Classical mechanics
null
39792
https://en.wikipedia.org/wiki/3-sphere
3-sphere
In mathematics, a hypersphere or 3-sphere is a 4-dimensional analogue of a sphere, and is the n-sphere of dimension 3. In 4-dimensional Euclidean space, it is the set of points equidistant from a fixed central point. The interior of a 3-sphere is a 4-ball. It is called a 3-sphere because topologically, the surface itself is 3-dimensional, even though it is curved into the 4th dimension. For example, when traveling on a 3-sphere, you can go north and south, east and west, or along a 3rd set of cardinal directions. This means that a 3-sphere is an example of a 3-manifold. Definition In coordinates, a 3-sphere with center $(C_0, C_1, C_2, C_3)$ and radius $r$ is the set of all points $(x_0, x_1, x_2, x_3)$ in real, 4-dimensional space ($\mathbb{R}^4$) such that $(x_0 - C_0)^2 + (x_1 - C_1)^2 + (x_2 - C_2)^2 + (x_3 - C_3)^2 = r^2$. The 3-sphere centered at the origin with radius 1 is called the unit 3-sphere and is usually denoted $S^3$: $S^3 = \{(x_0, x_1, x_2, x_3) \in \mathbb{R}^4 : x_0^2 + x_1^2 + x_2^2 + x_3^2 = 1\}$. It is often convenient to regard $\mathbb{R}^4$ as the space with 2 complex dimensions ($\mathbb{C}^2$) or the quaternions ($\mathbb{H}$). The unit 3-sphere is then given by $S^3 = \{(z_1, z_2) \in \mathbb{C}^2 : |z_1|^2 + |z_2|^2 = 1\}$ or $S^3 = \{q \in \mathbb{H} : \|q\| = 1\}$. This description as the quaternions of norm one identifies the 3-sphere with the versors in the quaternion division ring. Just as the unit circle is important for planar polar coordinates, so the 3-sphere is important in the polar view of 4-space involved in quaternion multiplication. See polar decomposition of a quaternion for details of this development of the three-sphere. This view of the 3-sphere is the basis for the study of elliptic space as developed by Georges Lemaître. Properties Elementary properties The 3-dimensional surface volume of a 3-sphere of radius $r$ is $2\pi^2 r^3$, while the 4-dimensional hypervolume (the content of the 4-dimensional region, or ball, bounded by the 3-sphere) is $\tfrac{1}{2}\pi^2 r^4$. Every non-empty intersection of a 3-sphere with a three-dimensional hyperplane is a 2-sphere (unless the hyperplane is tangent to the 3-sphere, in which case the intersection is a single point). As a 3-sphere moves through a given three-dimensional hyperplane, the intersection starts out as a point, then becomes a growing 2-sphere that reaches its maximal size when the hyperplane cuts right through the "equator" of the 3-sphere. Then the 2-sphere shrinks again down to a single point as the 3-sphere leaves the hyperplane. In a given three-dimensional hyperplane, a 3-sphere can rotate about an "equatorial plane" (analogous to a 2-sphere rotating about a central axis), in which case it appears to be a 2-sphere whose size is constant. Topological properties A 3-sphere is a compact, connected, 3-dimensional manifold without boundary. It is also simply connected. What this means, in the broad sense, is that any loop, or circular path, on the 3-sphere can be continuously shrunk to a point without leaving the 3-sphere. The Poincaré conjecture, proved in 2003 by Grigori Perelman, provides that the 3-sphere is the only three-dimensional manifold (up to homeomorphism) with these properties. The 3-sphere is homeomorphic to the one-point compactification of $\mathbb{R}^3$. In general, any topological space that is homeomorphic to the 3-sphere is called a topological 3-sphere. The homology groups of the 3-sphere are as follows: $H_0(S^3, \mathbb{Z})$ and $H_3(S^3, \mathbb{Z})$ are both infinite cyclic, while $H_i(S^3, \mathbb{Z}) = 0$ for all other indices $i$. Any topological space with these homology groups is known as a homology 3-sphere. Initially Poincaré conjectured that all homology 3-spheres are homeomorphic to $S^3$, but then he himself constructed a non-homeomorphic one, now known as the Poincaré homology sphere. Infinitely many homology spheres are now known to exist.
For example, a Dehn filling with slope $1/n$ on any knot in the 3-sphere gives a homology sphere; typically these are not homeomorphic to the 3-sphere. As to the homotopy groups, we have $\pi_1(S^3) = \pi_2(S^3) = \{0\}$ and $\pi_3(S^3)$ is infinite cyclic. The higher-homotopy groups $\pi_k(S^3)$ ($k \ge 4$) are all finite abelian but otherwise follow no discernible pattern. For more discussion see homotopy groups of spheres. Geometric properties The 3-sphere is naturally a smooth manifold, in fact, a closed embedded submanifold of $\mathbb{R}^4$. The Euclidean metric on $\mathbb{R}^4$ induces a metric on the 3-sphere giving it the structure of a Riemannian manifold. As with all spheres, the 3-sphere has constant positive sectional curvature equal to $1/r^2$, where $r$ is the radius. Much of the interesting geometry of the 3-sphere stems from the fact that the 3-sphere has a natural Lie group structure given by quaternion multiplication (see the section below on group structure). The only other spheres with such a structure are the 0-sphere and the 1-sphere (see circle group). Unlike the 2-sphere, the 3-sphere admits nonvanishing vector fields (sections of its tangent bundle). One can even find three linearly independent and nonvanishing vector fields. These may be taken to be any left-invariant vector fields forming a basis for the Lie algebra of the 3-sphere. This implies that the 3-sphere is parallelizable. It follows that the tangent bundle of the 3-sphere is trivial. For a general discussion of the number of linearly independent vector fields on an $n$-sphere, see the article vector fields on spheres. There is an interesting action of the circle group $U(1)$ on $S^3$ giving the 3-sphere the structure of a principal circle bundle known as the Hopf bundle. If one thinks of $S^3$ as a subset of $\mathbb{C}^2$, the action is given by $(z_1, z_2) \cdot \lambda = (z_1\lambda, z_2\lambda)$ for $\lambda \in U(1)$. The orbit space of this action is homeomorphic to the two-sphere $S^2$. Since $S^3$ is not homeomorphic to $S^2 \times S^1$, the Hopf bundle is nontrivial. Topological construction There are several well-known constructions of the three-sphere. Here we describe gluing a pair of three-balls and then the one-point compactification. Gluing A 3-sphere can be constructed topologically by "gluing" together the boundaries of a pair of 3-balls. The boundary of a 3-ball is a 2-sphere, and these two 2-spheres are to be identified. That is, imagine a pair of 3-balls of the same size, then superpose them so that their 2-spherical boundaries match, and let matching pairs of points on the pair of 2-spheres be identically equivalent to each other. In analogy with the case of the 2-sphere (see below), the gluing surface is called an equatorial sphere. Note that the interiors of the 3-balls are not glued to each other. One way to think of the fourth dimension is as a continuous real-valued function of the 3-dimensional coordinates of the 3-ball, perhaps considered to be "temperature". We take the "temperature" to be zero along the gluing 2-sphere and let one of the 3-balls be "hot" and let the other 3-ball be "cold". The "hot" 3-ball could be thought of as the "upper hemisphere" and the "cold" 3-ball could be thought of as the "lower hemisphere". The temperature is highest/lowest at the centers of the two 3-balls. This construction is analogous to a construction of a 2-sphere, performed by gluing the boundaries of a pair of disks. A disk is a 2-ball, and the boundary of a disk is a circle (a 1-sphere). Let a pair of disks be of the same diameter. Superpose them and glue corresponding points on their boundaries. Again one may think of the third dimension as temperature.
Likewise, we may inflate the 2-sphere, moving the pair of disks to become the northern and southern hemispheres. One-point compactification After removing a single point from the 2-sphere, what remains is homeomorphic to the Euclidean plane. In the same way, removing a single point from the 3-sphere yields three-dimensional space. An extremely useful way to see this is via stereographic projection. We first describe the lower-dimensional version. Rest the south pole of a unit 2-sphere on the $xy$-plane in three-space. We map a point $P$ of the sphere (minus the north pole $N$) to the plane by sending $P$ to the intersection of the line $NP$ with the plane. Stereographic projection of a 3-sphere (again removing the north pole) maps to three-space in the same manner. (Notice that, since stereographic projection is conformal, round spheres are sent to round spheres or to planes.) A somewhat different way to think of the one-point compactification is via the exponential map. Returning to our picture of the unit two-sphere sitting on the Euclidean plane: Consider a geodesic in the plane, based at the origin, and map this to a geodesic in the two-sphere of the same length, based at the south pole. Under this map all points of the circle of radius $\pi$ are sent to the north pole. Since the open unit disk is homeomorphic to the Euclidean plane, this is again a one-point compactification. The exponential map for the 3-sphere is similarly constructed; it may also be discussed using the fact that the 3-sphere is the Lie group of unit quaternions. Coordinate systems on the 3-sphere The four Euclidean coordinates for $S^3$ are redundant since they are subject to the condition that $x_0^2 + x_1^2 + x_2^2 + x_3^2 = 1$. As a 3-dimensional manifold one should be able to parameterize $S^3$ by three coordinates, just as one can parameterize the 2-sphere using two coordinates (such as latitude and longitude). Due to the nontrivial topology of $S^3$ it is impossible to find a single set of coordinates that cover the entire space. Just as on the 2-sphere, one must use at least two coordinate charts. Some different choices of coordinates are given below. Hyperspherical coordinates It is convenient to have some sort of hyperspherical coordinates on $S^3$ in analogy to the usual spherical coordinates on $S^2$. One such choice — by no means unique — is to use $(\psi, \theta, \varphi)$, where $x_0 = r\cos\psi$, $x_1 = r\sin\psi\cos\theta$, $x_2 = r\sin\psi\sin\theta\cos\varphi$, $x_3 = r\sin\psi\sin\theta\sin\varphi$, where $\psi$ and $\theta$ run over the range 0 to $\pi$, and $\varphi$ runs over 0 to $2\pi$. Note that, for any fixed value of $\psi$, $\theta$ and $\varphi$ parameterize a 2-sphere of radius $r\sin\psi$, except for the degenerate cases, when $\psi$ equals 0 or $\pi$, in which case they describe a point. The round metric on the 3-sphere in these coordinates is given by $ds^2 = r^2\left[d\psi^2 + \sin^2\psi\,(d\theta^2 + \sin^2\theta\,d\varphi^2)\right]$ and the volume form by $dV = r^3\sin^2\psi\,\sin\theta\,d\psi\,d\theta\,d\varphi$. These coordinates have an elegant description in terms of quaternions. Any unit quaternion $q$ can be written as a versor: $q = e^{\tau\psi} = \cos\psi + \tau\sin\psi$, where $\tau$ is a unit imaginary quaternion; that is, a quaternion that satisfies $\tau^2 = -1$. This is the quaternionic analogue of Euler's formula. Now the unit imaginary quaternions all lie on the unit 2-sphere in the imaginary quaternions, so any such $\tau$ can be written: $\tau = \cos\theta\,i + \sin\theta\cos\varphi\,j + \sin\theta\sin\varphi\,k$. With $\tau$ in this form, the unit quaternion $q$ is given by $q = e^{\tau\psi} = x_0 + x_1 i + x_2 j + x_3 k$, where the $x_i$ are as above with $r = 1$. When $q$ is used to describe spatial rotations (cf. quaternions and spatial rotations), it describes a rotation about $\tau$ through an angle of $2\psi$. Hopf coordinates For unit radius another choice of hyperspherical coordinates, $(\eta, \xi_1, \xi_2)$, makes use of the embedding of $S^3$ in $\mathbb{C}^2$. In complex coordinates $(z_1, z_2) \in \mathbb{C}^2$ we write $z_1 = e^{i\xi_1}\sin\eta$ and $z_2 = e^{i\xi_2}\cos\eta$. This could also be expressed in $\mathbb{R}^4$ as $x_0 = \cos\xi_1\sin\eta$, $x_1 = \sin\xi_1\sin\eta$, $x_2 = \cos\xi_2\cos\eta$, $x_3 = \sin\xi_2\cos\eta$. Here $\eta$ runs over the range 0 to $\pi/2$, and $\xi_1$ and $\xi_2$ can take any values between 0 and $2\pi$.
These coordinates are useful in the description of the 3-sphere as the Hopf bundle $S^1 \hookrightarrow S^3 \to S^2$. For any fixed value of $\eta$ between 0 and $\pi/2$, the coordinates $(\xi_1, \xi_2)$ parameterize a 2-dimensional torus. Rings of constant $\xi_1$ and constant $\xi_2$ form simple orthogonal grids on the tori. In the degenerate cases, when $\eta$ equals 0 or $\pi/2$, these coordinates describe a circle. The round metric on the 3-sphere in these coordinates is given by $ds^2 = d\eta^2 + \sin^2\eta\,d\xi_1^2 + \cos^2\eta\,d\xi_2^2$ and the volume form by $dV = \sin\eta\cos\eta\,d\eta\,d\xi_1\,d\xi_2$. To get the interlocking circles of the Hopf fibration, make a simple substitution in the equations above, replacing $\xi_1$ by $\xi_1 + \xi_2$ and $\xi_2$ by $\xi_2 - \xi_1$. In this case $\eta$ and $\xi_1$ specify which circle, and $\xi_2$ specifies the position along each circle. One round trip (0 to $2\pi$) of $\xi_1$ or $\xi_2$ equates to a round trip of the torus in the 2 respective directions. Stereographic coordinates Another convenient set of coordinates can be obtained via stereographic projection of $S^3$ from a pole onto the corresponding equatorial $\mathbb{R}^3$ hyperplane. For example, if we project from the point $(-1, 0, 0, 0)$ we can write a point $p$ in $S^3$ as $p = \left(\frac{1 - \|u\|^2}{1 + \|u\|^2}, \frac{2\mathbf{u}}{1 + \|u\|^2}\right) = \frac{1 + \mathbf{u}}{1 - \mathbf{u}}$, where $\mathbf{u} = (u_1, u_2, u_3)$ is a vector in $\mathbb{R}^3$ and $\|u\|^2 = u_1^2 + u_2^2 + u_3^2$. In the second equality above, we have identified $p$ with a unit quaternion and $\mathbf{u} = u_1 i + u_2 j + u_3 k$ with a pure quaternion. (Note that the numerator and denominator commute here even though quaternionic multiplication is generally noncommutative). The inverse of this map takes $p = (x_0, x_1, x_2, x_3)$ in $S^3$ to $\mathbf{u} = \frac{1}{1 + x_0}(x_1, x_2, x_3)$. We could just as well have projected from the point $(1, 0, 0, 0)$, in which case the point $p$ is given by $p = \left(\frac{-1 + \|v\|^2}{1 + \|v\|^2}, \frac{2\mathbf{v}}{1 + \|v\|^2}\right) = \frac{-1 + \mathbf{v}}{1 + \mathbf{v}}$, where $\mathbf{v}$ is another vector in $\mathbb{R}^3$. The inverse of this map takes $p$ to $\mathbf{v} = \frac{1}{1 - x_0}(x_1, x_2, x_3)$. Note that the $\mathbf{u}$ coordinates are defined everywhere but $(-1, 0, 0, 0)$ and the $\mathbf{v}$ coordinates everywhere but $(1, 0, 0, 0)$. This defines an atlas on $S^3$ consisting of two coordinate charts or "patches", which together cover all of $S^3$. Note that the transition function between these two charts on their overlap is given by $\mathbf{v} = \frac{\mathbf{u}}{\|u\|^2}$ and vice versa. Group structure When considered as the set of unit quaternions, $S^3$ inherits an important structure, namely that of quaternionic multiplication. Because the set of unit quaternions is closed under multiplication, $S^3$ takes on the structure of a group. Moreover, since quaternionic multiplication is smooth, $S^3$ can be regarded as a real Lie group. It is a nonabelian, compact Lie group of dimension 3. When thought of as a Lie group, $S^3$ is often denoted $\operatorname{Sp}(1)$ or $U(1, \mathbb{H})$. It turns out that the only spheres that admit a Lie group structure are $S^1$, thought of as the set of unit complex numbers, and $S^3$, the set of unit quaternions (the degenerate case $S^0$, which consists of the real numbers 1 and −1, is also a Lie group, albeit a 0-dimensional one). One might think that $S^7$, the set of unit octonions, would form a Lie group, but this fails since octonion multiplication is nonassociative. The octonionic structure does give $S^7$ one important property: parallelizability. It turns out that the only spheres that are parallelizable are $S^1$, $S^3$, and $S^7$. By using a matrix representation of the quaternions, $\mathbb{H}$, one obtains a matrix representation of $S^3$. One convenient choice is given by the Pauli matrices: $(x_0, x_1, x_2, x_3) \mapsto \begin{pmatrix} x_0 + i x_1 & x_2 + i x_3 \\ -x_2 + i x_3 & x_0 - i x_1 \end{pmatrix}$. This map gives an injective algebra homomorphism from $\mathbb{H}$ to the set of 2 × 2 complex matrices. It has the property that the absolute value of a quaternion $q$ is equal to the square root of the determinant of the matrix image of $q$. The set of unit quaternions is then given by matrices of the above form with unit determinant. This matrix subgroup is precisely the special unitary group $SU(2)$. Thus, $S^3$ as a Lie group is isomorphic to $SU(2)$. Using our Hopf coordinates $(\eta, \xi_1, \xi_2)$ we can then write any element of $SU(2)$ in the form $\begin{pmatrix} e^{i\xi_1}\sin\eta & e^{i\xi_2}\cos\eta \\ -e^{-i\xi_2}\cos\eta & e^{-i\xi_1}\sin\eta \end{pmatrix}$. Another way to state this result is if we express the matrix representation of an element of $SU(2)$ as an exponential of a linear combination of the Pauli matrices.
Expanding such an exponential in the identity matrix and the Pauli matrices, it is seen that an arbitrary element $U \in SU(2)$ can be written as $U = \alpha_0 I + i(\alpha_1\sigma_1 + \alpha_2\sigma_2 + \alpha_3\sigma_3)$ with real coefficients $\alpha_k$. The condition that the determinant of $U$ is +1 implies that the coefficients are constrained to lie on a 3-sphere: $\alpha_0^2 + \alpha_1^2 + \alpha_2^2 + \alpha_3^2 = 1$. In literature In Edwin Abbott Abbott's Flatland, published in 1884, and in Sphereland, a 1965 sequel to Flatland by Dionys Burger, the 3-sphere is referred to as an oversphere, and a 4-sphere is referred to as a hypersphere. Writing in the American Journal of Physics, Mark A. Peterson describes three different ways of visualizing 3-spheres and points out language in The Divine Comedy that suggests Dante viewed the Universe in the same way; Carlo Rovelli supports the same idea. In Art Meets Mathematics in the Fourth Dimension, Stephen L. Lipscomb develops the concept of the hypersphere dimensions as it relates to art, architecture, and mathematics.
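The Hopf-coordinate parametrization and the identification of unit quaternions with SU(2) can be checked numerically. Below is a minimal NumPy sketch, assuming the Hopf-coordinate and Pauli-matrix conventions used above (the function names are illustrative choices): points built from Hopf coordinates land on the unit 3-sphere, and the product of two such elements again has unit determinant, reflecting closure of the group.

```python
import numpy as np

def hopf_to_cartesian(eta, xi1, xi2):
    """Map Hopf coordinates (eta, xi1, xi2) to a point on the unit 3-sphere,
    via the embedding z1 = e^{i xi1} sin(eta), z2 = e^{i xi2} cos(eta)."""
    z1 = np.exp(1j * xi1) * np.sin(eta)
    z2 = np.exp(1j * xi2) * np.cos(eta)
    return np.array([z1.real, z1.imag, z2.real, z2.imag])

def to_su2(x):
    """Matrix image of the quaternion (x0, x1, x2, x3) under the map above."""
    x0, x1, x2, x3 = x
    return np.array([[x0 + 1j * x1, x2 + 1j * x3],
                     [-x2 + 1j * x3, x0 - 1j * x1]])

p = hopf_to_cartesian(0.7, 1.1, 2.3)
q = hopf_to_cartesian(0.2, 0.4, 5.0)
print(np.isclose(np.linalg.norm(p), 1.0))        # True: p lies on S^3

# Closure under multiplication: the product of two unit-determinant
# matrices of this form again has determinant 1, i.e. lies on S^3.
prod = to_su2(p) @ to_su2(q)
print(np.isclose(np.linalg.det(prod), 1.0))      # True
```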
Mathematics
Four-dimensional space
null
39831
https://en.wikipedia.org/wiki/Battering%20ram
Battering ram
A battering ram is a siege engine that originated in ancient times and was designed to break open the masonry walls of fortifications or splinter their wooden gates. In its simplest form, a battering ram is just a large, heavy log carried by several people and propelled with force against an obstacle; the ram would be sufficient to damage the target if the log were massive enough and/or it were moved quickly enough (that is, if it had enough momentum). Later rams encased the log in an arrow-proof, fire-resistant canopy mounted on wheels. Inside the canopy, the log was swung from suspensory chains or ropes. Rams proved effective weapons of war because at the time wall-building materials such as stone and brick were weak in tension, and therefore prone to cracking when impacted with force. With repeated blows, the cracks would grow steadily until a hole was created. Eventually, a breach would appear in the fabric of the wall, enabling armed attackers to force their way through the gap and engage the inhabitants of the citadel. The introduction in the later Middle Ages of siege cannons, which harnessed the explosive power of gunpowder to propel weighty stone or iron balls against fortified obstacles, spelled the end of battering rams and other traditional siege weapons. Smaller, hand-held versions of battering rams are still used today by law enforcement officers and military personnel to break open locked doors. A capped ram is a battering ram that has an accessory at the head (usually made of iron or steel and sometimes punningly shaped into the head and horns of an ovine ram) to do more damage to a building. It was much more effective at destroying enemy walls and buildings than an uncapped ram but was heavier to carry. Design The earliest depiction of a possible battering ram is from the tomb of the 11th Dynasty Egyptian noble Khety, where a pair of soldiers advance towards a fortress under the protection of a mobile roofed structure, carrying a long pole that may represent a simple battering ram. During the Iron Age, in the ancient Middle East and Mediterranean, the battering ram's log was slung from a wheeled frame by ropes or chains so that it could be made more massive and be more easily bashed against its target. Frequently, the ram's point would be reinforced with a metal head or cap while vulnerable parts of the shaft were bound with strengthening metal bands. Vitruvius details in his text that Ceras the Carthaginian was the first to make a ram with a wooden base with wheels and a wooden superstructure, with the ram hung within. This structure moved so slowly, however, that he called it the testudo (Latin for "tortoise"). Another type of ram was one that maintained the normal shape and structure, but the support beams were instead made of saplings that were lashed together. The frame was then covered in hides as normal to defend from fire. The only solid beam present was the ram that was hung from the frame. The frame itself was so light that it could be carried on the shoulders of the men transporting the ram, and the same men could beat the ram against the wall when they reached it. Many battering rams had curved or slanted wooden roofs and side-screens, covered in protective materials, usually fresh wet hides. These canopies reduced the risk of the ram being set on fire, and protected the operators of the ram from arrow and javelin volleys launched from above. 
An image of an Assyrian battering ram depicts how sophisticated attacking and defensive practices had become by the 9th century BC. The defenders of a town wall are trying to set the ram alight with torches and have also put a chain under it. The attackers are trying to pull on the chain to free the ram, while the aforementioned wet hides on the canopy provide protection against the flames. By the time the Kushites made their incursions into Egypt, walls, siege tactics and equipment had undergone many changes. Early shelters protecting sappers armed with poles trying to breach mudbrick ramparts gave way to battering rams. The first confirmed use of rams in the Occident happened from 503 to 502 BC, when Opiter Verginius became consul of the Romans during the fight against the Aurunci people. The second known use was in 427 BC, when the Spartans besieged Plataea. The first use of rams within the Mediterranean Basin featuring the simultaneous employment of siege towers to shelter the rammers from attack occurred on the island of Sicily in 409 BC, at the siege of Selinus. Defenders manning castles, forts or bastions would sometimes try to foil battering rams by dropping obstacles in front of the ram, such as a large sack of sawdust, just before the ram's head struck a wall or gate, or by using grappling hooks to immobilize the ram's log. Alternatively, the ram could be set ablaze, doused in fire-heated sand, pounded by boulders dropped from battlements or invested by a rapid sally of troops. Some battering rams were not slung from ropes or chains, but were instead supported by rollers. This allowed the ram to achieve a greater speed before striking its target, making it more destructive. Such a ram, as used by Alexander the Great, is described by Vitruvius. Alternatives to the battering ram included the drill, the sapper's mouse, the pick, the siege hook, and the hunting ram. These devices were smaller than a ram and could be used in confined spaces. Notable sieges Battering rams had an important effect on the evolution of defensive walls, which were constructed ever more ingeniously in a bid to nullify the effects of siege engines. Historical instances of the usage of battering rams in sieges of major cities include: The destruction of Jerusalem by the Romans The Crusades The Sack of Rome (410) The various sieges of Constantinople There is a popular myth in Gloucester, England that the well-known children's rhyme, Humpty Dumpty, is about a battering ram used in the siege of Gloucester in 1643, during the Civil War. However, the story is almost certainly untrue; during the siege, which lasted only one month, no battering rams were used, although many cannons were. The idea seems to have originated in a spoof history essay by Professor David Daube written for The Oxford Magazine in 1956, which was widely believed despite obvious improbabilities (e.g., planning to cross the River Severn by running the ram down a hill at speed, although the river is about 30 m (100 feet) wide at this point). Modern use Battering rams still have a use in modern times. Police forces often employ small, one-man or two-man metal rams, known as enforcers, for forcing open locked portals or effecting a door breaching. Modern battering rams sometimes incorporate a pneumatic cylinder and piston driven by compressed air, which are triggered by striking a hard object and enhance the momentum of the impact significantly.
Technology
Artillery and siege
null
39834
https://en.wikipedia.org/wiki/Correlation%20does%20not%20imply%20causation
Correlation does not imply causation
The phrase "correlation does not imply causation" refers to the inability to legitimately deduce a cause-and-effect relationship between two events or variables solely on the basis of an observed association or correlation between them. The idea that "correlation implies causation" is an example of a questionable-cause logical fallacy, in which two events occurring together are taken to have established a cause-and-effect relationship. This fallacy is also known by the Latin phrase cum hoc ergo propter hoc ('with this, therefore because of this'). This differs from the fallacy known as post hoc ergo propter hoc ("after this, therefore because of this"), in which an event following another is seen as a necessary consequence of the former event, and from conflation, the errant merging of two events, ideas, databases, etc., into one. As with any logical fallacy, identifying that the reasoning behind an argument is flawed does not necessarily imply that the resulting conclusion is false. Statistical methods have been proposed that use correlation as the basis for hypothesis tests for causality, including the Granger causality test and convergent cross mapping. The Bradford Hill criteria, also known as Hill's criteria for causation, are a group of nine principles that can be useful in establishing epidemiologic evidence of a causal relationship. Usage and meaning of terms "Imply" In casual use, the word "implies" loosely means suggests, rather than requires. However, in logic, the technical use of the word "implies" means "is a sufficient condition for." That is the meaning intended by statisticians when they say causation is not certain. Indeed, p implies q has the technical meaning of the material conditional: if p then q symbolized as p → q. That is, "if circumstance p is true, then q follows." In that sense, it is always correct to say "Correlation does not imply causation." "Cause" The word "cause" (or "causation") has multiple meanings in English. In philosophical terminology, "cause" can refer to necessary, sufficient, or contributing causes. In examining correlation, "cause" is most often used to mean "one contributing cause" (but not necessarily the only contributing cause). Causal analysis Examples of illogically inferring causation from correlation B causes A (reverse causation or reverse causality) Reverse causation or reverse causality or wrong direction is an informal fallacy of questionable cause where cause and effect are reversed. The cause is said to be the effect and vice versa. Example 1 The faster that windmills are observed to rotate, the more wind is observed. Therefore, wind is caused by the rotation of windmills. (Or, simply put: windmills, as their name indicates, are machines used to produce wind.) In this example, the correlation (simultaneity) between windmill activity and wind velocity does not imply that wind is caused by windmills. It is rather the other way around, as suggested by the fact that wind does not need windmills to exist, while windmills need wind to rotate. Wind can be observed in places where there are no windmills or non-rotating windmills—and there are good reasons to believe that wind existed before the invention of windmills. Example 2 Low cholesterol is associated with an increase in mortality. Therefore, low cholesterol increases your risk of mortality. 
Causality is actually the other way around, since some diseases, such as cancer, cause low cholesterol due to a myriad of factors, such as weight loss, and they also cause an increase in mortality. This can also be seen in alcoholics. As alcoholics become diagnosed with cirrhosis of the liver, many quit drinking. However, they also experience an increased risk of mortality. In these instances, it is the diseases that cause an increased risk of mortality, but the increased mortality is attributed to the beneficial effects that follow the diagnosis, making healthy changes look unhealthy. Example 3 In other cases it may simply be unclear which is the cause and which is the effect. For example: Children that watch a lot of TV are the most violent. Clearly, TV makes children more violent. This could easily be the other way round; that is, violent children like watching more TV than less violent ones. Example 4 A correlation between recreational drug use and psychiatric disorders might be either way around: perhaps the drugs cause the disorders, or perhaps people use drugs to self-medicate for preexisting conditions. Gateway drug theory may argue that marijuana usage leads to usage of harder drugs, but hard drug usage may lead to marijuana usage (see also confusion of the inverse). Indeed, in the social sciences, where controlled experiments often cannot be used to discern the direction of causation, this fallacy can fuel long-standing scientific arguments. One such example can be found in education economics, between the screening/signaling and human capital models: it could either be that having innate ability enables one to complete an education, or that completing an education builds one's ability. Example 5 A historical example of this is that Europeans in the Middle Ages believed that lice were beneficial to health, since there would rarely be any lice on sick people. The reasoning was that the people got sick because the lice left. The real reason, however, is that lice are extremely sensitive to body temperature. A small increase of body temperature, such as in a fever, makes the lice look for another host. The medical thermometer had not yet been invented, and so that increase in temperature was rarely noticed. Noticeable symptoms came later, which gave the impression that the lice had left before the person became sick. In other cases, two phenomena can each be a partial cause of the other; consider poverty and lack of education, or procrastination and poor self-esteem. One making an argument based on these two phenomena must however be careful to avoid the fallacy of circular cause and consequence. Poverty is a cause of lack of education, but it is not the sole cause, and vice versa. Third factor C (the common-causal variable) causes both A and B The third-cause fallacy (also known as ignoring a common cause or questionable cause) is a logical fallacy in which a spurious relationship is confused for causation. It asserts that X causes Y when in reality, both X and Y are caused by Z. It is a variation on the post hoc ergo propter hoc fallacy and a member of the questionable cause group of fallacies. All of those examples deal with a lurking variable, which is simply a hidden third variable that affects both of the variables observed to be correlated. That third variable is also known as a confounding variable, with the slight difference that confounding variables need not be hidden and may thus be corrected for in an analysis.
A difficulty often also arises where the third factor, though fundamentally different from A and B, is so closely related to A and/or B as to be confused with them or very difficult to scientifically disentangle from them (see Example 4). Example 1 Sleeping with one's shoes on is strongly correlated with waking up with a headache. Therefore, sleeping with one's shoes on causes headache. The above example commits the correlation-implies-causation fallacy, as it prematurely concludes that sleeping with one's shoes on causes headache. A more plausible explanation is that both are caused by a third factor, in this case going to bed drunk, which thereby gives rise to a correlation. So the conclusion is false. Example 2 Young children who sleep with the light on are much more likely to develop myopia in later life. Therefore, sleeping with the light on causes myopia. This is a scientific example that resulted from a study at the University of Pennsylvania Medical Center. Published in the May 13, 1999, issue of Nature, the study received much coverage at the time in the popular press. However, a later study at Ohio State University did not find that infants sleeping with the light on caused the development of myopia. It did find a strong link between parental myopia and the development of child myopia, also noting that myopic parents were more likely to leave a light on in their children's bedroom. In this case, the cause of both conditions is parental myopia, and the above-stated conclusion is false. Example 3 As ice cream sales increase, the rate of drowning deaths increases sharply. Therefore, ice cream consumption causes drowning. This example fails to recognize the importance of time of year and temperature to ice cream sales. Ice cream is sold during the hot summer months at a much greater rate than during colder times, and it is during these hot summer months that people are more likely to engage in activities involving water, such as swimming. The increased drowning deaths are simply caused by more exposure to water-based activities, not ice cream. The stated conclusion is false. Example 4 A hypothetical study shows a relationship between test anxiety scores and shyness scores, with a statistical r value (strength of correlation) of +.59. Therefore, it may be simply concluded that shyness, in some part, causally influences test anxiety. However, as encountered in many psychological studies, another variable, a "self-consciousness score", is discovered that has a sharper correlation (+.73) with shyness. This suggests a possible "third variable" problem; however, when three such closely related measures are found, it further suggests that each may have bidirectional tendencies (see bidirectional causation, below), being a cluster of correlated values each influencing one another to some extent. Therefore, the simple conclusion above may be false. Example 5 Since the 1950s, both the atmospheric CO2 level and obesity levels have increased sharply. Hence, atmospheric CO2 causes obesity. Richer populations tend to eat more food and produce more CO2. Example 6 HDL ("good") cholesterol is negatively correlated with incidence of heart attack. Therefore, taking medication to raise HDL decreases the chance of having a heart attack. Further research has called this conclusion into question.
Instead, it may be that other underlying factors, like genes, diet and exercise, affect both HDL levels and the likelihood of having a heart attack; it is possible that medicines may affect the directly measurable factor, HDL levels, without affecting the chance of heart attack. Bidirectional causation: A causes B, and B causes A Causality is not necessarily one-way; in a predator-prey relationship, predator numbers affect prey numbers, but prey numbers, i.e. food supply, also affect predator numbers. Another well-known example is that cyclists have a lower Body Mass Index than people who do not cycle. This is often explained by assuming that cycling increases physical activity levels and therefore decreases BMI. Because results from prospective studies on people who increase their bicycle use show a smaller effect on BMI than cross-sectional studies, there may be some reverse causality as well. For example, people with a lower BMI may be more likely to want to cycle in the first place. The relationship between A and B is coincidental The two variables are not related at all, but correlate by chance. The more things are examined, the more likely it is that two unrelated variables will appear to be related. For example: The result of the last home game by the Washington Commanders prior to the presidential election predicted the outcome of every presidential election from 1936 to 2000 inclusive, despite the fact that the outcomes of football games had nothing to do with the outcome of the popular election. This streak was finally broken in 2004 (or 2012 using an alternative formulation of the original rule). The Mierscheid law, which correlates the Social Democratic Party of Germany's share of the popular vote with the size of crude steel production in Western Germany. Alternating bald–hairy Russian leaders: A bald (or obviously balding) state leader of Russia has succeeded a non-bald ("hairy") one, and vice versa, for nearly 200 years. The Bible code, Hebrew words predicting historical events supposedly hidden within the Torah: the huge number of combinations of letters makes appearances of any word in sufficiently lengthy text statistically insignificant. Use of correlation as scientific evidence Much of scientific evidence is based upon a correlation of variables that are observed to occur together. Scientists are careful to point out that correlation does not necessarily mean causation. The assumption that A causes B simply because A correlates with B is not accepted as a legitimate form of argument. However, sometimes people commit the opposite fallacy of dismissing correlation entirely. That would dismiss a large swath of important scientific evidence. Since it may be difficult or ethically impossible to run controlled double-blind studies to address certain questions, correlational evidence from several different angles may be useful for prediction despite failing to provide evidence for causation. For example, social workers might be interested in knowing how child abuse relates to academic performance. Although it would be unethical to perform an experiment in which children are randomly assigned to receive or not receive abuse, researchers can look at existing groups using a non-experimental correlational design. 
If in fact a negative correlation exists between abuse and academic performance, researchers could potentially use this knowledge of a statistical correlation to make predictions about children outside the study who experience abuse even though the study failed to provide causal evidence that abuse decreases academic performance. The combination of limited available methodologies with the dismissing correlation fallacy has on occasion been used to counter a scientific finding. For example, the tobacco industry has historically relied on a dismissal of correlational evidence to reject a link between tobacco smoke and lung cancer, as did biologist and statistician Ronald Fisher (frequently on the industry's behalf). Correlation is a valuable type of scientific evidence in fields such as medicine, psychology, and sociology. Correlations must first be confirmed as real, and every possible causative relationship must then be systematically explored. In the end, correlation alone cannot be used as evidence for a cause-and-effect relationship between a treatment and benefit, a risk factor and a disease, or a social or economic factor and various outcomes. It is one of the most abused types of evidence because it is easy and even tempting to come to premature conclusions based upon the preliminary appearance of a correlation.
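The third-variable examples above are easy to reproduce numerically. Below is a minimal, hypothetical simulation (variable names and coefficients are illustrative only) in which a lurking variable Z drives both X and Y: the two end up strongly correlated despite having no causal connection, and conditioning on Z makes the association vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)                  # confounder (e.g. summer temperature)
x = 2.0 * z + rng.normal(size=n)        # "ice cream sales": caused by Z only
y = 1.5 * z + rng.normal(size=n)        # "drownings": caused by Z only

print(np.corrcoef(x, y)[0, 1])          # ~0.74: strong correlation, no causation

# Conditioning on Z removes the association: the residuals are uncorrelated.
x_resid = x - 2.0 * z
y_resid = y - 1.5 * z
print(np.corrcoef(x_resid, y_resid)[0, 1])  # ~0.0
```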
Mathematics
Statistics
null
39878
https://en.wikipedia.org/wiki/Domain%20name
Domain name
In the Internet, a domain name is a string that identifies a realm of administrative autonomy, authority or control. Domain names are often used to identify services provided through the Internet, such as websites and email services. Domain names are used in various networking contexts and for application-specific naming and addressing purposes. In general, a domain name identifies a network domain or an Internet Protocol (IP) resource, such as a personal computer used to access the Internet, or a server computer. Domain names are formed by the rules and procedures of the Domain Name System (DNS). Any name registered in the DNS is a domain name. Domain names are organized in subordinate levels (subdomains) of the DNS root domain, which is nameless. The first-level set of domain names are the top-level domains (TLDs), including the generic top-level domains (gTLDs), such as the prominent domains com, info, net, edu, and org, and the country code top-level domains (ccTLDs). Below these top-level domains in the DNS hierarchy are the second-level and third-level domain names that are typically open for reservation by end-users who wish to connect local area networks to the Internet, create other publicly accessible Internet resources or run websites, such as "wikipedia.org". The registration of a second- or third-level domain name is usually administered by a domain name registrar, which sells its services to the public. A fully qualified domain name (FQDN) is a domain name that is completely specified with all labels in the hierarchy of the DNS, having no parts omitted. Traditionally an FQDN ends in a dot (.) to denote the top of the DNS tree. Labels in the Domain Name System are case-insensitive, and may therefore be written in any desired capitalization method, but most commonly domain names are written in lowercase in technical contexts. A hostname is a domain name that has at least one associated IP address. Purpose Domain names serve to identify Internet resources, such as computers, networks, and services, with a text-based label that is easier to memorize than the numerical addresses used in the Internet protocols. A domain name may represent entire collections of such resources or individual instances. Individual Internet host computers use domain names as host identifiers, also called hostnames. The term hostname is also used for the leaf labels in the domain name system, usually without further subordinate domain name space. Hostnames appear as a component in Uniform Resource Locators (URLs) for Internet resources such as websites (e.g., en.wikipedia.org). Domain names are also used as simple identification labels to indicate ownership or control of a resource. Such examples are the realm identifiers used in the Session Initiation Protocol (SIP), the DomainKeys used to verify DNS domains in e-mail systems, and in many other Uniform Resource Identifiers (URIs). An important function of domain names is to provide easily recognizable and memorizable names to numerically addressed Internet resources. This abstraction allows any resource to be moved to a different physical location in the address topology of the network, globally or locally in an intranet. Such a move usually requires changing the IP address of a resource and the corresponding translation of this IP address to and from its domain name. Domain names are used to establish a unique identity. Organizations can choose a domain name that corresponds to their name, helping Internet users to reach them easily.
A generic domain is a name that defines a general category, rather than a specific or personal instance, for example, the name of an industry, rather than a company name. Some examples of generic names are books.com, music.com, and travel.info. Companies have created brands based on generic names, and such generic domain names may be valuable. Domain names are often simply referred to as domains and domain name registrants are frequently referred to as domain owners, although domain name registration with a registrar does not confer any legal ownership of the domain name, only an exclusive right of use for a particular duration of time. The use of domain names in commerce may subject them to trademark law. History The practice of using a simple memorable abstraction of a host's numerical address on a computer network dates back to the ARPANET era, before the advent of today's commercial Internet. In the early network, each computer on the network retrieved the hosts file (HOSTS.TXT) from a computer at SRI (now SRI International), which mapped computer hostnames to numerical addresses. The rapid growth of the network made it impossible to maintain a centrally organized hostname registry, and in 1983 the Domain Name System was introduced on the ARPANET and published by the Internet Engineering Task Force as RFC 882 and RFC 883. The first five .com domains, beginning with symbolics.com on 15 March 1985, and the first five .edu domains were all registered in 1985. Domain name space Today, the Internet Corporation for Assigned Names and Numbers (ICANN) manages the top-level development and architecture of the Internet domain name space. It authorizes domain name registrars, through which domain names may be registered and reassigned. The domain name space consists of a tree of domain names. Each node in the tree holds information associated with the domain name. The tree sub-divides into zones beginning at the DNS root zone. Domain name syntax A domain name consists of one or more parts, technically called labels, that are conventionally concatenated, and delimited by dots, such as example.com. The right-most label conveys the top-level domain; for example, the domain name www.example.com belongs to the top-level domain com. The hierarchy of domains descends from the right to the left label in the name; each label to the left specifies a subdivision, or subdomain of the domain to the right. For example: the label example specifies a node example.com as a subdomain of the com domain, and www is a label to create www.example.com, a subdomain of example.com. Each label may contain from 1 to 63 octets. The empty label is reserved for the root node and when fully qualified is expressed as the empty label terminated by a dot. The full domain name may not exceed a total length of 253 ASCII characters in its textual representation. A hostname is a domain name that has at least one associated IP address. For example, the domain names www.example.com and example.com are also hostnames, whereas the com domain is not. However, other top-level domains, particularly country code top-level domains, may indeed have an IP address, and if so, they are also hostnames. Hostnames impose restrictions on the characters allowed in the corresponding domain name. A valid hostname is also a valid domain name, but a valid domain name may not necessarily be valid as a hostname. Top-level domains When the Domain Name System was devised in the 1980s, the domain name space was divided into two main groups of domains.
The country code top-level domains (ccTLD) were primarily based on the two-character territory codes of ISO 3166 country abbreviations. In addition, a group of seven generic top-level domains (gTLD) was implemented which represented a set of categories of names and multi-organizations. These were the domains gov, edu, com, mil, org, net, and int. These two types of top-level domains (TLDs) are the highest level of domain names of the Internet. Top-level domains form the DNS root zone of the hierarchical Domain Name System. Every domain name ends with a top-level domain label. During the growth of the Internet, it became desirable to create additional generic top-level domains. As of October 2009, 21 generic top-level domains and 250 two-letter country-code top-level domains existed. In addition, the ARPA domain serves technical purposes in the infrastructure of the Domain Name System. During the 32nd International Public ICANN Meeting in Paris in 2008, ICANN started a new process of TLD naming policy to take a "significant step forward on the introduction of new generic top-level domains." This program envisions the availability of many new or already proposed domains, as well as a new application and implementation process. Observers believed that the new rules could result in hundreds of new top-level domains being registered. In 2012, the program commenced and received 1,930 applications. By 2016, the milestone of 1,000 live gTLDs was reached. The Internet Assigned Numbers Authority (IANA) maintains an annotated list of top-level domains in the DNS root zone database. For special purposes, such as network testing, documentation, and other applications, IANA also reserves a set of special-use domain names. This list contains domain names such as example, local, localhost, and test. Other top-level domain names containing trademarks are registered for corporate use. Cases include brands such as BMW, Google, and Canon. Second-level and lower level domains Below the top-level domains in the domain name hierarchy are the second-level domain (SLD) names. These are the names directly to the left of .com, .net, and the other top-level domains. As an example, in the domain example.co.uk, co is the second-level domain. Next are third-level domains, which are written immediately to the left of a second-level domain. There can be fourth- and fifth-level domains, and so on, with virtually no limitation. Each label is separated by a full stop (dot). An example of an operational domain name with four levels of domain labels is sos.state.oh.us. 'sos' is said to be a sub-domain of 'state.oh.us', and 'state' a sub-domain of 'oh.us', etc. In general, subdomains are domains subordinate to their parent domain. An example of very deep levels of subdomain ordering are the IPv6 reverse resolution DNS zones, e.g., 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa, which is the reverse DNS resolution domain name for the IP address of a loopback interface, or the localhost name. Second-level (or lower-level, depending on the established parent hierarchy) domain names are often created based on the name of a company (e.g., bbc.co.uk), product or service (e.g. hotmail.com). Below these levels, the next domain name component has been used to designate a particular host server. Therefore, ftp.example.com might be an FTP server, www.example.com would be a World Wide Web server, and mail.example.com could be an email server, each intended to perform only the implied function.
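The subdomain relationships described above can be made concrete with a few lines of code. This is a minimal sketch (the helper name is an illustrative choice, not a standard API) that walks a name up its parent hierarchy, as in the sos.state.oh.us example:

```python
def parent_chain(domain: str) -> list[str]:
    """Return the domain followed by each of its parent domains in turn.
    Labels are split on dots; each step drops the left-most (deepest) label."""
    labels = domain.lower().rstrip(".").split(".")
    return [".".join(labels[i:]) for i in range(len(labels))]

print(parent_chain("sos.state.oh.us"))
# ['sos.state.oh.us', 'state.oh.us', 'oh.us', 'us']
```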
Modern technology allows multiple physical servers with either different (cf. load balancing) or even identical addresses (cf. anycast) to serve a single hostname or domain name, or multiple domain names to be served by a single computer. The latter is very popular in Web hosting service centers, where service providers host the websites of many organizations on just a few servers. The hierarchical DNS labels or components of domain names are separated in a fully qualified name by the full stop (dot, .).

Internationalized domain names
The character set allowed in the Domain Name System is based on ASCII and does not allow the representation of names and words of many languages in their native scripts or alphabets. ICANN approved the Internationalizing Domain Names in Applications (IDNA) system, which maps Unicode strings used in application user interfaces into the valid DNS character set by an encoding called Punycode. For example, københavn.eu is mapped to xn--kbenhavn-54a.eu. Many registries have adopted IDNA; a brief encoding sketch appears below.

Domain name registration

History
The first commercial Internet domain name, in the TLD com, was registered on 15 March 1985 in the name symbolics.com by Symbolics Inc., a computer systems firm in Cambridge, Massachusetts. By 1992, fewer than 15,000 com domains had been registered.
In the first quarter of 2015, 294 million domain names had been registered. A large fraction of them are in the com TLD, which as of December 21, 2014, had 115.6 million domain names, including 11.9 million online business and e-commerce sites, 4.3 million entertainment sites, 3.1 million finance-related sites, and 1.8 million sports sites. As of July 15, 2012, the com TLD had more registrations than all of the ccTLDs combined. In total, 359.8 million domain names had been registered.

Administration
The right to use a domain name is delegated by domain name registrars, which are accredited by the Internet Corporation for Assigned Names and Numbers (ICANN), the organization charged with overseeing the name and number systems of the Internet. In addition to ICANN, each top-level domain (TLD) is maintained and serviced technically by an administrative organization operating a registry. A registry is responsible for maintaining the database of names registered within the TLD it administers. The registry receives registration information from each domain name registrar authorized to assign names in the corresponding TLD and publishes the information using a special service, the WHOIS protocol.
Registries and registrars usually charge an annual fee for the service of delegating a domain name to a user and providing a default set of name servers. Often, this transaction is termed a sale or lease of the domain name, and the registrant may sometimes be called an "owner", but no such legal relationship is actually associated with the transaction, only the exclusive right to use the domain name. More correctly, authorized users are known as "registrants" or as "domain holders".
ICANN publishes the complete list of TLD registries and domain name registrars. Registrant information associated with domain names is maintained in an online database accessible with the WHOIS protocol. For most of the 250 country code top-level domains (ccTLDs), the domain registries maintain the WHOIS (registrant, name servers, expiration dates, etc.) information. Some domain name registries, often called network information centers (NIC), also function as registrars to end-users.
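As a small illustration of the IDNA mapping mentioned above, Python's standard library ships an IDNA codec (implementing IDNA 2003) that applies the Punycode-based ASCII-compatible encoding label by label. The sketch below reproduces the københavn.eu example; expected output is shown in comments:

```python
# Encode an internationalized domain name with Python's built-in IDNA codec.
# Each non-ASCII label is Punycode-encoded and prefixed with "xn--".
name = "københavn.eu"

ace = name.encode("idna")   # ASCII-compatible encoding, label by label
print(ace)                  # b'xn--kbenhavn-54a.eu'
print(ace.decode("idna"))   # 'københavn.eu' (round-trips back to Unicode)
```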
The major generic top-level domain registries, such as for the com, net, org, info domains and others, use a registry-registrar model consisting of hundreds of domain name registrars (see lists at ICANN or VeriSign). In this method of management, the registry only manages the domain name database and the relationship with the registrars. The registrants (users of a domain name) are customers of the registrar, in some cases through additional layers of resellers.
A few alternative DNS root providers attempt to compete with or complement ICANN's role in domain name administration; however, most of them have failed to receive wide recognition, and thus domain names offered by those alternative roots cannot be used universally on most other Internet-connected machines without additional dedicated configuration.

Technical requirements and process
In the process of registering a domain name and maintaining authority over the new name space created, registrars use several key pieces of information connected with a domain:
Administrative contact. A registrant usually designates an administrative contact to manage the domain name. The administrative contact usually has the highest level of control over a domain. Management functions delegated to the administrative contact may include management of all business information, such as name of record, postal address, and contact information of the official registrant of the domain, and the obligation to conform to the requirements of the domain registry in order to retain the right to use a domain name. Furthermore, the administrative contact installs additional contact information for technical and billing functions.
Technical contact. The technical contact manages the name servers of a domain name. The functions of a technical contact include assuring conformance of the configurations of the domain name with the requirements of the domain registry, maintaining the domain zone records, and providing continuous functionality of the name servers (which leads to the accessibility of the domain name).
Billing contact. The party responsible for receiving billing invoices from the domain name registrar and paying applicable fees.
Name servers. Most registrars provide two or more name servers as part of the registration service. However, a registrant may specify its own authoritative name servers to host a domain's resource records. The registrar's policies govern the number of servers and the type of server information required. Some providers require a hostname and the corresponding IP address or just the hostname, which must either be resolvable in the new domain or exist elsewhere. Based on traditional requirements (RFC 1034), typically a minimum of two servers is required.
A domain name consists of one or more labels, each of which is formed from the set of ASCII letters, digits, and hyphens (a–z, A–Z, 0–9, -), but not starting or ending with a hyphen. The labels are case-insensitive; for example, 'label' is equivalent to 'Label' or 'LABEL'. In the textual representation of a domain name, the labels are separated by a full stop (period). These rules lend themselves to a mechanical check; see the validation sketch below.

Business models
Domain names are often seen in analogy to real estate in that domain names are foundations on which a website can be built, and the highest quality domain names, like sought-after real estate, tend to carry significant value, usually due to their online brand-building potential, use in advertising, search engine optimization, and many other criteria.
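The following Python sketch checks the LDH ("letters, digits, hyphen") label rules and length limits quoted above under Technical requirements and process; the function name and regular expression are illustrative choices, not a standard library API:

```python
import re

# One LDH label: 1-63 characters, letters/digits/hyphens only,
# with no hyphen at either end of the label.
LABEL_RE = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def is_valid_hostname(name: str) -> bool:
    """Check a dotted hostname against the classic LDH syntax rules."""
    name = name.rstrip(".")            # tolerate a trailing dot (root label)
    if not name or len(name) > 253:    # overall textual length limit
        return False
    return all(LABEL_RE.match(label) for label in name.split("."))

assert is_valid_hostname("www.example.com")
assert not is_valid_hostname("-bad-.example")    # hyphen at a label edge
assert not is_valid_hostname("a" * 64 + ".com")  # label longer than 63 octets
```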
A few companies have offered low-cost, below-cost, or even free domain registration, with a variety of models adopted to recoup the costs to the provider. These usually require that domains be hosted on their website within a framework or portal that includes advertising wrapped around the domain holder's content, revenue from which allows the provider to recoup the costs. Domain registrations were free of charge when the DNS was new.
A domain holder may provide an unlimited number of subdomains in their domain. For example, the owner of example.org could provide subdomains such as foo.example.org and foo.bar.example.org to interested parties.
Many desirable domain names are already assigned, and users must search for other acceptable names, using Web-based search features or the WHOIS and dig operating system tools. Many registrars have implemented domain name suggestion tools which search domain name databases and suggest available alternative domain names related to keywords provided by the user.

Resale of domain names
The business of resale of registered domain names is known as the domain aftermarket. Various factors influence the perceived value or market value of a domain name. Most high-priced domain sales are carried out privately; this is also known as confidential or anonymous domain acquisition.

Domain name confusion
Intercapping is often used to emphasize the meaning of a domain name, because DNS names are not case-sensitive. Some names may be misinterpreted in certain uses of capitalization. For example: Who Represents, a database of artists and agents, chose whorepresents.com, which can be misread. In such situations, the proper meaning may be clarified by placement of hyphens when registering a domain name. For instance, Experts Exchange, a programmers' discussion site, used expertsexchange.com, but changed its domain name to experts-exchange.com.

Uses in website hosting
The domain name is a component of a uniform resource locator (URL) used to access websites, for example:
URL: http://www.example.net/index.html
Top-level domain: net
Second-level domain: example
Hostname: www
A domain name may point to multiple IP addresses to provide server redundancy for the services offered, a feature that is used to manage the traffic of large, popular websites. Web hosting services, on the other hand, run servers that are typically assigned only one or a few addresses while serving websites for many domains, a technique referred to as virtual web hosting. Such IP address overloading requires that each request identify the domain name being referenced, for instance by using the HTTP request header field Host:, or Server Name Indication.

Abuse and regulation
Critics often claim abuse of administrative power over domain names. Particularly noteworthy was the VeriSign Site Finder system, which redirected all unregistered .com and .net domains to a VeriSign webpage. For example, at a public meeting with VeriSign to air technical concerns about Site Finder, numerous people, active in the IETF and other technical bodies, explained how they were surprised by VeriSign's changing the fundamental behavior of a major component of Internet infrastructure without having obtained the customary consensus. Site Finder, at first, assumed every Internet query was for a website, and it monetized queries for incorrect domain names, taking the user to VeriSign's search site.
Other applications, such as many implementations of email, treat a lack of response to a domain name query as an indication that the domain does not exist, and that the message can be treated as undeliverable. The original VeriSign implementation broke this assumption for mail, because it would always resolve an erroneous domain name to that of Site Finder. While VeriSign later changed Site Finder's behaviour with regard to email, there was still widespread protest about VeriSign's action being more in its financial interest than in the interest of the Internet infrastructure component for which VeriSign was the steward. Despite widespread criticism, VeriSign only reluctantly removed the service after the Internet Corporation for Assigned Names and Numbers (ICANN) threatened to revoke its contract to administer the root name servers. ICANN published the extensive set of letters exchanged, committee reports, and ICANN decisions.
There is also significant disquiet regarding the United States Government's political influence over ICANN. This was a significant issue in the attempt to create a .xxx top-level domain and sparked greater interest in alternative DNS roots that would be beyond the control of any single country.
Additionally, there are numerous accusations of domain name front running, whereby registrars, when given whois queries, automatically register the domain name for themselves. Network Solutions has been accused of this.

Truth in Domain Names Act
In the United States, the Truth in Domain Names Act of 2003, in combination with the PROTECT Act of 2003, forbids the use of a misleading domain name with the intention of attracting Internet users into visiting Internet pornography sites. The Truth in Domain Names Act follows the more general Anticybersquatting Consumer Protection Act, passed in 1999, aimed at preventing typosquatting and deceptive use of names and trademarks in domain names.

Seizures
In the early 21st century, the US Department of Justice (DOJ) pursued the seizure of domain names, based on the legal theory that domain names constitute property used to engage in criminal activity, and thus are subject to forfeiture. For example, in the seizure of the domain name of a gambling website, the DOJ referenced the federal statutes that authorize forfeiture of property involved in such offenses. In 2013, the US government seized Liberty Reserve, citing federal anti-money-laundering statutes. The U.S. Congress passed the Combating Online Infringement and Counterfeits Act in 2010. Consumer Electronics Association vice president Michael Petricone was worried that seizure was a blunt instrument that could harm legitimate businesses. After a joint operation on February 15, 2011, the DOJ and the Department of Homeland Security claimed to have seized ten domains of websites involved in advertising and distributing child pornography, but also mistakenly seized the domain name of a large DNS provider, temporarily replacing 84,000 websites with seizure notices.
In the United Kingdom, the Police Intellectual Property Crime Unit (PIPCU) has been attempting to seize domain names from registrars without court orders.

Suspensions
PIPCU and other UK law enforcement organisations make domain suspension requests to Nominet, which Nominet processes on the basis of breach of terms and conditions. Around 16,000 domains are suspended annually, and about 80% of the requests originate from PIPCU.
Property rights
Because of the economic value it represents, the European Court of Human Rights has ruled that the exclusive right to a domain name is protected as property under Article 1 of Protocol 1 to the European Convention on Human Rights.

IDN variants
The ICANN Business Constituency (BC) has spent decades trying to make IDN variants work at the second level, and in the last several years at the top level. Domain name variants are domain names recognized in different character encodings, like a single domain presented in traditional Chinese and simplified Chinese. This is an internationalization and localization problem. Under domain name variants, the different encodings of the domain name (in simplified and traditional Chinese) would resolve to the same host. According to John Levine, an expert on Internet-related topics, "Unfortunately, variants don't work. The problem isn't putting them in the DNS, it's that once they're in the DNS, they don't work anywhere else."

Fictitious domain name
A fictitious domain name is a domain name used in a work of fiction or popular culture to refer to a domain that does not actually exist, often with invalid or unofficial top-level domains such as ".web", a usage exactly analogous to the dummy 555 telephone number prefix used in film and other media. The canonical fictitious domain name is "example.com", specifically set aside by IANA in RFC 2606 for such use, along with the .example TLD.
Domain names used in works of fiction have often been registered in the DNS, either by their creators or by cybersquatters attempting to profit from the association. This phenomenon prompted NBC to purchase the domain name Hornymanatee.com after talk-show host Conan O'Brien spoke the name while ad-libbing on his show. O'Brien subsequently created a website based on the concept and used it as a running gag on the show. Companies whose works have used fictitious domain names have also employed firms such as MarkMonitor to park fictional domain names in order to prevent misuse by third parties.

Misspelled domain names
Misspelled domain names, a practice also known as typosquatting or URL hijacking, are domain names that are intentionally or unintentionally misspelled versions of popular or well-known domain names. The goal of misspelled domain names is to capitalize on internet users who accidentally type in a misspelled domain name and are then redirected to a different website. Misspelled domain names are often used for malicious purposes, such as phishing scams or distributing malware. In some cases, the owners of misspelled domain names may also attempt to sell the domain names to the owners of the legitimate domain names, or to individuals or organizations interested in capitalizing on the traffic generated by internet users who accidentally type them in.
To avoid being caught out by a misspelled domain name, internet users should be careful to type in domain names correctly and should avoid clicking on links that appear suspicious or unfamiliar. Additionally, individuals and organizations who own popular or well-known domain names should consider registering common misspellings of their domain names in order to prevent others from using them for malicious purposes.

Domain name spoofing
The term domain name spoofing (or simply, though less accurately, domain spoofing) is used generically to describe one or more of a class of phishing attacks that depend on falsifying or misrepresenting an internet domain name.
These are designed to persuade unsuspecting users into visiting a web site other than the one intended, or opening an email that is not in reality from the address shown (or apparently shown). Although website and email spoofing attacks are more widely known, any service that relies on domain name resolution may be compromised.

Types
There are a number of better-known types of domain spoofing:
Typosquatting, also called "URL hijacking", a "sting site", or a "fake URL", is a form of cybersquatting, and possibly brandjacking, which relies on mistakes such as typos made by Internet users when inputting a website address into a web browser or composing an email address. Should a user accidentally enter an incorrect domain name, they may be led to any URL (including an alternative website owned by a cybersquatter). The typosquatter's URL will usually be one of five kinds, all similar to the victim site address:
A common misspelling, or foreign language spelling, of the intended site
A misspelling based on a typographical error
A plural of a singular domain name
A different top-level domain (e.g., .com instead of .org)
An abuse of the country code top-level domain (ccTLD) (.cm, .co, or .om instead of .com)
IDN homograph attack. This type of attack depends on registering a domain name that is similar to the 'target' domain, differing from it only because its spelling includes one or more characters that come from a different alphabet but look the same to the naked eye. For example, the Cyrillic, Latin, and Greek alphabets each have their own letter "o", each of which has its own binary code point. Turkish has a dotless letter i ("ı") that may not be perceived as different from the ASCII letter "i". Most web browsers warn of 'mixed alphabet' domain names; other services, such as email applications, may not provide the same protection (a rough detection sketch follows at the end of this section). Reputable top-level domain and country code domain registrars will not accept applications to register a deceptive name, but this policy cannot be presumed to be infallible.

Risk mitigation
Technologies that help mitigate domain name spoofing include DMARC ("Domain-based Message Authentication, Reporting and Conformance") for email and SSL/TLS certificates for websites, though even these legitimate technologies may themselves be subverted.
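A crude way to flag a potential homograph label is to check whether it mixes writing systems. The Python sketch below is a heuristic illustration, not how browsers actually implement their mixed-alphabet checks; it derives a rough script tag from the first word of each character's Unicode name:

```python
import unicodedata

def scripts_used(label: str) -> set:
    """Return rough script tags for the alphabetic characters in a label,
    taken from the first word of each Unicode character name
    (e.g. 'LATIN', 'CYRILLIC', 'GREEK')."""
    tags = set()
    for ch in label:
        if ch.isalpha():
            tags.add(unicodedata.name(ch).split()[0])
    return tags

def looks_mixed_script(label: str) -> bool:
    """Flag labels that draw letters from more than one writing system."""
    return len(scripts_used(label)) > 1

print(looks_mixed_script("example"))   # False: all letters are LATIN
print(looks_mixed_script("exаmple"))   # True: the 'а' here is CYRILLIC
```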
Necrosis
Necrosis is a form of cell injury which results in the premature death of cells in living tissue by autolysis. The term "necrosis" came about in the mid-19th century and is commonly attributed to German pathologist Rudolf Virchow, who is often regarded as one of the founders of modern pathology. Necrosis is caused by factors external to the cell or tissue, such as infection or trauma, which result in the unregulated digestion of cell components. In contrast, apoptosis is a naturally occurring programmed and targeted cause of cellular death. While apoptosis often provides beneficial effects to the organism, necrosis is almost always detrimental and can be fatal.
Cellular death due to necrosis does not follow the apoptotic signal transduction pathway; rather, various receptors are activated, resulting in the loss of cell membrane integrity and an uncontrolled release of products of cell death into the extracellular space. This initiates an inflammatory response in the surrounding tissue, which attracts leukocytes and nearby phagocytes that eliminate the dead cells by phagocytosis. However, damaging antimicrobial substances released by leukocytes create collateral damage in surrounding tissues. This excess collateral damage inhibits the healing process. Thus, untreated necrosis results in a build-up of decomposing dead tissue and cell debris at or near the site of the cell death. A classic example is gangrene. For this reason, it is often necessary to remove necrotic tissue surgically, a procedure known as debridement.

Classification
Structural signs that indicate irreversible cell injury and the progression of necrosis include dense clumping and progressive disruption of genetic material, and disruption to membranes of cells and organelles.

Morphological patterns
There are six distinctive morphological patterns of necrosis:
Coagulative necrosis is characterized by the formation of a gelatinous (gel-like) substance in dead tissues in which the architecture of the tissue is maintained, and can be observed by light microscopy. Coagulation occurs as a result of protein denaturation, causing albumin to transform into a firm and opaque state. This pattern of necrosis is typically seen in hypoxic (low-oxygen) environments, such as infarction. Coagulative necrosis occurs primarily in tissues such as the kidney, heart, and adrenal glands. Severe ischemia most commonly causes necrosis of this form.
Liquefactive necrosis (or colliquative necrosis), in contrast to coagulative necrosis, is characterized by the digestion of dead cells to form a viscous liquid mass. This is typical of bacterial, or sometimes fungal, infections because of their ability to stimulate an inflammatory response. The necrotic liquid mass is frequently creamy yellow due to the presence of dead leukocytes and is commonly known as pus. Hypoxic infarcts in the brain present as this type of necrosis, because the brain contains little connective tissue but high amounts of digestive enzymes and lipids, and cells therefore can be readily digested by their own enzymes.
Gangrenous necrosis can be considered a type of coagulative necrosis that resembles mummified tissue. It is characteristic of ischemia of the lower limb and the gastrointestinal tract. Both dry gangrene and gas gangrene can lead to this type of necrosis. If superimposed infection of dead tissues occurs, then liquefactive necrosis ensues (wet gangrene).
Caseous necrosis can be considered a combination of coagulative and liquefactive necrosis, typically caused by mycobacteria (e.g. tuberculosis), fungi, and some foreign substances. The necrotic tissue appears white and friable, like clumped cheese. Dead cells disintegrate but are not completely digested, leaving granular particles. Microscopic examination shows amorphous granular debris enclosed within a distinctive inflammatory border. Some granulomas contain this pattern of necrosis.
Fat necrosis is specialized necrosis of fat tissue, resulting from the action of activated lipases on fatty tissues such as the pancreas. In the pancreas it leads to acute pancreatitis, a condition where the pancreatic enzymes leak out into the peritoneal cavity and liquefy the membrane by splitting the triglyceride esters into fatty acids through fat saponification. Calcium, magnesium, or sodium may bind to these lesions to produce a chalky-white substance. The calcium deposits are microscopically distinctive and may be large enough to be visible on radiographic examinations. To the naked eye, calcium deposits appear as gritty white flecks.
Fibrinoid necrosis is a special form of necrosis usually caused by immune-mediated vascular damage. It is marked by complexes of antigens and antibodies, referred to as immune complexes, deposited within arterial walls together with fibrin.

Other clinical classifications of necrosis
There are also very specific forms of necrosis such as gangrene (the term used in clinical practice for limbs which have had severe hypoxia), gummatous necrosis (due to spirochaetal infections), and hemorrhagic necrosis (due to the blockage of venous drainage of an organ or tissue).
Myonecrosis is the death of individual muscle fibres due to injury, hypoxia, or infection. Common causes include spontaneous diabetic myonecrosis (a.k.a. diabetic muscle infarction) and clostridial myonecrosis (a.k.a. gas gangrene).
Some spider bites may lead to necrosis. In the United States, only spider bites from the brown recluse spider (genus Loxosceles) reliably progress to necrosis. In other countries, spiders of the same genus, such as the Chilean recluse in South America, are also known to cause necrosis. Claims that yellow sac spiders and hobo spiders possess necrotic venom have not been substantiated.
In blind mole rats (genus Spalax), the process of necrosis replaces the role of the systematic apoptosis normally used in many organisms. Low oxygen conditions, such as those common in blind mole rats' burrows, usually cause cells to undergo apoptosis. In adaptation to this higher tendency of cell death, blind mole rats evolved a mutation in the tumor suppressor protein p53 (which is also present in humans) to prevent cells from undergoing apoptosis. Human cancer patients have similar mutations, and blind mole rats were thought to be more susceptible to cancer because their cells cannot undergo apoptosis. However, after a specific amount of time (within 3 days according to a study conducted at the University of Rochester), the cells in blind mole rats release interferon-beta (which the immune system normally uses to counter viruses) in response to over-proliferation of cells caused by the suppression of apoptosis. In this case, the interferon-beta triggers cells to undergo necrosis, and this mechanism also kills cancer cells in blind mole rats. Because of tumor suppression mechanisms such as this, blind mole rats and other spalacids are resistant to cancer.
Causes
Necrosis may occur due to external or internal factors.

External factors
External factors may involve mechanical trauma (physical damage to the body which causes cellular breakdown), electric shock, damage to blood vessels (which may disrupt blood supply to associated tissue), and ischemia. Thermal effects (extremely high or low temperature) can often result in necrosis due to the disruption of cells, especially in bone cells. Necrosis can also result from chemical trauma, with alkaline and acidic compounds causing liquefactive and coagulative necrosis, respectively, in affected tissues. The severity of such cases varies significantly based on multiple factors, including the compound concentration, the type of tissue affected, and the extent of chemical exposure. In frostbite, ice crystals form, increasing the pressure of remaining tissue and fluid and causing the cells to burst. Under extreme conditions, tissues and cells may die through an unregulated process of membrane and cytosol destruction.

Internal factors
Internal factors causing necrosis include trophoneurotic disorders (diseases that occur due to defective nerve action in a part of an organ, resulting in failure of nutrition) and injury and paralysis of nerve cells. Pancreatic enzymes (lipases) are the major cause of fat necrosis. Necrosis can be activated by components of the immune system, such as the complement system; bacterial toxins; activated natural killer cells; and peritoneal macrophages. Pathogen-induced necrosis programs in cells with immunological barriers (intestinal mucosa) may alleviate invasion of pathogens through surfaces affected by inflammation. Toxins and pathogens may cause necrosis; toxins such as snake venoms may inhibit enzymes and cause cell death. Necrotic wounds have also resulted from the stings of Vespa mandarinia. Pathological conditions are characterized by inadequate secretion of cytokines. Nitric oxide (NO) and reactive oxygen species (ROS) are also accompanied by intense necrotic death of cells. A classic example of a necrotic condition is ischemia, which leads to a drastic depletion of oxygen, glucose, and other trophic factors and induces massive necrotic death of endothelial cells and non-proliferating cells of surrounding tissues (neurons, cardiomyocytes, renal cells, etc.). Recent cytological data indicate that necrotic death occurs not only during pathological events but is also a component of some physiological processes. Activation-induced death of primary T lymphocytes and other important constituents of the immune response are caspase-independent and necrotic by morphology; hence, researchers have demonstrated that necrotic cell death can occur not only during pathological processes, but also during normal processes such as tissue renewal, embryogenesis, and immune response.

Pathogenesis

Pathways
Until recently, necrosis was thought to be an unregulated process. However, there are two broad pathways in which necrosis may occur in an organism. The first of these two pathways initially involves oncosis, in which swelling of the cells occurs. Affected cells then proceed to blebbing, and this is followed by pyknosis, in which nuclear shrinkage transpires. In the final step of this pathway, cell nuclei are dissolved into the cytoplasm, which is referred to as karyolysis. The second pathway is a secondary form of necrosis that is shown to occur after apoptosis and budding. In these cellular changes of necrosis, the nucleus breaks into fragments (known as karyorrhexis).
Histopathological changes
The nucleus changes in necrosis, and the characteristics of this change are determined by the manner in which its DNA breaks down:
Karyolysis: the chromatin of the nucleus fades due to the loss of the DNA by degradation.
Karyorrhexis: the shrunken nucleus fragments to complete dispersal.
Pyknosis: the nucleus shrinks, and the chromatin condenses.
Other typical cellular changes in necrosis include:
Cytoplasmic hypereosinophilia on samples with H&E stain, seen as a darker stain of the cytoplasm.
A cell membrane that appears discontinuous when viewed with an electron microscope; this discontinuous membrane is caused by cell blebbing and the loss of microvilli.
On a larger histologic scale, pseudopalisades (false palisades) are hypercellular zones that typically surround necrotic tissue. Pseudopalisading necrosis indicates an aggressive tumor.

Treatment
There are many causes of necrosis, and as such, treatment is based upon how the necrosis came about. Treatment of necrosis typically involves two distinct processes: usually, the underlying cause of the necrosis must be treated before the dead tissue itself can be dealt with.
Debridement, referring to the removal of dead tissue by surgical or non-surgical means, is the standard therapy for necrosis. Depending on the severity of the necrosis, this may range from removal of small patches of skin to complete amputation of affected limbs or organs. Chemical removal of necrotic tissue is another option, in which enzymatic debriding agents, categorised as proteolytic, fibrinolytic, or collagenases, are used to target the various components of dead tissue. In select cases, special maggot therapy using Lucilia sericata larvae has been employed to remove necrotic tissue and infection.
In the case of ischemia, which includes myocardial infarction, the restriction of blood supply to tissues causes hypoxia and the creation of reactive oxygen species (ROS) that react with, and damage, proteins and membranes. Antioxidant treatments can be applied to scavenge the ROS.
Wounds caused by physical agents, including physical trauma and chemical burns, can be treated with antibiotics and anti-inflammatory drugs to prevent bacterial infection and inflammation. Keeping the wound clean from infection also prevents necrosis.
Chemical and toxic agents (e.g. pharmaceutical drugs, acids, bases) react with the skin, leading to skin loss and eventually necrosis. Treatment involves identification and discontinuation of the harmful agent, followed by treatment of the wound, including prevention of infection and possibly the use of immunosuppressive therapies such as anti-inflammatory drugs or immunosuppressants. In the example of a snake bite, anti-venom halts the spread of toxins while antibiotics are given to impede infection.
Even after the initial cause of the necrosis has been halted, the necrotic tissue will remain in the body. The body's immune response to apoptosis, which involves the automatic breaking down and recycling of cellular material, is not triggered by necrotic cell death, because the apoptotic pathway is disabled.

In plants
If calcium is deficient, pectin cannot be synthesized, and therefore the cell walls cannot be bonded, impeding the meristems. This leads to necrosis of stem and root tips and leaf edges. For example, necrosis of tissue can occur in Arabidopsis thaliana due to plant pathogens.
Cacti such as the saguaro and cardon in the Sonoran Desert regularly develop necrotic patches; a dipteran species, Drosophila mettleri, has evolved a P450 detoxification system that enables it to use the exudates released in these patches both to nest and to feed its larvae.
Escherichia coli
Escherichia coli is a gram-negative, facultative anaerobic, rod-shaped, coliform bacterium of the genus Escherichia that is commonly found in the lower intestine of warm-blooded organisms. Most E. coli strains are part of the normal microbiota of the gut, where they constitute about 0.1%, along with other facultative anaerobes. These bacteria are mostly harmless or even beneficial to humans. For example, some strains of E. coli benefit their hosts by producing vitamin K2 or by preventing the colonization of the intestine by harmful pathogenic bacteria. These mutually beneficial relationships between E. coli and humans are a type of mutualistic biological relationship, in which both the humans and the E. coli benefit each other.
E. coli is expelled into the environment within fecal matter. The bacterium grows massively in fresh fecal matter under aerobic conditions for three days, but its numbers decline slowly afterwards. Some serotypes, such as EPEC and ETEC, are pathogenic, causing serious food poisoning in their hosts. Fecal–oral transmission is the major route through which pathogenic strains of the bacterium cause disease. This transmission method is occasionally responsible for food contamination incidents that prompt product recalls. Cells are able to survive outside the body for a limited amount of time, which makes them potential indicator organisms to test environmental samples for fecal contamination. A growing body of research, though, has examined environmentally persistent E. coli, which can survive for many days and grow outside a host.
The bacterium can be grown and cultured easily and inexpensively in a laboratory setting, and has been intensively investigated for over 60 years. E. coli is a chemoheterotroph whose chemically defined medium must include a source of carbon and energy. E. coli is the most widely studied prokaryotic model organism, and an important species in the fields of biotechnology and microbiology, where it has served as the host organism for the majority of work with recombinant DNA. Under favourable conditions, it takes as little as 20 minutes to reproduce.

Biology and biochemistry

Type and morphology
E. coli is a gram-negative, facultative anaerobic, nonsporulating coliform bacterium. Cells are typically rod-shaped, and are about 2.0 μm long and 0.25–1.0 μm in diameter, with a cell volume of 0.6–0.7 μm³. E. coli stains gram-negative because its cell wall is composed of a thin peptidoglycan layer and an outer membrane. During the staining process, E. coli picks up the color of the counterstain safranin and stains pink. The outer membrane surrounding the cell wall provides a barrier to certain antibiotics, such that E. coli is not damaged by penicillin. The flagella which allow the bacteria to swim have a peritrichous arrangement. The bacterium also attaches to, and effaces, the microvilli of the intestines via an adhesion molecule known as intimin.

Metabolism
E. coli can live on a wide variety of substrates and uses mixed acid fermentation in anaerobic conditions, producing lactate, succinate, ethanol, acetate, and carbon dioxide. Since many pathways in mixed-acid fermentation produce hydrogen gas, these pathways require the levels of hydrogen to be low, as is the case when E. coli lives together with hydrogen-consuming organisms, such as methanogens or sulphate-reducing bacteria. In addition, E. coli's metabolism can be rewired to solely use CO2 as the source of carbon for biomass production.
In other words, this obligate heterotroph's metabolism can be altered to display autotrophic capabilities by heterologously expressing carbon fixation genes as well as formate dehydrogenase and conducting laboratory evolution experiments. This may be done by using formate to reduce electron carriers and supply the ATP required in anabolic pathways inside these synthetic autotrophs.
E. coli has three native glycolytic pathways: the Embden–Meyerhof–Parnas pathway (EMPP), the Entner–Doudoroff pathway (EDP), and the oxidative pentose phosphate pathway (OPPP). The EMPP employs ten enzymatic steps to yield two pyruvates, two ATP, and two NADH per glucose molecule, while the OPPP serves as an oxidation route for NADPH synthesis. Although the EDP is the most thermodynamically favourable of the three pathways, E. coli does not use it for glucose metabolism, relying mainly on the EMPP and the OPPP. The EDP mainly remains inactive except during growth with gluconate.

Catabolite repression
When growing in the presence of a mixture of sugars, bacteria will often consume the sugars sequentially through a process known as catabolite repression. By repressing the expression of the genes involved in metabolizing the less preferred sugars, cells will usually first consume the sugar yielding the highest growth rate, followed by the sugar yielding the next highest growth rate, and so on. In doing so, the cells ensure that their limited metabolic resources are being used to maximize the rate of growth. The well-known example of this in E. coli involves the growth of the bacterium on glucose and lactose, where E. coli will consume glucose before lactose. Catabolite repression has also been observed in E. coli in the presence of other non-glucose sugars, such as arabinose and xylose, sorbitol, rhamnose, and ribose. In E. coli, glucose catabolite repression is regulated by the phosphotransferase system, a multi-protein phosphorylation cascade that couples glucose uptake and metabolism.

Culture growth
Optimum growth of E. coli occurs at 37 °C, but some laboratory strains can multiply at temperatures up to 49 °C. E. coli grows in a variety of defined laboratory media, such as lysogeny broth, or any medium that contains glucose, ammonium phosphate monobasic, sodium chloride, magnesium sulfate, potassium phosphate dibasic, and water. Growth can be driven by aerobic or anaerobic respiration, using a large variety of redox pairs, including the oxidation of pyruvic acid, formic acid, hydrogen, and amino acids, and the reduction of substrates such as oxygen, nitrate, fumarate, dimethyl sulfoxide, and trimethylamine N-oxide. E. coli is classified as a facultative anaerobe. It uses oxygen when it is present and available. It can, however, continue to grow in the absence of oxygen using fermentation or anaerobic respiration. Respiration type is managed in part by the arc system. The ability to continue growing in the absence of oxygen is an advantage to bacteria because their survival is increased in environments where water predominates.

Cell cycle
The bacterial cell cycle is divided into three stages. The B period occurs between the completion of cell division and the beginning of DNA replication. The C period encompasses the time it takes to replicate the chromosomal DNA. The D period refers to the stage between the conclusion of DNA replication and the end of cell division. The doubling rate of E. coli is higher when more nutrients are available. However, the lengths of the C and D periods do not change, even when the doubling time becomes less than the sum of the C and D periods.
At the fastest growth rates, replication begins before the previous round of replication has completed, resulting in multiple replication forks along the DNA and overlapping cell cycles. The number of replication forks in fast-growing E. coli typically follows 2^n (n = 1, 2 or 3). This only happens if replication is initiated simultaneously from all origins of replication, which is referred to as synchronous replication. However, not all cells in a culture replicate synchronously; in this case, cells do not have multiples of two replication forks, and replication initiation is then referred to as asynchronous. Asynchrony can be caused by mutations to, for instance, DnaA or the DnaA initiator-associating protein DiaA.
Although E. coli reproduces by binary fission, the two supposedly identical cells produced by cell division are functionally asymmetric, with the old-pole cell acting as an aging parent that repeatedly produces rejuvenated offspring. When exposed to an elevated stress level, damage accumulation in an old E. coli lineage may surpass its immortality threshold, so that it arrests division and becomes mortal. Cellular aging is a general process, affecting prokaryotes and eukaryotes alike.

Genetic adaptation
E. coli and related bacteria possess the ability to transfer DNA via bacterial conjugation or transduction, which allows genetic material to spread horizontally through an existing population. The process of transduction, which uses the bacterial virus called a bacteriophage, is how the spread of the gene encoding the Shiga toxin from Shigella bacteria to E. coli helped produce E. coli O157:H7, the Shiga toxin-producing strain of E. coli.

Diversity
E. coli encompasses an enormous population of bacteria that exhibit a very high degree of both genetic and phenotypic diversity. Genome sequencing of many isolates of E. coli and related bacteria shows that a taxonomic reclassification would be desirable. However, this has not been done, largely due to its medical importance, and E. coli remains one of the most diverse bacterial species: only 20% of the genes in a typical E. coli genome are shared among all strains. In fact, from the more constructive point of view, the members of genus Shigella (S. dysenteriae, S. flexneri, S. boydii, and S. sonnei) should be classified as E. coli strains, a phenomenon termed taxa in disguise. Similarly, other strains of E. coli (e.g. the K-12 strain commonly used in recombinant DNA work) are sufficiently different that they would merit reclassification.
A strain is a subgroup within the species that has unique characteristics that distinguish it from other strains. These differences are often detectable only at the molecular level; however, they may result in changes to the physiology or lifecycle of the bacterium. For example, a strain may gain pathogenic capacity, the ability to use a unique carbon source, the ability to occupy a particular ecological niche, or the ability to resist antimicrobial agents. Different strains of E. coli are often host-specific, making it possible to determine the source of fecal contamination in environmental samples. For example, knowing which E. coli strains are present in a water sample allows researchers to make assumptions about whether the contamination originated from a human, another mammal, or a bird.

Serotypes
A common subdivision system of E. coli, though not one based on evolutionary relatedness, is by serotype, which is based on major surface antigens (O antigen: part of lipopolysaccharide layer; H: flagellin; K antigen: capsule), e.g. O157:H7. It is, however, common to cite only the serogroup, i.e. the O-antigen. At present, about 190 serogroups are known. The common laboratory strain has a mutation that prevents the formation of an O-antigen and is thus not typeable.

Genome plasticity and evolution
Like all lifeforms, new strains of E. coli evolve through the natural biological processes of mutation, gene duplication, and horizontal gene transfer; in particular, 18% of the genome of the laboratory strain MG1655 was horizontally acquired since the divergence from Salmonella. E. coli K-12 and E. coli B strains are the most frequently used varieties for laboratory purposes. Some strains develop traits that can be harmful to a host animal. These virulent strains typically cause a bout of diarrhea that is often self-limiting in healthy adults but is frequently lethal to children in the developing world. More virulent strains, such as O157:H7, cause serious illness or death in the elderly, the very young, or the immunocompromised.
The genera Escherichia and Salmonella diverged around 102 million years ago (credibility interval: 57–176 mya), an event unrelated to the much earlier (see Synapsid) divergence of their hosts: the former being found in mammals and the latter in birds and reptiles. This was followed by a split of an Escherichia ancestor into five species (E. albertii, E. coli, E. fergusonii, E. hermannii, and E. vulneris). The last E. coli ancestor split between 20 and 30 million years ago.
The long-term evolution experiments using E. coli, begun by Richard Lenski in 1988, have allowed direct observation of genome evolution over more than 65,000 generations in the laboratory. For instance, E. coli typically do not have the ability to grow aerobically with citrate as a carbon source, which is used as a diagnostic criterion with which to differentiate E. coli from other closely related bacteria, such as Salmonella. In this experiment, one population of E. coli unexpectedly evolved the ability to aerobically metabolize citrate, a major evolutionary shift with some hallmarks of microbial speciation.
In the microbial world, a relationship of predation can be established similar to that observed in the animal world: E. coli is the prey of multiple generalist predators, such as Myxococcus xanthus. In this predator-prey relationship, a parallel evolution of both species is observed through genomic and phenotypic modifications; in the case of E. coli, the modifications affect two aspects involved in its virulence, namely mucoid production (excessive production of the exopolysaccharide alginate) and the suppression of the OmpT gene. This produces, in future generations, a better adaptation of one of the species that is counteracted by the evolution of the other, following a co-evolutionary model demonstrated by the Red Queen hypothesis.

Neotype strain
E. coli is the type species of the genus (Escherichia), and in turn Escherichia is the type genus of the family Enterobacteriaceae, where the family name does not stem from the genus Enterobacter + "i" (sic.) + "aceae", but from "enterobacterium" + "aceae" (enterobacterium being not a genus, but an alternative trivial name to enteric bacterium).
The original strain described by Escherich is believed to be lost; consequently, a new type strain (neotype) was chosen as a representative: the neotype strain is U5/41T, also known under the deposit names DSM 30083, ATCC 11775, and NCTC 9001, which is pathogenic to chickens and has an O1:K1:H7 serotype. However, in most studies, either O157:H7, K-12 MG1655, or K-12 W3110 have been used as a representative E. coli. The genome of the type strain has only lately been sequenced.

Phylogeny of E. coli strains
Many strains belonging to this species have been isolated and characterised. In addition to serotype (vide supra), they can be classified according to their phylogeny, i.e. the inferred evolutionary history; as of 2014 the species was divided into six phylogenetic groups. Particularly the use of whole genome sequences yields highly supported phylogenies. The phylogroup structure remains robust to newer methods and sequences, which sometimes add newer groups, giving 8 or 14 groups as of 2023. The link between phylogenetic distance ("relatedness") and pathology is small: e.g., the O157:H7 serotype strains, which form an exclusive clade (group E), are all enterohaemorrhagic strains (EHEC), but not all EHEC strains are closely related. In fact, four different species of Shigella are nested among E. coli strains (vide supra), while E. albertii and E. fergusonii are outside this group. Indeed, all Shigella species were placed within a single subspecies of E. coli in a phylogenomic study that included the type strain. All commonly used research strains of E. coli belong to group A and are derived mainly from Clifton's K-12 strain (λ+ F+; O16) and to a lesser degree from d'Herelle's "Bacillus coli" strain (B strain; O7). There have been multiple proposals to revise the taxonomy to match phylogeny. However, all these proposals need to face the fact that Shigella remains a widely used name in medicine, and to find ways to reduce any confusion that can stem from renaming.

Genomics
The first complete DNA sequence of an E. coli genome (laboratory strain K-12 derivative MG1655) was published in 1997. It is a circular DNA molecule 4.6 million base pairs in length, containing 4288 annotated protein-coding genes (organized into 2584 operons), seven ribosomal RNA (rRNA) operons, and 86 transfer RNA (tRNA) genes. Despite having been the subject of intensive genetic analysis for about 40 years, many of these genes were previously unknown. The coding density was found to be very high, with a mean distance between genes of only 118 base pairs. The genome was observed to contain a significant number of transposable genetic elements, repeat elements, cryptic prophages, and bacteriophage remnants. Most genes have only a single copy.
More than three hundred complete genomic sequences of Escherichia and Shigella species are known. The genome sequence of the type strain of E. coli was added to this collection before 2014. Comparison of these sequences shows a remarkable amount of diversity; only about 20% of each genome represents sequences present in every one of the isolates, while around 80% of each genome can vary among isolates. Each individual genome contains between 4,000 and 5,500 genes, but the total number of different genes among all of the sequenced E. coli strains (the pangenome) exceeds 16,000. This very large variety of component genes has been interpreted to mean that two-thirds of the E. coli pangenome originated in other species and arrived through the process of horizontal gene transfer.
Gene nomenclature
Genes in E. coli are usually named in accordance with the uniform nomenclature proposed by Demerec et al. Gene names are 3-letter acronyms that derive from their function (when known) or mutant phenotype and are italicized. When multiple genes have the same acronym, the different genes are designated by a capital letter that follows the acronym and is also italicized. For instance, recA is named after its role in homologous recombination plus the letter A. Functionally related genes are named recB, recC, recD, etc. The proteins are named by uppercase acronyms, e.g. RecA, RecB, etc.
When the genome of E. coli strain K-12 substr. MG1655 was sequenced, all known or predicted protein-coding genes were numbered (more or less) in their order on the genome and abbreviated by b numbers, such as b2819 (= recD). The "b" names refer to Fred Blattner, who led the genome sequencing effort. Another numbering system was introduced with the sequence of another E. coli K-12 substrain, W3110, which was sequenced in Japan and hence uses numbers starting with JW... (Japanese W3110), e.g. JW2787 (= recD). Hence, recD = b2819 = JW2787. Note, however, that most databases have their own numbering system, e.g. the EcoGene database uses EG10826 for recD. Finally, ECK numbers are specifically used for alleles in the MG1655 strain of E. coli K-12. Complete lists of genes and their synonyms can be obtained from databases such as EcoGene or UniProt.

Proteomics

Proteome
The genome sequence of E. coli predicts 4288 protein-coding genes, of which 38 percent initially had no attributed function. Comparison with five other sequenced microbes reveals ubiquitous as well as narrowly distributed gene families; many families of similar genes within E. coli are also evident. The largest family of paralogous proteins contains 80 ABC transporters. The genome as a whole is strikingly organized with respect to the local direction of replication; guanines, oligonucleotides possibly related to replication and recombination, and most genes are so oriented. The genome also contains insertion sequence (IS) elements, phage remnants, and many other patches of unusual composition, indicating genome plasticity through horizontal transfer.
Several studies have experimentally investigated the proteome of E. coli. By 2006, 1,627 (38%) of the predicted proteins (open reading frames, ORFs) had been identified experimentally. Mateus et al. 2020 detected 2,586 proteins with at least 2 peptides (60% of all proteins).

Post-translational modifications (PTMs)
Although far fewer bacterial proteins seem to carry post-translational modifications (PTMs) compared to eukaryotic proteins, a substantial number of proteins are modified in E. coli. For instance, Potel et al. (2018) found 227 phosphoproteins, of which 173 were phosphorylated on histidine. The majority of phosphorylated amino acids were serine (1,220 sites), with 246 sites on histidine, 501 phosphorylated threonines, and 162 tyrosines.

Interactome
The interactome of E. coli has been studied by affinity purification and mass spectrometry (AP/MS) and by analyzing the binary interactions among its proteins.
Protein complexes. A 2006 study purified 4,339 proteins from cultures of strain K-12 and found interacting partners for 2,667 proteins, many of which had unknown functions at the time. A 2009 study found 5,993 interactions between proteins of the same E. coli strain, though these data showed little overlap with those of the 2006 publication.
Binary interactions.
Rajagopala et al. (2014) have carried out systematic yeast two-hybrid screens with most E. coli proteins, and found a total of 2,234 protein-protein interactions. This study also integrated genetic interactions and protein structures and mapped 458 interactions within 227 protein complexes.

Normal microbiota
E. coli belongs to a group of bacteria informally known as coliforms that are found in the gastrointestinal tract of warm-blooded animals. E. coli normally colonizes an infant's gastrointestinal tract within 40 hours of birth, arriving with food or water or from the individuals handling the child. In the bowel, E. coli adheres to the mucus of the large intestine. It is the primary facultative anaerobe of the human gastrointestinal tract. (Facultative anaerobes are organisms that can grow in either the presence or absence of oxygen.) As long as these bacteria do not acquire genetic elements encoding for virulence factors, they remain benign commensals.

Therapeutic use
Due to the low cost and speed with which it can be grown and modified in laboratory settings, E. coli is a popular expression platform for the production of recombinant proteins used in therapeutics. One advantage of using E. coli over another expression platform is that E. coli naturally does not export many proteins into the periplasm, making it easier to recover a protein of interest without cross-contamination. The E. coli K-12 strains and their derivatives (DH1, DH5α, MG1655, RV308 and W3110) are the strains most widely used by the biotechnology industry. The nonpathogenic E. coli strain Nissle 1917 (EcN; marketed as Mutaflor) and E. coli O83:K24:H31 (Colinfant) are used as probiotic agents in medicine, mainly for the treatment of various gastrointestinal diseases, including inflammatory bowel disease. It is thought that the EcN strain might impede the growth of opportunistic pathogens, including Salmonella and other coliform enteropathogens, through the production of microcin proteins and of siderophores.

Role in disease
Most E. coli strains do not cause disease, naturally living in the gut, but virulent strains can cause gastroenteritis, urinary tract infections, neonatal meningitis, hemorrhagic colitis, and Crohn's disease. Common signs and symptoms include severe abdominal cramps, diarrhea, hemorrhagic colitis, vomiting, and sometimes fever. In rarer cases, virulent strains are also responsible for bowel necrosis (tissue death) and perforation without progressing to hemolytic-uremic syndrome, peritonitis, mastitis, sepsis, and gram-negative pneumonia. Very young children are more susceptible to developing severe illness, such as hemolytic-uremic syndrome; however, healthy individuals of all ages are at risk of the severe consequences that may arise as a result of being infected with E. coli.
Some strains of E. coli, for example O157:H7, can produce Shiga toxin. The Shiga toxin causes inflammatory responses in target cells of the gut, leaving behind lesions which result in the bloody diarrhea that is a symptom of a Shiga toxin-producing E. coli (STEC) infection. This toxin further causes premature destruction of the red blood cells, which then clog the body's filtering system, the kidneys, in some rare cases (usually in children and the elderly) causing hemolytic-uremic syndrome (HUS), which may lead to kidney failure and even death. Signs of hemolytic-uremic syndrome include decreased frequency of urination, lethargy, and paleness of cheeks and inside the lower eyelids.
In 25% of HUS patients, complications of the nervous system occur, which may in turn cause strokes. In addition, this strain causes the buildup of fluid (since the kidneys do not work), leading to edema around the lungs, legs, and arms. This increase in fluid buildup, especially around the lungs, impedes the functioning of the heart, causing an increase in blood pressure.
Uropathogenic E. coli (UPEC) is one of the main causes of urinary tract infections. It is part of the normal microbiota in the gut and can be introduced in many ways. In particular for females, the direction of wiping after defecation (wiping back to front) can lead to fecal contamination of the urogenital orifices. Anal intercourse can also introduce this bacterium into the male urethra, and in switching from anal to vaginal intercourse, the male can also introduce UPEC to the female urogenital system.
Enterotoxigenic E. coli (ETEC) is the most common cause of traveler's diarrhea, with as many as 840 million cases worldwide in developing countries each year. The bacteria, typically transmitted through contaminated food or drinking water, adhere to the intestinal lining, where they secrete either of two types of enterotoxins, leading to watery diarrhea. The rate and severity of infections are higher among children under the age of five, with as many as 380,000 deaths annually.
In May 2011, one E. coli strain, O104:H4, was the subject of a bacterial outbreak that began in Germany. Certain strains of E. coli are a major cause of foodborne illness. The outbreak started when several people in Germany were infected with enterohemorrhagic E. coli (EHEC) bacteria, leading to hemolytic-uremic syndrome (HUS), a medical emergency that requires urgent treatment. The outbreak concerned not only Germany but also 15 other countries, including regions in North America. On 30 June 2011, the German Bundesinstitut für Risikobewertung (BfR) (Federal Institute for Risk Assessment, a federal institute within the German Federal Ministry of Food, Agriculture and Consumer Protection) announced that seeds of fenugreek from Egypt were likely the cause of the EHEC outbreak.
Some studies have demonstrated an absence of E. coli in the gut flora of subjects with the metabolic disorder phenylketonuria. It is hypothesized that the absence of these normal bacteria impairs the production of the key vitamins B2 (riboflavin) and K2 (menaquinone) – vitamins which are implicated in many physiological roles in humans, such as cellular and bone metabolism – and so contributes to the disorder.
Carbapenem-resistant E. coli (carbapenemase-producing E. coli) are resistant to the carbapenem class of antibiotics, considered the drugs of last resort for such infections. They are resistant because they produce an enzyme called a carbapenemase that disables the drug molecule.
Incubation period
The time between ingesting the STEC bacteria and feeling sick is called the "incubation period". The incubation period is usually 3–4 days after the exposure, but may be as short as 1 day or as long as 10 days. The symptoms often begin slowly with mild belly pain or non-bloody diarrhea that worsens over several days. HUS, if it occurs, develops an average of 7 days after the first symptoms, when the diarrhea is improving.
Diagnosis
Diagnosis of infectious diarrhea and identification of antimicrobial resistance is performed using a stool culture with subsequent antibiotic sensitivity testing.
It requires a minimum of 2 days and a maximum of several weeks to culture gastrointestinal pathogens. The sensitivity (true positive) and specificity (true negative) rates for stool culture vary by pathogen, and a number of human pathogens cannot be cultured at all. For culture-positive samples, antimicrobial resistance testing takes an additional 12–24 hours to perform. Current point-of-care molecular diagnostic tests can identify E. coli and antimicrobial resistance in the identified strains much faster than culture and sensitivity testing. Microarray-based platforms can identify specific pathogenic strains of E. coli and E. coli-specific AMR genes in two hours or less with high sensitivity and specificity, but the size of the test panel (i.e., total pathogens and antimicrobial resistance genes) is limited. Newer metagenomics-based infectious disease diagnostic platforms are currently being developed to overcome the various limitations of culture and all currently available molecular diagnostic technologies.
Treatment
The mainstay of treatment is the assessment of dehydration and replacement of fluid and electrolytes. Administration of antibiotics has been shown to shorten the course of illness and duration of excretion of enterotoxigenic E. coli (ETEC) in adults in endemic areas and in traveler's diarrhea, though the rate of resistance to commonly used antibiotics is increasing and they are generally not recommended. The antibiotic used depends upon susceptibility patterns in the particular geographical region. Currently, the antibiotics of choice are fluoroquinolones or azithromycin, with an emerging role for rifaximin. Rifaximin, a semisynthetic rifamycin derivative, is an effective and well-tolerated antibacterial for the management of adults with non-invasive traveler's diarrhea. Rifaximin was significantly more effective than placebo and no less effective than ciprofloxacin in reducing the duration of diarrhea. While rifaximin is effective in patients with E. coli-predominant traveler's diarrhea, it appears ineffective in patients infected with inflammatory or invasive enteropathogens.
Prevention
ETEC is the type of E. coli that most vaccine development efforts are focused on. Antibodies against the LT and major CFs of ETEC provide protection against LT-producing ETEC expressing homologous CFs. Oral inactivated vaccines consisting of toxin antigen and whole cells, i.e. the licensed recombinant cholera B subunit (rCTB)-WC cholera vaccine Dukoral, have been developed. There are currently no licensed vaccines for ETEC, though several are in various stages of development. In different trials, the rCTB-WC cholera vaccine provided high (85–100%) short-term protection. An oral ETEC vaccine candidate consisting of rCTB and formalin-inactivated E. coli bacteria expressing major CFs has been shown in clinical trials to be safe, immunogenic, and effective against severe diarrhea in American travelers but not against ETEC diarrhea in young children in Egypt. A modified ETEC vaccine consisting of recombinant E. coli strains over-expressing the major CFs and a more LT-like hybrid toxoid called LCTBA is undergoing clinical testing. Other proven prevention methods for E. coli transmission include handwashing and improved sanitation and drinking water, as transmission occurs through fecal contamination of food and water supplies. Additionally, thoroughly cooking meat and avoiding consumption of raw, unpasteurized beverages, such as juices and milk, are other proven methods for preventing E.
coli. Lastly, cross-contamination of utensils and work spaces should be avoided when preparing food.
Model organism in life science research
Because of its long history of laboratory culture and ease of manipulation, E. coli plays an important role in modern biological engineering and industrial microbiology. The work of Stanley Norman Cohen and Herbert Boyer in E. coli, using plasmids and restriction enzymes to create recombinant DNA, became a foundation of biotechnology. E. coli is a very versatile host for the production of heterologous proteins, and various protein expression systems have been developed which allow the production of recombinant proteins in E. coli. Researchers can introduce genes into the microbes using plasmids which permit high-level expression of protein, and such protein may be mass-produced in industrial fermentation processes. One of the first useful applications of recombinant DNA technology was the manipulation of E. coli to produce human insulin. Many proteins previously thought difficult or impossible to express in E. coli in folded form have been successfully expressed in E. coli. For example, proteins with multiple disulphide bonds may be produced in the periplasmic space or in the cytoplasm of mutants rendered sufficiently oxidizing to allow disulphide bonds to form, while proteins requiring post-translational modification such as glycosylation for stability or function have been expressed using the N-linked glycosylation system of Campylobacter jejuni engineered into E. coli. Modified E. coli cells have been used in vaccine development, bioremediation, production of biofuels, lighting, and production of immobilised enzymes.
Strain K-12 is a mutant form of E. coli that over-expresses the enzyme alkaline phosphatase (ALP). The mutation arises due to a defect in the gene that constitutively codes for the enzyme. A gene that is producing a product without any inhibition is said to have constitutive activity. This particular mutant form is used to isolate and purify the enzyme. Strain OP50 of Escherichia coli is used for maintenance of Caenorhabditis elegans cultures. Strain JM109 is a mutant form of E. coli that is recA and endA deficient. The strain can be utilized for blue/white screening when the cells carry the fertility factor episome. Lack of recA decreases the possibility of unwanted restriction of the DNA of interest, and lack of endA inhibits plasmid DNA decomposition. Thus, JM109 is useful for cloning and expression systems.
Model organism
E. coli is frequently used as a model organism in microbiology studies. Cultivated strains (e.g. E. coli K12) are well-adapted to the laboratory environment, and, unlike wild-type strains, have lost their ability to thrive in the intestine. Many laboratory strains lose their ability to form biofilms. These features protect wild-type strains from antibodies and other chemical attacks, but require a large expenditure of energy and material resources. E. coli is often used as a representative microorganism in the research of novel water treatment and sterilisation methods, including photocatalysis. By standard plate count methods, following sequential dilutions, and growth on agar gel plates, the concentration of viable organisms, or CFUs (colony-forming units), in a known volume of treated water can be evaluated, allowing the comparative assessment of material performance.
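The plate-count arithmetic is simple: the colony count on a plate is scaled back up by the dilution factor and the volume plated. A minimal sketch, using a hypothetical count and dilution:

def cfu_per_ml(colonies, dilution_factor, volume_plated_ml):
    # Concentration of viable cells in the original, undiluted sample (CFU/mL).
    return colonies / (dilution_factor * volume_plated_ml)

# e.g. 150 colonies on a plate spread with 0.1 mL of a 1e-6 dilution
print(cfu_per_ml(150, 1e-6, 0.1))  # 1.5e9 CFU/mL in the treated-water sample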
In 1946, Joshua Lederberg and Edward Tatum first described the phenomenon known as bacterial conjugation using E. coli as a model bacterium, and it remains the primary model to study conjugation. E. coli was an integral part of the first experiments to understand phage genetics, and early researchers, such as Seymour Benzer, used E. coli and phage T4 to understand the topography of gene structure. Prior to Benzer's research, it was not known whether the gene was a linear structure, or if it had a branching pattern. E. coli was one of the first organisms to have its genome sequenced; the complete genome of E. coli K12 was published in Science in 1997. From 2002 to 2010, a team at the Hungarian Academy of Sciences created a strain of Escherichia coli called MDS42, which is now sold by Scarab Genomics of Madison, WI under the name of "Clean Genome E. coli", in which 15% of the genome of the parental strain (E. coli K-12 MG1655) was removed to aid in molecular biology efficiency, removing IS elements, pseudogenes and phages, resulting in better maintenance of plasmid-encoded toxic genes, which are often inactivated by transposons. Biochemistry and replication machinery were not altered.
By evaluating the possible combination of nanotechnologies with landscape ecology, complex habitat landscapes can be generated with details at the nanoscale. On such synthetic ecosystems, evolutionary experiments with E. coli have been performed to study the spatial biophysics of adaptation in an island biogeography on-chip. In other studies, non-pathogenic E. coli has been used as a model microorganism towards understanding the effects of simulated microgravity (on Earth) on the same.
Uses in biological computing
Scientists have proposed the idea of genetic circuits for computational tasks since 1961. Collaboration between biologists and computer scientists has allowed the design of digital logic gates based on the metabolism of E. coli. Because the lac operon is regulated in a two-stage process, genetic regulation in the bacterium can be used to realize computing functions (a simplified sketch appears at the end of this article). The process is controlled at the transcription stage of DNA into messenger RNA. Studies are being performed attempting to program E. coli to solve complicated mathematics problems, such as the Hamiltonian path problem. A computer to control protein production of E. coli within yeast cells has been developed. A method has also been developed to use bacteria to behave as an LCD screen. In July 2017, separate experiments with E. coli published in Nature showed the potential of using living cells for computing tasks and storing information. A team formed with collaborators of the Biodesign Institute at Arizona State University and Harvard's Wyss Institute for Biologically Inspired Engineering developed a biological computer inside E. coli that responded to a dozen inputs. The team called the computer "ribocomputer", as it was composed of ribonucleic acid. Meanwhile, Harvard researchers showed that it is possible to store information in bacteria after successfully archiving images and movies in the DNA of living E. coli cells. In 2021, a team led by biophysicist Sangram Bagh carried out a study with E. coli to solve 2 × 2 maze problems in order to probe the principle of distributed computing among cells.
History
In 1885, the German-Austrian pediatrician Theodor Escherich discovered this organism in the feces of healthy individuals. He called it Bacterium coli commune because it is found in the colon. Early classifications of prokaryotes placed these in a handful of genera based on their shape and motility (at that time Ernst Haeckel's classification of bacteria in the kingdom Monera was in place).
Bacterium coli was the type species of the now invalid genus Bacterium when it was revealed that the former type species ("Bacterium triloculare") was missing. Following a revision of Bacterium, it was reclassified as Bacillus coli by Migula in 1895 and later reclassified in the newly created genus Escherichia, named after its original discoverer, by Aldo Castellani and Albert John Chalmers.
In 1996, an outbreak of E. coli food poisoning occurred in Wishaw, Scotland, killing 21 people. This death toll was exceeded in 2011, when the 2011 Germany E. coli O104:H4 outbreak, linked to organic fenugreek sprouts, killed 53 people. In 2024, an outbreak of E. coli food poisoning across the U.S. was linked to U.S.-grown organic carrots, causing one fatality and dozens of illnesses.
Uses
E. coli has several practical uses besides its use as a vector for genetic experiments and processes. For example, E. coli can be used to generate synthetic propane and recombinant human growth hormone.
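The two-stage lac regulation described under "Uses in biological computing" can be abstracted as a logic gate: in the standard textbook picture, the operon is strongly expressed only when lactose is present (releasing the LacI repressor) and glucose is absent (activating transcription via cAMP–CAP), which behaves as an AND over those two inputs. A deliberately simplified sketch; real genetic circuits are analog and noisy:

def lac_operon_expressed(lactose_present, glucose_absent):
    # Stage 1: allolactose binds LacI, lifting repression.
    # Stage 2: low glucose raises cAMP, and cAMP-CAP activates transcription.
    return lactose_present and glucose_absent  # behaves as an AND gate

for lactose in (False, True):
    for no_glucose in (False, True):
        print(lactose, no_glucose, "->", lac_operon_expressed(lactose, no_glucose))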
Botulism
Botulism is a rare and potentially fatal illness caused by botulinum toxin, which is produced by the bacterium Clostridium botulinum. The disease begins with weakness, blurred vision, feeling tired, and trouble speaking. This may then be followed by weakness of the arms, chest muscles, and legs. Vomiting, swelling of the abdomen, and diarrhea may also occur. The disease does not usually affect consciousness or cause a fever.
Botulism can occur in several ways. The bacterial spores which cause it are common in both soil and water and are very resistant. They produce the botulinum toxin when exposed to low oxygen levels and certain temperatures. Foodborne botulism happens when food containing the toxin is eaten. Infant botulism instead happens when the bacterium develops in the intestines and releases the toxin. This typically only occurs in children less than one year old, as protective mechanisms against development of the bacterium develop after that age. Wound botulism is found most often among those who inject street drugs. In this situation, spores enter a wound, and in the absence of oxygen, release the toxin. The disease is not passed directly between people. Its diagnosis is confirmed by finding the toxin or bacteria in the person in question.
Prevention is primarily by proper food preparation. The toxin, though not the spores, is destroyed by heating to more than 85°C (185°F) for longer than five minutes. The clostridial spores can be destroyed in an autoclave with moist heat (120°C/250°F for at least 15 minutes) or dry heat (160°C for 2 hours) or by irradiation. The spores of group I strains are inactivated by heating at 121°C (250°F) for 3 minutes during commercial canning. Spores of group II strains are less heat-resistant and are often inactivated by heating at 90°C (194°F) for 10 minutes, 85°C for 52 minutes, or 80°C for 270 minutes; however, these treatments may not be sufficient in some foods. Honey can contain the organism, and for this reason, honey should not be fed to children under 12 months. Treatment is with an antitoxin. In those who lose their ability to breathe on their own, mechanical ventilation may be necessary for months. Antibiotics may be used for wound botulism. Death occurs in 5 to 10% of people. Botulism also affects many other animals. The word is from the Latin botulus, meaning 'sausage'.
Signs and symptoms
The muscle weakness of botulism characteristically starts in the muscles supplied by the cranial nerves—a group of twelve nerves that control eye movements, the facial muscles, and the muscles controlling chewing and swallowing. Double vision, drooping of both eyelids, loss of facial expression, and swallowing problems may therefore occur. In addition to affecting the voluntary muscles, it can also cause disruptions in the autonomic nervous system. This is experienced as a dry mouth and throat (due to decreased production of saliva), postural hypotension (decreased blood pressure on standing, with resultant lightheadedness and risk of blackouts), and eventually constipation (due to decreased forward movement of intestinal contents). Some of the toxins (B and E) also precipitate nausea, vomiting, and difficulty with talking. The weakness then spreads to the arms (starting in the shoulders and proceeding to the forearms) and legs (again from the thighs down to the feet). Severe botulism leads to reduced movement of the muscles of respiration, and hence problems with gas exchange.
This may be experienced as dyspnea (difficulty breathing), but when severe can lead to respiratory failure, due to the buildup of unexhaled carbon dioxide and its resultant depressant effect on the brain. This may lead to respiratory compromise and death if untreated. Clinicians frequently think of the symptoms of botulism in terms of a classic triad: bulbar palsy and descending paralysis, lack of fever, and clear senses and mental status ("clear sensorium").
Infant botulism
Infant botulism (also referred to as floppy baby syndrome) was first recognized in 1976, and is the most common form of botulism in the United States. Infants are susceptible to infant botulism in the first year of life, with more than 90% of cases occurring in infants younger than six months. Infant botulism results from the ingestion of C. botulinum spores and the subsequent colonization of the small intestine. The infant gut may be colonized when the composition of the intestinal microflora (normal flora) is insufficient to competitively inhibit the growth of C. botulinum and levels of bile acids (which normally inhibit clostridial growth) are lower than later in life. The growth of the spores releases botulinum toxin, which is then absorbed into the bloodstream and taken throughout the body, causing paralysis by blocking the release of acetylcholine at the neuromuscular junction. Typical symptoms of infant botulism include constipation, lethargy, weakness, difficulty feeding, and an altered cry, often progressing to a complete descending flaccid paralysis. Although constipation is usually the first symptom of infant botulism, it is commonly overlooked.
Honey is a known dietary reservoir of C. botulinum spores and has been linked to infant botulism. For this reason, honey is not recommended for infants less than one year of age. Most cases of infant botulism, however, are thought to be caused by acquiring the spores from the natural environment. Clostridium botulinum is a ubiquitous soil-dwelling bacterium. Many infant botulism patients have been demonstrated to live near a construction site or an area of soil disturbance. Infant botulism has been reported in 49 of 50 US states (all save for Rhode Island), and cases have been recognized in 26 countries on five continents.
Complications
Infant botulism typically has no long-term side effects. Botulism can result in death due to respiratory failure. However, in the past 50 years, the proportion of patients with botulism who die has fallen from about 50% to 7% due to improved supportive care. A patient with severe botulism may require mechanical ventilation (breathing support through a ventilator) as well as intensive medical and nursing care, sometimes for several months. The person may require rehabilitation therapy after leaving the hospital.
Cause
Clostridium botulinum is an anaerobic, Gram-positive, spore-forming rod. Botulinum toxin is one of the most powerful known toxins: about one microgram is lethal to humans when inhaled. It acts by blocking nerve function (neuromuscular blockade) through inhibition of the release of the excitatory neurotransmitter acetylcholine from the presynaptic membrane of neuromuscular junctions in the somatic nervous system. This causes paralysis. Advanced botulism can cause respiratory failure by paralysing the muscles of the chest; this can progress to respiratory arrest. Furthermore, acetylcholine release from the presynaptic membranes of muscarinic nerve synapses is blocked.
This can lead to a variety of autonomic signs and symptoms described above. In all cases, illness is caused by the botulinum toxin which the bacterium C. botulinum produces in anaerobic conditions and not by the bacterium itself. The pattern of damage occurs because the toxin affects nerves that fire (depolarize) at a higher frequency first. Mechanisms of entry into the human body for botulinum toxin are described below.
Colonization of the gut
The most common form in Western countries is infant botulism. This occurs in infants who are colonized with the bacterium in the small intestine during the early stages of their lives. The bacterium then produces the toxin, which is absorbed into the bloodstream. The consumption of honey during the first year of life has been identified as a risk factor for infant botulism; it is a factor in a fifth of all cases. The adult form of infant botulism is termed adult intestinal toxemia, and is exceedingly rare.
Food
Toxin that is produced by the bacterium in containers of food that have been improperly preserved is the most common cause of food-borne botulism. Fish that has been pickled without the salinity or acidity of brine that contains acetic acid and high sodium levels, as well as smoked fish stored at too high a temperature, presents a risk, as does improperly canned food. Food-borne botulism results from contaminated food in which C. botulinum spores have been allowed to germinate in low-oxygen conditions. This typically occurs in improperly prepared home-canned food substances and fermented dishes without adequate salt or acidity. Given that multiple people often consume food from the same source, it is common for more than a single person to be affected simultaneously. Symptoms usually appear 12–36 hours after eating, but can also appear anywhere from 6 hours to 10 days after ingestion.
No withdrawal periods have been established for cows affected by botulism. Injection of lactating cows with various doses of botulinum toxin C has not resulted in detectable botulinum neurotoxin in the milk produced. In mouse bioassays and immunostick ELISA tests, botulinum toxin was detected in whole blood and serum but not in milk samples, suggesting that botulinum type C toxin does not enter milk in detectable concentrations. Cooking and pasteurization denature botulinum toxin but do not necessarily eliminate spores. Botulinum spores or toxins can find their way into the dairy production chain from the environment. Despite the low risk of milk and meat contamination, the protocol for fatal bovine botulism cases appears to be incineration of carcasses and withholding any potentially contaminated milk from human consumption. It is also advised that raw milk from affected cows should not be consumed by humans or fed to calves.
There have been several reports of botulism from pruno, a wine illicitly made from food scraps in prison. In a Mississippi prison in 2016, prisoners illegally brewed alcohol that led to 31 cases of botulism. The research study done on these cases found that the symptoms of mild botulism matched those of severe botulism, though the outcomes and progression of the disease differed.
Wound
Wound botulism results from the contamination of a wound with the bacteria, which then secrete the toxin into the bloodstream. This has become more common in intravenous drug users since the 1990s, especially people using black tar heroin and those injecting heroin into the skin rather than the veins.
Wound botulism can also come from a minor wound that is not properly cleaned out; the skin grows over the wound, trapping the spores in an anaerobic environment and allowing botulism to develop. One example was a person who cut their ankle while using a weed eater; as the wound healed over, it trapped a blade of grass and a speck of soil under the skin, leading to severe botulism that required months of hospitalization and rehabilitation. Wound botulism accounts for 29% of cases.
Inhalation
Isolated cases of botulism have been described after inhalation by laboratory workers.
Injection (iatrogenic botulism)
Symptoms of botulism may occur away from the injection site of botulinum toxin. These may include loss of strength, blurred vision, change of voice, or trouble breathing, which can result in death. Onset can be hours to weeks after an injection. This generally only occurs with inappropriate strengths of botulinum toxin for cosmetic use or due to the larger doses used to treat movement disorders. However, there are cases where an off-label use of botulinum toxin resulted in severe botulism and death. Following a 2008 review, the FDA added these concerns as a boxed warning. NeverTox, an international grassroots effort to assemble people experiencing iatrogenic botulism poisoning (IBP) and provide education and emotional support, serves 39,000 people through a Facebook group for those suffering from adverse events from botulinum toxin injections.
Lawsuits against pharmaceutical manufacturers
Prior to the boxed warning labels that included a disclaimer that botulinum toxin injections could cause botulism, there were a series of lawsuits against the pharmaceutical firms that manufactured injectable botulinum toxin. A Hollywood producer's wife brought a lawsuit after experiencing debilitating adverse events from migraine treatment. A lawsuit on behalf of a 3-year-old boy who was permanently disabled by a botulinum toxin injection was settled in court during the trial. The family of a 7-year-old boy treated with botulinum toxin injections for leg spasms sued after the boy almost died. Several families of people who died after treatments with botulinum toxin injections brought lawsuits. One lawsuit prevailed, with the plaintiff awarded compensation of $18 million; the plaintiff was a physician who had been diagnosed with botulism by thirteen neurologists at the NIH. Deposition video from that lawsuit quotes a pharmaceutical executive stating that "Botox doesn't cause botulism."
Mechanism
The toxin is the protein botulinum toxin produced under anaerobic conditions (where there is no oxygen) by the bacterium Clostridium botulinum. Clostridium botulinum is a large anaerobic Gram-positive bacillus that forms subterminal endospores. There are eight serological varieties of the bacterium denoted by the letters A to H. The toxin from all of these acts in the same way and produces similar symptoms: the motor nerve endings are prevented from releasing acetylcholine, causing flaccid paralysis and symptoms of blurred vision, ptosis, nausea, vomiting, diarrhea or constipation, cramps, and respiratory difficulty. Botulinum toxin is divided into eight neurotoxins (labeled as types A, B, C [C1, C2], D, E, F, and G), which are antigenically and serologically distinct but structurally similar. Human botulism is caused mainly by types A, B, E, and (rarely) F. Types C and D cause toxicity only in other animals.
In October 2013, scientists released news of the discovery of type H, the first new botulism neurotoxin found in forty years. However, further studies showed type H to be a chimeric toxin composed of parts of types F and A (FA). Some types produce a characteristic putrefactive smell and digest meat (types A and some of B and F); these are said to be proteolytic; type E and some types of B, C, D and F are nonproteolytic and can go undetected because there is no strong odor associated with them.
When the bacteria are under stress, they develop spores, which are inert. Their natural habitats are in the soil, in the silt that comprises the bottom sediment of streams, lakes, and coastal and ocean waters, while some types are natural inhabitants of the intestinal tracts of mammals (e.g., horses, cattle, humans), and are present in their excreta. The spores can survive in their inert form for many years. Toxin is produced by the bacteria when environmental conditions are favourable for the spores to replicate and grow, but the gene that encodes the toxin protein is actually carried by a virus or phage that infects the bacteria. Little is known about the natural factors that control phage infection and replication within the bacteria. The spores require warm temperatures, a protein source, an anaerobic environment, and moisture in order to become active and produce toxin. In the wild, decomposing vegetation and invertebrates combined with warm temperatures can provide ideal conditions for the botulism bacteria to activate and produce toxin that may affect feeding birds and other animals. Spores are not killed by boiling, but botulism is uncommon because special, rarely obtained conditions are necessary for botulinum toxin production from C. botulinum spores, including an anaerobic, low-salt, low-acid, low-sugar environment at ambient temperatures.
Botulinum toxin inhibits the release within the nervous system of acetylcholine, the neurotransmitter responsible for communication between motor neurons and muscle cells. All forms of botulism lead to paralysis that typically starts with the muscles of the face and then spreads towards the limbs. In severe forms, botulism leads to paralysis of the breathing muscles and causes respiratory failure. In light of this life-threatening complication, all suspected cases of botulism are treated as medical emergencies, and public health officials are usually involved to identify the source and take steps to prevent further cases from occurring. Botulinum toxin serotypes A and E specifically cleave SNAP-25, whereas serotypes B, D, F, and G cut synaptobrevin; serotype C cleaves both SNAP-25 and syntaxin. This causes blockade of the release of the neurotransmitter acetylcholine, ultimately leading to paralysis.
Diagnosis
For botulism in babies, diagnosis should be made based on signs and symptoms. Confirmation of the diagnosis is made by testing of a stool or enema specimen with the mouse bioassay. In people whose history and physical examination suggest botulism, these clues are often not enough to allow a diagnosis. Other diseases such as Guillain–Barré syndrome, stroke, and myasthenia gravis can appear similar to botulism, and special tests may be needed to exclude these other conditions. These tests may include a brain scan, cerebrospinal fluid examination, nerve conduction test (electromyography, or EMG), and an edrophonium chloride (Tensilon) test for myasthenia gravis.
A definite diagnosis can be made if botulinum toxin is identified in the food, stomach or intestinal contents, vomit, or feces. The toxin is occasionally found in the blood in peracute cases. Botulinum toxin can be detected by a variety of techniques, including enzyme-linked immunosorbent assays (ELISAs), electrochemiluminescent (ECL) tests and mouse inoculation or feeding trials. The toxins can be typed with neutralization tests in mice. In toxicoinfectious botulism, the organism can be cultured from tissues. On egg yolk medium, toxin-producing colonies usually display surface iridescence that extends beyond the colony.
Prevention
Although the vegetative form of the bacteria is destroyed by boiling, the spore itself is not killed by the temperatures reached with normal sea-level-pressure boiling, leaving it free to grow and again produce the toxin when conditions are right. A recommended prevention measure for infant botulism is to avoid giving honey to infants less than 12 months of age, as botulinum spores are often present. In older children and adults the normal intestinal bacteria suppress development of C. botulinum.
While commercially canned goods are required to undergo a "botulinum cook" in a pressure cooker at 121°C (250°F) for 3 minutes, and thus rarely cause botulism, there have been notable exceptions. Two were the 1978 Alaskan salmon outbreak and the 2007 Castleberry's Food Company outbreak. Foodborne botulism is the rarest form, accounting for only around 15% of cases (US), and has more frequently resulted from home-canned foods with low acid content, such as carrot juice, asparagus, green beans, beets, and corn. However, outbreaks of botulism have resulted from more unusual sources. In July 2002, fourteen Alaskans ate muktuk (whale meat) from a beached whale, and eight of them developed symptoms of botulism, two of them requiring mechanical ventilation. Other, much rarer sources of infection (about every decade in the US) include garlic or herbs stored covered in oil without acidification, chili peppers, improperly handled baked potatoes wrapped in aluminum foil, tomatoes, and home-canned or fermented fish.
When canning or preserving food at home, attention should be paid to hygiene, pressure, temperature, refrigeration and storage. When making home preserves, only acidic fruit such as apples, pears, stone fruits and berries should be used. Tropical fruit and tomatoes are low in acidity and must have some acidity added before they are canned. Low-acid foods have pH values higher than 4.6. They include red meats, seafood, poultry, milk, and all fresh vegetables except for most tomatoes. Most mixtures of low-acid and acid foods also have pH values above 4.6 unless their recipes include enough lemon juice, citric acid, or vinegar to make them acidic. Acid foods have a pH of 4.6 or lower. They include fruits, pickles, sauerkraut, jams, jellies, marmalades, and fruit butters. Although tomatoes usually are considered an acid food, some are now known to have pH values slightly above 4.6. Figs also have pH values slightly above 4.6. Therefore, if they are to be canned as acid foods, these products must be acidified to a pH of 4.6 or lower with lemon juice or citric acid. Properly acidified tomatoes and figs are acid foods and can be safely processed in a boiling-water canner. Oils infused with fresh garlic or herbs should be acidified and refrigerated. Potatoes which have been baked while wrapped in aluminum foil should be kept hot until served or refrigerated.
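The 4.6 threshold above amounts to a simple classification rule; a minimal sketch (the example pH values are illustrative):

def canning_category(ph):
    # Acid foods (pH <= 4.6) can be processed in a boiling-water canner;
    # low-acid foods need the pressure-cooker "botulinum cook" described above.
    if ph <= 4.6:
        return "acid: boiling-water canner is sufficient"
    return "low-acid: pressure canning required"

print(canning_category(3.4))  # e.g. a typical apple
print(canning_category(5.8))  # e.g. many fresh vegetables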
Because botulinum toxin is destroyed by high temperatures, home-canned foods are best boiled for 10 minutes before eating. Metal cans containing food in which bacteria are growing may bulge outwards due to gas production from bacterial growth, or the food inside may be foamy or have a bad odor; cans with any of these signs should be discarded. Any container of food which has been heat-treated and then assumed to be airtight which shows signs of not being so, e.g., metal cans with pinprick holes from rust or mechanical damage, should be discarded. Contamination of a canned food solely with C. botulinum may not cause any visual defects to the container, such as bulging. Only assurance of sufficient thermal processing during production, and absence of a route for subsequent contamination, should be used as indicators of food safety. The addition of nitrites and nitrates to processed meats such as ham, bacon, and sausages reduces growth and toxin production of C. botulinum.
Vaccine
Vaccines are under development, but they have side effects. As of 2017, work to develop a better vaccine was being carried out, but the US FDA had not approved any vaccine against botulism.
Treatment
Botulism is generally treated with botulism antitoxin and supportive care. Supportive care for botulism includes monitoring of respiratory function. Respiratory failure due to paralysis may require mechanical ventilation for 2 to 8 weeks, plus intensive medical and nursing care. After this time, paralysis generally improves as new neuromuscular connections are formed. In some abdominal cases, physicians may try to remove contaminated food still in the digestive tract by inducing vomiting or using enemas. Wounds should be treated, usually surgically, to remove the source of the toxin-producing bacteria.
Antitoxin
Botulinum antitoxin consists of antibodies that neutralize botulinum toxin in the circulatory system by passive immunization. This prevents additional toxin from binding to the neuromuscular junction, but does not reverse any already inflicted paralysis. In adults, a trivalent antitoxin containing antibodies raised against botulinum toxin types A, B, and E is used most commonly; however, a heptavalent botulism antitoxin has also been developed and was approved by the U.S. FDA in 2013. In infants, horse-derived antitoxin is sometimes avoided for fear of infants developing serum sickness or lasting hypersensitivity to horse-derived proteins. To avoid this, a human-derived antitoxin was developed and approved by the U.S. FDA in 2003 for the treatment of infant botulism. This human-derived antitoxin has been shown to be both safe and effective for the treatment of infant botulism. However, the danger of equine-derived antitoxin to infants has not been clearly established, and one study showed the equine-derived antitoxin to be both safe and effective for the treatment of infant botulism.
Trivalent (A,B,E) botulinum antitoxin is derived from equine sources utilizing whole antibodies (Fab and Fc portions). In the United States, this antitoxin is available from the local health department via the CDC. The second antitoxin, heptavalent (A,B,C,D,E,F,G) botulinum antitoxin, is derived from "despeciated" equine IgG antibodies which have had the Fc portion cleaved off, leaving the F(ab')2 portions. This less immunogenic antitoxin is effective against all known strains of botulism where not contraindicated.
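The two antitoxins just described differ mainly in serotype coverage, which can be expressed as a simple lookup; a minimal sketch:

ANTITOXIN_COVERAGE = {
    "trivalent": {"A", "B", "E"},
    "heptavalent": {"A", "B", "C", "D", "E", "F", "G"},
}

def covers(antitoxin, serotype):
    # True if the given antitoxin contains antibodies against the serotype.
    return serotype in ANTITOXIN_COVERAGE[antitoxin]

print(covers("trivalent", "F"))    # False: outside the A/B/E coverage
print(covers("heptavalent", "F"))  # True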
Prognosis
The paralysis caused by botulism can persist for two to eight weeks, during which supportive care and ventilation may be necessary to keep the patient alive. Botulism can be fatal in five to ten percent of people who are affected. However, if left untreated, botulism is fatal in 40 to 50 percent of cases. Infant botulism typically has no long-term side effects but can be complicated by treatment-associated adverse events. The case fatality rate is less than two percent for hospitalized babies.
Epidemiology
Globally, botulism is fairly rare, with approximately 1,000 identified cases yearly.
United States
In the United States an average of 145 cases are reported each year. Of these, roughly 65% are infant botulism, 20% are wound botulism, and 15% are foodborne. Infant botulism is predominantly sporadic and not associated with epidemics, but great geographic variability exists. From 1974 to 1996, for example, 47% of all infant botulism cases reported in the U.S. occurred in California.
Between 1990 and 2000, the Centers for Disease Control and Prevention reported 263 individual foodborne cases from 160 botulism events in the United States with a case-fatality rate of 4%. Thirty-nine percent (103 cases and 58 events) occurred in Alaska, all of which were attributable to traditional Alaskan aboriginal foods. In the remaining 49 states, home-canned food was implicated in 70 events (~69%), with canned asparagus being the most frequent cause. Two restaurant-associated outbreaks affected 25 people. The median number of cases per year was 23 (range 17–43); the median number of events per year was 14 (range 9–24). The highest incidence rates occurred in Alaska, Idaho, Washington, and Oregon. All other states had an incidence rate of 1 case per ten million people or less.
The number of cases of foodborne and infant botulism has changed little in recent years, but wound botulism has increased because of the use of black tar heroin, especially in California. All data regarding botulism antitoxin releases and laboratory confirmation of cases in the US are recorded annually by the Centers for Disease Control and Prevention and published on their website.
On 2 July 1971, the U.S. Food and Drug Administration (FDA) released a public warning after learning that a New York man had died and his wife had become seriously ill due to botulism after eating a can of Bon Vivant vichyssoise soup.
Between 31 March and 6 April 1977, 59 individuals developed type B botulism. All who fell ill had eaten at the same Mexican restaurant in Pontiac, Michigan, and had consumed a hot sauce made with improperly home-canned jalapeño peppers, either by adding it to their food, or by eating nachos that had been prepared with the hot sauce. The full clinical spectrum (mild symptomatology with neurologic findings through life-threatening ventilatory paralysis) of type B botulism was documented.
In April 1994, the largest outbreak of botulism in the United States since 1978 occurred in El Paso, Texas. Thirty people were affected; 4 required mechanical ventilation. All ate food from a Greek restaurant. The attack rate among people who ate a potato-based dip was 86% (19/22) compared with 6% (11/176) among people who did not eat the dip (relative risk [RR] = 13.8; 95% confidence interval [CI], 7.6–25.1). The attack rate among people who ate an eggplant-based dip was 67% (6/9) compared with 13% (24/189) among people who did not (RR = 5.2; 95% CI, 2.9–9.5). Botulinum toxin type A was detected in patients and in both dips.
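A relative risk such as the 13.8 quoted above is simply the ratio of the two attack rates; the Katz log method used in this sketch reproduces the published confidence interval, though the original investigators' exact method is not stated:

import math

def relative_risk(a, n1, b, n2, z=1.96):
    # RR of exposed (a ill of n1) vs unexposed (b ill of n2),
    # with a Katz log-normal 95% confidence interval.
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Potato-based dip: 19/22 ill among eaters vs 11/176 among non-eaters
print(relative_risk(19, 22, 11, 176))  # ~ (13.8, 7.6, 25.1), matching the figures above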
Toxin formation resulted from holding aluminum foil-wrapped baked potatoes at room temperature, apparently for several days, before they were used in the dips. Food handlers should be informed of the potential hazards caused by holding foil-wrapped potatoes at ambient temperatures after cooking.
In 2002, fourteen Alaskans ate muktuk (whale blubber) from a beached whale, resulting in eight of them developing botulism, with two of the affected requiring mechanical ventilation.
Beginning in late June 2007, 8 people contracted botulism poisoning by eating canned food products produced by Castleberry's Food Company in its Augusta, Georgia plant. It was later identified that the Castleberry's plant had serious production problems on a specific line of retorts that had under-processed the cans of food. These issues included broken cooking alarms, leaking water valves and inaccurate temperature devices, all the result of poor management at the company. All of the victims were hospitalized and placed on mechanical ventilation. The Castleberry's Food Company outbreak was the first instance of botulism in commercial canned foods in the United States in over 30 years.
One person died, 21 cases were confirmed, and 10 more were suspected in Lancaster, Ohio, when a botulism outbreak occurred after a church potluck in April 2015. The suspected source was a salad made from home-canned potatoes. A botulism outbreak occurred in Northern California in May 2017 after 10 people consumed nacho cheese dip served at a gas station in Sacramento County. One man died as a result of the outbreak.
United Kingdom
The largest recorded outbreak of foodborne botulism in the United Kingdom occurred in June 1989. A total of 27 patients were affected; one patient died. Twenty-five of the patients had eaten one brand of hazelnut yogurt in the week before the onset of symptoms. Control measures included the cessation of all yogurt production by the implicated producer, the withdrawal of the firm's yogurts from sale, the recall of cans of the hazelnut conserve, and advice to the general public to avoid the consumption of all hazelnut yogurts.
China
From 1958 to 1983 there were 986 outbreaks of botulism in China involving 4,377 people with 548 deaths.
Qapqal disease
After the Chinese Communist Revolution in 1949, a mysterious plague (named Qapqal disease) was noticed to be affecting several Sibe villages in Qapqal Xibe Autonomous County. It was endemic with distinctive epidemic patterns, yet the underlying cause remained unknown for a long period of time. It caused a number of deaths and forced some people to leave the place. In 1958, a team of experts was sent to the area by the Ministry of Health to investigate the cases. The epidemic survey conducted proved that the disease was primarily type A botulism, with several cases of type B. The team also discovered that the source of the botulinum was local fermented grain and beans, as well as a raw meat food called mi song hu hu. They promoted the improvement of fermentation techniques among local residents, and thus eliminated the disease.
Canada
From 1985 to 2005 there were outbreaks causing 91 confirmed cases of foodborne botulism in Canada, 85% of which were in Inuit communities, especially Nunavik, as well as First Nations communities on the coast of British Columbia, following consumption of traditionally prepared marine mammal and fish products.
Ukraine
In 2017, there were 70 cases of botulism with 8 deaths in Ukraine. The previous year there were 115 cases with 12 deaths.
Most cases were the result of dried fish, a common local drinking snack.
Vietnam
In 2020, several cases of botulism were reported in Vietnam. All of them were related to a product containing contaminated vegetarian pâté. Some patients were put on life support.
Other susceptible species
Botulism can occur in many vertebrates and invertebrates. Botulism has been reported in such species as rats, mice, chickens, frogs, toads, goldfish, Aplysia, squid, crayfish, Drosophila, and leeches.
Death from botulism is common in waterfowl; an estimated 10,000 to 100,000 birds die of botulism annually. The disease is commonly called "limberneck". In some large outbreaks, a million or more birds may die. Ducks appear to be affected most often. An enzootic form of duck botulism in the Western US and Canada is known as "western duck sickness". Botulism also affects commercially raised poultry. In chickens, the mortality rate varies from a few birds to 40% of the flock.
Botulism seems to be relatively uncommon in domestic mammals; however, in some parts of the world, epidemics with up to 65% mortality are seen in cattle. The prognosis is poor in large animals that are recumbent. In cattle, the symptoms may include drooling, restlessness, incoordination, urine retention, dysphagia, and sternal recumbency. Laterally recumbent animals are usually very close to death. In sheep, the symptoms may include drooling, a serous nasal discharge, stiffness, and incoordination. Abdominal respiration may be observed and the tail may switch to the side. As the disease progresses, the limbs may become paralyzed and death may occur. Phosphorus-deficient cattle, especially in southern Africa, are inclined to ingest bones and carrion containing clostridial toxins and consequently develop lame sickness or lamsiekte.
The clinical signs in horses are similar to those in cattle. The muscle paralysis is progressive; it usually begins at the hindquarters and gradually moves to the front limbs, neck, and head. Death generally occurs 24 to 72 hours after initial symptoms and results from respiratory paralysis. Some foals are found dead without other clinical signs. Clostridium botulinum type C toxin has been incriminated as the cause of grass sickness, a condition in horses which occurs in rainy and hot summers in Northern Europe. The main symptom is pharynx paralysis.
Domestic dogs may develop systemic toxemia after consuming C. botulinum type C exotoxin or spores within bird carcasses or other infected meat but are generally resistant to the more severe effects of C. botulinum type C. Symptoms include flaccid muscle paralysis, which can lead to death due to cardiac and respiratory arrest. Pigs are relatively resistant to botulism. Reported symptoms include anorexia, refusal to drink, vomiting, pupillary dilation, and muscle paralysis. In poultry and wild birds, flaccid paralysis is usually seen in the legs, wings, neck and eyelids. Broiler chickens with the toxicoinfectious form may also have diarrhea with excess urates.
Prevention in non-human species
One of the main routes of exposure for botulism is through the consumption of food contaminated with C.
botulinum. Food-borne botulism can be prevented in domestic animals through careful inspection of the feed, purchasing high quality feed from reliable sources, and ensuring proper storage. Poultry litter and animal carcasses are places in which C. botulinum spores are able to germinate, so it is advised to avoid spreading poultry litter or any carcass-containing materials on fields producing feed materials, due to their potential for supporting C. botulinum growth. Additionally, water sources should be checked for dead or dying animals, and fields should be checked for animal remains prior to mowing for hay or silage. Correcting any dietary deficiencies can also prevent animals from consuming contaminated materials such as bones or carcasses. Raw materials used for silage or feed mixed on site should be checked for any sign of mold or rotten appearance. Acidification of animal feed can reduce, but will not eliminate, the risk of toxin formation, especially in carcasses that remain whole.
Vaccines in animals
Vaccines have been developed for use in animals to prevent botulism. The availability and approval of these vaccines varies depending on the location, with places experiencing more cases generally having more vaccines available and routine vaccination being more common. A variety of vaccines have been developed for the prevention of botulism in livestock. Most initial vaccinations require multiple doses at intervals of 2–6 weeks; however, some newer vaccines require only one shot, depending on the type of vaccine and the manufacturer's recommendations. All vaccines require annual boosters to maintain immunity. Many of these vaccines can be used on multiple species including cattle, sheep, and goats, with some labeled for use in horses and mules, as well as separate vaccines for mink. Additionally, vaccination during an outbreak is as beneficial as therapeutic treatment in cattle, and this method is also used in horses and pheasants.
The use of region-specific toxoids to immunize animals has been shown to be effective. Immunizing cattle with toxoid types C and D is a useful vaccination method in South Africa and Australia. Toxoid has also been shown to be an appropriate method of immunizing mink and pheasants. In endemic areas, for example Kentucky, vaccination with type B toxoid appears to be effective.
Use in biological warfare and terrorism
United States
Based on CIA research on biological warfare at Fort Detrick, anthrax and botulism were widely regarded as the two most effective options. During the 1950s, a highly lethal strain was discovered in the course of the biological warfare program. The CIA continued to hold 5 grams of Clostridium botulinum, even after Nixon's ban on biological warfare in 1969. During the Gulf War, when the United States was concerned about a potential biowarfare attack, the efforts around botulism turned to prevention. However, the only way to make antitoxin in America until the 1990s was by drawing antibodies from a single horse named First Flight, raising much concern from Pentagon health officials.
Iraq
Iraq has historically possessed many types of germs, including the agent of botulism. The American Type Culture Collection sold 5 variants of botulinum to the University of Baghdad in May 1986. 1991 CIA reports also show that Iraqis filled shells, warheads, and bombs with biological agents like botulinum (though none were deployed). The Iraqi air force used the code name "tea" to refer to botulinum, and it was also referred to as bioweapon "A."
Japan
A Japanese cult called Aum Shinrikyo created laboratories that produced biological weapons, specifically botulinum, anthrax, and Q fever. From 1990 to 1995, the cult staged numerous unsuccessful bioterrorism attacks on civilians.
They sprayed botulinum toxin from a truck in downtown Tokyo and at Narita airport, but no cases of botulism were reported as a result.
Darmstadtium
Darmstadtium is a synthetic chemical element; it has symbol Ds and atomic number 110. It is extremely radioactive: the most stable known isotope, darmstadtium-281, has a half-life of approximately 14 seconds. Darmstadtium was first created in November 1994 by the GSI Helmholtz Centre for Heavy Ion Research in the city of Darmstadt, Germany, after which it was named.
In the periodic table, it is a d-block transactinide element. It is a member of the 7th period and is placed in the group 10 elements, although no chemical experiments have yet been carried out to confirm that it behaves as the heavier homologue to platinum in group 10 as the eighth member of the 6d series of transition metals. Darmstadtium is calculated to have similar properties to its lighter homologues, nickel, palladium, and platinum.
Introduction
History
Discovery
Darmstadtium was first discovered on November 9, 1994, at the Institute for Heavy Ion Research (Gesellschaft für Schwerionenforschung, GSI) in Darmstadt, Germany, by Peter Armbruster and Gottfried Münzenberg, under the direction of Sigurd Hofmann. The team bombarded a lead-208 target with accelerated nuclei of nickel-62 in a heavy ion accelerator and detected a single atom of the isotope darmstadtium-269:
208Pb + 62Ni → 269Ds + n
Two more atoms followed on November 12 and 17. (Yet another was originally reported to have been found on November 11, but it turned out to be based on data fabricated by Victor Ninov, and was later retracted.) In the same series of experiments, the same team also carried out the reaction using heavier nickel-64 ions. During two runs, 9 atoms of 271Ds were convincingly detected by correlation with known daughter decay properties:
208Pb + 64Ni → 271Ds + n
Prior to this, there had been failed synthesis attempts in 1986–87 at the Joint Institute for Nuclear Research in Dubna (then in the Soviet Union) and in 1990 at the GSI. A 1995 attempt at the Lawrence Berkeley National Laboratory resulted in signs suggesting but not pointing conclusively at the discovery of a new isotope, 267Ds, formed in the bombardment of 209Bi with 59Co, and a similarly inconclusive 1994 attempt at the JINR showed signs of 273Ds being produced from 244Pu and 34S. Each team proposed its own name for element 110: the American team proposed hahnium after Otto Hahn in an attempt to resolve the controversy of naming element 105 (for which they had long been suggesting this name), the Russian team proposed becquerelium after Henri Becquerel, and the German team proposed darmstadtium after Darmstadt, the location of their institute. The IUPAC/IUPAP Joint Working Party (JWP) recognised the GSI team as discoverers in their 2001 report, giving them the right to suggest a name for the element.
Naming
Using Mendeleev's nomenclature for unnamed and undiscovered elements, darmstadtium should be known as eka-platinum. In 1979, IUPAC published recommendations according to which the element was to be called ununnilium (with the corresponding symbol of Uun), a systematic element name as a placeholder, until the element was discovered (and the discovery then confirmed) and a permanent name was decided on. Although widely used in the chemical community on all levels, from chemistry classrooms to advanced textbooks, the recommendations were mostly ignored among scientists in the field, who called it "element 110", with the symbol of E110, (110) or even simply 110. In 1996, the Russian team proposed the name becquerelium after Henri Becquerel. The American team in 1997 proposed the name hahnium after Otto Hahn (previously this name had been used for element 105).
The name darmstadtium (Ds) was suggested by the GSI team in honor of the city of Darmstadt, where the element was discovered. The GSI team originally also considered naming the element wixhausium, after Wixhausen, the suburb of Darmstadt where the element was actually discovered, but eventually decided on darmstadtium. Policium had also been proposed as a joke due to the emergency telephone number in Germany being 1–1–0. The new name darmstadtium was officially recommended by IUPAC on August 16, 2003. Isotopes Darmstadtium has no stable or naturally occurring isotopes. Several radioactive isotopes have been synthesized in the laboratory, either by fusing two atoms or by observing the decay of heavier elements. Eleven different isotopes of darmstadtium have been reported with atomic masses 267, 269–271, 273, 275–277, and 279–281, although darmstadtium-267 is unconfirmed. Three darmstadtium isotopes, darmstadtium-270, darmstadtium-271, and darmstadtium-281, have known metastable states, although that of darmstadtium-281 is unconfirmed. Most of these decay predominantly through alpha decay, but some undergo spontaneous fission. Stability and half-lives All darmstadtium isotopes are extremely unstable and radioactive; in general, the heavier isotopes are more stable than the lighter ones. The most stable known darmstadtium isotope, 281Ds, is also the heaviest known darmstadtium isotope; it has a half-life of 14 seconds. The isotope 279Ds has a half-life of 0.18 seconds, while the unconfirmed 281mDs has a half-life of 0.9 seconds. The remaining isotopes and metastable states have half-lives between 1 microsecond and 70 milliseconds. Some unknown darmstadtium isotopes may have longer half-lives, however. Theoretical calculations in a quantum tunneling model reproduce the experimental alpha decay half-life data for the known darmstadtium isotopes. They also predict that the undiscovered isotope 294Ds, which has a magic number of neutrons (184), would have an alpha decay half-life on the order of 311 years; exactly the same approach predicts a ~350-year alpha half-life for the non-magic 293Ds isotope, however. Predicted properties Other than nuclear properties, no properties of darmstadtium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that darmstadtium (and its parents) decays very quickly. Properties of darmstadtium metal remain unknown and only predictions are available. Chemical Darmstadtium is the eighth member of the 6d series of transition metals, and should be much like the platinum group metals. Its calculated ionization potentials and atomic and ionic radii are similar to those of its lighter homologue platinum, implying that darmstadtium's basic properties will resemble those of the other group 10 elements, nickel, palladium, and platinum. Prediction of the probable chemical properties of darmstadtium has not received much attention recently. Darmstadtium should be a very noble metal. The predicted standard reduction potential for the Ds2+/Ds couple is 1.7 V. Based on the most stable oxidation states of the lighter group 10 elements, the most stable oxidation states of darmstadtium are predicted to be the +6, +4, and +2 states; however, the neutral state is predicted to be the most stable in aqueous solutions. In comparison, only platinum is known to show the maximum oxidation state in the group, +6, while the most stable state is +2 for both nickel and palladium.
It is further expected that the maximum oxidation states of elements from bohrium (element 107) to darmstadtium (element 110) may be stable in the gas phase but not in aqueous solution. Darmstadtium hexafluoride (DsF6) is predicted to have very similar properties to its lighter homologue platinum hexafluoride (PtF6), having very similar electronic structures and ionization potentials. It is also expected to have the same octahedral molecular geometry as PtF6. Other predicted darmstadtium compounds are darmstadtium carbide (DsC) and darmstadtium tetrachloride (DsCl4), both of which are expected to behave like their lighter homologues. Unlike platinum, which preferentially forms a cyanide complex in its +2 oxidation state, Pt(CN)2, darmstadtium is expected to preferentially remain in its neutral state and instead form Ds(CN)2^2−, with a strong Ds–C bond having some multiple-bond character. Physical and atomic Darmstadtium is expected to be a solid under normal conditions and to crystallize in the body-centered cubic structure, unlike its lighter congeners which crystallize in the face-centered cubic structure, because it is expected to have different electron charge densities from them. It should be a very heavy metal with a density of around 26–27 g/cm3. In comparison, the densest known element that has had its density measured, osmium, has a density of only 22.61 g/cm3. The outer electron configuration of darmstadtium is calculated to be 6d8 7s2, which obeys the Aufbau principle and does not follow platinum's outer electron configuration of 5d9 6s1. This is due to the relativistic stabilization of the 7s2 electron pair over the whole seventh period, so that none of the elements from 104 to 112 are expected to have electron configurations violating the Aufbau principle. The atomic radius of darmstadtium is expected to be around 132 pm. Experimental chemistry An unambiguous determination of the chemical characteristics of darmstadtium has yet to be established, due to the short half-lives of darmstadtium isotopes and the limited number of likely volatile compounds that could be studied on a very small scale. One of the few darmstadtium compounds that are likely to be sufficiently volatile is darmstadtium hexafluoride (DsF6), as its lighter homologue platinum hexafluoride (PtF6) is volatile above 60 °C and therefore the analogous compound of darmstadtium might also be sufficiently volatile; a volatile octafluoride (DsF8) might also be possible. For chemical studies to be carried out on a transactinide, at least four atoms must be produced, the half-life of the isotope used must be at least 1 second, and the rate of production must be at least one atom per week. Even though the half-life of 281Ds, the most stable confirmed darmstadtium isotope, is 14 seconds, long enough to perform chemical studies, another obstacle is the need to increase the rate of production of darmstadtium isotopes and allow experiments to carry on for weeks or months so that statistically significant results can be obtained. Separation and detection must be carried out continuously to separate out the darmstadtium isotopes so that automated systems can experiment on the gas-phase and solution chemistry of darmstadtium, as the yields for heavier elements are predicted to be smaller than those for lighter elements; some of the separation techniques used for bohrium and hassium could be reused. However, the experimental chemistry of darmstadtium has not received as much attention as that of the heavier elements from copernicium to livermorium.
The more neutron-rich darmstadtium isotopes are the most stable and are thus more promising for chemical studies. However, they can only be produced indirectly from the alpha decay of heavier elements, and indirect synthesis methods are not as favourable for chemical studies as direct synthesis methods. The more neutron-rich isotopes 276Ds and 277Ds might be produced directly in the reaction between thorium-232 and calcium-48, but the yield was expected to be low. Following several unsuccessful attempts, 276Ds was produced in this reaction in 2022 and observed to have a half-life of less than a millisecond and a low yield, in agreement with predictions. Additionally, 277Ds was successfully synthesized using indirect methods (as a granddaughter of 285Fl) and found to have a short half-life of 3.5 ms, not long enough to perform chemical studies. The only known darmstadtium isotope with a half-life long enough for chemical research is 281Ds, which would have to be produced as the granddaughter of 289Fl.
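The feasibility figures in this section (a 14-second half-life, a minimum of four atoms for chemical studies) can be made concrete with elementary decay statistics. The following Python sketch uses only the half-life quoted above; the 10-second and 30-second handling times are illustrative assumptions, not figures from the source.

import math

HALF_LIFE_DS281 = 14.0  # seconds, most stable confirmed isotope (from the text)

def survival_probability(t: float, half_life: float) -> float:
    """Probability that a single atom has not yet decayed after t seconds."""
    decay_constant = math.log(2) / half_life
    return math.exp(-decay_constant * t)

# Chance that one atom of 281Ds survives a hypothetical 10 s handling step:
print(f"P(survive 10 s) = {survival_probability(10.0, HALF_LIFE_DS281):.2f}")  # ~0.61

# Expected survivors if four atoms (the minimum cited for transactinide
# chemistry) each need to last an assumed 30 s of separation and transport:
n_atoms, t_needed = 4, 30.0
print(f"expected survivors = {n_atoms * survival_probability(t_needed, HALF_LIFE_DS281):.2f}")  # ~0.91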
Physical sciences
Group 10
Chemistry
40171
https://en.wikipedia.org/wiki/Lead%28II%29%20azide
Lead(II) azide
Lead(II) azide is an inorganic compound. It is more explosive than other azides and is used in detonators to initiate secondary explosives. In a commercially usable form, it is a white to buff powder. Preparation and handling Lead(II) azide is prepared by the reaction of sodium azide and lead(II) nitrate in aqueous solution. Lead(II) acetate can also be used. Thickeners such as dextrin or polyvinyl alcohol are often added to the solution to stabilize the precipitated product. In fact, it is normally shipped in a dextrinated solution that lowers its sensitivity. Production history Lead azide in its pure form was first prepared by Theodor Curtius in 1891. Due to sensitivity and stability concerns, the dextrinated form of lead azide (MIL-L-3055) was developed in the 1920s and 1930s, with large-scale production by DuPont Co beginning in 1932. Detonator development during World War II resulted in the need for a form of lead azide with a more brisant output. RD-1333 lead azide (MIL-DTL-46225), a version of lead azide with sodium carboxymethyl cellulose as a precipitating agent, was developed to meet that need. The Vietnam War saw an accelerated need for lead azide, and it was during this time that Special Purpose Lead Azide (MIL-L-14758) was developed; the US government also began stockpiling lead azide in large quantities. After the Vietnam War, the use of lead azide dramatically decreased. Due to the size of the US stockpile, the manufacture of lead azide in the US ceased completely by the early 1990s. In the 2000s, concerns about the age and stability of stockpiled lead azide led the US government to investigate methods to dispose of its stockpiled lead azide and obtain new manufacturers. Explosive characteristics Lead azide is highly sensitive and usually handled and stored under water in insulated rubber containers. It will explode after a fall of around 150 mm (6 in) or in the presence of a static discharge of 7 millijoules. Its detonation velocity is around 5.2 km/s (17,000 ft/s). Ammonium acetate and sodium dichromate are used to destroy small quantities of lead azide. Lead azide undergoes immediate deflagration-to-detonation transition (DDT), meaning that even small amounts undergo full detonation (after being hit by flame or static electricity). Lead azide reacts with copper, zinc, cadmium, or alloys containing these metals to form other azides. For example, copper azide is even more explosive and too sensitive to be used commercially. Lead azide was a component of the six .22 (5.6 mm) caliber Devastator rounds fired from a Röhm RG-14 revolver by John Hinckley, Jr. in his assassination attempt on U.S. President Ronald Reagan on March 30, 1981. The rounds consisted of lead azide centers with lacquer-sealed aluminum tips designed to explode upon impact. A strong probability exists that the bullet which struck White House press secretary James Brady in the head exploded. The remaining bullets that hit people, including the shot that hit President Reagan, did not explode.
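The preparation described above is a simple aqueous metathesis, Pb(NO3)2 + 2 NaN3 → Pb(N3)2 + 2 NaNO3, and its stoichiometry is easy to check numerically. A minimal Python sketch with standard molar masses; the 10 g and 4 g input quantities are invented for illustration.

# Pb(NO3)2 + 2 NaN3 -> Pb(N3)2 + 2 NaNO3 (precipitation from aqueous solution)
M = {"Pb(NO3)2": 331.21, "NaN3": 65.01, "Pb(N3)2": 291.24}  # molar masses, g/mol

def theoretical_yield(m_nitrate_g: float, m_azide_g: float) -> float:
    """Grams of lead(II) azide set by the limiting reagent."""
    mol_nitrate = m_nitrate_g / M["Pb(NO3)2"]
    mol_azide = m_azide_g / M["NaN3"]
    mol_product = min(mol_nitrate, mol_azide / 2)  # 2 mol azide per mol product
    return mol_product * M["Pb(N3)2"]

# Hypothetical batch: 10 g lead(II) nitrate and 4 g sodium azide
print(f"{theoretical_yield(10.0, 4.0):.2f} g Pb(N3)2")  # nitrate-limited, ~8.79 g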
Physical sciences
Nitride salts
Chemistry
40187
https://en.wikipedia.org/wiki/Conjugate%20%28acid-base%20theory%29
Conjugate (acid-base theory)
A conjugate acid, within the Brønsted–Lowry acid–base theory, is a chemical compound formed when an acid gives a proton (H+) to a base—in other words, it is a base with a hydrogen ion added to it, as it loses a hydrogen ion in the reverse reaction. On the other hand, a conjugate base is what remains after an acid has donated a proton during a chemical reaction. Hence, a conjugate base is a substance formed by the removal of a proton from an acid, as it can gain a hydrogen ion in the reverse reaction. Because some acids can give multiple protons, the conjugate base of an acid may itself be acidic. In summary, this can be represented as the following chemical reaction: acid + base ⇌ conjugate base + conjugate acid. Johannes Nicolaus Brønsted and Martin Lowry introduced the Brønsted–Lowry theory, which said that any compound that can give a proton to another compound is an acid, and the compound that receives the proton is a base. A proton is a subatomic particle in the nucleus with a unit positive electrical charge. It is represented by the symbol H+ because it has the nucleus of a hydrogen atom, that is, a hydrogen cation. A cation can be a conjugate acid, and an anion can be a conjugate base, depending on which substance is involved and which acid–base theory is used. The simplest anion which can be a conjugate base is the free electron in a solution, whose conjugate acid is atomic hydrogen. Acid–base reactions In an acid–base reaction, an acid and a base react to form a conjugate base and a conjugate acid respectively. The acid loses a proton and the base gains a proton. In diagrams which indicate this, the new bond formed between the base and the proton is shown by an arrow that starts on an electron pair from the base and ends at the hydrogen ion (proton) that will be transferred. Consider the reaction of ammonium with hydroxide, NH4+ + OH− ⇌ NH3 + H2O. In this case, the water molecule is the conjugate acid of the basic hydroxide ion after the latter received the hydrogen ion from ammonium. On the other hand, ammonia is the conjugate base for the acidic ammonium after ammonium has donated a hydrogen ion to produce the water molecule. Also, OH− can be considered as the conjugate base of H2O, since the water molecule donates a proton to ammonia, leaving OH−, in the reverse reaction. The terms "acid", "base", "conjugate acid", and "conjugate base" are not fixed for a certain chemical substance but can be swapped if the reaction taking place is reversed. Strength of conjugates The strength of a conjugate acid is proportional to its dissociation constant. A stronger conjugate acid will dissociate more easily into its products, "push" hydrogen protons away, and have a higher equilibrium constant. The strength of a conjugate base can be seen as its tendency to "pull" hydrogen protons towards itself. If a conjugate base is classified as strong, it will "hold on" to the hydrogen proton when dissolved and its acid will not dissociate. If a chemical is a strong acid, its conjugate base will be weak. An example of this case would be the dissociation of hydrochloric acid (HCl) in water. Since HCl is a strong acid (it dissociates to a large extent), its conjugate base (Cl−) will be weak. Therefore, in this system, most hydrogen ions will exist as hydronium ions rather than staying attached to Cl− anions, and the conjugate bases will be weaker than water molecules. On the other hand, if a chemical is a weak acid, its conjugate base will not necessarily be strong. Consider that ethanoate, the conjugate base of ethanoic acid, has a base dissociation constant (Kb) of about 5.6 × 10−10, making it a weak base. In order for a species to have a strong conjugate base it has to be a very weak acid, like water.
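The strength relationships above follow from the identity Kw = Ka × Kb for any conjugate acid–base pair. A small Python check, taking the textbook Ka of ethanoic (acetic) acid as an assumed input, reproduces the Kb quoted for ethanoate.

KW = 1.0e-14          # ion product of water at 25 degrees Celsius
KA_ETHANOIC = 1.8e-5  # textbook Ka of ethanoic (acetic) acid, assumed here

def kb_of_conjugate_base(ka: float, kw: float = KW) -> float:
    """Kb of a conjugate base from the Ka of its parent acid (Kw = Ka * Kb)."""
    return kw / ka

print(f"Kb(ethanoate) = {kb_of_conjugate_base(KA_ETHANOIC):.1e}")  # ~5.6e-10, a weak base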
Identifying conjugate acid–base pairs To identify the conjugate acid, look for the pair of compounds that are related. The acid–base reaction can be viewed in a before and after sense. The before is the reactant side of the equation, the after is the product side of the equation. The conjugate acid on the after side of an equation has gained a hydrogen ion, so on the before side of the equation the compound that has one less hydrogen ion than the conjugate acid is the base. The conjugate base on the after side of the equation has lost a hydrogen ion, so on the before side of the equation the compound that has one more hydrogen ion than the conjugate base is the acid. Consider the following acid–base reaction: HNO3 + H2O → H3O+ + NO3−. Nitric acid (HNO3) is an acid because it donates a proton to the water molecule, and its conjugate base is nitrate (NO3−). The water molecule acts as a base because it receives the hydrogen cation (proton), and its conjugate acid is the hydronium ion (H3O+). Applications One use of conjugate acids and bases lies in buffering systems, which include a buffer solution. In a buffer, a weak acid and its conjugate base (in the form of a salt), or a weak base and its conjugate acid, are used in order to limit the pH change during a titration process. Buffers have both organic and non-organic chemical applications. For example, besides buffers being used in lab processes, human blood acts as a buffer to maintain pH. The most important buffer in our bloodstream is the carbonic acid–bicarbonate buffer, which prevents drastic pH changes when CO2 is introduced. This functions as follows: CO2 + H2O ⇌ H2CO3 ⇌ HCO3− + H+. Furthermore, here is a table of common buffers. A second common application with an organic compound would be the production of a buffer with acetic acid. If acetic acid, a weak acid with the formula CH3COOH, were made into a buffer solution, it would need to be combined with its conjugate base in the form of a salt. The resulting mixture is called an acetate buffer, consisting of aqueous acetic acid and aqueous sodium acetate (CH3COONa). Acetic acid, along with many other weak acids, serves as a useful component of buffers in different lab settings, each useful within its own pH range. Ringer's lactate solution is an example where the conjugate base of an organic acid, lactic acid, is combined with sodium, calcium and potassium cations and chloride anions in distilled water, which together form a fluid that is isotonic in relation to human blood and is used for fluid resuscitation after blood loss due to trauma, surgery, or a burn injury. Table of acids and their conjugate bases Below are several examples of acids and their corresponding conjugate bases; note how they differ by just one proton (H+ ion). Acid strength decreases and conjugate base strength increases down the table. Table of bases and their conjugate acids In contrast, here is a table of bases and their conjugate acids. Similarly, base strength decreases and conjugate acid strength increases down the table.
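Buffer compositions like the acetate example above are usually worked out with the Henderson–Hasselbalch equation, pH = pKa + log10([conjugate base]/[weak acid]). A minimal Python sketch; the pKa is the textbook value for acetic acid, assumed here for illustration.

import math

PKA_ACETIC = 4.76  # textbook pKa of acetic acid, assumed for illustration

def buffer_ph(conc_base: float, conc_acid: float, pka: float = PKA_ACETIC) -> float:
    """Henderson-Hasselbalch pH of a weak acid / conjugate base buffer."""
    return pka + math.log10(conc_base / conc_acid)

print(buffer_ph(0.10, 0.10))            # 4.76: an equimolar buffer sits at its pKa
print(round(buffer_ph(0.20, 0.10), 2))  # 5.06: a 2:1 base:acid ratio raises the pH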
Physical sciences
Concepts
Chemistry
40197
https://en.wikipedia.org/wiki/Vapor%20pressure
Vapor pressure
Vapor pressure or equilibrium vapor pressure is the pressure exerted by a vapor in thermodynamic equilibrium with its condensed phases (solid or liquid) at a given temperature in a closed system. The equilibrium vapor pressure is an indication of a liquid's thermodynamic tendency to evaporate. It relates to the balance of particles escaping from the liquid (or solid) in equilibrium with those in a coexisting vapor phase. A substance with a high vapor pressure at normal temperatures is often referred to as volatile. The pressure exhibited by vapor present above a liquid surface is known as vapor pressure. As the temperature of a liquid increases, the attractive interactions between liquid molecules become less significant in comparison to the entropy of those molecules in the gas phase, increasing the vapor pressure. Thus, liquids with strong intermolecular interactions are likely to have smaller vapor pressures, with the reverse true for weaker interactions. The vapor pressure of any substance increases non-linearly with temperature, often described by the Clausius–Clapeyron relation. The atmospheric pressure boiling point of a liquid (also known as the normal boiling point) is the temperature at which the vapor pressure equals the ambient atmospheric pressure. With any incremental increase in that temperature, the vapor pressure becomes sufficient to overcome atmospheric pressure and cause the liquid to form vapor bubbles. Bubble formation deeper in the liquid requires a slightly higher temperature because of the higher fluid pressure arising from the hydrostatic pressure of the fluid mass above. More important at shallow depths is the higher temperature required to start bubble formation: the surface tension of the bubble wall leads to an overpressure in the very small initial bubbles. Measurement and units Vapor pressure is measured in the standard units of pressure. The International System of Units (SI) recognizes pressure as a derived unit with the dimension of force per area and designates the pascal (Pa) as its standard unit. One pascal is one newton per square meter (N·m−2 or kg·m−1·s−2). Experimental measurement of vapor pressure is a simple procedure for common pressures between 1 and 200 kPa. The most accurate results are obtained near the boiling point of the substance; measurements smaller than 1 kPa are subject to major errors. Procedures often consist of purifying the test substance, isolating it in a container, evacuating any foreign gas, then measuring the equilibrium pressure of the gaseous phase of the substance in the container at different temperatures. Better accuracy is achieved when care is taken to ensure that the entire substance and its vapor are both at the prescribed temperature. This is often done, as with the use of an isoteniscope, by submerging the containment area in a liquid bath. Very low vapor pressures of solids can be measured using the Knudsen effusion cell method. In a medical context, vapor pressure is sometimes expressed in other units, specifically millimeters of mercury (mmHg). Accurate knowledge of the vapor pressure is important for volatile inhalational anesthetics, most of which are liquids at body temperature but have a relatively high vapor pressure. Estimating vapor pressures with the Antoine equation The Antoine equation is a pragmatic mathematical expression of the relation between the vapor pressure and the temperature of pure liquid or solid substances.
It is obtained by curve-fitting and is adapted to the fact that vapor pressure is usually increasing and concave as a function of temperature. The basic form of the equation is log P = A − B/(C + T), and it can be transformed into this temperature-explicit form: T = B/(A − log P) − C, where: P is the absolute vapor pressure of a substance, T is the temperature of the substance, A, B and C are substance-specific coefficients (i.e., constants or parameters), and log is typically either log10 or loge. A simpler form of the equation with only two coefficients is sometimes used: log P = A − B/T, which can be transformed to T = B/(A − log P). Sublimations and vaporizations of the same substance have separate sets of Antoine coefficients, as do components in mixtures. Each parameter set for a specific compound is only applicable over a specified temperature range. Generally, temperature ranges are chosen to maintain the equation's accuracy of a few up to 8–10 percent. For many volatile substances, several different sets of parameters are available and used for different temperature ranges. The Antoine equation has poor accuracy with any single parameter set when used from a compound's melting point to its critical temperature. Accuracy is also usually poor when vapor pressure is under 10 Torr because of the limitations of the apparatus used to establish the Antoine parameter values. The Wagner equation gives "one of the best" fits to experimental data but is quite complex. It expresses reduced vapor pressure as a function of reduced temperature. Relation to boiling point of liquids As a general trend, vapor pressures of liquids at ambient temperatures increase with decreasing boiling points. This is illustrated in the vapor pressure chart (see right) that shows graphs of the vapor pressures versus temperatures for a variety of liquids. At the normal boiling point of a liquid, the vapor pressure is equal to the standard atmospheric pressure, defined as 1 atmosphere, 760 Torr, 101.325 kPa, or 14.69595 psi. For example, at any given temperature, methyl chloride has the highest vapor pressure of any of the liquids in the chart. It also has the lowest normal boiling point at −24.2 °C, which is where the vapor pressure curve of methyl chloride (the blue line) intersects the horizontal pressure line of one atmosphere (atm) of absolute vapor pressure. Although the relation between vapor pressure and temperature is non-linear, the chart uses a logarithmic vertical axis to produce slightly curved lines, so one chart can graph many liquids. A nearly straight line is obtained when the logarithm of the vapor pressure is plotted against 1/(T + 230), where T is the temperature in degrees Celsius. The vapor pressure of a liquid at its boiling point equals the pressure of its surrounding environment. Liquid mixtures: Raoult's law Raoult's law gives an approximation to the vapor pressure of mixtures of liquids. It states that the activity (pressure or fugacity) of a single-phase mixture is equal to the mole-fraction-weighted sum of the components' vapor pressures: p = Σi xi p°i, where p is the mixture's vapor pressure, xi and yi are the mole fractions of component i in the liquid phase and in the vapor phase respectively, and p°i is the vapor pressure of the pure component i. Raoult's law is applicable only to non-electrolytes (uncharged species); it is most appropriate for non-polar molecules with only weak intermolecular attractions (such as London forces). Systems that have vapor pressures higher than indicated by the above formula are said to have positive deviations.
Such a deviation suggests weaker intermolecular attraction than in the pure components, so that the molecules can be thought of as being "held in" the liquid phase less strongly than in the pure liquid. An example is the azeotrope of approximately 95% ethanol and water. Because the azeotrope's vapor pressure is higher than predicted by Raoult's law, it boils at a temperature below that of either pure component. There are also systems with negative deviations that have vapor pressures that are lower than expected. Such a deviation is evidence for stronger intermolecular attraction between the constituents of the mixture than exists in the pure components. Thus, the molecules are "held in" the liquid more strongly when a second molecule is present. An example is a mixture of trichloromethane (chloroform) and 2-propanone (acetone), which boils above the boiling point of either pure component. The negative and positive deviations can be used to determine thermodynamic activity coefficients of the components of mixtures. Solids Equilibrium vapor pressure can be defined as the pressure reached when a condensed phase is in equilibrium with its own vapor. In the case of an equilibrium solid, such as a crystal, this can be defined as the pressure when the rate of sublimation of a solid matches the rate of deposition of its vapor phase. For most solids this pressure is very low, but some notable exceptions are naphthalene, dry ice (the vapor pressure of dry ice is 5.73 MPa (831 psi, 56.5 atm) at 20 °C, which causes most sealed containers to rupture), and ice. All solid materials have a vapor pressure. However, due to their often extremely low values, measurement can be rather difficult. Typical techniques include the use of thermogravimetry and gas transpiration. There are a number of methods for calculating the sublimation pressure (i.e., the vapor pressure) of a solid. One method is to estimate the sublimation pressure from extrapolated liquid vapor pressures (of the supercooled liquid), if the heat of fusion is known, by using this particular form of the Clausius–Clapeyron relation: ln Ps(T) = ln Pl(T) − (ΔfusH/R)(1/T − 1/Tm), where: Ps is the sublimation pressure of the solid component at the temperature T, Pl is the extrapolated vapor pressure of the liquid component at the temperature T, ΔfusH is the heat of fusion, R is the gas constant, T is the sublimation temperature, and Tm is the melting point temperature. This method assumes that the heat of fusion is temperature-independent, ignores additional transition temperatures between different solid phases, and gives a fair estimation for temperatures not too far from the melting point. It also shows that the sublimation pressure is lower than the extrapolated liquid vapor pressure (ΔfusH > 0) and that the difference grows with increased distance from the melting point. Boiling point of water Like all liquids, water boils when its vapor pressure reaches its surrounding pressure. In nature, the atmospheric pressure is lower at higher elevations and water boils at a lower temperature. The boiling temperature of water for atmospheric pressures can be approximated by the Antoine equation log10 P = 8.07131 − 1730.63/(233.426 + Tb), or transformed into this temperature-explicit form: Tb = 1730.63/(8.07131 − log10 P) − 233.426, where the temperature Tb is the boiling point in degrees Celsius and the pressure P is in torr. Dühring's rule Dühring's rule states that a linear relationship exists between the temperatures at which two solutions exert the same vapor pressure. Examples The following table is a list of a variety of substances ordered by increasing vapor pressure (in absolute units).
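The temperature-explicit Antoine form above reduces boiling-point estimates to one line of code. A Python sketch using the water coefficients quoted in the text (pressure in torr, temperature in degrees Celsius, valid roughly from 1 to 100 °C):

import math

# Antoine coefficients for water from the text: log10(P/torr) = A - B/(C + T)
A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_celsius(pressure_torr: float) -> float:
    """Temperature-explicit Antoine form: T = B/(A - log10 P) - C."""
    return B / (A - math.log10(pressure_torr)) - C

print(round(boiling_point_celsius(760.0), 1))  # 100.0 at one standard atmosphere
print(round(boiling_point_celsius(525.0), 1))  # ~90.0 at ~0.69 atm (high altitude)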
Estimating vapor pressure from molecular structure Several empirical methods exist to estimate the vapor pressure from molecular structure for organic molecules. Some examples are the SIMPOL.1 method, the method of Moller et al., and EVAPORATION (Estimation of VApour Pressure of ORganics, Accounting for Temperature, Intramolecular, and Non-additivity effects). Meaning in meteorology In meteorology, the term vapor pressure means the partial pressure of water vapor in the atmosphere, even if it is not in equilibrium. This differs from its meaning in other sciences. According to the American Meteorological Society Glossary of Meteorology, saturation vapor pressure properly refers to the equilibrium vapor pressure of water above a flat surface of liquid water or solid ice, and is a function only of temperature and of whether the condensed phase is liquid or solid. Relative humidity is defined relative to saturation vapor pressure. Equilibrium vapor pressure does not require the condensed phase to be a flat surface; it might consist of tiny droplets possibly containing solutes (impurities), such as a cloud. Equilibrium vapor pressure may differ significantly from saturation vapor pressure depending on the size of droplets and the presence of other particles which act as cloud condensation nuclei. However, these terms are used inconsistently, and some authors use "saturation vapor pressure" outside the narrow meaning given by the AMS Glossary. For example, a text on atmospheric convection states, "The Kelvin effect causes the saturation vapor pressure over the curved surface of the droplet to be greater than that over a flat water surface" (emphasis added). The still-current term saturation vapor pressure derives from the obsolete theory that water vapor dissolves into air, and that air at a given temperature can only hold a certain amount of water before becoming "saturated". Actually, as stated by Dalton's law (known since 1802), the partial pressure of water vapor or any substance does not depend on air at all, and the relevant temperature is that of the liquid. Nevertheless, the erroneous belief persists among the public and even meteorologists, aided by the misleading terms saturation pressure and supersaturation and the related definition of relative humidity.
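Because saturation vapor pressure over liquid water depends on temperature alone, meteorologists commonly approximate it with empirical fits such as the Magnus formula. The sketch below uses the Alduchov–Eskridge coefficients, a widely cited parameter set assumed here rather than taken from this article; relative humidity is then the ratio of the actual vapor pressure to this value.

import math

def saturation_vapor_pressure_hpa(t_celsius: float) -> float:
    """Magnus-type fit for saturation vapor pressure over liquid water, in hPa."""
    return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

print(round(saturation_vapor_pressure_hpa(20.0), 1))  # ~23.3 hPa at 20 degrees C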
Physical sciences
Thermodynamics
Chemistry
40203
https://en.wikipedia.org/wiki/Hubble%20Space%20Telescope
Hubble Space Telescope
The Hubble Space Telescope (HST or Hubble) is a space telescope that was launched into low Earth orbit in 1990 and remains in operation. It was not the first space telescope, but it is one of the largest and most versatile, renowned as a vital research tool and as a public relations boon for astronomy. The Hubble telescope is named after astronomer Edwin Hubble and is one of NASA's Great Observatories. The Space Telescope Science Institute (STScI) selects Hubble's targets and processes the resulting data, while the Goddard Space Flight Center (GSFC) controls the spacecraft. Hubble features a 2.4 m (7 ft 10 in) mirror, and its five main instruments observe in the ultraviolet, visible, and near-infrared regions of the electromagnetic spectrum. Hubble's orbit outside the distortion of Earth's atmosphere allows it to capture extremely high-resolution images with substantially lower background light than ground-based telescopes. It has recorded some of the most detailed visible light images, allowing a deep view into space. Many Hubble observations have led to breakthroughs in astrophysics, such as determining the rate of expansion of the universe. The Hubble telescope was funded and built in the 1970s by the United States space agency NASA with contributions from the European Space Agency. Its intended launch was in 1983, but the project was beset by technical delays, budget problems, and the 1986 Challenger disaster. Hubble was launched in 1990, but its main mirror had been ground incorrectly, resulting in spherical aberration that compromised the telescope's capabilities. The optics were corrected to their intended quality by a servicing mission in 1993. Hubble is the only telescope designed to be maintained in space by astronauts. Five Space Shuttle missions have repaired, upgraded, and replaced systems on the telescope, including all five of the main instruments. The fifth mission was initially canceled on safety grounds following the Columbia disaster (2003), but after NASA administrator Michael D. Griffin approved it, the servicing mission was completed in 2009. Hubble completed 30 years of operation in April 2020 and is predicted to last until 2030–2040. Hubble is the visible light telescope in NASA's Great Observatories program; other parts of the spectrum are covered by the Compton Gamma Ray Observatory, the Chandra X-ray Observatory, and the Spitzer Space Telescope (which covers the infrared bands). The mid-IR-to-visible band successor to the Hubble telescope is the James Webb Space Telescope (JWST), which was launched on December 25, 2021, with the Nancy Grace Roman Space Telescope due to follow in 2027. Conception, design and aim Proposals and precursors In 1923, Hermann Oberth—considered a father of modern rocketry, along with Robert H. Goddard and Konstantin Tsiolkovsky—published Die Rakete zu den Planetenräumen ("The Rocket into Planetary Space"), which mentioned how a telescope could be propelled into Earth orbit by a rocket. The history of the Hubble Space Telescope can be traced to 1946, to astronomer Lyman Spitzer's paper "Astronomical advantages of an extraterrestrial observatory". In it, he discussed the two main advantages that a space-based observatory would have over ground-based telescopes. First, the angular resolution (the smallest separation at which objects can be clearly distinguished) would be limited only by diffraction, rather than by the turbulence in the atmosphere, which causes stars to twinkle, known to astronomers as seeing.
At that time ground-based telescopes were limited to resolutions of 0.5–1.0 arcseconds, compared to a theoretical diffraction-limited resolution of about 0.05 arcsec for an optical telescope with a mirror 2.5 m (8.2 ft) in diameter. Second, a space-based telescope could observe infrared and ultraviolet light, which are strongly absorbed by the atmosphere of Earth. Spitzer devoted much of his career to pushing for the development of a space telescope. In 1962, a report by the U.S. National Academy of Sciences recommended development of a space telescope as part of the space program, and in 1965, Spitzer was appointed as head of a committee given the task of defining scientific objectives for a large space telescope. Also crucial was the work of Nancy Grace Roman, the "Mother of Hubble". Well before it became an official NASA project, she gave public lectures touting the scientific value of the telescope. After it was approved, she became the program scientist, setting up the steering committee in charge of making astronomer needs feasible to implement and writing testimony to Congress throughout the 1970s to advocate continued funding of the telescope. Her work as project scientist helped set the standards for NASA's operation of large scientific projects. Space-based astronomy had begun on a very small scale following World War II, as scientists made use of developments that had taken place in rocket technology. The first ultraviolet spectrum of the Sun was obtained in 1946, and NASA launched the Orbiting Solar Observatory (OSO) to obtain UV, X-ray, and gamma-ray spectra in 1962. An orbiting solar telescope was launched in 1962 by the United Kingdom as part of the Ariel programme, and in 1966 NASA launched the first Orbiting Astronomical Observatory (OAO) mission. OAO-1's battery failed after three days, terminating the mission. It was followed by Orbiting Astronomical Observatory 2 (OAO-2), which carried out ultraviolet observations of stars and galaxies from its launch in 1968 until 1972, well beyond its original planned lifetime of one year. The OSO and OAO missions demonstrated the important role space-based observations could play in astronomy. In 1968, NASA developed firm plans for a space-based reflecting telescope with a mirror 3 m (9.8 ft) in diameter, known provisionally as the Large Orbiting Telescope or Large Space Telescope (LST), with a launch slated for 1979. These plans emphasized the need for crewed maintenance missions to the telescope to ensure such a costly program had a lengthy working life, and the concurrent development of plans for the reusable Space Shuttle indicated that the technology to allow this was soon to become available. Quest for funding The continuing success of the OAO program encouraged increasingly strong consensus within the astronomical community that the LST should be a major goal. In 1970, NASA established two committees, one to plan the engineering side of the space telescope project, and the other to determine the scientific goals of the mission. Once these had been established, the next hurdle for NASA was to obtain funding for the instrument, which would be far more costly than any Earth-based telescope. The U.S. Congress questioned many aspects of the proposed budget for the telescope and forced cuts in the budget for the planning stages, which at the time consisted of very detailed studies of potential instruments and hardware for the telescope. In 1974, public spending cuts led to Congress deleting all funding for the telescope project.
In 1977, then NASA Administrator James C. Fletcher proposed a token $5 million for Hubble in NASA's budget. Then NASA Associate Administrator for Space Science, Noel Hinners, instead cut all funding for Hubble, gambling that this would galvanize the scientific community into fighting for full funding. The political ploy worked: in response to Hubble being zeroed out of NASA's budget, a nationwide lobbying effort was coordinated among astronomers. Many astronomers met congressmen and senators in person, and large-scale letter-writing campaigns were organized. The National Academy of Sciences published a report emphasizing the need for a space telescope, and eventually, the Senate agreed to half the budget that had originally been approved by Congress. The funding issues led to a reduction in the scale of the project, with the proposed mirror diameter reduced from 3 m to 2.4 m, both to cut costs and to allow a more compact and effective configuration for the telescope hardware. A proposed precursor space telescope to test the systems to be used on the main satellite was dropped, and budgetary concerns also prompted collaboration with the European Space Agency (ESA). ESA agreed to provide funding and supply one of the first generation instruments for the telescope, as well as the solar cells that would power it, and staff to work on the telescope in the United States, in return for European astronomers being guaranteed at least 15% of the observing time on the telescope. Congress eventually approved funding of US$36 million for 1978, and the design of the LST began in earnest, aiming for a launch date of 1983. In 1983, the telescope was named after Edwin Hubble, who confirmed one of the greatest scientific discoveries of the 20th century, made by Georges Lemaître: that the universe is expanding. Construction and engineering Once the Space Telescope project had been given the go-ahead, work on the program was divided among many institutions. Marshall Space Flight Center (MSFC) was given responsibility for the design, development, and construction of the telescope, while Goddard Space Flight Center was given overall control of the scientific instruments and the ground-control center for the mission. MSFC commissioned the optics company Perkin-Elmer to design and build the optical telescope assembly (OTA) and Fine Guidance Sensors for the space telescope. Lockheed was commissioned to construct and integrate the spacecraft in which the telescope would be housed. Optical telescope assembly Optically, the HST is a Cassegrain reflector of Ritchey–Chrétien design, as are most large professional telescopes. This design, with two hyperbolic mirrors, is known for good imaging performance over a wide field of view, with the disadvantage that the mirrors have shapes that are hard to fabricate and test. The mirror and optical systems of the telescope determine the final performance, and they were designed to exacting specifications. Optical telescopes typically have mirrors polished to an accuracy of about a tenth of the wavelength of visible light, but the Space Telescope was to be used for observations from the visible through the ultraviolet (shorter wavelengths) and was specified to be diffraction limited to take full advantage of the space environment. Therefore, its mirror needed to be polished to an accuracy of 10 nanometers, or about 1/65 of the wavelength of red light.
On the long wavelength end, the OTA was not designed with optimum infrared performance in mind—for example, the mirrors are kept at stable (and warm, about 15 °C) temperatures by heaters. This limits Hubble's performance as an infrared telescope. Perkin-Elmer (PE) intended to use custom-built and extremely sophisticated computer-controlled polishing machines to grind the mirror to the required shape. However, in case their cutting-edge technology ran into difficulties, NASA demanded that PE sub-contract to Kodak to construct a back-up mirror using traditional mirror-polishing techniques. (The team of Kodak and Itek also bid on the original mirror polishing work. Their bid called for the two companies to double-check each other's work, which would have almost certainly caught the polishing error that later caused problems.) The Kodak mirror is now on permanent display at the National Air and Space Museum. An Itek mirror built as part of the effort is now used in the 2.4 m telescope at the Magdalena Ridge Observatory. Construction of the Perkin-Elmer mirror began in 1979, starting with a blank manufactured by Corning from their ultra-low expansion glass. To keep the mirror's weight to a minimum, it consisted of top and bottom plates, each 25 mm (1 in) thick, sandwiching a honeycomb lattice. Perkin-Elmer simulated microgravity by supporting the mirror from the back with 130 rods that exerted varying amounts of force. This ensured the mirror's final shape would be correct and to specification when deployed. Mirror polishing continued until May 1981. NASA reports at the time questioned Perkin-Elmer's managerial structure, and the polishing began to slip behind schedule and over budget. To save money, NASA halted work on the back-up mirror and moved the launch date of the telescope to October 1984. The mirror was completed by the end of 1981; it was washed using 2,400 US gallons (9,100 L) of hot, deionized water and then received a reflective coating of 65 nm-thick aluminum and a protective coating of 25 nm-thick magnesium fluoride. Doubts continued to be expressed about Perkin-Elmer's competence on a project of this importance, as their budget and timescale for producing the rest of the OTA continued to inflate. In response to a schedule described as "unsettled and changing daily", NASA postponed the launch date of the telescope until April 1985. Perkin-Elmer's schedules continued to slip at a rate of about one month per quarter, and at times delays reached one day for each day of work. NASA was forced to postpone the launch date until March and then September 1986. By this time, the total project budget had risen to US$1.175 billion. Spacecraft systems The spacecraft in which the telescope and instruments were to be housed was another major engineering challenge. It would have to withstand frequent passages from direct sunlight into the darkness of Earth's shadow, which would cause major changes in temperature, while being stable enough to allow extremely accurate pointing of the telescope. A shroud of multi-layer insulation keeps the temperature within the telescope stable and surrounds a light aluminum shell in which the telescope and instruments sit. Within the shell, a graphite-epoxy frame keeps the working parts of the telescope firmly aligned. Because graphite composites are hygroscopic, there was a risk that water vapor absorbed by the truss while in Lockheed's clean room would later be expressed in the vacuum of space, resulting in the telescope's instruments being covered by ice.
To reduce that risk, a nitrogen gas purge was performed before launching the telescope into space. As well as electrical power systems, the Pointing Control System controls HST orientation using five types of sensors (magnetic sensors, optical sensors, and six gyroscopes) and two types of actuators (reaction wheels and magnetic torquers). While construction of the spacecraft in which the telescope and instruments would be housed proceeded somewhat more smoothly than the construction of the OTA, Lockheed experienced some budget and schedule slippage, and by the summer of 1985, construction of the spacecraft was 30% over budget and three months behind schedule. An MSFC report said Lockheed tended to rely on NASA directions rather than take their own initiative in the construction. Computer systems and data processing The two initial, primary computers on the HST were the 1.25 MHz DF-224 system, built by Rockwell Autonetics, which contained three redundant CPUs, and two redundant NSSC-1 (NASA Standard Spacecraft Computer, Model 1) systems, developed by Westinghouse and GSFC using diode–transistor logic (DTL). A co-processor for the DF-224 was added during Servicing Mission 1 in 1993, which consisted of two redundant strings of an Intel-based 80386 processor with an 80387 math co-processor. The DF-224 and its 386 co-processor were replaced by a 25 MHz Intel-based 80486 processor system during Servicing Mission 3A in 1999. The new computer is 20 times faster, with six times more memory, than the DF-224 it replaced. It increases throughput by moving some computing tasks from the ground to the spacecraft and saves money by allowing the use of modern programming languages. Additionally, some of the science instruments and components had their own embedded microprocessor-based control systems. The MAT (Multiple Access Transponder) components, MAT-1 and MAT-2, use Hughes Aircraft CDP1802CD microprocessors. The Wide Field and Planetary Camera (WFPC) also used an RCA 1802 microprocessor (or possibly the older 1801 version). The WFPC-1 was replaced by the WFPC-2 during Servicing Mission 1 in 1993, which was then replaced by the Wide Field Camera 3 (WFC3) during Servicing Mission 4 in 2009. The upgrade extended Hubble's capability of seeing deeper into the universe and providing images in three broad regions of the spectrum. Initial instruments When launched, the HST carried five scientific instruments: the Wide Field and Planetary Camera (WF/PC), Goddard High Resolution Spectrograph (GHRS), High Speed Photometer (HSP), Faint Object Camera (FOC) and the Faint Object Spectrograph (FOS). WF/PC used a radial instrument bay, and the other four instruments were each installed in an axial instrument bay. WF/PC was a high-resolution imaging device primarily intended for optical observations. It was built by NASA's Jet Propulsion Laboratory, and incorporated a set of 48 filters isolating spectral lines of particular astrophysical interest. The instrument contained eight charge-coupled device (CCD) chips divided between two cameras, each using four CCDs. Each CCD has a resolution of 0.64 megapixels. The wide field camera (WFC) covered a large angular field at the expense of resolution, while the planetary camera (PC) took images at a longer effective focal length than the WF chips, giving it a greater magnification. The Goddard High Resolution Spectrograph (GHRS) was a spectrograph designed to operate in the ultraviolet.
It was built by the Goddard Space Flight Center and could achieve a spectral resolution of 90,000. Also optimized for ultraviolet observations were the FOC and FOS, which were capable of the highest spatial resolution of any instruments on Hubble. Rather than CCDs, these three instruments used photon-counting digicons as their detectors. The FOC was constructed by ESA, while the University of California, San Diego, and Martin Marietta Corporation built the FOS. The final instrument was the HSP, designed and built at the University of Wisconsin–Madison. It was optimized for visible and ultraviolet light observations of variable stars and other astronomical objects varying in brightness. It could take up to 100,000 measurements per second with a photometric accuracy of about 2% or better. HST's guidance system can also be used as a scientific instrument. Its three Fine Guidance Sensors (FGS) are primarily used to keep the telescope accurately pointed during an observation, but can also be used to carry out extremely accurate astrometry; measurements accurate to within 0.0003 arcseconds have been achieved. Ground support The Space Telescope Science Institute (STScI) is responsible for the scientific operation of the telescope and the delivery of data products to astronomers. STScI is operated by the Association of Universities for Research in Astronomy (AURA) and is physically located in Baltimore, Maryland on the Homewood campus of Johns Hopkins University, one of the 39 U.S. universities and seven international affiliates that make up the AURA consortium. STScI was established in 1981 after something of a power struggle between NASA and the scientific community at large. NASA had wanted to keep this function in-house, but scientists wanted it to be based in an academic establishment. The Space Telescope European Coordinating Facility (ST-ECF), established at Garching bei München near Munich in 1984, provided similar support for European astronomers until 2011, when these activities were moved to the European Space Astronomy Centre. One complex task that falls to STScI is scheduling observations for the telescope. Hubble is in a low-Earth orbit to enable servicing missions, which results in most astronomical targets being occulted by the Earth for slightly less than half of each orbit. Observations cannot take place when the telescope passes through the South Atlantic Anomaly due to elevated radiation levels, and there are also sizable exclusion zones around the Sun (precluding observations of Mercury), Moon and Earth. The solar avoidance angle is about 50°, to keep sunlight from illuminating any part of the OTA. Earth and Moon avoidance keeps bright light out of the FGSs, and keeps scattered light from entering the instruments. If the FGSs are turned off, the Moon and Earth can be observed. Earth observations were used very early in the program to generate flat-fields for the WFPC1 instrument. There is a so-called continuous viewing zone (CVZ), within roughly 24° of Hubble's orbital poles, in which targets are not occulted for long periods. Due to the precession of the orbit, the location of the CVZ moves slowly over a period of eight weeks. Because the limb of the Earth is always within about 30° of regions within the CVZ, the brightness of scattered earthshine may be elevated for long periods during CVZ observations. Hubble orbits in low Earth orbit at an altitude of approximately 540 km (340 mi) and an inclination of 28.5°.
The position along its orbit changes over time in a way that is not accurately predictable. The density of the upper atmosphere varies according to many factors, and this means Hubble's predicted position for six weeks' time could be in error by up to 4,000 km (2,500 mi). Observation schedules are typically finalized only a few days in advance, as a longer lead time would mean there was a chance the target would be unobservable by the time it was due to be observed. Engineering support for HST is provided by NASA and contractor personnel at the Goddard Space Flight Center in Greenbelt, Maryland, south of the STScI. Hubble's operation is monitored 24 hours per day by four teams of flight controllers who make up Hubble's Flight Operations Team. Challenger disaster, delays, and eventual launch By January 1986, the planned launch date for Hubble that October looked feasible, but the Challenger disaster brought the U.S. space program to a halt, grounded the Shuttle fleet, and forced the launch to be postponed for several years. During this delay the telescope was kept in a clean room, powered up and purged with nitrogen, until a launch could be rescheduled. This costly situation (about US$6 million per month) pushed the overall costs of the project higher. However, the delay allowed time for engineers to perform extensive tests, swap out a possibly failure-prone battery, and make other improvements. Furthermore, the ground software needed to control Hubble was not ready in 1986, and was barely ready by the 1990 launch. Following the resumption of shuttle flights, Discovery successfully launched Hubble on April 24, 1990, as part of the STS-31 mission. At launch, NASA had spent approximately US$4.7 billion (in inflation-adjusted 2010 dollars) on the project. Hubble's cumulative costs, reckoned in 2015 dollars, include all subsequent servicing costs but not ongoing operations, and make it the most expensive science mission in NASA history. List of Hubble instruments Hubble accommodates five science instruments at a given time, plus the Fine Guidance Sensors, which are mainly used for aiming the telescope but are occasionally used for scientific astrometry measurements. Early instruments were replaced with more advanced ones during the Shuttle servicing missions. COSTAR was a corrective optics device rather than a science instrument, but occupied one of the four axial instrument bays. Since the final servicing mission in 2009, the four active instruments have been ACS, COS, STIS and WFC3. NICMOS is kept in hibernation, but may be revived if WFC3 were to fail in the future. Advanced Camera for Surveys (ACS; 2002–present) Cosmic Origins Spectrograph (COS; 2009–present) Corrective Optics Space Telescope Axial Replacement (COSTAR; 1993–2009) Faint Object Camera (FOC; 1990–2002) Faint Object Spectrograph (FOS; 1990–1997) Fine Guidance Sensor (FGS; 1990–present) Goddard High Resolution Spectrograph (GHRS/HRS; 1990–1997) High Speed Photometer (HSP; 1990–1993) Near Infrared Camera and Multi-Object Spectrometer (NICMOS; 1997–present, hibernating since 2008) Space Telescope Imaging Spectrograph (STIS; 1997–present (non-operative 2004–2009)) Wide Field and Planetary Camera (WFPC; 1990–1993) Wide Field and Planetary Camera 2 (WFPC2; 1993–2009) Wide Field Camera 3 (WFC3; 2009–present) Of the former instruments, three (COSTAR, FOS and WFPC2) are displayed in the Smithsonian National Air and Space Museum. The FOC is in the Dornier museum, Germany. The HSP is in the Space Place at the University of Wisconsin–Madison.
The first WFPC was dismantled, and some components were then re-used in WFC3. Flawed mirror Within weeks of the launch of the telescope, the returned images indicated a serious problem with the optical system. Although the first images appeared to be sharper than those of ground-based telescopes, Hubble failed to achieve a final sharp focus, and the best image quality obtained was drastically lower than expected. Images of point sources spread out over a radius of more than one arcsecond, instead of having a point spread function (PSF) concentrated within a circle 0.1 arcseconds (485 nrad) in diameter, as had been specified in the design criteria. Analysis of the flawed images revealed that the primary mirror had been polished to the wrong shape. Although it was believed to be one of the most precisely figured optical mirrors ever made, smooth to about 10 nanometers, the outer perimeter was too flat by about 2200 nanometers (about 1/450 mm or 1/11000 inch). This difference was catastrophic, introducing severe spherical aberration, a flaw in which light reflecting off the edge of a mirror focuses on a different point from the light reflecting off its center. The effect of the mirror flaw on scientific observations depended on the particular observation—the core of the aberrated PSF was sharp enough to permit high-resolution observations of bright objects, and spectroscopy of point sources was affected only through a sensitivity loss. However, the loss of light to the large, out-of-focus halo severely reduced the usefulness of the telescope for faint objects or high-contrast imaging. This meant nearly all the cosmological programs were essentially impossible, since they required observation of exceptionally faint objects. This led politicians to question NASA's competence, scientists to rue the cost which could have gone to more productive endeavors, and comedians to make jokes about NASA and the telescope. In the 1991 comedy The Naked Gun 2½: The Smell of Fear, in a scene where historical disasters are displayed, Hubble is pictured with RMS Titanic and LZ 129 Hindenburg. Nonetheless, during the first three years of the Hubble mission, before the optical corrections, the telescope carried out a large number of productive observations of less demanding targets. The error was well characterized and stable, enabling astronomers to partially compensate for the defective mirror by using sophisticated image processing techniques such as deconvolution. Origin of the problem A commission headed by Lew Allen, director of the Jet Propulsion Laboratory, was established to determine how the error could have arisen. The Allen Commission found that a reflective null corrector, a testing device used to achieve a properly shaped non-spherical mirror, had been incorrectly assembled—one lens was out of position by 1.3 mm (0.051 in). During the initial grinding and polishing of the mirror, Perkin-Elmer analyzed its surface with two conventional refractive null correctors. However, for the final manufacturing step (figuring), they switched to the custom-built reflective null corrector, designed explicitly to meet very strict tolerances. The incorrect assembly of this device resulted in the mirror being ground very precisely but to the wrong shape. During fabrication, a few tests using conventional null correctors correctly reported spherical aberration. But these results were dismissed, thus missing the opportunity to catch the error, because the reflective null corrector was considered more accurate.
The commission blamed the failings primarily on Perkin-Elmer. Relations between NASA and the optics company had been severely strained during the telescope construction, due to frequent schedule slippage and cost overruns. NASA found that Perkin-Elmer did not review or supervise the mirror construction adequately, did not assign its best optical scientists to the project (as it had for the prototype), and in particular did not involve the optical designers in the construction and verification of the mirror. While the commission heavily criticized Perkin-Elmer for these managerial failings, NASA was also criticized for not picking up on the quality control shortcomings, such as relying totally on test results from a single instrument. Design of a solution Many feared that Hubble would be abandoned. The design of the telescope had always incorporated servicing missions, and astronomers immediately began to seek potential solutions to the problem that could be applied at the first servicing mission, scheduled for 1993. While Kodak had ground a back-up mirror for Hubble, it would have been impossible to replace the mirror in orbit, and too expensive and time-consuming to bring the telescope back to Earth for a refit. Instead, the fact that the mirror had been ground so precisely to the wrong shape led to the design of new optical components with exactly the same error but in the opposite sense, to be added to the telescope at the servicing mission, effectively acting as "spectacles" to correct the spherical aberration. The first step was a precise characterization of the error in the main mirror. Working backwards from images of point sources, astronomers determined that the conic constant of the mirror as built was about −1.0139, instead of the intended −1.00230. The same number was also derived by analyzing the null corrector used by Perkin-Elmer to figure the mirror, as well as by analyzing interferograms obtained during ground testing of the mirror. Because of the way the HST's instruments were designed, two different sets of correctors were required. The design of the Wide Field and Planetary Camera 2, already planned to replace the existing WF/PC, included relay mirrors to direct light onto the four separate charge-coupled device (CCD) chips making up its two cameras. An inverse error built into their surfaces could completely cancel the aberration of the primary. However, the other instruments lacked any intermediate surfaces that could be configured in this way, and so required an external correction device. The Corrective Optics Space Telescope Axial Replacement (COSTAR) system was designed to correct the spherical aberration for light focused at the FOC, FOS, and GHRS. It consisted of two mirrors in the light path, one of which was ground to correct the aberration. To fit the COSTAR system onto the telescope, one of the other instruments had to be removed, and astronomers selected the High Speed Photometer to be sacrificed. By 2002, all the original instruments requiring COSTAR had been replaced by instruments with their own corrective optics. COSTAR was then removed and returned to Earth in 2009; it is now exhibited at the National Air and Space Museum in Washington, D.C. The area previously used by COSTAR is now occupied by the Cosmic Origins Spectrograph. Servicing missions and new instruments Servicing overview Hubble was designed to accommodate regular servicing and equipment upgrades while in orbit. Instruments and limited-life items were designed as orbital replacement units.
Five servicing missions (SM 1, 2, 3A, 3B, and 4) were flown by NASA Space Shuttles, the first in December 1993 and the last in May 2009. Servicing missions were delicate operations that began with maneuvering to intercept the telescope in orbit and carefully retrieving it with the shuttle's mechanical arm. The necessary work was then carried out in multiple tethered spacewalks over a period of four to five days. After a visual inspection of the telescope, astronauts conducted repairs, replaced failed or degraded components, upgraded equipment, and installed new instruments. Once work was completed, the telescope was redeployed, typically after boosting to a higher orbit to address the orbital decay caused by atmospheric drag. Servicing Mission 1 The first Hubble servicing mission was scheduled for 1993 before the mirror problem was discovered. It assumed greater importance, as the astronauts would need to do extensive work to install corrective optics; failure would have resulted in either abandoning Hubble or accepting its permanent disability. Other components failed before the mission, causing the repair cost to rise to $500 million (not including the cost of the shuttle flight). A successful repair would also help demonstrate the viability of building Space Station Alpha. STS-49 in 1992 demonstrated the difficulty of space work: while its rescue of Intelsat 603 received praise, the astronauts had taken possibly reckless risks in doing so. Neither the rescue nor the unrelated assembly of prototype space station components occurred as the astronauts had trained, causing NASA to reassess its planning and training, including for the Hubble repair. The agency assigned to the mission Story Musgrave, who had worked on satellite repair procedures since 1976, and six other experienced astronauts, including two from STS-49. The first mission director since Project Apollo would coordinate a crew with 16 previous shuttle flights among them. The astronauts were trained to use about a hundred specialized tools. Heat had been a problem on prior spacewalks, which occurred in sunlight; Hubble needed to be repaired out of sunlight. Musgrave discovered during vacuum training, seven months before the mission, that spacesuit gloves did not sufficiently protect against the cold of space. After STS-57 confirmed the issue in orbit, NASA quickly changed equipment, procedures, and flight plan. Seven total mission simulations occurred before launch, the most thorough preparation in shuttle history. No complete Hubble mockup existed, so the astronauts studied many separate models (including one at the Smithsonian) and mentally combined their varying and contradictory details. Servicing Mission 1 flew aboard Endeavour in December 1993, and involved installation of several instruments and other equipment over ten days. Most importantly, the High Speed Photometer was replaced with the COSTAR corrective optics package, and WF/PC was replaced with the Wide Field and Planetary Camera 2 (WFPC2) with an internal optical correction system. The solar arrays and their drive electronics were also replaced, as were four gyroscopes in the telescope pointing system, two electrical control units and other electrical components, and two magnetometers. The onboard computers were upgraded with added coprocessors, and Hubble's orbit was boosted. On January 13, 1994, NASA declared the mission a complete success and showed the first sharper images.
The mission was one of the most complex performed to that date, involving five long extra-vehicular activity periods. Its success was a boon for NASA, as well as for the astronomers who now had a more capable space telescope. Servicing Mission 2 Servicing Mission 2, flown by Discovery in February 1997, replaced the GHRS and the FOS with the Space Telescope Imaging Spectrograph (STIS) and the Near Infrared Camera and Multi-Object Spectrometer (NICMOS), replaced an Engineering and Science Tape Recorder with a new Solid State Recorder, and repaired thermal insulation. NICMOS contained a heat sink of solid nitrogen to reduce the thermal noise from the instrument, but shortly after it was installed, an unexpected thermal expansion resulted in part of the heat sink coming into contact with an optical baffle. This led to an increased warming rate for the instrument and reduced its original expected lifetime of 4.5 years to about two years. Servicing Mission 3A Servicing Mission 3A, flown by Discovery, took place in December 1999, and was a split-off from Servicing Mission 3 after three of the six onboard gyroscopes had failed. The fourth failed a few weeks before the mission, rendering the telescope incapable of performing scientific observations. The mission replaced all six gyroscopes, replaced a Fine Guidance Sensor and the computer, installed a Voltage/temperature Improvement Kit (VIK) to prevent battery overcharging, and replaced thermal insulation blankets. Servicing Mission 3B Servicing Mission 3B, flown by Columbia in March 2002, saw the installation of a new instrument, with the FOC (which, except for the Fine Guidance Sensors when used for astrometry, was the last of the original instruments) being replaced by the Advanced Camera for Surveys (ACS). This meant COSTAR was no longer required, since all new instruments had built-in correction for the main mirror aberration. The mission also revived NICMOS by installing a closed-cycle cooler and replaced the solar arrays for the second time, providing 30 percent more power. Servicing Mission 4 Plans called for Hubble to be serviced in February 2005, but the Columbia disaster in 2003, in which the orbiter disintegrated on re-entry into the atmosphere, had wide-ranging effects on the Hubble program and other NASA missions. NASA Administrator Sean O'Keefe decided all future shuttle missions had to be able to reach the safe haven of the International Space Station should in-flight problems develop. As no shuttles were capable of reaching both HST and the space station during the same mission, future crewed service missions were canceled. This decision was criticized by numerous astronomers who felt Hubble was valuable enough to merit the human risk. By 2004, HST's planned successor, the James Webb Space Telescope (JWST), was not expected to launch until at least 2011. JWST was eventually launched in December 2021. A gap in space-observing capabilities between the decommissioning of Hubble and the commissioning of a successor was of major concern to many astronomers, given the significant scientific impact of HST. The fact that JWST would not be located in low Earth orbit, and therefore could not be easily upgraded or repaired in the event of an early failure, only made these concerns more acute. On the other hand, NASA officials were concerned that continuing to service Hubble would consume funds from other programs and delay the JWST.
In January 2004, O'Keefe said he would review his decision to cancel the final servicing mission to HST, due to public outcry and requests from Congress for NASA to look for a way to save it. The National Academy of Sciences convened an official panel, which recommended in July 2004 that the HST should be preserved despite the apparent risks. Their report urged that "NASA should take no actions that would preclude a space shuttle servicing mission to the Hubble Space Telescope". In August 2004, O'Keefe asked Goddard Space Flight Center to prepare a detailed proposal for a robotic service mission. These plans were later canceled, the robotic mission being described as "not feasible". In late 2004, several Congressional members, led by Senator Barbara Mikulski, held public hearings and carried on a fight with much public support (including thousands of letters from school children across the U.S.) to get the Bush Administration and NASA to reconsider the decision to drop plans for a Hubble rescue mission. The nomination in April 2005 of a new NASA Administrator, Michael D. Griffin, changed the situation, as Griffin stated he would consider a crewed servicing mission. Soon after his appointment, Griffin authorized Goddard to proceed with preparations for a crewed Hubble maintenance flight, saying he would make the final decision after the next two shuttle missions. In October 2006 Griffin gave the final go-ahead, and the 11-day mission by Atlantis was scheduled for October 2008. Hubble's main data-handling unit failed in September 2008, halting all reporting of scientific data until its back-up was brought online on October 25, 2008. Since a failure of the backup unit would leave the HST helpless, the service mission was postponed to incorporate a replacement for the primary unit. Servicing Mission 4 (SM4), flown by Atlantis in May 2009, was the last scheduled shuttle mission for HST. SM4 installed the replacement data-handling unit, repaired the ACS and STIS systems, installed improved nickel-hydrogen batteries, and replaced other components, including all six gyroscopes. SM4 also installed two new observation instruments, the Wide Field Camera 3 (WFC3) and the Cosmic Origins Spectrograph (COS), as well as the Soft Capture and Rendezvous System, which will enable the future rendezvous, capture, and safe disposal of Hubble by either a crewed or robotic mission. Except for the ACS's High Resolution Channel, which could not be repaired and was disabled, the work accomplished during SM4 rendered the telescope fully functional. Major projects Since the start of the program, a number of research projects have been carried out, some of them almost solely with Hubble, others in coordination with other facilities such as the Chandra X-ray Observatory and ESO's Very Large Telescope. Although the Hubble observatory is nearing the end of its life, there are still major projects scheduled for it. One example is the current (2022) ULLYSES project (Ultraviolet Legacy Library of Young Stars as Essential Standards), which will last for three years, observing a set of high- and low-mass young stars to shed light on star formation and composition. Another is the OPAL project (Outer Planet Atmospheres Legacy), which is focused on understanding the evolution and dynamics of the atmospheres of the outer planets (such as Jupiter and Uranus) by making baseline observations over an extended period.
Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey In an August 2013 press release, CANDELS was referred to as "the largest project in the history of Hubble". The survey "aims to explore galactic evolution in the early Universe, and the first seeds of cosmic structure at less than one billion years after the Big Bang." The CANDELS project site describes the survey's goals as the following: The Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey is designed to document the first third of galactic evolution from z = 8 to 1.5 via deep imaging of more than 250,000 galaxies with WFC3/IR and ACS. It will also find the first Type Ia SNe beyond z > 1.5 and establish their accuracy as standard candles for cosmology. Five premier multi-wavelength sky regions are selected; each has multi-wavelength data from Spitzer and other facilities, and has extensive spectroscopy of the brighter galaxies. The use of five widely separated fields mitigates cosmic variance and yields statistically robust and complete samples of galaxies down to 10⁹ solar masses out to z ~ 8. Frontier Fields program The program, officially named "Hubble Deep Fields Initiative 2012", aims to advance the knowledge of early galaxy formation by studying high-redshift galaxies in blank fields with the help of gravitational lensing to see the "faintest galaxies in the distant universe". The Frontier Fields web page describes the goals of the program as:
to reveal hitherto inaccessible populations of z = 5–10 galaxies that are ten to fifty times fainter intrinsically than any presently known
to solidify our understanding of the stellar masses and star formation histories of sub-L* galaxies at the earliest times
to provide the first statistically meaningful morphological characterization of star forming galaxies at z > 5
to find z > 8 galaxies stretched out enough by cluster lensing to discern internal structure and/or magnified enough by cluster lensing for spectroscopic follow-up.
Cosmic Evolution Survey (COSMOS) The Cosmic Evolution Survey (COSMOS) is an astronomical survey designed to probe the formation and evolution of galaxies as a function of both cosmic time (redshift) and the local galaxy environment. The survey covers a two-square-degree equatorial field with spectroscopy and X-ray to radio imaging by most of the major space-based telescopes and a number of large ground-based telescopes, making it a key focus region of extragalactic astrophysics. COSMOS was launched in 2006 as the largest project pursued by the Hubble Space Telescope at the time, and it remains the largest continuous area of sky covered for the purposes of mapping deep space in blank fields, 2.5 times the area of the Moon on the sky and 17 times larger than the largest of the CANDELS regions. The COSMOS scientific collaboration that was forged from the initial COSMOS survey is the largest and longest-running extragalactic collaboration, known for its collegiality and openness. The study of galaxies in their environment can be done only with large areas of the sky, larger than half a square degree. More than two million galaxies have been detected, spanning 90% of the age of the Universe. The COSMOS collaboration is led by Caitlin Casey, Jeyhan Kartaltepe, and Vernesa Smolcic and involves more than 200 scientists in a dozen countries. Public use Proposal process Anyone can apply for time on the telescope; there are no restrictions on nationality or academic affiliation, but funding for analysis is available only to U.S. institutions.
Competition for time on the telescope is intense, with about one-fifth of the proposals submitted in each cycle earning time on the schedule. Calls for proposals are issued roughly annually, with time allocated for a cycle lasting about one year. Proposals are divided into several categories; "general observer" proposals are the most common, covering routine observations. "Snapshot observations" are those in which targets require only 45 minutes or less of telescope time, including overheads such as acquiring the target. Snapshot observations are used to fill in gaps in the telescope schedule that cannot be filled by regular general observer programs. Astronomers may make "Target of Opportunity" proposals, in which observations are scheduled if a transient event covered by the proposal occurs during the scheduling cycle. In addition, up to 10% of the telescope time is designated "director's discretionary" (DD) time. Astronomers can apply to use DD time at any time of year, and it is typically awarded for study of unexpected transient phenomena such as supernovae. Other uses of DD time have included the observations that led to the Hubble Deep Field and Hubble Ultra Deep Field views, and, in the first four cycles of telescope time, observations carried out by amateur astronomers. In 2012, ESA held a contest for public image processing of Hubble data to encourage the discovery of "hidden treasures" in the raw Hubble data. Use by amateur astronomers The first director of STScI, Riccardo Giacconi, announced in 1986 that he intended to devote some of his director's discretionary time to allowing amateur astronomers to use the telescope. The total time to be allocated was only a few hours per cycle but excited great interest among amateur astronomers. Proposals for amateur time were stringently reviewed by a committee of amateur astronomers, and time was awarded only to proposals that were deemed to have genuine scientific merit, did not duplicate proposals made by professionals, and required the unique capabilities of the space telescope. Thirteen amateur astronomers were awarded time on the telescope, with observations being carried out between 1990 and 1997. One such study was "Transition Comets—UV Search for OH". The first proposal, "A Hubble Space Telescope Study of Posteclipse Brightening and Albedo Changes on Io", was published in Icarus, a journal devoted to solar system studies. A second study from another group of amateurs was also published in Icarus. After that time, however, budget reductions at STScI made the support of work by amateur astronomers untenable, and no additional amateur programs have been carried out. Regular Hubble proposals still include findings or objects discovered by amateurs and citizen scientists. These observations are often made in collaboration with professional astronomers. One of the earliest such observations was the Great White Spot of 1990 on Saturn, discovered by amateur astronomer S. Wilber and observed by HST under a proposal by J. Westphal (Caltech). Later professional-amateur observations by Hubble include discoveries by the Galaxy Zoo project, such as Voorwerpjes and Green Pea galaxies. The "Gems of the Galaxies" program is based on a list of objects by Galaxy Zoo volunteers that was shortened with the help of an online vote.
Additionally, there are observations of minor bodies discovered by amateur astronomers, such as 2I/Borisov, and of changes in the atmospheres of the gas giants Jupiter and Saturn or the ice giants Uranus and Neptune. In the pro-am collaboration Backyard Worlds, the HST was used to observe a planetary-mass object called WISE J0830+2837; the non-detection by the HST helped to classify this peculiar object. Scientific results Key projects In the early 1980s, NASA and STScI convened four panels to discuss key projects. These were projects that were both scientifically important and would require significant telescope time, which would be explicitly dedicated to each project. This guaranteed that these particular projects would be completed early, in case the telescope failed sooner than expected. The panels identified three such projects:
1) a study of the nearby intergalactic medium using quasar absorption lines to determine the properties of the intergalactic medium and the gaseous content of galaxies and groups of galaxies;
2) a medium-deep survey using the Wide Field Camera to take data whenever one of the other instruments was being used; and
3) a project to determine the Hubble constant within ten percent by reducing the errors, both external and internal, in the calibration of the distance scale.
Important discoveries Hubble has helped resolve some long-standing problems in astronomy, while also raising new questions. Some results have required new theories to explain them. Age and expansion of the universe One of its primary mission targets was to measure distances to Cepheid variable stars more accurately than ever before, and thus constrain the value of the Hubble constant, the measure of the rate at which the universe is expanding, which is also related to its age. Before the launch of HST, estimates of the Hubble constant typically had errors of up to 50%, but Hubble measurements of Cepheid variables in the Virgo Cluster and other distant galaxy clusters provided a measured value with an accuracy of ±10%, which is consistent with other, more accurate measurements made since Hubble's launch using other techniques. The estimated age of the universe is now about 13.7 billion years; before the Hubble Telescope, scientists predicted an age ranging from 10 to 20 billion years. While Hubble helped to refine estimates of the age of the universe, it also upended theories about its future. Astronomers from the High-z Supernova Search Team and the Supernova Cosmology Project used ground-based telescopes and HST to observe distant supernovae and uncovered evidence that, far from decelerating under the influence of gravity, the expansion of the universe is instead accelerating. Three members of these two groups have subsequently been awarded Nobel Prizes for their discovery. The cause of this acceleration remains poorly understood; the term used for the currently unknown cause is dark energy, signifying that it is dark (unable to be directly seen and detected) by our current scientific instruments. Black holes The high-resolution spectra and images provided by the HST have been especially well-suited to establishing the prevalence of black holes in the centers of nearby galaxies. While it had been hypothesized in the early 1960s that black holes would be found at the centers of some galaxies, and astronomers in the 1980s identified a number of good black hole candidates, work conducted with Hubble shows that black holes are probably common to the centers of all galaxies.
The Hubble programs further established that the masses of the nuclear black holes and properties of the galaxies are closely related. Extending visible wavelength images A unique window on the Universe enabled by Hubble is the set of Hubble Deep Field, Hubble Ultra-Deep Field, and Hubble Extreme Deep Field images, which used Hubble's unmatched sensitivity at visible wavelengths to create images of small patches of sky that are the deepest ever obtained at optical wavelengths. The images reveal galaxies billions of light-years away, thereby providing information about the early Universe, and have accordingly generated a wealth of scientific papers. The Wide Field Camera 3 improved the view of these fields in the infrared and ultraviolet, supporting the discovery of some of the most distant objects yet discovered, such as MACS0647-JD. The non-standard object SCP 06F6 was discovered by the Hubble Space Telescope in February 2006. On March 3, 2016, researchers using Hubble data announced the discovery of the farthest confirmed galaxy to date: GN-z11, which Hubble observed as it existed roughly 400 million years after the Big Bang. The Hubble observations occurred on February 11, 2015, and April 3, 2015, as part of the CANDELS/GOODS-North surveys. Solar System discoveries The collision of Comet Shoemaker-Levy 9 with Jupiter in 1994 was fortuitously timed for astronomers, coming just a few months after Servicing Mission 1 had restored Hubble's optical performance. Hubble images of the planet were sharper than any taken since the passage of Voyager 2 in 1979, and were crucial in studying the dynamics of the collision of a large comet with Jupiter, an event believed to occur once every few centuries. In March 2015, researchers announced that measurements of aurorae around Ganymede, one of Jupiter's moons, revealed that it has a subsurface ocean. Using Hubble to study the motion of its aurorae, the researchers determined that a large saltwater ocean was helping to suppress the interaction between Jupiter's magnetic field and that of Ganymede. The ocean is estimated to be about 100 km deep, trapped beneath an ice crust roughly 150 km thick. HST has also been used to study objects in the outer reaches of the Solar System, including the dwarf planets Pluto, Eris, and Sedna. During June and July 2012, U.S. astronomers using Hubble discovered Styx, a tiny fifth moon orbiting Pluto. From June to August 2015, Hubble was used to search for a Kuiper belt object (KBO) target for the New Horizons Kuiper Belt Extended Mission (KEM) after similar searches with ground telescopes failed to find a suitable target. This resulted in the discovery of at least five new KBOs, including the eventual KEM target, 486958 Arrokoth, which New Horizons performed a close fly-by of on January 1, 2019. In April 2022, NASA announced that astronomers had been able to use images from HST to determine the size of the nucleus of comet C/2014 UN271 (Bernardinelli–Bernstein), the largest icy comet nucleus ever seen by astronomers. The nucleus of C/2014 UN271 has an estimated mass of 50 trillion tons, which is 50 times the mass of other known comets in our Solar System. Supernova reappearance On December 11, 2015, Hubble captured an image of the first-ever predicted reappearance of a supernova, dubbed "Refsdal", which was calculated using different mass models of a galaxy cluster whose gravity is warping the supernova's light. The supernova had previously been seen in November 2014 behind galaxy cluster MACS J1149.5+2223 as part of Hubble's Frontier Fields program.
The light from the cluster took roughly five billion years to reach Earth, while the light from the supernova behind it traveled for roughly five billion years longer, as indicated by their respective redshifts. Because of the gravitational effect of the galaxy cluster, four images of the supernova appeared instead of one, an example of an Einstein cross. Based on early lens models, a fifth image was predicted to appear by the end of 2015, and Refsdal duly reappeared as predicted. Mass and size of Milky Way In March 2019, observations from Hubble and data from the European Space Agency's Gaia space observatory were combined to determine that the mass of the Milky Way Galaxy is approximately 1.5 trillion times the mass of the Sun, a value intermediate between prior estimates. Other discoveries Other discoveries made with Hubble data include proto-planetary disks (proplyds) in the Orion Nebula; evidence for the presence of extrasolar planets around Sun-like stars; and the optical counterparts of the still-mysterious gamma-ray bursts. Using gravitational lensing, Hubble observed a galaxy designated MACS 2129-1 approximately 10 billion light-years from Earth. MACS 2129-1 subverted expectations about galaxies in which new star formation had ceased, a significant result for understanding the formation of elliptical galaxies. In 2022, Hubble detected the light of the farthest individual star ever seen to date. The star, WHL0137-LS (nicknamed Earendel), existed within the first billion years after the Big Bang. It will be observed by NASA's James Webb Space Telescope to confirm that Earendel is indeed a star. Impact on astronomy Many objective measures show the positive impact of Hubble data on astronomy. Over 15,000 papers based on Hubble data have been published in peer-reviewed journals, and countless more have appeared in conference proceedings. Looking at papers several years after their publication, about one-third of all astronomy papers have no citations, while only two percent of papers based on Hubble data have no citations. On average, a paper based on Hubble data receives about twice as many citations as papers based on non-Hubble data. Of the 200 papers published each year that receive the most citations, about 10% are based on Hubble data. Although the HST has clearly helped astronomical research, its financial cost has been large. A study on the relative astronomical benefits of different sizes of telescopes found that while papers based on HST data generate 15 times as many citations as those from a ground-based telescope such as the William Herschel Telescope, the HST costs about 100 times as much to build and maintain. Deciding between building ground- versus space-based telescopes is complex. Even before Hubble was launched, specialized ground-based techniques such as aperture masking interferometry had obtained higher-resolution optical and infrared images than Hubble would achieve, though restricted to targets about 10⁸ times brighter than the faintest targets observed by Hubble. Since then, advances in adaptive optics have extended the high-resolution imaging capabilities of ground-based telescopes to the infrared imaging of faint objects. The usefulness of adaptive optics versus HST observations depends strongly on the particular details of the research questions being asked. In the visible bands, adaptive optics can correct only a relatively small field of view, whereas HST can conduct high-resolution optical imaging over a wider field.
Moreover, Hubble can image fainter objects, since ground-based telescopes are affected by the background of scattered light created by the Earth's atmosphere. Impact on aerospace engineering In addition to its scientific results, Hubble has also made significant contributions to aerospace engineering, in particular to understanding the performance of systems in low Earth orbit (LEO). These insights result from Hubble's long lifetime on orbit, extensive instrumentation, and the return of assemblies to the Earth where they can be studied in detail. In particular, Hubble has contributed to studies of the behavior of graphite composite structures in vacuum, optical contamination from residual gas and human servicing, radiation damage to electronics and sensors, and the long-term behavior of multi-layer insulation. One lesson learned was that gyroscopes assembled using pressurized oxygen to deliver suspension fluid were prone to failure due to electric wire corrosion; gyroscopes are now assembled using pressurized nitrogen. Another is that optical surfaces in LEO can have surprisingly long lifetimes; Hubble was only expected to last 15 years before the mirror became unusable, but after 14 years there was no measurable degradation. Finally, Hubble servicing missions, particularly those that serviced components not designed for in-space maintenance, have contributed towards the development of new tools and techniques for on-orbit repair. Hubble data Transmission to Earth Hubble data was initially stored on the spacecraft. When launched, the storage facilities were old-fashioned reel-to-reel tape drives, but these were replaced by solid-state data storage facilities during servicing missions 2 and 3A. About twice daily, the Hubble Space Telescope radios data to a satellite in the geosynchronous Tracking and Data Relay Satellite System (TDRSS), which then downlinks the science data to one of two 60-foot (18-meter) diameter high-gain microwave antennas located at the White Sands Test Facility in White Sands, New Mexico. From there the data are sent to the Space Telescope Operations Control Center at Goddard Space Flight Center, and finally to the Space Telescope Science Institute for archiving. Each week, HST downlinks approximately 140 gigabits of data. Color images All images from Hubble are monochromatic grayscale, taken through a variety of filters, each passing specific wavelengths of light, incorporated in each camera. Color images are created by combining separate monochrome images taken through different filters. This process can also create false-color versions of images including infrared and ultraviolet channels, where infrared is typically rendered as a deep red and ultraviolet is rendered as a deep blue. Archives All Hubble data is eventually made available via the Mikulski Archive for Space Telescopes at STScI, CADC, and ESA/ESAC. Data is usually proprietary, available only to the principal investigator (PI) and astronomers designated by the PI, for twelve months after being taken. The PI can apply to the director of the STScI to extend or reduce the proprietary period in some circumstances. Observations made on Director's Discretionary Time are exempt from the proprietary period, and are released to the public immediately. Calibration data such as flat fields and dark frames are also publicly available straight away. All data in the archive is in the FITS format, which is suitable for astronomical analysis but not for public use.
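To make the filter-combination process concrete, the sketch below shows how three aligned, single-filter exposures could be combined into one false-color picture. It is a minimal illustration only: the filenames, the filter-to-channel mapping, and the simple percentile stretch are assumptions for this example, not part of any official STScI pipeline.

```python
# Illustrative sketch: build a false-color composite from three aligned,
# single-filter FITS exposures. Filenames and filter choices are hypothetical.
import numpy as np
from astropy.io import fits

def stretch(data, pct=99.5):
    """Clip at the given percentile and rescale to the 0..1 display range."""
    top = np.percentile(data, pct)
    return np.clip(data, 0.0, top) / top

# One monochrome image per filter; longer wavelengths map to redder channels.
r = stretch(fits.getdata("f814w.fits"))  # near-infrared filter -> red
g = stretch(fits.getdata("f555w.fits"))  # visual filter        -> green
b = stretch(fits.getdata("f435w.fits"))  # blue filter          -> blue

rgb = np.dstack([r, g, b])               # shape (ny, nx, 3), values in 0..1

import matplotlib.pyplot as plt
plt.imshow(rgb, origin="lower")
plt.axis("off")
plt.savefig("composite.png", dpi=150, bbox_inches="tight")
```

Real releases use more careful alignment, cosmic-ray rejection, and nonlinear stretches, but the principle is the same: one grayscale frame per filter, mapped to a display color by wavelength.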
The Hubble Heritage Project processes and releases to the public a small selection of the most striking images in JPEG and TIFF formats. Pipeline reduction Astronomical data taken with CCDs must undergo several calibration steps before they are suitable for astronomical analysis. STScI has developed sophisticated software that automatically calibrates data when they are requested from the archive, using the best calibration files available. This 'on-the-fly' processing means large data requests can take a day or more to be processed and returned. The process by which data is calibrated automatically is known as 'pipeline reduction', and is increasingly common at major observatories. Astronomers may, if they wish, retrieve the calibration files themselves and run the pipeline reduction software locally. This may be desirable when calibration files other than those selected automatically need to be used. Data analysis Hubble data can be analyzed using many different packages. STScI maintains the custom-made Space Telescope Science Data Analysis System (STSDAS) software, which contains all the programs needed to run pipeline reduction on raw data files, as well as many other astronomical image processing tools tailored to the requirements of Hubble data. The software runs as a module of IRAF, a popular astronomical data reduction program. Outreach activities NASA considered it important for the Space Telescope to capture the public's imagination, given the considerable contribution of taxpayers to its construction and operational costs. After the difficult early years, when the faulty mirror severely dented Hubble's reputation with the public, the first servicing mission allowed its rehabilitation, as the corrected optics produced numerous remarkable images. Several initiatives have helped to keep the public informed about Hubble activities. In the United States, outreach efforts are coordinated by the Space Telescope Science Institute (STScI) Office for Public Outreach, which was established in 2000 to ensure that U.S. taxpayers saw the benefits of their investment in the space telescope program. To that end, STScI operates the HubbleSite.org website. The Hubble Heritage Project, operating out of the STScI, provides the public with high-quality images of the most interesting and striking objects observed. The Heritage team is composed of amateur and professional astronomers, as well as people with backgrounds outside astronomy, and emphasizes the aesthetic nature of Hubble images. The Heritage Project is granted a small amount of time to observe objects which, for scientific reasons, may not have images taken at enough wavelengths to construct a full-color image. Since 1999, the leading Hubble outreach group in Europe has been the Hubble European Space Agency Information Centre (HEIC). This office was established at the Space Telescope European Coordinating Facility in Munich, Germany. HEIC's mission is to fulfill HST outreach and education tasks for the European Space Agency. The work is centered on the production of news and photo releases that highlight interesting Hubble results and images. These are often European in origin, and so increase awareness of both ESA's Hubble share (15%) and the contribution of European scientists to the observatory. ESA produces educational material, including a videocast series called Hubblecast, designed to share world-class scientific news with the public.
The Hubble Space Telescope has won two Space Achievement Awards from the Space Foundation, for its outreach activities, in 2001 and 2010. A replica of the Hubble Space Telescope is displayed on the courthouse lawn in Marshfield, Missouri, the hometown of namesake Edwin P. Hubble. Celebration images The Hubble Space Telescope celebrated its 20th anniversary in space on April 24, 2010. To commemorate the occasion, NASA, ESA, and the Space Telescope Science Institute (STScI) released an image of the Carina Nebula. To commemorate Hubble's 25th anniversary in space on April 24, 2015, STScI released images of the Westerlund 2 cluster, located about 20,000 light-years away in the constellation Carina, through its Hubble 25 website. The European Space Agency created a dedicated 25th anniversary page on its website. In April 2016, a special celebratory image of the Bubble Nebula was released for Hubble's 26th "birthday". Equipment failures Gyroscope rotation sensors HST uses gyroscopes to detect and measure any rotations so it can stabilize itself in orbit and point accurately and steadily at astronomical targets. HST has six of these rate-sensing gyroscopes installed. Three gyroscopes are normally required for operation; observations are still possible with two or one, but the area of sky that can be viewed would be somewhat restricted, and observations requiring very accurate pointing are more difficult. In 2018, the plan was to drop into one-gyroscope mode if fewer than three gyroscopes remained operational. The gyroscopes are part of the Pointing Control System, which uses five types of sensors (among them magnetic sensors, optical sensors, and the gyroscopes) and two types of actuators (reaction wheels and magnetic torquers). After the Columbia disaster in 2003, it was unclear whether another servicing mission would be possible, and gyroscope life became a concern again, so engineers developed new software for two-gyroscope and one-gyroscope modes to maximize the potential lifetime. The development was successful, and in 2005, it was decided to switch to two-gyroscope mode for regular telescope operations as a means of extending the lifetime of the mission. The switch to this mode was made in August 2005, leaving Hubble with two gyroscopes in use, two on backup, and two inoperable. One more gyroscope failed in 2007. By the time of the final repair mission in May 2009, during which all six gyroscopes were replaced (with two new pairs and one refurbished pair), only three were still working. Engineers determined that the gyroscope failures were caused by corrosion of the electric wires powering the motor, initiated by the oxygen-pressurized air used to deliver the thick suspending fluid. The new gyroscope models were assembled using pressurized nitrogen and were expected to be much more reliable. After the 2009 servicing mission, in which all six gyroscopes were replaced, only three gyroscopes failed over almost ten years, and only after exceeding the average expected run time for the design. Of the six gyroscopes replaced in 2009, three were of the old design susceptible to flex-lead failure, and three were of the new design with a longer expected lifetime. The first of the old-style gyroscopes failed in March 2014, and the second in April 2018. On October 5, 2018, the last of the old-style gyroscopes failed, and one of the new-style gyroscopes was powered up from standby.
However, that reserve gyroscope did not immediately perform within operational limits, and so the observatory was placed into "safe" mode while scientists attempted to fix the problem. NASA tweeted on October 22, 2018, that the "rotation rates produced by the backup gyro have reduced and are now within a normal range. Additional tests to be performed to ensure Hubble can return to science operations with this gyro." The solution that restored the backup new-style gyroscope to operational range was widely reported as "turning it off and on again". A "running restart" of the gyroscope was performed, but this had no effect, and the final resolution to the failure was more complex. The failure was attributed to an inconsistency in the fluid surrounding the float within the gyroscope (e.g., an air bubble). On October 18, 2018, the Hubble Operations Team directed the spacecraft into a series of maneuvers, moving the spacecraft in opposite directions, in order to mitigate the inconsistency. Only after these maneuvers, and a subsequent set of maneuvers on October 19, did the gyroscope truly operate within its normal range. Instruments and electronics Past servicing missions have exchanged old instruments for new ones, avoiding failure and making new types of science possible. Without servicing missions, all the instruments will eventually fail. In August 2004, the power system of the Space Telescope Imaging Spectrograph (STIS) failed, rendering the instrument inoperable. The electronics had originally been fully redundant, but the first set of electronics failed in May 2001. This power supply was fixed during Servicing Mission 4 in May 2009. Similarly, the Advanced Camera for Surveys (ACS) main camera primary electronics failed in June 2006, and the power supply for the backup electronics failed on January 27, 2007. Only the instrument's Solar Blind Channel (SBC) was operable using the side-1 electronics. A new power supply for the wide-angle channel was added during SM 4, but quick tests revealed this did not help the high-resolution channel. The Wide Field Channel (WFC) was returned to service by STS-125 in May 2009, but the High Resolution Channel (HRC) remains offline. On January 8, 2019, Hubble entered a partial safe mode following suspected hardware problems in its most advanced instrument, the Wide Field Camera 3. NASA later reported that the cause of the safe mode was the detection of voltage levels within the instrument outside the defined range. On January 15, 2019, NASA said the cause of the failure was a software problem: engineering data within the telemetry circuits were not accurate. In addition, all other telemetry within those circuits also contained erroneous values, indicating that this was a telemetry issue and not a power supply issue. After the telemetry circuits and associated boards were reset, the instrument began functioning again. On January 17, 2019, the device was returned to normal operation, and on the same day it completed its first science observations. 2021 power control issue On June 13, 2021, Hubble's payload computer halted due to a suspected issue with a memory module. An attempt to restart the computer on June 14 failed. Further attempts to switch to one of three other backup memory modules on board the spacecraft failed on June 18. On June 23 and 24, NASA engineers switched Hubble to a backup payload computer, but these operations failed as well, with the same error.
On June 28, 2021, NASA announced that it was extending the investigation to other components. Scientific operations were suspended while NASA worked to diagnose and resolve the issue. After identifying a malfunctioning power control unit (PCU) supplying power to one of Hubble's computers, NASA was able to switch to a backup PCU and bring Hubble back to operational mode on July 16. On October 23, 2021, HST instruments reported missing synchronization messages and went into safe mode. By December 8, 2021, NASA had restored full science operations and was developing updates to make the instruments more resilient to missing synchronization messages. Future Orbital decay and controlled reentry Hubble orbits the Earth in the extremely tenuous upper atmosphere, and over time its orbit decays due to drag. If not reboosted, it will re-enter the Earth's atmosphere within a few decades, with the exact date depending on how active the Sun is and its effect on the upper atmosphere. If Hubble were to descend in a completely uncontrolled re-entry, parts of the main mirror and its support structure would probably survive, leaving the potential for damage or even human fatalities. In 2013, deputy project manager James Jeletic projected that Hubble could survive into the 2020s. Depending on solar activity and the resulting atmospheric drag, a natural atmospheric reentry for Hubble is expected to occur between 2028 and 2040. In June 2016, NASA extended the service contract for Hubble until June 2021; in November 2021, it extended the contract again, until June 2026. NASA's original plan for safely de-orbiting Hubble was to retrieve it using a Space Shuttle, after which Hubble would most likely have been displayed in the Smithsonian Institution. This is no longer possible since the Space Shuttle fleet has been retired, and it would have been unlikely in any case due to the cost of the mission and the risk to the crew. Instead, NASA considered adding an external propulsion module to allow controlled re-entry. Ultimately, in 2009, as part of Servicing Mission 4, the last servicing mission by the Space Shuttle, NASA installed the Soft Capture Mechanism (SCM) to enable deorbit by either a crewed or robotic mission. The SCM, together with the Relative Navigation System (RNS) mounted on the Shuttle to collect data to "enable NASA to pursue numerous options for the safe de-orbit of Hubble", constitutes the Soft Capture and Rendezvous System (SCRS). Possible service missions In 2017, the Trump Administration was considering a proposal by the Sierra Nevada Corporation to use a crewed version of its Dream Chaser spacecraft to service Hubble some time in the 2020s, both to continue its scientific capabilities and as insurance against any malfunctions in the James Webb Space Telescope. In 2020, John Grunsfeld said that SpaceX Crew Dragon or Orion could perform another repair mission within ten years; while robotic technology was not yet sophisticated enough, he said, with another crewed visit delivering new gyros and instruments "We could keep Hubble going for another few decades". Billionaire private astronaut Jared Isaacman proposed to fund a servicing mission using SpaceX spacecraft. Though it might save NASA much money, SpaceX and NASA differed on the mission's risk. In September 2022, NASA and SpaceX signed a Space Act Agreement to investigate the possibility of launching a Crew Dragon mission to service and boost Hubble to a higher orbit, possibly extending its lifespan by another 20 years.
This mission could have been the second flight of the Polaris Program, but by June 2024 NASA had rejected a private servicing mission because of the potential for damage to the observatory. Successors There is no direct replacement for Hubble as an ultraviolet and visible-light space telescope, because near-term space telescopes do not duplicate Hubble's wavelength coverage (near-ultraviolet to near-infrared), instead concentrating on infrared bands further along the spectrum. These bands are preferred for studying high-redshift and low-temperature objects, objects generally older and farther away in the universe. These wavelengths are also difficult or impossible to study from the ground, justifying the expense of a space-based telescope. Large ground-based telescopes can image some of the same wavelengths as Hubble, sometimes challenge HST in terms of resolution by using adaptive optics (AO), have much larger light-gathering power, and can be upgraded more easily, but cannot yet match Hubble's excellent resolution over a wide field of view against the very dark background of space. Plans for a Hubble successor materialized as the Next Generation Space Telescope project, which culminated in plans for the James Webb Space Telescope (JWST), the formal successor of Hubble. Very different from a scaled-up Hubble, it is designed to operate colder and farther away from the Earth, at the L2 Lagrangian point, where thermal and optical interference from the Earth and Moon are lessened. It is not engineered to be fully serviceable (such as by having replaceable instruments), but the design includes a docking ring to enable visits from other spacecraft. A main scientific goal of JWST is to observe the most distant objects in the universe, beyond the reach of existing instruments; it is expected to detect stars from roughly 280 million years earlier in cosmic history than the stars HST now detects. The telescope has been an international collaboration between NASA, the European Space Agency, and the Canadian Space Agency since 1996, and was launched on December 25, 2021, on an Ariane 5 rocket. Although JWST is primarily an infrared instrument, its coverage extends down to 600 nm wavelength light, or roughly orange in the visible spectrum. A typical human eye can see to about 750 nm wavelength light, so there is some overlap with the longest visible wavelength bands, including orange and red light. A complementary telescope, looking at even longer wavelengths than Hubble or JWST, was the European Space Agency's Herschel Space Observatory, launched on May 14, 2009. Like JWST, Herschel was not designed to be serviced after launch. It had a mirror substantially larger than Hubble's, but observed only in the far infrared and submillimeter, and it needed helium coolant, which ran out on April 29, 2013. Further concepts for advanced 21st-century space telescopes include the Large Ultraviolet Optical Infrared Surveyor (LUVOIR), a conceptualized optical space telescope that, if realized, could be a more direct successor to HST, with the ability to observe and photograph astronomical objects in the visible, ultraviolet, and infrared wavelengths at substantially better resolution than Hubble or the Spitzer Space Telescope. The final planning report, prepared for the 2020 Astronomy and Astrophysics Decadal Survey, suggested a launch date of 2039. The Decadal Survey eventually recommended that ideas for LUVOIR be combined with the Habitable Exoplanet Observatory proposal to devise a new, 6-meter flagship telescope that could launch in the 2040s.
Existing ground-based telescopes, and various proposed Extremely Large Telescopes, can exceed the HST in terms of sheer light-gathering power and diffraction limit due to their larger mirrors, but other factors also affect the comparison. In some cases, they may be able to match or exceed Hubble in resolution by using adaptive optics (AO). However, AO on large ground-based reflectors will not make Hubble and other space telescopes obsolete. Most AO systems sharpen the view over a very narrow field: Lucky Cam, for example, produces crisp images just 10 to 20 arcseconds wide, whereas Hubble's cameras produce crisp images across a 150-arcsecond (2½-arcminute) field. Furthermore, space telescopes can study the universe across the entire electromagnetic spectrum, most of which is blocked by Earth's atmosphere. Finally, the background sky is darker in space than on the ground, because air absorbs solar energy during the day and then releases it at night, producing a faint, but nevertheless discernible, airglow that washes out low-contrast astronomical objects.
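The resolution comparisons in this section come down to the diffraction limit, theta ≈ 1.22 λ/D. As a quick numerical illustration, here is a sketch using Hubble's published 2.4 m aperture and a representative mid-visible wavelength; it is a back-of-the-envelope calculation, not a description of any specific instrument mode.

```python
# Sketch: Rayleigh diffraction limit, theta = 1.22 * lambda / D,
# evaluated for a 2.4 m aperture at a mid-visible wavelength.
import math

D = 2.4                # aperture diameter in meters (Hubble's primary mirror)
wavelength = 550e-9    # green light, in meters (an assumed representative value)

theta_rad = 1.22 * wavelength / D
theta_arcsec = math.degrees(theta_rad) * 3600.0

print(f"diffraction limit: {theta_arcsec:.3f} arcsec")  # about 0.06 arcsec
```

The result, roughly 0.06 arcseconds, shows why a modest 2.4 m mirror above the atmosphere can out-resolve much larger ground-based mirrors whose images are smeared by atmospheric turbulence unless adaptive optics intervenes.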
Geosynchronous orbit
A geosynchronous orbit (sometimes abbreviated GSO) is an Earth-centered orbit with an orbital period that matches Earth's rotation on its axis, 23 hours, 56 minutes, and 4 seconds (one sidereal day). The synchronization of rotation and orbital period means that, for an observer on Earth's surface, an object in geosynchronous orbit returns to exactly the same position in the sky after a period of one sidereal day. Over the course of a day, the object's position in the sky may remain still or trace out a path, typically in a figure-8 form, whose precise characteristics depend on the orbit's inclination and eccentricity. A circular geosynchronous orbit has a constant altitude of 35,786 km. A special case of geosynchronous orbit is the geostationary orbit (often abbreviated GEO), which is a circular geosynchronous orbit in Earth's equatorial plane with both inclination and eccentricity equal to 0. A satellite in a geostationary orbit remains in the same position in the sky to observers on the surface. Communications satellites are often given geostationary or close-to-geostationary orbits, so that the satellite antennas that communicate with them do not have to move but can be pointed permanently at the fixed location in the sky where the satellite appears. History In 1929, Herman Potočnik described both geosynchronous orbits in general and the special case of the geostationary Earth orbit in particular as useful orbits for space stations. The first appearance of a geosynchronous orbit in popular literature was in October 1942, in the first Venus Equilateral story by George O. Smith, but Smith did not go into details. British science fiction author Arthur C. Clarke popularised and expanded the concept in a 1945 paper entitled Extra-Terrestrial Relays – Can Rocket Stations Give Worldwide Radio Coverage?, published in Wireless World magazine. Clarke acknowledged the connection in his introduction to The Complete Venus Equilateral. The orbit, which Clarke first described as useful for broadcast and relay communications satellites, is sometimes called the Clarke Orbit. Similarly, the collection of artificial satellites in this orbit is known as the Clarke Belt. In technical terminology, geosynchronous orbits are often referred to as geostationary if they are roughly over the equator, but the terms are used somewhat interchangeably. Specifically, geosynchronous Earth orbit (GEO) may be a synonym for geosynchronous equatorial orbit, or geostationary Earth orbit. The first geosynchronous satellite was designed by Harold Rosen while he was working at Hughes Aircraft in 1959. Inspired by Sputnik 1, he wanted to use a geostationary (geosynchronous equatorial) satellite to globalise communications. Telecommunications between the US and Europe were then possible for just 136 people at a time, and relied on high-frequency radios and an undersea cable. Conventional wisdom at the time was that it would require too much rocket power to place a satellite in a geosynchronous orbit and that it would not survive long enough to justify the expense, so early efforts were put towards constellations of satellites in low or medium Earth orbit. The first of these were the passive Echo balloon satellites in 1960, followed by Telstar 1 in 1962. Although these projects had difficulties with signal strength and tracking that could be solved through geosynchronous satellites, the concept was seen as impractical, so Hughes often withheld funds and support.
By 1961, Rosen and his team had produced a cylindrical prototype that was light and small enough to be placed into orbit by then-available rocketry; it was spin-stabilised and used dipole antennas producing a pancake-shaped waveform. In August 1961, they were contracted to begin building the working satellite. They lost Syncom 1 to electronics failure, but Syncom 2 was successfully placed into a geosynchronous orbit in 1963. Although its inclined orbit still required moving antennas, it was able to relay TV transmissions, and allowed US President John F. Kennedy to phone Nigerian prime minister Abubakar Tafawa Balewa from a ship on August 23, 1963. Today there are hundreds of geosynchronous satellites providing remote sensing, navigation and communications. Although most populated land locations on the planet now have terrestrial communications facilities (microwave, fiber-optic), which often have latency and bandwidth advantages, and telephone access covering 96% of the population and internet access 90% as of 2018, some rural and remote areas in developed countries are still reliant on satellite communications. Types Geostationary orbit A geostationary equatorial orbit (GEO) is a circular geosynchronous orbit in the plane of the Earth's equator with a radius of approximately 42,164 km (measured from the center of the Earth). A satellite in such an orbit is at an altitude of approximately 35,786 km above mean sea level. It maintains the same position relative to the Earth's surface. If one could see a satellite in geostationary orbit, it would appear to hover at the same point in the sky, i.e., not exhibit diurnal motion, while the Sun, Moon, and stars would traverse the skies behind it. Such orbits are useful for telecommunications satellites. A perfectly stable geostationary orbit is an ideal that can only be approximated. In practice the satellite drifts out of this orbit because of perturbations such as the solar wind, radiation pressure, variations in the Earth's gravitational field, and the gravitational effect of the Moon and Sun, and thrusters are used to maintain the orbit in a process known as station-keeping. Eventually, without the use of thrusters, the orbit will become inclined, oscillating between 0° and 15° every 55 years. At the end of the satellite's lifetime, when fuel approaches depletion, satellite operators may decide to omit these expensive manoeuvres to correct inclination and control only eccentricity. This prolongs the lifetime of the satellite, as it consumes less fuel over time, but the satellite can then only be used by ground antennas capable of following the N-S movement. Geostationary satellites will also tend to drift around one of two stable longitudes, 75° and 255°, without station-keeping. Elliptical and inclined geosynchronous orbits Many objects in geosynchronous orbits have eccentric and/or inclined orbits. Eccentricity makes the orbit elliptical and makes it appear to oscillate E-W in the sky from the viewpoint of a ground station, while inclination tilts the orbit relative to the equator and makes it appear to oscillate N-S from a ground station. These effects combine to form an analemma (figure-8). Satellites in elliptical/eccentric orbits must be tracked by steerable ground stations. Tundra orbit The Tundra orbit is an eccentric geosynchronous orbit, which allows the satellite to spend most of its time dwelling over one high-latitude location.
It uses an inclination of 63.4°, a frozen-orbit value at which the argument of perigee does not drift, reducing the need for station-keeping. At least two satellites are needed to provide continuous coverage over an area. It was used by Sirius XM Satellite Radio to improve signal strength in the northern US and Canada. Quasi-zenith orbit The Quasi-Zenith Satellite System (QZSS) is a four-satellite system that operates in a geosynchronous orbit at an inclination of 42° and a 0.075 eccentricity. Each satellite dwells over Japan, allowing signals to reach receivers in urban canyons, then passes quickly over Australia. Launch Geosynchronous satellites are launched to the east into a prograde orbit that matches the rotation rate of the equator. The smallest inclination that a satellite can be launched into is that of the launch site's latitude, so launching the satellite from close to the equator limits the amount of inclination change needed later. Additionally, launching from close to the equator allows the speed of the Earth's rotation to give the satellite a boost. A launch site should have water or deserts to the east, so any failed rockets do not fall on a populated area. Most launch vehicles place geosynchronous satellites directly into a geosynchronous transfer orbit (GTO), an elliptical orbit with an apogee at GSO height and a low perigee. On-board satellite propulsion is then used to raise the perigee, circularise and reach GSO (a numerical sketch of this circularisation burn follows below). Once in a viable geostationary orbit, spacecraft can change their longitudinal position by adjusting their semi-major axis such that the new period is shorter or longer than a sidereal day, in order to effect an apparent eastward or westward "drift", respectively. Once at the desired longitude, the spacecraft's period is restored to geosynchronous. Proposed orbits Statite proposal A statite is a hypothetical satellite that uses radiation pressure from the Sun against a solar sail to modify its orbit. It would hold its location over the dark side of the Earth at a latitude of approximately 30 degrees. It would return to the same spot in the sky every 24 hours from an Earth-based viewer's perspective, and so would be functionally similar to a geosynchronous orbit. Space elevator A further form of geosynchronous orbit is the theoretical space elevator. If a mass orbiting above the geostationary belt is tethered to the Earth's surface and forced to maintain an orbital period of one sidereal day, the orbit requires more downward (centripetal) force than gravity alone supplies. The tether therefore becomes tensioned by the extra centripetal force required, and this tension is available to hoist objects up the tether structure. Retired satellites Geosynchronous satellites require some station-keeping in order to remain in position, and once they run out of thruster fuel and are no longer useful they are moved into a higher graveyard orbit. It is not feasible to deorbit geosynchronous satellites, because doing so would take far more fuel than slightly raising the orbit, and atmospheric drag is negligible, giving GSOs orbital lifetimes of thousands of years. The retirement process is becoming increasingly regulated, and satellites must have a 90% chance of moving over 200 km above the geostationary belt at end of life.
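As a rough illustration of the apogee burn described in the Launch section, here is a short, self-contained Python sketch (not part of the original article; the perigee altitude and the use of the vis-viva equation are my own assumptions) estimating the speed change needed to circularise from GTO into GSO:

```python
# Back-of-envelope estimate of the GTO-to-GSO circularisation burn.
import math

MU = 3.986004418e14          # Earth's gravitational parameter, m^3/s^2
r_perigee = 6678e3           # GTO perigee radius (~300 km altitude), m (assumed)
r_apogee = 42164e3           # GTO apogee radius = GSO radius, m

a_transfer = (r_perigee + r_apogee) / 2
# vis-viva equation: v^2 = mu * (2/r - 1/a)
v_apogee = math.sqrt(MU * (2 / r_apogee - 1 / a_transfer))
v_gso = math.sqrt(MU / r_apogee)    # circular orbit speed at GSO radius

print(f"speed at GTO apogee:  {v_apogee:.0f} m/s")          # ~1,600 m/s
print(f"GSO circular speed:   {v_gso:.0f} m/s")             # ~3,075 m/s
print(f"circularisation burn: {v_gso - v_apogee:.0f} m/s")  # ~1.5 km/s
```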
Space debris Space debris in geosynchronous orbits typically has a lower collision speed than at LEO, since most GSO satellites orbit in the same plane, altitude and speed; however, the presence of satellites in eccentric orbits allows for collisions at up to 4 km/s. Although a collision is comparatively unlikely, GSO satellites have a limited ability to avoid any debris. Debris less than 10 cm in diameter cannot be seen from the Earth, making it difficult to assess its prevalence. Despite efforts to reduce risk, spacecraft collisions have occurred. The European Space Agency telecom satellite Olympus-1 was struck by a meteoroid on August 11, 1993, and eventually moved to a graveyard orbit, and in 2006 the Russian Express-AM11 communications satellite was struck by an unknown object and rendered inoperable, although its engineers had enough contact time with the satellite to send it into a graveyard orbit. In 2017 both AMC-9 and Telkom-1 broke apart from an unknown cause. Properties A geosynchronous orbit has the following properties: Period: 1436 minutes (one sidereal day) Semi-major axis: 42,164 km Period All geosynchronous orbits have an orbital period equal to exactly one sidereal day. This means that the satellite will return to the same point above the Earth's surface every (sidereal) day, regardless of other orbital properties. This orbital period, T, is directly related to the semi-major axis of the orbit through the formula $T = 2\pi\sqrt{\frac{a^3}{\mu}}$, where $a$ is the length of the orbit's semi-major axis and $\mu$ is the standard gravitational parameter of the central body. Inclination A geosynchronous orbit can have any inclination. Satellites commonly have an inclination of zero, ensuring that the orbit remains over the equator at all times, making it stationary with respect to latitude from the point of view of a ground observer (and in the ECEF reference frame). Another popular inclination is 63.4° for a Tundra orbit, which ensures that the orbit's argument of perigee does not change over time. Ground track In the special case of a geostationary orbit, the ground track of a satellite is a single point on the equator. In the general case of a geosynchronous orbit with a non-zero inclination or eccentricity, the ground track is a more or less distorted figure-eight, returning to the same places once per sidereal day.
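To make the period formula concrete, the following Python sketch (my own illustration, not from the article) inverts $T = 2\pi\sqrt{a^3/\mu}$ for the geosynchronous semi-major axis and estimates the longitudinal drift produced by a small semi-major-axis offset, as used for station moving:

```python
import math

MU_EARTH = 3.986004418e14   # standard gravitational parameter, m^3/s^2
R_EARTH = 6378.137e3        # Earth's equatorial radius, m
T_SIDEREAL = 86164.0905     # one sidereal day, s

def semi_major_axis(period_s: float, mu: float = MU_EARTH) -> float:
    """Invert T = 2*pi*sqrt(a^3/mu) for the semi-major axis a."""
    return (mu * (period_s / (2 * math.pi)) ** 2) ** (1 / 3)

a_gso = semi_major_axis(T_SIDEREAL)
print(f"GSO semi-major axis: {a_gso/1e3:.0f} km")              # ~42,164 km
print(f"GSO altitude:        {(a_gso - R_EARTH)/1e3:.0f} km")  # ~35,786 km

# Station moving: a small semi-major-axis change da alters the period by
# dT/T = (3/2) da/a, so the ground longitude drifts ~360 deg * dT/T per day.
da = 10e3  # raise the orbit by 10 km
drift_deg_per_day = -360.0 * 1.5 * da / a_gso
print(f"Drift for +10 km: {drift_deg_per_day:.3f} deg/day (westward)")
```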
Physical sciences
Orbital mechanics
null
40282
https://en.wikipedia.org/wiki/Type%20theory
Type theory
In mathematics and theoretical computer science, a type theory is the formal presentation of a specific type system. Type theory is the academic study of type systems. Some type theories serve as alternatives to set theory as a foundation of mathematics. Two influential type theories that have been proposed as foundations are: Typed λ-calculus of Alonzo Church Intuitionistic type theory of Per Martin-Löf Most computerized proof-writing systems use a type theory for their foundation. A common one is Thierry Coquand's Calculus of Inductive Constructions. History Type theory was created to avoid a paradox in a mathematical equation based on naive set theory and formal logic. Russell's paradox (first described in Gottlob Frege's The Foundations of Arithmetic) is that, without proper axioms, it is possible to define the set of all sets that are not members of themselves; this set both contains itself and does not contain itself. Between 1902 and 1908, Bertrand Russell proposed various solutions to this problem. By 1908, Russell arrived at a ramified theory of types together with an axiom of reducibility, both of which appeared in Whitehead and Russell's Principia Mathematica published in 1910, 1912, and 1913. This system avoided contradictions suggested in Russell's paradox by creating a hierarchy of types and then assigning each concrete mathematical entity to a specific type. Entities of a given type were built exclusively of subtypes of that type, thus preventing an entity from being defined using itself. This resolution of Russell's paradox is similar to approaches taken in other formal systems, such as Zermelo-Fraenkel set theory. Type theory is particularly popular in conjunction with Alonzo Church's lambda calculus. One notable early example of type theory is Church's simply typed lambda calculus. Church's theory of types helped the formal system avoid the Kleene–Rosser paradox that afflicted the original untyped lambda calculus. Church demonstrated that it could serve as a foundation of mathematics and it was referred to as a higher-order logic. In the modern literature, "type theory" refers to a typed system based around lambda calculus. One influential system is Per Martin-Löf's intuitionistic type theory, which was proposed as a foundation for constructive mathematics. Another is Thierry Coquand's calculus of constructions, which is used as the foundation by Coq, Lean, and other computer proof assistants. Type theory is an active area of research, one direction being the development of homotopy type theory. Applications Mathematical foundations The first computer proof assistant, called Automath, used type theory to encode mathematics on a computer. Martin-Löf specifically developed intuitionistic type theory to encode all mathematics to serve as a new foundation for mathematics. There is ongoing research into mathematical foundations using homotopy type theory. Mathematicians working in category theory already had difficulty working with the widely accepted foundation of Zermelo–Fraenkel set theory. This led to proposals such as Lawvere's Elementary Theory of the Category of Sets (ETCS). Homotopy type theory continues in this line using type theory. Researchers are exploring connections between dependent types (especially the identity type) and algebraic topology (specifically homotopy). Proof assistants Much of the current research into type theory is driven by proof checkers, interactive proof assistants, and automated theorem provers. 
Most of these systems use a type theory as the mathematical foundation for encoding proofs, which is not surprising, given the close connection between type theory and programming languages: LF is used by Twelf, often to define other type theories; many type theories which fall under higher-order logic are used by the HOL family of provers and PVS; computational type theory is used by NuPRL; calculus of constructions and its derivatives are used by Coq, Matita, and Lean; UTT (Luo's Unified Theory of dependent Types) is used by Agda, which is both a programming language and a proof assistant. Many type theories are supported by LEGO and Isabelle. Isabelle also supports foundations besides type theories, such as ZFC. Mizar is an example of a proof system that only supports set theory. Programming languages Any static program analysis, such as the type checking algorithms in the semantic analysis phase of a compiler, has a connection to type theory. A prime example is Agda, a programming language which uses UTT (Luo's Unified Theory of dependent Types) for its type system. The programming language ML was developed for manipulating type theories (see LCF) and its own type system was heavily influenced by them. Linguistics Type theory is also widely used in formal theories of semantics of natural languages, especially Montague grammar and its descendants. In particular, categorial grammars and pregroup grammars extensively use type constructors to define the types (noun, verb, etc.) of words. The most common construction takes the basic types e and t for individuals and truth-values, respectively, and defines the set of types recursively as follows: if a and b are types, then so is ⟨a,b⟩; nothing except the basic types, and what can be constructed from them by means of the previous clause, are types. A complex type ⟨a,b⟩ is the type of functions from entities of type a to entities of type b. Thus one has types like ⟨e,t⟩ which are interpreted as elements of the set of functions from entities to truth-values, i.e. indicator functions of sets of entities. An expression of type ⟨⟨e,t⟩,t⟩ is a function from sets of entities to truth-values, i.e. a (indicator function of a) set of sets. This latter type is standardly taken to be the type of natural language quantifiers, like everybody or nobody (Montague 1973, Barwise and Cooper 1981). Type theory with records is a formal semantics representation framework, using records to express type theory types. It has been used in natural language processing, principally computational semantics and dialogue systems. Social sciences Gregory Bateson introduced a theory of logical types into the social sciences; his notions of double bind and logical levels are based on Russell's theory of types. Logic A type theory is a mathematical logic, which is to say it is a collection of rules of inference that result in judgments. Most logics have judgments asserting "The proposition is true", or "The formula is a well-formed formula". A type theory has judgments that define types and assign them to a collection of formal objects, known as terms. A term and its type are often written together as a : A. Terms A term in logic is recursively defined as a constant symbol, variable, or a function application, where a term is applied to another term. Constant symbols could include the natural number 0, the Boolean value true, and functions such as the successor function S and conditional operator if. Thus some terms could be 0, (S 0), (S (S 0)), and (if true 0 (S 0)).
Judgments Most type theories have 4 judgments: "A is a type" "a is a term of type A" "Type A is equal to type B" "Terms a and b, both of type A, are equal" Judgments may follow from assumptions. For example, one might say "assuming x is a term of type bool and y is a term of type nat, it follows that (if x y y) is a term of type nat". Such judgments are formally written with the turnstile symbol ⊢. If there are no assumptions, there will be nothing to the left of the turnstile. The list of assumptions on the left is the context of the judgment. Capital Greek letters, such as Γ and Δ, are common choices to represent some or all of the assumptions. The 4 different judgments are thus usually written as Γ ⊢ A type, Γ ⊢ a : A, Γ ⊢ A = B, and Γ ⊢ a = b : A. Some textbooks use a triple equal sign ≡ to stress that this is judgmental equality and thus an extrinsic notion of equality. The judgments enforce that every term has a type. The type will restrict which rules can be applied to a term. Rules of Inference A type theory's inference rules say what judgments can be made, based on the existence of other judgments. Rules are expressed as a Gentzen-style deduction using a horizontal line, with the required input judgments above the line and the resulting judgment below the line. For example, one such inference rule states a substitution rule for judgmental equality (a formal rendering is reconstructed after this section). The rules are syntactic and work by rewriting. The metavariables a, b, A, and B may actually consist of complex terms and types that contain many function applications, not just single symbols. To generate a particular judgment in type theory, there must be a rule to generate it, as well as rules to generate all of that rule's required inputs, and so on. The applied rules form a proof tree, where the top-most rules need no assumptions. One example of a rule that does not require any inputs is one that states the type of a constant term. For example, to assert that there is a term 0 of type nat, one would write ⊢ 0 : nat. Type inhabitation Generally, the desired conclusion of a proof in type theory is one of type inhabitation. The decision problem of type inhabitation is: Given a context Γ and a type A, decide whether there exists a term that can be assigned the type A in the type environment Γ. Girard's paradox shows that type inhabitation is strongly related to the consistency of a type system with Curry–Howard correspondence. To be sound, such a system must have uninhabited types. A type theory usually has several rules, including ones to: create a judgment (known as a context in this case) add an assumption to the context (context weakening) rearrange the assumptions use an assumption to create a variable define reflexivity, symmetry and transitivity for judgmental equality define substitution for application of lambda terms list all the interactions of equality, such as substitution define a hierarchy of type universes assert the existence of new types Also, for each "by rule" type, there are 4 different kinds of rules "type formation" rules say how to create the type "term introduction" rules define the canonical terms and constructor functions, like "pair" and "S". "term elimination" rules define the other functions like "first", "second", and "R". "computation" rules specify how computation is performed with the type-specific functions. For examples of rules, an interested reader may follow Appendix A.2 of the Homotopy Type Theory book, or read Martin-Löf's Intuitionistic Type Theory.
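The symbols in the judgment and rule notation above were lost in this copy; the LaTeX display below is a reconstruction in one common notation (the exact formulation of the substitution rule varies between presentations, and this particular form is an assumption):

```latex
% The four judgment forms, relative to a context \Gamma:
\Gamma \vdash A \ \mathsf{type} \qquad
\Gamma \vdash a : A \qquad
\Gamma \vdash A = B \ \mathsf{type} \qquad
\Gamma \vdash a = b : A

% One common form of a substitution rule for judgmental equality:
\frac{\Gamma \vdash a = b : A \qquad \Gamma, x : A \vdash c : C}
     {\Gamma \vdash c[a/x] = c[b/x] : C[a/x]}
```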
Connections to foundations The logical framework of a type theory bears a resemblance to intuitionistic, or constructive, logic. Formally, type theory is often cited as an implementation of the Brouwer–Heyting–Kolmogorov interpretation of intuitionistic logic. Additionally, connections can be made to category theory and computer programs. Intuitionistic logic When used as a foundation, certain types are interpreted to be propositions (statements that can be proven), and terms inhabiting the type are interpreted to be proofs of that proposition. When some types are interpreted as propositions, there is a set of common types that can be used to connect them to make a Boolean algebra out of types. However, the logic is not classical logic but intuitionistic logic, which is to say it does not have the law of excluded middle nor double negation. Under this intuitionistic interpretation, common types act as the logical operators: implication corresponds to the function type, conjunction to the product type, disjunction to the sum type, falsity to the empty type and truth to the unit type. Because the law of excluded middle does not hold, there is no term of type P ∨ ¬P. Likewise, double negation does not hold, so there is no term of type ¬¬P → P. It is possible to include the law of excluded middle and double negation into a type theory, by rule or assumption. However, terms may not compute down to canonical terms and it will interfere with the ability to determine if two terms are judgementally equal to each other. Constructive mathematics Per Martin-Löf proposed his intuitionistic type theory as a foundation for constructive mathematics. Constructive mathematics requires that when proving "there exists an x with property P(x)", one must construct a particular x and a proof that it has property P. In type theory, existence is accomplished using the dependent pair type Σ, and its proof requires a term of that type. An example of a non-constructive proof is proof by contradiction. The first step is assuming that x does not exist and refuting it by contradiction. The conclusion from that step is "it is not the case that x does not exist". The last step is, by double negation, concluding that x exists. Constructive mathematics does not allow the last step of removing the double negation to conclude that x exists. Most of the type theories proposed as foundations are constructive, and this includes most of the ones used by proof assistants. It is possible to add non-constructive features to a type theory, by rule or assumption. These include operators on continuations such as call with current continuation. However, these operators tend to break desirable properties such as canonicity and parametricity. Curry–Howard correspondence The Curry–Howard correspondence is the observed similarity between logics and programming languages. The implication in logic, "A → B", resembles a function from type "A" to type "B". For a variety of logics, the rules are similar to expressions in a programming language's types. The similarity goes farther, as applications of the rules resemble programs in the programming languages. Thus, the correspondence is often summarized as "proofs as programs". The opposition of terms and types can also be viewed as one of implementation and specification. By program synthesis, (the computational counterpart of) type inhabitation can be used to construct (all or parts of) programs from the specification given in the form of type information. Type inference Many programs that work with type theory (e.g., interactive theorem provers) also do type inferencing. It lets them select the rules that the user intends, with fewer actions by the user.
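The "proofs as programs" reading can be made concrete in an ordinary typed language. The small Python sketch below (my own illustration; all names are hypothetical) treats a value of type Callable[[A], B] as a proof of the implication A → B, so that function application is exactly modus ponens:

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def modus_ponens(proof_ab: Callable[[A], B], proof_a: A) -> B:
    # A proof of "A implies B" is a function A -> B; applying it to a
    # proof (term) of A yields a proof (term) of B.
    return proof_ab(proof_a)

def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    # Transitivity of implication: from A -> B and B -> C, obtain A -> C.
    return lambda a: g(f(a))

# Usage: int -> str and str -> int compose to int -> int.
show: Callable[[int], str] = str
length: Callable[[str], int] = len
print(modus_ponens(compose(show, length), 12345))  # 5
```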
Research areas Category theory Although the initial motivation for category theory was far removed from foundationalism, the two fields turned out to have deep connections. As John Lane Bell writes: "In fact categories can themselves be viewed as type theories of a certain kind; this fact alone indicates that type theory is much more closely related to category theory than it is to set theory." In brief, a category can be viewed as a type theory by regarding its objects as types (or sorts), i.e. "Roughly speaking, a category may be thought of as a type theory shorn of its syntax." A number of significant results follow in this way: cartesian closed categories correspond to the typed λ-calculus (Lambek, 1970); C-monoids (categories with products and exponentials and one non-terminal object) correspond to the untyped λ-calculus (observed independently by Lambek and Dana Scott around 1980); locally cartesian closed categories correspond to Martin-Löf type theories (Seely, 1984). The interplay, known as categorical logic, has been a subject of active research since then; see the monograph of Jacobs (1999) for instance. Homotopy type theory Homotopy type theory attempts to combine type theory and category theory. It focuses on equalities, especially equalities between types. Homotopy type theory differs from intuitionistic type theory mostly by its handling of the equality type. In 2016, cubical type theory was proposed, which is a homotopy type theory with normalization. Definitions Terms and types Atomic terms The most basic types are called atoms, and a term whose type is an atom is known as an atomic term. Common atomic terms included in type theories are natural numbers, often notated with the type nat, Boolean logic values (true/false), notated with the type bool, and formal variables, whose type may vary. For example, the following may be atomic terms: 0 : nat, true : bool, and x : nat. Function terms In addition to atomic terms, most modern type theories also allow for functions. Function types introduce an arrow symbol, and are defined inductively: if A and B are types, then the notation A → B is the type of a function which takes a parameter of type A and returns a term of type B. Types of this form are known as simple types. Some terms may be declared directly as having a simple type, such as the following term, add, which takes in two natural numbers in sequence and returns one natural number: add : nat → (nat → nat). Strictly speaking, a simple type only allows for one input and one output, so a more faithful reading of the above type is that add is a function which takes in a natural number and returns a function of the form nat → nat. The parentheses clarify that add does not have the type (nat → nat) → nat, which would be a function which takes in a function of natural numbers and returns a natural number. The convention is that the arrow is right associative, so the parentheses may be dropped from add's type. Lambda terms New function terms may be constructed using lambda expressions, and are called lambda terms. These terms are also defined inductively: a lambda term has the form λ v. t, where v is a formal variable and t is a term, and its type is notated A → B, where A is the type of v, and B is the type of t. The following lambda term represents a function which doubles an input natural number: λ x. (add x x). The variable is x and (implicit from the lambda term's type) it must have type nat. The term add x x has type nat, which is seen by applying the function application inference rule twice. Thus, the lambda term has type nat → nat, which means it is a function taking a natural number as an argument and returning a natural number.
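The typing of add and of the doubling function can be mechanised. Below is a minimal Python sketch (my own encoding of terms and types, not notation from the article) implementing the function application rule and lambda typing for a simply typed lambda calculus:

```python
from dataclasses import dataclass

# Types: a base type is a string such as "nat"; Arrow(a, b) is a -> b.
@dataclass(frozen=True)
class Arrow:
    arg: object
    result: object

@dataclass(frozen=True)
class Var:        # a variable reference
    name: str

@dataclass(frozen=True)
class Lam:        # a lambda term: \x: ty. body
    var: str
    var_type: object
    body: object

@dataclass(frozen=True)
class App:        # an application: fn arg
    fn: object
    arg: object

def infer(term, ctx):
    """Infer a term's type under context ctx (a dict of variable -> type)."""
    if isinstance(term, Var):
        return ctx[term.name]
    if isinstance(term, Lam):
        body_ty = infer(term.body, {**ctx, term.var: term.var_type})
        return Arrow(term.var_type, body_ty)
    if isinstance(term, App):
        fn_ty = infer(term.fn, ctx)
        arg_ty = infer(term.arg, ctx)
        # Function application rule: fn : A -> B and arg : A give fn arg : B.
        assert isinstance(fn_ty, Arrow) and fn_ty.arg == arg_ty, "type error"
        return fn_ty.result
    raise TypeError("unknown term")

# The doubling function \x: nat. add x x, assuming add : nat -> nat -> nat.
ctx = {"add": Arrow("nat", Arrow("nat", "nat"))}
double = Lam("x", "nat", App(App(Var("add"), Var("x")), Var("x")))
print(infer(double, ctx))   # Arrow(arg='nat', result='nat'), i.e. nat -> nat
```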
A lambda term is often referred to as an anonymous function because it lacks a name. The concept of anonymous functions appears in many programming languages. Inference Rules Function application The power of type theories is in specifying how terms may be combined by way of inference rules. Type theories which have functions also have the inference rule of function application: if s is a term of type A → B, and t is a term of type A, then the application of s to t, often written (s t), has type B. For example, if one knows the type notations 0 : nat, 1 : nat, and add : nat → (nat → nat), then the following type notations can be deduced from function application: (add 0) : nat → nat, ((add 0) 1) : nat, and ((add 1) ((add 0) 1)) : nat. Parentheses indicate the order of operations; however, by convention, function application is left associative, so parentheses can be dropped where appropriate. In the case of the three examples above, all parentheses could be omitted from the first two, and the third may be simplified to add 1 (add 0 1). Reductions Type theories that allow for lambda terms also include inference rules known as β-reduction and η-reduction. They generalize the notion of function application to lambda terms. Symbolically, they are written (λ v. t) s → t[v := s] (β-reduction), and λ v. (t v) → t, if v is not a free variable in t (η-reduction). The first reduction describes how to evaluate a lambda term: if a lambda expression (λ v. t) is applied to a term s, one replaces every occurrence of v in t with s. The second reduction makes explicit the relationship between lambda expressions and function types: if λ v. (t v) is a lambda term, then it must be that t is a function term because it is being applied to v. Therefore, the lambda expression is equivalent to just t, as both take in one argument and apply t to it. For example, the term (λ x. (add x x)) 2 may be β-reduced to add 2 2. In type theories that also establish notions of equality for types and terms, there are corresponding inference rules of β-equality and η-equality. Common terms and types Empty type The empty type has no terms. The type is usually written ⊥ or 𝟘. One use for the empty type is proofs of type inhabitation. If for a type A, it is consistent to derive a function of type A → ⊥, then A is uninhabited, which is to say it has no terms. Unit type The unit type has exactly 1 canonical term. The type is written ⊤ or 𝟙 and the single canonical term is written ∗. The unit type is also used in proofs of type inhabitation. If for a type A, it is consistent to derive a function of type 𝟙 → A, then A is inhabited, which is to say it must have one or more terms. Boolean type The Boolean type has exactly 2 canonical terms. The type is usually written bool or 𝔹 or 𝟚. The canonical terms are usually true and false. Natural numbers Natural numbers are usually implemented in the style of Peano Arithmetic. There is a canonical term, 0, for zero. Canonical values larger than zero use iterated applications of a successor function S, as in S 0, S (S 0), and so on. Dependent typing Some type theories allow for types of complex terms, such as functions or lists, to depend on the types of their arguments. For example, a type theory could have the dependent type List A, which should correspond to lists of terms, where each term must have type A. In this case, List has the type U → U, where U denotes the universe of all types in the theory. Some theories also permit types to be dependent on terms instead of types. For example, a theory could have the type Vector n, where n is a term of type nat encoding the length of the vector. This allows for greater specificity and type safety: functions with vector length restrictions or length matching requirements, such as the dot product, can encode this requirement as part of the type.
There are foundational issues that can arise from dependent types if a theory is not careful about what dependencies are allowed, such as Girard's Paradox. The logician Henk Barendregt introduced the lambda cube as a framework for studying various restrictions and levels of dependent typing. Product type The product type depends on two types, and its terms are commonly written as ordered pairs (a, b) or with the symbol ×. The pair (a, b) has the product type A × B, where A is the type of a and B is the type of b. The product type is usually defined with eliminator functions first and second: first (a, b) returns a, and second (a, b) returns b. Besides ordered pairs, this type is used for the concepts of logical conjunction and intersection. Sum type The sum type depends on two types, and it is commonly written with the symbol + or ⊔. In programming languages, sum types may be referred to as tagged unions. The type A + B is usually defined with constructors left and right, which are injective, and an eliminator function match, such that match applied to left a dispatches to a handler for terms of type A, and match applied to right b dispatches to a handler for terms of type B. The sum type is used for the concepts of logical disjunction and union. Dependent products and sums Two common type dependencies, dependent product and dependent sum types, allow for the theory to encode BHK intuitionistic logic by acting as equivalents to universal and existential quantification; this is formalized by the Curry–Howard correspondence. As they also connect to products and sums in set theory, they are often written with the symbols Π and Σ, respectively. Dependent product and sum types commonly appear in function types and are frequently incorporated in programming languages. For example, consider a function (call it append) which takes in a List A and a term of type A, and returns the list with the element at the end. The type annotation of such a function would be append : Π(A : U). List A → A → List A, which can be read as "for any type A, pass in a List A and an A, and return a List A". Sum types are seen in dependent pairs, where the second type depends on the value of the first term. This arises naturally in computer science where functions may return different types of outputs based on the input. For example, the Boolean type is usually defined with an eliminator function if, which takes three arguments and behaves as follows: (if true a b) returns a, and (if false a b) returns b. The return type of this function depends on its input. If the type theory allows for dependent types, then it is possible to define a function if such that, for a : A and b : B, (if true a b) returns a and (if false a b) returns b; the type of if is then written with a return type computed from the Boolean argument by a type-level conditional. Identity type Following the notion of the Curry–Howard correspondence, the identity type is a type introduced to mirror propositional equivalence, as opposed to the judgmental (syntactic) equivalence that type theory already provides. An identity type requires two terms of the same type and is written with the symbol =. For example, if a and b are terms, then a = b is a possible type. Canonical terms are created with a reflexivity function, refl: for a term t, the call refl t returns the canonical term inhabiting the type t = t. The complexities of equality in type theory make it an active research topic; homotopy type theory is a notable area of research that mainly deals with equality in type theory. Inductive types Inductive types are a general template for creating a large variety of types. In fact, all the types described above and more can be defined using the rules of inductive types. Two methods of generating inductive types are induction-recursion and induction-induction. A method that only uses lambda terms is Scott encoding.
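To ground the product and sum eliminators described above, here is a small Python sketch (the encodings and the names Pair, Left, Right, and match are my own, chosen to mirror the prose, not definitions from the article):

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar, Union

A = TypeVar("A"); B = TypeVar("B"); C = TypeVar("C")

@dataclass(frozen=True)
class Pair(Generic[A, B]):       # product type A x B
    fst: A
    snd: B

def first(p: Pair[A, B]) -> A:   # eliminator: first (a, b) = a
    return p.fst

def second(p: Pair[A, B]) -> B:  # eliminator: second (a, b) = b
    return p.snd

@dataclass(frozen=True)
class Left(Generic[A]):          # sum type constructor (injection from A)
    value: A

@dataclass(frozen=True)
class Right(Generic[B]):         # sum type constructor (injection from B)
    value: B

def match(s: Union[Left[A], Right[B]],
          on_left: Callable[[A], C],
          on_right: Callable[[B], C]) -> C:
    """Eliminator for the sum type: dispatch on the injected constructor."""
    return on_left(s.value) if isinstance(s, Left) else on_right(s.value)

print(first(Pair(1, True)))                  # 1
print(match(Left(3), lambda n: n + 1, len))  # 4
```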
Some proof assistants, such as Coq and Lean, are based on the calculus for inductive constructions, which is a calculus of constructions with inductive types. Differences from set theory The most commonly accepted foundation for mathematics is first-order logic with the language and axioms of Zermelo–Fraenkel set theory with the axiom of choice, abbreviated ZFC. Type theories having sufficient expressibility may also act as a foundation of mathematics. There are a number of differences between these two approaches. Set theory has both rules and axioms, while type theories only have rules. Type theories, in general, do not have axioms and are defined by their rules of inference. Classical set theory and logic have the law of excluded middle. When a type theory encodes the concepts of "and" and "or" as types, it leads to intuitionistic logic, and does not necessarily have the law of excluded middle. In set theory, an element is not restricted to one set. The element can appear in subsets and unions with other sets. In type theory, terms (generally) belong to only one type. Where a subset would be used, type theory can use a predicate function or use a dependently-typed product type, where each element x is paired with a proof that the subset's property holds for x. Where a union would be used, type theory uses the sum type, which contains new canonical terms. Type theory has a built-in notion of computation. Thus, "1+1" and "2" are different terms in type theory, but they compute to the same value. Moreover, functions are defined computationally as lambda terms. In set theory, "1+1=2" means that "1+1" is just another way to refer to the value "2". Type theory's computation does require a complicated concept of equality. Set theory encodes numbers as sets. Type theory can encode numbers as functions using Church encoding, or more naturally as inductive types, and the construction closely resembles Peano's axioms. In type theory, proofs are themselves terms inhabiting types, whereas in set theory, proofs are part of the underlying first-order logic. Proponents of type theory will also point out its connection to constructive mathematics through the BHK interpretation, its connection to logic by the Curry–Howard isomorphism, and its connections to category theory. Properties of type theories Terms usually belong to a single type. However, there are type theories that define "subtyping". Computation takes place by repeated application of rules. Many type theories are strongly normalizing, which means that any order of applying the rules will always end in the same result. However, some are not. In a normalizing type theory, the one-directional computation rules are called "reduction rules", and applying the rules "reduces" the term. If a rule is not one-directional, it is called a "conversion rule". Some combinations of types are equivalent to other combinations of types. When functions are considered "exponentiation", the combinations of types can be written similarly to algebraic identities. Thus, $A^{B+C} = A^B \times A^C$, $(A \times B)^C = A^C \times B^C$, $(A^B)^C = A^{B \times C}$, $A^1 = A$, and $A^0 = 1$. Axioms Most type theories do not have axioms. This is because a type theory is defined by its rules of inference. This is a source of confusion for people familiar with set theory, where a theory is defined by both the rules of inference for a logic (such as first-order logic) and axioms about sets. Sometimes, a type theory will add a few axioms. An axiom is a judgment that is accepted without a derivation using the rules of inference.
They are often added to ensure properties that cannot be added cleanly through the rules. Axioms can cause problems if they introduce terms without a way to compute on those terms. That is, axioms can interfere with the normalizing property of the type theory. Some commonly encountered axioms are: "Axiom K" ensures "uniqueness of identity proofs", that is, that every term of an identity type is equal to reflexivity. "Univalence Axiom" holds that equivalence of types is equality of types. The research into this property led to cubical type theory, where the property holds without needing an axiom. "Law of Excluded Middle" is often added to satisfy users who want classical logic, instead of intuitionistic logic. The Axiom of Choice does not need to be added to type theory, because in most type theories it can be derived from the rules of inference. This is because of the constructive nature of type theory, where proving that a value exists requires a method to compute the value. The Axiom of Choice is less powerful in type theory than most set theories, because type theory's functions must be computable and, being syntax-driven, the number of terms in a type must be countable. List of type theories Major Simply typed lambda calculus, which is a higher-order logic intuitionistic type theory system F LF, often used to define other type theories calculus of constructions and its derivatives Minor Automath ST type theory UTT (Luo's Unified Theory of dependent Types) some forms of combinatory logic others defined in the lambda cube (also known as pure type systems) others under the name typed lambda calculus Active research Homotopy type theory, which explores equality of types Cubical Type Theory, an implementation of homotopy type theory
Mathematics
Mathematical logic
null
40283
https://en.wikipedia.org/wiki/Melting%20point
Melting point
The melting point (or, rarely, liquefaction point) of a substance is the temperature at which it changes state from solid to liquid. At the melting point the solid and liquid phase exist in equilibrium. The melting point of a substance depends on pressure and is usually specified at a standard pressure such as 1 atmosphere or 100 kPa. When considered as the temperature of the reverse change from liquid to solid, it is referred to as the freezing point or crystallization point. Because of the ability of substances to supercool, the freezing point can easily appear to be below its actual value. When the "characteristic freezing point" of a substance is determined, in fact, the actual methodology is almost always "the principle of observing the disappearance rather than the formation of ice, that is, the melting point." Examples For most substances, melting and freezing points are approximately equal. For example, the melting and freezing points of mercury are −38.83 °C (−37.89 °F; 234.32 K). However, certain substances possess differing solid-liquid transition temperatures. For example, agar melts at 85 °C (185 °F) and solidifies from 32 to 40 °C (90 to 104 °F); such direction dependence is known as hysteresis. The melting point of ice at 1 atmosphere of pressure is very close to 0 °C (32 °F; 273.15 K); this is also known as the ice point. In the presence of nucleating substances, the freezing point of water is not always the same as the melting point. In the absence of nucleators water can exist as a supercooled liquid down to −48.3 °C (−54.9 °F; 224.8 K) before freezing. The metal with the highest melting point is tungsten, at 3,414 °C (6,177 °F; 3,687 K); this property makes tungsten excellent for use as electrical filaments in incandescent lamps. The often-cited carbon does not melt at ambient pressure but sublimes at about 3,700 °C (6,700 °F; 4,000 K); a liquid phase only exists above pressures of 10 MPa (99 atm) and an estimated 4,030–4,430 °C (7,290–8,010 °F; 4,300–4,700 K) (see carbon phase diagram). Hafnium carbonitride (HfCN) is a refractory compound with the highest known melting point of any substance to date and the only one confirmed to have a melting point above 4,000 °C (7,230 °F; 4,270 K) at ambient pressure. Quantum mechanical computer simulations predicted that this alloy (HfN0.38C0.51) would have a melting point of about 4,400 K. This prediction was later confirmed by experiment, though a precise measurement of its exact melting point has yet to be confirmed. At the other end of the scale, helium does not freeze at all at normal pressure, even at temperatures arbitrarily close to absolute zero; a pressure of more than twenty times normal atmospheric pressure is necessary. Melting point measurements Many laboratory techniques exist for the determination of melting points. A Kofler bench is a metal strip with a temperature gradient (range from room temperature to 300 °C). Any substance can be placed on a section of the strip, revealing its thermal behaviour at the temperature at that point. Differential scanning calorimetry gives information on melting point together with its enthalpy of fusion. A basic melting point apparatus for the analysis of crystalline solids consists of an oil bath with a transparent window (most basic design: a Thiele tube) and a simple magnifier. Several grains of a solid are placed in a thin glass tube and partially immersed in the oil bath. The oil bath is heated (and stirred) and, with the aid of the magnifier (and an external light source), melting of the individual crystals at a certain temperature can be observed. A metal block might be used instead of an oil bath. Some modern instruments have automatic optical detection. The measurement can also be made continuously in an operating process.
For instance, oil refineries measure the freeze point of diesel fuel "online", meaning that the sample is taken from the process and measured automatically. This allows for more frequent measurements as the sample does not have to be manually collected and taken to a remote laboratory. Techniques for refractory materials For refractory materials (e.g. platinum, tungsten, tantalum, some carbides and nitrides, etc.) the extremely high melting point (typically considered to be above, say, 1,800 °C) may be determined by heating the material in a black body furnace and measuring the black-body temperature with an optical pyrometer. For the highest melting materials, this may require extrapolation by several hundred degrees. The spectral radiance from an incandescent body is known to be a function of its temperature. An optical pyrometer matches the radiance of a body under study to the radiance of a source that has been previously calibrated as a function of temperature. In this way, the measurement of the absolute magnitude of the intensity of radiation is unnecessary. However, known temperatures must be used to determine the calibration of the pyrometer. For temperatures above the calibration range of the source, an extrapolation technique must be employed. This extrapolation is accomplished by using Planck's law of radiation. The constants in this equation are not known with sufficient accuracy, causing errors in the extrapolation to become larger at higher temperatures. However, standard techniques have been developed to perform this extrapolation. Consider the case of using gold as the source (mp = 1,063 °C). In this technique, the current through the filament of the pyrometer is adjusted until the light intensity of the filament matches that of a black-body at the melting point of gold. This establishes the primary calibration temperature and can be expressed in terms of current through the pyrometer lamp. With the same current setting, the pyrometer is sighted on another black-body at a higher temperature. An absorbing medium of known transmission is inserted between the pyrometer and this black-body. The temperature of the black-body is then adjusted until a match exists between its intensity and that of the pyrometer filament. The true higher temperature of the black-body is then determined from Planck's Law. The absorbing medium is then removed and the current through the filament is adjusted to match the filament intensity to that of the black-body. This establishes a second calibration point for the pyrometer. This step is repeated to carry the calibration to higher temperatures. Now, temperatures and their corresponding pyrometer filament currents are known and a curve of temperature versus current can be drawn. This curve can then be extrapolated to very high temperatures. In determining melting points of a refractory substance by this method, it is necessary to either have black body conditions or to know the emissivity of the material being measured. The containment of the high melting material in the liquid state may introduce experimental difficulties. Melting temperatures of some refractory metals have thus been measured by observing the radiation from a black body cavity in solid metal specimens that were much longer than they were wide. To form such a cavity, a hole is drilled perpendicular to the long axis at the center of a rod of the material. 
These rods are then heated by passing a very large current through them, and the radiation emitted from the hole is observed with an optical pyrometer. The point of melting is indicated by the darkening of the hole when the liquid phase appears, destroying the black body conditions. Today, containerless laser heating techniques, combined with fast pyrometers and spectro-pyrometers, are employed to allow for precise control of the time for which the sample is kept at extreme temperatures. Such experiments of sub-second duration address several of the challenges associated with more traditional melting point measurements made at very high temperatures, such as sample vaporization and reaction with the container. Thermodynamics For a solid to melt, heat is required to raise its temperature to the melting point. However, further heat needs to be supplied for the melting to take place: this is called the heat of fusion, and is an example of latent heat. From a thermodynamics point of view, at the melting point the change in Gibbs free energy (ΔG) of the material is zero, but the enthalpy (H) and the entropy (S) of the material are increasing (ΔH, ΔS > 0). Melting occurs when the Gibbs free energy of the liquid becomes lower than that of the solid for that material. At various pressures this happens at a specific temperature. It can also be shown that $T = \frac{\Delta H}{\Delta S}$, where T, ΔS and ΔH are respectively the temperature at the melting point, the change of entropy of melting and the change of enthalpy of melting. The melting point is sensitive to extremely large changes in pressure, but generally this sensitivity is orders of magnitude less than that for the boiling point, because the solid-liquid transition represents only a small change in volume. If, as observed in most cases, a substance is more dense in the solid than in the liquid state, the melting point will increase with increases in pressure. Otherwise the reverse behavior occurs. Notably, this is the case for water, but also for Si, Ge, Ga and Bi. With extremely large changes in pressure, substantial changes to the melting point are observed. For example, the melting point of silicon at ambient pressure (0.1 MPa) is 1415 °C, but at pressures in excess of 10 GPa it decreases to 1000 °C. Melting points are often used to characterize organic and inorganic compounds and to ascertain their purity. The melting point of a pure substance is always higher and has a smaller range than the melting point of an impure substance or, more generally, of mixtures. The higher the quantity of other components, the lower the melting point and the broader will be the melting point range, often referred to as the "pasty range". The temperature at which melting begins for a mixture is known as the solidus, while the temperature where melting is complete is called the liquidus. Eutectics are special types of mixtures that behave like single phases. They melt sharply at a constant temperature to form a liquid of the same composition. Alternatively, on cooling a liquid with the eutectic composition will solidify as uniformly dispersed, small (fine-grained) mixed crystals with the same composition. In contrast to crystalline solids, glasses do not possess a melting point; on heating they undergo a smooth glass transition into a viscous liquid. Upon further heating, they gradually soften, which can be characterized by certain softening points.
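As a quick numerical check of the relation T = ΔH/ΔS above, the following Python snippet uses standard handbook values for ice (my own choice of example, not from the article):

```python
# At the melting point dG = dH - T*dS = 0, so T = dH / dS.
dH_fus = 6010.0   # enthalpy of fusion of ice, J/mol
dS_fus = 22.0     # entropy of fusion of ice, J/(mol*K)

T_melt = dH_fus / dS_fus
print(f"Predicted melting point of ice: {T_melt:.0f} K")  # ~273 K (0 degC)
```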
Freezing-point depression The freezing point of a solvent is depressed when another compound is added, meaning that a solution has a lower freezing point than a pure solvent. This phenomenon is used in technical applications to avoid freezing, for instance by adding salt or ethylene glycol to water. Carnelley's rule In organic chemistry, Carnelley's rule, established in 1882 by Thomas Carnelley, states that high molecular symmetry is associated with high melting point. Carnelley based his rule on examination of 15,000 chemical compounds. For example, for three structural isomers with molecular formula C5H12, the melting point increases in the series isopentane, −160 °C (113 K); n-pentane, −129.8 °C (143 K); and neopentane, −16.4 °C (256.8 K). Likewise in xylenes and also dichlorobenzenes the melting point increases in the order meta, ortho and then para. Pyridine has a lower symmetry than benzene, hence its lower melting point, but the melting point again increases with diazine and triazines. Many cage-like compounds like adamantane and cubane with high symmetry have relatively high melting points. A high melting point results from a high heat of fusion, a low entropy of fusion, or a combination of both. In highly symmetrical molecules the crystal phase is densely packed with many efficient intermolecular interactions resulting in a higher enthalpy change on melting. Predicting the melting point of substances (Lindemann's criterion) An attempt to predict the bulk melting point of crystalline materials was first made in 1910 by Frederick Lindemann. The idea behind the theory was the observation that the average amplitude of thermal vibrations increases with increasing temperature. Melting initiates when the amplitude of vibration becomes large enough for adjacent atoms to partly occupy the same space. The Lindemann criterion states that melting is expected when the vibration root mean square amplitude exceeds a threshold value. Assuming that all atoms in a crystal vibrate with the same frequency ν, the average thermal energy can be estimated using the equipartition theorem as $E = 4\pi^2 m \nu^2 u^2 = k_B T$, where m is the atomic mass, ν is the frequency, u is the average vibration amplitude, kB is the Boltzmann constant, and T is the absolute temperature. If the threshold value of u² is c²a², where c is the Lindemann constant and a is the atomic spacing, then the melting point is estimated as $T_m = \frac{4\pi^2 m \nu^2 c^2 a^2}{k_B}$. Several other expressions for the estimated melting temperature can be obtained depending on the estimate of the average thermal energy; another commonly used expression for the Lindemann criterion, based on the peak vibration amplitude rather than the mean square displacement, differs from the above by a factor of two, $T_m = \frac{2\pi^2 m \nu^2 c^2 a^2}{k_B}$. From the expression for the Debye frequency for ν, $\nu = \frac{k_B \theta_D}{h}$, the first estimate becomes $T_m = \frac{4\pi^2 m c^2 a^2 k_B \theta_D^2}{h^2}$, where θD is the Debye temperature and h is the Planck constant. Values of c range from 0.15 to 0.3 for most materials. Databases and automated prediction In February 2011, Alfa Aesar released over 10,000 melting points of compounds from their catalog as open data, and similar data has been mined from patents. The Alfa Aesar and patent data have been summarized in (respectively) random forest and support vector machine models. Melting point of the elements
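Following on the Lindemann section above, here is a Python sketch of the Debye-frequency form of the estimate (the copper constants and the sampled values of the Lindemann parameter are my own illustrative assumptions; this single-frequency model is only order-of-magnitude and is very sensitive to the choice of c):

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J*s
AMU = 1.66053907e-27  # atomic mass unit, kg

m = 63.546 * AMU      # atomic mass of copper, kg (assumed example material)
a = 2.55e-10          # nearest-neighbour spacing of copper, m (assumed)
theta_D = 343.0       # Debye temperature of copper, K (assumed)

def lindemann_tm(c: float) -> float:
    """T_m = 4*pi^2*m*c^2*a^2*k_B*theta_D^2 / h^2 for Lindemann constant c."""
    return 4 * math.pi**2 * m * c**2 * a**2 * K_B * theta_D**2 / H**2

# The estimate scales as c**2, so the (empirical) choice of c dominates:
for c in (0.05, 0.10, 0.20):
    print(f"c = {c:.2f}: T_m ~ {lindemann_tm(c):,.0f} K")
```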
Physical sciences
Phase transitions
Physics
40310
https://en.wikipedia.org/wiki/Magnetohydrodynamics
Magnetohydrodynamics
In physics and engineering, magnetohydrodynamics (MHD; also called magneto-fluid dynamics or hydromagnetics) is a model of electrically conducting fluids that treats all interpenetrating particle species together as a single continuous medium. It is primarily concerned with the low-frequency, large-scale, magnetic behavior in plasmas and liquid metals and has applications in multiple fields including space physics, geophysics, astrophysics, and engineering. The word magnetohydrodynamics is derived from magneto- meaning magnetic field, hydro- meaning water, and dynamics meaning movement. The field of MHD was initiated by Hannes Alfvén, for which he received the Nobel Prize in Physics in 1970. History The MHD description of electrically conducting fluids was first developed by Hannes Alfvén in a 1942 paper published in Nature titled "Existence of Electromagnetic–Hydrodynamic Waves" which outlined his discovery of what are now referred to as Alfvén waves. Alfvén initially referred to these waves as "electromagnetic–hydrodynamic waves"; however, in a later paper he noted, "As the term 'electromagnetic–hydrodynamic waves' is somewhat complicated, it may be convenient to call this phenomenon 'magneto–hydrodynamic' waves." Equations In MHD, motion in the fluid is described using linear combinations of the mean motions of the individual species: the current density $\mathbf{J}$ and the center of mass velocity $\mathbf{v}$. In a given fluid, each species $\sigma$ has a number density $n_\sigma$, mass $m_\sigma$, electric charge $q_\sigma$, and a mean velocity $\mathbf{u}_\sigma$. The fluid's total mass density is then $\rho = \sum_\sigma m_\sigma n_\sigma$, and the motion of the fluid can be described by the current density expressed as $\mathbf{J} = \sum_\sigma q_\sigma n_\sigma \mathbf{u}_\sigma$ and the center of mass velocity expressed as $\mathbf{v} = \frac{1}{\rho}\sum_\sigma m_\sigma n_\sigma \mathbf{u}_\sigma$. MHD can be described by a set of equations consisting of a continuity equation, an equation of motion, an equation of state, Ampère's law, Faraday's law, and Ohm's law. As with any fluid description of a kinetic system, a closure approximation must be applied to the highest moment of the particle distribution equation. This is often accomplished with approximations to the heat flux through a condition of adiabaticity or isothermality. In the adiabatic limit, that is, the assumption of an isotropic pressure and isotropic temperature, a fluid with an adiabatic index $\gamma$, electrical resistivity $\eta$, magnetic field $\mathbf{B}$, and electric field $\mathbf{E}$ can be described by the continuity equation $\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0$, the equation of state $\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{p}{\rho^\gamma}\right) = 0$, the equation of motion $\rho\left(\frac{\partial}{\partial t} + \mathbf{v}\cdot\nabla\right)\mathbf{v} = \mathbf{J}\times\mathbf{B} - \nabla p$, the low-frequency Ampère's law $\mu_0 \mathbf{J} = \nabla\times\mathbf{B}$, Faraday's law $\frac{\partial \mathbf{B}}{\partial t} = -\nabla\times\mathbf{E}$, and Ohm's law $\mathbf{E} + \mathbf{v}\times\mathbf{B} = \eta\mathbf{J}$. Taking the curl of this equation and using Ampère's law and Faraday's law results in the induction equation $\frac{\partial \mathbf{B}}{\partial t} = \nabla\times(\mathbf{v}\times\mathbf{B}) + \frac{\eta}{\mu_0}\nabla^2\mathbf{B}$, where $\frac{\eta}{\mu_0}$ is the magnetic diffusivity. In the equation of motion, the Lorentz force term $\mathbf{J}\times\mathbf{B}$ can be expanded using Ampère's law and a vector calculus identity to give $\mathbf{J}\times\mathbf{B} = \frac{(\mathbf{B}\cdot\nabla)\mathbf{B}}{\mu_0} - \nabla\left(\frac{B^2}{2\mu_0}\right)$, where the first term on the right hand side is the magnetic tension force and the second term is the magnetic pressure force. Ideal MHD The simplest form of MHD, ideal MHD, assumes that the resistive term in Ohm's law is small relative to the other terms such that it can be taken to be equal to zero. This occurs in the limit of large magnetic Reynolds numbers during which magnetic induction dominates over magnetic diffusion at the velocity and length scales under consideration. Consequently, processes in ideal MHD that convert magnetic energy into kinetic energy, referred to as ideal processes, cannot generate heat and raise entropy.
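As a quick numerical illustration of the induction equation's two regimes, the following Python sketch (solar-wind-like parameter values assumed by me for illustration, not taken from the article) computes the Alfvén speed, the magnetic diffusivity η/μ₀, and the magnetic Reynolds number that controls the ideal-MHD limit:

```python
import math

MU0 = 4e-7 * math.pi      # vacuum permeability, H/m
M_P = 1.6726e-27          # proton mass, kg

B = 5e-9                  # magnetic field, T (~5 nT, solar wind at 1 AU)
n = 5e6                   # proton number density, m^-3 (assumed)
rho = n * M_P             # mass density, kg/m^3
sigma = 1e4               # electrical conductivity, S/m (assumed)
L = 1e9                   # length scale of interest, m
v = 4e5                   # flow speed, m/s (~400 km/s)

v_alfven = B / math.sqrt(MU0 * rho)   # Alfven speed from B and rho
eta_m = 1 / (MU0 * sigma)             # magnetic diffusivity eta/mu0, m^2/s
rm = v * L / eta_m                    # magnetic Reynolds number

print(f"Alfven speed:             {v_alfven/1e3:.1f} km/s")
print(f"Magnetic diffusivity:     {eta_m:.2e} m^2/s")
print(f"Magnetic Reynolds number: {rm:.2e}")  # >> 1: induction dominates diffusion
```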
A fundamental concept underlying ideal MHD is the frozen-in flux theorem, which states that the bulk fluid and embedded magnetic field are constrained to move together such that one can be said to be "tied" or "frozen" to the other. Therefore, any two points that move with the bulk fluid velocity and lie on the same magnetic field line will continue to lie on the same field line even as the points are advected by fluid flows in the system. The connection between the fluid and magnetic field fixes the topology of the magnetic field in the fluid; for example, if a set of magnetic field lines are tied into a knot, then they will remain so as long as the fluid has negligible resistivity. This difficulty in reconnecting magnetic field lines makes it possible to store energy by moving the fluid or the source of the magnetic field. The energy can then become available if the conditions for ideal MHD break down, allowing magnetic reconnection that releases the stored energy from the magnetic field. Ideal MHD equations In ideal MHD, the resistive term vanishes in Ohm's law, giving the ideal Ohm's law $\mathbf{E} + \mathbf{v}\times\mathbf{B} = 0$. Similarly, the magnetic diffusion term in the induction equation vanishes, giving the ideal induction equation $\frac{\partial \mathbf{B}}{\partial t} = \nabla\times(\mathbf{v}\times\mathbf{B})$. Applicability of ideal MHD to plasmas Ideal MHD is only strictly applicable when: The plasma is strongly collisional, so that the time scale of collisions is shorter than the other characteristic times in the system, and the particle distributions are therefore close to Maxwellian. The resistivity due to these collisions is small. In particular, the typical magnetic diffusion times over any scale length present in the system must be longer than any time scale of interest. The interest is in length scales much longer than the ion skin depth and Larmor radius perpendicular to the field, long enough along the field to ignore Landau damping, and time scales much longer than the ion gyration time (the system is smooth and slowly evolving). Importance of resistivity In an imperfectly conducting fluid the magnetic field can generally move through the fluid following a diffusion law, with the resistivity of the plasma serving as a diffusion constant. This means that solutions to the ideal MHD equations are only applicable for a limited time for a region of a given size before diffusion becomes too important to ignore. One can estimate the diffusion time across a solar active region (from collisional resistivity) to be hundreds to thousands of years, much longer than the actual lifetime of a sunspot, so it would seem reasonable to ignore the resistivity. By contrast, a meter-sized volume of seawater has a magnetic diffusion time measured in milliseconds. Even in physical systems (which are large and conductive enough that simple estimates of the Lundquist number suggest that the resistivity can be ignored) resistivity may still be important: many instabilities exist that can increase the effective resistivity of the plasma by factors of more than 10⁹. The enhanced resistivity is usually the result of the formation of small-scale structure like current sheets or fine-scale magnetic turbulence, introducing small spatial scales into the system over which ideal MHD is broken and magnetic diffusion can occur quickly. When this happens, magnetic reconnection may occur in the plasma to release stored magnetic energy as waves, bulk mechanical acceleration of material, particle acceleration, and heat.
Magnetic reconnection in highly conductive systems is important because it concentrates energy in time and space, so that gentle forces applied to a plasma for long periods of time can cause violent explosions and bursts of radiation. When the fluid cannot be considered as completely conductive, but the other conditions for ideal MHD are satisfied, it is possible to use an extended model called resistive MHD. This includes an extra term in Ohm's law which models the collisional resistivity. Generally MHD computer simulations are at least somewhat resistive because their computational grid introduces a numerical resistivity. Structures in MHD systems In many MHD systems most of the electric current is compressed into thin nearly-two-dimensional ribbons termed current sheets. These can divide the fluid into magnetic domains, inside of which the currents are relatively weak. Current sheets in the solar corona are thought to be between a few meters and a few kilometers in thickness, which is quite thin compared to the magnetic domains (which are thousands to hundreds of thousands of kilometers across). Another example is in the Earth's magnetosphere, where current sheets separate topologically distinct domains, isolating most of the Earth's ionosphere from the solar wind. Waves The wave modes derived using the MHD equations are called magnetohydrodynamic waves or MHD waves. There are three MHD wave modes that can be derived from the linearized ideal-MHD equations for a fluid with a uniform and constant magnetic field: Alfvén waves Slow magnetosonic waves Fast magnetosonic waves These modes have phase velocities that are independent of the magnitude of the wavevector, so they experience no dispersion. The phase velocity depends on the angle $\theta$ between the wave vector $\mathbf{k}$ and the magnetic field $\mathbf{B}$. An MHD wave propagating at an arbitrary angle $\theta$ with respect to the time-independent or bulk field $\mathbf{B}_0$ will satisfy the dispersion relation $\frac{\omega}{k} = v_A \cos\theta$, where $v_A = B_0/\sqrt{\mu_0 \rho}$ is the Alfvén speed. This branch corresponds to the shear Alfvén mode. Additionally the dispersion equation gives $\frac{\omega}{k} = \left(\frac{1}{2}\left(v_s^2 + v_A^2\right) \pm \frac{1}{2}\sqrt{\left(v_s^2 + v_A^2\right)^2 - 4 v_s^2 v_A^2 \cos^2\theta}\right)^{1/2}$, where $v_s$ is the ideal gas speed of sound. The plus branch corresponds to the fast-MHD wave mode and the minus branch corresponds to the slow-MHD wave mode. The MHD oscillations will be damped if the fluid is not perfectly conducting but has a finite conductivity, or if viscous effects are present. MHD waves and oscillations are a popular tool for the remote diagnostics of laboratory and astrophysical plasmas, for example, the corona of the Sun (coronal seismology). Extensions Resistive Resistive MHD describes magnetized fluids with finite electron diffusivity ($\eta \neq 0$). This diffusivity leads to a breaking in the magnetic topology; magnetic field lines can 'reconnect' when they collide. Usually this term is small and reconnections can be handled by thinking of them as not dissimilar to shocks; this process has been shown to be important in the Earth–Solar magnetic interactions. Extended Extended MHD describes a class of phenomena in plasmas that are higher order than resistive MHD, but which can adequately be treated with a single fluid description. These include the effects of Hall physics, electron pressure gradients, finite Larmor radii in the particle gyromotion, and electron inertia. Two-fluid Two-fluid MHD describes plasmas that include a non-negligible Hall electric field. As a result, the electron and ion momenta must be treated separately.
Extensions

Resistive
Resistive MHD describes magnetized fluids with finite electron diffusivity (η ≠ 0). This diffusivity leads to a breaking in the magnetic topology; magnetic field lines can 'reconnect' when they collide. Usually this term is small and reconnections can be handled by thinking of them as not dissimilar to shocks; this process has been shown to be important in the Earth–solar magnetic interactions.

Extended
Extended MHD describes a class of phenomena in plasmas that are higher order than resistive MHD, but which can adequately be treated with a single-fluid description. These include the effects of Hall physics, electron pressure gradients, finite Larmor radii in the particle gyromotion, and electron inertia.

Two-fluid
Two-fluid MHD describes plasmas that include a non-negligible Hall electric field. As a result, the electron and ion momenta must be treated separately. This description is more closely tied to Maxwell's equations, as an evolution equation for the electric field exists.

Hall
In 1960, M. J. Lighthill criticized the applicability of ideal or resistive MHD theory for plasmas. His criticism concerned the neglect of the "Hall current term" in Ohm's law, a frequent simplification made in magnetic fusion theory. Hall magnetohydrodynamics (HMHD) takes into account this electric field description of magnetohydrodynamics, and Ohm's law takes the form

E + v × B = (J × B)/(n e),

where n is the electron number density and e is the elementary charge. The most important difference is that in the absence of field line breaking, the magnetic field is tied to the electrons and not to the bulk fluid.

Electron MHD
Electron magnetohydrodynamics (EMHD) describes small-scale plasmas in which electron motion is much faster than ion motion. The main effects are changes in conservation laws, additional resistivity, and the importance of electron inertia. Many effects of electron MHD are similar to effects of two-fluid MHD and Hall MHD. EMHD is especially important for z-pinches, magnetic reconnection, ion thrusters, neutron stars, and plasma switches.

Collisionless
MHD is also often used for collisionless plasmas. In that case the MHD equations are derived from the Vlasov equation.

Reduced
By using a multiscale analysis the (resistive) MHD equations can be reduced to a set of four closed scalar equations. This allows for, amongst other things, more efficient numerical calculations.

Limitations

Importance of kinetic effects
Another limitation of MHD (and fluid theories in general) is that they depend on the assumption that the plasma is strongly collisional (this is the first criterion listed above), so that the time scale of collisions is shorter than the other characteristic times in the system, and the particle distributions are Maxwellian. This is usually not the case in fusion, space, and astrophysical plasmas. When this is not the case, or the interest is in smaller spatial scales, it may be necessary to use a kinetic model which properly accounts for the non-Maxwellian shape of the distribution function. However, because MHD is relatively simple and captures many of the important properties of plasma dynamics, it is often qualitatively accurate and is therefore often the first model tried. Effects which are essentially kinetic and not captured by fluid models include double layers, Landau damping, a wide range of instabilities, chemical separation in space plasmas, and electron runaway. In the case of ultra-high-intensity laser interactions, the incredibly short timescales of energy deposition mean that hydrodynamic codes fail to capture the essential physics.
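As a rough check of the validity scales named earlier (the system must be much larger than the ion skin depth and the ion Larmor radius), the sketch below evaluates both lengths for an assumed hydrogen plasma; the density, temperature, and field values are illustrative assumptions, not taken from this article.

    import math

    E0 = 8.854e-12   # vacuum permittivity [F/m]
    QE = 1.602e-19   # elementary charge [C]
    M_P = 1.67e-27   # proton mass [kg]
    C = 3.0e8        # speed of light [m/s]
    KB = 1.381e-23   # Boltzmann constant [J/K]

    def ion_skin_depth(n):
        """Ion skin depth d_i = c / omega_pi for density n [1/m^3]."""
        omega_pi = math.sqrt(n * QE**2 / (E0 * M_P))
        return C / omega_pi

    def ion_larmor_radius(T, B):
        """Thermal ion gyroradius for temperature T [K] and field B [T]."""
        v_th = math.sqrt(KB * T / M_P)
        return M_P * v_th / (QE * B)

    # MHD is a good description only on scales much larger than both.
    print(ion_skin_depth(1e19), ion_larmor_radius(1e6, 0.1))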
Applications

Geophysics
Beneath the Earth's mantle lies the core, which is made up of two parts: the solid inner core and the liquid outer core. Both have significant quantities of iron. The liquid outer core moves in the presence of the magnetic field, and eddies are set up in it due to the Coriolis effect. These eddies develop a magnetic field which boosts Earth's original magnetic field—a process which is self-sustaining and is called the geomagnetic dynamo. Based on the MHD equations, Gary Glatzmaier and Paul Roberts have made a supercomputer model of the Earth's interior. After running the simulations for thousands of years in virtual time, the changes in Earth's magnetic field can be studied. The simulation results are in good agreement with observations: the simulations have correctly predicted that the Earth's magnetic field flips every few hundred thousand years. During the flips, the magnetic field does not vanish altogether—it just gets more complex.

Earthquakes
Some monitoring stations have reported that earthquakes are sometimes preceded by a spike in ultra-low-frequency (ULF) activity. A remarkable example of this occurred before the 1989 Loma Prieta earthquake in California, although a subsequent study indicates that this was little more than a sensor malfunction. On December 9, 2010, geoscientists announced that the DEMETER satellite observed a dramatic increase in ULF radio waves over Haiti in the month before the magnitude 7.0 Mw 2010 earthquake. Researchers are attempting to learn more about this correlation to find out whether this method can be used as part of an early warning system for earthquakes.

Space physics
The study of space plasmas near Earth and throughout the Solar System is known as space physics. Areas researched within space physics encompass a large number of topics, ranging from the ionosphere to auroras, Earth's magnetosphere, the solar wind, and coronal mass ejections. MHD forms the framework for understanding how populations of plasma interact within the local geospace environment. Researchers have developed global models using MHD to simulate phenomena within Earth's magnetosphere, such as the location of Earth's magnetopause (the boundary between the Earth's magnetic field and the solar wind), the formation of the ring current, auroral electrojets, and geomagnetically induced currents. One prominent use of global MHD models is in space weather forecasting. Intense solar storms have the potential to cause extensive damage to satellites and infrastructure, so it is crucial that such events are detected early. The Space Weather Prediction Center (SWPC) runs MHD models to predict the arrival and impacts of space weather events at Earth.

Astrophysics
MHD applies to astrophysics, including stars, the interplanetary medium (space between the planets), and possibly within the interstellar medium (space between the stars) and jets. Most astrophysical systems are not in local thermal equilibrium, and therefore require an additional kinematic treatment to describe all the phenomena within the system (see Astrophysical plasma). Sunspots are caused by the Sun's magnetic fields, as Joseph Larmor theorized in 1919. The solar wind is also governed by MHD. The differential solar rotation may be the long-term effect of magnetic drag at the poles of the Sun, an MHD phenomenon due to the Parker spiral shape assumed by the extended magnetic field of the Sun.

Previously, theories describing the formation of the Sun and planets could not explain how the Sun has 99.87% of the mass, yet only 0.54% of the angular momentum, in the Solar System. In a closed system such as the cloud of gas and dust from which the Sun was formed, mass and angular momentum are both conserved. That conservation would imply that as the mass concentrated in the center of the cloud to form the Sun, it would spin faster, much like a skater pulling their arms in. The high speed of rotation predicted by early theories would have flung the proto-Sun apart before it could have formed. However, magnetohydrodynamic effects transfer the Sun's angular momentum into the outer solar system, slowing its rotation.
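The conservation argument can be made concrete with a one-line estimate: for a fixed mass, the moment of inertia scales as R², so the angular velocity of a contracting cloud scales as 1/R². The radii below are illustrative assumptions, not measured values.

    # Spin-up from angular momentum conservation, L = I * omega, with
    # I proportional to M R^2 at fixed mass: omega scales as 1/R^2.
    def spinup_factor(r_initial, r_final):
        return (r_initial / r_final) ** 2

    # A cloud contracting from ~0.1 light-year (~9.5e14 m) to roughly a
    # solar radius (~7e8 m) would spin up by an enormous factor, which is
    # why a braking mechanism such as magnetic torque is required.
    print(spinup_factor(9.5e14, 7e8))  # ~1.8e12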
Breakdown of ideal MHD (in the form of magnetic reconnection) is known to be the likely cause of solar flares. The magnetic field in a solar active region over a sunspot can store energy that is released suddenly as a burst of motion, X-rays, and radiation when the main current sheet collapses, reconnecting the field.

Magnetic confinement fusion
MHD describes a wide range of physical phenomena occurring in fusion plasmas in devices such as tokamaks and stellarators. The Grad–Shafranov equation derived from ideal MHD describes the equilibrium of an axisymmetric toroidal plasma in a tokamak. In tokamak experiments, the equilibrium during each discharge is routinely calculated and reconstructed, which provides information on the shape and position of the plasma controlled by currents in external coils. MHD stability theory is known to govern the operational limits of tokamaks. For example, the ideal MHD kink modes provide hard limits on the achievable plasma beta (the Troyon limit) and plasma current (set by the requirement of the safety factor).

Sensors
Magnetohydrodynamic sensors are used for precision measurements of angular velocities in inertial navigation systems, such as in aerospace engineering. Accuracy improves with the size of the sensor. The sensor is capable of surviving in harsh environments.

Engineering
MHD is related to engineering problems such as plasma confinement, liquid-metal cooling of nuclear reactors, and electromagnetic casting (among others).

A magnetohydrodynamic drive or MHD propulsor is a method for propelling seagoing vessels using only electric and magnetic fields, with no moving parts. The working principle involves electrification of the propellant (gas or water), which can then be directed by a magnetic field, pushing the vehicle in the opposite direction. Although some working prototypes exist, MHD drives remain impractical. The first prototype of this kind of propulsion was built and tested in 1965 by Steward Way, a professor of mechanical engineering at the University of California, Santa Barbara. Way, on leave from his job at Westinghouse Electric, assigned his senior-year undergraduate students to develop a submarine with this new propulsion system. In the early 1990s, a foundation in Japan (the Ship & Ocean Foundation, Minato-ku, Tokyo) built an experimental boat, the Yamato-1, which used a magnetohydrodynamic drive incorporating a superconductor cooled by liquid helium, and could travel at 15 km/h.

MHD power generation fueled by potassium-seeded coal combustion gas showed potential for more efficient energy conversion (the absence of solid moving parts allows operation at higher temperatures), but failed due to cost-prohibitive technical difficulties. One major engineering problem was the failure of the wall of the primary coal-combustion chamber due to abrasion.

In microfluidics, MHD is studied as a fluid pump for producing a continuous, nonpulsating flow in a complex microchannel design. MHD can be implemented in the continuous casting process of metals to suppress instabilities and control the flow. Industrial MHD problems can be modeled using the open-source software EOF-Library. Two simulation examples are 3D MHD with a free surface for electromagnetic levitation melting, and liquid metal stirring by rotating permanent magnets.
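A back-of-envelope estimate of the Lorentz-force thrust available to a seawater MHD drive helps show why such drives remain impractical; all parameter values below are illustrative assumptions, not data from any particular vessel.

    def mhd_thrust(sigma, E, v, B, volume):
        """Thrust [N] of an idealized MHD channel: force density J*B with
        J = sigma * (E - v*B), for mutually perpendicular E, B and flow v.
        sigma [S/m], E [V/m], v [m/s], B [T], volume [m^3]."""
        current_density = sigma * (E - v * B)  # [A/m^2]
        return current_density * B * volume

    # Seawater conductivity is only ~5 S/m, so even a strong
    # superconducting field over a cubic meter yields modest thrust.
    print(mhd_thrust(sigma=5.0, E=100.0, v=4.0, B=4.0, volume=1.0))  # ~1.7 kN

The low conductivity of seawater is the basic design constraint: thrust scales linearly with sigma, which is why the Yamato-1 needed a liquid-helium-cooled superconducting magnet to reach even 15 km/h.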
Magnetic drug targeting
An important task in cancer research is developing more precise methods for delivery of medicine to affected areas. One method involves the binding of medicine to biologically compatible magnetic particles (such as ferrofluids), which are guided to the target via careful placement of permanent magnets on the external body. Magnetohydrodynamic equations and finite element analysis are used to study the interaction between the magnetic fluid particles in the bloodstream and the external magnetic field.
Portland cement
Portland cement is the most common type of cement in general use around the world as a basic ingredient of concrete, mortar, stucco, and non-specialty grout. It was developed from other types of hydraulic lime in England in the early 19th century by Joseph Aspdin, and is usually made from limestone. It is a fine powder, produced by heating limestone and clay minerals in a kiln to form clinker, and then grinding the clinker with the addition of several percent (often around 5%) gypsum. Several types of portland cement are available. The most common, historically called ordinary portland cement (OPC), is grey, but white portland cement is also available. Its name is derived from its resemblance to Portland stone, which is quarried on the Isle of Portland in Dorset, England. It was named by Joseph Aspdin, who obtained a patent for it in 1824. His son William Aspdin is regarded as the inventor of "modern" portland cement due to his developments in the 1840s.

The low cost and widespread availability of the limestone, shales, and other naturally occurring materials used in portland cement make it a relatively cheap building material. Its most common use is in the production of concrete, a composite material consisting of aggregate (gravel and sand), cement, and water.

History
Portland cement was developed from natural cements made in Britain beginning in the middle of the 18th century. Its name is derived from its similarity to Portland stone, a type of building stone quarried on the Isle of Portland in Dorset, England.

The development of modern portland cement (sometimes called ordinary or normal portland cement) began in 1756, when John Smeaton experimented with combinations of different limestones and additives, including trass and pozzolanas, intended for the construction of a lighthouse, now known as Smeaton's Tower. In the late 18th century, Roman cement was developed and patented in 1796 by James Parker. Roman cement quickly became popular, but was largely replaced by portland cement in the 1850s. In 1811, James Frost produced a cement he called British cement; he is reported to have erected a manufactory for making an artificial cement in 1826. In 1811 Edgar Dobbs of Southwark patented a cement of the kind invented 7 years later by the French engineer Louis Vicat. Vicat's cement is an artificial hydraulic lime, and is considered the "principal forerunner" of portland cement.

The name portland cement is recorded in a directory published in 1823, associated with a William Lockwood and possibly others. In his 1824 cement patent, Joseph Aspdin called his invention "portland cement" because of its resemblance to Portland stone. Aspdin's cement was nothing like modern portland cement, but was a first step in its development, and has been called a "proto-portland cement". William Aspdin had left his father's company to form his own cement manufactory. In the 1840s William Aspdin, apparently accidentally, produced calcium silicates, which are a middle step in the development of portland cement. In 1848, William Aspdin further improved his cement. Then, in 1853, he moved to Germany, where he was involved in cement making. William Aspdin made what could be called "meso-portland cement" (a mix of portland cement and hydraulic lime). Isaac Charles Johnson further refined the production of "meso-portland cement" (middle stage of development), and claimed to be the real father of portland cement.
In 1859, John Grant of the Metropolitan Board of Works set out requirements for cement to be used in the London sewer project. This became a specification for portland cement. The next development in the manufacture of portland cement was the introduction of the rotary kiln, patented by Frederick Ransome in 1885 (U.K.) and 1886 (U.S.), which allowed a stronger, more homogeneous mixture and a continuous manufacturing process. The Hoffmann "endless" kiln, which was said to give "perfect control over combustion", was tested in 1860 and shown to produce a superior grade of cement. This cement was made at the Portland Cementfabrik Stern at Stettin, which was the first to use a Hoffmann kiln. The Association of German Cement Manufacturers issued a standard on portland cement in 1878.

Portland cement had been imported into the United States from Germany and England, and in the 1870s and 1880s it was being produced by Eagle Portland cement near Kalamazoo, Michigan. In 1875, the first American portland cement was produced in the Coplay Cement Company kilns, under the direction of David O. Saylor, in Coplay, Pennsylvania. By the early 20th century, American-made portland cement had displaced most of the imported portland cement.

Composition
ASTM C150 defines portland cement as: "hydraulic cement (cement that not only hardens by reacting with water but also forms a water-resistant product) produced by pulverizing clinkers which consist essentially of hydraulic calcium silicates, usually containing one or more of the forms of calcium sulfate as an interground addition."

The European Standard EN 197-1 uses the following definition: "Portland cement clinker is a hydraulic material which shall consist of at least two-thirds by mass of calcium silicates (3CaO·SiO2 and 2CaO·SiO2), the remainder consisting of aluminium- and iron-containing clinker phases and other compounds. The ratio of CaO to SiO2 shall not be less than 2.0. The magnesium oxide content (MgO) shall not exceed 5.0% by mass." (The last two requirements were already set out in the German Standard, issued in 1909.)

Clinkers make up more than 90% of the cement, along with a limited amount of calcium sulphate (CaSO4, which controls the set time) and up to 5% minor constituents (fillers), as allowed by various standards. Clinkers are nodules (diameters 5–25 mm) of a sintered material that is produced when a raw mixture of predetermined composition is heated to high temperature. The key chemical reaction distinguishing portland cement from other hydraulic limes occurs at these high temperatures (above about 1,300 °C), as belite (Ca2SiO4) combines with calcium oxide (CaO) to form alite (Ca3SiO5).

Manufacturing
Portland cement clinker is made by heating, in a cement kiln, a mixture of raw materials to a calcining temperature of above 600 °C and then a fusion temperature, which is about 1,450 °C for modern cements, to sinter the materials into clinker. The materials in cement clinker are alite, belite, tricalcium aluminate, and tetracalcium aluminoferrite. The aluminium, iron, and magnesium oxides are present as a flux, allowing the calcium silicates to form at a lower temperature, and contribute little to the strength. For special cements, such as low heat (LH) and sulphate resistant (SR) types, it is necessary to limit the amount of tricalcium aluminate (3CaO·Al2O3) formed.

The major raw material for the clinker-making is usually limestone (CaCO3), mixed with a second material containing clay as a source of alumino-silicate. Normally, an impure limestone which contains clay or SiO2 is used. The CaCO3 content of these limestones can be as low as 80%. Secondary raw materials (materials in the raw mix other than limestone) depend on the purity of the limestone. Some of the materials used are clay, shale, sand, iron ore, bauxite, fly ash, and slag. When a cement kiln is fired by coal, the ash of the coal acts as a secondary raw material.
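In outline, the kiln chemistry described above can be summarized by the following standard reactions (textbook equations, not quotations from the standards cited):

    CaCO3 → CaO + CO2          (calcination of the limestone, above about 600 °C)
    2 CaO + SiO2 → Ca2SiO4     (belite formation)
    Ca2SiO4 + CaO → Ca3SiO5    (belite plus free lime forming alite, above about 1,300 °C)

The first reaction is the dominant source of process CO2 emissions discussed later, independent of the fuel burned.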
Cement grinding
To achieve the desired setting qualities in the finished product, a quantity (2–8%, but typically 5%) of calcium sulphate (usually gypsum or anhydrite) is added to the clinker, and the mixture is finely ground to form the finished cement powder. This is achieved in a cement mill. The grinding process is controlled to obtain a powder with a broad particle size range, in which typically 15% by mass consists of particles below 5 μm in diameter, and 5% of particles above 45 μm. The measure of fineness usually used is the "specific surface area", which is the total particle surface area of a unit mass of cement. The rate of initial reaction (up to 24 hours) of the cement on addition of water is directly proportional to the specific surface area. Typical values are 320–380 m²·kg⁻¹ for general purpose cements, and 450–650 m²·kg⁻¹ for "rapid hardening" cements. The cement is conveyed by belt or powder pump to a silo for storage. Cement plants normally have sufficient silo space for one to 20 weeks of production, depending upon local demand cycles. The cement is delivered to end users either in bags, or as bulk powder blown from a pressure vehicle into the customer's silo. In industrial countries, 80% or more of cement is delivered in bulk.

Setting and hardening
Cement sets when mixed with water, by way of a complex series of chemical reactions still only partly understood. The different constituents slowly crystallise, and the interlocking of their crystals gives cement its strength. Carbon dioxide is slowly absorbed to convert the portlandite (Ca(OH)2) into insoluble calcium carbonate. After the initial setting, immersion in warm water will speed up setting. Gypsum is added as an inhibitor to prevent flash (or quick) setting.

Use
The most common use for portland cement is in the production of concrete. Concrete is a composite material consisting of aggregate (gravel and sand), cement, and water. As a construction material, concrete can be cast in almost any shape desired, and once hardened, can become a structural (load-bearing) element. Concrete can be used in the construction of structural elements like panels, beams, and street furniture, or may be cast in situ for superstructures like roads and dams. These may be supplied with concrete mixed on site, or may be provided with "ready-mixed" concrete made at permanent mixing sites. Portland cement is also used in mortars (with sand and water only), for plasters and screeds, and in grouts (cement/water mixes squeezed into gaps to consolidate foundations, road-beds, etc.).

When water is mixed with portland cement, the product sets in a few hours and hardens over a period of weeks. These processes can vary widely, depending upon the mix used and the conditions of curing of the product, but a typical concrete sets in about 6 hours and develops a compressive strength of 8 MPa in 24 hours. The strength rises to 15 MPa at 3 days, 23 MPa at 1 week, 35 MPa at 4 weeks, and 41 MPa at 3 months. In principle, the strength continues to rise slowly as long as water is available for continued hydration, but concrete is usually allowed to dry out after a few weeks, which causes strength growth to stop.

Types

General
ASTM C150
Five types of portland cements exist, with variations of the first three, according to ASTM C150.

Type I portland cement is known as common or general-purpose cement. It is generally assumed unless another type is specified. It is commonly used for general construction, especially when making precast, and precast-prestressed, concrete that is not to be in contact with soils or ground water. The typical compound compositions of this type are: 55% (C3S), 19% (C2S), 10% (C3A), 7% (C4AF), 2.8% MgO, 2.9% (SO3), 1.0% ignition loss, and 1.0% free CaO (using cement chemists' notation).
A limitation on the composition is that the (C3A) shall not exceed 15%.

Type II provides moderate sulphate resistance, and gives off less heat during hydration. This type of cement costs about the same as type I. Its typical compound composition is: 51% (C3S), 24% (C2S), 6% (C3A), 11% (C4AF), 2.9% MgO, 2.5% (SO3), 0.8% ignition loss, and 1.0% free CaO. A limitation on the composition is that the (C3A) shall not exceed 8%, which reduces its vulnerability to sulphates. This type is for general construction exposed to moderate sulphate attack, and is meant for use when concrete is in contact with soils and ground water, especially in the western United States due to the high sulphur content of the soils. Because its price is similar to that of type I, type II is much used as a general purpose cement, and the majority of portland cement sold in North America meets this specification. Note: cement meeting (among others) the specifications for types I and II has become commonly available on the world market.

Type III has relatively high early strength. Its typical compound composition is: 57% (C3S), 19% (C2S), 10% (C3A), 7% (C4AF), 3.0% MgO, 3.1% (SO3), 0.9% ignition loss, and 1.3% free CaO. This cement is similar to type I, but ground finer. Some manufacturers make a separate clinker with higher C3S and/or C3A content, but this is increasingly rare, and the general purpose clinker is usually used, ground to a specific surface area typically 50–80% higher. The gypsum level may also be increased a small amount. This gives the concrete using this type of cement a three-day compressive strength equal to the seven-day compressive strength of types I and II. Its seven-day compressive strength is almost equal to the 28-day compressive strengths of types I and II. The only downside is that the six-month strength of type III is the same as, or slightly less than, that of types I and II; the long-term strength is therefore somewhat sacrificed. It is usually used for precast concrete manufacture, where high one-day strength allows fast turnover of molds. It may also be used in emergency construction and repairs, and in the construction of machine bases and gate installations.

Type IV portland cement is generally known for its low heat of hydration. Its typical compound composition is: 28% (C3S), 49% (C2S), 4% (C3A), 12% (C4AF), 1.8% MgO, 1.9% (SO3), 0.9% ignition loss, and 0.8% free CaO. The percentages of (C2S) and (C4AF) are relatively high, and (C3S) and (C3A) are relatively low. A limitation on this type is that the maximum percentage of (C3A) is 7%, and the maximum percentage of (C3S) is 35%. This causes the heat given off by the hydration reaction to develop at a slower rate. Consequently, the strength of the concrete develops slowly. After one or two years the strength is higher than that of the other types after full curing. This cement is used for very large concrete structures, such as dams, which have a low surface-to-volume ratio. This type of cement is generally not stocked by manufacturers, but some might consider a large special order. This type of cement has not been made for many years, because portland-pozzolan cements and ground granulated blast furnace slag addition offer a cheaper and more reliable alternative.

Type V is used where sulphate resistance is important. Its typical compound composition is: 38% (C3S), 43% (C2S), 4% (C3A), 9% (C4AF), 1.9% MgO, 1.8% (SO3), 0.9% ignition loss, and 0.8% free CaO. This cement has a very low (C3A) composition, which accounts for its high sulphate resistance.
The maximum content of (C3A) allowed is 5% for type V portland cement. Another limitation is that the (C4AF) + 2(C3A) composition cannot exceed 20%. This type is used in concrete to be exposed to alkali soil and ground water sulphates, which react with (C3A) causing disruptive expansion. It is unavailable in many places, although its use is common in the western United States and Canada. As with type IV, type V portland cement has mainly been supplanted by the use of ordinary cement with added ground granulated blast furnace slag, or by tertiary blended cements containing slag and fly ash.

Types Ia, IIa, and IIIa have the same composition as types I, II, and III. The only difference is that in Ia, IIa, and IIIa, an air-entraining agent is ground into the mix. The air-entrainment must meet the minimum and maximum optional specification found in the ASTM manual. These types are only available in the eastern United States and Canada, and only on a limited basis. They are a poor approach to air-entrainment, which improves resistance to freezing under low temperatures.

Types II(MH) and II(MH)a have a similar composition to types II and IIa, but with a mild heat of hydration.

EN 197 norm
The European norm EN 197-1 defines five classes of common cement that comprise portland cement as a main constituent. These classes differ from the ASTM classes. Constituents that are permitted in portland-composite cements are artificial pozzolans (blast furnace slag (in fact a latent hydraulic binder), silica fume, and fly ashes) or natural pozzolans (siliceous or siliceous-aluminous materials such as volcanic ash glasses, calcined clays, and shale).

CSA A3000-08
The Canadian standards describe six main classes of cement, four of which can also be supplied as a blend containing ground limestone (where a suffix L is present in the class names).

White portland cement
White portland cement or white ordinary portland cement (WOPC) is similar to ordinary grey portland cement in all respects, except for its high degree of whiteness. Obtaining this colour requires high-purity raw materials (low Fe2O3 content) and some modification to the method of manufacture, among others a higher kiln temperature, required to sinter the clinker in the absence of ferric oxides, which act as a flux in normal clinker. As Fe2O3 helps to lower the melting point of the clinker (normally 1450 °C), white cement requires a higher sintering temperature (around 1600 °C). Because of this, it is somewhat more expensive than the grey product. The main requirement is a low iron content, which should be less than 0.5 wt.% expressed as Fe2O3 for white cement, and less than 0.9 wt.% for off-white cement. It also helps to have the iron oxide as ferrous oxide (FeO), which is obtained via slightly reducing conditions in the kiln, i.e., operating with zero excess oxygen at the kiln exit. This gives the clinker and cement a green tinge. Other metallic oxides such as Cr2O3 (green), MnO (pink), and TiO2 (white), in trace content, can also give colour tinges, so for a given project it is best to use cement from a single batch.

Safety issues
Bags of cement routinely have health and safety warnings printed on them, because not only is cement highly alkaline, but the setting process is also exothermic. As a result, wet cement is strongly caustic and can easily cause severe skin burns if not promptly washed off with water. Similarly, dry cement powder in contact with mucous membranes can cause severe eye or respiratory irritation.
The reaction of cement dust with moisture in the sinuses and lungs can also cause a chemical burn, as well as headaches, fatigue, and lung cancer. The production of comparatively low-alkalinity cements (pH < 11) is an area of ongoing investigation. In Scandinavia, France, and the United Kingdom, the level of chromium(VI), which is considered to be toxic and a major skin irritant, may not exceed 2 parts per million (ppm).

In the US, the Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for portland cement exposure in the workplace as 50 mppcf (million particles per cubic foot) over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 10 mg/m³ total exposure and 5 mg/m³ respiratory exposure over an 8-hour workday. At levels of 5000 mg/m³, portland cement is immediately dangerous to life and health.

Environmental effects
Portland cement manufacture can cause environmental impacts at all stages of the process. These include emissions of airborne pollution in the form of dust and gases; noise and vibration when operating machinery and during blasting in quarries; consumption of large quantities of fuel during manufacture; release of CO2 from the raw materials during manufacture; and damage to the countryside from quarrying. Equipment to reduce dust emissions during quarrying and manufacture of cement is widely used, and equipment to trap and separate exhaust gases is coming into increased use. Environmental protection also includes the re-integration of quarries into the countryside after they have been closed down, by returning them to nature or re-cultivating them.

Portland cement is caustic, so it can cause chemical burns. The powder can cause irritation or, with severe exposure, lung cancer, and can contain a number of hazardous components, including crystalline silica and hexavalent chromium. Environmental concerns are the high energy consumption required to mine, manufacture, and transport the cement, and the related air pollution, including the release of the greenhouse gas carbon dioxide, dioxin, NOx, SO2, and particulates. Production of portland cement contributes about 10% of world carbon dioxide emissions. The International Energy Agency has estimated that cement production will increase by between 12 and 23% by 2050 to meet the needs of the world's growing population. There is ongoing research targeting a suitable replacement of portland cement by supplementary cementitious materials.
Inertial confinement fusion
Inertial confinement fusion (ICF) is a fusion energy process that initiates nuclear fusion reactions by compressing and heating targets filled with fuel. The targets are small pellets, typically containing deuterium (²H) and tritium (³H). Energy is deposited in the target's outer layer, which explodes outward. This produces a reaction force in the form of shock waves that travel through the target. The waves compress and heat it. Sufficiently powerful shock waves will cause fusion of the fuel.

ICF is one of two major branches of fusion energy research; the other is magnetic confinement fusion (MCF). When first proposed in the early 1970s, ICF appeared to be a practical approach to power production and the field flourished. Experiments demonstrated that the efficiency of these devices was much lower than expected. Throughout the 1980s and '90s, experiments were conducted in order to understand the interaction of high-intensity laser light and plasma. These led to the design of much larger machines that achieved ignition-generating energies. The largest operational ICF experiment is the National Ignition Facility (NIF) in the US. In 2022, the NIF produced fusion, delivering 2.05 megajoules (MJ) of energy to the target, which produced 3.15 MJ, the first time that an ICF device produced more energy than was delivered to the target.

Description

Fusion basics
Fusion reactions combine smaller atoms to form larger ones. This occurs when two atoms (or ions, atoms stripped of their electrons) come close enough to each other that the nuclear force dominates the electrostatic force that otherwise keeps them apart. Overcoming electrostatic repulsion requires kinetic energy sufficient to overcome the Coulomb barrier or fusion barrier. Less energy is needed to cause lighter nuclei to fuse, as they have less electrical charge and thus a lower barrier energy, so the barrier is lowest for hydrogen. Conversely, the nuclear force increases with the number of nucleons, so isotopes of hydrogen that contain additional neutrons reduce the required energy. The easiest fuel is a mixture of ²H and ³H, known as D-T.

The odds of fusion occurring are a function of the fuel density and temperature and the length of time that the density and temperature are maintained. Even under ideal conditions, the chance that a D and T pair fuse is very small. Higher density and longer times allow more encounters among the atoms. The cross section is further dependent on individual ion energies. This combination, the fusion triple product, must reach the Lawson criterion to reach ignition.
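For D-T fuel, the Lawson criterion is often quoted as a minimum fusion triple product (a standard textbook figure, not taken from this article):

    n T τE ≳ 3 × 10²¹ keV·s·m⁻³,

where n is the fuel density, T its temperature, and τE the confinement time. ICF reaches this product with enormous density held for a very short time, whereas magnetic confinement uses a modest density held for a long time.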
Thermonuclear devices
The first ICF devices were the hydrogen bombs invented in the early 1950s. A hydrogen bomb consists of two bombs in a single case. The first, the primary stage, is a fission-powered device normally using plutonium. When it explodes it gives off a burst of thermal X-rays that fill the interior of the specially designed bomb casing. These X-rays are absorbed by a special material surrounding the secondary stage, which consists mostly of the fusion fuel. The X-rays heat this material and cause it to explode. Due to Newton's third law, this causes the fuel inside to be driven inward, compressing and heating it. This causes the fusion fuel to reach the temperature and density where fusion reactions begin. In the case of D-T fuel, most of the energy is released in the form of alpha particles and neutrons. Under normal conditions, an alpha can travel about 10 mm through the fuel, but in the ultra-dense conditions of the compressed fuel, it can travel about 0.01 mm before its electrical charge, interacting with the surrounding plasma, causes it to lose velocity. This means the majority of the energy released by the alphas is redeposited in the fuel. This transfer of kinetic energy heats the surrounding particles to the energies they need to undergo fusion. This process causes the fusion fuel to burn outward from the center. The electrically neutral neutrons travel longer distances in the fuel mass and do not contribute to this self-heating process. In a bomb, they are instead used either to breed tritium through reactions in a lithium-deuteride fuel, or to split additional fissionable fuel surrounding the secondary stage, often part of the bomb casing.

The requirement that the reaction has to be sparked by a fission bomb makes this method impractical for power generation. Not only would the fission triggers be expensive to produce, but the minimum size of such a bomb is large, defined roughly by the critical mass of the plutonium fuel used. Generally, it seems difficult to build efficient nuclear fusion devices much smaller than about 1 kiloton in yield, and the fusion secondary would add to this yield. This makes it a difficult engineering problem to extract power from the resulting explosions. Project PACER studied solutions to the engineering issues, but also demonstrated that it was not economically feasible. The cost of the bombs was far greater than the value of the resulting electricity.

Mechanism of action
The energy needed to overcome the Coulomb barrier corresponds to the energy of the average particle in a gas heated to 100 million K. The specific heat of hydrogen is about 14 joules per gram-kelvin, so for a 1 milligram fuel pellet, the energy needed to raise the mass as a whole to this temperature is 1.4 megajoules (MJ).

In the more widely developed magnetic fusion energy (MFE) approach, confinement times are on the order of one second, although plasmas can be sustained for minutes. In this case the confinement time represents the amount of time it takes for the energy from the reaction to be lost to the environment through a variety of mechanisms. For a one-second confinement, the density needed to meet the Lawson criterion is about 10¹⁴ particles per cubic centimetre (cc). For comparison, air at sea level has about 2.7 × 10¹⁹ particles/cc, so the MFE approach has been described as "a good vacuum".

Considering a 1 milligram drop of D-T fuel in liquid form, the size is about 1 mm and the density is about 4 × 10²⁰/cc. Nothing holds the fuel together. Heat created by fusion events causes it to expand at the speed of sound, which leads to a confinement time around 2 × 10⁻¹⁰ seconds. At liquid density the required confinement time is about 2 × 10⁻⁷ s. In this case only about 0.1 percent of the fuel fuses before the drop blows apart.

The rate of fusion reactions is a function of density, and density can be improved through compression. If the drop is compressed from 1 mm to 0.1 mm in diameter, the confinement time drops by the same factor of 10, because the particles have less distance to travel before they escape. However, the density, which scales with the inverse cube of the diameter, increases by 1,000 times. This means the overall rate of fusion increases 1,000 times while the confinement drops by 10 times, a 100-fold improvement. In this case 10% of the fuel undergoes fusion; 10% of 1 mg of fuel produces about 30 MJ of energy, 30 times the amount needed to compress it to that density.
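The scaling argument above can be restated in a few lines of Python; this is a sketch of the arithmetic only, with the factor-of-10 compression taken from the text:

    # Compressing a fuel drop by a factor f in diameter raises density
    # by f^3 while the disassembly (confinement) time only falls by f,
    # so the net reaction yield improves by roughly f^2.
    def burn_improvement(linear_compression):
        density_gain = linear_compression ** 3   # rho scales as 1/d^3
        confinement_loss = linear_compression    # tau scales as d
        return density_gain / confinement_loss

    print(burn_improvement(10))  # 100, the 100-fold figure quoted above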
The other key concept in ICF is that the entire fuel mass does not have to be raised to 100 million K. In a fusion bomb the reaction continues because the alpha particles released in the interior heat the fuel around them. At liquid density the alphas travel about 10 mm, and thus their energy escapes the fuel. In the 0.1 mm compressed fuel, the alphas have a range of about 0.016 mm, meaning that they will stop within the fuel and heat it. In this case a "propagating burn" can be caused by heating only the center of the fuel to the needed temperature. This requires far less energy; calculations suggested 1 kJ is enough to reach the compression goal. Some method is needed to heat the interior to fusion temperatures, and to do so while the fuel is compressed and the density is high enough. In modern ICF devices, the density of the compressed fuel mixture is as much as one thousand times the density of water, or one hundred times that of lead, around 1,000 g/cm³. Much of the work since the 1970s has been on ways to create the central hot spot that starts off the burning, and on dealing with the many practical problems in reaching the desired density.

Heating concepts
Early calculations suggested that the amount of energy needed to ignite the fuel was very small, but this does not match subsequent experience.

Hot spot ignition
The initial solution to the heating problem involved deliberate "shaping" of the energy delivery. The idea was to use an initial lower-energy pulse to vaporize the capsule and cause compression, and then a very short, very powerful pulse near the end of the compression cycle. The goal is to launch shock waves into the compressed fuel that travel inward to the center. When they reach the center they meet the waves coming in from the other sides. This causes a brief period where the density in the center reaches much higher values, over 800 g/cm³. The central hot spot ignition concept was the first to suggest ICF was not only a practical route to fusion, but relatively simple. This led to numerous efforts to build working systems in the early 1970s. These experiments revealed unexpected loss mechanisms. Early calculations suggested about 4.5 × 10⁷ J/g would be needed, but modern calculations place it closer to 10⁸ J/g. Greater understanding led to complex shaping of the pulse into multiple time intervals.

Fast ignition
The fast ignition approach employs a separate laser to supply additional energy directly to the center of the fuel. This can be done mechanically, often using a small metal cone that punctures the outer fuel pellet wall to inject the energy into the center. In tests, this approach failed because the laser pulse had to reach the center at a precise moment, while the center is obscured by debris and free electrons from the compression pulse. It also has the disadvantage of requiring a second laser pulse, which generally involves a completely separate laser.

Shock ignition
Shock ignition is similar in concept to the hot-spot technique, but instead of achieving ignition via compression heating, a powerful shock wave is sent into the fuel at a later time, through a combination of compression and shock heating. This increases the efficiency of the process while lowering the overall amount of power required.

Direct vs. indirect drive
In the simplest method of inertial confinement, the fuel is arranged as a sphere.
This allows it to be compressed uniformly from all sides. To produce the inward force, the fuel is placed within a thin capsule that absorbs energy from the driver beams, causing the capsule shell to explode outward. The capsule shell is usually made of a lightweight plastic, and the fuel is deposited as a layer on the inside by injecting and freezing the gaseous fuel into the shell. Shining the driver beams directly onto the fuel capsule is known as "direct drive". The implosion process must be extremely uniform in order to avoid asymmetry due to the Rayleigh–Taylor instability and similar effects. For a beam energy of 1 MJ, the fuel capsule cannot be larger than about 2 mm before these effects disrupt the implosion symmetry. This limits the size of the laser beams to a diameter so narrow that it is difficult to achieve in practice.

Alternatively, "indirect drive" illuminates a small cylinder of heavy metal, often gold or lead, known as a hohlraum. The beam energy heats the hohlraum until it emits X-rays. These X-rays fill the interior of the hohlraum and heat the capsule. The advantage of indirect drive is that the beams can be larger and less accurate. The disadvantage is that much of the delivered energy is used to heat the hohlraum until it is "X-ray hot", so the end-to-end energy efficiency is much lower than the direct drive method.

Challenges
The primary challenges with increasing ICF performance are:
Improving the energy delivered to the target
Controlling symmetry of the imploding fuel
Delaying fuel heating until sufficient density is achieved
Preventing premature mixing of hot and cool fuel by hydrodynamic instabilities
Achieving shockwave convergence at the fuel center

In order to focus the shock wave on the center of the target, the target must be made with great precision and sphericity, with tolerances of no more than a few micrometres over its (inner and outer) surface. The lasers must be precisely targeted in space and time. Beam timing is relatively simple and is solved by using delay lines in the beams' optical path to achieve picosecond accuracy. The other major issues are so-called "beam-beam" imbalance and beam anisotropy. These problems are, respectively, where the energy delivered by one beam may be higher or lower than that of the other impinging beams, and where "hot spots" within a beam diameter hit the target, inducing uneven compression on the target surface and thereby forming Rayleigh–Taylor instabilities in the fuel, prematurely mixing it and reducing heating efficacy at the instant of maximum compression. The Richtmyer–Meshkov instability is also formed during the process, due to shock waves. These problems have been mitigated by beam smoothing techniques and beam energy diagnostics; however, RT instability remains a major issue. Modern cryogenic hydrogen ice targets tend to freeze a thin layer of deuterium on the inside of the shell while irradiating it with a low-power infrared laser to smooth its inner surface, monitoring it with a microscope-equipped camera so that the layer can be closely tracked. Cryogenic targets filled with D-T are "self-smoothing" due to the small amount of heat created by tritium decay. This is referred to as "beta-layering".

In the indirect drive approach, the absorption of thermal X-rays by the target is more efficient than the direct absorption of laser light. However, the hohlraums take up considerable energy to heat, significantly reducing energy transfer efficiency.
Most often, indirect drive hohlraum targets are used to simulate thermonuclear weapons tests, because the fusion fuel in weapons is also imploded mainly by X-ray radiation.

ICF drivers are evolving. Lasers have scaled up from a few joules and kilowatts to megajoules and hundreds of terawatts, using mostly frequency-doubled or frequency-tripled light from neodymium glass amplifiers. Heavy ion beams are particularly interesting for commercial generation, as they are easy to create, control, and focus. However, it is difficult to achieve the energy densities required to implode a target efficiently, and most ion-beam systems require the use of a hohlraum surrounding the target to smooth out the irradiation.

History

Conception

United States
ICF history began as part of the "Atoms for Peace" conference in 1957. This was an international, UN-sponsored conference between the US and the Soviet Union. Some thought was given to using a hydrogen bomb to heat a water-filled cavern. The resulting steam could then be used to power conventional generators, and thereby provide electrical power.

This meeting led to Operation Plowshare, formed in June 1957 and formally named in 1961. It included three primary concepts: energy generation under Project PACER, the use of nuclear explosions for excavation, and fracking in the natural gas industry. PACER was directly tested in December 1961, when the 3 kt Project Gnome device was detonated in bedded salt in New Mexico. While the press looked on, radioactive steam was released from the drill shaft, at some distance from the test site. Further studies designed engineered cavities to replace natural ones, but Plowshare went from bad to worse, especially after the failure of 1962's Sedan test, which produced significant fallout. PACER continued to receive funding until 1975, when a third-party study demonstrated that the cost of electricity from PACER would be ten times the cost of conventional nuclear plants.

Another outcome of Atoms for Peace was to prompt John Nuckolls to consider what happens on the fusion side of the bomb as the fuel mass is reduced. This work suggested that at sizes on the order of milligrams, little energy would be needed to ignite the fuel, much less than a fission primary. He proposed building, in effect, tiny all-fusion explosives using a tiny drop of D-T fuel suspended in the center of a hohlraum. The shell provided the same effect as the bomb casing in an H-bomb, trapping X-rays inside to irradiate the fuel. The main difference is that the X-rays would be supplied by an external device that heated the shell from the outside until it was glowing in the X-ray region. The power would be delivered by a then-unidentified pulsed power source he referred to, using bomb terminology, as the "primary".

The main advantage of this scheme is the fusion efficiency at high densities. According to the Lawson criterion, the amount of energy needed to heat the D-T fuel to break-even conditions at ambient pressure is perhaps 100 times greater than the energy needed to compress it to a pressure that would deliver the same rate of fusion. So, in theory, the ICF approach could offer dramatically more gain. This can be understood by considering the energy losses in a conventional scenario where the fuel is slowly heated, as in the case of magnetic fusion energy: the rate of energy loss to the environment is based on the temperature difference between the fuel and its surroundings, which continues to increase as the fuel temperature increases.
In the ICF case, the entire hohlraum is filled with high-temperature radiation, limiting losses.

Germany
In 1956 a meeting was organized at the Max Planck Institute in Germany by fusion pioneer Carl Friedrich von Weizsäcker. At this meeting Friedwardt Winterberg proposed the non-fission ignition of a thermonuclear micro-explosion by a convergent shock wave driven with high explosives. Further reference to Winterberg's work in Germany on nuclear micro-explosions (mininukes) is contained in a declassified report of the former East German Stasi (Staatssicherheitsdienst). In 1964 Winterberg proposed that ignition could be achieved by an intense beam of microparticles accelerated to a velocity of 1000 km/s. In 1968, he proposed to use intense electron and ion beams, generated by Marx generators, for the same purpose. The advantage of this proposal is that charged-particle beams are not only less expensive than laser beams, but can entrap the charged fusion reaction products due to the strong self-magnetic beam field, drastically reducing the compression requirements for beam-ignited cylindrical targets.

USSR
In 1967, research fellow Gurgen Askaryan published an article proposing the use of focused laser beams in the fusion of lithium deuteride or deuterium.

Early research
Through the late 1950s, Nuckolls and collaborators at Lawrence Livermore National Laboratory (LLNL) completed computer simulations of the ICF concept. In early 1960, they performed a full simulation of the implosion of 1 mg of D-T fuel inside a dense shell. The simulation suggested that a 5 MJ power input to the hohlraum would produce 50 MJ of fusion output, a gain of 10×. This was before the laser, and a variety of other possible drivers were considered, including pulsed power machines, charged particle accelerators, plasma guns, and hypervelocity pellet guns.

Two theoretical advances moved the field forward. One came from new simulations that considered the timing of the energy delivered in the pulse, known as "pulse shaping", leading to better implosion. The second was to make the shell much larger and thinner, forming a thin shell as opposed to an almost solid ball. These two changes dramatically increased implosion efficiency and thereby greatly lowered the required compression energy. Using these improvements, it was calculated that a driver of about 1 MJ would be needed, a five-fold reduction. Over the next two years, other theoretical advancements were proposed, notably Ray Kidder's development of an implosion system without a hohlraum, the so-called "direct drive" approach, and Stirling Colgate and Ron Zabawski's work on systems with as little as 1 μg of D-T fuel.

The introduction of the laser in 1960 at Hughes Research Laboratories in California appeared to present a perfect driver mechanism. However, the maximum power produced by these devices appeared very limited, far below what would be needed. This was addressed with Gordon Gould's introduction of Q-switching, which was applied to lasers in 1961 at Hughes Research Laboratories. Q-switching allows a laser amplifier to be pumped to very high energies without starting stimulated emission, and then triggered to release this energy in a burst by introducing a tiny seed signal. With this technique it appeared any limits to laser power were well into the region that would be useful for ICF. Starting in 1962, Livermore's director John S. Foster, Jr. and Edward Teller began a small ICF laser study.
Even at this early stage the suitability of ICF for weapons research was well understood and was the primary reason for its funding. Over the next decade, LLNL made small experimental devices for basic laser–plasma interaction studies.

Development begins
In 1967 Kip Siegel started KMS Industries. In the early 1970s he formed KMS Fusion to begin development of a laser-based ICF system. This development led to considerable opposition from the weapons labs, including LLNL, who put forth a variety of reasons that KMS should not be allowed to develop ICF in public. This opposition was funnelled through the Atomic Energy Commission, which controlled funding. Adding to the background noise were rumours of an aggressive Soviet ICF program, new higher-powered CO2 and glass lasers, the electron beam driver concept, and the energy crisis, which added impetus to many energy projects.

In 1972 John Nuckolls wrote a paper introducing ICF and suggesting that testbed systems could be made to generate fusion with drivers in the kJ range, and high-gain systems with MJ drivers. In spite of limited resources and business problems, KMS Fusion successfully demonstrated ICF fusion on 1 May 1974. This success was soon followed by Siegel's death and the end of KMS Fusion a year later. By this point several weapons labs and universities had started their own programs, notably the solid-state lasers (Nd:glass lasers) at LLNL and the University of Rochester, and krypton fluoride excimer laser systems at Los Alamos and the Naval Research Laboratory.

"High-energy" ICF
High-energy ICF experiments (multi-hundred joules per shot) began in the early 1970s, when better lasers appeared. Funding for fusion research, stimulated by the energy crises, produced rapid gains in performance, and inertial designs were soon reaching the same sort of "below break-even" conditions as the best MCF systems. LLNL was, in particular, well funded and started a laser fusion development program. Their Janus laser started operation in 1974, and validated the approach of using Nd:glass lasers for high-power devices. Focusing problems were explored in the Long Path and Cyclops lasers, which led to the larger Argus laser. None of these were intended to be practical devices, but they increased confidence that the approach was valid. It was then believed that a much larger device of the Cyclops type could both compress and heat targets, leading to ignition. This misconception was based on extrapolation of the fusion yields seen in experiments utilizing the so-called "exploding pusher" fuel capsule. During the late 1970s and early 1980s the estimates for the laser energy on target needed to achieve ignition doubled almost yearly, as plasma instabilities and laser–plasma energy-coupling loss modes were increasingly understood. The realization that exploding pusher target designs and single-digit-kilojoule (kJ) laser irradiation intensities would never scale to high yields led to the effort to increase laser energies to the 100 kJ level in the ultraviolet band, and to the production of advanced ablator and cryogenic DT ice target designs.

Shiva and Nova
One of the earliest large-scale attempts at an ICF driver design was the Shiva laser, a 20-beam neodymium-doped glass laser system at LLNL that started operation in 1978. Shiva was a "proof of concept" design intended to demonstrate compression of fusion fuel capsules to many times the liquid density of hydrogen. In this, Shiva succeeded, reaching 100 times the liquid density of deuterium.
However, due to the laser's coupling with hot electrons, premature heating of the dense plasma was problematic and fusion yields were low. This failure to efficiently heat the compressed plasma pointed to the use of optical frequency multipliers as a solution, which would frequency-triple the infrared light from the laser into the ultraviolet at 351 nm. Schemes to efficiently triple the frequency of laser light, discovered at the Laboratory for Laser Energetics in 1980, were experimented with in the 24-beam OMEGA laser and the NOVETTE laser, which were followed by the Nova laser design, with 10 times Shiva's energy, the first design with the specific goal of reaching ignition.

Nova also failed, this time due to severe variation in laser intensity in its beams (and differences in intensity between beams) caused by filamentation, which resulted in large non-uniformity in irradiation smoothness at the target and asymmetric implosion. The techniques pioneered earlier could not address these new issues. This failure led to a much greater understanding of the process of implosion, and the way forward again seemed clear: namely, to increase the uniformity of irradiation, reduce hot spots in the laser beams through beam-smoothing techniques to reduce Rayleigh–Taylor instabilities, and increase the laser energy on target by at least an order of magnitude. Funding was constrained in the 1980s.

National Ignition Facility
The resulting 192-beam design, dubbed the National Ignition Facility, started construction at LLNL in 1997. NIF's main objective is to operate as the flagship experimental device of the so-called nuclear stewardship program, supporting LLNL's traditional bomb-making role. Completed in March 2009, NIF experiments set new records for power delivery by a laser. As of September 27, 2013, for the first time the fusion energy generated was greater than the energy absorbed into the deuterium–tritium fuel. In June 2018, NIF announced record production of 54 kJ of fusion energy output. On August 8, 2021, the NIF produced 1.3 MJ of output, 25 times higher than the 2018 result, reaching 70% of the break-even definition of ignition, when energy out equals energy in. As of December 2022, the NIF claims to have become the first fusion experiment to achieve scientific breakeven, on December 5, 2022, with an experiment producing 3.15 megajoules of energy from a 2.05 megajoule input of laser light (somewhat less than the energy needed to boil 1 kg of water), for an energy gain of about 1.5.

Fast ignition
Fast ignition may offer a way to directly heat the fuel after compression, thus decoupling the heating and compression phases. In this approach, the target is first compressed "normally" using a laser system. When the implosion reaches maximum density (at the stagnation point or "bang time"), a second short, high-power petawatt (PW) laser delivers a single pulse to one side of the core, dramatically heating it and starting ignition. The two types of fast ignition are the "plasma bore-through" method and the "cone-in-shell" method. In plasma bore-through, the second laser bores through the outer plasma of an imploding capsule, impinges on the core, and heats it. In the cone-in-shell method, the capsule is mounted on the end of a small high-Z (high atomic number) cone such that the tip of the cone projects into the core. In this second method, when the capsule is imploded, the laser has a clear view of the core and does not use energy to bore through a 'corona' plasma.
Fast ignition

Fast ignition may offer a way to directly heat fuel after compression, thus decoupling the heating and compression phases. In this approach, the target is first compressed "normally" using a laser system. When the implosion reaches maximum density (at the stagnation point or "bang time"), a second short, high-power petawatt (PW) laser delivers a single pulse to one side of the core, dramatically heating it and starting ignition. The two types of fast ignition are the "plasma bore-through" method and the "cone-in-shell" method. In plasma bore-through, the second laser bores through the outer plasma of an imploding capsule, impinges on the core, and heats it. In the cone-in-shell method, the capsule is mounted on the end of a small high-Z (high atomic number) cone such that the tip of the cone projects into the core. In this second method, when the capsule is imploded, the laser has a clear view of the core and does not use energy to bore through a 'corona' plasma. However, the presence of the cone affects the implosion process in significant ways that are not fully understood. Several projects are currently underway to explore fast ignition, including upgrades to the OMEGA laser at the University of Rochester and the GEKKO XII device in Japan. HiPER is a proposed £500 million facility in the European Union. Compared to NIF's 2 MJ UV beams, HiPER's driver was planned to be 200 kJ and its heater 70 kJ, although the predicted fusion gains are higher than NIF's. It was to employ diode lasers, which convert electricity into laser light with much higher efficiency and run cooler, allowing them to be fired at much higher repetition rates. HiPER proposed to operate at 1 MJ at 1 Hz, or alternately 100 kJ at 10 Hz. The project's final update was in 2014. It was expected to offer a higher Q with a roughly 10-fold reduction in construction costs.

Other projects

The French Laser Mégajoule achieved its first experimental line in 2002, and its first target shots were conducted in 2014. The machine was roughly 75% complete as of 2016. Using a different approach entirely is the z-pinch device. Z-pinch uses massive electric currents switched into a cylinder comprising extremely fine wires. The wires vaporize to form an electrically conductive, high-current plasma. The resulting circumferential magnetic field squeezes the plasma cylinder, imploding it and generating a high-power x-ray pulse that can be used to implode a fuel capsule. Challenges to this approach include relatively low drive temperatures, resulting in slow implosion velocities and potentially large instability growth, and preheat caused by high-energy x-rays. Shock ignition was proposed to address problems with fast ignition. Japan developed the KOYO-F design and the laser inertial fusion test (LIFT) experimental reactor. In April 2017, clean energy startup Apollo Fusion began to develop a hybrid fusion-fission reactor technology. In Germany, technology company Marvel Fusion is working on laser-initiated inertial confinement fusion. The startup adopted a short-pulsed high-energy laser and the aneutronic fuel pB11. It was founded in Munich in 2019. It works with Siemens Energy, TRUMPF, and Thales, and partnered with Ludwig Maximilian University of Munich in July 2022. In March 2022, Australian company HB11 announced fusion using non-thermal laser pB11, at a higher-than-predicted rate of alpha particle creation. Other companies include the NIF-like Longview Fusion and the fast-ignition-oriented Focused Energy.

Applications

Electricity generation

Inertial fusion energy (IFE) power plants have been studied since the late 1970s. These devices were to deliver multiple targets per second into the reaction chamber, using the resulting energy to drive a conventional steam turbine.

Technical challenges

Even if the many technical challenges in reaching ignition were all to be solved, practical problems abound. Given the 1 to 1.5% efficiency of the laser amplification process and that steam-driven turbine systems are typically about 35% efficient, fusion gains would have to be on the order of 200 to 300 (the reciprocal of the product of the two efficiencies) just to energetically break even. An order of magnitude improvement in laser efficiency may be possible through the use of designs that replace flash lamps with laser diodes that are tuned to produce most of their energy in a frequency range that is strongly absorbed. Initial experimental devices offer efficiencies of about 10%, and it is suggested that 20% is possible.
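A minimal sketch of this recirculating-power arithmetic (Python, illustrative only; the efficiency values are the ones quoted above) shows how the required gain falls as laser efficiency improves:

```python
# Engineering breakeven requires G * eta_laser * eta_turbine >= 1,
# i.e. the fusion gain must at least offset both conversion losses.
def required_gain(eta_laser: float, eta_turbine: float) -> float:
    return 1.0 / (eta_laser * eta_turbine)

ETA_TURBINE = 0.35  # typical steam-cycle efficiency quoted above

# Flashlamp-pumped (1-1.5%) versus diode-pumped (10-20%) laser efficiencies.
for eta_laser in (0.01, 0.015, 0.10, 0.20):
    print(f"laser efficiency {eta_laser:5.1%}: "
          f"required gain = {required_gain(eta_laser, ETA_TURBINE):.0f}")
```

With the quoted flashlamp efficiencies this prints required gains of roughly 190 to 290, dropping to about 14 to 29 with diode pumping, which is why driver efficiency matters as much as target gain. The wall-plug figures for NIF that follow illustrate the same point.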
NIF uses about 330 MJ to produce the driver beams, producing an expected yield of about 20 MJ, with a maximum credible yield of 45 MJ.

Power extraction

ICF systems face some of the same secondary power extraction problems as MCF systems. One of the primary concerns is how to successfully remove heat from the reaction chamber without interfering with the targets and driver beams. Another concern is that the released neutrons react with the reactor structure, mechanically weakening it and turning it intensely radioactive. Conventional metals such as steel would have a short lifetime and require frequent replacement of the core containment walls. Another concern is fusion afterdamp (debris left in the reaction chamber), which could interfere with subsequent shots, including helium ash produced by fusion, along with unburned hydrogen and other elements used in the fuel pellet. This problem is most troublesome with indirect drive systems. If the driver energy misses the fuel pellet completely and strikes the containment chamber, material could foul the interaction region, or the lenses or focusing elements. One concept, as shown in the HYLIFE-II design, is to use a "waterfall" of FLiBe, a molten mix of fluoride salts of lithium and beryllium, which both protects the chamber from neutrons and carries away heat. The FLiBe is passed into a heat exchanger where it heats water for the turbines. The tritium produced by splitting lithium nuclei can be extracted in order to close the power plant's thermonuclear fuel cycle, a necessity for perpetual operation because tritium is rare and otherwise must be manufactured. Another concept, Sombrero, uses a reaction chamber built of carbon-fiber-reinforced polymer, which has a low neutron cross section. Cooling is provided by a molten ceramic, chosen because of its ability to absorb the neutrons and its efficiency as a heat transfer agent.

Economic viability

Another factor working against IFE is the cost of the fuel. Even as Nuckolls was developing his earliest calculations, co-workers pointed out that if an IFE machine produces 50 MJ of fusion energy per shot, that shot might deliver perhaps 10 MJ (2.8 kWh) of electricity. Wholesale rates for electrical power on the grid were about 0.3 cents/kWh at the time, which meant the monetary value of the shot was perhaps one cent. In the intervening 50 years the real price of power has remained about even, and the rate in 2012 in Ontario, Canada was about 2.8 cents/kWh. Thus, in order for an IFE plant to be economically viable, fuel shots would have to cost considerably less than ten cents in 2012 dollars. Direct-drive systems avoid the use of a hohlraum and thereby may be less expensive in fuel terms. However, these systems still require an ablator, and the accuracy and geometrical considerations are critical. The direct-drive approach still may not be less expensive to operate.
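As a quick check of the per-shot economics just described, here is a minimal Python sketch (illustrative only; the yields and prices are the ones quoted above, and 1 kWh = 3.6 MJ is the standard unit conversion):

```python
MJ_PER_KWH = 3.6  # standard unit conversion

def shot_value_cents(electric_mj: float, price_cents_per_kwh: float) -> float:
    """Market value of one shot's electrical output, in cents."""
    return (electric_mj / MJ_PER_KWH) * price_cents_per_kwh

electric_mj = 10.0  # ~10 MJ of electricity from a 50 MJ fusion shot

print(f"{shot_value_cents(electric_mj, 0.3):.2f} cents")  # ~0.83 at 1960s wholesale rates
print(f"{shot_value_cents(electric_mj, 2.8):.2f} cents")  # ~7.78 at 2012 Ontario rates
```

Either way the electricity from one shot is worth under a dime, which is where the requirement that targets cost considerably less than ten cents apiece comes from.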
Nuclear weapons

The hot and dense conditions encountered during an ICF experiment are similar to those in a thermonuclear weapon, and have applications to nuclear weapons programs. In an ICF shot, the driver energy launches shock waves into the fuel pellet, compressing and heating it until the hydrogen nuclei fuse to form helium, releasing free neutrons and energy. ICF experiments might be used, for example, to help determine how warhead performance degrades as it ages, or as part of a weapons design program. Retaining knowledge and expertise inside the nuclear weapons program is another motivation for pursuing ICF. Funding for the NIF in the United States is sourced from the Nuclear Weapons Stockpile Stewardship program, whose goals are oriented accordingly. It has been argued that some aspects of ICF research violate the Comprehensive Test Ban Treaty or the Nuclear Non-Proliferation Treaty. In the long term, despite the formidable technical hurdles, ICF research could lead to the creation of a "pure fusion weapon".

Neutron source

ICF has the potential to produce orders of magnitude more neutrons than spallation. Neutrons are capable of locating hydrogen atoms in molecules, resolving atomic thermal motion, and studying collective excitations of phonons more effectively than X-rays. Neutron scattering studies of molecular structures could resolve problems associated with protein folding, diffusion through membranes, proton transfer mechanisms, dynamics of molecular motors, etc., by moderating thermal neutrons into beams of slow neutrons. In combination with fissile materials, neutrons produced by ICF can potentially be used in hybrid fusion-fission designs to produce electric power.
Magnoliaceae
The Magnoliaceae are a flowering plant family, the magnolia family, in the order Magnoliales. The family consists of two genera: Magnolia and Liriodendron (tulip trees). Unlike most angiosperms, whose flower parts are in whorls (rings), the Magnoliaceae have their stamens and pistils in spirals on a conical receptacle. This arrangement is found in some fossil plants and is believed to be a basal or early condition for angiosperms. The flowers also have parts not distinctly differentiated into sepals and petals, while angiosperms that evolved later tend to have distinctly differentiated sepals and petals. The poorly differentiated perianth parts that occupy both positions are known as tepals. The family has about 219 species and ranges across subtropical eastern North America, Mexico and Central America, the West Indies, tropical South America, southern and eastern India, Sri Lanka, Indochina, Malesia, China, Japan, and Korea.

Genera

The number of genera in Magnoliaceae is a subject of debate. Up to 17 have been recognized, including Alcimandra, Lirianthe, Manglietia, Michelia, Pachylarnax, Parakmeria, Talauma and Yulania. However, many recent studies have opted to merge all genera within subfamily Magnolioideae into the genus Magnolia. Thus, Magnoliaceae would include only two extant genera, Magnolia and Liriodendron.

Description

The monophyly of Magnoliaceae is supported by a number of shared morphological characters among the various genera in the family. Most have bisexual flowers (with the exception of Kmeria and some species of Magnolia section Gynopodium) that are showy, fragrant, and radial, with an elongated receptacle. Leaves are alternate, simple, and sometimes lobed. The inflorescence is a solitary, showy flower with indistinguishable petals and sepals. Tepals range from six to many; stamens are numerous and feature short filaments which are poorly differentiated from the anthers. Carpels are usually numerous, distinct, and borne on an elongated receptacle or torus. The fruit is an etaerio of follicles which usually become closely appressed as they mature and open along the abaxial surface. Seeds have a fleshy coat, an aril, and a color that ranges from red to orange (except Liriodendron). Magnoliaceae flowers are beetle-pollinated, except for Liriodendron, which is bee-pollinated. The carpels of Magnolia flowers are especially thick to avoid damage by beetles that land, crawl, and feast on them. The seeds of Magnolioideae are bird-dispersed, while the seeds of Liriodendron are wind-dispersed.

Biogeography

Due to its great age, the geographical distribution of the Magnoliaceae has become disjunct or fragmented as a result of major geologic events such as ice ages, continental drift, and mountain formation. This distribution pattern has isolated some species, while keeping others in close contact. Extant species of the Magnoliaceae are widely distributed in temperate and tropical Asia from the Himalayas to Japan and southwest through Malaysia and New Guinea. Asia is home to about two-thirds of the species in Magnoliaceae, with the remainder of the family spread across the Americas, with temperate species extending into southern Canada and tropical elements extending into Brazil and the West Indies.
Systematics

Foundational Taxonomic and Systematics Research (18th-19th century)

The earliest botanical description of the Magnoliaceae as a family is in Antoine Laurent de Jussieu's Genera Plantarum, which describes eight genera included within the family (Euryandra, Drymis, Illicium, Michelia, Magnolia, Talauma, Liriodendrum, and Mayna) as well as four genera closely related to the family (Dillenia, Curatella, Ochna, and Quassia). Bentham and Hooker's Genera Plantarum, almost a century later, sorts the family's genera into three tribes: the Wintereae, including the genera Drimys and Illicium; the Magnolieae, including the genera Talauma, Magnolia, Manglieta, Michelia, and Liriodendron; and the Schizandreae, including the genera Schizandra and Kadsura. In his following work in Adansonia, Baillon recognizes Bentham and Hooker's changes and additions but proposes an alternative taxonomy in which he sets aside the Tulipier genus and includes all remaining genera under one tribe, the Magnolieae. From this basic separation, scholars have continued to debate the systematics of the family.

Modern Systematics Research (20th-21st century)

Dandy's taxonomic proposal in 1927 sets aside the genus Liriodendron as part of the subfamily Liriodendreae and includes Bentham and Hooker's four genera, in addition to four more (Kmeria, Pachylarnax, Alcimandra, and Elmerrillia), within the Magnolieae tribe. Dandy's model with eleven genera was widely accepted until molecular evidence brought it into question (Figlar, 2019). Qiu et al. analyzed molecular data in 1995 to investigate the divergences within and between East Asian and East North American species of Magnolia, presenting molecular evidence that Dandy's section Rytidospermum is not monophyletic. Azuma et al. employed both molecular phylogeny and parsimonious mapping of the chemistry of floral scents in 1999 to propose a phylogenetic tree in which, unlike Dandy's taxonomy, they include Michelia species within the Magnolia genus as a sister group to the subgenus Yulania; they also find that the section Rytidospermum is not monophyletic, placing some of its members in a clade with the section Oyama. The most recent research on the family continues the debate over its genera. Wang et al.'s study analyzes complete chloroplast genome sequences of 86 species in the Magnoliaceae and supports a phylogeny with fifteen major clades, two subfamilies, two genera, and fifteen sections, maintaining Magnolia's classification as one monophyletic genus. Dong et al. also place Magnolia as the sole genus of the subfamily Magnolioideae, made up of fifteen sections. However, Yang et al. and Zhao et al. work with phylogenies of the Magnoliaceae that recognize several genera in the Magnolioideae.

Consensus and Debates Today

Although phylogenetic trees of the Magnoliaceae still include anywhere from 2 to 17 genera, the broad generic concept (where one genus, Magnolia, makes up the Magnolioideae) is largely accepted as a practical construction upheld by molecular and morphological evidence. Even as debates over rank persist, monophyletic groups are largely established, with opportunities for further research into endangered and extinct species. The family's place among the early angiosperms means that research into its taxonomy and evolutionary history contributes to our broader understanding of the evolution of plant life. The development of DNA sequencing at the end of the 20th century had a profound impact on research into phylogenetic relationships within the family.
The employment of ndhF and cpDNA sequences has refuted many of the traditionally accepted phylogenetic relationships within the Magnoliaceae. For example, the genera Magnolia and Michelia were shown to be paraphyletic when the remaining four genera of the Magnolioideae were split out. In fact, even many of the subgenera (Magnolia subg. Magnolia, Magnolia subg. Talauma) have been found to be paraphyletic. Although no completely resolved phylogeny for the family has yet been determined, these technological advances have allowed systematists to broadly circumscribe major lineages.

Economic significance

As a whole, the Magnoliaceae are not an economically significant family. With the exception of ornamental cultivation, the economic significance of magnolias is generally confined to the use of wood from certain timber species and the use of bark and flowers from several species believed to possess medicinal qualities. The wood of the American tuliptree, Liriodendron tulipifera, the wood of the cucumbertree magnolia, Magnolia acuminata, and, to a lesser degree, that of the Fraser magnolia, Magnolia fraseri, are harvested and marketed collectively as "yellow poplar." This is a lightweight and exceptionally fine-grained wood, lending itself to precision woodworking for purposes such as pipe organ building. Magnolias have a rich cultural tradition in China, where references to their healing qualities go back thousands of years. The Chinese have long used the bark of Magnolia officinalis, a magnolia native to the mountains of China with large leaves and fragrant white flowers, as a remedy for cramps, abdominal pain, nausea, diarrhea, and indigestion. Certain magnolia flowers, such as the buds of Magnolia liliiflora, have been used to treat chronic respiratory and sinus infections and lung congestion. Recently, magnolia bark has become incorporated into alternative medicine in the West, where tablets made from the bark of M. officinalis have been marketed as an aid for anxiety, allergies, asthma, and weight loss. Compounds found in magnolia bark might have antibacterial and antifungal properties, but no large-scale study on the health effects of magnolia bark or flowers has yet been conducted.
Magnolia
Magnolia is a large genus of about 210 to 340 flowering plant species in the subfamily Magnolioideae of the family Magnoliaceae. The natural range of Magnolia species is disjunct, with a main center in east, south and southeast Asia and a secondary center in eastern North America, Central America, the West Indies, and some species in South America. Magnolia is an ancient genus. Fossilized specimens of M. acuminata have been found dating to 20 million years ago (mya), and fossils of plants identifiably belonging to the Magnoliaceae date to 95 mya. They are theorized to have evolved to encourage pollination by beetles, as they existed prior to the evolution of bees. Another aspect of Magnolia considered to represent an ancestral state is that the flower bud is enclosed in a bract rather than in sepals; the perianth parts are undifferentiated and called tepals rather than distinct sepals and petals. Magnolia shares the tepal characteristic with several other flowering plants near the base of the flowering plant lineage, such as Amborella and Nymphaea (as well as with many more recently derived plants, such as Lilium). The magnolia was made the state flower of Mississippi in 1900. The magnolia symbolizes stability in the United States; in China, beauty and gentleness.

Description

Magnolias are spreading evergreen or deciduous trees or shrubs characterised by large fragrant flowers, which may be bowl-shaped or star-shaped, in shades of white, pink, purple, green, or yellow. In deciduous species, the blooms often appear before the leaves in spring. Cone-like fruits are often produced in the autumn. As with all Magnoliaceae, the perianth is undifferentiated, with 9–15 tepals in three or more whorls. The flowers are hermaphroditic, with numerous adnate carpels and stamens arranged in a spiral fashion on the elongated receptacle. The flowers' carpels are often damaged by pollinating beetles. The fruit dehisces along the dorsal sutures of the carpels. The pollen is monocolpate, and the embryonic development is of the Polygonum type. Taxonomists, including James E. Dandy in 1927, have used differences in the fruits of Magnoliaceae as the basis for classification systems.

Taxonomy

History

Early

The name Magnolia first appeared in 1703 in the Genera written by French botanist Charles Plumier, for a flowering tree from the island of Martinique (talauma). It was named after French botanist Pierre Magnol. English botanist William Sherard, who studied botany in Paris under Joseph Pitton de Tournefort, a pupil of Magnol, was most probably the first after Plumier to adopt the genus name Magnolia. He was at least responsible for the taxonomic part of Johann Jacob Dillenius's Hortus Elthamensis and of Mark Catesby's Natural History of Carolina, Florida and the Bahama Islands. These were the first works after Plumier's Genera that used the name Magnolia, this time for some species of flowering trees from temperate North America. The species that Plumier originally named Magnolia was later described as Annona dodecapetala by Jean-Baptiste Lamarck, and has since been named Magnolia plumieri and Talauma plumieri (among a number of other names), but is now known as Magnolia dodecapetala. Carl Linnaeus, who was familiar with Plumier's Genera, adopted the genus name Magnolia in 1735 in his first edition of Systema Naturae, without a description but with a reference to Plumier's work. In 1753, he took up Plumier's Magnolia in the first edition of Species Plantarum.
He described a monotypic genus, with the sole species being Magnolia virginiana. Since Linnaeus never saw a herbarium specimen (if there ever was one) of Plumier's Magnolia and had only his description and a rather poor picture at hand, he must have taken it for the same plant that was described by Mark Catesby in his 1730 Natural History of Carolina. He placed it in the synonymy of Magnolia virginiana var. fœtida, the taxon now known as Magnolia grandiflora. Under Magnolia virginiana, Linnaeus described five varieties (glauca, fœtida, grisea, tripetala, and acuminata). In the tenth edition of Systema Naturae (1759), he merged grisea with glauca and raised the four remaining varieties to species status. By the end of the 18th century, botanists and plant hunters exploring Asia had begun to name and describe the Magnolia species from China and Japan. The first Asiatic species to be described by western botanists were Magnolia denudata, Magnolia liliiflora, Magnolia coco, and Magnolia figo. Soon after that, in 1794, Carl Peter Thunberg collected and described Magnolia obovata from Japan, and at roughly the same time Magnolia kobus was also first collected.

Recent

With the number of species increasing, the genus was divided into two subgenera, Magnolia and Yulania. Magnolia contains the American evergreen species M. grandiflora, which is of horticultural importance, especially in the southeastern United States, and M. virginiana, the type species. Yulania contains several deciduous Asiatic species, such as M. denudata and M. kobus, which have become horticulturally important in their own right and as parents in hybrids. Classified in Yulania is also the American deciduous M. acuminata (cucumber tree), which has recently attained greater status as the parent responsible for the yellow flower color in many new hybrids. Relations in the family Magnoliaceae have puzzled taxonomists for a long time. Because the family is quite old and has survived many geological events (such as ice ages, mountain formation, and continental drift), its distribution has become scattered. Some species or groups of species have been isolated for a long time, while others could stay in close contact. To create divisions in the family (or even within the genus Magnolia) solely based upon morphological characters has proven to be a nearly impossible task. By the end of the 20th century, DNA sequencing had become available as a method of large-scale research on phylogenetic relationships. Several studies, including studies on many species in the family Magnoliaceae, were carried out to investigate relationships. What these studies all revealed was that the genus Michelia and Magnolia subgenus Yulania were far more closely allied to each other than either one of them was to Magnolia subgenus Magnolia. These phylogenetic studies were supported by morphological data. As nomenclature is supposed to reflect relationships, the situation with the species names in Michelia and Magnolia subgenus Yulania was undesirable. Taxonomically, three choices are available: to join Michelia and Yulania species in a common genus other than Magnolia (for which the name Michelia has priority); to raise subgenus Yulania to generic rank, leaving Michelia names and subgenus Magnolia names untouched; or to join Michelia with the genus Magnolia into the genus Magnolia s.l. (a big genus). Magnolia subgenus Magnolia cannot be renamed because it contains M. virginiana, the type species of the genus and of the family.
Not many Michelia species have so far become horticulturally or economically important, apart from their wood. Both subgenus Magnolia and subgenus Yulania include species of major horticultural importance, and a change of name would be very undesirable for many people, especially in the horticultural branch. In Europe, Magnolia is even more or less a synonym for Yulania, since most of the cultivated species on this continent have Magnolia (Yulania) denudata as one of their parents. Most taxonomists who acknowledge close relations between Yulania and Michelia therefore support the third option and join Michelia with Magnolia. The same goes, mutatis mutandis, for the (former) genera Talauma and Dugandiodendron, which are then placed in subgenus Magnolia, and genus Manglietia, which could be joined with subgenus Magnolia or may even earn the status of an extra subgenus. Elmerrillia seems to be closely related to Michelia and Yulania, in which case it will most likely be treated in the same way as Michelia is now. The precise nomenclatural status of small or monospecific genera like Kmeria, Parakmeria, Pachylarnax, Manglietiastrum, Aromadendron, Woonyoungia, Alcimandra, Paramichelia, and Tsoongiodendron remains uncertain. Taxonomists who merge Michelia into Magnolia tend to merge these small genera into Magnolia s.l. as well. Botanists do not agree on whether to recognize a big Magnolia or the different small genera. For example, Flora of China offers two choices: a large genus Magnolia, which includes about 300 species and everything in the Magnoliaceae except Liriodendron (tulip tree), or 16 different genera, some of them recently split out or re-recognized, each of which contains up to 50 species. The western co-author favors the big genus Magnolia, whereas the Chinese authors recognize the different small genera.

Fossil record

Fossils assignable to Magnolia extend into the Paleogene, such as Magnolia nanningensis, named for mummified wood from the Oligocene of Guangxi, China, which has a close affinity to members of the modern section Michelia.

Subdivision

In 2012, the Magnolia Society published on its website a classification of the genus produced by Richard B. Figlar, based on a 2004 classification by Figlar and Hans Peter Nooteboom. Species of Magnolia were listed under three subgenera, 12 sections, and 13 subsections. Subsequent molecular phylogenetic studies have led to some revisions of this system; for example, the subgenus Magnolia was found not to be monophyletic. A revised classification in 2020, based on a phylogenetic analysis of complete chloroplast genomes, abandoned subgenera and subsections, dividing Magnolia into 15 sections. The relationships among these sections confirm the paraphyletic status of subgenus Magnolia, and comparison of the 2012 and 2020 classifications shows that the circumscriptions of the corresponding taxa may not be the same.

Uses

Horticulture

In general, the genus Magnolia has attracted horticultural interest. Some, such as the shrub M. stellata (star magnolia) and the tree M. × soulangeana (saucer magnolia), flower quite early in the spring, before the leaves open. Others flower in late spring or early summer, including M. virginiana (sweetbay magnolia) and M. grandiflora (southern magnolia). The shape of these flowers lends itself to the common name tulip tree that is sometimes applied to some Magnolia species.
Hybridisation has been immensely successful in combining the best aspects of different species to give plants which flower at an earlier age than the parent species, as well as having more impressive flowers. One of the most popular garden magnolias, M. × soulangeana, is a hybrid of M. liliiflora and M. denudata. In the eastern United States, five native species are frequently in cultivation: M. acuminata (as a shade tree), M. grandiflora, M. virginiana, M. tripetala, and M. macrophylla. The last two species must be planted where high winds are not a frequent problem because of the large size of their leaves.

Culinary

The flowers of many species are considered edible. In parts of England, the petals of M. grandiflora are pickled and used as a spicy condiment. In some Asian cuisines, the buds are pickled and used to flavor rice and scent tea. In Japan, the young leaves and flower buds of M. hypoleuca are broiled and eaten as a vegetable. Older leaves are made into a powder and used as seasoning; dried, whole leaves are placed on a charcoal brazier, filled with miso, leeks, daikon, and shiitake, and broiled. There is a type of miso which is seasoned with magnolia, hoba miso.

Traditional medicine

The bark and flower buds of M. officinalis have long been used in traditional Chinese medicine, where they are known as hou po (厚朴). In Japan, kōboku, M. obovata, has been used in a similar manner.

Timber

The cucumbertree, M. acuminata, grows to large size and is harvested as a timber tree in northeastern U.S. forests. Its wood is sold as "yellow poplar" along with that of the tuliptree, Liriodendron tulipifera. The Fraser magnolia, M. fraseri, also sometimes attains enough size to be harvested as well.

Chemical compounds and bioeffects

The aromatic bark contains magnolol, honokiol, 4-O-methylhonokiol, and obovatol. Magnolol and honokiol activate the nuclear receptor peroxisome proliferator-activated receptor gamma.

Culture

Symbols

White or Yulan magnolia (subgenus Yulania) is the official flower of Shanghai. Magnolia grandiflora is the state flower of both Mississippi and Louisiana. The flower's abundance in Mississippi is reflected in its nickname of "Magnolia State" and the state flag. The magnolia is also the state tree of Mississippi. One of the many nicknames for Houston is "Magnolia City". Historically, magnolias have been associated with the Southern United States. Magnolia sieboldii is the national flower of North Korea and the Gangnam District of Seoul.

Arts

The 1989 movie Steel Magnolias is based on a 1987 play, Steel Magnolias, by Robert Harling. They are about the bond among a group of women from Louisiana, who can be as beautiful as magnolias, but are as tough as steel. The name 'magnolia' specifically refers to a magnolia tree about which they are arguing at the beginning. In the 1939 song "Strange Fruit", originally written as a poem by New York schoolteacher and communist activist Abel Meeropol to condemn the practice of lynching, the magnolia flower was referred to as being associated with the Southern United States, where many lynchings took place:

Pastoral scene of the gallant south
The bulging eyes and the twisted mouth
Scent of magnolias, sweet and fresh,
Then the sudden smell of burning flesh.
Despite Meeropol's frequent mention of the South and magnolia trees, the horrific image which inspired his poem, Lawrence Beitler's 1930 photograph of the lynching of Thomas Shipp and Abram Smith following the robbery and murder of Claude Deeter, was taken in Marion, Indiana, where magnolia trees are less common. In the 1960s, magnolias were a symbol of the South in the popular press: the New York Post noted of Lyndon Johnson that "A man who wore a ten-gallon Stetson and spoke with a magnolia accent had little hope of winning the Democratic nomination in 1960", and biographer Robert Caro picks up the symbol by saying that when Johnson became president "[t]he taint of magnolias still remained to be scrubbed off."
Rhododendron
Rhododendron (plural rhododendra) is a very large genus of about 1,024 species of woody plants in the heath family (Ericaceae). They can be either evergreen or deciduous. Most species are native to eastern Asia and the Himalayan region, but smaller numbers occur elsewhere in Asia, and in North America, Europe and Australia. It is the national flower of Nepal, the state flower of Washington and West Virginia in the United States, the state flower of Nagaland and Himachal Pradesh in India, the provincial flower of Jeju Province in South Korea, the provincial flower of Jiangxi in China and the state tree of Sikkim and Uttarakhand in India. Most species have brightly colored flowers which bloom from late winter through to early summer. Azaleas make up two subgenera of Rhododendron. They are distinguished from "true" rhododendrons by having only five anthers per flower.

Etymology

The common and generic name comes from the Ancient Greek rhódon ("rose") and déndron ("tree").

Description

Rhododendron is a genus of shrubs and small to (rarely) large trees, ranging from dwarf alpine species to the tree-sized R. protistum var. giganteum, the tallest reported in the genus. The leaves are spirally arranged; leaf size also varies greatly, with the largest leaves found in R. sinogrande. They may be either evergreen or deciduous. In some species, the undersides of the leaves are covered with scales (lepidote) or hairs (indumentum). Some of the best known species are noted for their many clusters of large flowers. A recently discovered species in New Guinea has flowers up to six inches (fifteen centimeters) in width, the largest in the whole genus. There are alpine species with small flowers and small leaves, and tropical species such as section Vireya that often grow as epiphytes. Species in this genus may be part of the heath complex in oak-heath forests in eastern North America. They have frequently been divided based on the presence or absence of scales on the abaxial (lower) leaf surface (lepidote or elepidote). These scales, unique to subgenus Rhododendron, are modified hairs consisting of a polygonal scale attached by a stalk. Rhododendron are characterised by having inflorescences with scarious (dry) perulae, a chromosome number of x=13, fruit that has a septicidal capsule, an ovary that is superior (or nearly so), stamens that have no appendages, and agglutinate (clumped) pollen.

Taxonomy

Rhododendron is the largest genus in the family Ericaceae, with over 1,000 species (though estimates vary from 850 to 1,200), and is morphologically diverse. Consequently, the taxonomy has been historically complex.

Early history

Although rhododendrons had been known since the description of Rhododendron hirsutum by Charles de l'Écluse (Clusius) in the sixteenth century, and were known to classical writers (Magor 1990) and referred to as Chamaerhododendron (low-growing rose tree), the genus was first formally described by Linnaeus in his Species Plantarum in 1753. He listed five species under Rhododendron: R. ferrugineum (the type species), R. dauricum, R. hirsutum, R. chamaecistus (now Rhodothamnus chamaecistus (L.) Rchb.) and R. maximum. At that time he considered the then known six species of Azalea, which he had described earlier in 1735 in his Systema Naturae, to be a separate genus. Linnaeus' six species of Azalea were Azalea indica, A. pontica, A. lutea, A. viscosa, A. lapponica and A. procumbens (now Kalmia procumbens), which he distinguished from Rhododendron by having five stamens, as opposed to ten.
As new species of what are now considered Rhododendron were discovered, they were assigned to separate genera if they seemed to differ significantly from the type species, for instance Rhodora (Linnaeus 1763) for Rhododendron canadense, and Vireya (Blume 1826) and Hymenanthes (Blume 1826) for Rhododendron metternichii, now R. degronianum. Meanwhile, other botanists such as Salisbury (1796) and Tate (1831) began to question the distinction between Azalea and Rhododendron, and finally in 1836 Azalea was incorporated into Rhododendron and the genus was divided into eight sections. Of these, Tsutsutsi (Tsutsusi), Pentanthera, Pogonanthum, Ponticum and Rhodora are still used, the other sections being Lepipherum, Booram, and Chamaecistus. This structure largely survived until recently (2004), following which the development of molecular phylogeny led to major revisions of traditional morphological classifications, although other authors such as Candolle, who described six sections, used slightly different numeration. As more species became available in the nineteenth century, so did a better understanding of the characteristics necessary for the major divisions. Chief amongst these were Maximowicz's Rhododendreae Asiae Orientali and the work of Planchon. Maximowicz used flower bud position and its relationship with leaf buds to create eight "Sections". Bentham and Hooker used a similar scheme, but called the divisions "Series". It was not until 1893 that Koehne appreciated the significance of scaling and hence the separation of lepidote and elepidote species. The large number of species that were available by the early twentieth century prompted a new approach when Balfour introduced the concept of grouping species into series. The Species of Rhododendron referred to this series concept as the Balfourian system. That system continued up to modern times in Davidian's four-volume The Rhododendron Species.

Modern classification

The next major attempt at classification was by Sleumer, who from 1934 began incorporating the Balfourian series into the older hierarchical structure of subgenera and sections, according to the International Code of Botanical Nomenclature, culminating in 1949 with his "Ein System der Gattung Rhododendron" and subsequent refinements. Most of the Balfourian series are represented by Sleumer as subsections, though some appear as sections or even subgenera. Sleumer based his system on the relationship of the flower buds to the leaf buds, habitat, flower structure, and whether the leaves were lepidote or non-lepidote. While Sleumer's work was widely accepted, many in the United States and the United Kingdom continued to use the simpler Balfourian system of the Edinburgh group. Sleumer's system underwent many revisions by others, predominantly the Edinburgh group in their continuing Royal Botanic Garden Edinburgh notes. Cullen of the Edinburgh group, placing more emphasis on the lepidote characteristics of the leaves, united all of the lepidote species into subgenus Rhododendron, including four of Sleumer's subgenera (Rhododendron, Pseudoazalea, Pseudorhodorastrum, Rhodorastrum). In 1986 Philipson & Philipson raised two sections of subgenus Azaleastrum (Mumeazalea, Candidastrum) to subgenera, while reducing genus Therorhodion to a subgenus of Rhododendron. In 1987 Spethmann, adding phytochemical features, proposed a system with fifteen subgenera grouped into three 'chorus' subgenera. A number of closely related genera had been included together with Rhododendron in a former tribe, Rhodoreae.
These have been progressively incorporated into Rhododendron. Chamberlain and Rae moved the monotypic section Tsusiopsis, together with the monotypic genus Tsusiophyllum, into section Tsutsusi, while Kron & Judd reduced genus Ledum to a subsection of section Rhododendron. Then Judd & Kron moved two species (R. schlippenbachii and R. quinquefolium) from section Brachybachii, subgenus Tsutsusi, and two from section Rhodora, subgenus Pentanthera (R. albrechtii, R. pentaphyllum), into section Sciadorhodion, subgenus Pentanthera. Finally Chamberlain brought the various systems together in 1996, with 1,025 species divided into eight subgenera. Goetsch (2005) provides a comparison of the Sleumer and Chamberlain schemata (Table 1).

Phylogeny

The era of molecular analysis, rather than descriptive features, can be dated to the work of Kurashige (1988) and Kron (1997), who used matK sequencing. Later Gao et al. (2002) used ITS sequences to determine a cladistic analysis. They confirmed that the genus Rhododendron was monophyletic, with subgenus Therorhodion in the basal position, consistent with the matK studies. Following publication of the studies of Goetsch et al. (2005) with RPB2, there began an ongoing realignment of species and groups within the genus, based on evolutionary relationships. Their work was more supportive of Sleumer's original system than the later modifications introduced by Chamberlain et al. The major finding of Goetsch and colleagues was that all species examined (except R. camtschaticum, subgenus Therorhodion) formed three major clades, which they labelled A, B, and C, with the subgenera Rhododendron and Hymenanthes as monophyletic groups nested within clades A and B, respectively. By contrast, subgenera Azaleastrum and Pentanthera were polyphyletic, while R. camtschaticum appeared as a sister to all other rhododendrons. The small polyphyletic subgenera Pentanthera and Azaleastrum were divided between two clades: the four sections of Pentanthera between clades B and C, with two each, while Azaleastrum had one section in each of A and C. Thus subgenera Azaleastrum and Pentanthera needed to be disassembled, and Rhododendron, Hymenanthes and Tsutsusi correspondingly expanded. In addition to the two separate genera included under Rhododendron by Chamberlain (Ledum, Tsusiophyllum), Goetsch et al. added Menziesia (clade C). Despite a degree of paraphyly, the subgenus Rhododendron was otherwise untouched with regard to its three sections, but four other subgenera were eliminated and one new subgenus created, leaving a total of five subgenera in all, down from eight in Chamberlain's scheme. The discontinued subgenera are Pentanthera, Tsutsusi, Candidastrum and Mumeazalea, while a new subgenus was created by elevating subgenus Azaleastrum section Choniastrum to subgenus rank. Subgenus Pentanthera (deciduous azaleas) with its four sections was dismembered by eliminating two sections and redistributing the other two between the existing subgenera in clades B (Hymenanthes) and C (Azaleastrum), although the name was retained in section Pentanthera (14 species), which was moved to subgenus Hymenanthes. Of the remaining three sections, monotypic Viscidula was discontinued by moving R. nipponicum to Tsutsusi (C), while Rhodora (2 species) was itself polyphyletic and was broken up by moving R. canadense to section Pentanthera (B) and R. vaseyi to section Sciadorhodion, which then became a new section of subgenus Azaleastrum (C).
Subgenus Tsutsusi (C) was reduced to section status, retaining the name, and included in subgenus Azaleastrum. Of the three minor subgenera, all in clade C, two were discontinued. The single species of monotypic subgenus Candidastrum (R. albiflorum) was moved to subgenus Azaleastrum, section Sciadorhodion. Similarly the single species in monotypic subgenus Mumeazalea (R. semibarbatum) was placed in the new section Tsutsusi, subgenus Azaleastrum. Genus Menziesia (9 species) was also added to section Sciadorhodion. The remaining small subgenus Therorhodion with its two species was left intact. Thus two subgenera, Hymenanthes and Azaleastrum, were expanded at the expense of four subgenera that were eliminated, although Azaleastrum lost one section (Choniastrum) as a new subgenus, since it was a distinct subclade in A. In all, Hymenanthes increased from one to two sections, while Azaleastrum, by losing one section and gaining two, increased from two to three sections. (See schemata under Subgenera.) Subsequent research has supported the revision by Goetsch, although it has largely concentrated on further defining the phylogeny within the subdivisions. In 2011 the two species of Diplarche were also added to Rhododendron, incertae sedis.

Subdivision

This genus has been progressively subdivided into a hierarchy of subgenus, section, subsection, and species.

Subgenera

Terminology from the Sleumer (1949) system, with five subgenera, is frequently found in older literature and is as follows:

Subgenus Lepidorrhodium Koehne: Lepidotes. 3 sections
Subgenus Eurhododendron Maxim.: Elepidotes
Subgenus Pseudanthodendron Sleumer: Deciduous azaleas. 3 sections
Subgenus Anthodendron Rehder & Wilson: Evergreen azaleas. 3 sections
Subgenus Azaleastrum Planch.: 4 sections

In the later traditional classification, attributed to Chamberlain (1996), and as used by horticulturalists and the American Rhododendron Society, Rhododendron has eight subgenera based on morphology, namely the presence of scales (lepidote), deciduousness of leaves, and the floral and vegetative branching patterns, after Sleumer (1980). These consist of four large and four small subgenera. The first two subgenera (Rhododendron and Hymenanthes) represent the species commonly considered as 'rhododendrons'. The next two smaller subgenera (Pentanthera and Tsutsusi) represent the 'azaleas'. The remaining four subgenera contain very few species. The largest of these is subgenus Rhododendron, containing nearly half of all known species and all of the lepidote species.

Subgenus Rhododendron: Small leaf or lepidotes (scales on the underside of the leaves). 3 sections, 462 species, type species: R. ferrugineum.
Subgenus Hymenanthes: Large leaf or elepidotes (without scales). 1 section, 224 species, type R. degronianum.
Subgenus Pentanthera: Deciduous azaleas. 4 sections, 23 species, type R. luteum.
Subgenus Tsutsusi: Evergreen azaleas. 2 sections, 80 species, type R. indicum.
Subgenus Azaleastrum: 2 sections, 16 species, type R. ovatum.
Subgenus Candidastrum: 1 species, R. albiflorum.
Subgenus Mumeazalea: 1 species, Rhododendron semibarbatum.
Subgenus Therorhodion: 2 species (Rhododendron camtschaticum, Rhododendron redowskianum).

For a comparison of the Sleumer and Chamberlain systems, see Goetsch et al. (2005) Table 1. This division was based on a number of what were thought to be key morphological characteristics.
These included the position of the inflorescence buds (terminal or lateral), whether lepidote or elepidote, deciduousness of leaves, and whether new foliage was derived from the axils of the previous year's shoots or from the lowest scaly leaves. Following the cladistic analysis of Goetsch et al. (2005) this scheme was simplified, based on the discovery of three major clades (A, B, C), as follows.

Clade A
Subgenus Rhododendron: Small leaf or lepidotes (scales on the underside of the leaves). 3 sections, about 400 species, type species: R. ferrugineum.
Subgenus Choniastrum: 11 species
Clade B
Subgenus Hymenanthes: Large leaf or elepidotes (without scales), including deciduous azaleas. 2 sections, about 140–225 species, type R. degronianum.
Clade C
Subgenus Azaleastrum: Evergreen azaleas. 3 sections, about 120 species, type Rhododendron ovatum.
Sister taxon
Subgenus Therorhodion: 2 species (R. camtschaticum and R. redowskianum).

Sections and subsections

The larger subgenera are further subdivided into sections and subsections. Some subgenera contain only a single section, and some sections only a single subsection. Shown here is the traditional classification, with species numbers after Chamberlain (1996), but this scheme is undergoing constant revision. Revisions by Goetsch et al. (2005) and by Craven et al. (2008) are shown in (parenthetical italics). Older ranks such as Series (groups of species) are no longer used but may be found in the literature; the American Rhododendron Society still uses a similar device, called Alliances.

Subgenus Rhododendron L. (3 sections, 462 species; increased to five sections in 2008)
(Discovereya (Sleumer) Argent, raised from Vireya)
Pogonanthum Aitch. & Hemsl. (13 species; Himalaya and adjacent mountains)
(Pseudovireya (C.B.Clarke) Argent, raised from Vireya)
Rhododendron L. (149 species in 25 subsections; temperate to subarctic Northern Hemisphere)
Vireya (Blume) Copel.f. (300 species in 2 subsections; tropical southeast Asia, Australasia. At one time considered a separate subgenus)
Subgenus Hymenanthes (Blume) K.Koch (1 section, 224 species) (Increased to two sections)
Ponticum (24 subsections)
(Pentanthera (2 subsections) – new section, moved from subgenus Pentanthera)
Subgenus Pentanthera (4 sections, 23 species) (Discontinued)
Pentanthera (2 subsections – moved to subgenus Hymenanthes)
Rhodora (L.) G. Don (2 species; Rhododendron canadense, Rhododendron vaseyi) (Discontinued, redistributed)
Sciadorhodion Rehder & Wilson (4 species) (Moved to subgenus Azaleastrum)
Viscidula Matsum. & Nakai (1 species; Rhododendron nipponicum) (Discontinued, added to section Tsutsusi, subgenus Azaleastrum)
Subgenus Tsutsusi (Sweet) Pojarkova (2 sections, 80 species) (Discontinued, reduced to section and moved to subgenus Azaleastrum)
Brachycalyx Sweet (3 alliances, 15 species)
Tsutsusi (Sweet) Pojarkova (65 species)
Subgenus Azaleastrum Planch. (2 sections, 16 species) (Increased to three sections)
Azaleastrum Planch. (5 species)
(Choniastrum Franch. (11 species) (Raised to subgenus))
(Sciadorhodion Rehder & Wilson (4 species) (Moved from subgenus Pentanthera))
(Tsutsusi (Sweet) Pojarkova (reduced from subgenus))
Subgenus Candidastrum Franch. (1 species: Rhododendron albiflorum) (Discontinued, moved to section Sciadorhodion, subgenus Azaleastrum)
Subgenus Mumeazalea (Sleumer) W.R. Philipson & M.N. Philipson (1 species: Rhododendron semibarbatum) (Discontinued, moved to section Tsutsusi, subgenus Azaleastrum)
Subgenus Therorhodion A. Gray (2 species)
(Subgenus Choniastrum Franch. (11 species))
The system used by the World Flora Online uses six subgenera, four of which are divided further:

subgenus Azaleastrum
section Azaleastrum
section Sciadorhodion
section Tsutsutsi
subgenus Choniastrum
subgenus Hymenanthes
section Pentanthera
section Ponticum
section Rhodora
subgenus Rhododendron
section Pogonanthum
section Rhododendron
subgenus Therorhodion
subgenus Vireya
section Albovireya
section Discovireya
section Hadranthe
section Malayovireya
section Pseudovireya
section Schistanthe
section Siphonovireya

Species

Distribution and habitat

Species of the genus Rhododendron are widely distributed between latitudes 80°N and 20°S and are native to areas from North America to Europe, Russia, and Asia, and from Greenland to Queensland, Australia and the Solomon Islands. The centres of diversity are in the Himalayas and Maritime Southeast Asia, with the greatest species diversity in the Sino-Himalayan region, Southwest China and northern Burma, from India (Himachal Pradesh, Uttarakhand, Sikkim and Nagaland) to Nepal, northwestern Yunnan and western Sichuan and southeastern Tibet. Other significant areas of diversity are in the mountains of Korea, Japan and Taiwan. More than 90% of Rhododendron sensu Chamberlain belong to the Asian subgenera Rhododendron, Hymenanthes and section Tsutsusi. Of the first two of these, the species are predominantly found in the area of the Himalayas and Southwest China (the Sino-Himalayan region). The 300 tropical species within the Vireya section of subgenus Rhododendron occupy Maritime Southeast Asia from their presumed Southeast Asian origin to northern Australia, with 55 known species in Borneo and 164 in New Guinea. The species in New Guinea are native to subalpine moist grasslands at around 3,000 metres above sea level in the Central Highlands. Subgenera Rhododendron and Hymenanthes, together with section Pentanthera of subgenus Pentanthera, are also represented to a lesser degree in the mountainous areas of North America and western Eurasia. Subgenus Tsutsusi is found in the maritime regions of East Asia (Japan, Korea, Taiwan, East China), but not in North America or Eurasia. In the United States, native Rhododendron mostly occur in lowland and montane forests in the Pacific Northwest, California, the Northeast, and the Appalachian Mountains.

Ecology

Invasive species

Rhododendron ponticum has become invasive in Ireland and the United Kingdom. It is an introduced species, spreading in woodland areas and replacing the natural understory. R. ponticum is difficult to eradicate, as its roots can make new shoots.

Insects

A number of insects either target rhododendrons or will opportunistically attack them. Rhododendron borers and various weevils are major pests of rhododendrons, and many caterpillars will preferentially devour them. Rhododendron species are used as food plants by the larvae (caterpillars) of some butterflies and moths; see List of Lepidoptera that feed on rhododendrons.

Diseases

Major diseases include Phytophthora root rot and stem and twig fungal dieback. Rhododendron bud blast, a fungal condition that causes buds to turn brown and dry before they can open, is caused by the fungus Pycnostysanus azaleae, which may be brought to the plant by the rhododendron leafhopper, Graphocephala fennahi.

Conservation

In the UK, the forerunner of the Rhododendron, Camellia and Magnolia Group (RCMG), The Rhododendron Society, was founded in 1916, while in Scotland species are being conserved by the Rhododendron Species Conservation Group.
Cultivation

Both species and hybrid rhododendrons (including azaleas) are used extensively as ornamental plants in landscaping in many parts of the world, including both temperate and subtemperate regions. Many species and cultivars are grown commercially for the nursery trade. Rhododendrons can be propagated by air layering or stem cuttings. They can self-propagate by sending up shoots from the roots. Sometimes an attached branch that has drooped to the ground will root in damp mulch, and the resulting rooted plant then can be cut off the parent rhododendron. They can also be reproduced from seed, either by natural seed dispersal or by horticulturalists collecting the spent flower heads and saving and drying the seed for later germination and planting. Rhododendrons are often valued in landscaping for their structure, size, flowers, and the fact that many of them are evergreen. Azaleas are frequently used around foundations and occasionally as hedges, and many larger-leafed rhododendrons lend themselves well to more informal plantings and woodland gardens, or as specimen plants. In some areas, larger rhododendrons can be pruned to encourage a more tree-like form, with some species such as Rhododendron arboreum and R. falconeri eventually growing to a considerable height.

Commercial growing

Rhododendrons are grown commercially in many areas for sale, and seeds were occasionally collected in the wild, a practice now rare in most areas due to the Nagoya Protocol. Larger commercial growers often ship long distances; in the United States, most of them are on the west coast (Oregon, Washington state and California). Large-scale commercial growing often selects for different characteristics than hobbyist growers might want, such as resistance to root rot when overwatered, ability to be forced into budding early, ease of rooting or other propagation, and saleability.

Horticultural divisions

Horticulturally, rhododendrons may be divided into the following groups:

Evergreen rhododendrons – a large group of evergreen shrubs that vary greatly in size. Most rhododendron flowers are bell-shaped and have 10 stamens.
Vireya (Malesian) rhododendrons – epiphytic tender shrubs
Azaleas – a group of shrubs which have smaller and thinner leaves than evergreen rhododendrons. They are generally medium-sized shrubs with smaller funnel-shaped flowers that usually have 5 stamens:
Deciduous hybrid azaleas:
Exbury hybrids – derived from the Knap Hill hybrids, developed by Lionel de Rothschild at the Exbury Estate in England
Ghent (Gandavense) hybrids – Belgian raised
Knap Hill hybrids – developed by Anthony Waterer at the Knap Hill Nursery in England
Mollis hybrids – Dutch and Belgian raised
New Zealand Ilam hybrids – derived from Knap Hill/Exbury hybrids
Occidentale hybrids – English raised
Rustica Flore Pleno hybrids – sweet-scented, double-flowered
Evergreen hybrid azaleas:
Gable hybrids – raised by Joseph B. Gable in Pennsylvania
Glenn Dale hybrids – US raised complex hybrids
Indian (Indica) hybrids – mostly of Belgian origin
Kaempferi hybrids – Dutch raised
Kurume hybrids – Japanese raised
Kyushu hybrids – very hardy Japanese azaleas (to −30 °C)
Oldhamii hybrids – dwarf hybrids raised at Exbury, England
Satsuki hybrids – Japanese raised, originally for bonsai
Shammarello hybrids – raised in northern Ohio
Vuyk (Vuykiana) hybrids – raised in the Netherlands
Azaleodendrons – semi-evergreen hybrids between deciduous azaleas and rhododendrons

Planting and care

Like other ericaceous plants, most rhododendrons prefer acid soils with a pH of roughly 4.5–5.5; some tropical Vireyas and a few other rhododendron species grow as epiphytes and require a planting mix similar to orchids. Rhododendrons have fibrous roots and prefer well-drained soils high in organic material. In areas with poorly drained or alkaline soils, rhododendrons are often grown in raised beds using media such as composted pine bark. Mulching and careful watering are important, especially before the plant is established. A new calcium-tolerant stock of rhododendrons (trademarked as 'Inkarho') was exhibited at the RHS Chelsea Flower Show in London in 2011. Individual hybrids of rhododendrons have been grafted onto a rootstock derived from a single rhododendron plant that was found growing in a chalk quarry. The rootstock is able to grow in calcium-rich soil up to a pH of 7.5.

Hybrids

Rhododendrons are extensively hybridized in cultivation, and natural hybrids often occur in areas where species ranges overlap. There are over 28,000 cultivars of Rhododendron in the International Rhododendron Registry held by the Royal Horticultural Society. Most have been bred for their flowers, but a few are of garden interest because of ornamental leaves and some for ornamental bark or stems. Some hybrids have fragrant flowers, such as the Loderi hybrids, created by crossing Rhododendron fortunei and R. griffithianum. Other examples include the PJM hybrids, formed from a cross between Rhododendron carolinianum and R. dauricum, and named after Peter J. Mezitt of Weston Nurseries, Massachusetts.

Toxicity

Some species of rhododendron are poisonous to grazing animals because of a toxin called grayanotoxin in their pollen and nectar. People have been known to become ill from eating mad honey made by bees feeding on rhododendron and azalea flowers. Xenophon described the odd behaviour of Greek soldiers after having consumed honey in a village surrounded by Rhododendron ponticum during the march of the Ten Thousand in 401 BCE. Pompey's soldiers reportedly suffered lethal casualties following the consumption of honey made from Rhododendron deliberately left behind by Pontic forces in 67 BCE during the Third Mithridatic War. Later, it was recognized that honey resulting from these plants has a slightly hallucinogenic and laxative effect. The suspect rhododendrons are Rhododendron ponticum and Rhododendron luteum (formerly Azalea pontica), both found in northern Asia Minor. Eleven similar cases were documented in Istanbul, Turkey, during the 1980s. Rhododendron is extremely toxic to horses, with some animals dying within a few hours of ingesting the plant, although most horses tend to avoid it if they have access to good forage. Rhododendron, including its stems, leaves and flowers, contains toxins that, if ingested by a cat, can cause seizures and even coma and death.

Uses

Rhododendron species have long been used in traditional medicine.
Animal studies and in vitro research have identified possible anti-inflammatory and hepatoprotective activities, which may be due to the antioxidant effects of flavonoids or other phenolic compounds and saponins the plant contains. Xiong et al. found that the root of the plant is able to reduce the activity of NF-κB in rats. In Nepal, the flower is considered edible and enjoyed for its sour taste. The pickled flower can last for months, and the flower juice is also marketed. The flower, fresh or dried, is added to fish curry in the belief that it will soften the bones. The juice of the rhododendron flower is used to make a squash called burans (named after the flower) in the hilly regions of Uttarakhand. It is admired for its distinctive flavor and color. Labrador tea The herbal tea called Labrador tea (not a true tea) is made from one of three closely related species: Rhododendron tomentosum (northern Labrador tea, previously Ledum palustre) Rhododendron groenlandicum (bog Labrador tea, previously Ledum groenlandicum or Ledum latifolium) Rhododendron neoglandulosum (western Labrador tea, or trapper's tea, previously Ledum glandulosum) In culture In Uttarakhand, in north India, the Buransh flower is deeply embedded in local culture, playing a significant role in festivals like Holi and weddings, where it is used in garlands and decorations to bless attendees. The flower is also utilized in making a healthful, antioxidant-rich juice that is popular during local festivities and summer months. Additionally, Buransh flowers are incorporated into local arts and crafts, where they are used to make colorful necklaces and jewelry, symbolizing the spiritual and physical prosperity of the community. Rhododendron arboreum (lali guransh) is the national flower of Nepal. In the language of flowers, the rhododendron symbolizes danger and a warning to beware. R. ponticum is the state flower of Indian-administered Kashmir and Pakistan-controlled Kashmir. Rhododendron niveum is the state tree of Sikkim in India. Rhododendron arboreum is also the state tree of the state of Uttarakhand, India. Pink rhododendron (Rhododendron campanulatum) is the state flower of Himachal Pradesh, India. Rhododendron is also the provincial flower of Jiangxi, China and the state flower of Nagaland, the 16th state of the Indian Union. Rhododendron maximum, the most widespread rhododendron of the Appalachian Mountains, is the state flower of the US state of West Virginia, and is in the Flag of West Virginia. Rhododendron macrophyllum, a widespread rhododendron of the Pacific Northwest, is the state flower of the US state of Washington. Amongst the Zomi tribes of India and Myanmar, the rhododendron, called "Ngeisok", is used in a poetic manner to signify a lady. In media The nineteenth-century American poet and essayist Ralph Waldo Emerson wrote an 1834 poem titled "The Rhodora, On Being Asked, Whence Is the Flower". Rhododendrons play a role and are soliloquized in James Joyce's Ulysses. The flowers are referenced throughout Daphne Du Maurier's novel Rebecca (1938) and in Sharon Creech's young adult novel Walk Two Moons (1994). British author Jasper Fforde also uses rhododendron as a motif throughout many of his books, e.g. the Thursday Next series and Shades of Grey (2009). The effects of R. ponticum were mentioned in the 2009 film Sherlock Holmes as a proposed way to arrange a fake execution.
It was also mentioned in the third episode of Season 2 of BBC's Sherlock, where it was speculated to have been part of Sherlock's fake death scheme.
Biology and health sciences
Ericales
null
40344
https://en.wikipedia.org/wiki/Semiconductor%20device
Semiconductor device
A semiconductor device is an electronic component that relies on the electronic properties of a semiconductor material (primarily silicon, germanium, and gallium arsenide, as well as organic semiconductors) for its function. Its conductivity lies between that of conductors and insulators. Semiconductor devices have replaced vacuum tubes in most applications. They conduct electric current in the solid state, rather than as free electrons across a vacuum (typically liberated by thermionic emission) or as free electrons and ions through an ionized gas. Semiconductor devices are manufactured both as single discrete devices and as integrated circuits, which consist of two or more devices (which can number from the hundreds to the billions) manufactured and interconnected on a single semiconductor wafer (also called a substrate). Semiconductor materials are useful because their behavior can be easily manipulated by the deliberate addition of impurities, known as doping. Semiconductor conductivity can be controlled by the introduction of an electric or magnetic field, by exposure to light or heat, or by the mechanical deformation of a doped monocrystalline silicon grid; thus, semiconductors can make excellent sensors. Current conduction in a semiconductor occurs due to mobile or "free" electrons and electron holes, collectively known as charge carriers. Doping a semiconductor with a small proportion of an atomic impurity, such as phosphorus or boron, greatly increases the number of free electrons or holes within the semiconductor. When a doped semiconductor contains excess holes, it is called a p-type semiconductor (p for positive electric charge); when it contains excess free electrons, it is called an n-type semiconductor (n for negative electric charge). In an n-type semiconductor the majority of mobile charge carriers are negatively charged electrons, while in a p-type semiconductor they are positively charged holes. Semiconductor manufacturing precisely controls the location and concentration of p- and n-type dopants. Joining n-type and p-type semiconductors forms a p–n junction. The most common semiconductor device in the world is the MOSFET (metal–oxide–semiconductor field-effect transistor), also called the MOS transistor. As of 2013, billions of MOS transistors are manufactured every day. The number of semiconductor devices made per year has grown by 9.1% on average since 1978, and shipments in 2018 were predicted for the first time to exceed 1 trillion, meaning that well over 7 trillion have been made to date. Main types Diode A semiconductor diode is a device typically made from a single p–n junction. At the junction of a p-type and an n-type semiconductor, a depletion region forms where current conduction is inhibited by the lack of mobile charge carriers. When the device is forward biased (connected with the p-side at a higher electric potential than the n-side), this depletion region is diminished, allowing for significant conduction. Conversely, only a very small current can be achieved when the diode is reverse biased (connected with the n-side at lower electric potential than the p-side, and thus the depletion region expanded). Exposing a semiconductor to light can generate electron–hole pairs, which increases the number of free carriers and thereby the conductivity. Diodes optimized to take advantage of this phenomenon are known as photodiodes.
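The strong asymmetry between forward and reverse bias can be made concrete with the Shockley ideal diode equation, a standard first-order model that is not derived in this article; the saturation current and temperature values in the sketch below are illustrative assumptions:

```python
import math

def diode_current(v_bias, i_sat=1e-12, v_thermal=0.02585):
    """Shockley ideal diode equation: I = I_S * (exp(V / V_T) - 1).

    i_sat     -- reverse saturation current in amperes (illustrative value)
    v_thermal -- thermal voltage kT/q at about 300 K, in volts
    """
    return i_sat * (math.exp(v_bias / v_thermal) - 1.0)

# Forward bias: the depletion region shrinks and current grows exponentially.
print(f"+0.6 V forward: {diode_current(0.6):.3e} A")   # roughly milliamp scale
# Reverse bias: only the tiny saturation current flows.
print(f"-0.6 V reverse: {diode_current(-0.6):.3e} A")  # about -1e-12 A
```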
Compound semiconductor diodes can also produce light, as in light-emitting diodes and laser diodes. Transistor Bipolar junction transistor Bipolar junction transistors (BJTs) are formed from two p–n junctions, in either n–p–n or p–n–p configuration. The middle, or base, region between the junctions is typically very narrow. The other regions, and their associated terminals, are known as the emitter and the collector. A small current injected through the junction between the base and the emitter changes the properties of the base-collector junction so that it can conduct current even though it is reverse biased. This creates a much larger current between the collector and emitter, controlled by the base-emitter current. Field-effect transistor Another type of transistor, the field-effect transistor (FET), operates on the principle that semiconductor conductivity can be increased or decreased by the presence of an electric field. An electric field can increase the number of free electrons and holes in a semiconductor, thereby changing its conductivity. The field may be applied by a reverse-biased p–n junction, forming a junction field-effect transistor (JFET), or by an electrode insulated from the bulk material by an oxide layer, forming a metal–oxide–semiconductor field-effect transistor (MOSFET). Metal-oxide-semiconductor The metal-oxide-semiconductor FET (MOSFET, or MOS transistor), a solid-state device, is by far the most widely used semiconductor device today. It accounts for at least 99.9% of all transistors, and an estimated 13 sextillion MOSFETs were manufactured between 1960 and 2018. The gate electrode is charged to produce an electric field that controls the conductivity of a "channel" between two terminals, called the source and drain. Depending on the type of carrier in the channel, the device may be an n-channel (for electrons) or a p-channel (for holes) MOSFET. Although the MOSFET is named in part for its "metal" gate, in modern devices polysilicon is typically used instead. Other types Two-terminal devices: DIAC Diode (rectifier diode) Gunn diode IMPATT diode Laser diode Light-emitting diode (LED) Photocell Phototransistor PIN diode Schottky diode Solar cell Transient-voltage-suppression diode Tunnel diode VCSEL Zener diode Three-terminal devices: Bipolar transistor Darlington transistor Field-effect transistor Insulated-gate bipolar transistor (IGBT) Silicon-controlled rectifier Thyristor TRIAC Unijunction transistor Four-terminal devices: Hall effect sensor (magnetic field sensor) Photocoupler (Optocoupler) Materials By far, silicon (Si) is the most widely used material in semiconductor devices. Its combination of low raw material cost, relatively simple processing, and a useful temperature range makes it currently the best compromise among the various competing materials. Silicon used in semiconductor device manufacturing is currently fabricated into boules that are large enough in diameter to allow the production of 300 mm (12 in.) wafers. Germanium (Ge) was a widely used early semiconductor material, but its thermal sensitivity makes it less useful than silicon. Today, germanium is often alloyed with silicon for use in very-high-speed SiGe devices; IBM is a major producer of such devices.
Gallium arsenide (GaAs) is also widely used in high-speed devices, but so far it has been difficult to form large-diameter boules of this material, limiting wafer diameters to sizes significantly smaller than those of silicon wafers and thus making mass production of GaAs devices significantly more expensive than that of silicon. Gallium nitride (GaN) is gaining popularity in high-power applications including power ICs, light-emitting diodes (LEDs), and RF components due to its high strength and thermal conductivity. Compared to silicon, GaN's band gap is more than 3 times wider at 3.4 eV, and it conducts electrons 1,000 times more efficiently. Other less common materials are also in use or under investigation. Silicon carbide (SiC) is also gaining popularity in power ICs, has found some application as the raw material for blue LEDs, and is being investigated for use in semiconductor devices that must withstand very high operating temperatures and environments with significant levels of ionizing radiation. IMPATT diodes have also been fabricated from SiC. Various indium compounds (indium arsenide, indium antimonide, and indium phosphide) are also being used in LEDs and solid-state laser diodes. Selenium sulfide is being studied in the manufacture of photovoltaic solar cells. The most common use for organic semiconductors is organic light-emitting diodes. Applications All transistor types can be used as the building blocks of logic gates, which are fundamental in the design of digital circuits. In digital circuits like microprocessors, transistors act as on-off switches; in the MOSFET, for instance, the voltage applied to the gate determines whether the switch is on or off. Transistors used for analog circuits do not act as on-off switches; rather, they respond to a continuous range of inputs with a continuous range of outputs. Common analog circuits include amplifiers and oscillators. Circuits that interface or translate between digital circuits and analog circuits are known as mixed-signal circuits. Power semiconductor devices are discrete devices or integrated circuits intended for high current or high voltage applications. Power integrated circuits combine IC technology with power semiconductor technology; these are sometimes referred to as "smart" power devices. Several companies specialize in manufacturing power semiconductors. Component identifiers The part numbers of semiconductor devices are often manufacturer specific. Nevertheless, there have been attempts at creating standards for type codes, and a subset of devices follow those. For discrete devices, for example, there are three standards: JEDEC JESD370B in the United States, Pro Electron in Europe, and Japanese Industrial Standards (JIS). Fabrication History of development Cat's-whisker detector Semiconductors had been used in the electronics field for some time before the invention of the transistor. Around the turn of the 20th century they were quite common as detectors in radios, used in a device called a "cat's whisker" developed by Jagadish Chandra Bose and others. These detectors were somewhat troublesome, however, requiring the operator to move a small tungsten filament (the whisker) around the surface of a galena (lead sulfide) or carborundum (silicon carbide) crystal until it suddenly started working. Then, over a period of a few hours or days, the cat's whisker would slowly stop working and the process would have to be repeated. At the time their operation was completely mysterious.
After the introduction of more reliable vacuum-tube-based radios, which also provided amplification, the cat's whisker systems quickly disappeared. The "cat's whisker" is a primitive example of a special type of diode still popular today, called a Schottky diode. Metal rectifier Another early type of semiconductor device is the metal rectifier, in which the semiconductor is copper oxide or selenium. Westinghouse Electric (1886) was a major manufacturer of these rectifiers. World War II During World War II, radar research quickly pushed radar receivers to operate at ever higher frequencies (about 4000 MHz), and traditional tube-based radio receivers no longer worked well. The introduction of the cavity magnetron from Britain to the United States in 1940 during the Tizard Mission resulted in a pressing need for a practical high-frequency amplifier. On a whim, Russell Ohl of Bell Laboratories decided to try a cat's whisker. By this point, they had not been in use for a number of years, and no one at the labs had one. After hunting one down at a used radio store in Manhattan, he found that it worked much better than tube-based systems. Ohl investigated why the cat's whisker functioned so well. He spent most of 1939 trying to grow purer versions of the crystals. He soon found that with higher-quality crystals their finicky behavior went away, but so did their ability to operate as a radio detector. One day he found one of his purest crystals nevertheless worked well, and it had a clearly visible crack near the middle. However, as he moved about the room trying to test it, the detector would mysteriously work, and then stop again. After some study he found that the behavior was controlled by the light in the room – more light caused more conductance in the crystal. He invited several other people to see this crystal, and Walter Brattain immediately realized there was some sort of junction at the crack. Further research cleared up the remaining mystery. The crystal had cracked because either side contained very slightly different amounts of the impurities Ohl could not remove – about 0.2%. One side of the crystal had impurities that added extra electrons (the carriers of electric current) and made it a "conductor". The other had impurities that wanted to bind to these electrons, making it (what he called) an "insulator". Because the two parts of the crystal were in contact with each other, the electrons could be pushed out of the conductive side, which had extra electrons (soon to be known as the emitter), and replaced by new ones provided (from a battery, for instance); these would flow into the insulating portion and be collected by the whisker filament (named the collector). However, when the voltage was reversed, the electrons being pushed into the collector would quickly fill up the "holes" (the electron-needy impurities), and conduction would stop almost instantly. This junction of the two crystals (or parts of one crystal) created a solid-state diode, and the concept soon became known as semiconduction. The mechanism of action when the diode is off has to do with the separation of charge carriers around the junction. This is called a "depletion region". Development of the diode Armed with the knowledge of how these new diodes worked, a vigorous effort began to learn how to build them on demand. Teams at Purdue University, Bell Labs, MIT, and the University of Chicago all joined forces to build better crystals.
Within a year germanium production had been perfected to the point where military-grade diodes were being used in most radar sets. Development of the transistor After the war, William Shockley decided to attempt the building of a triode-like semiconductor device. He secured funding and lab space, and went to work on the problem with Brattain and John Bardeen. The key to the development of the transistor was a further understanding of electron mobility in a semiconductor. It was realized that if there were some way to control the flow of the electrons from the emitter to the collector of this newly discovered diode, an amplifier could be built. For instance, if contacts are placed on both sides of a single type of crystal, current will not flow between them through the crystal. However, if a third contact could then "inject" electrons or holes into the material, the current would flow. Actually doing this appeared to be very difficult. If the crystal were of any reasonable size, the number of electrons (or holes) required to be injected would have to be very large, making it less than useful as an amplifier because it would require a large injection current to start with. That said, the whole idea of the crystal diode was that the crystal itself could provide the electrons over a very small distance, the depletion region. The key appeared to be to place the input and output contacts very close together on the surface of the crystal on either side of this region. Brattain started working on building such a device, and tantalizing hints of amplification continued to appear as the team worked on the problem. Sometimes the system would work but then stop working unexpectedly. In one instance a non-working system started working when placed in water. Ohl and Brattain eventually developed a new branch of quantum mechanics, which became known as surface physics, to account for the behavior. The electrons in any one piece of the crystal would migrate about due to nearby charges. Electrons in the emitters, or the "holes" in the collectors, would cluster at the surface of the crystal, where they could find their opposite charge "floating around" in the air (or water). Yet they could be pushed away from the surface with the application of a small amount of charge from any other location on the crystal. Instead of needing a large supply of injected electrons, a very small number in the right place on the crystal would accomplish the same thing. Their understanding solved, to some degree, the problem of needing a very small control area. Instead of needing two separate semiconductors connected by a common, but tiny, region, a single larger surface would serve. The electron-emitting and collecting leads would both be placed very close together on the top, with the control lead placed on the base of the crystal. When current flowed through this "base" lead, the electrons or holes would be pushed out, across the block of the semiconductor, and collect on the far surface. As long as the emitter and collector were very close together, this should allow enough electrons or holes between them to allow conduction to start. First transistor The Bell team made many attempts to build such a system with various tools but generally failed. Setups where the contacts were close enough were invariably as fragile as the original cat's whisker detectors had been, and would work briefly, if at all. Eventually, they had a practical breakthrough.
A piece of gold foil was glued to the edge of a plastic wedge, and then the foil was sliced with a razor at the tip of the triangle. The result was two very closely spaced contacts of gold. When the wedge was pushed down onto the surface of a crystal and voltage was applied to the other side (on the base of the crystal), current started to flow from one contact to the other as the base voltage pushed the electrons away from the base towards the other side near the contacts. The point-contact transistor had been invented. Although the device had been constructed a week earlier, Brattain's notes describe the first demonstration to higher-ups at Bell Labs on the afternoon of 23 December 1947, often given as the birthdate of the transistor. What is now known as the "p–n–p point-contact germanium transistor" operated as a speech amplifier with a power gain of 18 in that trial. John Bardeen, Walter Houser Brattain, and William Bradford Shockley were awarded the 1956 Nobel Prize in Physics for their work. Etymology of "transistor" Bell Telephone Laboratories needed a generic name for their new invention: "Semiconductor Triode", "Solid Triode", "Surface States Triode", "Crystal Triode" and "Iotatron" were all considered, but "transistor", coined by John R. Pierce, won an internal ballot. The rationale for the name is described in the following extract from the company's Technical Memoranda (May 28, 1948) calling for votes: Transistor. This is an abbreviated combination of the words "transconductance" or "transfer", and "varistor". The device logically belongs in the varistor family, and has the transconductance or transfer impedance of a device having gain, so that this combination is descriptive. Improvements in transistor design Shockley was upset about the device being credited to Brattain and Bardeen, whom he felt had built it "behind his back" to take the glory. Matters became worse when Bell Labs lawyers found that some of Shockley's own writings on the transistor were close enough to those of an earlier 1925 patent by Julius Edgar Lilienfeld that they thought it best that his name be left off the patent application. Shockley was incensed, and decided to demonstrate who was the real brains of the operation. A few months later he invented an entirely new and considerably more robust type of transistor, the bipolar junction transistor, with a layered or 'sandwich' structure, which was used for the vast majority of all transistors into the 1960s. With the fragility problems solved, the remaining problem was purity. Making germanium of the required purity was proving to be a serious problem and limited the yield of transistors that actually worked from a given batch of material. Germanium's sensitivity to temperature also limited its usefulness. Scientists theorized that silicon would be easier to fabricate, but few investigated this possibility. Former Bell Labs scientist Gordon K. Teal was the first to develop a working silicon transistor at the nascent Texas Instruments, giving it a technological edge. From the late 1950s, most transistors were silicon-based. Within a few years transistor-based products, most notably easily portable radios, were appearing on the market. "Zone melting", a technique using a band of molten material moving through the crystal, further increased crystal purity. Metal-oxide semiconductor In 1955, Carl Frosch and Lincoln Derick accidentally grew a layer of silicon dioxide over the silicon wafer and observed surface passivation effects.
By 1957 Frosch and Derick, using masking and predeposition, were able to manufacture silicon dioxide field effect transistors: the first planar transistors, in which drain and source were adjacent at the same surface. They showed that silicon dioxide insulated and protected silicon wafers and prevented dopants from diffusing into the wafer. At Bell Labs, the importance of Frosch and Derick's technique and transistors was immediately realized. Results of their work circulated around Bell Labs in the form of BTL memos before being published in 1957. At Shockley Semiconductor, Shockley had circulated the preprint of their article in December 1956 to all his senior staff, including Jean Hoerni, who would later invent the planar process in 1959 while at Fairchild Semiconductor. After this, J.R. Ligenza and W.G. Spitzer studied the mechanism of thermally grown oxides, fabricated a high quality Si/SiO2 stack and published their results in 1960. Following this research, Mohamed Atalla and Dawon Kahng proposed a silicon MOS transistor in 1959 and successfully demonstrated a working MOS device with their Bell Labs team in 1960. Their team included E. E. LaBate and E. I. Povilonis, who fabricated the device; M. O. Thurston, L. A. D'Asaro, and J. R. Ligenza, who developed the diffusion processes; and H. K. Gummel and R. Lindner, who characterized the device. With its scalability, much lower power consumption, and higher density compared to bipolar junction transistors, the MOSFET became the most common type of transistor in computers, electronics, and communications technology such as smartphones. The US Patent and Trademark Office calls the MOSFET a "groundbreaking invention that transformed life and culture around the world". Bardeen's 1948 inversion layer concept forms the basis of CMOS technology today. CMOS (complementary MOS) was invented by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963. The first report of a floating-gate MOSFET was made by Dawon Kahng and Simon Sze in 1967. FinFET (fin field-effect transistor), a type of 3D multi-gate MOSFET, was proposed by H. R. Farrah (Bendix Corporation) and R. F. Steinberg in 1967 and first built by Digh Hisamoto and his team of researchers at Hitachi Central Research Laboratory in 1989.
Technology
Electronics
null
40345
https://en.wikipedia.org/wiki/MOSFET
MOSFET
In electronics, the metal–oxide–semiconductor field-effect transistor (MOSFET, MOS-FET, MOS FET, or MOS transistor) is a type of field-effect transistor (FET), most commonly fabricated by the controlled oxidation of silicon. It has an insulated gate, the voltage of which determines the conductivity of the device. This ability to change conductivity with the amount of applied voltage can be used for amplifying or switching electronic signals. The term metal–insulator–semiconductor field-effect transistor (MISFET) is almost synonymous with MOSFET. Another near-synonym is insulated-gate field-effect transistor (IGFET). The main advantage of a MOSFET is that it requires almost no input current to control the load current, when compared to bipolar junction transistors (BJTs). In an enhancement mode MOSFET, voltage applied to the gate terminal increases the conductivity of the device. In depletion mode transistors, voltage applied at the gate reduces the conductivity. The "metal" in the name MOSFET is sometimes a misnomer, because the gate material can be a layer of polysilicon (polycrystalline silicon). Similarly, "oxide" in the name can also be a misnomer, as different dielectric materials are used with the aim of obtaining strong channels with smaller applied voltages. The MOSFET is by far the most common transistor in digital circuits, as billions may be included in a memory chip or microprocessor. As MOSFETs can be made with either p-type or n-type semiconductors, complementary pairs of MOS transistors can be used to make switching circuits with very low power consumption, in the form of CMOS logic. History The basic principle of the field-effect transistor was first patented by Julius Edgar Lilienfeld in 1925. In 1934, inventor Oskar Heil independently patented a similar device in Europe. In the 1940s, Bell Labs scientists William Shockley, John Bardeen and Walter Houser Brattain attempted to build a field-effect device, which led to their discovery of the transistor effect. However, the structure failed to show the anticipated effects, due to the problem of surface states: traps on the semiconductor surface that hold electrons immobile. With no surface passivation, they were only able to build BJTs and thyristors. In 1955, Carl Frosch and Lincoln Derick accidentally grew a layer of silicon dioxide over the silicon wafer and observed surface passivation effects. By 1957 Frosch and Derick, using masking and predeposition, were able to manufacture silicon dioxide field effect transistors: the first planar transistors, in which drain and source were adjacent at the same surface. They showed that silicon dioxide insulated and protected silicon wafers and prevented dopants from diffusing into the wafer. At Bell Labs, the importance of Frosch and Derick's technique and transistors was immediately realized. Results of their work circulated around Bell Labs in the form of BTL memos before being published in 1957. At Shockley Semiconductor, Shockley had circulated the preprint of their article in December 1956 to all his senior staff, including Jean Hoerni, who would later invent the planar process in 1959 while at Fairchild Semiconductor. After this, J.R. Ligenza and W.G. Spitzer studied the mechanism of thermally grown oxides, fabricated a high quality Si/SiO2 stack and published their results in 1960.
Following this research, Mohamed Atalla and Dawon Kahng proposed a silicon MOS transistor in 1959 and successfully demonstrated a working MOS device with their Bell Labs team in 1960. Their team included E. E. LaBate and E. I. Povilonis, who fabricated the device; M. O. Thurston, L. A. D'Asaro, and J. R. Ligenza, who developed the diffusion processes; and H. K. Gummel and R. Lindner, who characterized the device. This was a culmination of decades of field-effect research that began with Lilienfeld. The first MOS transistor at Bell Labs was about 100 times slower than contemporary bipolar transistors and was initially seen as inferior. Nevertheless, Kahng pointed out several advantages of the device, notably ease of fabrication and its application in integrated circuits. Composition Usually the semiconductor of choice is silicon. Some chip manufacturers, most notably IBM and Intel, use an alloy of silicon and germanium (SiGe) in MOSFET channels. Many semiconductors with better electrical properties than silicon, such as gallium arsenide, do not form good semiconductor-to-insulator interfaces, and thus are not suitable for MOSFETs. Research continues on creating insulators with acceptable electrical characteristics on other semiconductor materials. To overcome the increase in power consumption due to gate current leakage, a high-κ dielectric is used instead of silicon dioxide for the gate insulator, while polysilicon is replaced by metal gates (e.g. Intel, 2009). The gate is separated from the channel by a thin insulating layer, traditionally of silicon dioxide and later of silicon oxynitride. Some companies use a high-κ dielectric and metal gate combination in the 45 nanometer node. When a voltage is applied between the gate and the source, the electric field generated penetrates through the oxide and creates an inversion layer or channel at the semiconductor-insulator interface. The inversion layer provides a channel through which current can pass between source and drain terminals. Varying the voltage between the gate and body modulates the conductivity of this layer and thereby controls the current flow between drain and source. This is known as enhancement mode. Operation Metal–oxide–semiconductor structure The traditional metal–oxide–semiconductor (MOS) structure is obtained by growing a layer of silicon dioxide (SiO2) on top of a silicon substrate, commonly by thermal oxidation, and then depositing a layer of metal or polycrystalline silicon (the latter is commonly used). As silicon dioxide is a dielectric material, its structure is equivalent to a planar capacitor, with one of the electrodes replaced by a semiconductor. When a voltage is applied across a MOS structure, it modifies the distribution of charges in the semiconductor. If we consider a p-type semiconductor (with NA the density of acceptors, p the density of holes; p = NA in neutral bulk), a positive voltage, VG, from gate to body (see figure) creates a depletion layer by forcing the positively charged holes away from the gate-insulator/semiconductor interface, leaving exposed a carrier-free region of immobile, negatively charged acceptor ions (see doping). If VG is high enough, a high concentration of negative charge carriers forms in an inversion layer located in a thin layer next to the interface between the semiconductor and the insulator. Conventionally, the gate voltage at which the volume density of electrons in the inversion layer is the same as the volume density of holes in the body is called the threshold voltage.
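Because the MOS structure behaves as a planar capacitor, its capacitance per unit area follows the usual parallel-plate relation Cox = κ·ε0/tox. A minimal sketch of this relation, assuming an illustrative 2 nm SiO2 layer (the thickness is not a value from this article):

```python
EPS0 = 8.854e-12      # vacuum permittivity, F/m
KAPPA_SIO2 = 3.9      # relative permittivity of SiO2
t_ox = 2e-9           # assumed 2 nm gate oxide thickness

# Parallel-plate capacitance per unit area of the gate stack.
c_ox = KAPPA_SIO2 * EPS0 / t_ox  # F/m^2
print(f"C_ox = {c_ox:.4f} F/m^2 = {c_ox * 100:.2f} uF/cm^2")
```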
When the voltage between transistor gate and source (VGS) exceeds the threshold voltage (Vth), the difference is known as overdrive voltage. This structure with p-type body is the basis of the n-type MOSFET, which requires the addition of n-type source and drain regions. MOS capacitors and band diagrams The MOS capacitor structure is the heart of the MOSFET. Consider a MOS capacitor where the silicon base is of p-type. If a positive voltage is applied at the gate, holes which are at the surface of the p-type substrate will be repelled by the electric field generated by the voltage applied. At first, the holes will simply be repelled, and what will remain on the surface will be immobile (negative) atoms of the acceptor type, which creates a depletion region on the surface. A hole is created by an acceptor atom, e.g., boron, which has one less electron than a silicon atom. Holes are not actually repelled, being non-entities; rather, electrons are attracted by the positive field and fill these holes. This creates a depletion region where no charge carriers exist, because the electron is now fixed onto the atom and immobile. As the voltage at the gate increases, there will be a point at which the surface above the depletion region will be converted from p-type into n-type, as electrons from the bulk area will start to get attracted by the larger electric field. This is known as inversion. The threshold voltage at which this conversion happens is one of the most important parameters in a MOSFET. In the case of a MOSFET with a p-type bulk, inversion happens when the intrinsic energy level at the surface becomes smaller than the Fermi level at the surface. This can be seen on a band diagram. The Fermi level defines the type of semiconductor in discussion. If the Fermi level is equal to the intrinsic level, the semiconductor is of intrinsic, or pure, type. If the Fermi level lies closer to the conduction band (valence band), then the semiconductor type will be n-type (p-type). When the gate voltage is increased in a positive sense, this will shift the intrinsic energy level band so that it will curve downwards towards the valence band. If the Fermi level lies closer to the valence band (for p-type), there will be a point when the intrinsic level starts to approach the Fermi level; when the voltage reaches the threshold voltage, the intrinsic level crosses the Fermi level, and that is what is known as inversion. At that point, the surface of the semiconductor is inverted from p-type into n-type. If the Fermi level lies above the intrinsic level, the semiconductor is of n-type; therefore at inversion, when the intrinsic level reaches and crosses the Fermi level (which lies closer to the valence band), the semiconductor type changes at the surface as dictated by the relative positions of the Fermi and intrinsic energy levels. Structure and channel formation A MOSFET is based on the modulation of charge concentration by a MOS capacitance between a body electrode and a gate electrode located above the body and insulated from all other device regions by a gate dielectric layer. If dielectrics other than an oxide are employed, the device may be referred to as a metal-insulator-semiconductor FET (MISFET). Compared to the MOS capacitor, the MOSFET includes two additional terminals (source and drain), each connected to individual highly doped regions that are separated by the body region. These regions can be either p or n type, but they must both be of the same type, and of opposite type to the body region.
The source and drain (unlike the body) are highly doped, as signified by a "+" sign after the type of doping. If the MOSFET is an n-channel or nMOS FET, then the source and drain are n+ regions and the body is a p region. If the MOSFET is a p-channel or pMOS FET, then the source and drain are p+ regions and the body is an n region. The source is so named because it is the source of the charge carriers (electrons for n-channel, holes for p-channel) that flow through the channel; similarly, the drain is where the charge carriers leave the channel. The occupancy of the energy bands in a semiconductor is set by the position of the Fermi level relative to the semiconductor energy-band edges. With sufficient gate voltage, the valence band edge is driven far from the Fermi level, and holes from the body are driven away from the gate. At larger gate bias still, near the semiconductor surface the conduction band edge is brought close to the Fermi level, populating the surface with electrons in an inversion layer or n-channel at the interface between the p region and the oxide. This conducting channel extends between the source and the drain, and current is conducted through it when a voltage is applied between the two electrodes. Increasing the voltage on the gate leads to a higher electron density in the inversion layer and therefore increases the current flow between the source and drain. For gate voltages below the threshold value, the channel is lightly populated, and only a very small subthreshold leakage current can flow between the source and the drain. When a negative gate-source voltage (positive source-gate) is applied, it creates a p-channel at the surface of the n region, analogous to the n-channel case, but with opposite polarities of charges and voltages. When a voltage less negative than the threshold value (a negative voltage for the p-channel) is applied between gate and source, the channel disappears and only a very small subthreshold current can flow between the source and the drain. The device may comprise a silicon-on-insulator structure in which a buried oxide is formed below a thin semiconductor layer. If the channel region between the gate dielectric and the buried oxide region is very thin, the channel is referred to as an ultrathin channel region, with the source and drain regions formed on either side in or above the thin semiconductor layer. Other semiconductor materials may be employed. When the source and drain regions are formed above the channel in whole or in part, they are referred to as raised source/drain regions. Modes of operation The operation of a MOSFET can be separated into three different modes, depending on the voltages at the terminals. In the following discussion, a simplified algebraic model is used. Modern MOSFET characteristics are more complex than the algebraic model presented here. For an enhancement-mode, n-channel MOSFET, the three operational modes are: Cutoff, subthreshold, and weak-inversion mode When VGS < Vth (where VGS is the gate-to-source bias and Vth is the threshold voltage of the device): according to the basic threshold model, the transistor is turned off, and there is no conduction between drain and source. A more accurate model considers the effect of thermal energy on the Fermi–Dirac distribution of electron energies, which allows some of the more energetic electrons at the source to enter the channel and flow to the drain. This results in a subthreshold current that is an exponential function of gate-source voltage.
While the current between drain and source should ideally be zero when the transistor is being used as a turned-off switch, there is a weak-inversion current, sometimes called subthreshold leakage. In weak inversion where the source is tied to bulk, the current varies exponentially with VGS as given approximately by: ID ≈ ID0 e^((VGS − Vth)/(n·VT)), where ID0 is the current at VGS = Vth, VT = kT/q is the thermal voltage, and the slope factor n is given by n = 1 + CD/Cox, with CD = capacitance of the depletion layer and Cox = capacitance of the oxide layer. This equation is generally used, but is only an adequate approximation for the source tied to the bulk. For the source not tied to the bulk, the subthreshold drain current in saturation takes a similar exponential form, with the gate and source potentials referenced to the bulk rather than to each other. In a long-channel device, there is no drain voltage dependence of the current once VDS ≫ VT, but as channel length is reduced, drain-induced barrier lowering introduces drain voltage dependence that depends in a complex way upon the device geometry (for example, the channel doping, the junction doping and so on). Frequently, threshold voltage Vth for this mode is defined as the gate voltage at which a selected value of current ID0 occurs, for example, ID0 = 1 μA, which may not be the same Vth-value used in the equations for the following modes. Some micropower analog circuits are designed to take advantage of subthreshold conduction. By working in the weak-inversion region, the MOSFETs in these circuits deliver the highest possible transconductance-to-current ratio, namely gm/ID = 1/(n·VT), almost that of a bipolar transistor. The subthreshold I–V curve depends exponentially upon threshold voltage, introducing a strong dependence on any manufacturing variation that affects threshold voltage; for example: variations in oxide thickness, junction depth, or body doping that change the degree of drain-induced barrier lowering. The resulting sensitivity to fabricational variations complicates optimization for leakage and performance. Triode mode or linear region (also known as the ohmic mode) When VGS > Vth and VDS < VGS − Vth: The transistor is turned on, and a channel has been created which allows current between the drain and the source. The MOSFET operates like a resistor, controlled by the gate voltage relative to both the source and drain voltages. The current from drain to source is modeled as: ID = μn Cox (W/L) ((VGS − Vth)VDS − VDS²/2), where μn is the charge-carrier effective mobility, W is the gate width, L is the gate length and Cox is the gate oxide capacitance per unit area. The transition from the exponential subthreshold region to the triode region is not as sharp as the equations suggest. Saturation or active mode When VGS > Vth and VDS ≥ (VGS − Vth): The switch is turned on, and a channel has been created, which allows current between the drain and source. Since the drain voltage is higher than the source voltage, the electrons spread out, and conduction is not through a narrow channel but through a broader, two- or three-dimensional current distribution extending away from the interface and deeper in the substrate. The onset of this region is also known as pinch-off, to indicate the lack of channel region near the drain. Although the channel does not extend the full length of the device, the electric field between the drain and the channel is very high, and conduction continues. The drain current is now weakly dependent upon drain voltage and controlled primarily by the gate-source voltage, and modeled approximately as: ID = (μn Cox/2)(W/L)(VGS − Vth)²(1 + λ(VDS − VDSsat)). The additional factor involving λ, the channel-length modulation parameter, models current dependence on drain voltage due to the Early effect, or channel length modulation.
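The square-law equations above can be collected into a single first-order model. The sketch below is a simplified illustration rather than a production device model; the threshold voltage and process constant are assumed values, and the subthreshold current is approximated as zero:

```python
def drain_current(v_gs, v_ds, v_th=0.7, k=2e-4, lam=0.0):
    """First-order n-channel MOSFET model covering the three regions.

    k   -- process transconductance mu_n * C_ox * (W/L), in A/V^2 (assumed)
    lam -- channel-length modulation parameter lambda, in 1/V
    """
    v_ov = v_gs - v_th                     # overdrive voltage
    if v_ov <= 0:
        return 0.0                         # cutoff (subthreshold neglected)
    if v_ds < v_ov:                        # triode / linear region
        return k * (v_ov * v_ds - v_ds**2 / 2)
    # Saturation region, including channel-length modulation.
    return (k / 2) * v_ov**2 * (1 + lam * (v_ds - v_ov))

print(drain_current(1.5, 0.1))   # triode: small V_DS, resistor-like behavior
print(drain_current(1.5, 2.0))   # saturation: current set mainly by V_GS
```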
According to this equation, a key design parameter, the MOSFET transconductance, is: gm = 2ID/(VGS − Vth) = 2ID/Vov, where the combination Vov = VGS − Vth is called the overdrive voltage, and where VDSsat = VGS − Vth accounts for a small discontinuity in ID which would otherwise appear at the transition between the triode and saturation regions. Another key design parameter is the MOSFET output resistance rout, given by: rout = 1/(λID). rout is the inverse of gDS, where gDS = ∂ID/∂VDS and ID is the saturation-region expression. If λ is taken as zero, the model gives an infinite output resistance, which leads to unrealistic circuit predictions, particularly in analog circuits. As the channel length becomes very short, these equations become quite inaccurate. New physical effects arise. For example, carrier transport in the active mode may become limited by velocity saturation. When velocity saturation dominates, the saturation drain current is more nearly linear than quadratic in VGS. At even shorter lengths, carriers transport with near zero scattering, known as quasi-ballistic transport. In the ballistic regime, the carriers travel at an injection velocity that may exceed the saturation velocity and approaches the Fermi velocity at high inversion charge density. In addition, drain-induced barrier lowering increases off-state (cutoff) current and requires an increase in threshold voltage to compensate, which in turn reduces the saturation current. Body effect The occupancy of the energy bands in a semiconductor is set by the position of the Fermi level relative to the semiconductor energy-band edges. Application of a source-to-substrate reverse bias of the source-body pn-junction introduces a split between the Fermi levels for electrons and holes, moving the Fermi level for the channel further from the band edge, lowering the occupancy of the channel. The effect is to increase the gate voltage necessary to establish the channel, as seen in the figure. This change in channel strength by application of reverse bias is called the "body effect". Using an nMOS example, the gate-to-body bias VGB positions the conduction-band energy levels, while the source-to-body bias VSB positions the electron Fermi level near the interface, deciding occupancy of these levels near the interface, and hence the strength of the inversion layer or channel. The body effect upon the channel can be described using a modification of the threshold voltage, approximated by the following equation: VTB = VT0 + γ(√(VSB + 2φB) − √(2φB)), where VTB is the threshold voltage with substrate bias present, VT0 is the zero-VSB value of threshold voltage, γ is the body effect parameter, and 2φB is the approximate potential drop between surface and bulk across the depletion layer when VSB = 0 and gate bias is sufficient to ensure that a channel is present. As this equation shows, a reverse bias VSB causes an increase in threshold voltage VTB and therefore demands a larger gate voltage before the channel populates. The body can be operated as a second gate, and is sometimes referred to as the "back gate"; the body effect is sometimes called the "back-gate effect".
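A short numeric sketch of these small-signal and body-effect relations; all parameter values here are assumptions chosen only to make the arithmetic concrete:

```python
import math

def transconductance(i_d, v_ov):
    """g_m = 2 * I_D / V_ov for the square-law saturation model."""
    return 2 * i_d / v_ov

def output_resistance(i_d, lam):
    """r_out = 1 / (lambda * I_D), the inverse of g_DS."""
    return 1.0 / (lam * i_d)

def threshold_with_body_bias(v_t0, v_sb, gamma=0.4, phi_b=0.45):
    """V_TB = V_T0 + gamma * (sqrt(V_SB + 2*phi_B) - sqrt(2*phi_B))."""
    return v_t0 + gamma * (math.sqrt(v_sb + 2 * phi_b) - math.sqrt(2 * phi_b))

i_d, v_ov, lam = 100e-6, 0.3, 0.05  # assumed bias point and lambda
print(f"g_m   = {transconductance(i_d, v_ov) * 1e3:.2f} mA/V")
print(f"r_out = {output_resistance(i_d, lam) / 1e3:.0f} kOhm")
print(f"V_th at V_SB = 1 V: {threshold_with_body_bias(0.5, 1.0):.3f} V")
```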
The bulk or body connection, if shown, is shown connected to the back of the channel with an arrow indicating pMOS or nMOS. Arrows always point from P to N, so an NMOS (N-channel in P-well or P-substrate) has the arrow pointing in (from the bulk to the channel). If the bulk is connected to the source (as is generally the case with discrete devices), it is sometimes angled to meet the source leaving the transistor. If the bulk is not shown (as is often the case in IC design, as the transistors generally share a common bulk), an inversion symbol is sometimes used to indicate PMOS; alternatively, an arrow on the source may be used in the same way as for bipolar transistors (out for nMOS, in for pMOS). In a comparison of enhancement-mode and depletion-mode MOSFET symbols (along with JFET symbols), the orientation of the symbols (most significantly the position of source relative to drain) is such that more positive voltages appear higher on the page than less positive voltages, implying conventional current flowing "down" the page. In schematics where G, S, D are not labeled, the detailed features of the symbol indicate which terminal is source and which is drain. For enhancement-mode and depletion-mode MOSFET symbols (in columns two and five), the source terminal is the one connected to the triangle. Additionally, in this diagram, the gate is shown as an "L" shape, whose input leg is closer to S than D, also indicating which is which. However, these symbols are often drawn with a T-shaped gate (as elsewhere on this page), so it is the triangle which must be relied upon to indicate the source terminal. For the symbols in which the bulk, or body, terminal is shown, it is here shown internally connected to the source (i.e., the black triangles in the diagrams in columns 2 and 5). This is a typical configuration, but by no means the only important configuration. In general, the MOSFET is a four-terminal device, and in integrated circuits many of the MOSFETs share a body connection, not necessarily connected to the source terminals of all the transistors. Applications Digital integrated circuits such as microprocessors and memory devices contain thousands to billions of integrated MOSFETs on each device, providing the basic switching functions required to implement logic gates and data storage. Discrete devices are widely used in applications such as switch mode power supplies, variable-frequency drives and other power electronics applications where each device may be switching thousands of watts. Radio-frequency amplifiers up to the UHF spectrum use MOSFET transistors as analog signal and power amplifiers. Radio systems also use MOSFETs as oscillators or mixers to convert frequencies. MOSFET devices are also applied in audio-frequency power amplifiers for public address systems, sound reinforcement and home and automobile sound systems. MOS integrated circuits Following the development of clean rooms to reduce contamination to levels never before thought necessary, and of photolithography and the planar process to allow circuits to be made in very few steps, the Si–SiO2 system possessed the technical attractions of low cost of production (on a per circuit basis) and ease of integration. Largely because of these two factors, the MOSFET has become the most widely used type of transistor in integrated circuits. General Microelectronics introduced the first commercial MOS integrated circuit in 1964.
Additionally, the method of coupling two complementary MOSFETs (P-channel and N-channel) into one high/low switch, known as CMOS, means that digital circuits dissipate very little power except when actually switched. The earliest microprocessors starting in 1970 were all MOS microprocessors; i.e., fabricated entirely from PMOS logic or fabricated entirely from NMOS logic. In the 1970s, MOS microprocessors were often contrasted with CMOS microprocessors and bipolar bit-slice processors. CMOS circuits The MOSFET is used in digital complementary metal–oxide–semiconductor (CMOS) logic, which uses p- and n-channel MOSFETs as building blocks. Overheating is a major concern in integrated circuits, since ever more transistors are packed into ever smaller chips. CMOS logic reduces power consumption because no current flows (ideally), and thus no power is consumed, except when the inputs to logic gates are being switched. CMOS accomplishes this current reduction by complementing every nMOSFET with a pMOSFET and connecting both gates and both drains together. A high voltage on the gates will cause the nMOSFET to conduct and the pMOSFET not to conduct, while a low voltage on the gates causes the reverse. During the switching time, as the voltage goes from one state to another, both MOSFETs will conduct briefly. This arrangement greatly reduces power consumption and heat generation. Digital The growth of digital technologies like the microprocessor has provided the motivation to advance MOSFET technology faster than any other type of silicon-based transistor. A big advantage of MOSFETs for digital switching is that the oxide layer between the gate and the channel prevents DC current from flowing through the gate, further reducing power consumption and giving a very large input impedance. The insulating oxide between the gate and channel effectively isolates a MOSFET in one logic stage from earlier and later stages, which allows a single MOSFET output to drive a considerable number of MOSFET inputs. Bipolar transistor-based logic (such as TTL) does not have such a high fanout capacity. This isolation also makes it easier for designers to ignore, to some extent, loading effects between logic stages. That extent is defined by the operating frequency: as frequencies increase, the input impedance of the MOSFETs decreases. Analog The MOSFET's advantages in digital circuits do not translate into supremacy in all analog circuits. The two types of circuit draw upon different features of transistor behavior. Digital circuits switch, spending most of their time either fully on or fully off. The transition from one to the other is only of concern with regards to speed and charge required. Analog circuits depend on operation in the transition region, where small changes to VGS can modulate the output (drain) current. The JFET and bipolar junction transistor (BJT) are preferred for accurate matching (of adjacent devices in integrated circuits), higher transconductance and certain temperature characteristics which simplify keeping performance predictable as circuit temperature varies. Nevertheless, MOSFETs are widely used in many types of analog circuits because of their own advantages (zero gate current, high and adjustable output impedance and improved robustness vs. BJTs, which can be permanently degraded by even lightly breaking down the emitter-base junction). The characteristics and performance of many analog circuits can be scaled up or down by changing the sizes (length and width) of the MOSFETs used.
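As a rough illustration of this sizing knob, the square-law saturation current from the Modes of operation section scales directly with the width-to-length ratio; the process constant below is an assumed value, not one from this article:

```python
def saturation_current(w, l, v_ov, k_process=200e-6):
    """Square-law saturation current: I_D = (k'/2) * (W/L) * V_ov^2.

    k_process -- process transconductance parameter mu_n * C_ox, A/V^2 (assumed)
    v_ov      -- overdrive voltage V_GS - V_th, in volts
    """
    return 0.5 * k_process * (w / l) * v_ov**2

v_ov = 0.2
# Doubling W/L doubles the drain current (and, at fixed V_ov, the transconductance):
print(saturation_current(w=1e-6, l=0.1e-6, v_ov=v_ov))   # W/L = 10
print(saturation_current(w=2e-6, l=0.1e-6, v_ov=v_ov))   # W/L = 20
```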
By comparison, bipolar transistors follow different scaling laws. MOSFETs' ideal characteristics regarding gate current (zero) and drain-source offset voltage (zero) also make them nearly ideal switch elements, and also make switched capacitor analog circuits practical. In their linear region, MOSFETs can be used as precision resistors, which can have a much higher controlled resistance than BJTs. In high power circuits, MOSFETs sometimes have the advantage of not suffering from thermal runaway as BJTs do. This means that complete analog circuits can be made on a silicon chip in a much smaller space and with simpler fabrication techniques. MOSFETs are also well suited to switching inductive loads because of their tolerance to inductive kickback. Some ICs combine analog and digital MOSFET circuitry on a single mixed-signal integrated circuit, making the needed board space even smaller. This creates a need to isolate the analog circuits from the digital circuits on a chip level, leading to the use of isolation rings and silicon on insulator (SOI). Since MOSFETs require more space to handle a given amount of power than a BJT, fabrication processes can incorporate BJTs and MOSFETs into a single device. Mixed-transistor devices are called bi-FETs (bipolar FETs) if they contain just one BJT-FET, and BiCMOS (bipolar-CMOS) if they contain complementary BJT-FETs. Such devices have the advantages of both insulated gates and higher current density. Analog switches MOSFET analog switches use the MOSFET to pass analog signals when on, and as a high impedance when off. Signals flow in both directions across a MOSFET switch. In this application, the drain and source of a MOSFET exchange places depending on the relative voltages of the source and drain electrodes. The source is the more negative side for an N-MOS or the more positive side for a P-MOS. All of these switches are limited in what signals they can pass or block by their gate-source, gate-drain and source-drain voltages; exceeding the voltage, current, or power limits will potentially damage the switch. Single-type This analog switch uses a four-terminal simple MOSFET of either P or N type. In the case of an n-type switch, the body is connected to the most negative supply (usually GND) and the gate is used as the switch control. Whenever the gate voltage exceeds the source voltage by at least a threshold voltage, the MOSFET conducts. The higher the voltage, the more the MOSFET can conduct. An N-MOS switch passes all voltages less than Vgate − Vtn. When the switch is conducting, it typically operates in the linear (or ohmic) mode of operation, since the source and drain voltages will typically be nearly equal. In the case of a P-MOS, the body is connected to the most positive voltage, and the gate is brought to a lower potential to turn the switch on. The P-MOS switch passes all voltages higher than Vgate − Vtp (the threshold voltage Vtp is negative in the case of an enhancement-mode P-MOS). Dual-type (CMOS) This "complementary" or CMOS type of switch uses one P-MOS and one N-MOS FET to counteract the limitations of the single-type switch. The FETs have their drains and sources connected in parallel, the body of the P-MOS is connected to the high potential (VDD) and the body of the N-MOS is connected to the low potential (gnd). To turn the switch on, the gate of the P-MOS is driven to the low potential and the gate of the N-MOS is driven to the high potential.
For voltages between VDD − Vtn and gnd − Vtp, both FETs conduct the signal; for voltages less than gnd − Vtp, the N-MOS conducts alone; and for voltages greater than VDD − Vtn, the P-MOS conducts alone. The voltage limits for this switch are the gate–source, gate–drain and source–drain voltage limits for both FETs. Also, the P-MOS is typically two to three times wider than the N-MOS, so the switch will be balanced for speed in the two directions. Tri-state circuitry sometimes incorporates a CMOS MOSFET switch on its output to provide for a low-ohmic, full-range output when on, and a high-ohmic, mid-level signal when off. Construction Gate material The primary criterion for the gate material is that it is a good conductor. Highly doped polycrystalline silicon is an acceptable but certainly not ideal conductor, and also suffers from some more technical deficiencies in its role as the standard gate material. Nevertheless, there are several reasons favoring use of polysilicon: The threshold voltage (and consequently the drain-to-source on-current) is modified by the work function difference between the gate material and channel material. Because polysilicon is a semiconductor, its work function can be modulated by adjusting the type and level of doping. Furthermore, because polysilicon has the same bandgap as the underlying silicon channel, it is quite straightforward to tune the work function to achieve low threshold voltages for both NMOS and PMOS devices. By contrast, the work functions of metals are not easily modulated, so tuning the work function to obtain low threshold voltages (LVT) becomes a significant challenge. Additionally, obtaining low-threshold devices on both PMOS and NMOS devices sometimes requires the use of different metals for each device type. The silicon–SiO2 interface has been well studied and is known to have relatively few defects. By contrast, many metal–insulator interfaces contain significant levels of defects which can lead to Fermi level pinning, charging, or other phenomena that ultimately degrade device performance. In the MOSFET IC fabrication process, it is preferable to deposit the gate material prior to certain high-temperature steps in order to make better-performing transistors. Such high-temperature steps would melt some metals, limiting the types of metal that can be used in a metal-gate-based process. While polysilicon gates have been the de facto standard for the last twenty years, they do have some disadvantages which have led to their likely future replacement by metal gates. These disadvantages include: Polysilicon is not a great conductor (approximately 1000 times more resistive than metals), which reduces the signal propagation speed through the material. The resistivity can be lowered by increasing the level of doping, but even highly doped polysilicon is not as conductive as most metals. To improve conductivity further, a high-temperature metal such as tungsten, titanium, cobalt, or (more recently) nickel is sometimes alloyed with the top layers of the polysilicon. Such a blended material is called a silicide. The silicide–polysilicon combination has better electrical properties than polysilicon alone and still does not melt in subsequent processing. Also, the threshold voltage is not significantly higher than with polysilicon alone, because the silicide material is not near the channel. The process in which silicide is formed on both the gate electrode and the source and drain regions is sometimes called salicide, for self-aligned silicide.
When the transistors are extremely scaled down, it is necessary to make the gate dielectric layer very thin, around 1 nm in state-of-the-art technologies. A phenomenon observed here is so-called poly depletion, where a depletion layer is formed in the gate polysilicon layer next to the gate dielectric when the transistor is in inversion. To avoid this problem, a metal gate is desired. A variety of metal gate materials, such as tantalum, tungsten, tantalum nitride, and titanium nitride, are used, usually in conjunction with high-κ dielectrics. An alternative is to use fully silicided polysilicon gates, a process known as FUSI. Present high-performance CPUs use metal gate technology, together with high-κ dielectrics, a combination known as high-κ, metal gate (HKMG). The disadvantages of metal gates are overcome by a few techniques: The threshold voltage is tuned by including a thin "work function metal" layer between the high-κ dielectric and the main metal. This layer is thin enough that the total work function of the gate is influenced by both the main metal and thin metal work functions (either due to alloying during annealing, or simply due to the incomplete screening by the thin metal). The threshold voltage thus can be tuned by the thickness of the thin metal layer. High-κ dielectrics are now well studied, and their defects are understood. HKMG processes exist that do not require the metals to experience high-temperature anneals; other processes select metals that can survive the annealing step. Insulator As devices are made smaller, insulating layers are made thinner, often through steps of thermal oxidation or localised oxidation of silicon (LOCOS). For nano-scaled devices, at some point tunneling of carriers through the insulator from the channel to the gate electrode takes place. To reduce the resulting leakage current, the insulator can be made thicker by choosing a material with a higher dielectric constant. To see how thickness and dielectric constant are related, note that Gauss's law connects field to charge as Q = κ ε0 E, with Q = charge density, κ = dielectric constant, ε0 = permittivity of empty space and E = electric field. From this law it appears the same charge can be maintained in the channel at a lower field provided κ is increased. The voltage on the gate is given by VG = Vch + E tins = Vch + Q tins / (κ ε0), with VG = gate voltage, Vch = voltage at the channel side of the insulator, and tins = insulator thickness. This equation shows the gate voltage will not increase when the insulator thickness increases, provided κ increases to keep tins / κ constant (see the article on high-κ dielectrics for more detail, and the section in this article on gate-oxide leakage). The insulator in a MOSFET is a dielectric, which is often silicon oxide formed by LOCOS, but many other dielectric materials are employed. The generic term for the dielectric is gate dielectric, since the dielectric lies directly below the gate electrode and above the channel of the MOSFET.
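The tins / κ tradeoff above is easy to verify numerically. The following is a minimal sketch, with purely illustrative values for the channel charge density and the HfO2 dielectric constant (both assumptions, not datasheet figures); it shows that a high-κ layer can be several times thicker than SiO2 while producing the same gate voltage:

```python
# Sketch: why a high-k dielectric allows a thicker insulator at the same
# gate voltage. Illustrative numbers only; kappa for HfO2 and the channel
# charge density are rough, assumed values.
EPS0 = 8.854e-12          # permittivity of free space, F/m
Q = 1.6e-2                # assumed inversion charge density, C/m^2
V_CH = 0.3                # assumed voltage at the channel side, V

def gate_voltage(kappa, t_ins):
    """V_G = V_ch + Q * t_ins / (kappa * eps0), from Gauss's law."""
    return V_CH + Q * t_ins / (kappa * EPS0)

t_sio2 = 1.2e-9           # ~1.2 nm SiO2, near the tunneling limit
k_sio2, k_hfo2 = 3.9, 25  # dielectric constants (HfO2 value approximate)

# Choose the HfO2 thickness so that t_ins / kappa stays constant.
t_hfo2 = t_sio2 * k_hfo2 / k_sio2

print(f"SiO2 : t = {t_sio2*1e9:.2f} nm, V_G = {gate_voltage(k_sio2, t_sio2):.3f} V")
print(f"HfO2 : t = {t_hfo2*1e9:.2f} nm, V_G = {gate_voltage(k_hfo2, t_hfo2):.3f} V")
# Same gate voltage, but the physical layer is ~6x thicker,
# which suppresses quantum tunneling between gate and channel.
```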
Junction design The source-to-body and drain-to-body junctions are the object of much attention because of three major factors: their design affects the current–voltage (I–V) characteristics of the device, lowering output resistance; it affects the speed of the device through the loading effect of the junction capacitances; and it affects the component of stand-by power dissipation due to junction leakage. The drain-induced barrier lowering of the threshold voltage and channel length modulation effects upon I–V curves are reduced by using shallow junction extensions. In addition, halo doping can be used, that is, the addition of very thin heavily doped regions of the same doping type as the body tight against the junction walls to limit the extent of depletion regions. The capacitive effects are limited by using raised source and drain geometries that make most of the contact area border thick dielectric instead of silicon. These various features of junction design are shown (with artistic license) in the figure. Scaling Over the past decades, the MOSFET (as used for digital logic) has continually been scaled down in size; typical MOSFET channel lengths were once several micrometres, but modern integrated circuits are incorporating MOSFETs with channel lengths of tens of nanometers. Robert Dennard's work on scaling theory was pivotal in recognising that this ongoing reduction was possible. Intel began production of a process featuring a 32 nm feature size (with the channel being even shorter) in late 2009. The semiconductor industry maintains a "roadmap", the ITRS, which sets the pace for MOSFET development. Historically, the difficulties with decreasing the size of the MOSFET have been associated with the semiconductor device fabrication process, the need to use very low voltages, and with poorer electrical performance necessitating circuit redesign and innovation (small MOSFETs exhibit higher leakage currents and lower output resistance). Smaller MOSFETs are desirable for several reasons. The main reason to make transistors smaller is to pack more and more devices in a given chip area. This results in a chip with the same functionality in a smaller area, or chips with more functionality in the same area. Since fabrication costs for a semiconductor wafer are relatively fixed, the cost per integrated circuit is mainly related to the number of chips that can be produced per wafer. Hence, smaller ICs allow more chips per wafer, reducing the price per chip. In fact, over the past 30 years the number of transistors per chip has doubled every 2–3 years once a new technology node is introduced. For example, the number of MOSFETs in a microprocessor fabricated in a 45 nm technology can well be twice as many as in a 65 nm chip. This doubling of transistor density was first observed by Gordon Moore in 1965 and is commonly referred to as Moore's law. It is also expected that smaller transistors switch faster. For example, one approach to size reduction is a scaling of the MOSFET that requires all device dimensions to reduce proportionally. The main device dimensions are the channel length, channel width, and oxide thickness. When they are scaled down by equal factors, the transistor channel resistance does not change, while gate capacitance is cut by that factor. Hence, the RC delay of the transistor scales with a similar factor, as the sketch below illustrates. While this has traditionally been the case for the older technologies, for state-of-the-art MOSFETs reduction of the transistor dimensions does not necessarily translate to higher chip speed, because the delay due to interconnections is more significant. Producing MOSFETs with channel lengths much smaller than a micrometre is a challenge, and the difficulties of semiconductor device fabrication are always a limiting factor in advancing integrated circuit technology.
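As a rough numerical illustration of the constant-field scaling argument above, here is a toy sketch assuming the simple long-channel relations, which real short-channel devices only approximate:

```python
# Rough sketch of ideal (Dennard) constant-field scaling: all dimensions
# and voltages shrink by the same factor k. Assumes simple long-channel
# behavior; modern devices deviate from these ideal ratios.
def scale(k):
    # Channel resistance R ~ V / I stays constant: V and I both scale as 1/k.
    r_ratio = 1.0
    # Gate capacitance C ~ (W * L / t_ox) scales as 1/k.
    c_ratio = 1.0 / k
    # Intrinsic RC delay therefore scales as 1/k, and area as 1/k^2.
    return r_ratio * c_ratio, 1.0 / k**2

for k in (1.4, 2.0):   # roughly one node step (~0.7x) and two node steps (0.5x)
    delay, area = scale(k)
    print(f"k={k}: delay x{delay:.2f}, area x{area:.2f}, density x{1/area:.1f}")
```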
Though processes such as ALD have improved fabrication for small components, the small size of the MOSFET (less than a few tens of nanometers) has created operational problems: Higher subthreshold conduction As MOSFET geometries shrink, the voltage that can be applied to the gate must be reduced to maintain reliability. To maintain performance, the threshold voltage of the MOSFET has to be reduced as well. As threshold voltage is reduced, the transistor cannot be switched from complete turn-off to complete turn-on with the limited voltage swing available; the circuit design is a compromise between strong current in the on case and low current in the off case, and the application determines whether to favor one over the other. Subthreshold leakage (including subthreshold conduction, gate-oxide leakage and reverse-biased junction leakage), which was ignored in the past, can now consume upwards of half of the total power consumption of modern high-performance VLSI chips. Increased gate-oxide leakage The gate oxide, which serves as insulator between the gate and channel, should be made as thin as possible to increase the channel conductivity and performance when the transistor is on and to reduce subthreshold leakage when the transistor is off. However, with current gate oxides with a thickness of around 1.2 nm (which in silicon is ~5 atoms thick) the quantum mechanical phenomenon of electron tunneling occurs between the gate and channel, leading to increased power consumption. Silicon dioxide has traditionally been used as the gate insulator. Silicon dioxide, however, has a modest dielectric constant. Increasing the dielectric constant of the gate dielectric allows a thicker layer while maintaining a high capacitance (capacitance is proportional to dielectric constant and inversely proportional to dielectric thickness). All else equal, a higher dielectric thickness reduces the quantum tunneling current through the dielectric between the gate and the channel. Insulators that have a larger dielectric constant than silicon dioxide (referred to as high-κ dielectrics), such as the group IVb metal silicates and oxides, e.g. hafnium and zirconium silicates and oxides, are being used to reduce the gate leakage from the 45 nanometer technology node onwards. On the other hand, the barrier height of the new gate insulator is an important consideration; the difference in conduction band energy between the semiconductor and the dielectric (and the corresponding difference in valence band energy) also affects the leakage current level. For the traditional gate oxide, silicon dioxide, the former barrier is approximately 8 eV. For many alternative dielectrics the value is significantly lower, tending to increase the tunneling current, somewhat negating the advantage of higher dielectric constant. The maximum gate-source voltage is determined by the strength of the electric field able to be sustained by the gate dielectric before significant leakage occurs. As the insulating dielectric is made thinner, the electric field strength within it goes up for a fixed voltage. This necessitates using lower voltages with the thinner dielectric. Increased junction leakage To make devices smaller, junction design has become more complex, leading to higher doping levels, shallower junctions, "halo" doping and so forth, all to decrease drain-induced barrier lowering (see the section on junction design).
To keep these complex junctions in place, the annealing steps formerly used to remove damage and electrically active defects must be curtailed, increasing junction leakage. Heavier doping is also associated with thinner depletion layers and more recombination centers that result in increased leakage current, even without lattice damage. Drain-induced barrier lowering and VT roll-off Because of the short-channel effect, channel formation is not entirely done by the gate; the drain and source also affect the channel formation. As the channel length decreases, the depletion regions of the source and drain come closer together and make the threshold voltage (VT) a function of the length of the channel. This is called VT roll-off. VT also becomes a function of the drain-to-source voltage VDS. As VDS increases, the depletion regions increase in size, and a considerable amount of charge is depleted by VDS. The gate voltage required to form the channel is then lowered, and thus VT decreases with an increase in VDS. This effect is called drain-induced barrier lowering (DIBL). Lower output resistance For analog operation, good gain requires a high MOSFET output impedance, which is to say, the MOSFET current should vary only slightly with the applied drain-to-source voltage. As devices are made smaller, the influence of the drain competes more successfully with that of the gate due to the growing proximity of these two electrodes, increasing the sensitivity of the MOSFET current to the drain voltage. To counteract the resulting decrease in output resistance, circuits are made more complex, either by requiring more devices, for example the cascode and cascade amplifiers, or by feedback circuitry using operational amplifiers, for example a circuit like that in the adjacent figure. Lower transconductance The transconductance of the MOSFET determines its gain and is proportional to hole or electron mobility (depending on device type), at least for low drain voltages. As MOSFET size is reduced, the fields in the channel increase and the dopant impurity levels increase. Both changes reduce the carrier mobility, and hence the transconductance. As channel lengths are reduced without proportional reduction in drain voltage, raising the electric field in the channel, the result is velocity saturation of the carriers, limiting the current and the transconductance. Interconnect capacitance Traditionally, switching time was roughly proportional to the gate capacitance. However, with transistors becoming smaller and more transistors being placed on the chip, interconnect capacitance (the capacitance of the metal-layer connections between different parts of the chip) is becoming a large percentage of the total capacitance. Signals have to travel through the interconnect, which leads to increased delay and lower performance. Heat production The ever-increasing density of MOSFETs on an integrated circuit creates problems of substantial localized heat generation that can impair circuit operation. Circuits operate more slowly at high temperatures, and have reduced reliability and shorter lifetimes. Heat sinks and other cooling devices and methods are now required for many integrated circuits including microprocessors. Power MOSFETs are at risk of thermal runaway: their on-state resistance rises with temperature, so if the load is approximately a constant-current load then the power loss rises correspondingly, generating further heat.
When the heatsink is not able to keep the temperature low enough, the junction temperature may rise quickly and uncontrollably, resulting in destruction of the device. Process variations With MOSFETs becoming smaller, the number of atoms in the silicon that produce many of the transistor's properties is becoming fewer, with the result that control of dopant numbers and placement is more erratic. During chip manufacturing, random process variations affect all transistor dimensions: length, width, junction depths, oxide thickness etc., and become a greater percentage of overall transistor size as the transistor shrinks. The transistor characteristics become less certain, more statistical. The random nature of manufacture means that it is not known which particular MOSFETs will actually end up in a particular instance of the circuit. This uncertainty forces a less optimal design because the design must work for a great variety of possible component MOSFETs. See process variation, design for manufacturability, reliability engineering, and statistical process control. Modeling challenges Modern ICs are computer-simulated with the goal of obtaining working circuits from the first manufactured lot. As devices are miniaturized, the complexity of the processing makes it difficult to predict exactly what the final devices look like, and modeling of physical processes becomes more challenging as well. In addition, microscopic variations in structure due simply to the probabilistic nature of atomic processes require statistical (not just deterministic) predictions. These factors combine to make adequate simulation and "right the first time" manufacture difficult. Other types Dual-gate The dual-gate MOSFET has a tetrode configuration, where both gates control the current in the device. It is commonly used for small-signal devices in radio frequency applications where biasing the drain-side gate at constant potential reduces the gain loss caused by the Miller effect, replacing two separate transistors in cascode configuration. Other common uses in RF circuits include gain control and mixing (frequency conversion). The tetrode description, though accurate, does not replicate the vacuum-tube tetrode. Vacuum-tube tetrodes, using a screen grid, exhibit much lower grid-plate capacitance and much higher output impedance and voltage gains than triode vacuum tubes. These improvements are commonly an order of magnitude (10 times) or considerably more. Tetrode transistors (whether bipolar junction or field-effect) do not exhibit improvements of such a great degree. The FinFET is a double-gate silicon-on-insulator device, one of a number of geometries being introduced to mitigate the effects of short channels and reduce drain-induced barrier lowering. The fin refers to the narrow channel between source and drain. A thin insulating oxide layer on either side of the fin separates it from the gate. SOI FinFETs with a thick oxide on top of the fin are called double-gate and those with a thin oxide on top as well as on the sides are called triple-gate FinFETs. Depletion-mode There are depletion-mode MOSFET devices, which are less commonly used than the standard enhancement-mode devices already described. These are MOSFET devices that are doped so that a channel exists even with zero voltage from gate to source. To control the channel, a negative voltage is applied to the gate (for an n-channel device), depleting the channel, which reduces the current flow through the device.
In essence, the depletion-mode device is equivalent to a normally closed (on) switch, while the enhancement-mode device is equivalent to a normally open (off) switch. Due to their low noise figure in the RF region, and better gain, these devices are often preferred to bipolars in RF front-ends such as in TV sets. Depletion-mode MOSFET families include the BF960 by Siemens and Telefunken, and the BF980 in the 1980s by Philips (later to become NXP Semiconductors), whose derivatives are still used in AGC and RF mixer front-ends. Metal–insulator–semiconductor field-effect transistor (MISFET) Metal–insulator–semiconductor field-effect transistor, or MISFET, is a more general term than MOSFET and a synonym for insulated-gate field-effect transistor (IGFET). All MOSFETs are MISFETs, but not all MISFETs are MOSFETs. In a MOSFET the gate dielectric insulator is an oxide of the substrate (hence typically silicon dioxide), but in a MISFET other materials can also be employed. The gate dielectric lies directly below the gate electrode and above the channel of the MISFET. The term metal is historically used for the gate material, even though now it is usually highly doped polysilicon or some other non-metal. Insulator types include silicon dioxide, in silicon MOSFETs, and organic insulators (e.g., undoped trans-polyacetylene; cyanoethyl pullulan, CEP), for organic-based FETs. NMOS logic For devices of equal current driving capability, n-channel MOSFETs can be made smaller than p-channel MOSFETs, due to p-channel charge carriers (holes) having lower mobility than n-channel charge carriers (electrons), and producing only one type of MOSFET on a silicon substrate is cheaper and technically simpler. These were the driving principles in the design of NMOS logic, which uses n-channel MOSFETs exclusively. However, unlike CMOS logic (neglecting leakage current), NMOS logic consumes power even when no switching is taking place. With advances in technology, CMOS logic displaced NMOS logic in the mid-1980s to become the preferred process for digital chips. Power MOSFET Power MOSFETs have a different structure. As with most power devices, the structure is vertical and not planar. Using a vertical structure, it is possible for the transistor to sustain both high blocking voltage and high current. The voltage rating of the transistor is a function of the doping and thickness of the N-epitaxial layer (see cross section), while the current rating is a function of the channel width (the wider the channel, the higher the current). In a planar structure, the current and breakdown voltage ratings are both a function of the channel dimensions (respectively width and length of the channel), resulting in inefficient use of the "silicon estate". With the vertical structure, the component area is roughly proportional to the current it can sustain, and the component thickness (actually the N-epitaxial layer thickness) is proportional to the breakdown voltage. Power MOSFETs with a lateral structure are mainly used in high-end audio amplifiers and high-power PA systems. Their advantage is a better behaviour in the saturated region (corresponding to the linear region of a bipolar transistor) than the vertical MOSFETs. Vertical MOSFETs are designed for switching applications. Double-diffused metal–oxide–semiconductor (DMOS) There are LDMOS (lateral double-diffused metal oxide semiconductor) and VDMOS (vertical double-diffused metal oxide semiconductor) variants. Most power MOSFETs are made using this technology.
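The thermal-runaway risk noted earlier under heat production can be illustrated with a toy thermal-feedback loop. The numbers below (on-resistance, its temperature coefficient, and the thermal resistances) are illustrative assumptions, not values for any real part:

```python
# Sketch: conduction loss and thermal feedback in a power MOSFET driving a
# roughly constant current. All numbers are illustrative assumptions.
I_LOAD = 10.0        # load current, A
R25 = 0.05           # assumed on-resistance at 25 degC, ohms
ALPHA = 0.007        # assumed tempco of R_DS(on), per degC (positive)
T_AMB = 25.0         # ambient temperature, degC

def settle(r_thermal, steps=200):
    """Iterate junction temperature until it settles (or runs away)."""
    t_j = T_AMB
    for _ in range(steps):
        r_on = R25 * (1 + ALPHA * (t_j - 25.0))   # linearized R(T)
        p_loss = I_LOAD**2 * r_on                  # conduction loss, W
        t_j = T_AMB + p_loss * r_thermal           # junction temp, degC
        if t_j > 175:                              # beyond typical ratings
            return None
    return t_j

for r_th in (2.0, 15.0):   # junction-to-ambient thermal resistance, degC/W
    t = settle(r_th)
    print(f"R_th={r_th:>4} degC/W ->",
          "thermal runaway" if t is None else f"settles at {t:.1f} degC")
```

With a good heatsink (low thermal resistance) the feedback loop converges; with a poor one, the rising on-resistance and rising temperature reinforce each other, which is the runaway mechanism described above.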
Radiation-hardened-by-design (RHBD) Keeping sub-micrometer and nanometer semiconductor electronic circuits operating within normal tolerances in harsh radiation environments like outer space is a primary design concern. One of the design approaches for making a radiation-hardened-by-design (RHBD) device is the enclosed-layout transistor (ELT). Normally, the gate of the MOSFET surrounds the drain, which is placed in the center of the ELT. The source of the MOSFET surrounds the gate. Another RHBD MOSFET is called H-Gate. Both of these transistors have very low leakage currents under radiation. However, they are large in size and take up more space on silicon than a standard MOSFET. In older STI (shallow trench isolation) designs, radiation strikes near the silicon oxide region cause channel inversion at the corners of the standard MOSFET due to accumulation of radiation-induced trapped charges. If the charges are large enough, the accumulated charges affect the STI surface edges along the channel near the channel interface (gate) of the standard MOSFET. This causes a device channel inversion to occur along the channel edges, creating an off-state leakage path, so the device turns on even when it should be off; this process severely degrades the reliability of circuits. The ELT offers many advantages, including an improvement of reliability by reducing unwanted surface inversion at the gate edges which occurs in the standard MOSFET. Since the gate edges are enclosed in an ELT, there is no gate oxide edge (STI at gate interface), and thus the transistor off-state leakage is greatly reduced. Low-power microelectronic circuits, including computers, communication devices, and monitoring systems in space shuttles and satellites, are very different from what is used on Earth. They are radiation-tolerant circuits, designed to withstand high-speed atomic particles (such as protons and neutrons), solar-flare magnetic energy dissipation in Earth's space, and energetic cosmic rays (X-rays, gamma rays, etc.). These special electronics are designed by applying different techniques using RHBD MOSFETs to ensure safe space journeys and safe space-walks for astronauts.
Analog-to-digital converter
In electronics, an analog-to-digital converter (ADC, A/D, or A-to-D) is a system that converts an analog signal, such as a sound picked up by a microphone or light entering a digital camera, into a digital signal. An ADC may also provide an isolated measurement, such as an electronic device that converts an analog input voltage or current to a digital number representing the magnitude of the voltage or current. Typically the digital output is a two's complement binary number that is proportional to the input, but there are other possibilities. There are several ADC architectures. Due to the complexity and the need for precisely matched components, all but the most specialized ADCs are implemented as integrated circuits (ICs). These typically take the form of metal–oxide–semiconductor (MOS) mixed-signal integrated circuit chips that integrate both analog and digital circuits. A digital-to-analog converter (DAC) performs the reverse function; it converts a digital signal into an analog signal. Explanation An ADC converts a continuous-time and continuous-amplitude analog signal to a discrete-time and discrete-amplitude digital signal. The conversion involves quantization of the input, so it necessarily introduces a small amount of quantization error. Furthermore, instead of continuously performing the conversion, an ADC does the conversion periodically, sampling the input and limiting the allowable bandwidth of the input signal. The performance of an ADC is primarily characterized by its bandwidth and signal-to-noise and distortion ratio (SNDR). The bandwidth of an ADC is characterized primarily by its sampling rate. The SNDR of an ADC is influenced by many factors, including the resolution, linearity and accuracy (how well the quantization levels match the true analog signal), aliasing and jitter. The SNDR of an ADC is often summarized in terms of its effective number of bits (ENOB), the number of bits of each measure it returns that are on average not noise. An ideal ADC has an ENOB equal to its resolution. ADCs are chosen to match the bandwidth and required SNDR of the signal to be digitized. If an ADC operates at a sampling rate greater than twice the bandwidth of the signal, then per the Nyquist–Shannon sampling theorem, near-perfect reconstruction is possible. The presence of quantization error limits the SNDR of even an ideal ADC. However, if the SNDR of the ADC exceeds that of the input signal, then the effects of quantization error may be neglected, resulting in an essentially perfect digital representation of the bandlimited analog input signal. Resolution The resolution of the converter indicates the number of different, i.e. discrete, values it can produce over the allowed range of analog input values. Thus a particular resolution determines the magnitude of the quantization error and therefore determines the maximum possible signal-to-noise ratio for an ideal ADC without the use of oversampling. The input samples are usually stored electronically in binary form within the ADC, so the resolution is usually expressed as the audio bit depth. In consequence, the number of discrete values available is usually a power of two. For example, an ADC with a resolution of 8 bits can encode an analog input to one in 256 different levels (2^8 = 256). The values can represent the ranges from 0 to 255 (i.e. as unsigned integers) or from −128 to 127 (i.e. as signed integers), depending on the application. Resolution can also be defined electrically, and expressed in volts.
The change in voltage required to guarantee a change in the output code level is called the least significant bit (LSB) voltage. The resolution Q of the ADC is equal to the LSB voltage. The voltage resolution of an ADC is equal to its overall voltage measurement range divided by the number of intervals: Q = EFSR / (2^M − 1), where M is the ADC's resolution in bits and EFSR is the full-scale voltage range (also called 'span'). EFSR is given by EFSR = VRefHi − VRefLow, where VRefHi and VRefLow are the upper and lower extremes, respectively, of the voltages that can be coded. Normally, the number of voltage intervals is given by 2^M − 1. That is, one voltage interval is assigned in between two consecutive code levels. Example (coding scheme as in figure 1):
Full scale measurement range = 0 to 1 volt
ADC resolution is 3 bits: 2^3 = 8 quantization levels (codes)
ADC voltage resolution: Q = 1 V / (2^3 − 1) ≈ 0.143 V
In many cases, the useful resolution of a converter is limited by the signal-to-noise ratio (SNR) and other errors in the overall system expressed as an ENOB. Quantization error Quantization error is introduced by the quantization inherent in an ideal ADC. It is a rounding error between the analog input voltage to the ADC and the output digitized value. The error is nonlinear and signal-dependent. In an ideal ADC, where the quantization error is uniformly distributed between −1/2 LSB and +1/2 LSB, and the signal has a uniform distribution covering all quantization levels, the signal-to-quantization-noise ratio (SQNR) is given by SQNR = 20 log10(2^Q) ≈ 6.02 Q dB, where Q is the number of quantization bits. For example, for a 16-bit ADC, the quantization error is 96.3 dB below the maximum level. Quantization error is distributed from DC to the Nyquist frequency. Consequently, if part of the ADC's bandwidth is not used, as is the case with oversampling, some of the quantization error will occur out-of-band, effectively improving the SQNR for the bandwidth in use. In an oversampled system, noise shaping can be used to further increase SQNR by forcing more quantization error out of band. Dither In ADCs, performance can usually be improved using dither. This is a very small amount of random noise (e.g. white noise), which is added to the input before conversion. Its effect is to randomize the state of the LSB based on the signal. Rather than the signal simply getting cut off altogether at low levels, it extends the effective range of signals that the ADC can convert, at the expense of a slight increase in noise. Dither can only increase the resolution of a sampler. It cannot improve the linearity, and thus accuracy does not necessarily improve. Quantization distortion in an audio signal of very low level with respect to the bit depth of the ADC is correlated with the signal and sounds distorted and unpleasant. With dithering, the distortion is transformed into noise. The undistorted signal may be recovered accurately by averaging over time. Dithering is also used in integrating systems such as electricity meters. Since the values are added together, the dithering produces results that are more exact than the LSB of the analog-to-digital converter. Dither is often applied when quantizing photographic images to a smaller number of bits per pixel—the image becomes noisier but to the eye looks far more realistic than the quantized image, which otherwise becomes banded. This analogous process may help to visualize the effect of dither on an analog audio signal that is converted to digital.
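The resolution and SQNR formulas above can be checked directly; the short sketch below reproduces the document's example figures (the 3-bit, 0–1 V resolution and the 16-bit SQNR):

```python
# Minimal sketch of the resolution and SQNR formulas above.
import math

def lsb_voltage(v_hi, v_lo, bits):
    """Q = E_FSR / (2^M - 1): full-scale range divided by voltage intervals."""
    return (v_hi - v_lo) / (2**bits - 1)

def sqnr_db(bits):
    """Ideal signal-to-quantization-noise ratio, SQNR = 20*log10(2^Q) dB."""
    return 20 * math.log10(2**bits)

print(f"3-bit, 0..1 V : Q = {lsb_voltage(1.0, 0.0, 3):.3f} V")   # ~0.143 V
print(f"16-bit ideal SQNR = {sqnr_db(16):.1f} dB")               # ~96.3 dB
```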
Accuracy An ADC has several sources of errors. Quantization error and (assuming the ADC is intended to be linear) non-linearity are intrinsic to any analog-to-digital conversion. These errors are measured in a unit called the least significant bit (LSB). In the above example of an eight-bit ADC, an error of one LSB is 1/256 of the full signal range, or about 0.4%. Nonlinearity All ADCs suffer from nonlinearity errors caused by their physical imperfections, causing their output to deviate from a linear function (or some other function, in the case of a deliberately nonlinear ADC) of their input. These errors can sometimes be mitigated by calibration, or prevented by testing. Important parameters for linearity are integral nonlinearity and differential nonlinearity. These nonlinearities introduce distortion that can reduce the signal-to-noise ratio performance of the ADC and thus reduce its effective resolution. Jitter When digitizing a sine wave x(t) = A sin(2π f0 t), the use of a non-ideal sampling clock will result in some uncertainty in when samples are recorded. Provided that the actual sampling time uncertainty due to clock jitter is Δt, the error caused by this phenomenon can be estimated as Eap ≤ |x′(t)| Δt ≤ 2π f0 A Δt. This will result in additional recorded noise that will reduce the effective number of bits (ENOB) below that predicted by quantization error alone. The error is zero for DC, small at low frequencies, but significant with signals of high amplitude and high frequency. The effect of jitter on performance can be compared to quantization error: the jitter must satisfy Δt < 1 / (2^q π f0), where q is the number of ADC bits. Clock jitter is caused by phase noise. The resolution of ADCs with a digitization bandwidth between 1 MHz and 1 GHz is limited by jitter. For lower bandwidth conversions such as when sampling audio signals at 44.1 kHz, clock jitter has a less significant impact on performance.
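The jitter criterion above translates into surprisingly tight clock requirements. A quick sketch, with assumed example values (a 16-bit converter and a full-scale 10 kHz sine):

```python
# Sketch of the jitter bound above: the aperture error 2*pi*f0*A*dt stays
# below about one LSB only if dt < 1 / (2^q * pi * f0). Example values
# are illustrative assumptions, not a specification.
import math

def max_jitter(bits, f0):
    """Largest clock jitter (s) keeping aperture error under ~1 LSB."""
    return 1.0 / (2**bits * math.pi * f0)

bits, f0 = 16, 10e3
dt = max_jitter(bits, f0)
print(f"{bits}-bit ADC at f0 = {f0/1e3:.0f} kHz: jitter must stay below "
      f"{dt*1e12:.1f} ps")
```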
Sampling rate An analog signal is continuous in time and it is necessary to convert this to a flow of digital values. It is therefore required to define the rate at which new digital values are sampled from the analog signal. The rate of new values is called the sampling rate or sampling frequency of the converter. A continuously varying bandlimited signal can be sampled and then the original signal can be reproduced from the discrete-time values by a reconstruction filter. The Nyquist–Shannon sampling theorem implies that a faithful reproduction of the original signal is only possible if the sampling rate is higher than twice the highest frequency of the signal. Since a practical ADC cannot make an instantaneous conversion, the input value must necessarily be held constant during the time that the converter performs a conversion (called the conversion time). An input circuit called a sample and hold performs this task—in most cases by using a capacitor to store the analog voltage at the input, and using an electronic switch or gate to disconnect the capacitor from the input. Many ADC integrated circuits include the sample and hold subsystem internally. Aliasing An ADC works by sampling the value of the input at discrete intervals in time. Provided that the input is sampled above the Nyquist rate, defined as twice the highest frequency of interest, then all frequencies in the signal can be reconstructed. If frequencies above half the sampling rate are present, they are incorrectly detected as lower frequencies, a process referred to as aliasing. Aliasing occurs because instantaneously sampling a function at two or fewer times per cycle results in missed cycles, and therefore the appearance of an incorrectly lower frequency. For example, a 2 kHz sine wave being sampled at 1.5 kHz would be reconstructed as a 500 Hz sine wave. To avoid aliasing, the input to an ADC must be low-pass filtered to remove frequencies above half the sampling rate. This filter is called an anti-aliasing filter, and is essential for a practical ADC system that is applied to analog signals with higher frequency content. In applications where protection against aliasing is essential, oversampling may be used to greatly reduce or even eliminate it. Although aliasing in most systems is unwanted, it can be exploited to provide simultaneous down-mixing of a band-limited high-frequency signal (see undersampling and frequency mixer). The alias is effectively the lower heterodyne of the signal frequency and sampling frequency. Oversampling For economy, signals are often sampled at the minimum rate required, with the result that the quantization error introduced is white noise spread over the whole passband of the converter. Sampling a signal at a rate much higher than the Nyquist rate and then digitally filtering it to limit it to the signal bandwidth produces the following advantages:
Oversampling can make it easier to realize analog anti-aliasing filters
Improved audio bit depth
Reduced noise, especially when noise shaping is employed in addition to oversampling
Oversampling is typically used in audio frequency ADCs where the required sampling rate (typically 44.1 or 48 kHz) is very low compared to the clock speed of typical transistor circuits (>1 MHz). In this case, the performance of the ADC can be greatly increased at little or no cost. Furthermore, as any aliased signals are also typically out of band, aliasing can often be eliminated using very low cost filters. Relative speed and precision The speed of an ADC varies by type. The Wilkinson ADC is limited by the clock rate which is processable by current digital circuits. For a successive-approximation ADC, the conversion time scales with the logarithm of the resolution, i.e. the number of bits. Flash ADCs are certainly the fastest type of the three; the conversion is basically performed in a single parallel step. There is a potential tradeoff between speed and precision. Flash ADCs have drifts and uncertainties associated with the comparator levels, which result in poor linearity. To a lesser extent, poor linearity can also be an issue for successive-approximation ADCs; here, nonlinearity arises from accumulating errors from the subtraction processes. Wilkinson ADCs have the best linearity of the three. Sliding scale principle The sliding scale or randomizing method can be employed to greatly improve the linearity of any type of ADC, but especially flash and successive approximation types. For any ADC the mapping from input voltage to digital output value is not exactly a floor or ceiling function as it should be. Under normal conditions, a pulse of a particular amplitude is always converted to the same digital value. The problem is that the ranges of analog values for the digitized values are not all of the same widths, and the differential linearity decreases proportionally with the divergence from the average width. The sliding scale principle uses an averaging effect to overcome this phenomenon. A random, but known analog voltage is added to the sampled input voltage. It is then converted to digital form, and the equivalent digital amount is subtracted, thus restoring it to its original value. The advantage is that the conversion has taken place at a random point. The statistical distribution of the final levels is decided by a weighted average over a region of the range of the ADC. This in turn desensitizes it to the width of any specific level.
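A toy numerical illustration of this averaging effect (the same mechanism that makes dither useful): an ideal quantizer is assumed, with a known random offset added before conversion and subtracted afterwards. A real sliding-scale converter randomizes over its unequal code widths in the same way.

```python
# Toy simulation of the sliding-scale idea: add a random but known offset,
# quantize, subtract the offset, and average. Assumes an ideal quantizer
# and ideal subtraction; real hardware has unequal code widths.
import random

LSB = 1.0 / 256          # 8-bit converter over a 0..1 V range

def quantize(v):
    """Ideal mid-tread quantizer."""
    return round(v / LSB) * LSB

def sliding_scale(v, n=10000):
    total = 0.0
    for _ in range(n):
        offset = random.uniform(0, 0.25)      # known analog offset
        total += quantize(v + offset) - offset
    return total / n

v_in = 0.300017          # lies strictly inside one code bin
print(f"single conversion : {quantize(v_in):.6f} V")
print(f"sliding scale avg : {sliding_scale(v_in):.6f} V")  # close to v_in
```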
Types There are several common ways of implementing an electronic ADC. RC charge time Resistor-capacitor (RC) circuits have a known voltage charging and discharging curve that can be used to solve for an unknown analog value. Wilkinson The Wilkinson ADC was designed by Denys Wilkinson in 1950. The Wilkinson ADC is based on the comparison of an input voltage with that produced by a charging capacitor. The capacitor is allowed to charge until a comparator determines it matches the input voltage. Then, the capacitor is discharged linearly by using a constant current source. The time required to discharge the capacitor is proportional to the amplitude of the input voltage. While the capacitor is discharging, pulses from a high-frequency oscillator clock are counted by a register. The number of clock pulses recorded in the register is also proportional to the input voltage. Measuring analog resistance or capacitance If the analog value to measure is represented by a resistance or capacitance, then by including that element in an RC circuit (with other resistances or capacitances fixed) and measuring the time to charge the capacitance from a known starting voltage to another known ending voltage through the resistance from a known voltage supply, the value of the unknown resistance or capacitance can be determined using the capacitor charging equation V(t) = Vsupply (1 − e^(−t/RC)) and solving for the unknown resistance or capacitance using those starting and ending datapoints. This is similar to, but contrasts with, the Wilkinson ADC, which measures an unknown voltage with a known resistance and capacitance; here an unknown resistance or capacitance is measured with a known voltage. For example, the positive (and/or negative) pulse width from a 555 Timer IC in monostable or astable mode represents the time it takes to charge (and/or discharge) its capacitor from 1/3 Vsupply to 2/3 Vsupply. By sending this pulse into a microcontroller with an accurate clock, the duration of the pulse can be measured and converted using the capacitor charging equation to produce the value of the unknown resistance or capacitance. Larger resistances and capacitances will take a longer time to measure than smaller ones, and the accuracy is limited by the accuracy of the microcontroller clock and the amount of time available to measure the value, which potentially might even change during measurement or be affected by external parasitics. Direct-conversion A direct-conversion or flash ADC has a bank of comparators sampling the input signal in parallel, each firing for a specific voltage range. The comparator bank feeds a digital encoder logic circuit that generates a binary number on the output lines for each voltage range. ADCs of this type have a large die size and high power dissipation. They are often used for video, wideband communications, or other fast signals in optical and magnetic storage. The circuit consists of a resistive divider network, a set of op-amp comparators and a priority encoder. A small amount of hysteresis is built into the comparator to resolve any problems at voltage boundaries. At each node of the resistive divider, a comparison voltage is available. The purpose of the circuit is to compare the analog input voltage with each of the node voltages. The circuit has the advantage of high speed, as the conversion takes place simultaneously rather than sequentially. Typical conversion time is 100 ns or less. Conversion time is limited only by the speed of the comparator and of the priority encoder. This type of ADC has the disadvantage that the number of comparators required almost doubles for each added bit. Also, the larger the value of n, the more complex is the priority encoder.
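A behavioral sketch of the flash architecture just described, with an ideal tap ladder and ideal comparators (real comparators add offsets and hysteresis):

```python
# Behavioral sketch of a flash converter: a bank of comparators against
# evenly spaced reference taps, followed by a priority encoder that reports
# the highest tripped comparator. Idealized, not a circuit-level model.
BITS = 3
V_REF = 1.0
TAPS = [V_REF * k / 2**BITS for k in range(1, 2**BITS)]   # 7 taps for 3 bits

def flash_adc(v_in):
    thermometer = [v_in > tap for tap in TAPS]     # comparator bank output
    return sum(thermometer)                        # priority-encode to a code

for v in (0.05, 0.30, 0.62, 0.95):
    print(f"{v:.2f} V -> code {flash_adc(v)}")
```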
Successive approximation A successive-approximation ADC uses a comparator and a binary search to successively narrow a range that contains the input voltage. At each successive step, the converter compares the input voltage to the output of an internal digital-to-analog converter (DAC), which initially represents the midpoint of the allowed input voltage range. At each step in this process, the approximation is stored in a successive approximation register (SAR) and the output of the digital-to-analog converter is updated for a comparison over a narrower range.
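The binary search is easy to express in code. The following is a behavioral sketch with an ideal internal DAC; real converters add comparator offset and DAC nonlinearity:

```python
# Behavioral sketch of the successive-approximation loop: a binary search
# that sets one bit per step, comparing the input against an ideal DAC.
BITS = 8
V_REF = 1.0

def sar_adc(v_in):
    code = 0
    for bit in reversed(range(BITS)):              # MSB first
        trial = code | (1 << bit)                  # tentatively set this bit
        if v_in >= (trial + 0.5) * V_REF / 2**BITS:  # compare with DAC level
            code = trial                           # keep the bit
    return code

v = 0.300
code = sar_adc(v)
print(f"{v} V -> code {code} (~{code * V_REF / 2**BITS:.4f} V)")
```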
Ramp-compare A ramp-compare ADC produces a saw-tooth signal that ramps up or down then quickly returns to zero. When the ramp starts, a timer starts counting. When the ramp voltage matches the input, a comparator fires, and the timer's value is recorded. Timed ramp converters can be implemented economically; however, the ramp time may be sensitive to temperature because the circuit generating the ramp is often a simple analog integrator. A more accurate converter uses a clocked counter driving a DAC. A special advantage of the ramp-compare system is that converting a second signal just requires another comparator and another register to store the timer value. To reduce sensitivity to input changes during conversion, a sample and hold can charge a capacitor with the instantaneous input voltage, and the converter can measure the time required to discharge it with a constant current. Integrating An integrating ADC (also dual-slope or multi-slope ADC) applies the unknown input voltage to the input of an integrator and allows the voltage to ramp for a fixed time period (the run-up period). Then a known reference voltage of opposite polarity is applied to the integrator and is allowed to ramp until the integrator output returns to zero (the run-down period). The input voltage is computed as a function of the reference voltage, the constant run-up time period, and the measured run-down time period. The run-down time measurement is usually made in units of the converter's clock, so longer integration times allow for higher resolutions. Likewise, the speed of the converter can be improved by sacrificing resolution. Converters of this type (or variations on the concept) are used in most digital voltmeters for their linearity and flexibility. Charge balancing ADC The principle of the charge balancing ADC is to first convert the input signal to a frequency using a voltage-to-frequency converter. This frequency is then measured by a counter and converted to an output code proportional to the analog input. The main advantage of these converters is that it is possible to transmit frequency even in a noisy environment or in isolated form. However, the limitation of this circuit is that the output of the voltage-to-frequency converter depends upon an RC product whose value cannot be accurately maintained over temperature and time. Dual-slope ADC The analog part of the circuit consists of a high input impedance buffer, precision integrator and a voltage comparator. The converter first integrates the analog input signal for a fixed duration, and then it integrates an internal reference voltage of opposite polarity until the integrator output is zero. The main disadvantage of this circuit is the long conversion time. Dual-slope converters are particularly suitable for accurate measurement of slowly varying signals such as thermocouples and weighing scales. Delta-encoded A delta-encoded or counter-ramp ADC has an up-down counter that feeds a DAC. The input signal and the DAC both go to a comparator. The comparator controls the counter. The circuit uses negative feedback from the comparator to adjust the counter until the DAC's output matches the input signal, and the number is read from the counter. Delta converters have very wide ranges and high resolution, but the conversion time is dependent on the input signal behavior, though it will always have a guaranteed worst case. Delta converters are often very good choices to read real-world signals, as most signals from physical systems do not change abruptly. Some converters combine the delta and successive approximation approaches; this works especially well when high frequency components of the input signal are known to be small in magnitude. Pipelined A pipelined ADC (also called subranging quantizer) uses two or more conversion steps. First, a coarse conversion is done. In a second step, the difference to the input signal is determined with a DAC. This difference is then converted more precisely, and the results are combined in the last step. This can be considered a refinement of the successive-approximation ADC wherein the feedback reference signal consists of the interim conversion of a whole range of bits (for example, four bits) rather than just the next-most-significant bit. By combining the merits of the successive approximation and flash ADCs this type is fast, has a high resolution, and can be implemented efficiently. Delta-sigma A delta-sigma ADC (also known as a sigma-delta ADC) is based on a negative feedback loop with an analog filter and a low-resolution (often 1-bit) but high-sampling-rate ADC and DAC. The feedback loop continuously corrects accumulated quantization errors and performs noise shaping: quantization noise is reduced in the low frequencies of interest, but is increased in higher frequencies. Those higher frequencies may then be removed by a downsampling digital filter, which also converts the data stream from that high sampling rate with low bit depth to a lower rate with higher bit depth.
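A toy first-order delta-sigma modulator illustrates the loop just described; the decimation filter is replaced here by a plain average over the bitstream, and all values are idealized:

```python
# Toy first-order delta-sigma modulator: a feedback loop with an integrator
# and a 1-bit quantizer running at a high sample rate. The average of the
# output bitstream tracks the (DC) input; noise shaping and decimation
# filtering are not modeled beyond this simple averaging.
def delta_sigma(v_in, n=10000):
    integrator, ones = 0.0, 0
    for _ in range(n):
        feedback = 1.0 if integrator > 0 else -1.0   # 1-bit DAC output
        integrator += v_in - feedback                 # accumulate the error
        ones += feedback > 0
    return 2.0 * ones / n - 1.0                       # mean of +/-1 bitstream

for v in (-0.5, 0.0, 0.3):
    print(f"input {v:+.2f} -> bitstream average {delta_sigma(v):+.4f}")
```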
Time-interleaved A time-interleaved ADC uses M parallel ADCs where each ADC samples data on every Mth cycle of the effective sample clock. The result is that the sample rate is increased M times compared to what each individual ADC can manage. In practice, the individual differences between the M ADCs degrade the overall performance, reducing the spurious-free dynamic range (SFDR). However, techniques exist to correct for these time-interleaving mismatch errors. Intermediate FM stage An ADC with an intermediate FM stage first uses a voltage-to-frequency converter to produce an oscillating signal with a frequency proportional to the voltage of the input signal, and then uses a frequency counter to convert that frequency into a digital count proportional to the desired signal voltage. Longer integration times allow for higher resolutions. Likewise, the speed of the converter can be improved by sacrificing resolution. The two parts of the ADC may be widely separated, with the frequency signal passed through an opto-isolator or transmitted wirelessly. Some such ADCs use sine wave or square wave frequency modulation; others use pulse-frequency modulation. Such ADCs were once the most popular way to show a digital display of the status of a remote analog sensor. Time-stretch A time-stretch analog-to-digital converter (TS-ADC) digitizes a very wide bandwidth analog signal, which cannot be digitized by a conventional electronic ADC, by time-stretching the signal prior to digitization. It commonly uses a photonic preprocessor to time-stretch the signal, which effectively slows the signal down in time and compresses its bandwidth. As a result, an electronic ADC that would have been too slow to capture the original signal can now capture this slowed-down signal. For continuous capture of the signal, the front end also divides the signal into multiple segments in addition to time-stretching. Each segment is individually digitized by a separate electronic ADC. Finally, a digital signal processor rearranges the samples and removes any distortions added by the preprocessor to yield the binary data that is the digital representation of the original analog signal. Measuring physical values other than voltage Although the term ADC is usually associated with measurement of an analog voltage, some partially-electronic devices that convert some measurable physical analog quantity into a digital number can also be considered ADCs, for instance: Rotary encoders convert from an analog physical quantity that mechanically produces an amount of rotation into a stream of digital Gray code that a microcontroller can digitally interpret to derive the direction of rotation, angular position, and rotational speed. Capacitive sensing converts from the analog physical quantity of a capacitance. That capacitance could be a proxy for some other physical quantity, such as the distance some metal object is from a metal sensing plate, or the amount of water in a tank, or the permittivity of a dielectric material. Capacitance-to-digital (CDC) converters determine capacitance by applying a known excitation to a plate of a capacitor and measuring its charge. Digital calipers convert from the analog physical quantity of an amount of displacement between two sliding rulers. Inductive-to-digital converters measure a change of inductance caused by a conductive target moving in an inductor's AC magnetic field. Time-to-digital converters recognize events and provide a digital representation of the analog time they occurred; time-of-flight measurements, for instance, can convert from some analog quantity that affects a propagation delay for an event. Sensors that do not directly produce a voltage may produce one indirectly, or may be converted into a digital value in other ways: a resistive output (e.g. from a potentiometer or a force-sensing resistor) can be made into a voltage by sending a known current through it, or can be made into an RC charging-time measurement, to produce a digital result. Commercial In many cases, the most expensive part of an integrated circuit is the pins, because they make the package larger, and each pin has to be connected to the integrated circuit's silicon. To save pins, it is common for ADCs to send their data one bit at a time over a serial interface to the computer, with each bit coming out when a clock signal changes state. This saves quite a few pins on the ADC package, and in many cases, does not make the overall design any more complex.
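As an illustration of such a bit-serial interface, here is a hedged sketch in which the converter and its pins are simulated in software; the helper function is a hypothetical placeholder, not a driver for any real device:

```python
# Illustrative sketch of reading a serial-output ADC one bit per clock edge.
# The ADC is simulated: adc_bit() is a hypothetical stand-in for sampling
# the data pin after toggling the clock, not a real device API.
SAMPLE = 0b101101001011     # pretend 12-bit conversion result inside the ADC

def adc_bit(i):
    """Hypothetical stand-in for reading the data pin on the i-th clock."""
    return (SAMPLE >> (11 - i)) & 1          # MSB shifted out first

def read_adc(bits=12):
    value = 0
    for i in range(bits):                    # one clock cycle per bit
        value = (value << 1) | adc_bit(i)
    return value

print(f"read back: {read_adc():#014b}")      # matches SAMPLE
```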
Commercial ADCs often have several inputs that feed the same converter, usually through an analog multiplexer. Different models of ADC may include sample and hold circuits, instrumentation amplifiers or differential inputs, where the quantity measured is the difference between two inputs. Applications Music recording Analog-to-digital converters are integral to modern music reproduction technology and digital audio workstation-based sound recording. Music may be produced on computers using an analog recording, and therefore analog-to-digital converters are needed to create the pulse-code modulation (PCM) data streams that go onto compact discs and digital music files. The current crop of analog-to-digital converters utilized in music can sample at rates up to 192 kilohertz. Many recording studios record in 24-bit 96 kHz pulse-code modulation (PCM) format and then downsample and dither the signal for Compact Disc Digital Audio production (44.1 kHz) or to 48 kHz for radio and television broadcast applications. Digital signal processing ADCs are required in digital signal processing systems that process, store, or transport virtually any analog signal in digital form. TV tuner cards, for example, use fast video analog-to-digital converters. Slow on-chip 8-, 10-, 12-, or 16-bit analog-to-digital converters are common in microcontrollers. Digital storage oscilloscopes need very fast analog-to-digital converters, which are also crucial for software-defined radio and its new applications. Scientific instruments Digital imaging systems commonly use analog-to-digital converters for digitizing pixels. Some radar systems use analog-to-digital converters to convert signal strength to digital values for subsequent signal processing. Many other in situ and remote sensing systems commonly use analogous technology. Many sensors in scientific instruments produce an analog signal: temperature, pressure, pH, light intensity, etc. All these signals can be amplified and fed to an ADC to produce a digital representation. Displays Flat-panel displays are inherently digital and need an ADC to process an analog signal such as composite or VGA. Testing Testing an analog-to-digital converter requires an analog input source and hardware to send control signals and capture digital data output. Some ADCs also require an accurate source of reference signal. The key parameters to test an ADC are listed below (a sketch of how the two nonlinearity figures are computed follows the list):
DC offset error
DC gain error
Signal-to-noise ratio (SNR)
Total harmonic distortion (THD)
Integral nonlinearity (INL)
Differential nonlinearity (DNL)
Spurious-free dynamic range (SFDR)
Power dissipation
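As referenced above, DNL and INL can be estimated from measured code transition levels. The transition voltages below are made-up example data for a 3-bit, roughly 1 V converter; real test setups obtain them with a ramp or histogram test, and INL may be referenced to an endpoint or best-fit line:

```python
# Hedged sketch of computing DNL and INL from measured code transition
# levels. The transition voltages are assumed example data, not measurements.
LSB = 0.125   # ideal step for a 3-bit, ~1 V converter

# Input voltages at which the output code increments (made-up data).
transitions = [0.07, 0.18, 0.33, 0.44, 0.55, 0.70, 0.81]

ideal = [LSB * (k + 0.5) for k in range(len(transitions))]  # ideal levels
dnl = [(transitions[i + 1] - transitions[i]) / LSB - 1.0
       for i in range(len(transitions) - 1)]          # step-width error, LSB
inl = [(t - it) / LSB for t, it in zip(transitions, ideal)]  # curve error, LSB

print("DNL (LSB):", [f"{d:+.2f}" for d in dnl])
print("INL (LSB):", [f"{i:+.2f}" for i in inl])
```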