id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
46,182 | https://en.wikipedia.org/wiki/White%20noise | In signal processing, white noise is a random signal having equal intensity at different frequencies, giving it a constant power spectral density. The term is used with this or similar meanings in many scientific and technical disciplines, including physics, acoustical engineering, telecommunications, and statistical forecasting. White noise refers to a statistical model for signals and signal sources, not to any specific signal. White noise draws its name from white light, although light that appears white generally does not have a flat power spectral density over the visible band.
In discrete time, white noise is a discrete signal whose samples are regarded as a sequence of serially uncorrelated random variables with zero mean and finite variance; a single realization of white noise is a random shock. In some contexts, it is also required that the samples be independent and have identical probability distribution (in other words independent and identically distributed random variables are the simplest representation of white noise). In particular, if each sample has a normal distribution with zero mean, the signal is said to be additive white Gaussian noise.
The samples of a white noise signal may be sequential in time, or arranged along one or more spatial dimensions. In digital image processing, the pixels of a white noise image are typically arranged in a rectangular grid, and are assumed to be independent random variables with uniform probability distribution over some interval. The concept can be defined also for signals spread over more complicated domains, such as a sphere or a torus.
An infinite-bandwidth white noise signal is a purely theoretical construction. The bandwidth of white noise is limited in practice by the mechanism of noise generation, by the transmission medium and by finite observation capabilities. Thus, random signals are considered white noise if they are observed to have a flat spectrum over the range of frequencies that are relevant to the context. For an audio signal, the relevant range is the band of audible sound frequencies (between 20 and 20,000 Hz). Such a signal is heard by the human ear as a hissing sound, resembling the /h/ sound in a sustained aspiration. On the other hand, the sh sound in ash is a colored noise because it has a formant structure. In music and acoustics, the term white noise may be used for any signal that has a similar hissing sound.
In the context of phylogenetically based statistical methods, the term white noise can refer to a lack of phylogenetic pattern in comparative data. In nontechnical contexts, it is sometimes used to mean "random talk without meaningful contents".
Statistical properties
Any distribution of values is possible (although it must have zero DC component). Even a binary signal which can only take on the values 1 or -1 will be white if the sequence is statistically uncorrelated. Noise having a continuous distribution, such as a normal distribution, can of course be white.
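To make the binary example concrete, here is a minimal NumPy sketch (lengths, trial counts, and the random seed are illustrative assumptions, not from the article) showing that an uncorrelated ±1 sequence has an essentially flat average power spectrum:
```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 1024, 200

# Average the periodogram of many independent +/-1 sequences.
spectra = []
for _ in range(trials):
    x = rng.choice([-1.0, 1.0], size=n)          # uncorrelated binary signal, zero mean
    spectra.append(np.abs(np.fft.rfft(x))**2 / n)
mean_spectrum = np.mean(spectra, axis=0)

# Apart from statistical fluctuation, the spectrum is flat (close to 1 for unit-variance input).
print(mean_spectrum[1:8].round(2))
```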
It is often incorrectly assumed that Gaussian noise (i.e., noise with a Gaussian amplitude distribution; see normal distribution) necessarily refers to white noise, yet neither property implies the other. Gaussianity refers to the probability distribution with respect to the value, in this context the probability of the signal falling within any particular range of amplitudes, while the term 'white' refers to the way the signal power is distributed (i.e., independently) over time or among frequencies.
One form of white noise is the generalized mean-square derivative of the Wiener process or Brownian motion.
A generalization to random elements on infinite dimensional spaces, such as random fields, is the white noise measure.
Practical applications
Music
White noise is commonly used in the production of electronic music, usually either directly or as an input for a filter to create other types of noise signal. It is used extensively in audio synthesis, typically to recreate percussive instruments such as cymbals or snare drums which have high noise content in their frequency domain. A simple example of white noise is a nonexistent radio station (static).
Electronics engineering
White noise is also used to obtain the impulse response of an electrical circuit, in particular of amplifiers and other audio equipment. It is not used for testing loudspeakers as its spectrum contains too great an amount of high-frequency content. Pink noise, which differs from white noise in that it has equal energy in each octave, is used for testing transducers such as loudspeakers and microphones.
Computing
White noise is used as the basis of some random number generators. For example, Random.org uses a system of atmospheric antennas to generate random digit patterns from sources that can be well-modeled by white noise.
Tinnitus treatment
White noise is a common synthetic noise source used for sound masking by a tinnitus masker. White noise machines and other white noise sources are sold as privacy enhancers and sleep aids (see music and sleep) and to mask tinnitus. The Marpac Sleep-Mate, built in 1962 by traveling salesman Jim Buckwalter, was the first domestic-use white noise machine. Alternatively, the use of an AM radio tuned to unused frequencies ("static") is a simpler and more cost-effective source of white noise. However, white noise generated from a common commercial radio receiver tuned to an unused frequency is extremely vulnerable to being contaminated with spurious signals, such as adjacent radio stations, harmonics from non-adjacent radio stations, electrical equipment in the vicinity of the receiving antenna causing interference, or even atmospheric events such as solar flares and especially lightning.
Work environment
The effects of white noise upon cognitive function are mixed. Recently, a small study found that white noise background stimulation improves cognitive functioning among secondary students with attention deficit hyperactivity disorder (ADHD), while decreasing performance of non-ADHD students. Other work indicates it is effective in improving the mood and performance of workers by masking background office noise, but decreases cognitive performance in complex card sorting tasks.
Similarly, an experiment was carried out on sixty-six healthy participants to observe the benefits of using white noise in a learning environment. The experiment involved the participants identifying different images whilst having different sounds in the background. Overall the experiment showed that white noise does in fact have benefits in relation to learning. The experiments showed that white noise improved the participants' learning abilities and their recognition memory slightly.
Mathematical definitions
White noise vector
A random vector (that is, a random variable with values in R^n) is said to be a white noise vector or white random vector if its components each have a probability distribution with zero mean and finite variance, and are statistically independent: that is, their joint probability distribution must be the product of the distributions of the individual components.
A necessary (but, in general, not sufficient) condition for statistical independence of two variables is that they be statistically uncorrelated; that is, their covariance is zero. Therefore, the covariance matrix R of the components of a white noise vector w with n elements must be an n by n diagonal matrix, where each diagonal element R_ii is the variance of component w_i; and the correlation matrix must be the n by n identity matrix.
If, in addition to being independent, every variable in w also has a normal distribution with zero mean and the same variance σ², w is said to be a Gaussian white noise vector. In that case, the joint distribution of w is a multivariate normal distribution; the independence between the variables then implies that the distribution has spherical symmetry in n-dimensional space. Therefore, any orthogonal transformation of the vector will result in a Gaussian white random vector. In particular, under most types of discrete Fourier transform, such as FFT and Hartley, the transform W of w will be a Gaussian white noise vector, too; that is, the n Fourier coefficients of w will be independent Gaussian variables with zero mean and the same variance σ².
The power spectrum P of a random vector w can be defined as the expected value of the squared modulus of each coefficient of its Fourier transform W, that is, P_i = E(|W_i|²). Under that definition, a Gaussian white noise vector will have a perfectly flat power spectrum, with P_i = σ² for all i.
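The two statements above (near-diagonal covariance and a flat power spectrum) are easy to check numerically. The following is a hedged NumPy sketch; the vector length, number of trials, and σ are arbitrary choices, and the flat level of the spectrum depends on the FFT normalization convention:
```python
import numpy as np

rng = np.random.default_rng(1)
n, trials, sigma = 64, 5000, 2.0

W = rng.normal(0.0, sigma, size=(trials, n))   # each row: one white noise vector

# Sample covariance matrix: should be close to sigma^2 * identity.
cov = (W.T @ W) / trials
print("max off-diagonal:", np.max(np.abs(cov - np.diag(np.diag(cov)))).round(3))
print("mean diagonal   :", np.mean(np.diag(cov)).round(3))   # ~ sigma^2 = 4

# Power spectrum P_i = E|W_i|^2 of the DFT: flat, at n * sigma^2 = 256 under numpy's convention.
F = np.fft.fft(W, axis=1)
P = np.mean(np.abs(F)**2, axis=0)
print("spectrum spread :", P.min().round(1), "...", P.max().round(1))
```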
If w is a white random vector, but not a Gaussian one, its Fourier coefficients W_i will not be completely independent of each other; although for large n and common probability distributions the dependencies are very subtle, and their pairwise correlations can be assumed to be zero.
Often the weaker condition statistically uncorrelated is used in the definition of white noise, instead of statistically independent. However, some of the commonly expected properties of white noise (such as flat power spectrum) may not hold for this weaker version. Under this assumption, the stricter version can be referred to explicitly as independent white noise vector. Other authors use strongly white and weakly white instead.
An example of a random vector that is Gaussian white noise in the weak but not in the strong sense is the pair x = (x1, x2), where x1 is a normal random variable with zero mean, and x2 is equal to +x1 or to −x1, with equal probability. These two variables are uncorrelated and individually normally distributed, but they are not jointly normally distributed and are not independent. If x is rotated by 45 degrees, its two components will still be uncorrelated, but their distribution will no longer be normal.
In some situations, one may relax the definition by allowing each component of a white random vector w to have a non-zero expected value μ. In image processing especially, where samples are typically restricted to positive values, one often takes μ to be one half of the maximum sample value. In that case, the Fourier coefficient corresponding to the zero-frequency component (essentially, the average of the w_i) will also have a non-zero expected value; and the power spectrum will be flat only over the non-zero frequencies.
Discrete-time white noise
A discrete-time stochastic process is a generalization of a random vector with a finite number of components to infinitely many components. A discrete-time stochastic process w(n) is called white noise if its mean is equal to zero for all n, that is, E[w(n)] = 0, and if its autocorrelation function R_w(k) = E[w(n+k)·w(n)] has a nonzero value only for k = 0, that is, R_w(k) = σ²·δ(k).
Continuous-time white noise
In order to define the notion of white noise in the theory of continuous-time signals, one must replace the concept of a random vector by a continuous-time random signal; that is, a random process that generates a function w(t) of a real-valued parameter t.
Such a process is said to be white noise in the strongest sense if the value w(t) for any time t is a random variable that is statistically independent of its entire history before t. A weaker definition requires independence only between the values w(t1) and w(t2) at every pair of distinct times t1 and t2. An even weaker definition requires only that such pairs w(t1) and w(t2) be uncorrelated. As in the discrete case, some authors adopt the weaker definition for white noise, and use the qualifier independent to refer to either of the stronger definitions. Others use weakly white and strongly white to distinguish between them.
However, a precise definition of these concepts is not trivial, because some quantities that are finite sums in the finite discrete case must be replaced by integrals that may not converge. Indeed, the set of all possible instances of a signal w is no longer a finite-dimensional space R^n, but an infinite-dimensional function space. Moreover, by any definition a white noise signal w would have to be essentially discontinuous at every point; therefore even the simplest operations on w, like integration over a finite interval, require advanced mathematical machinery.
Some authors require each value w(t) to be a real-valued random variable with expectation μ and some finite variance σ². Then the covariance E(w(t1)·w(t2)) between the values at two times t1 and t2 is well-defined: it is zero if the times are distinct, and σ² if they are equal. However, by this definition, the integral of w(t) over any interval with positive width r would be simply the width times the expectation, rμ. This property renders the concept inadequate as a model of white noise signals either in a physical or mathematical sense.
Therefore, most authors define the signal indirectly by specifying random values for the integrals of w(t) and |w(t)|² over each interval [a, a+r]. In this approach, however, the value of w(t) at an isolated time cannot be defined as a real-valued random variable. Also the covariance E(w(t1)·w(t2)) becomes infinite when t1 = t2; and the autocorrelation function must be defined as N·δ(t1 − t2), where N is some real constant and δ is the Dirac delta function.
In this approach, one usually specifies that the integral of w(t) over an interval [a, a+r] is a real random variable with normal distribution, zero mean, and variance σ²·r; and also that the covariance of the integrals over any two intervals is σ² times the width of the intersection of the two intervals. This model is called a Gaussian white noise signal (or process).
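A numerical sketch of this integral-based definition: independent Gaussian increments drawn on a fine grid stand in for the integrals of w over small cells, so the integral over any interval has variance σ² times its width and two overlapping intervals have covariance σ² times the width of their intersection. The grid spacing, σ, and interval endpoints below are illustrative assumptions:
```python
import numpy as np

rng = np.random.default_rng(2)
sigma, dt, T, trials = 1.5, 1e-3, 2.0, 4000
nsteps = int(T / dt)

# The integral of white noise over each grid cell is an independent N(0, sigma^2 * dt) variable.
increments = rng.normal(0.0, sigma * np.sqrt(dt), size=(trials, nsteps))

def integral(a, b):
    """Integral of the white noise realization over [a, b), as a vector over trials."""
    ia, ib = round(a / dt), round(b / dt)
    return increments[:, ia:ib].sum(axis=1)

I1 = integral(0.0, 1.0)          # width 1.0
I2 = integral(0.5, 1.5)          # width 1.0, overlaps I1 on [0.5, 1.0)

print("var over width 1.0 :", I1.var().round(3))              # ~ sigma^2 * 1.0 = 2.25
print("cov of overlapping :", np.cov(I1, I2)[0, 1].round(3))   # ~ sigma^2 * 0.5 = 1.125
```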
In the mathematical field known as white noise analysis, a Gaussian white noise w is defined as a stochastic tempered distribution, i.e. a random variable with values in the space S′(R) of tempered distributions. Analogous to the case for finite-dimensional random vectors, a probability law on the infinite-dimensional space S′(R) can be defined via its characteristic function (existence and uniqueness are guaranteed by an extension of the Bochner–Minlos theorem, which goes under the name Bochner–Minlos–Sazonov theorem); analogously to the case of the multivariate normal distribution N(0, σ²I), which has characteristic function Φ(v) = exp(−(σ²/2)‖v‖²),
the white noise w must satisfy Φ(φ) = E[exp(i⟨w, φ⟩)] = exp(−(σ²/2)‖φ‖²_{L²(R)}),
where ⟨w, φ⟩ is the natural pairing of the tempered distribution w(ω) with the Schwartz function φ, taken scenariowise for ω ∈ Ω, and φ ∈ S(R).
Mathematical applications
Time series analysis and regression
In statistics and econometrics one often assumes that an observed series of data values is the sum of the values generated by a deterministic linear process, depending on certain independent (explanatory) variables, and on a series of random noise values. Then regression analysis is used to infer the parameters of the model process from the observed data, e.g. by ordinary least squares, and to test the null hypothesis that each of the parameters is zero against the alternative hypothesis that it is non-zero. Hypothesis testing typically assumes that the noise values are mutually uncorrelated with zero mean and have the same Gaussian probability distribution; in other words, that the noise is Gaussian white (not just white). If there is non-zero correlation between the noise values underlying different observations then the estimated model parameters are still unbiased, but estimates of their uncertainties (such as confidence intervals) will be biased (not accurate on average). This is also true if the noise is heteroskedastic, that is, if it has different variances for different data points.
Alternatively, in the subset of regression analysis known as time series analysis there are often no explanatory variables other than the past values of the variable being modeled (the dependent variable). In this case the noise process is often modeled as a moving average process, in which the current value of the dependent variable depends on current and past values of a sequential white noise process.
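As an illustration (not a prescribed model), the sketch below builds a moving average process from a white noise sequence using arbitrary MA(2) coefficients, and shows that the result is serially correlated even though the driving shocks are not:
```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
theta = [1.0, 0.6, 0.3]            # illustrative MA(2) coefficients on current and past shocks

e = rng.normal(0.0, 1.0, size=n)   # white noise "shocks"
x = np.convolve(e, theta, mode="full")[:n]   # x_t = e_t + 0.6*e_{t-1} + 0.3*e_{t-2}

def acf(series, lag):
    """Lag-k sample autocorrelation."""
    s = series - series.mean()
    return np.dot(s[:-lag], s[lag:]) / np.dot(s, s)

print("lag-1 autocorrelation of e:", round(acf(e, 1), 3))   # ~ 0, the shocks are white
print("lag-1 autocorrelation of x:", round(acf(x, 1), 3))   # clearly non-zero
```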
Random vector transformations
By a suitable linear transformation (a coloring transformation), a white random vector can be used to produce a non-white random vector (that is, a list of random variables) whose elements have a prescribed covariance matrix. Conversely, a random vector with known covariance matrix can be transformed into a white random vector by a suitable whitening transformation.
These two ideas are crucial in applications such as channel estimation and channel equalization in communications and audio. These concepts are also used in data compression.
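One common way to realize a coloring transformation is the Cholesky factor of the prescribed covariance matrix; its inverse acts as a whitening transformation. In the hedged sketch below, the target covariance matrix is an arbitrary illustrative choice:
```python
import numpy as np

rng = np.random.default_rng(4)
target_cov = np.array([[2.0, 0.8],
                       [0.8, 1.0]])          # illustrative prescribed covariance
L = np.linalg.cholesky(target_cov)           # coloring transformation

w = rng.normal(size=(2, 50_000))             # white random vectors (unit covariance)
x = L @ w                                    # colored: cov(x) ~ target_cov
w_back = np.linalg.solve(L, x)               # whitening: cov(w_back) ~ identity

print(np.cov(x).round(2))        # close to target_cov
print(np.cov(w_back).round(2))   # close to the 2x2 identity matrix
```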
Generation
White noise may be generated digitally with a digital signal processor, microprocessor, or microcontroller. Generating white noise typically entails feeding an appropriate stream of random numbers to a digital-to-analog converter. The quality of the white noise will depend on the quality of the algorithm used.
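As a rough sketch of the digital route, the snippet below generates Gaussian random samples, scales them to 16-bit PCM (the integer format a typical digital-to-analog converter or sound card expects), and writes them to a WAV file with the standard-library wave module; the file name, duration, and output level are arbitrary assumptions:
```python
import wave
import numpy as np

rate, seconds, level = 44_100, 2.0, 0.2      # sample rate, duration, peak level (illustrative)
rng = np.random.default_rng(5)

samples = rng.normal(0.0, 1.0, size=int(rate * seconds))
samples = np.clip(samples / np.max(np.abs(samples)) * level, -1.0, 1.0)
pcm = (samples * 32767).astype("<i2")        # little-endian 16-bit PCM

with wave.open("white_noise.wav", "wb") as f:
    f.setnchannels(1)        # mono
    f.setsampwidth(2)        # 2 bytes = 16 bits per sample
    f.setframerate(rate)
    f.writeframes(pcm.tobytes())
```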
Informal use
The term is sometimes used as a colloquialism to describe a backdrop of ambient sound, creating an indistinct or seamless commotion. Following are some examples:
Chatter from multiple conversations within the acoustics of a confined space.
The pleonastic jargon used by politicians to mask a point that they don't want noticed.
Music that is disagreeable, harsh, dissonant or discordant with no melody.
The term can also be used metaphorically, as in the novel White Noise (1985) by Don DeLillo which explores the symptoms of modern culture that came together so as to make it difficult for an individual to actualize their ideas and personality.
| White noise | [
"Physics",
"Engineering"
] | 3,363 | [
"Statistical signal processing",
"Classical mechanics",
"Acoustics",
"Engineering statistics"
] |
46,183 | https://en.wikipedia.org/wiki/Butter | Butter is a dairy product made from the fat and protein components of churned cream. It is a semi-solid emulsion at room temperature, consisting of approximately 80% butterfat. It is used at room temperature as a spread, melted as a condiment, and used as a fat in baking, sauce-making, pan frying, and other cooking procedures.
Most frequently made from cow's milk, butter can also be manufactured from the milk of other mammals, including sheep, goats, buffalo, and yaks. It is made by churning milk or cream to separate the fat globules from the buttermilk. Salt has been added to butter since antiquity to help preserve it, particularly when being transported; salt may still play a preservation role but is less important today as the entire supply chain is usually refrigerated. In modern times, salt may be added for taste. Food coloring is sometimes added to butter. Rendering butter, removing the water and milk solids, produces clarified butter, or ghee, which is almost entirely butterfat.
Butter is a water-in-oil emulsion resulting from an inversion of the cream, where the milk proteins are the emulsifiers. Butter remains a firm solid when refrigerated but softens to a spreadable consistency at room temperature and melts to a thin liquid consistency at . The density of butter is . It generally has a pale yellow color but varies from deep yellow to nearly white. Its natural, unmodified color is dependent on the source animal's feed and genetics, but the commercial manufacturing process sometimes alters this with food colorings like annatto or carotene.
Etymology
The word butter derives (via Germanic languages) from the Latin butyrum, which is the latinisation of the Greek βούτυρον (bouturon) and βούτυρος. This may be a compound of βοῦς (bous), "ox, cow" + τυρός (turos), "cheese", that is "cow-cheese". The word turos ("cheese") is attested in Mycenaean Greek. The Latinized form is found in the name butyric acid, a compound found in rancid butter and other dairy products.
Production
Unhomogenized milk and cream contain butterfat in microscopic globules. These globules are surrounded by membranes made of phospholipids (fatty acid emulsifiers) and proteins, which prevent the fat in milk from pooling together into a single mass. Butter is produced by agitating cream, which damages these membranes and allows the milk fats to conjoin, separating from the other parts of the cream. Variations in the production method will create butters with different consistencies, mostly due to the butterfat composition in the finished product. Butter contains fat in three separate forms: free butterfat, butterfat crystals, and undamaged fat globules. In the finished product, different proportions of these forms result in different consistencies within the butter; butters with many crystals are harder than butters dominated by free fats.
Churning produces small butter grains floating in the water-based portion of the cream. This watery liquid is called buttermilk, although the buttermilk most commonly sold today is instead directly fermented skimmed milk. The buttermilk is drained off; sometimes more buttermilk is removed by rinsing the grains with water. Then the grains are "worked": pressed and kneaded together. When prepared manually, this is done using wooden boards called scotch hands. This consolidates the butter into a solid mass and breaks up embedded pockets of buttermilk or water into tiny droplets.
Commercial butter is about 80% butterfat and 15% water; traditionally-made butter may have as little as 65% fat and 30% water. Butterfat is a mixture of triglyceride, a triester derived from glycerol, and three of any of several fatty acid groups. Annatto is sometimes added by U.S. butter manufacturers without declaring it on the label because the U.S. allows butter to have an undisclosed flavorless and natural coloring agent (whereas all other foods in the U.S. must label coloring agents). The preservative lactic acid is sometimes added instead of salt (and as a flavor enhancer), and sometimes additional diacetyl is added to boost the buttery flavor (in the U.S., both ingredients can be listed simply as "natural flavors"). When used together in the NIZO manufacturing method, these two flavorings produce the flavor of cultured butter without actually fully fermenting.
Types
Before modern factory butter making, cream was usually collected from several milkings and was therefore several days old and somewhat fermented by the time it was made into butter. Butter made in this traditional way (from a fermented cream) is known as cultured butter. During fermentation, the cream naturally sours as bacteria convert milk sugars into lactic acid. The fermentation process produces additional aroma compounds, including diacetyl, which makes for a fuller-flavored and more "buttery" tasting product.
Butter made from fresh cream is called sweet cream butter. Production of sweet cream butter first became common in the 19th century, when the development of refrigeration and the mechanical milk separator made sweet cream butter faster and cheaper to produce at scale (sweet cream butter can be made in 6 hours, whereas cultured butter can take up to 72 hours to make).
Cultured butter is preferred throughout continental Europe, while sweet cream butter dominates in the United States and the United Kingdom. Chef Jansen Chan, the director of pastry operations at the International Culinary Center in Manhattan, says, "It's no secret that dairy in France and most of Europe is higher quality than most of the U.S." The combination of butter culturing, the 82% butterfat minimum (as opposed to the 80% minimum in the U.S.), and the fact that French butter is grass-fed, accounts for why French pastry (and French food in general) has a reputation for being richer-tasting and flakier. Cultured butter is sometimes labeled "European-style" butter in the United States, although cultured butter is made and sold by some, especially Amish, dairies.
Milk that is to be made into butter is usually pasteurized during production to kill pathogenic bacteria and other microbes. Butter made from unpasteurized (raw) milk is very rare and can be dangerous, because the milk may carry pathogenic microbes. Commercial raw milk products are not legal to sell through interstate commerce in the United States and are very rare in Europe. Raw cream butter is generally only found made at home by dairy farmers or by consumers who have purchased raw whole milk directly from them, skimmed the cream themselves, and made butter with it.
Clarified butter
Clarified butter has almost all of its water and milk solids removed, leaving almost-pure butterfat. Clarified butter is made by heating butter to its melting point and then allowing it to cool; after settling, the remaining components separate by density. At the top, whey proteins form a skin, which is removed. The resulting butterfat is then poured off from the mixture of water and casein proteins that settle to the bottom.
Ghee is clarified butter that has been heated to around 120 °C (250 °F) after the water has evaporated, turning the milk solids brown. This process flavors the ghee, and also produces antioxidants that help protect it from rancidity. Because of this, ghee can be kept for six to eight months under normal conditions.
Whey butter
Cream may be separated (usually by a centrifuge or by sedimentation) from whey instead of milk, as a byproduct of cheese-making. Whey butter may be made from whey cream. Whey cream and butter have a lower fat content and taste more salty, tangy and "cheesy". They are also cheaper to make than "sweet" cream and butter. The fat content of whey is low, so 1,000 pounds of whey will typically yield only three pounds of butter.
European butters
There are several butters produced in Europe with protected geographical indications; these include:
Beurre d'Ardenne, from Belgium
Beurre d'Isigny, from France
Beurre Charentes-Poitou (Which also includes: Beurre des Charentes and Beurre des Deux-Sèvres under the same classification), from France
Beurre Rose, from Luxembourg
Mantequilla de Soria, from Spain
Mantega de l'Alt Urgell i la Cerdanya, from Spain
Rucava white butter (Rucavas baltais sviests), from Latvia
History
Elaine Khosrova traces the invention of butter back to Neolithic-era Africa around 8000 BC in her book on the subject. A later Sumerian tablet, dating to approximately 2500 BC, describes the butter-making process, starting from the milking of cattle, while contemporary Sumerian tablets identify butter as a ritual offering.
In the Mediterranean climate, unclarified butter spoils quickly, unlike cheese, so it is not a practical method of preserving the nutrients of milk. The ancient Greeks and Romans seem to have used butter only as an unguent and a medicine, and considered it a food of the barbarians.
A play by the Greek comic poet Anaxandrides refers to Thracians as boutyrophagoi, "butter-eaters". In his Natural History, Pliny the Elder calls butter "the most delicate of food among barbarous nations" and goes on to describe its medicinal properties. Later, the physician Galen also described butter as a medicinal agent only.
Middle Ages
In the cooler climates of northern Europe, people could store butter longer before it spoiled. Scandinavia has the oldest tradition in Europe of butter export trade, dating at least to the 12th century. After the fall of Rome and through much of the Middle Ages, butter was a common food across most of Europe—but had a low reputation, and so was consumed principally by peasants. Butter slowly became more accepted by the upper class, notably when the Roman Catholic Church allowed its consumption during Lent from the early 16th century. Bread and butter became common fare among the middle class and the English, in particular, gained a reputation for their liberal use of melted butter as a sauce with meat and vegetables.
In antiquity, butter was used for fuel in lamps, as a substitute for oil. The Butter Tower of Rouen Cathedral was erected in the early 16th century when Archbishop Georges d'Amboise authorized the burning of butter during Lent, instead of oil, which was scarce at the time.
Across northern Europe, butter was sometimes packed into barrels (firkins) and buried in peat bogs, perhaps for years. Such "bog butter" would develop a strong flavor as it aged, but remain edible, in large part because of the cool, airless, antiseptic and acidic environment of a peat bog. Firkins of such buried butter are a common archaeological find in Ireland; the National Museum of Ireland – Archaeology has some containing "a grayish cheese-like substance, partially hardened, not much like butter, and quite free from putrefaction." The practice was most common in Ireland in the 11th–14th centuries; it ended entirely before the 19th century.
Industrialization
Until the 19th century, the vast majority of butter was made by hand, on farms. Butter also provided extra income to farm families. They used wood presses with carved decoration to press butter into pucks or small bricks to sell at nearby markets or general stores. The decoration identified the farm that produced the butter. This practice continued until production was mechanized and butter was produced in less decorative stick form.
Like Ireland, France became well known for its butter, particularly in Normandy and Brittany. Butter consumption in London in the mid-1840s was estimated at 15,357 tons annually.
The first butter factories appeared in the United States in the early 1860s, after the successful introduction of cheese factories a decade earlier. In the late 1870s, the centrifugal cream separator was introduced, marketed most successfully by Swedish engineer Carl Gustaf Patrik de Laval.
In 1920, Otto Hunziker authored The Butter Industry, Prepared for Factory, School and Laboratory, a well-known text in the industry that enjoyed at least three editions (1920, 1927, 1940). As part of the efforts of the American Dairy Science Association, Hunziker and others published articles regarding: causes of tallowiness (an odor defect, distinct from rancidity, a taste defect); mottles (an aesthetic issue related to uneven color); introduced salts; the impact of creamery metals and liquids; and acidity measurement. These and other ADSA publications helped standardize practices internationally.
Butter consumption declined in most western nations during the 20th century, mainly because of the rising popularity of margarine, which is less expensive and, until recent years, was perceived as being healthier. In the United States, margarine consumption overtook butter during the 1950s, and it is still the case today that more margarine than butter is eaten in the U.S. and the EU.
Worldwide production
In 1997, India produced of butter, most of which was consumed domestically. Second in production was the United States (), followed by France (), Germany (), and New Zealand (). France ranks first in per capita butter consumption with 8 kg per capita per year. In terms of absolute consumption, Germany was second after India, using of butter in 1997, followed by France (), Russia (), and the United States (). New Zealand, Australia, Denmark and Ukraine are among the few nations that export a significant percentage of the butter they produce.
Different varieties are found around the world. Smen is a spiced Moroccan clarified butter, buried in the ground and aged for months or years. A similar product is maltash of the Hunza Valley, where cow and yak butter can be buried for decades, and is used at events such as weddings. Yak butter is a specialty in Tibet; tsampa, barley flour mixed with yak butter, is a staple food. Butter tea is consumed in the Himalayan regions of Tibet, Bhutan, Nepal and India. It consists of tea served with intensely flavored—or "rancid"—yak butter and salt. In African and Asian nations, butter is sometimes traditionally made from sour milk rather than cream. It can take several hours of churning to produce workable butter grains from fermented milk.
Storage
Normal butter softens to a spreadable consistency around 15 °C (60 °F), well above refrigerator temperatures. The "butter compartment" found in many refrigerators may be one of the warmer sections inside, but it still leaves butter quite hard. Until recently, many refrigerators sold in New Zealand featured a "butter conditioner", a compartment kept warmer than the rest of the refrigerator—but still cooler than room temperature—with a small heater. Keeping butter tightly wrapped delays rancidity, which is hastened by exposure to light or air, and also helps prevent it from picking up other odors. Wrapped butter has a shelf life of several months at refrigerator temperatures. Butter can also be frozen to extend its storage life.
Packaging
United States
In the United States, butter has traditionally been made into small, rectangular blocks by means of a pair of wooden butter paddles. It is usually produced in sticks that are individually wrapped in waxed or foiled paper, and sold as a package of 4 sticks. This practice is believed to have originated in 1907, when Swift and Company began packaging butter in this manner for mass distribution.
Due to historical differences in butter printers (machines that cut and package butter), 4-ounce sticks are commonly produced in two different shapes:
The dominant shape east of the Rocky Mountains is the Elgin, or Eastern-pack shape, named for a dairy in Elgin, Illinois. The sticks measure and are typically sold stacked two by two in elongated cube-shaped boxes.
West of the Rocky Mountains, butter printers standardized on a different shape that is now referred to as the Western-pack shape. These butter sticks measure and are usually sold with four sticks packed side-by-side in a flat, rectangular box.
Most butter dishes are designed for Elgin-style butter sticks.
Elsewhere
Outside the United States, butter is measured for sale by mass (rather than by volume or unit/stick), and is often sold in and packages.
Bulk packaging
Since the 1940s, but more commonly the 1960s, butter pats have been individually wrapped and packed in cardboard boxes. Prior to use of cardboard, butter was bulk packed in wood. The earliest discoveries used firkins. From about 1882 wooden boxes were used, as the introduction of refrigeration on ships brought about longer transit times. Butter boxes were generally made with woods whose resin would not taint the butter, such as sycamore, kahikatea, hoop pine, maple, or spruce. They commonly weighed a firkin at .
In cooking and gastronomy
Butter has been considered indispensable in French cuisine since the 17th century. Chefs and cooks have extolled its importance: Fernand Point said "Donnez-moi du beurre, encore du beurre, toujours du beurre!" ('Give me butter, more butter, still more butter!'). Julia Child said, "With enough butter, anything is good."
Melted butter plays an important role in the preparation of sauces, notably in French cuisine. Beurre noisette (hazelnut butter) and Beurre noir (black butter) are sauces of melted butter cooked until the milk solids and sugars have turned golden or dark brown; they are often finished with an addition of vinegar or lemon juice. Hollandaise and béarnaise sauces are emulsions of egg yolk and melted butter. Hollandaise and béarnaise sauces are stabilized with the powerful emulsifiers in the egg yolks, but butter itself contains enough emulsifiers—mostly remnants of the fat globule membranes—to form a stable emulsion on its own.
Beurre blanc (white butter) is made by whisking butter into reduced vinegar or wine, forming an emulsion with the texture of thick cream. Beurre monté (prepared butter) is melted but still emulsified butter; it lends its name to the practice of "mounting" a sauce with butter: whisking cold butter into any water-based sauce at the end of cooking, giving the sauce a thicker body and a glossy shine—as well as a buttery taste.
Butter is used for sautéing and frying, although its milk solids brown and burn above 150 °C (about 300 °F), a rather low temperature for most applications. The smoke point of butterfat is around 200 °C (400 °F), so clarified butter or ghee is better suited to frying.
Butter fills several roles in baking, including making possible a range of textures, making chemical leavenings work better, tenderizing proteins, and enhancing the tastes of other ingredients. It is used in a similar manner to other solid fats like lard, suet, or shortening, but has a flavor that may better complement sweet baked goods.
Compound butters are mixtures of butter and other ingredients used to flavor various dishes.
Nutritional information
Butter (salted during manufacturing) is 16% water, 81% fat, and 1% protein, with negligible carbohydrates (values per 100 g). Saturated fat makes up 51% of the total fat in butter.
In a reference amount of 100 grams, butter supplies 717 calories and 76% of the Daily Value (DV) for vitamin A, 15% DV for vitamin E, and 28% DV for sodium, with no other micronutrients in significant content. In 100 grams, salted butter contains 215 mg of cholesterol.
As butter is essentially just the milk fat, it contains only traces of lactose, so moderate consumption of butter is not a problem for lactose intolerant people. People with milk allergies may still need to avoid butter, which contains enough of the allergy-causing proteins to cause reactions.
Health concerns
A 2015 study concluded that "hypercholesterolemic people should keep their consumption of butter to a minimum, whereas moderate butter intake may be considered part of the diet in the normocholesterolemic population."
A meta-analysis and systematic review published in 2016 found relatively small or insignificant overall associations of a dose of 14g/day of butter with mortality and cardiovascular disease, and consumption was insignificantly inversely associated with incidence of diabetes. The study states that "findings do not support a need for major emphasis in dietary guidelines on either increasing or decreasing butter consumption."
See also
List of butter dishes
List of dairy products
List of butter sauces
List of spreads
References
Further reading
Michael Douma (editor). WebExhibits' Butter pages . Retrieved 21 November 2005.
Grigg, David B. (7 November 1974). The Agricultural Systems of the World: An Evolutionary Approach, pp. 196–198. Google Print (accessed 28 November 2005). Also available in print from Cambridge University Press.
External links
Manufacture of butter, The University of Guelph
"Butter", Food Resource, College of Health and Human Sciences, Oregon State University, 20 February 2007. – FAQ, links, and extensive bibliography of food science articles on butter.
Cork Butter Museum: the story of Ireland’s most important food export and the world’s largest butter market
Virtual Museum Exhibit on Milk, Cream & Butter
| Butter | [
"Physics",
"Chemistry",
"Materials_science"
] | 4,540 | [
"Chemical mixtures",
"Condensed matter physics",
"Colloids"
] |
46,202 | https://en.wikipedia.org/wiki/Pink%20noise | Pink noise, noise, fractional noise or fractal noise is a signal or process with a frequency spectrum such that the power spectral density (power per frequency interval) is inversely proportional to the frequency of the signal. In pink noise, each octave interval (halving or doubling in frequency) carries an equal amount of noise energy.
Pink noise sounds like a waterfall. It is often used to tune loudspeaker systems in professional audio. Pink noise is one of the most commonly observed signals in biological systems.
The name arises from the pink appearance of visible light with this power spectrum. This is in contrast with white noise which has equal intensity per frequency interval.
Definition
Within the scientific literature, the term 1/f noise is sometimes used loosely to refer to any noise with a power spectral density of the form S(f) ∝ 1/f^α, where f is frequency and 0 < α < 2, with exponent α usually close to 1. One-dimensional signals with α = 1 are usually called pink noise.
A length-N one-dimensional pink noise signal (i.e., a Gaussian white noise signal with zero mean and standard deviation σ, which has been suitably filtered) can be described as a sum of sine waves with different frequencies, whose amplitudes fall off inversely with the square root of frequency (so that power, which is the square of amplitude, falls off inversely with frequency) and whose phases are random; the amplitude factors are i.i.d. chi-distributed variables, and the phases are uniform random variables.
In a two-dimensional pink noise signal, the amplitude at any orientation falls off inversely with frequency; a pink noise square of side length N can be written as an analogous two-dimensional sum of sinusoids.
General 1/f^α-like noises occur widely in nature and are a source of considerable interest in many fields. Noises with α near 1 generally come from condensed-matter systems in quasi-equilibrium, as discussed below. Noises with a broad range of α generally correspond to a wide range of non-equilibrium driven dynamical systems.
Pink noise sources include flicker noise in electronic devices. In their study of fractional Brownian motion, Mandelbrot and Van Ness proposed the name fractional noise (sometimes since called fractal noise) to describe 1/f^α noises for which the exponent α is not an even integer, or that are fractional derivatives of Brownian (1/f²) noise.
Description
In pink noise, there is equal energy per octave of frequency. The energy of pink noise at each frequency level, however, falls off at roughly 3 dB per octave. This is in contrast to white noise which has equal energy at all frequency levels.
The human auditory system, which processes frequencies in a roughly logarithmic fashion approximated by the Bark scale, does not perceive different frequencies with equal sensitivity; signals around 1–4 kHz sound loudest for a given intensity. However, humans still differentiate between white noise and pink noise with ease.
Graphic equalizers also divide signals into bands logarithmically and report power by octaves; audio engineers put pink noise through a system to test whether it has a flat frequency response in the spectrum of interest. Systems that do not have a flat response can be equalized by creating an inverse filter using a graphic equalizer. Because pink noise tends to occur in natural physical systems, it is often useful in audio production. Pink noise can be processed, filtered, and/or effects can be added to produce desired sounds. Pink-noise generators are commercially available.
One parameter of noise, the peak versus average energy contents, or crest factor, is important for testing purposes, such as for audio power amplifier and loudspeaker capabilities because the signal power is a direct function of the crest factor. Various crest factors of pink noise can be used in simulations of various levels of dynamic range compression in music signals. On some digital pink-noise generators the crest factor can be specified.
Generation
Pink noise can be computer-generated by first generating a white noise signal, Fourier-transforming it, then dividing the amplitudes of the different frequency components by the square root of the frequency (in one dimension), or by the frequency (in two dimensions), etc. This is equivalent to spatially filtering (convolving) the white noise signal with a white-to-pink filter. For a length-N signal in one dimension, the filter has a corresponding 1/√f frequency response.
Matlab programs are available to generate pink and other power-law coloured noise in one or any number of dimensions.
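A minimal NumPy version of the FFT-filtering recipe just described (the function name, length, and unit-variance normalization are illustrative choices, not a standard API):
```python
import numpy as np

def pink_noise(n, rng=None):
    """Generate length-n pink noise by 1/sqrt(f) filtering of white noise in the frequency domain."""
    rng = rng or np.random.default_rng()
    white = rng.normal(size=n)
    spectrum = np.fft.rfft(white)
    f = np.fft.rfftfreq(n)
    scale = np.ones_like(f)
    scale[1:] = 1.0 / np.sqrt(f[1:])       # leave the DC bin alone to avoid division by zero
    pink = np.fft.irfft(spectrum * scale, n)
    return pink / np.std(pink)             # normalize to unit variance

x = pink_noise(1 << 16, np.random.default_rng(7))

# Estimated spectral slope on a log-log plot should be close to -1, i.e. ~1/f.
P = np.abs(np.fft.rfft(x))**2
f = np.fft.rfftfreq(x.size)[1:]
slope = np.polyfit(np.log(f), np.log(P[1:]), 1)[0]
print(round(slope, 2))
```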
Properties
Power-law spectra
The power spectrum of pink noise is 1/f only for one-dimensional signals. For two-dimensional signals (e.g., images) the average power spectrum at any orientation falls as 1/f², and in d dimensions it falls as 1/f^d. In every case, each octave carries an equal amount of noise power.
The average amplitude and power of a pink noise signal at any orientation, and the total power across all orientations, fall off as some power of the frequency; corresponding power-law frequency dependencies hold for general power-law coloured noise with spectral exponent α (for example, Brown noise has α = 2).
Distribution of point values
Consider pink noise of any dimension that is produced by generating a Gaussian white noise signal with mean μ and standard deviation σ, then multiplying its spectrum with the 1/√f filter (equivalent to spatially filtering it with the corresponding convolution kernel). Then the point values of the pink noise signal will also be normally distributed, with mean μ and with a standard deviation determined by σ and the filter.
Autocorrelation
Unlike white noise, which has no correlations across the signal, a pink noise signal is correlated with itself, as follows.
1D signal
The Pearson's correlation coefficient of a one-dimensional pink noise signal (comprising discrete frequencies ) with itself across a distance in the configuration (space or time) domain is:
If instead of discrete frequencies, the pink noise comprises a superposition of continuous frequencies from to , the autocorrelation coefficient is:
where is the cosine integral function.
2D signal
The Pearson's autocorrelation coefficient of a two-dimensional pink noise signal comprising discrete frequencies is theoretically approximated as:
where is the Bessel function of the first kind.
Occurrence
Pink noise has been discovered in the statistical fluctuations of an extraordinarily diverse number of physical and biological systems (Press, 1978; see articles in Handel & Chung, 1993, and references therein). Examples of its occurrence include fluctuations in tide and river heights, quasar light emissions, heart beat, firings of single neurons, resistivity in solid-state electronics and single-molecule conductance signals resulting in flicker noise. Pink noise describes the statistical structure of many natural images.
General 1/f^α noises occur in many physical, biological and economic systems, and some researchers describe them as being ubiquitous. In physical systems, they are present in some meteorological data series and in the electromagnetic radiation output of some astronomical bodies. In biological systems, they are present in, for example, heart beat rhythms, neural activity, and the statistics of DNA sequences, as a generalized pattern.
An accessible introduction to the significance of pink noise is one given by Martin Gardner (1978) in his Scientific American column "Mathematical Games". In this column, Gardner asked for the sense in which music imitates nature. Sounds in nature are not musical in that they tend to be either too repetitive (bird song, insect noises) or too chaotic (ocean surf, wind in trees, and so forth). The answer to this question was given in a statistical sense by Voss and Clarke (1975, 1978), who showed that pitch and loudness fluctuations in speech and music are pink noises. So music is like tides not in terms of how tides sound, but in how tide heights vary.
Precision timekeeping
The ubiquitous 1/f noise poses a "noise floor" to precision timekeeping; a sketch of the argument follows.
Suppose that we have a timekeeping device (it could be anything from quartz oscillators and atomic clocks to hourglasses). Let its readout be a real number c(t) that changes with the actual time t. For concreteness, let us consider a quartz oscillator. In a quartz oscillator, c(t) is the number of oscillations, and r(t) = dc/dt is the rate of oscillation. The rate of oscillation has a constant component r0 and a fluctuating component δr(t), so r(t) = r0 + δr(t). By selecting the right units for c, we can have r0 = 1, meaning that on average, one second of clock-time passes for every second of real-time.
The stability of the clock is measured by how many "ticks" it makes over a fixed interval. The more stable the number of ticks, the better the stability of the clock. So, define the average clock frequency over the interval [t, t + τ] as y(t; τ) = (c(t + τ) − c(t)) / τ. Note that y(t; τ) is unitless: it is the numerical ratio between ticks of the physical clock and ticks of an ideal clock.
The Allan variance of the clock frequency is half the mean square of the change in average clock frequency: σ_y²(τ) = (1/(2N)) Σ_{i=0}^{N−1} (y(t + (i+1)τ; τ) − y(t + iτ; τ))², where N is an integer large enough for the averaging to converge to a definite value.
For example, a 2013 atomic clock achieved σ_y(τ = 7 h) ≈ 1.6 × 10⁻¹⁸, meaning that if the clock is used to repeatedly measure intervals of 7 hours, the standard deviation of the actually measured time would be around 40 femtoseconds.
Now we have y(t + τ; τ) − y(t; τ) = ∫ δr(t′) h_τ(t′ − t) dt′, where h_τ is one packet of a square wave with height 1/τ and wavelength 2τ (negative over the earlier interval and positive over the later one). Let h be a packet of a square wave with height 1 and wavelength 2; then h_τ(t) = h(t/τ)/τ, and its Fourier transform satisfies H_τ(f) = H(fτ).
The Allan variance is then σ_y²(τ) = (1/2) E[(∫ δr(t′) h_τ(t′ − t) dt′)²], and the discrete averaging can be approximated by a continuous averaging over t; this is half the total power of the filtered signal (δr convolved with h_τ), or the integral of its power spectrum: σ_y²(τ) ≈ (1/2) ∫ S_δr(f) |H(fτ)|² df.
In words, the Allan variance is approximately the power of the fluctuation δr after bandpass filtering at a frequency of about 1/(2τ) with a bandwidth of about 1/(2τ).
For a power-law fluctuation spectrum S_δr(f) = C/f^α with some constant C, we have σ_y²(τ) ∝ τ^(α−1). In particular, when the fluctuating component is a 1/f noise (α = 1), then σ_y²(τ) is independent of the averaging time τ, meaning that the clock frequency does not become more stable by simply averaging for longer. This contrasts with a white noise fluctuation (α = 0), in which case σ_y²(τ) ∝ 1/τ, meaning that doubling the averaging time would improve the stability of frequency by a factor of √2.
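The contrast between the two cases can be checked numerically. The sketch below uses a simple non-overlapping Allan variance estimator on synthetic fractional-frequency samples; the flicker noise is generated by frequency-domain shaping, and the sample counts and averaging factors are arbitrary assumptions:
```python
import numpy as np

rng = np.random.default_rng(8)
n = 1 << 16

def allan_variance(y, m):
    """Non-overlapping Allan variance of fractional-frequency samples y at averaging factor m."""
    means = y[: (len(y) // m) * m].reshape(-1, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

def flicker(n):
    """Approximate 1/f (flicker) frequency noise via frequency-domain shaping of white noise."""
    spec = np.fft.rfft(rng.normal(size=n))
    f = np.fft.rfftfreq(n)
    spec[1:] /= np.sqrt(f[1:])
    spec[0] = 0.0
    return np.fft.irfft(spec, n)

white_fm = rng.normal(size=n)
flicker_fm = flicker(n)

for m in (1, 4, 16, 64, 256):
    print(m,
          round(allan_variance(white_fm, m) * m, 3),    # ~constant: AVAR falls as 1/m for white FM
          round(allan_variance(flicker_fm, m), 3))      # ~roughly constant for flicker FM
```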
The cause of the noise floor is often traced to particular electronic components (such as transistors, resistors, and capacitors) within the oscillator feedback.
Humans
In brains, pink noise has been widely observed across many temporal and physical scales from ion channel gating to EEG and MEG and LFP recordings in humans. In clinical EEG, deviations from this 1/f pink noise can be used to identify epilepsy, even in the absence of a seizure, or during the interictal state. Classic models of EEG generators suggested that dendritic inputs in gray matter were principally responsible for generating the 1/f power spectrum observed in EEG/MEG signals. However, recent computational models using cable theory have shown that action potential transduction along white matter tracts in the brain also generates a 1/f spectral density. Therefore, white matter signal transduction may also contribute to pink noise measured in scalp EEG recordings,
particularly if the effects of ephaptic coupling are taken into consideration.
It has also been successfully applied to the modeling of mental states in psychology, and used to explain stylistic variations in music from different cultures and historic periods. Richard F. Voss and J. Clarke claim that almost all musical melodies, when each successive note is plotted on a scale of pitches, will tend towards a pink noise spectrum. Similarly, a generally pink distribution pattern has been observed in film shot length by researcher James E. Cutting of Cornell University, in the study of 150 popular movies released from 1935 to 2005.
Pink noise has also been found to be endemic in human response. Gilden et al. (1995) found extremely pure examples of this noise in the time series formed upon iterated production of temporal and spatial intervals. Later, Gilden (1997) and Gilden (2001) found that time series formed from reaction time measurement and from iterated two-alternative forced choice also produced pink noises.
Electronic devices
The principal sources of pink noise in electronic devices are almost invariably the slow fluctuations of properties of the condensed-matter materials of the devices. In many cases the specific sources of the fluctuations are known. These include fluctuating configurations of defects in metals, fluctuating occupancies of traps in semiconductors, and fluctuating domain structures in magnetic materials. The explanation for the approximately pink spectral form turns out to be relatively trivial, usually coming from a distribution of kinetic activation energies of the fluctuating processes. Since the frequency range of the typical noise experiment (e.g., 1 Hz – 1 kHz) is low compared with typical microscopic "attempt frequencies" (e.g., 10^14 Hz), the exponential factors in the Arrhenius equation for the rates are large. Relatively small spreads in the activation energies appearing in these exponents then result in large spreads of characteristic rates. In the simplest toy case, a flat distribution of activation energies gives exactly a pink spectrum, because it corresponds to relaxation rates spread uniformly on a logarithmic scale, and integrating the resulting Lorentzian spectra over such a distribution yields a 1/f frequency dependence.
There is no known lower bound to background pink noise in electronics. Measurements made down to 10⁻⁶ Hz (taking several weeks) have not shown a ceasing of pink-noise behaviour. Kleinpenning and de Kuijper (1988) measured the resistance of a noisy carbon-sheet resistor and found 1/f noise behaviour over a frequency span of 9.5 decades.
A pioneering researcher in this field was Aldert van der Ziel.
Flicker noise is commonly used for the reliability characterization of electronic devices. It is also used for gas detection in chemoresistive sensors by dedicated measurement setups.
In gravitational wave astronomy
1/f^α noises with α near 1 are a factor in gravitational-wave astronomy. At very low frequencies the relevant detectors are pulsar timing arrays, the European Pulsar Timing Array (EPTA) and the future International Pulsar Timing Array (IPTA); at low frequencies are space-borne detectors, the formerly proposed Laser Interferometer Space Antenna (LISA) and the currently proposed evolved Laser Interferometer Space Antenna (eLISA); and at high frequencies are ground-based detectors, the initial Laser Interferometer Gravitational-Wave Observatory (LIGO) and its advanced configuration (aLIGO). To be detectable, the characteristic strain of a potential astrophysical source must be above the detector's noise curve.
Climate dynamics
Pink noise on timescales of decades has been found in climate proxy data, which may indicate amplification and coupling of processes in the climate system.
Diffusion processes
Many time-dependent stochastic processes are known to exhibit 1/f^α noises with α between 0 and 2. In particular Brownian motion has a power spectral density that equals 4D/f², where D is the diffusion coefficient. This type of spectrum is sometimes referred to as Brownian noise. The analysis of individual Brownian motion trajectories also shows a 1/f² spectrum, albeit with random amplitudes. Fractional Brownian motion with Hurst exponent H also shows a 1/f^α power spectral density, with α = 2H + 1 for subdiffusive processes (H < 0.5) and α = 2 for superdiffusive processes (0.5 < H < 1).
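As a quick check of the 1/f² claim, the following sketch averages periodograms of simulated random-walk (discrete Brownian motion) trajectories and fits the log-log slope; the lengths and trial counts are arbitrary:
```python
import numpy as np

rng = np.random.default_rng(9)
n, trials = 1 << 14, 50

P = np.zeros(n // 2 + 1)
for _ in range(trials):
    steps = rng.normal(size=n)
    walk = np.cumsum(steps)                 # discrete Brownian motion trajectory
    P += np.abs(np.fft.rfft(walk)) ** 2
P /= trials

f = np.fft.rfftfreq(n)[1:]
slope = np.polyfit(np.log(f), np.log(P[1:]), 1)[0]
print(round(slope, 2))                      # close to -2, i.e. a 1/f^2 spectrum
```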
Origin
There are many theories about the origin of pink noise. Some theories attempt to be universal, while others apply to only a certain type of material, such as semiconductors. Universal theories of pink noise remain a matter of current research interest.
A hypothesis (referred to as the Tweedie hypothesis) has been proposed to explain the genesis of pink noise on the basis of a mathematical convergence theorem related to the central limit theorem of statistics. The Tweedie convergence theorem describes the convergence of certain statistical processes towards a family of statistical models known as the Tweedie distributions. These distributions are characterized by a variance to mean power law, that have been variously identified in the ecological literature as Taylor's law and in the physics literature as fluctuation scaling. When this variance to mean power law is demonstrated by the method of expanding enumerative bins this implies the presence of pink noise, and vice versa. Both of these effects can be shown to be the consequence of mathematical convergence such as how certain kinds of data will converge towards the normal distribution under the central limit theorem. This hypothesis also provides for an alternative paradigm to explain power law manifestations that have been attributed to self-organized criticality.
There are various mathematical models to create pink noise. The superposition of exponentially decaying pulses is able to generate a signal with a 1/f spectrum at moderate frequencies, transitioning to a constant at low frequencies and to 1/f² at high frequencies. In contrast, the sandpile model of self-organized criticality, which exhibits quasi-cycles of gradual stress accumulation between fast rare stress-releases, reproduces a flicker noise that corresponds to the intra-cycle dynamics; the statistical signature of self-organization in such models has been justified in the literature. Pink noise can also be generated on a computer, for example, by filtering white noise, by inverse Fourier transform methods, or by multirate variants on standard white noise generation.
In supersymmetric theory of stochastics, an approximation-free theory of stochastic differential equations, 1/f noise is one of the manifestations of the spontaneous breakdown of topological supersymmetry. This supersymmetry is an intrinsic property of all stochastic differential equations and its meaning is the preservation of the continuity of the phase space by continuous time dynamics. Spontaneous breakdown of this supersymmetry is the stochastic generalization of the concept of deterministic chaos, whereas the associated emergence of the long-term dynamical memory or order, i.e., 1/f and crackling noises, the Butterfly effect etc., is the consequence of the Goldstone theorem in the application to the spontaneously broken topological supersymmetry.
Audio testing
Pink noise is commonly used to test the loudspeakers in sound reinforcement systems, with the resulting sound measured with a test microphone in the listening space connected to a spectrum analyzer or a computer running a real-time fast Fourier transform (FFT) analyzer program such as Smaart. The sound system plays pink noise while the audio engineer makes adjustments on an audio equalizer to obtain the desired results. Pink noise is predictable and repeatable, but it is annoying for a concert audience to hear. Since the late 1990s, FFT-based analysis has enabled the engineer to make adjustments using pre-recorded music as the test signal, or even the music coming from the performers in real time. Pink noise is still used by audio system contractors and by computerized sound systems which incorporate an automatic equalization feature.
In manufacturing, pink noise is often used as a burn-in signal for audio amplifiers and other components, to determine whether the component will maintain performance integrity during sustained use. The process of end-users burning in their headphones with pink noise to attain higher fidelity has been called an audiophile "myth".
See also
Architectural acoustics
Audio signal processing
Brownian noise
White noise
Colors of noise
Crest factor
Fractal
Flicker noise
Johnson–Nyquist noise
Noise (electronics)
Quantum 1/f noise
Self-organised criticality
Shot noise
Sound masking
Statistics
Footnotes
References
External links
Coloured Noise: Matlab toolbox to generate power-law coloured noise signals of any dimensions.
Powernoise: Matlab software for generating 1/f noise, or more generally, 1/fα noise
1/f noise at Scholarpedia
White Noise Definition Vs Pink Noise
Noise (electronics)
Sound
Acoustics | Pink noise | [
"Physics"
] | 4,000 | [
"Classical mechanics",
"Acoustics"
] |
46,223 | https://en.wikipedia.org/wiki/Sowing | Sowing is the process of planting seeds. An area that has had seeds planted in it will be described as a sowed or sown area.
When sowing it is important to:
Use quality seeds
Maintain proper distance between seeds
Plant at correct depth
Ensure the soil is clean, healthy, and free of pathogens (disease-causing microorganisms)
Plants which are usually sown
Among the major field crops, oats, wheat, and rye are sown, grasses and legumes are seeded, and maize and soybeans are planted. In planting, wider rows (generally 75 cm (30 in) or more) are used, and the intent is to have precise, even spacing between individual seeds in the row; various mechanisms have been devised to count out individual seeds at exact intervals.
Depth of sowing
In sowing, little if any soil is placed over the seeds; seeds are generally sown into the soil at a planting depth of about two to three times the size of the seed.
Sowing types and patterns
For hand sowing, several sowing types exist; these include:
Flat sowing
Ridge sowing
Wide bed sowing
Several patterns for sowing may be used together with these types; these include:
Rows that are indented at the even rows (so that the seeds are placed in a crossed pattern). This method is much better, as more light may fall on the seedlings as they come out.
Symmetrical grid pattern – using the pattern described in The Garden of Cyrus.
Types of sowing
Hand sowing
Hand sowing, or hand planting, is the process of casting handfuls of seed over prepared ground: broadcasting, that is, broadcast seeding (from which the technological term is derived). Usually, a drag or harrow is employed to incorporate the seed into the soil. Though labor-intensive for any but small areas, this method is still used in some situations. Practice is required to sow evenly and at the desired rate. A hand seeder can be used for sowing, though it is less of a help than it is for the smaller seeds of grasses and legumes.
Hand sowing may be combined with pre-sowing in seed trays. This allows the plants to come to strength indoors during cold periods (e.g. spring in temperate countries).
Seed drill
In agriculture, most seed is now sown using a seed drill, which offers greater precision; seed is sown evenly and at the desired rate. The drill also places the seed at a measured distance below the soil, so that less seed is required. The standard design uses a fluted feed metering system, which is volumetric in nature; individual seeds are not counted. Rows are typically about 10–30 cm apart, depending on the crop species and growing conditions. Several row opener types are used depending on soil type and local tradition. Grain drills are most often drawn by tractors, but can also be pulled by horses. Pickup trucks are sometimes used, since little draft is required.
A seed rate of about 100 kg of seed per hectare (2 bushels per acre) is typical, though rates vary considerably depending on crop species, soil conditions, and farmer's preference. Excessive rates can cause the crop to lodge, while too thin a rate will result in poor utilisation of the land, competition with weeds and a reduction in the yield.
Open field
Open-field planting refers to the form of sowing used historically in the agricultural context whereby fields are prepared generically and left open, as the name suggests, before being sown directly with seed. The seed is frequently left uncovered at the surface of the soil before germinating and therefore exposed to the prevailing climate and conditions like storms etc. This is in contrast to the seedbed method used more commonly in domestic gardening or more specific (modern) agricultural scenarios where the seed is applied beneath the soil surface and monitored and manually tended frequently to ensure more successful growth rates and better yields.
Pre-treatment of seed and soil before sowing
Before sowing, certain seeds first require a treatment prior to the sowing process.
This treatment may be seed scarification, stratification, seed soaking or seed cleaning with cold (or medium hot) water.
Seed soaking is generally done by placing seeds in medium-hot water for at least 24 and up to 48 hours.
Seed cleaning is done especially with fruit, as the flesh of the fruit around the seed can quickly become prone to attack from insects or plagues. Seed washing is generally done by submerging cleaned seeds for 20 minutes in water at 50 degrees Celsius. This hot (rather than merely moderately hot) water kills any organisms that may have survived on the skin of the seed. Especially with easily infected tropical fruit such as lychees and rambutans, seed washing with high-temperature water is vital.
In addition to the mentioned seed pretreatments, seed germination is also assisted when a disease-free soil is used. Especially when trying to germinate difficult seed (e.g. certain tropical fruit), prior treatment of the soil (along with the usage of the most suitable soil; e.g. potting soil, prepared soil or other substrates) is vital. The two most used soil treatments are pasteurisation and sterilisation. Depending on the necessity, pasteurisation is to be preferred as this does not kill all organisms. Sterilisation can be done when trying to grow truly difficult crops. To pasteurise the soil, the soil is heated for 15 minutes in an oven of 120 °C.
See also
Advance sowing
Plant propagation
Planter (farm implement)
Priming (agriculture)
Seed drill; a mechanical aid allowing much better and faster seed dispersal than when done by hand
Tree planting
References
Horticultural techniques
Horticulture
Agronomy
Habitat management equipment and methods
Plant reproduction
Seeds
Agricultural practices | Sowing | [
"Biology"
] | 1,186 | [
"Behavior",
"Plant reproduction",
"Plants",
"Reproduction"
] |
46,238 | https://en.wikipedia.org/wiki/Refrigeration | Refrigeration is any of various types of cooling of a space, substance, or system to lower and/or maintain its temperature below the ambient one (while the removed heat is ejected to a place of higher temperature). Refrigeration is an artificial, or human-made, cooling method.
Refrigeration refers to the process by which energy, in the form of heat, is removed from a low-temperature medium and transferred to a high-temperature medium. This work of energy transfer is traditionally driven by mechanical means (whether ice or electromechanical machines), but it can also be driven by heat, magnetism, electricity, laser, or other means. Refrigeration has many applications, including household refrigerators, industrial freezers, cryogenics, and air conditioning. Heat pumps may use the heat output of the refrigeration process, and also may be designed to be reversible, but are otherwise similar to air conditioning units.
Refrigeration has had a large impact on industry, lifestyle, agriculture, and settlement patterns. The idea of preserving food dates back to human prehistory, but for thousands of years humans were limited regarding the means of doing so. They used curing via salting and drying, and they made use of natural coolness in caves, root cellars, and winter weather, but other means of cooling were unavailable. In the 19th century, they began to make use of the ice trade to develop cold chains. In the late 19th through mid-20th centuries, mechanical refrigeration was developed, improved, and greatly expanded in its reach. Refrigeration has thus rapidly evolved in the past century, from ice harvesting to temperature-controlled rail cars, refrigerator trucks, and ubiquitous refrigerators and freezers in both stores and homes in many countries. The introduction of refrigerated rail cars contributed to the settlement of areas that were not on earlier main transport channels such as rivers, harbors, or valley trails.
These new settlement patterns sparked the building of large cities which are able to thrive in areas that were otherwise thought to be inhospitable, such as Houston, Texas, and Las Vegas, Nevada. In most developed countries, cities are heavily dependent upon refrigeration in supermarkets in order to obtain their food for daily consumption. The increase in food sources has led to a larger concentration of agricultural sales coming from a smaller percentage of farms. Farms today have a much larger output per person in comparison to the late 1800s. This has resulted in new food sources available to entire populations, which has had a large impact on the nutrition of society.
History
Earliest forms of cooling
The seasonal harvesting of snow and ice is an ancient practice estimated to have begun earlier than 1000 BC. A Chinese collection of lyrics from this time period, known as the Shijing, describes religious ceremonies for filling and emptying ice cellars. However, little is known about the construction of these ice cellars or the purpose of the ice. The next ancient society to record the harvesting of ice may have been the Jews in the book of Proverbs, which reads, "As the cold of snow in the time of harvest, so is a faithful messenger to them who sent him." Historians have interpreted this to mean that the Jews used ice to cool beverages rather than to preserve food. Other ancient cultures such as the Greeks and the Romans dug large snow pits insulated with grass, chaff, or branches of trees as cold storage. Like the Jews, the Greeks and Romans did not use ice and snow to preserve food, but primarily as a means to cool beverages. Egyptians cooled water by evaporation in shallow earthen jars on the roofs of their houses at night. The ancient people of India used this same concept to produce ice. The Persians stored ice in a pit called a Yakhchal and may have been the first group of people to use cold storage to preserve food. In the Australian outback, before a reliable electricity supply was available, many farmers used a Coolgardie safe, consisting of a box frame with hessian (burlap) sides soaked in water. The water would evaporate and thereby cool the interior air, allowing many perishables such as fruit, butter, and cured meats to be kept.
Ice harvesting
Before 1830, few Americans used ice to refrigerate foods due to a lack of ice-storehouses and iceboxes. As these two things became more widely available, individuals used axes and saws to harvest ice for their storehouses. This method proved to be difficult, dangerous, and certainly did not resemble anything that could be duplicated on a commercial scale.
Despite the difficulties of harvesting ice, Frederic Tudor thought that he could capitalize on this new commodity by harvesting ice in New England and shipping it to the Caribbean islands as well as the southern states. In the beginning, Tudor lost thousands of dollars, but eventually turned a profit as he constructed icehouses in Charleston, Virginia and in the Cuban port town of Havana. These icehouses as well as better insulated ships helped reduce ice wastage from 66% to 8%. This efficiency gain influenced Tudor to expand his ice market to other towns with icehouses such as New Orleans and Savannah. This ice market further expanded as harvesting ice became faster and cheaper after one of Tudor's suppliers, Nathaniel Wyeth, invented a horse-drawn ice cutter in 1825. This invention as well as Tudor's success inspired others to get involved in the ice trade and the ice industry grew.
Ice became a mass-market commodity by the early 1830s with the price of ice dropping from six cents per pound to a half of a cent per pound. In New York City, ice consumption increased from 12,000 tons in 1843 to 100,000 tons in 1856. Boston's consumption leapt from 6,000 tons to 85,000 tons during that same period. Ice harvesting created a "cooling culture" as the majority of people used ice and iceboxes to store their dairy products, fish, meat, and even fruits and vegetables. These early cold storage practices paved the way for many Americans to accept the refrigeration technology that would soon take over the country.
Refrigeration research
The history of artificial refrigeration began when Scottish professor William Cullen designed a small refrigerating machine in 1755. Cullen used a pump to create a partial vacuum over a container of diethyl ether, which then boiled, absorbing heat from the surrounding air. The experiment even created a small amount of ice, but had no practical application at that time.
In 1758, Benjamin Franklin and John Hadley, professor of chemistry, collaborated on a project at Cambridge University, England, investigating the principle of evaporation as a means to rapidly cool an object. They confirmed that the evaporation of highly volatile liquids, such as alcohol and ether, could be used to drive down the temperature of an object past the freezing point of water. They conducted their experiment with the bulb of a mercury thermometer as their object and with a bellows used to quicken the evaporation; in this way they lowered the temperature of the thermometer bulb well below the ambient temperature of the room. They noted that soon after they passed the freezing point of water (0 °C; 32 °F), a thin film of ice formed on the surface of the thermometer's bulb, and the ice mass had grown noticeably thick by the time they stopped the experiment. Franklin wrote, "From this experiment, one may see the possibility of freezing a man to death on a warm summer's day". In 1805, American inventor Oliver Evans described a closed vapor-compression refrigeration cycle for the production of ice by ether under vacuum.
In 1820, the English scientist Michael Faraday liquefied ammonia and other gases by using high pressures and low temperatures, and in 1834, an American expatriate to Great Britain, Jacob Perkins, built the first working vapor-compression refrigeration system in the world. It was a closed-cycle that could operate continuously, as he described in his patent:
I am enabled to use volatile fluids for the purpose of producing the cooling or freezing of fluids, and yet at the same time constantly condensing such volatile fluids, and bringing them again into operation without waste.
His prototype system worked although it did not succeed commercially.
In 1842, a similar attempt was made by American physician, John Gorrie, who built a working prototype, but it was a commercial failure. Like many of the medical experts during this time, Gorrie thought too much exposure to tropical heat led to mental and physical degeneration, as well as the spread of diseases such as malaria. He conceived the idea of using his refrigeration system to cool the air for comfort in homes and hospitals to prevent disease. American engineer Alexander Twining took out a British patent in 1850 for a vapour compression system that used ether.
The first practical vapour-compression refrigeration system was built by James Harrison, a British journalist who had emigrated to Australia. His 1856 patent was for a vapour-compression system using ether, alcohol, or ammonia. He built a mechanical ice-making machine in 1851 on the banks of the Barwon River at Rocky Point in Geelong, Victoria, and his first commercial ice-making machine followed in 1854. Harrison also introduced commercial vapour-compression refrigeration to breweries and meat-packing houses, and by 1861, a dozen of his systems were in operation. He later entered the debate of how to compete against the American advantage of unrefrigerated beef sales to the United Kingdom. In 1873 he prepared the sailing ship Norfolk for an experimental beef shipment to the United Kingdom, which used a cold room system instead of a refrigeration system. The venture was a failure as the ice was consumed faster than expected.
The first gas absorption refrigeration system using gaseous ammonia dissolved in water (referred to as "aqua ammonia") was developed by Ferdinand Carré of France in 1859 and patented in 1860. Carl von Linde, an engineer specializing in steam locomotives and professor of engineering at the Technological University of Munich in Germany, began researching refrigeration in the 1860s and 1870s in response to demand from brewers for a technology that would allow year-round, large-scale production of lager; he patented an improved method of liquefying gases in 1876. His new process made possible using gases such as ammonia, sulfur dioxide (SO2) and methyl chloride (CH3Cl) as refrigerants and they were widely used for that purpose until the late 1920s.
Thaddeus Lowe, an American balloonist, held several patents on ice-making machines. His "Compression Ice Machine" would revolutionize the cold-storage industry. In 1869, he and other investors purchased an old steamship onto which they loaded one of Lowe's refrigeration units and began shipping fresh fruit from New York to the Gulf Coast area, and fresh meat from Galveston, Texas back to New York, but because of Lowe's lack of knowledge about shipping, the business was a costly failure.
Commercial use
In 1842, John Gorrie created a system capable of refrigerating water to produce ice. Although it was a commercial failure, it inspired scientists and inventors around the world. France's Ferdinand Carre was one of the inspired and he created an ice producing system that was simpler and smaller than that of Gorrie. During the Civil War, cities such as New Orleans could no longer get ice from New England via the coastal ice trade. Carre's refrigeration system became the solution to New Orleans' ice problems and, by 1865, the city had three of Carre's machines. In 1867, in San Antonio, Texas, a French immigrant named Andrew Muhl built an ice-making machine to help service the expanding beef industry before moving it to Waco in 1871. In 1873, the patent for this machine was contracted by the Columbus Iron Works, a company acquired by the W.C. Bradley Co., which went on to produce the first commercial ice-makers in the US.
By the 1870s, breweries had become the largest users of harvested ice. Though the ice-harvesting industry had grown immensely by the turn of the 20th century, pollution and sewage had begun to creep into natural ice, making it a problem in the metropolitan suburbs. Eventually, breweries began to complain of tainted ice. Public concern for the purity of water, from which ice was formed, began to increase in the early 1900s with the rise of germ theory. Numerous media outlets published articles connecting diseases such as typhoid fever with natural ice consumption. This caused ice harvesting to become illegal in certain areas of the country. All of these scenarios increased the demands for modern refrigeration and manufactured ice. Ice producing machines like that of Carre's and Muhl's were looked to as means of producing ice to meet the needs of grocers, farmers, and food shippers.
Refrigerated railroad cars were introduced in the US in the 1840s for short-run transport of dairy products, but these used harvested ice to maintain a cool temperature.
The new refrigerating technology first met with widespread industrial use as a means to freeze meat supplies for transport by sea in reefer ships from the British Dominions and other countries to the British Isles. Although not actually the first to achieve successful transportation of frozen goods overseas (the Strathleven had arrived at the London docks on 2 February 1880 with a cargo of frozen beef, mutton and butter from Sydney and Melbourne ), the breakthrough is often attributed to William Soltau Davidson, an entrepreneur who had emigrated to New Zealand. Davidson thought that Britain's rising population and meat demand could mitigate the slump in world wool markets that was heavily affecting New Zealand. After extensive research, he commissioned the Dunedin to be refitted with a compression refrigeration unit for meat shipment in 1881. On February 15, 1882, the Dunedin sailed for London with what was to be the first commercially successful refrigerated shipping voyage, and the foundation of the refrigerated meat industry.
The Times commented "Today we have to record such a triumph over physical difficulties, as would have been incredible, even unimaginable, a very few days ago...". The Marlborough—sister ship to the Dunedin – was immediately converted and joined the trade the following year, along with the rival New Zealand Shipping Company vessel Mataurua, while the German Steamer Marsala began carrying frozen New Zealand lamb in December 1882. Within five years, 172 shipments of frozen meat were sent from New Zealand to the United Kingdom, of which only 9 had significant amounts of meat condemned. Refrigerated shipping also led to a broader meat and dairy boom in Australasia and South America. J & E Hall of Dartford, England outfitted the SS Selembria with a vapor compression system to bring 30,000 carcasses of mutton from the Falkland Islands in 1886. In the years ahead, the industry rapidly expanded to Australia, Argentina and the United States.
By the 1890s, refrigeration played a vital role in the distribution of food. The meat-packing industry relied heavily on natural ice in the 1880s and continued to rely on manufactured ice as those technologies became available. By 1900, the meat-packing houses of Chicago had adopted ammonia-cycle commercial refrigeration. By 1914, almost every location used artificial refrigeration. The major meat packers, Armour, Swift, and Wilson, had purchased the most expensive units which they installed on train cars and in branch houses and storage facilities in the more remote distribution areas.
By the middle of the 20th century, refrigeration units were designed for installation on trucks or lorries. Refrigerated vehicles are used to transport perishable goods, such as frozen foods, fruit and vegetables, and temperature-sensitive chemicals. Most modern refrigerators keep the temperature between –40 and –20 °C, and have a maximum payload of around 24,000 kg gross weight (in Europe).
Although commercial refrigeration quickly progressed, it had limitations that prevented it from moving into the household. First, most refrigerators were far too large. Some of the commercial units being used in 1910 weighed between five and two hundred tons. Second, commercial refrigerators were expensive to produce, purchase, and maintain. Lastly, these refrigerators were unsafe. It was not uncommon for commercial refrigerators to catch fire, explode, or leak toxic gases. Refrigeration did not become a household technology until these three challenges were overcome.
Home and consumer use
During the early 1800s, consumers preserved their food by storing food and ice purchased from ice harvesters in iceboxes. In 1803, Thomas Moore patented a metal-lined butter-storage tub which became the prototype for most iceboxes. These iceboxes were used until nearly 1910 and the technology did not progress. In fact, consumers that used the icebox in 1910 faced the same challenge of a moldy and stinky icebox that consumers had in the early 1800s.
General Electric (GE) was one of the first companies to overcome these challenges. In 1911, GE released a household refrigeration unit that was powered by gas. The use of gas eliminated the need for an electric compressor motor and decreased the size of the refrigerator. However, electric companies that were customers of GE did not benefit from a gas-powered unit. Thus, GE invested in developing an electric model. In 1927, GE released the Monitor Top, the first refrigerator to run on electricity.
In 1930, Frigidaire, one of GE's main competitors, synthesized Freon. With the invention of synthetic refrigerants based mostly on a chlorofluorocarbon (CFC) chemical, safer refrigerators were possible for home and consumer use. Freon led to the development of smaller, lighter, and cheaper refrigerators. The average price of a refrigerator dropped from $275 to $154 with the synthesis of Freon. This lower price allowed ownership of refrigerators in American households to exceed 50% by 1940. Freon is a trademark of the DuPont Corporation and refers to these CFCs, and later hydro chlorofluorocarbon (HCFC) and hydro fluorocarbon (HFC), refrigerants developed in the late 1920s. These refrigerants were considered — at the time — to be less harmful than the commonly-used refrigerants of the time, including methyl formate, ammonia, methyl chloride, and sulfur dioxide. The intent was to provide refrigeration equipment for home use without danger. These CFC refrigerants answered that need. In the 1970s, though, the compounds were found to be reacting with atmospheric ozone, an important protection against solar ultraviolet radiation, and their use as a refrigerant worldwide was curtailed in the Montreal Protocol of 1987.
Impact on settlement patterns in the United States of America
In the last century, refrigeration allowed new settlement patterns to emerge. This new technology has allowed for new areas to be settled that are not on a natural channel of transport such as a river, valley trail or harbor that may have otherwise not been settled. Refrigeration has given opportunities to early settlers to expand westward and into rural areas that were unpopulated. These new settlers with rich and untapped soil saw opportunity to profit by sending raw goods to the eastern cities and states. In the 20th century, refrigeration has made "Galactic Cities" such as Dallas, Phoenix, and Los Angeles possible.
Refrigerated rail cars
The refrigerated rail car (refrigerated van or refrigerator car), along with the dense railroad network, became an exceedingly important link between the marketplace and the farm, allowing for a national opportunity rather than just a regional one. Before the invention of the refrigerated rail car, it was impossible to ship perishable food products long distances. The beef packing industry made the first demand push for refrigeration cars. The railroad companies were slow to adopt this new invention because of their heavy investments in cattle cars, stockyards, and feedlots. Refrigeration cars were also complex and costly compared to other rail cars, which also slowed the adoption of the refrigerated rail car. After the slow adoption of the refrigerated car, the beef packing industry dominated the refrigerated rail car business with their ability to control ice plants and the setting of icing fees. The United States Department of Agriculture estimated that, in 1916, over sixty-nine percent of the cattle slaughtered in the country were processed in plants involved in interstate trade. The same companies that were also involved in the meat trade later implemented refrigerated transport to include vegetables and fruit. The meat packing companies had much of the expensive machinery, such as refrigerated cars, and cold storage facilities that allowed for them to effectively distribute all types of perishable goods. During World War I, a national refrigerator car pool was established by the United States Administration to deal with the problem of idle cars and was later continued after the war. The idle car problem was the problem of refrigeration cars sitting pointlessly in between seasonal harvests. This meant that very expensive cars sat in rail yards for a good portion of the year while making no revenue for the car's owner. The car pool was a system where cars were distributed to areas as crops matured, ensuring maximum use of the cars. Refrigerated rail cars moved eastward from vineyards, orchards, fields, and gardens in western states to satisfy America's consuming market in the east. The refrigerated car made it possible to transport perishable crops hundreds and even thousands of kilometres or miles. The most noticeable effect the car gave was a regional specialization of vegetables and fruits. The refrigerated rail car was widely used for the transportation of perishable goods up until the 1950s. By the 1960s, the nation's interstate highway system was adequately complete, allowing trucks to carry the majority of the perishable food loads and to push out the old system of the refrigerated rail cars.
Expansion west and into rural areas
The widespread use of refrigeration allowed for a vast amount of new agricultural opportunities to open up in the United States. New markets emerged throughout the United States in areas that were previously uninhabited and far-removed from heavily populated areas. New agricultural opportunity presented itself in areas that were considered rural, such as states in the south and in the west. Shipments on a large scale from the south and California were both made around the same time, although natural ice was used from the Sierras in California rather than manufactured ice in the south. Refrigeration allowed for many areas to specialize in the growing of specific fruits. California specialized in several fruits: grapes, peaches, pears, plums, and apples, while Georgia became famous specifically for its peaches. In California, the acceptance of the refrigerated rail cars led to an increase from 4,500 carloads in 1895 to between 8,000 and 10,000 carloads in 1905. The Gulf States, Arkansas, Missouri and Tennessee entered into strawberry production on a large scale while Mississippi became the center of the tomato industry. New Mexico, Colorado, Arizona, and Nevada grew cantaloupes. Without refrigeration, this would not have been possible. By 1917, well-established fruit and vegetable areas that were close to eastern markets felt the pressure of competition from these distant specialized centers. Refrigeration was not limited to meat, fruit, and vegetables but also encompassed dairy products and dairy farms. In the early twentieth century, large cities got their dairy supply from increasingly distant farms. Dairy products were not as easily transported over great distances as fruits and vegetables, due to their greater perishability. Refrigeration made production possible in the west far from eastern markets, so much so, in fact, that dairy farmers could pay transportation costs and still undersell their eastern competitors. Refrigeration and the refrigerated rail gave opportunity to areas with rich soil far from natural channels of transport such as rivers, valley trails, or harbors.
Rise of the galactic city
"Edge city" was a term coined by Joel Garreau, whereas the term "galactic city" was coined by Lewis Mumford. These terms refer to a concentration of business, shopping, and entertainment outside a traditional downtown or central business district in what had previously been a residential or rural area. There were several factors contributing to the growth of these cities such as Los Angeles, Las Vegas, Houston, and Phoenix. The factors that contributed to these large cities include reliable automobiles, highway systems, refrigeration, and agricultural production increases. Large cities such as the ones mentioned above have not been uncommon in history, but what separates these cities from the rest are that these cities are not along some natural channel of transport, or at some crossroad of two or more channels such as a trail, harbor, mountain, river, or valley. These large cities have been developed in areas that only a few hundred years ago would have been uninhabitable. Without a cost efficient way of cooling air and transporting water and food from great distances, these large cities would have never developed. The rapid growth of these cities was influenced by refrigeration and an agricultural productivity increase, allowing more distant farms to effectively feed the population.
Impact on agriculture and food production
Agriculture's role in developed countries has drastically changed in the last century due to many factors, including refrigeration. Statistics from the 2007 census gives information on the large concentration of agricultural sales coming from a small portion of the existing farms in the United States today. This is a partial result of the market created for the frozen meat trade by the first successful shipment of frozen sheep carcasses coming from New Zealand in the 1880s. As the market continued to grow, regulations on food processing and quality began to be enforced. Eventually, electricity was introduced into rural homes in the United States, which allowed refrigeration technology to continue to expand on the farm, increasing output per person. Today, refrigeration's use on the farm reduces humidity levels, avoids spoiling due to bacterial growth, and assists in preservation.
Demographics
The introduction of refrigeration and evolution of additional technologies drastically changed agriculture in the United States. During the beginning of the 20th century, farming was a common occupation and lifestyle for United States citizens, as most farmers actually lived on their farm. In 1935, there were 6.8 million farms in the United States and a population of 127 million. Yet, while the United States population has continued to climb, citizens pursuing agriculture continue to decline. Based on the 2007 US Census, less than one percent of a population of 310 million people claim farming as an occupation today. However, the increasing population has led to an increasing demand for agricultural products, which is met through a greater variety of crops, fertilizers, pesticides, and improved technology. Improved technology has decreased the risk and time involved for agricultural management and allows larger farms to increase their output per person to meet society's demand.
Meat packing and trade
Prior to 1882, the South Island of New Zealand had been experimenting with sowing grass and crossbreeding sheep, which immediately gave their farmers economic potential in the exportation of meat. In 1882, the first successful shipment of sheep carcasses was sent from Port Chalmers in Dunedin, New Zealand, to London. By the 1890s, the frozen meat trade became increasingly more profitable in New Zealand, especially in Canterbury, where 50% of exported sheep carcasses came from in 1900. It was not long before Canterbury meat was known for the highest quality, creating a demand for New Zealand meat around the world. In order to meet this new demand, the farmers improved their feed so sheep could be ready for the slaughter in only seven months. This new method of shipping led to an economic boom in New Zealand by the mid 1890s.
In the United States, the Meat Inspection Act of 1891 was put in place because local butchers felt the refrigerated railcar system was unwholesome. When meat packing began to take off, consumers became nervous about the quality of the meat for consumption. Upton Sinclair's 1906 novel The Jungle brought negative attention to the meat packing industry by bringing to light unsanitary working conditions and the processing of diseased animals. The book caught the attention of President Theodore Roosevelt, and the 1906 Meat Inspection Act was put into place as an amendment to the Meat Inspection Act of 1891. This new act focused on the quality of the meat and the environment in which it is processed.
Electricity in rural areas
In the early 1930s, 90 percent of the urban population of the United States had electric power, in comparison to only 10 percent of rural homes. At the time, power companies did not feel that extending power to rural areas (rural electrification) would produce enough profit to make it worth their while. However, in the midst of the Great Depression, President Franklin D. Roosevelt realized that rural areas would continue to lag behind urban areas in both poverty and production if they were not electrically wired. On May 11, 1935, the president signed an executive order called the Rural Electrification Administration, also known as REA. The agency provided loans to fund electric infrastructure in the rural areas. In just a few years, 300,000 people in rural areas of the United States had received power in their homes.
While electricity dramatically improved working conditions on farms, it also had a large impact on the safety of food production. Refrigeration systems were introduced to the farming and food distribution processes, which helped in food preservation and kept food supplies safe. Refrigeration also allowed for shipment of perishable commodities throughout the United States. As a result, United States farmers quickly became the most productive in the world, and entire new food systems arose.
Farm use
In order to reduce humidity levels and spoiling due to bacterial growth, refrigeration is used for meat, produce, and dairy processing in farming today. Refrigeration systems are used the heaviest in the warmer months for farming produce, which must be cooled as soon as possible in order to meet quality standards and increase the shelf life. Meanwhile, dairy farms refrigerate milk year round to avoid spoiling.
Effects on lifestyle and diet
In the late 19th Century and into the very early 20th Century, except for staple foods (sugar, rice, and beans) that needed no refrigeration, the available foods were affected heavily by the seasons and what could be grown locally. Refrigeration has removed these limitations. Refrigeration played a large part in the feasibility and then popularity of the modern supermarket. Fruits and vegetables out of season, or grown in distant locations, are now available at relatively low prices. Refrigerators have led to a huge increase in meat and dairy products as a portion of overall supermarket sales. As well as changing the goods purchased at the market, the ability to store these foods for extended periods of time has led to an increase in leisure time. Prior to the advent of the household refrigerator, people would have to shop on a daily basis for the supplies needed for their meals.
Impact on nutrition
The introduction of refrigeration allowed for the hygienic handling and storage of perishables, and as such, promoted output growth, consumption, and the availability of nutrition. The change in food preservation methods moved diets away from heavy reliance on salt towards a more manageable sodium level. The ability to move and store perishables such as meat and dairy led to annual increases of 1.7% in dairy consumption and 1.25% in overall protein intake in the US after the 1890s.
People were not only consuming these perishables because it became easier to store them at home, but because the innovations in refrigerated transportation and storage led to less spoilage and waste, thereby driving the prices of these products down. Refrigeration accounts for at least 5.1% of the increase in adult stature (in the US) through improved nutrition, and when the indirect effects associated with improvements in the quality of nutrients and the reduction in illness are additionally factored in, the overall impact becomes considerably larger. Recent studies have also shown a negative relationship between the number of refrigerators in a household and the rate of gastric cancer mortality.
Current applications of refrigeration
Probably the most widely used current applications of refrigeration are for air conditioning of private homes and public buildings, and refrigerating foodstuffs in homes, restaurants and large storage warehouses. The use of refrigerators and walk-in coolers and freezers in kitchens, factories and warehouses for storing and processing fruits and vegetables has allowed adding fresh salads to the modern diet year round, and storing fish and meats safely for long periods.
The optimum temperature range for perishable food storage is just above the freezing point of water, typically a few degrees Celsius.
In commerce and manufacturing, there are many uses for refrigeration. Refrigeration is used to liquefy gases – oxygen, nitrogen, propane, and methane, for example. In compressed air purification, it is used to condense water vapor from compressed air to reduce its moisture content. In oil refineries, chemical plants, and petrochemical plants, refrigeration is used to maintain certain processes at their needed low temperatures (for example, in alkylation of butenes and butane to produce a high-octane gasoline component). Metal workers use refrigeration to temper steel and cutlery. When transporting temperature-sensitive foodstuffs and other materials by trucks, trains, airplanes and seagoing vessels, refrigeration is a necessity.
Dairy products are constantly in need of refrigeration, and it was only discovered in the past few decades that eggs needed to be refrigerated during shipment rather than waiting to be refrigerated after arrival at the grocery store. Meats, poultry and fish all must be kept in climate-controlled environments before being sold. Refrigeration also helps keep fruits and vegetables edible longer.
One of the most influential uses of refrigeration was in the development of the sushi/sashimi industry in Japan. Before the discovery of refrigeration, many sushi connoisseurs were at risk of contracting diseases. The dangers of unrefrigerated sashimi were not brought to light for decades due to the lack of research and healthcare distribution across rural Japan. Around mid-century, the Zojirushi corporation, based in Kyoto, made breakthroughs in refrigerator designs, making refrigerators cheaper and more accessible for restaurant proprietors and the general public.
Methods of refrigeration
Methods of refrigeration can be classified as non-cyclic, cyclic, thermoelectric and magnetic.
Non-cyclic refrigeration
This refrigeration method cools a contained area by melting ice, or by sublimating dry ice. Perhaps the simplest example of this is a portable cooler, where items are put in it, then ice is poured over the top. Regular ice can maintain temperatures near, but not below the freezing point, unless salt is used to cool the ice down further (as in a traditional ice-cream maker). Dry ice can reliably bring the temperature well below water freezing point.
Cyclic refrigeration
This consists of a refrigeration cycle, where heat is removed from a low-temperature space or source and rejected to a high-temperature sink with the help of external work, and its inverse, the thermodynamic power cycle. In the power cycle, heat is supplied from a high-temperature source to the engine, part of the heat being used to produce work and the rest being rejected to a low-temperature sink. This satisfies the second law of thermodynamics.
A refrigeration cycle describes the changes that take place in the refrigerant as it alternately absorbs and rejects heat as it circulates through a refrigerator. It is also applied to heating, ventilation, and air conditioning (HVACR) work, when describing the "process" of refrigerant flow through an HVACR unit, whether it is a packaged or split system.
Heat naturally flows from hot to cold. Work is applied to cool a living space or storage volume by pumping heat from a lower temperature heat source into a higher temperature heat sink. Insulation is used to reduce the work and energy needed to achieve and maintain a lower temperature in the cooled space. The operating principle of the refrigeration cycle was described mathematically by Sadi Carnot in 1824 as a heat engine.
The most common types of refrigeration systems use the reverse-Rankine vapor-compression refrigeration cycle, although absorption heat pumps are used in a minority of applications.
Cyclic refrigeration can be classified as:
Vapor cycle, and
Gas cycle
Vapor cycle refrigeration can further be classified as:
Vapor-compression refrigeration
Sorption Refrigeration
Vapor-absorption refrigeration
Adsorption refrigeration
Vapor-compression cycle
The vapor-compression cycle is used in most household refrigerators as well as in many large commercial and industrial refrigeration systems. Figure 1 provides a schematic diagram of the components of a typical vapor-compression refrigeration system.
The thermodynamics of the cycle can be analyzed on a diagram as shown in Figure 2. In this cycle, a circulating refrigerant such as a low boiling hydrocarbon or hydrofluorocarbons enters the compressor as a vapour. From point 1 to point 2, the vapor is compressed at constant entropy and exits the compressor as a vapor at a higher temperature, but still below the vapor pressure at that temperature. From point 2 to point 3 and on to point 4, the vapor travels through the condenser which cools the vapour until it starts condensing, and then condenses the vapor into a liquid by removing additional heat at constant pressure and temperature. Between points 4 and 5, the liquid refrigerant goes through the expansion valve (also called a throttle valve) where its pressure abruptly decreases, causing flash evaporation and auto-refrigeration of, typically, less than half of the liquid.
That results in a mixture of liquid and vapour at a lower temperature and pressure as shown at point 5. The cold liquid-vapor mixture then travels through the evaporator coil or tubes and is completely vaporized by cooling the warm air (from the space being refrigerated) being blown by a fan across the evaporator coil or tubes. The resulting refrigerant vapour returns to the compressor inlet at point 1 to complete the thermodynamic cycle.
The above discussion is based on the ideal vapour-compression refrigeration cycle, and does not take into account real-world effects like frictional pressure drop in the system, slight thermodynamic irreversibility during the compression of the refrigerant vapor, or non-ideal gas behavior, if any. Vapor compression refrigerators can be arranged in two stages in cascade refrigeration systems, with the second stage cooling the condenser of the first stage. This can be used for achieving very low temperatures.
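To make the ideal cycle described above concrete, the sketch below computes the refrigeration effect, compressor work, and coefficient of performance from specific enthalpies at the numbered state points. The enthalpy values are assumed, round figures for a generic refrigerant, not data from this article.

```python
# Illustrative numbers only: assumed enthalpies (kJ/kg) at the cycle's state points.
h1 = 400.0   # vapor leaving the evaporator / entering the compressor (point 1)
h2 = 430.0   # superheated vapor leaving the compressor (point 2)
h4 = 250.0   # liquid leaving the condenser (point 4)
h5 = h4      # throttling through the expansion valve is isenthalpic (point 5)

refrigeration_effect = h1 - h5          # heat absorbed in the evaporator, kJ/kg
compressor_work = h2 - h1               # work input to the compressor, kJ/kg
cop = refrigeration_effect / compressor_work

print(f"refrigeration effect: {refrigeration_effect:.0f} kJ/kg")
print(f"compressor work:      {compressor_work:.0f} kJ/kg")
print(f"COP (ideal cycle):    {cop:.1f}")
```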
More information about the design and performance of vapor-compression refrigeration systems is available in the classic Perry's Chemical Engineers' Handbook.
Sorption cycle
Absorption cycle
In the early years of the twentieth century, the vapor absorption cycle using water-ammonia systems or LiBr-water was popular and widely used. After the development of the vapor compression cycle, the vapor absorption cycle lost much of its importance because of its low coefficient of performance (about one fifth of that of the vapor compression cycle). Today, the vapor absorption cycle is used mainly where fuel for heating is available but electricity is not, such as in recreational vehicles that carry LP gas. It is also used in industrial environments where plentiful waste heat overcomes its inefficiency.
The absorption cycle is similar to the compression cycle, except for the method of raising the pressure of the refrigerant vapor. In the absorption system, the compressor is replaced by an absorber which dissolves the refrigerant in a suitable liquid, a liquid pump which raises the pressure and a generator which, on heat addition, drives off the refrigerant vapor from the high-pressure liquid. Some work is needed by the liquid pump but, for a given quantity of refrigerant, it is much smaller than needed by the compressor in the vapor compression cycle. In an absorption refrigerator, a suitable combination of refrigerant and absorbent is used. The most common combinations are ammonia (refrigerant) with water (absorbent), and water (refrigerant) with lithium bromide (absorbent).
Adsorption cycle
The main difference from the absorption cycle is that in the adsorption cycle the refrigerant (adsorbate), which could be ammonia, water, methanol, etc., is taken up by a solid adsorbent, such as silica gel, activated carbon, or zeolite, whereas in the absorption cycle the absorbent is a liquid.
Adsorption refrigeration technology has been extensively researched over the past 30 years because the operation of an adsorption refrigeration system is often noiseless, non-corrosive, and environmentally friendly.
Gas cycle
When the working fluid is a gas that is compressed and expanded but does not change phase, the refrigeration cycle is called a gas cycle. Air is most often this working fluid. As there is no condensation and evaporation intended in a gas cycle, components corresponding to the condenser and evaporator in a vapor compression cycle are the hot and cold gas-to-gas heat exchangers in gas cycles.
The gas cycle is less efficient than the vapor compression cycle because the gas cycle works on the reverse Brayton cycle instead of the reverse Rankine cycle. As such, the working fluid does not receive and reject heat at constant temperature. In the gas cycle, the refrigeration effect is equal to the product of the specific heat of the gas and the rise in temperature of the gas in the low temperature side. Therefore, for the same cooling load, a gas refrigeration cycle needs a large mass flow rate and is bulky.
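The statement that a gas cycle needs a large mass flow rate can be illustrated with a back-of-the-envelope calculation: the refrigeration effect per kilogram of air is the specific heat times the temperature rise on the cold side, so the required mass flow follows from the cooling load. The numbers below are assumed, round figures for illustration only.

```python
# Rough sizing of an air (gas) cycle for an assumed cooling load.
cooling_load_kw = 10.0      # desired refrigeration capacity, kW (assumed)
cp_air = 1.005              # specific heat of air, kJ/(kg*K)
delta_t = 15.0              # temperature rise of the air on the cold side, K (assumed)

refrigeration_effect = cp_air * delta_t             # kJ per kg of air
mass_flow = cooling_load_kw / refrigeration_effect  # kg/s of air required

print(f"refrigeration effect: {refrigeration_effect:.1f} kJ/kg")
print(f"required air flow:    {mass_flow:.2f} kg/s")
# For comparison, a vapor-compression cycle with a refrigeration effect of
# roughly 150 kJ/kg would need only about 0.07 kg/s for the same load.
```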
Because of their lower efficiency and larger bulk, air cycle coolers are not often used nowadays in terrestrial cooling devices. However, the air cycle machine is very common on gas turbine-powered jet aircraft as cooling and ventilation units, because compressed air is readily available from the engines' compressor sections. Such units also serve the purpose of pressurizing the aircraft.
Thermoelectric refrigeration
Thermoelectric cooling uses the Peltier effect to create a heat flux at the junction of two types of material. This effect is commonly used in camping and portable coolers and for cooling electronic components and small instruments. Peltier coolers are often used where a traditional vapor-compression cycle refrigerator would be impractical or take up too much space, and in cooled image sensors as an easy, compact and lightweight, if inefficient, way to achieve very low temperatures. For such applications, two or more Peltier coolers may be arranged in a cascade refrigeration configuration, meaning that two or more Peltier elements are stacked on top of each other, with each stage larger than the one before it so that it can extract both the heat pumped and the waste heat generated by the previous stages. Peltier cooling has a low COP (efficiency) when compared with that of the vapor-compression cycle, so it emits more waste heat (heat generated by the Peltier element or cooling mechanism) and consumes more power for a given cooling capacity.
Magnetic refrigeration
Magnetic refrigeration, or adiabatic demagnetization, is a cooling technology based on the magnetocaloric effect, an intrinsic property of magnetic solids. The refrigerant is often a paramagnetic salt, such as cerium magnesium nitrate. The active magnetic dipoles in this case are those of the electron shells of the paramagnetic atoms.
A strong magnetic field is applied to the refrigerant, forcing its various magnetic dipoles to align and putting these degrees of freedom of the refrigerant into a state of lowered entropy. A heat sink then absorbs the heat released by the refrigerant due to its loss of entropy. Thermal contact with the heat sink is then broken so that the system is insulated, and the magnetic field is switched off. This increases the heat capacity of the refrigerant, thus decreasing its temperature below the temperature of the heat sink.
Because few materials exhibit the needed properties at room temperature, applications have so far been limited to cryogenics and research.
Other methods
Other methods of refrigeration include the air cycle machine used in aircraft; the vortex tube used for spot cooling, when compressed air is available; and thermoacoustic refrigeration using sound waves in a pressurized gas to drive heat transfer and heat exchange; steam jet cooling popular in the early 1930s for air conditioning large buildings; thermoelastic cooling using a smart metal alloy stretching and relaxing. Many Stirling cycle heat engines can be run backwards to act as a refrigerator, and therefore these engines have a niche use in cryogenics. In addition, there are other types of cryocoolers such as Gifford-McMahon coolers, Joule-Thomson coolers, pulse-tube refrigerators and, for temperatures between 2 mK and 500 mK, dilution refrigerators.
Elastocaloric refrigeration
Another potential solid-state refrigeration technique and a relatively new area of study comes from a special property of super elastic materials. These materials undergo a temperature change when experiencing an applied mechanical stress (called the elastocaloric effect). Since super elastic materials deform reversibly at high strains, the material experiences a flattened elastic region in its stress-strain curve caused by a resulting phase transformation from an austenitic to a martensitic crystal phase.
When a super elastic material experiences a stress in the austenitic phase, it undergoes an exothermic phase transformation to the martensitic phase, which causes the material to heat up. Removing the stress reverses the process, restores the material to its austenitic phase, and absorbs heat from the surroundings cooling down the material.
The most appealing part of this research is how potentially energy efficient and environmentally friendly this cooling technology is. The materials used, commonly shape-memory alloys, provide a non-toxic source of emission-free refrigeration. The most commonly studied materials are shape-memory alloys such as nitinol and Cu-Zn-Al. Nitinol is one of the more promising alloys, with an output heat of about 66 J/cm3 and a temperature change of about 16–20 K. Due to the difficulty in manufacturing some of the shape memory alloys, alternative materials like natural rubber have been studied. Even though rubber may not give off as much heat per volume (12 J/cm3) as the shape memory alloys, it still generates a comparable temperature change of about 12 K and operates at a suitable temperature range, low stresses, and low cost.
The main challenge, however, comes from potential energy losses in the form of hysteresis, often associated with this process. Since most of these losses come from incompatibilities between the two phases, proper alloy tuning is necessary to reduce losses and increase reversibility and efficiency. Balancing the transformation strain of the material with the energy losses enables a large elastocaloric effect to occur and potentially a new alternative for refrigeration.
Fridge Gate
The Fridge Gate method is a theoretical application of using a single logic gate to drive a refrigerator in the most energy efficient way possible without violating the laws of thermodynamics. It operates on the fact that there are two energy states in which a particle can exist: the ground state (g) and the excited state (e). The excited state carries a little more energy than the ground state, small enough so that the transition occurs with high probability. There are three components or particle types associated with the fridge gate. The first is on the interior of the refrigerator, the second is on the outside, and the third is connected to a power supply which heats up every so often so that it can reach the e state and replenish the source. In the cooling step, on the inside of the refrigerator, the g-state particle absorbs energy from ambient particles, cooling them, and itself jumping to the e state. In the second step, on the outside of the refrigerator where the particles are also in the e state, the particle falls to the g state, releasing energy and heating the outside particles. In the third and final step, the power supply moves a particle in the e state, and when it falls to the g state it induces an energy-neutral swap where the interior e particle is replaced by a new g particle, restarting the cycle.
Passive systems
One study found that combining a passive daytime radiative cooling system with thermal insulation and evaporative cooling increased ambient cooling power by 300% compared with a stand-alone radiative cooling surface, which could extend the shelf life of food by 40% in humid climates and 200% in desert climates without refrigeration. The system's evaporative cooling layer would require water "re-charges" every 10 days to a month in humid areas and every 4 days in hot and dry areas.
Capacity ratings
The refrigeration capacity of a refrigeration system is the product of the evaporators' enthalpy rise and the evaporators' mass flow rate. Measured capacity is commonly expressed in kW or BTU/h. Domestic and commercial refrigerators may be rated in kJ/s (kW) or BTU/h of cooling. For commercial and industrial refrigeration systems, the kilowatt (kW) is the basic unit of refrigeration, except in North America, where both the ton of refrigeration and BTU/h are used.
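Since capacity is defined above as the product of the evaporator enthalpy rise and mass flow rate, the calculation can be sketched in Python as follows; the flow rate and enthalpy values are illustrative assumptions, while the conversions are the standard ones (1 kW ≈ 3412.14 BTU/h, 1 ton of refrigeration = 12,000 BTU/h).

# Refrigeration capacity = evaporator mass flow rate x enthalpy rise.
mass_flow_kg_s = 0.05          # refrigerant mass flow through the evaporator (illustrative)
enthalpy_rise_kj_kg = 160.0    # specific enthalpy rise across the evaporator (illustrative)

capacity_kw = mass_flow_kg_s * enthalpy_rise_kj_kg      # kJ/s is the same as kW
capacity_btu_h = capacity_kw * 3412.14                  # 1 kW = 3412.14 BTU/h
capacity_tr = capacity_btu_h / 12000.0                  # 1 ton of refrigeration = 12,000 BTU/h

print(f"Capacity: {capacity_kw:.1f} kW = {capacity_btu_h:.0f} BTU/h = {capacity_tr:.2f} TR")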
A refrigeration system's coefficient of performance (CoP) is very important in determining a system's overall efficiency. It is defined as refrigeration capacity in kW divided by the energy input in kW. While CoP is a very simple measure of performance, it is typically not used for industrial refrigeration in North America. Owners and manufacturers of these systems typically use performance factor (PF). A system's PF is defined as a system's energy input in horsepower divided by its refrigeration capacity in TR. Both CoP and PF can be applied to either the entire system or to system components. For example, an individual compressor can be rated by comparing the energy needed to run the compressor versus the expected refrigeration capacity based on inlet volume flow rate. It is important to note that both CoP and PF for a refrigeration system are only defined at specific operating conditions, including temperatures and thermal loads. Moving away from the specified operating conditions can dramatically change a system's performance.
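A minimal Python sketch of the two ratings defined above, with illustrative system figures; the horsepower and ton conversions used are the standard ones (1 hp ≈ 0.7457 kW, 1 TR ≈ 3.517 kW).

# Coefficient of performance (CoP) and performance factor (PF) as defined above.
capacity_kw = 350.0        # refrigeration capacity (illustrative)
power_input_kw = 95.0      # energy input (illustrative)

cop = capacity_kw / power_input_kw                      # dimensionless

power_input_hp = power_input_kw / 0.7457                # 1 hp is about 0.7457 kW
capacity_tr = capacity_kw / 3.51685                     # 1 TR is about 3.51685 kW
pf = power_input_hp / capacity_tr                       # horsepower per ton of refrigeration

print(f"CoP = {cop:.2f}, PF = {pf:.2f} hp/TR")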
Air conditioning systems used in residential applications typically use the SEER (Seasonal Energy Efficiency Ratio) for the energy performance rating. Air conditioning systems for commercial applications often use the EER (Energy Efficiency Ratio) and IEER (Integrated Energy Efficiency Ratio) for the energy efficiency performance rating.
See also
Air conditioning
Auto-defrost
Beef ring
Carnot heat engine
Cold chain
Coolgardie safe
Cryocooler
Darcy friction factor formulae
Einstein refrigerator
Freezer
Heat pump
Heat pump and refrigeration cycle
Heating, ventilation, and air conditioning (HVAC, HVACR)
Icebox
Icyball
Joule–Thomson effect
Laser cooling
Pot-in-pot refrigerator
Pumpable ice technology
Quantum refrigerators
Redundant refrigeration system
Reefer ship
Refrigerant
Refrigerated container
Refrigerator
Refrigerator car
Refrigerator truck
Seasonal energy efficiency ratio (SEER)
Steam jet cooling
Thermoacoustics
Vapor-compression refrigeration
Working fluid
World Refrigeration Day
References
Further reading
Refrigeration volume, ASHRAE Handbook, ASHRAE, Inc., Atlanta, GA
Stoecker and Jones, Refrigeration and Air Conditioning, Tata-McGraw Hill Publishers
Mathur, M.L., Mehta, F.S., Thermal Engineering Vol II
MSN Encarta Encyclopedia
External links
Green Cooling Initiative on alternative natural refrigerants cooling technologies
"The Refrigeration Cycle", from HowStuffWorks
"The Refrigeration", from frigokey
American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE)
International Institute of Refrigeration (IIR)
British Institute of Refrigeration
Scroll down to "Continuous-Cycle Absorption System"
US Department of Energy: Technology Basics of Absorption Cycles
Institute of Refrigeration
Chemical processes
Cooling technology
Food preservation
Heating, ventilation, and air conditioning
Thermodynamics | Refrigeration | [
"Physics",
"Chemistry",
"Mathematics"
] | 10,896 | [
"Chemical processes",
"Thermodynamics",
"nan",
"Chemical process engineering",
"Dynamical systems"
] |
46,242 | https://en.wikipedia.org/wiki/Charles%20Lane%20Poor | Charles Lane Poor (January 18, 1866 – September 27, 1951) was an American astronomy professor, noted for his opposition to Einstein's theory of relativity.
Biography
He was born on January 18, 1866, in Hackensack, New Jersey, to Edward Erie Poor.
He graduated from the City College of New York and received a PhD in 1892 from Johns Hopkins University. Poor was an astronomer and professor of celestial mechanics at Columbia University from 1903 to 1944, when he was named Professor Emeritus. During the 1920s he published several works disputing the evidence for Einstein's theory of relativity and reflecting his objections to the theory.
For 25 years, Poor was chairman of the admissions committee of the New York Yacht Club. In addition, he was a fellow of the Royal Astronomical Society and an associate fellow of the American Academy of Arts and Sciences. He served several terms as mayor of Dering Harbor on Long Island, New York. Poor also invented a circular slide rule used with a sextant for yachting navigation. At Columbia University, Poor was a teacher of the astronomer Samuel A. Mitchell, who went on to become director of the Leander McCormick Observatory at the University of Virginia.
He died on September 27, 1951.
Legacy
One of Poor's sons, Edmund Ward Poor, was one of ten co-founders of Grumman Aircraft on Long Island. Another son was Alfred Easton Poor, an architect.
Selected publications
The Solar System: A Study of Recent Observations (1908)
Nautical Science (1910)
Gravitation versus Relativity (with a preliminary essay by Thomas Chrowder Chamberlin, 1922)
Is Einstein Wrong? A Debate (Jun. 1924)
Rebuttal to Prof. Henderson's Article (Aug. 1924)
The Relativity Deflection of Light (Jul. 1927)
Relativity and the Law of Gravitation (Jan. 1930)
The Deflection of Light as Observed at Total Solar Eclipses (Apr. 1930)
What Einstein Really Did (Nov. 1930)
See also
Criticism of relativity theory
References
American astronomers
Johns Hopkins University alumni
City College of New York alumni
Columbia University faculty
People from Hackensack, New Jersey
1866 births
1951 deaths
Members of the New York Yacht Club
Relativity critics | Charles Lane Poor | [
"Physics"
] | 446 | [
"Relativity critics",
"Theory of relativity"
] |
46,253 | https://en.wikipedia.org/wiki/Fever | Fever or pyrexia in humans is a symptom of an anti-infection defense mechanism that appears with body temperature exceeding the normal range due to an increase in the body's temperature set point in the hypothalamus. There is no single agreed-upon upper limit for normal temperature: sources use values ranging between in humans.
The increase in set point triggers increased muscle contractions and causes a feeling of cold or chills. This results in greater heat production and efforts to conserve heat. When the set point temperature returns to normal, a person feels hot, becomes flushed, and may begin to sweat. Rarely, a fever may trigger a febrile seizure; this is more common in young children. Fevers do not typically go higher than .
A fever can be caused by many medical conditions ranging from non-serious to life-threatening. This includes viral, bacterial, and parasitic infections—such as influenza, the common cold, meningitis, urinary tract infections, appendicitis, Lassa fever, COVID-19, and malaria. Non-infectious causes include vasculitis, deep vein thrombosis, connective tissue disease, side effects of medication or vaccination, and cancer. It differs from hyperthermia, in that hyperthermia is an increase in body temperature over the temperature set point, due to either too much heat production or not enough heat loss.
Treatment to reduce fever is generally not required. Treatment of associated pain and inflammation, however, may be useful and help a person rest. Medications such as ibuprofen or paracetamol (acetaminophen) may help with this as well as lower temperature. Children younger than three months require medical attention, as might people with serious medical problems such as a compromised immune system or people with other symptoms. Hyperthermia requires treatment.
Fever is one of the most common medical signs. It is part of about 30% of healthcare visits by children and occurs in up to 75% of adults who are seriously sick. While fever evolved as a defense mechanism, treating a fever does not appear to improve or worsen outcomes. Fever is often viewed with greater concern by parents and healthcare professionals than is usually deserved, a phenomenon known as "fever phobia."
Associated symptoms
A fever is usually accompanied by sickness behavior, which consists of lethargy, depression, loss of appetite, sleepiness, hyperalgesia, dehydration, and the inability to concentrate. Sleeping with a fever can often cause intense or confusing nightmares, commonly called "fever dreams". Mild to severe delirium (which can also cause hallucinations) may also present itself during high fevers.
Diagnosis
A range for normal temperatures has been found. Central temperatures, such as rectal temperatures, are more accurate than peripheral temperatures.
Fever is generally agreed to be present if the elevated temperature is caused by a raised set point and:
Temperature in the anus (rectum/rectal) is at or over . An ear (tympanic) or forehead (temporal) temperature may also be used.
Temperature in the mouth (oral) is at or over in the morning or over in the afternoon
Temperature under the arm (axillary) is usually about below core body temperature.
In adults, the normal range of oral temperatures in healthy individuals is among men and among women, while when taken rectally it is among men and among women, and for ear measurement it is among men and among women.
Normal body temperatures vary depending on many factors, including age, sex, time of day, ambient temperature, activity level, and more. Normal daily temperature variation has been described as 0.5 °C (0.9 °F). A raised temperature is not always a fever. For example, the temperature rises in healthy people when they exercise, but this is not considered a fever, as the set point is normal. On the other hand, a "normal" temperature may be a fever, if it is unusually high for that person; for example, medically frail elderly people have a decreased ability to generate body heat, so a "normal" temperature of may represent a clinically significant fever.
Hyperthermia
Hyperthermia is an elevation of body temperature over the temperature set point, due to either too much heat production or not enough heat loss. Hyperthermia is thus not considered fever. Hyperthermia should not be confused with hyperpyrexia (which is a very high fever).
Clinically, it is important to distinguish between fever and hyperthermia as hyperthermia may quickly lead to death and does not respond to antipyretic medications. The distinction may however be difficult to make in an emergency setting, and is often established by identifying possible causes.
Types
Various patterns of measured patient temperatures have been observed, some of which may be indicative of a particular medical diagnosis:
Continuous fever, where temperature remains above normal and does not fluctuate more than in 24 hours (e.g. in bacterial pneumonia, typhoid fever, infective endocarditis, tuberculosis, or typhus).
Intermittent fever is present only for a certain period, later cycling back to normal (e.g., in malaria, leishmaniasis, pyemia, sepsis, or African trypanosomiasis).
Remittent fever, where the temperature remains above normal throughout the day and fluctuates more than in 24 hours (e.g., in infective endocarditis or brucellosis).
Pel–Ebstein fever is a cyclic fever that is rarely seen in patients with Hodgkin's lymphoma.
Undulant fever, seen in brucellosis.
Typhoid fever is a continuous fever showing a characteristic step-ladder pattern, a step-wise increase in temperature with a high plateau.
Among the types of intermittent fever are ones specific to cases of malaria caused by different pathogens. These are:
Quotidian fever, with a 24-hour periodicity, typical of malaria caused by Plasmodium knowlesi (P. knowlesi);
Tertian fever, with a 48-hour periodicity, typical of later course malaria caused by P. falciparum, P. vivax, or P. ovale;
Quartan fever, with a 72-hour periodicity, typical of later course malaria caused by P. malariae.
In addition, there is disagreement regarding whether a specific fever pattern, the Pel–Ebstein fever, is associated with Hodgkin's lymphoma; patients are said to present a high temperature for one week, followed by a low temperature for the next week, and so on, but the generality of this pattern is debated.
Persistent fever that cannot be explained after repeated routine clinical inquiries is called fever of unknown origin. A neutropenic fever, also called febrile neutropenia, is a fever in the absence of normal immune system function. Because of the lack of infection-fighting neutrophils, a bacterial infection can spread rapidly; this fever is, therefore, usually considered to require urgent medical attention. This kind of fever is more commonly seen in people receiving immune-suppressing chemotherapy than in apparently healthy people.
Hyperpyrexia
Hyperpyrexia is an extreme elevation of body temperature which, depending upon the source, is classified as a core body temperature greater than or equal to ; the range of hyperpyrexia includes cases considered severe (≥ 40 °C) and extreme (≥ 42 °C). It differs from hyperthermia in that one's thermoregulatory system's set point for body temperature is set above normal, then heat is generated to achieve it. In contrast, hyperthermia involves body temperature rising above its set point due to outside factors. The high temperatures of hyperpyrexia are considered medical emergencies, as they may indicate a serious underlying condition or lead to severe morbidity (including permanent brain damage), or to death. A common cause of hyperpyrexia is an intracranial hemorrhage. Other causes in emergency room settings include sepsis, Kawasaki syndrome, neuroleptic malignant syndrome, drug overdose, serotonin syndrome, and thyroid storm.
Differential diagnosis
Fever is a common symptom of many medical conditions:
Infectious disease, e.g., COVID-19, dengue, Ebola, gastroenteritis, HIV, influenza, Lyme disease, rocky mountain spotted fever, secondary syphilis, malaria, mononucleosis, as well as infections of the skin, e.g., abscesses and boils.
Immunological diseases, e.g., relapsing polychondritis, autoimmune hepatitis, granulomatosis with polyangiitis, Horton disease, inflammatory bowel diseases, Kawasaki disease, lupus erythematosus, sarcoidosis, Still's disease, rheumatoid arthritis, lymphoproliferative disorders and psoriasis;
Tissue destruction, as a result of cerebral bleeding, crush syndrome, hemolysis, infarction, rhabdomyolysis, surgery, etc.;
Cancers, particularly blood cancers such as leukemia and lymphomas;
Metabolic disorders, e.g., gout, and porphyria; and
Inherited metabolic disorder, e.g., Fabry disease.
Adult and pediatric manifestations for the same disease may differ; for instance, in COVID-19, one metastudy describes 92.8% of adults versus 43.9% of children presenting with fever.
In addition, fever can result from a reaction to an incompatible blood product.
Function
Immune function
Fever is thought to contribute to host defense, as the reproduction of pathogens with strict temperature requirements can be hindered, and the rates of some important immunological reactions are increased by temperature. Fever has been described in teaching texts as assisting the healing process in various ways, including:
increased mobility of leukocytes;
enhanced leukocyte phagocytosis;
decreased endotoxin effects; and
increased proliferation of T cells.
Advantages and disadvantages
A fever response to an infectious disease is generally regarded as protective, whereas fever in non-infectious conditions may be maladaptive. Studies have not been consistent on whether treating fever generally worsens or improves mortality risk. Benefits or harms may depend on the type of infection, the health status of the patient and other factors. Studies using warm-blooded vertebrates suggest that they recover more rapidly from infections or critical illness due to fever. In sepsis, fever is associated with reduced mortality.
Pathophysiology of fever induction
Hypothalamus
Temperature is regulated in the hypothalamus. The trigger of a fever, called a pyrogen, results in the release of prostaglandin E2 (PGE2). PGE2 in turn acts on the hypothalamus, which creates a systemic response in the body, causing heat-generating effects to match a new higher temperature set point. There are four receptors to which PGE2 can bind (EP1-4), with a previous study showing that the EP3 subtype mediates the fever response. Hence, the hypothalamus can be seen as working like a thermostat. When the set point is raised, the body increases its temperature through both active generation of heat and retention of heat. Peripheral vasoconstriction both reduces heat loss through the skin and causes the person to feel cold. Norepinephrine increases thermogenesis in brown adipose tissue, and muscle contraction through shivering raises the metabolic rate.
If these measures are insufficient to make the blood temperature in the brain match the new set point in the hypothalamus, the brain orchestrates heat effector mechanisms via the autonomic nervous system or primary motor center for shivering. These may be:
Increased heat production by increased muscle tone, shivering (muscle movements to produce heat) and release of hormones like epinephrine; and
Prevention of heat loss, e.g., through vasoconstriction.
When the hypothalamic set point moves back to baseline—either spontaneously or via medication—normal functions such as sweating, and the reverse of the foregoing processes (e.g., vasodilation, end of shivering, and nonshivering heat production) are used to cool the body to the new, lower setting.
This contrasts with hyperthermia, in which the normal setting remains, and the body overheats through undesirable retention of excess heat or over-production of heat. Hyperthermia is usually the result of an excessively hot environment (heat stroke) or an adverse reaction to drugs. Fever can be differentiated from hyperthermia by the circumstances surrounding it and its response to anti-pyretic medications.
In infants, the autonomic nervous system may also activate brown adipose tissue to produce heat (non-shivering thermogenesis).
Increased heart rate and vasoconstriction contribute to increased blood pressure in fever.
Pyrogens
A pyrogen is a substance that induces fever. In the presence of an infectious agent, such as bacteria, viruses, viroids, etc., the immune response of the body is to inhibit their growth and eliminate them. The most common pyrogens are endotoxins, which are lipopolysaccharides (LPS) produced by Gram-negative bacteria such as E. coli. But pyrogens include non-endotoxic substances (derived from microorganisms other than gram-negative bacteria or from chemical substances) as well. The types of pyrogens include internal (endogenous) and external (exogenous) to the body.
The "pyrogenicity" of given pyrogens varies: in extreme cases, bacterial pyrogens can act as superantigens and cause rapid and dangerous fevers.
Endogenous
Endogenous pyrogens are cytokines released from monocytes (which are part of the immune system). In general, they stimulate chemical responses, often in the presence of an antigen, leading to a fever. While they can be produced in response to external factors like exogenous pyrogens, they can also be induced by internal factors such as damage-associated molecular patterns, as in conditions like rheumatoid arthritis or lupus.
Major endogenous pyrogens are interleukin 1 (α and β) and interleukin 6 (IL-6). Minor endogenous pyrogens include interleukin-8, tumor necrosis factor-β, macrophage inflammatory protein-α and macrophage inflammatory protein-β as well as interferon-α, interferon-β, and interferon-γ. Tumor necrosis factor-α (TNF) also acts as a pyrogen, mediated by interleukin 1 (IL-1) release. These cytokine factors are released into general circulation, where they migrate to the brain's circumventricular organs, where they are more easily absorbed than in areas protected by the blood–brain barrier. The cytokines then bind to endothelial receptors on vessel walls or to receptors on microglial cells, resulting in activation of the arachidonic acid pathway.
Of these, IL-1β, TNF, and IL-6 are able to raise the temperature setpoint of an organism and cause fever. These proteins produce a cyclooxygenase which induces the hypothalamic production of PGE2 which then stimulates the release of neurotransmitters such as cyclic adenosine monophosphate and increases body temperature.
Exogenous
Exogenous pyrogens are external to the body and are of microbial origin. In general, these pyrogens, including bacterial cell wall products, may act on Toll-like receptors in the hypothalamus and elevate the thermoregulatory setpoint.
An example of a class of exogenous pyrogens are bacterial lipopolysaccharides (LPS) present in the cell wall of gram-negative bacteria. According to one mechanism of pyrogen action, an immune system protein, lipopolysaccharide-binding protein (LBP), binds to LPS, and the LBP–LPS complex then binds to a CD14 receptor on a macrophage. The LBP-LPS binding to CD14 results in cellular synthesis and release of various endogenous cytokines, e.g., interleukin 1 (IL-1), interleukin 6 (IL-6), and tumor necrosis factor-alpha (TNFα). A further downstream event is activation of the arachidonic acid pathway.
PGE2 release
PGE2 release comes from the arachidonic acid pathway. This pathway, as it relates to fever, is mediated by the enzymes phospholipase A2 (PLA2), cyclooxygenase-2 (COX-2), and prostaglandin E2 synthase. These enzymes ultimately mediate the synthesis and release of PGE2.
PGE2 is the ultimate mediator of the febrile response. The setpoint temperature of the body will remain elevated until PGE2 is no longer present. PGE2 acts on neurons in the preoptic area (POA) through the prostaglandin E receptor 3 (EP3). EP3-expressing neurons in the POA innervate the dorsomedial hypothalamus (DMH), the rostral raphe pallidus nucleus in the medulla oblongata (rRPa), and the paraventricular nucleus (PVN) of the hypothalamus. Fever signals sent to the DMH and rRPa lead to stimulation of the sympathetic output system, which evokes non-shivering thermogenesis to produce body heat and skin vasoconstriction to decrease heat loss from the body surface. It is presumed that the innervation from the POA to the PVN mediates the neuroendocrine effects of fever through the pathway involving pituitary gland and various endocrine organs.
Management
Fever does not necessarily need to be treated, and most people with a fever recover without specific medical attention. Although it is unpleasant, fever rarely rises to a dangerous level even if untreated. Damage to the brain generally does not occur until temperatures reach , and it is rare for an untreated fever to exceed . Treating fever in people with sepsis does not affect outcomes. Small trials have shown no benefit of treating fevers of or higher of critically ill patients in ICUs, and one trial was terminated early because patients receiving aggressive fever treatment were dying more often.
According to the NIH, the two assumptions which are generally used to argue in favor of treating fevers have not been experimentally validated. These are that (1) a fever is noxious, and (2) suppression of a fever will reduce its noxious effect. Most of the other studies supporting the association of fever with poorer outcomes have been observational in nature. In theory, these critically ill patients and those faced with additional physiologic stress may benefit from fever reduction, but the evidence on both sides of the argument appears to be mostly equivocal.
Conservative measures
Limited evidence supports sponging or bathing feverish children with tepid water. The use of a fan or air conditioning may somewhat reduce the temperature and increase comfort. If the temperature reaches the extremely high level of hyperpyrexia, aggressive cooling is required (generally produced mechanically via conduction by applying numerous ice packs across most of the body or direct submersion in ice water). In general, people are advised to keep adequately hydrated. Whether increased fluid intake improves symptoms or shortens respiratory illnesses such as the common cold is not known.
Medications
Medications that lower fevers are called antipyretics. The antipyretic ibuprofen is effective in reducing fevers in children. It is more effective than acetaminophen (paracetamol) in children. Ibuprofen and acetaminophen may be safely used together in children with fevers. The efficacy of acetaminophen by itself in children with fevers has been questioned. Ibuprofen is also superior to aspirin in children with fevers. Additionally, aspirin is not recommended in children and young adults (those under the age of 16 or 19 depending on the country) due to the risk of Reye's syndrome.
Using both paracetamol and ibuprofen at the same time or alternating between the two is more effective at decreasing fever than using only paracetamol or ibuprofen. It is not clear if it increases child comfort. Response or nonresponse to medications does not predict whether or not a child has a serious illness.
With respect to the effect of antipyretics on the risk of death in those with infection, studies have found mixed results, as of 2019.
Epidemiology
Fever is one of the most common medical signs. It is part of about 30% of healthcare visits by children, and occurs in up to 75% of adults who are seriously sick. About 5% of people who go to an emergency room have a fever.
History
A number of types of fever were known as early as 460–370 BC, when Hippocrates was practicing medicine, including fevers due to malaria (tertian, or every 2 days, and quartan, or every 3 days). It also became clear around this time that fever was a symptom of disease rather than a disease in and of itself.
Infections presenting with fever were a major source of mortality in humans for about 200,000 years. Until the late nineteenth century, approximately half of all humans died from infections before the age of fifteen.
An older term, febricula (a diminutive form of the Latin word for fever), was once used to refer to a low-grade fever lasting only a few days. This term fell out of use in the early 20th century, and the symptoms it referred to are now thought to have been caused mainly by various minor viral respiratory infections.
Society and culture
Mythology
Febris (fever in Latin) is the goddess of fever in Roman mythology. People with fevers would visit her temples.
Tertiana and Quartana are the goddesses of tertian and quartan fevers of malaria in Roman mythology.
Jvarasura (fever-demon in Hindi) is the personification of fever and disease in Hindu and Buddhist mythology.
Pediatrics
Fever is often viewed with greater concern by parents and healthcare professionals than might be deserved, a phenomenon known as fever phobia, which is based on both caregivers' and parents' misconceptions about fever in children. Among them, many parents incorrectly believe that fever is a disease rather than a medical sign, that even low fevers are harmful, and that any temperature even briefly or slightly above the oversimplified "normal" number marked on a thermometer is a clinically significant fever. They are also afraid of harmless side effects like febrile seizures and dramatically overestimate the likelihood of permanent damage from typical fevers. The underlying problem, according to professor of pediatrics Barton D. Schmitt, is that "as parents we tend to suspect that our children's brains may melt." As a result of these misconceptions, parents are anxious, give the child fever-reducing medicine when the temperature is technically normal or only slightly elevated, and interfere with the child's sleep to give the child more medicine.
Other species
Fever is an important metric for the diagnosis of disease in domestic animals. The body temperature of animals, which is taken rectally, is different from one species to another. For example, a horse is said to have a fever above (). In species that allow the body to have a wide range of "normal" temperatures, such as camels, whose body temperature varies as the environmental temperature varies, the body temperature which constitutes a febrile state differs depending on the environmental temperature. Fever can also be behaviorally induced by invertebrates that do not have immune-system based fever. For instance, some species of grasshopper will thermoregulate to achieve body temperatures that are 2–5 °C higher than normal in order to inhibit the growth of fungal pathogens such as Beauveria bassiana and Metarhizium acridum. Honeybee colonies are also able to induce a fever in response to a fungal parasite Ascosphaera apis.
References
Further reading
External links
Fever and Taking Your Child's Temperature
US National Institute of Health factsheet
Drugs most commonly associated with the adverse event Pyrexia (Fever) as reported to the FDA
Fever at MedlinePlus
Why are We So Afraid of Fevers? at The New York Times
Symptoms and signs
Thermoregulation | Fever | [
"Biology"
] | 5,215 | [
"Thermoregulation",
"Homeostasis"
] |
46,256 | https://en.wikipedia.org/wiki/Telemetry | Telemetry is the in situ collection of measurements or other data at remote points and their automatic transmission to receiving equipment (telecommunication) for monitoring. The word is derived from the Greek roots tele, 'far off', and metron, 'measure'. Systems that need external instructions and data to operate require the counterpart of telemetry: telecommand.
Although the term commonly refers to wireless data transfer mechanisms (e.g., using radio, ultrasonic, or infrared systems), it also encompasses data transferred over other media such as a telephone or computer network, optical link or other wired communications like power line carriers. Many modern telemetry systems take advantage of the low cost and ubiquity of GSM networks by using SMS to receive and transmit telemetry data.
A telemeter is a physical device used in telemetry. It consists of a sensor, a transmission path, and a display, recording, or control device. Electronic devices are widely used in telemetry and can be wireless or hard-wired, analog or digital. Other technologies are also possible, such as mechanical, hydraulic and optical.
Telemetry may be commutated to allow the transmission of multiple data streams in a fixed frame.
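As a rough illustration of commutation, the following Python sketch samples several slow channels round-robin into fixed-length frames; the channel names, sample values and frame length are illustrative assumptions, not part of any particular telemetry standard.

from itertools import cycle

# Commutate several measurement channels into fixed-length frames (round-robin).
channels = {
    "temperature": [21.3, 21.4, 21.6],
    "pressure":    [101.2, 101.1, 101.3],
    "voltage":     [11.9, 12.0, 12.1],
}

def commutate(channels, frame_length):
    """Yield fixed-length frames, taking one sample per channel in turn."""
    iterators = {name: iter(samples) for name, samples in channels.items()}
    order = cycle(channels)               # round-robin channel order
    frame = []
    for name in order:
        try:
            frame.append((name, next(iterators[name])))
        except StopIteration:
            break                         # stop when any channel runs out of samples
        if len(frame) == frame_length:
            yield frame
            frame = []

for frame in commutate(channels, frame_length=3):
    print(frame)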
History
The beginnings of industrial telemetry lie in the steam age, although the sensors were not called telemeters at that time. Examples are James Watt's (1736–1819) additions to his steam engines for monitoring from a (near) distance, such as the mercury pressure gauge and the fly-ball governor.
Although the original telemeter referred to a ranging device (the rangefinding telemeter), by the late 19th century the same term was in wide use by electrical engineers, who applied it to electrically operated devices measuring many other quantities besides distance (for instance, in the patent of an "Electric Telemeter Transmitter"). General telemeters included such sensors as the thermocouple (from the work of Thomas Johann Seebeck), the resistance thermometer (by William Siemens based on the work of Humphry Davy), and the electrical strain gauge (based on Lord Kelvin's discovery that conductors under mechanical strain change their resistance), and output devices such as Samuel Morse's telegraph sounder and the relay. In 1889 this led an author in the Institution of Civil Engineers proceedings to suggest that the term for the rangefinder telemeter might be replaced with tacheometer.
In the 1930s the use of electrical telemeters grew rapidly. The electrical strain gauge was widely used in rocket and aviation research, and the radiosonde was invented for meteorological measurements. The advent of World War II gave an impetus to industrial development, and thereafter many of these telemeters became commercially viable.
Carrying on from rocket research, radio telemetry was used routinely as space exploration got underway. Spacecraft are in a place where a physical connection is not possible, leaving radio or other electromagnetic waves (such as infrared lasers) as the only viable option for telemetry. During crewed space missions it is used to monitor not only parameters of the vehicle, but also the health and life support of the astronauts. During the Cold War telemetry found uses in espionage. US intelligence found that they could monitor the telemetry from Soviet missile tests by building a telemeter of their own to intercept the radio signals and hence learn a great deal about Soviet capabilities.
Types of telemeter
Telemeters are the physical devices used in telemetry. Each consists of a sensor, a transmission path, and a display, recording, or control device. Electronic telemeters are widely used and can be wireless or hard-wired, analog or digital; other technologies, such as mechanical, hydraulic and optical ones, are also possible.
Telemetering information over wire had its origins in the 19th century. One of the first data-transmission circuits was developed in 1845 between the Russian Tsar's Winter Palace and army headquarters. In 1874, French engineers built a system of weather and snow-depth sensors on Mont Blanc that transmitted real-time information to Paris. In 1901 the American inventor C. Michalke patented the selsyn, a circuit for sending synchronized rotation information over a distance. In 1906 a set of seismic stations were built with telemetering to the Pulkovo Observatory in Russia. In 1912, Commonwealth Edison developed a system of telemetry to monitor electrical loads on its power grid. The Panama Canal (completed 1913–1914) used extensive telemetry systems to monitor locks and water levels.
Wireless telemetry made early appearances in the radiosonde, developed concurrently in 1930 by Robert Bureau in France and Pavel Molchanov in Russia. Molchanov's system modulated temperature and pressure measurements by converting them to wireless Morse code. The German V-2 rocket used a system of primitive multiplexed radio signals called "Messina" to report four rocket parameters, but it was so unreliable that Wernher von Braun once claimed it was more useful to watch the rocket through binoculars.
In the US and the USSR, the Messina system was quickly replaced with better systems; in both cases, based on pulse-position modulation (PPM).
Early Soviet missile and space telemetry systems which were developed in the late 1940s used either PPM (e.g., the Tral telemetry system developed by OKB-MEI) or pulse-duration modulation (e.g., the RTS-5 system developed by NII-885). In the United States, early work employed similar systems, but were later replaced by pulse-code modulation (PCM) (for example, in the Mars probe Mariner 4). Later Soviet interplanetary probes used redundant radio systems, transmitting telemetry by PCM on a decimeter band and PPM on a centimeter band.
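A minimal Python sketch of the pulse-code modulation idea mentioned above: each analogue sample is quantized to an n-bit word before transmission. The voltage range, bit depth and readings are illustrative assumptions.

# Quantize analogue samples into 8-bit PCM words (uniform quantization).
def pcm_encode(samples, v_min=0.0, v_max=5.0, bits=8):
    levels = 2 ** bits - 1
    words = []
    for v in samples:
        v = min(max(v, v_min), v_max)                 # clamp to the assumed input range
        code = round((v - v_min) / (v_max - v_min) * levels)
        words.append(code)
    return words

readings = [0.0, 1.25, 2.5, 4.99, 5.2]                 # volts (illustrative)
print(pcm_encode(readings))                            # [0, 64, 128, 254, 255]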
Applications
Meteorology
Weather balloons have used telemetry to transmit meteorological data since 1920.
Oil and gas industry
Telemetry is used to transmit drilling mechanics and formation evaluation information uphole, in real time, as a well is drilled. These services are known as measurement while drilling and logging while drilling. Information acquired thousands of feet below ground while drilling is sent up the borehole to surface sensors and demodulation software. The pressure wave is translated into useful information after digital signal processing and noise filtering. This information is used for formation evaluation, drilling optimization, and geosteering.
Motor racing
Telemetry is a key factor in modern motor racing, allowing race engineers to interpret data collected during a test or race and use it to properly tune the car for optimum performance. Systems used in series such as Formula One have become advanced to the point where the potential lap time of the car can be calculated, and this time is what the driver is expected to meet. Examples of measurements on a race car include accelerations (G forces) in three axes, temperature readings, wheel speed, and suspension displacement. In Formula One, driver input is also recorded so the team can assess driver performance and (in case of an accident) the FIA can determine or rule out driver error as a possible cause.
Later developments include two-way telemetry which allows engineers to update calibrations on the car in real time (even while it is out on the track). In Formula One, two-way telemetry surfaced in the early 1990s and consisted of a message display on the dashboard which the team could update. Its development continued until May 2001, when it was first allowed on the cars. By 2002, teams were able to change engine mapping and deactivate engine sensors from the pit while the car was on the track. For the 2003 season, the FIA banned two-way telemetry from Formula One; however, the technology may be used in other types of racing or on road cars.
One-way telemetry systems have also been applied in radio-controlled (R/C) car racing to report data from the car's sensors, such as engine RPM, voltage, temperatures, and throttle position.
Transportation
In the transportation industry, telemetry provides meaningful information about a vehicle or driver's performance by collecting data from sensors within the vehicle. This is undertaken for various reasons, ranging from staff compliance monitoring and insurance rating to predictive maintenance.
Telemetry is used to link traffic counter devices to data recorders to measure traffic flows and vehicle lengths and weights.
Telemetry is used by the railway industry for measuring the health of trackage. This permits optimized and focused predictive and preventative maintenance. Typically this is done with specialized trains, such as the New Measurement Train used in the United Kingdom by Network Rail, which can check for track defects, such as problems with gauge, and deformations in the rail. Japan uses similar, but quicker trains, nicknamed Doctor Yellow. Such trains, besides checking the tracks, can also verify whether or not there are any problems with the overhead power supply (catenary), where it is installed. Dedicated rail inspection companies, such as Sperry Rail, have their own customized rail cars and rail-wheel equipped trucks, that use a variety of methods, including lasers, ultrasound, and induction (measuring resulting magnetic fields from running electricity into rails) to find any defects.
Agriculture
Most activities related to healthy crops and good yields depend on timely availability of weather and soil data. Therefore, wireless weather stations play a major role in disease prevention and precision irrigation. These stations transmit parameters necessary for decision-making to a base station: air temperature and relative humidity, precipitation and leaf wetness (for disease prediction models), solar radiation and wind speed (to calculate evapotranspiration), water deficit stress (WDS, from leaf sensors) and soil moisture (crucial to irrigation decisions).
Because local micro-climates can vary significantly, such data needs to come from within the crop. Monitoring stations usually transmit data back by terrestrial radio, although occasionally satellite systems are used. Solar power is often employed to make the station independent of the power grid.
Water management
Telemetry is important in water management, including water quality and stream gauging functions. Major applications include AMR (automatic meter reading), groundwater monitoring, leak detection in distribution pipelines and equipment surveillance. Having data available in almost real time allows quick reactions to events in the field. Telemetry control allows engineers to intervene with assets such as pumps, for example by remotely switching them on or off depending on the circumstances. Watershed telemetry is one strategy for implementing a water management system.
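As a rough illustration of remote pump switching, the following Python sketch applies a simple hysteresis rule to telemetered tank levels; the thresholds and level values are illustrative assumptions, not from any particular installation.

# Hysteresis control of a remote pump from telemetered tank levels.
LOW_LEVEL_M = 1.0    # switch the pump on below this level (illustrative)
HIGH_LEVEL_M = 3.5   # switch the pump off above this level (illustrative)

def update_pump(level_m, pump_on):
    """Return the new pump state for the latest telemetered level."""
    if level_m < LOW_LEVEL_M:
        return True
    if level_m > HIGH_LEVEL_M:
        return False
    return pump_on       # within the deadband: keep the current state

pump_on = False
for level in [2.0, 1.2, 0.8, 1.5, 3.0, 3.7, 2.9]:
    pump_on = update_pump(level, pump_on)
    print(f"level={level:.1f} m -> pump {'ON' if pump_on else 'OFF'}")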
Defense, space and resource exploration
Telemetry is used in complex systems such as missiles, RPVs, spacecraft, oil rigs, and chemical plants since it allows the automatic monitoring, alerting, and record-keeping necessary for efficient and safe operation. Space agencies such as NASA, ISRO, the European Space Agency (ESA), and other agencies use telemetry and/or telecommand systems to collect data from spacecraft and satellites.
Telemetry is vital in the development of missiles, satellites and aircraft because the system might be destroyed during or after the test. Engineers need critical system parameters to analyze (and improve) the performance of the system. In the absence of telemetry, this data would often be unavailable.
Space science
Telemetry is used by crewed or uncrewed spacecraft for data transmission. Distances of more than 10 billion kilometres have been covered, e.g., by Voyager 1.
Rocketry
In rocketry, telemetry equipment forms an integral part of the rocket range assets used to monitor the position and health of a launch vehicle in order to determine range safety flight termination criteria (the range's purpose being public safety). Problems include the extreme environment (temperature, acceleration and vibration), the energy supply, antenna alignment and (at long distances, e.g., in spaceflight) signal travel time.
Flight testing
Today nearly every type of aircraft, missiles, or spacecraft carries a wireless telemetry system as it is tested. Aeronautical mobile telemetry is used for the safety of the pilots and persons on the ground during flight tests. Telemetry from an on-board flight test instrumentation system is the primary source of real-time measurement and status information transmitted during the testing of crewed and uncrewed aircraft.
Military intelligence
Intercepted telemetry was an important source of intelligence for the United States and UK when Soviet missiles were tested; for this purpose, the United States operated a listening post in Iran. Eventually, the Russians discovered the United States intelligence-gathering network and encrypted their missile-test telemetry signals. Telemetry was also a source for the Soviets, who operated listening ships in Cardigan Bay to eavesdrop on UK missile tests performed in the area.
Energy monitoring
In factories, buildings and houses, energy consumption of systems such as HVAC are monitored at multiple locations; related parameters (e.g., temperature) are sent via wireless telemetry to a central location. The information is collected and processed, enabling the most efficient use of energy. Such systems also facilitate predictive maintenance.
Resource distribution
Many resources need to be distributed over wide areas. Telemetry is useful in these cases, since it allows the logistics system to channel resources where they are needed, as well as provide security for those assets; principal examples of this are dry goods, fluids, and granular bulk solids.
Dry goods
Dry goods, such as packaged merchandise, may be remotely monitored, tracked and inventoried by RFID sensing systems, barcode readers, optical character recognition (OCR) readers, or other sensing devices coupled to telemetry devices, which detect RFID tags, barcode labels or other identifying markers affixed to the item, its package, or (for large items and bulk shipments) to its shipping container or vehicle. This facilitates knowledge of their location, and can record their status and disposition, as when merchandise with barcode labels is scanned through a checkout reader at point-of-sale systems in a retail store. Stationary or hand-held barcode, RFID or optical readers with remote communications can be used to expedite inventory tracking and counting in stores, warehouses, shipping terminals, transportation carriers and factories.
Fluids
Fluids stored in tanks are a principal object of constant commercial telemetry. This typically includes monitoring of tank farms in gasoline refineries and chemical plants—and distributed or remote tanks, which must be replenished when empty (as with gas station storage tanks, home heating oil tanks, or ag-chemical tanks at farms), or emptied when full (as with production from oil wells, accumulated waste products, and newly produced fluids). Telemetry is used to communicate the variable measurements of flow and tank level sensors detecting fluid movements and/or volumes by pneumatic, hydrostatic, or differential pressure; tank-confined ultrasonic, radar or Doppler effect echoes; or mechanical or magnetic sensors.
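One of the sensing principles mentioned above, hydrostatic pressure measurement, can be sketched as follows: for a vented tank, level h = P / (ρ·g). The fluid density and pressure reading in this Python sketch are illustrative assumptions.

# Infer liquid level from a hydrostatic (gauge) pressure reading: h = P / (rho * g).
G = 9.81                    # gravitational acceleration, m/s^2

def level_from_pressure(gauge_pressure_pa, density_kg_m3):
    return gauge_pressure_pa / (density_kg_m3 * G)

# Example: heating oil (density assumed ~850 kg/m^3), 25 kPa gauge at the tank bottom.
print(f"{level_from_pressure(25_000, 850):.2f} m of product")   # about 3.00 m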
Bulk solids
Telemetry of bulk solids is common for tracking and reporting the volume status and condition of grain and livestock feed bins, powdered or granular food, powders and pellets for manufacturing, sand and gravel, and other granular bulk solids. While technology associated with fluid tank monitoring also applies, in part, to granular bulk solids, reporting of overall container weight, or other gross characteristics and conditions, are sometimes required, owing to bulk solids' more complex and variable physical characteristics.
Medicine/healthcare
Telemetry is used for patients (biotelemetry) who are at risk of abnormal heart activity, generally in a coronary care unit. Telemetry specialists are sometimes used to monitor many patients within a hospital. Such patients are outfitted with measuring, recording and transmitting devices. A data log can be useful in diagnosis of the patient's condition by doctors. An alerting function can alert nurses if the patient is suffering from an acute (or dangerous) condition.
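A minimal Python sketch of such an alerting function: heart-rate samples from a telemetry unit are checked against limits and an alert is raised when they are breached. The limits, patient identifier and sample values are illustrative assumptions, not clinical guidance.

# Simple rate-limit alerting on a telemetered heart-rate stream.
LOW_BPM, HIGH_BPM = 50, 120     # illustrative alarm limits only

def check_heart_rate(patient_id, bpm):
    if bpm < LOW_BPM:
        return f"ALERT {patient_id}: low heart rate ({bpm} bpm)"
    if bpm > HIGH_BPM:
        return f"ALERT {patient_id}: high heart rate ({bpm} bpm)"
    return None

for sample in [72, 64, 48, 95, 131]:
    alert = check_heart_rate("bed-12", sample)
    if alert:
        print(alert)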
Systems are available in medical-surgical nursing for monitoring patients in order to rule out a heart condition, or to monitor their response to antiarrhythmic medications such as amiodarone.
A new and emerging application for telemetry is in the field of neurophysiology, or neurotelemetry. Neurophysiology is the study of the central and peripheral nervous systems through the recording of bioelectrical activity, whether spontaneous or stimulated. In neurotelemetry (NT) the electroencephalogram (EEG) of a patient is monitored remotely by a registered EEG technologist using advanced communication software. The goal of neurotelemetry is to recognize a decline in a patient's condition before physical signs and symptoms are present.
Neurotelemetry is synonymous with real-time continuous video EEG monitoring and has application in the epilepsy monitoring unit, neuro ICU, pediatric ICU and newborn ICU. Due to the labor-intensive nature of continuous EEG monitoring, NT is typically done in the larger academic teaching hospitals using in-house programs that include registered EEG technologists, IT support staff, neurologists, neurophysiologists and monitoring support personnel.
Modern microprocessor speeds, software algorithms and video data compression allow hospitals to centrally record and monitor continuous digital EEGs of multiple critically ill patients simultaneously.
Neurotelemetry and continuous EEG monitoring provides dynamic information about brain function that permits early detection of changes in neurologic status, which is especially useful when the clinical examination is limited.
Fishery and wildlife research and management
Telemetry is used to study wildlife, and has been useful for monitoring threatened species at the individual level. Animals under study can be outfitted with instrumentation tags, which include sensors that measure temperature, diving depth and duration (for marine animals), speed and location (using GPS or Argos packages). Telemetry tags can give researchers information about animal behavior, functions, and their environment. This information is then either stored (with archival tags) or the tags can send (or transmit) their information to a satellite or handheld receiving device. Capturing and marking wild animals can put them at some risk, so it is important to minimize these impacts.
Retail
At a 2005 workshop in Las Vegas, a seminar noted the introduction of telemetry equipment which would allow vending machines to communicate sales and inventory data to a route truck or to a headquarters. This data could be used for a variety of purposes, such as eliminating the need for drivers to make a first trip to see which items needed to be restocked before delivering the inventory.
Retailers also use RFID tags to track inventory and prevent shoplifting. Most of these tags passively respond to RFID readers (e.g., at the cashier), but active RFID tags are available which periodically transmit location information to a base station.
Law enforcement
Telemetry hardware is useful for tracking persons and property in law enforcement. An ankle collar worn by convicts on probation can warn authorities if a person violates the terms of his or her parole, such as by straying from authorized boundaries or visiting an unauthorized location. Telemetry has also enabled bait cars, where law enforcement can rig a car with cameras and tracking equipment and leave it somewhere they expect it to be stolen. When stolen the telemetry equipment reports the location of the vehicle, enabling law enforcement to deactivate the engine and lock the doors when it is stopped by responding officers.
Energy providers
In some countries, telemetry is used to measure the amount of electrical energy consumed. The electricity meter communicates with a concentrator, and the latter sends the information through GPRS or GSM to the energy provider's server. Telemetry is also used for the remote monitoring of substations and their equipment. For data transmission, phase line carrier systems operating on frequencies between 30 and 400 kHz are sometimes used.
Falconry
In falconry, "telemetry" means a small radio transmitter carried by a bird of prey that will allow the bird's owner to track it when it is out of sight.
Testing
Telemetry is used in testing hostile environments which are dangerous to humans. Examples include munitions storage facilities, radioactive sites, volcanoes, deep sea, and outer space.
Communications
Telemetry is used in many battery operated wireless systems to inform monitoring personnel when the battery power is reaching a low point and the end item needs fresh batteries.
Mining
In the mining industry, telemetry serves two main purposes: the measurement of key parameters from mining equipment and the monitoring of safety practices. The information provided by the collection and analysis of key parameters allows for root-cause identification of inefficient operations, unsafe practices and incorrect equipment usage for maximizing productivity and safety. Further applications of the technology allow for sharing knowledge and best practices across the organization.
Software
In software, telemetry is used to gather data on the use and performance of applications and application components, e.g. how often certain features are used, measurements of start-up time and processing time, hardware, application crashes, and general usage statistics and/or user behavior. In some cases, very detailed data is reported like individual window metrics, counts of used features, and individual function timings.
This kind of telemetry can be essential to software developers, who receive data from a wide variety of endpoints that cannot possibly all be tested in-house, as well as data on the popularity of certain features and whether they should be given priority or considered for removal. Because software telemetry can easily be used to profile users, privacy is a concern, and telemetry in user software is therefore often a user choice, commonly presented as an opt-out feature (requiring explicit user action to disable it) or as a choice offered during the software installation process.
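A minimal Python sketch of this kind of in-application telemetry: usage counters and timings are accumulated in-process and serialized for reporting only while telemetry is enabled (disabling it models a user opt-out). The class, payload shape and reporting behaviour are illustrative assumptions, not any particular vendor's API.

import json, time
from collections import Counter

# Minimal opt-out usage telemetry: counters and timings accumulated in-process.
class Telemetry:
    def __init__(self, enabled=True):          # enabled=False models a user opt-out
        self.enabled = enabled
        self.counters = Counter()
        self.timings_ms = {}

    def count(self, feature):
        if self.enabled:
            self.counters[feature] += 1

    def time(self, name, func, *args):
        start = time.perf_counter()
        result = func(*args)
        if self.enabled:
            self.timings_ms[name] = (time.perf_counter() - start) * 1000
        return result

    def flush(self):
        """Serialize the batch; a real client would send this to a collection endpoint."""
        return json.dumps({"counters": dict(self.counters),
                           "timings_ms": self.timings_ms})

t = Telemetry(enabled=True)
t.count("file_open")
t.count("file_open")
t.time("startup", sum, range(1_000_000))
print(t.flush())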
International standards
As in other telecommunications fields, international standards exist for telemetry equipment and software. International standards producing bodies include Consultative Committee for Space Data Systems (CCSDS) for space agencies, Inter-Range Instrumentation Group (IRIG) for missile ranges, and Telemetering Standards Coordination Committee (TSCC), an organisation of the International Foundation for Telemetering.
See also
Data collection satellite
Instrumentation
Machine to Machine (M2M)
MQ Telemetry Transport (MQTT)
Portable telemetry
Reconnaissance satellite, tapping of communications routing or switching centers (e.g., Echelon)
Remote monitoring and control
Remote sensing
Remote Terminal Unit (RTU)
SBMV Protocol
SCADA
Telecommand
Telematics
Wireless sensor network
References
External links
International Foundation for Telemetering
IRIG 106 — Digital telemetry standard
The European Society of Telemetering
Telecommunications
Measurement
Spaceflight technology | Telemetry | [
"Physics",
"Mathematics",
"Technology"
] | 4,609 | [
"Information and communications technology",
"Physical quantities",
"Quantity",
"Telecommunications",
"Measurement",
"Size"
] |
46,262 | https://en.wikipedia.org/wiki/Nightmare | A nightmare, also known as a bad dream, is an unpleasant dream that can cause a strong emotional response from the mind, typically fear but also despair, anxiety, disgust or sadness. The dream may contain situations of discomfort, psychological or physical terror, or panic. After a nightmare, a person will often awaken in a state of distress and may be unable to return to sleep for a short period of time. Recurrent nightmares may require medical help, as they can interfere with sleeping patterns and cause insomnia.
Nightmares can have physical causes such as sleeping in an uncomfortable position or having a fever, or psychological causes such as stress or anxiety. Eating before going to sleep, which triggers an increase in the body's metabolism and brain activity, can be a potential stimulus for nightmares.
The prevalence of nightmares in children (5–12 years old) is between 20 and 30%, and for adults between 8 and 30%. In common language, the meaning of nightmare has extended as a metaphor to many bad things, such as a bad situation or a scary monster or person.
Etymology
The word nightmare is derived from the Old English , a mythological demon or goblin who torments others with frightening dreams. The term has no connection with the Modern English word for a female horse. The word nightmare is cognate with the Dutch term and German (dated).
History and folklore
The sorcerous demons of Iranian mythology known as Divs are likewise associated with the ability to afflict their victims with nightmares.
The mare of Germanic and Slavic folklore was thought to ride on people's chests while they slept, causing nightmares.
Signs and symptoms
Those with nightmares experience abnormal sleep architecture. The impact of having a nightmare during the night has been found to be very similar to that of insomnia. This is thought to be caused by frequent nocturnal awakenings and fear of falling asleep. When awoken from REM sleep by a nightmare, the dreamer can usually recall the nightmare in detail. They may also awaken in a heightened state of distress, with an elevated heart rate or increased perspiration. Nightmare disorder symptoms include repeated awakenings from the major sleep period or naps with detailed recall of extended and extremely frightening dreams, usually involving threats to survival, security, or self-esteem. The awakenings generally occur during the second half of the sleep period.
Classification
According to the International Classification of Sleep Disorders-Third Edition (ICSD-3), the nightmare disorder, together with REM sleep behaviour disorder (RBD) and recurrent isolated sleep paralysis, forms the REM-related parasomnias subcategory of the Parasomnias cluster. Nightmares may be idiopathic, without any signs of psychopathology, or associated with disorders like stress, anxiety, substance abuse, psychiatric illness or PTSD (>80% of PTSD patients report nightmares). As regards dream content, nightmares usually involve negative emotions like sadness, fear or rage. According to clinical studies, the content can include being chased, injury or death of others, falling, natural disasters or accidents. Typical dreams or recurrent dreams may also have some of these topics.
Cause
Scientific research shows that nightmares may have many causes. In a study focusing on children, researchers were able to conclude that nightmares directly correlate with the stress in children's lives. Children who experienced the death of a family member or a close friend or know someone with a chronic illness have more frequent nightmares than those who are only faced with stress from school or stress from social aspects of daily life.
One study researching the causes of nightmares focused on patients with sleep apnea, and was conducted to determine whether nightmares may be caused by sleep apnea, that is, by being unable to breathe. In the nineteenth century, authors believed that nightmares were caused by a lack of oxygen, so it was assumed that those with sleep apnea had more frequent nightmares than those without it. The results actually showed that healthy people have more nightmares than sleep apnea patients.
Another study supports the hypothesis that breathing difficulty is linked to nightmares. In this study, 48 patients (aged 20–85 years) with obstructive airways disease (OAD), including 21 with and 27 without asthma, were compared with 149 sex- and age-matched controls without respiratory disease. OAD subjects with asthma reported approximately three times as many nightmares as controls or OAD subjects without asthma. On this view, the evolutionary purpose of nightmares could be to serve as a mechanism to awaken a person who is in danger.
Lucid-dreaming advocate Stephen LaBerge has outlined a possible reason for how dreams are formulated and why nightmares occur. To LaBerge, a dream starts with an individual thought or scene, such as walking down a dimly lit street. Since dreams are not predetermined, the brain responds to the situation by either thinking a good thought or a bad thought, and the dream framework follows from there. If bad thoughts in a dream are more prominent than good thoughts, the dream may proceed to be a nightmare.
There is a view, possibly featured in the story A Christmas Carol, that eating cheese before sleep can cause nightmares, but there is little scientific evidence for this. A single, biased study conducted by the British Cheese Board in 2005 argued that consuming cheese could trigger more vivid dreams, but this study was not backed up with sufficient research, and contradicts existing studies which found that consuming dairy products is associated with better overall sleep quality.
Severe nightmares are also likely to occur when a person has a fever; these nightmares are often referred to as fever dreams.
Recent research has shown that frequent nightmares may precede the development of neurodegenerative diseases, such as Parkinson's disease and dementia.
Treatment
Sigmund Freud and Carl Jung seemed to have shared a belief that people frequently distressed by nightmares could be re-experiencing some stressful event from the past. Both perspectives on dreams suggest that therapy can provide relief from the dilemma of the nightmarish experience.
Halliday (1987) grouped treatment techniques into four classes. Direct nightmare interventions that combine compatible techniques from one or more of these classes may enhance overall treatment effectiveness:
Analytic and cathartic techniques
Storyline alteration procedures
Face-and-conquer approaches
Desensitization and related behavioral techniques
Post-traumatic stress disorder
Recurring post-traumatic stress disorder (PTSD) nightmares in which traumas are re-experienced respond well to a technique called imagery rehearsal. This involves dreamers coming up with alternative, mastery outcomes to the nightmares, mentally rehearsing those outcomes while awake and then reminding themselves at bedtime that they wish these alternative outcomes should the nightmares recur. Research has found that this technique not only reduces the occurrence of nightmares and insomnia but also improves other daytime PTSD symptoms. The most common variations of imagery rehearsal therapy (IRT) "relate to the number of sessions, duration of treatment, and the degree to which exposure therapy is included in the protocol".
Medication
Prazosin (an alpha-1 blocker) appears useful in decreasing the number of nightmares and the distress caused by them in people with PTSD.
Risperidone (an atypical antipsychotic) at a dosage of 2 mg per day has been shown in a case report to lead to the remission of nightmares on the first night.
Trazodone (an antidepressant) has been shown in a case report to treat nightmares in depressed patients.
Trials have included hydrocortisone, gabapentin, paroxetine, tetrahydrocannabinol, eszopiclone, sodium oxybate, and carvedilol.
See also
Bogeyman
False awakening
Horror and terror
Incubus
Mare (folklore)
Night terror
Nightmare disorder
Nocnitsa
Sleep disorder
Sleep paralysis
Succubus
A Nightmare on Elm Street, 1984 film
References
Further reading
External links
Night-Mares: Demons that Cause Nightmares
Dream
Fear
Sleep disorders | Nightmare | [
"Biology"
] | 1,602 | [
"Dream",
"Behavior",
"Sleep",
"Sleep disorders"
] |
46,310 | https://en.wikipedia.org/wiki/Lobster | Lobsters are malacostracans of the family Nephropidae or its synonym Homaridae. They have long bodies with muscular tails and live in crevices or burrows on the sea floor. Three of their five pairs of legs have claws, including the first pair, which are usually much larger than the others. Highly prized as seafood, lobsters are economically important and are often one of the most profitable commodities in the coastal areas they populate.
Commercially important species include two species of Homarus from the northern Atlantic Ocean and scampi (which look more like a shrimp, or a "mini lobster")—the Northern Hemisphere genus Nephrops and the Southern Hemisphere genus Metanephrops.
Distinction
Although several other groups of crustaceans have the word "lobster" in their names, the unqualified term "lobster" generally refers to the clawed lobsters of the family Nephropidae. Clawed lobsters are not closely related to spiny lobsters or slipper lobsters, which have no claws (chelae), or to squat lobsters. The closest living relatives of clawed lobsters are the reef lobsters and the three families of freshwater crayfish.
Description
Body
Lobsters are invertebrates with a hard protective exoskeleton. Like most arthropods, lobsters must shed their exoskeleton to grow, which leaves them vulnerable. During the shedding process, several species change color. Lobsters have eight walking legs; the front three pairs bear claws, the first of which are larger than the others. The front pincers are also biologically considered legs, so lobsters belong to the order Decapoda ("ten-footed"). Although lobsters are largely bilaterally symmetrical like most other arthropods, some genera possess unequal, specialized claws.
Lobster anatomy includes two main body parts: the cephalothorax and the abdomen. The cephalothorax fuses the head and the thorax, both of which are covered by a chitinous carapace. The lobster's head bears antennae, antennules, mandibles, and the first and second maxillae. The head also bears the (usually stalked) compound eyes. Because lobsters live in murky environments at the bottom of the ocean, they mostly use their antennae as sensors. The lobster eye has a reflective structure above a convex retina. In contrast, most complex eyes use refractive ray concentrators (lenses) and a concave retina. The lobster's thorax is composed of maxillipeds, appendages that function primarily as mouthparts, and pereiopods, appendages that serve for walking and for gathering food. The abdomen includes pleopods (also known as swimmerets), used for swimming, as well as the tail fan, composed of uropods and the telson.
Lobsters, like snails and spiders, have blue blood due to the presence of hemocyanin, which contains copper. In contrast, vertebrates and many other animals have red blood from iron-rich hemoglobin. Lobsters possess a green hepatopancreas, called the tomalley by chefs, which functions as the animal's liver and pancreas.
Lobsters of the family Nephropidae are similar in overall form to several other related groups. They differ from freshwater crayfish in lacking the joint between the last two segments of the thorax, and they differ from the reef lobsters of the family Enoplometopidae in having full claws on the first three pairs of legs, rather than just one. The distinctions from fossil families such as the Chilenophoberidae are based on the pattern of grooves on the carapace.
Analysis of the neural gene complement revealed extraordinary development of the chemosensory machinery, including a profound diversification of ligand-gated ion channels and secretory molecules.
Coloring
Typically, lobsters are dark colored, either bluish-green or greenish-brown, to blend in with the ocean floor, but they can be found in many colors. Lobsters with atypical coloring are extremely rare, accounting for only a few of the millions caught every year, and due to their rarity, they usually are not eaten, instead being released back into the wild or donated to aquariums. Often, in cases of atypical coloring, there is a genetic factor, such as albinism or hermaphroditism. Special coloring does not appear to affect the lobster's taste once cooked; except for albinos, all lobsters possess astaxanthin, which is responsible for the bright red color lobsters turn after being cooked.
Longevity
Lobsters live up to an estimated 45 to 50 years in the wild, although determining age is difficult: it is typically estimated from size and other variables. Newer techniques may lead to more accurate age estimates.
Research suggests that lobsters may not slow down, weaken, or lose fertility with age and that older lobsters may be more fertile than younger lobsters. This longevity may be due to telomerase, an enzyme that repairs long repetitive sections of DNA sequences at the ends of chromosomes, referred to as telomeres. Telomerase is expressed by most vertebrates during embryonic stages but is generally absent from adult stages of life. However, unlike most vertebrates, lobsters express telomerase as adults through most tissue, which has been suggested to be related to their longevity. Telomerase is especially present in green spotted lobsters, whose markings are thought to be produced by the enzyme interacting with their shell pigmentation. Lobster longevity is limited by their size. Moulting requires metabolic energy, and the larger the lobster, the more energy is needed; 10 to 15% of lobsters die of exhaustion during moulting, while in older lobsters, moulting ceases and the exoskeleton degrades or collapses entirely, leading to death.
Like many decapod crustaceans, lobsters grow throughout life and can add new muscle cells at each moult. Lobster longevity allows them to reach impressive sizes. According to Guinness World Records, the largest lobster ever caught was in Nova Scotia, Canada, weighing .
Ecology
Lobsters live in all oceans, on rocky, sandy, or muddy bottoms from the shoreline to beyond the edge of the continental shelf, contingent largely on size and age. Smaller, younger lobsters are typically found in crevices or in burrows under rocks and do not typically migrate. Larger, older lobsters are more likely to be found in deeper seas, migrating back to shallow waters seasonally.
Lobsters are omnivores and typically eat live prey such as fish, mollusks, other crustaceans, worms, and some plant life. They scavenge if necessary and are known to resort to cannibalism in captivity. However, when lobster skin is found in lobster stomachs, this is not necessarily evidence of cannibalism because lobsters eat their shed skin after moulting. While cannibalism was thought to be nonexistent among wild lobster populations, it was observed in 2012 by researchers studying wild lobsters in Maine. These first known instances of lobster cannibalism in the wild are theorized to be attributed to a local population explosion among lobsters caused by the disappearance of many of the Maine lobsters' natural predators.
In general, lobsters are long and move by slowly walking on the sea floor. However, they swim backward quickly when they flee by curling and uncurling their abdomens. A speed of has been recorded. This is known as the caridoid escape reaction.
Symbiotic animals of the genus Symbion, the only known member of the phylum Cycliophora, live exclusively on lobster gills and mouthparts. Different species of Symbion have been found on the three commercially important lobsters of the North Atlantic Ocean: Nephrops norvegicus, Homarus gammarus, and Homarus americanus.
As food
Lobster is commonly served boiled or steamed in the shell. Diners crack the shell with lobster crackers and fish out the meat with lobster picks. The meat is often eaten with melted butter and lemon juice. Lobster is also used in soup, bisque, lobster rolls, cappon magro, and dishes such as lobster Newberg and lobster Thermidor.
Cooks boil or steam live lobsters. When a lobster is cooked, its shell's color changes from brown to orange because the heat from cooking breaks down a protein called crustacyanin, which suppresses the orange hue of the chemical astaxanthin, which is also found in the shell.
According to the United States Food and Drug Administration (FDA), the mean level of mercury in American lobster between 2005 and 2007 was 0.107 ppm.
History
Humans are claimed to have eaten lobster since early history. Large piles of lobster shells near areas populated by fishing communities attest to the crustacean's extreme popularity during this period. Evidence indicates that lobster was being consumed as a regular food product in fishing communities along the shores of Britain, South Africa, Australia, and Papua New Guinea years ago. Lobster became a significant source of nutrients among European coastal dwellers. Historians suggest lobster was an important secondary food source for most European coastal dwellers, and it was a primary food source for coastal communities in Britain during this time.
Lobster became a popular mid-range delicacy during the mid to late Roman period. The price of lobster could vary widely due to various factors, but evidence indicates that lobster was regularly transported inland over long distances to meet popular demand. A mosaic found in the ruins of Pompeii suggests that the spiny lobster was of considerable interest to the Roman population during the early imperial period.
Lobster was a popular food among the Moche people of Peru between 50 CE and 800 CE. Besides its use as food, lobster shells were also used to create a light pink dye, ornaments, and tools. A mass-produced lobster-shaped effigy vessel dated to this period attests to lobster's popularity at this time, though the purpose of this vessel has not been identified.
The Viking period saw an increase in lobster and other shellfish consumption among northern Europeans. This can be attributed to the overall increase in marine activity due to the development of better boats and the increasing cultural investment in building ships and training sailors. The consumption of marine life went up overall in this period, and the consumption of lobster went up in accordance with this general trend.
Unlike fish, however, lobster had to be cooked within two days of leaving salt water, limiting the availability of lobster for inland dwellers. Thus lobster, more than fish, became a food primarily available to the relatively well-off, at least among non-coastal dwellers.
Lobster is first mentioned in cookbooks during the medieval period. Le Viandier de Taillevent, a French recipe collection written around 1300, suggests that lobster (also called saltwater crayfish) be "Cooked in wine and water, or in the oven; eaten in vinegar." Le Viandier de Taillevent is considered to be one of the first "haute cuisine" cookbooks, advising on how to cook meals that would have been quite elaborate for the period and making usage of expensive and hard to obtain ingredients. Though the original edition, which includes the recipe for lobster, was published before the birth of French court cook Guillaume Tirel, Tirel later expanded and republished this recipe collection, suggesting that the recipes included in both editions were popular among the highest circles of French nobility, including King Philip VI. The inclusion of a lobster recipe in this cookbook, especially one which does not make use of other more expensive ingredients, attests to the popularity of lobster among the wealthy.
The French household guidebook Le Ménagier de Paris, published in 1393, includes no less than five recipes including lobster, which vary in elaboration. A guidebook intended to provide advice for women running upper-class households, Le Ménagier de Paris is similar to its predecessor in that it indicates the popularity of lobster as a food among the upper classes.
That lobster was first mentioned in cookbooks during the 1300s and only mentioned in two during this century should not be taken as an implication that lobster was not widely consumed before or during this time. Recipe collections were virtually non-existent before the 1300s, and only a handful exist from the medieval period.
During the early 1400s, lobster was still a popular dish among the upper classes. During this time, influential households used the variety and variation of species served at feasts to display wealth and prestige. Lobster was commonly found among these spreads, indicating that it continued to be held in high esteem among the wealthy. In one notable instance, the Bishop of Salisbury offered at least 42 kinds of crustaceans and fish at his feasts over nine months, including several varieties of lobster. However, lobster was not a food exclusively accessed by the wealthy. The general population living on the coasts made use of the various food sources provided by the ocean, and shellfish especially became a more popular source of nutrition. Among the general population, lobster was generally eaten boiled during the mid-15th century, but the influence of the cuisine of higher society can be seen in that it was now also regularly eaten cold with vinegar. The inland peasantry would still have generally been unfamiliar with lobster during this time.
Lobster continued to be eaten as a delicacy and a general staple food among coastal communities until the late 17th century. During this time, the influence of the Church and the government regulating and sometimes banning meat consumption during certain periods continued to encourage the popularity of seafood, especially shellfish, as a meat alternative among all classes. Throughout this period, lobster was eaten fresh, pickled, and salted. From the late 17th century onward, developments in fishing, transportation, and cooking technology allowed lobster to more easily make its way inland, and the variety of dishes involving lobster and cooking techniques used with the ingredient expanded. However, these developments coincided with a decrease in the lobster population, and lobster increasingly became a delicacy food, valued among the rich as a status symbol and less likely to be found in the diet of the general population.
The American lobster was not originally popular among European colonists in North America. This was partially due to the European inlander's association of lobster with barely edible salted seafood and partially due to a cultural opinion that seafood was a lesser alternative to meat that did not provide the taste or nutrients desired. It was also due to the extreme abundance of lobster at the time of the colonists' arrival, which contributed to a general perception of lobster as an undesirable peasant food. The American lobster did not achieve popularity until the mid-19th century when New Yorkers and Bostonians developed a taste for it, and commercial lobster fisheries only flourished after the development of the lobster smack, a custom-made boat with open holding wells on the deck to keep the lobsters alive during transport.
Before this time, lobster was considered a poverty food or a food for indentured servants or lower members of society in Maine, Massachusetts, and the Canadian Maritimes. Some servants specified in employment agreements that they would not eat lobster more than twice per week; however, there is limited evidence for this. Lobster was also commonly served in prisons, much to the displeasure of inmates. American lobster was initially deemed worthy only of being used as fertilizer or fish bait, and until well into the 20th century, it was not viewed as more than a low-priced canned staple food.
As a crustacean, lobster remains a taboo food in the dietary laws of Judaism and certain streams of Islam.
Grading
Caught lobsters are graded as new-shell, hard-shell, or old-shell. Because lobsters that have recently shed their shells are the most delicate, an inverse relationship exists between the price of American lobster and its flavor. New-shell lobsters have paper-thin shells and a worse meat-to-shell ratio, but the meat is very sweet. However, the lobsters are so delicate that even transport to Boston almost kills them, making the market for new-shell lobsters strictly local to the fishing towns where they are offloaded. Hard-shell lobsters with firm shells but less sweet meat can survive shipping to Boston, New York, and even Los Angeles, so they command a higher price than new-shell lobsters. Meanwhile, old-shell lobsters, which have not shed since the previous season and have a coarser flavor, can be air-shipped anywhere in the world and arrive alive, making them the most expensive.
Killing methods and animal welfare
Several methods are used for killing lobsters. The most common way of killing lobsters is by placing them live in boiling water, sometimes after being placed in a freezer for a period. Another method is to split the lobster or sever the body in half lengthwise. Lobsters may also be killed or immobilized immediately before boiling by a stab into the brain (pithing), in the belief that this will stop suffering. However, a lobster's brain operates from not one but several ganglia, and disabling only the frontal ganglion does not usually result in death. The boiling method is illegal in some places, such as in Italy, where offenders face fines up to €495. Lobsters can be killed by electrocution prior to cooking with a device called the CrustaStun. Since March 2018, lobsters in Switzerland need to be knocked out, or killed instantly, before they are boiled. They also receive other forms of protection while in transit.
Fishery and aquaculture
Lobsters are caught using baited one-way traps with a color-coded marker buoy to mark cages. Lobster is fished in water between , although some lobsters live at . Cages are of plastic-coated galvanized steel or wood. A lobster fisher may tend to as many as 2,000 traps.
Around the year 2000, owing to overfishing and high demand, lobster aquaculture expanded.
Species
The fossil record of clawed lobsters extends back at least to the Valanginian age of the Cretaceous (140 million years ago). This list contains all 54 extant species in the family Nephropidae:
Acanthacaris
Acanthacaris caeca A. Milne-Edwards, 1881
Acanthacaris tenuimana Bate, 1888
Dinochelus Ahyong, Chan & Bouchet, 2010
Dinochelus ausubeli Ahyong, Chan & Bouchet, 2010
Eunephrops Smith, 1885
Eunephrops bairdii Smith, 1885
Eunephrops cadenasi Chace, 1939
Eunephrops luckhursti Manning, 1997
Eunephrops manningi Holthuis, 1974
Homarinus Kornfield, Williams & Steneck, 1995
Homarinus capensis (Herbst, 1792) – Cape lobster
Homarus Weber, 1795
Homarus americanus H. Milne-Edwards, 1837 – American lobster
Homarus gammarus (Linnaeus, 1758) – European lobster
Metanephrops Jenkins, 1972
Metanephrops andamanicus (Wood-Mason, 1892) – Andaman lobster
Metanephrops arafurensis (De Man, 1905)
Metanephrops armatus Chan & Yu, 1991
Metanephrops australiensis (Bruce, 1966) – Australian scampi
Metanephrops binghami (Boone, 1927) – Caribbean lobster
Metanephrops boschmai (Holthuis, 1964) – Bight lobster
Metanephrops challengeri (Balss, 1914) – New Zealand scampi
Metanephrops formosanus Chan & Yu, 1987
Metanephrops japonicus (Tapparone-Canefri, 1873) – Japanese lobster
Metanephrops mozambicus Macpherson, 1990
Metanephrops neptunus (Bruce, 1965)
Metanephrops rubellus (Moreira, 1903)
Metanephrops sagamiensis (Parisi, 1917)
Metanephrops sibogae (De Man, 1916)
Metanephrops sinensis (Bruce, 1966) – China lobster
Metanephrops taiwanicus (Hu, 1983)
Metanephrops thomsoni (Bate, 1888)
Metanephrops velutinus Chan & Yu, 1991
Nephropides Manning, 1969
Nephropides caribaeus Manning, 1969
Nephrops Leach, 1814
Nephrops norvegicus (Linnaeus, 1758) – Norway lobster, Dublin Bay prawn, langoustine
Nephropsis Wood-Mason, 1872
Nephropsis acanthura Macpherson, 1990
Nephropsis aculeata Smith, 1881 – Florida lobsterette
Nephropsis agassizii A. Milne-Edwards, 1880
Nephropsis atlantica Norman, 1882
Nephropsis carpenteri Wood-Mason, 1885
Nephropsis ensirostris Alcock, 1901
Nephropsis holthuisii Macpherson, 1993
Nephropsis malhaensis Borradaile, 1910
Nephropsis neglecta Holthuis, 1974
Nephropsis occidentalis Faxon, 1893
Nephropsis rosea Bate, 1888
Nephropsis serrata Macpherson, 1993
Nephropsis stewarti Wood-Mason, 1872
Nephropsis suhmi Bate, 1888
Nephropsis sulcata Macpherson, 1990
Thaumastocheles Wood-Mason, 1874
Thaumastocheles dochmiodon Chan & Saint Laurent, 1999
Thaumastocheles japonicus Calman, 1913
Thaumastocheles zaleucus (Thomson, 1873)
Thaumastochelopsis Bruce, 1988
Thaumastochelopsis brucei Ahyong, Chu & Chan, 2007
Thaumastochelopsis wardi Bruce, 1988
Thymopides Burukovsky & Averin, 1977
Thymopides grobovi (Burukovsky & Averin, 1976)
Thymopides laurentae Segonzac & Macpherson, 2003
Thymops Holthuis, 1974
Thymops birsteini (Zarenkov & Semenov, 1972)
Thymopsis Holthuis, 1974
Thymopsis nilenta Holthuis, 1974
See also
Gérard de Nerval, a French writer who kept a lobster as a pet
Lobster War, an early-1960s diplomatic conflict between Brazil and France over spiny lobster fishing territories
Lobstering, an innate escape mechanism in marine and freshwater crustaceans
Notes
References
Further reading
External links
Atlantic Veterinary College Lobster Science Centre
Animal-based seafood
Articles containing video clips
Commercial crustaceans
Edible crustaceans
Negligibly senescent organisms
Seafood
Taxa named by James Dwight Dana
True lobsters
Extant Valanginian first appearances | Lobster | [
"Biology"
] | 4,716 | [
"Senescence",
"Negligibly senescent organisms",
"Organisms by adaptation"
] |
46,331 | https://en.wikipedia.org/wiki/Flatfish | A flatfish is a member of the ray-finned demersal fish superorder Pleuronectoidei, also called the Heterosomata. In many species, both eyes lie on one side of the head, one or the other migrating through or around the head during development. Some species face their left sides upward, some face their right sides upward, and others face either side upward. The most primitive members of the group, the threadfins, do not resemble the flatfish but are their closest relatives.
Many important food fish are in this order, including the flounders, soles, turbot, plaice, and halibut. Some flatfish can camouflage themselves on the ocean floor.
Taxonomy
Due to their highly distinctive morphology, flatfishes were previously treated as belonging to their own order, Pleuronectiformes. However, more recent taxonomic studies have found them to group within a diverse group of nektonic marine fishes known as the Carangiformes, which also includes jacks and billfish. Specifically, flatfish are most closely related to the threadfins, which are now also placed in the suborder Pleuronectoidei. Together, the group is most closely related to the archerfish and beachsalmons within Toxotoidei. Due to this, they are now treated as a suborder of the Carangiformes.
Over 800 described species are placed into 16 families. When they were treated as an order, the flatfishes were divided into two suborders, Psettodoidei and Pleuronectoidei, with more than 99% of the species diversity found within the Pleuronectoidei. The largest families are Soleidae, Bothidae and Cynoglossidae, with more than 150 species each. There also exist two monotypic families (Paralichthodidae and Oncopteridae). Some families are the result of relatively recent splits. For example, the Achiridae were classified as a subfamily of Soleidae in the past, and the Samaridae were considered a subfamily of the Pleuronectidae. The families Paralichthodidae, Poecilopsettidae, and Rhombosoleidae were also traditionally treated as subfamilies of Pleuronectidae, but are now recognised as families in their own right. The Paralichthyidae has long been indicated to be paraphyletic, with the formal description of Cyclopsettidae in 2019 resulting in the split of this family as well.
The taxonomy of some groups is in need of review. The last monograph covering the entire order was John Roxborough Norman's Monograph of the Flatfishes, published in 1934. In particular, Tephrinectes sinensis may represent a family-level lineage and requires further evaluation. New species are described with some regularity, and undescribed species likely remain.
Hybrids
Hybrids are well known in flatfishes. The Pleuronectidae have the largest number of reported hybrids of marine fishes. Two of the most famous intergeneric hybrids are between the European plaice (Pleuronectes platessa) and European flounder (Platichthys flesus) in the Baltic Sea, and between the English sole (Parophrys vetulus) and starry flounder (Platichthys stellatus) in Puget Sound. The offspring of the latter species pair is popularly known as the hybrid sole and was initially believed to be a valid species in its own right.
Distribution
Flatfishes are found in oceans worldwide, ranging from the Arctic, through the tropics, to Antarctica. Species diversity is centered in the Indo-West Pacific and declines following both latitudinal and longitudinal gradients away from the Indo-West Pacific. Most species are found in depths between 0 and , but a few have been recorded from depths in excess of . None have been confirmed from the abyssal or hadal zones. An observation of a flatfish from the Bathyscaphe Trieste at the bottom of the Mariana Trench at a depth of almost has been questioned by fish experts, and recent authorities do not recognize it as valid. Among the deepwater species, Symphurus thermophilus lives congregating around "ponds" of sulphur at hydrothermal vents on the seafloor. No other flatfish is known from hydrothermal vents. Many species will enter brackish or fresh water, and a smaller number of soles (families Achiridae and Soleidae) and tonguefish (Cynoglossidae) are entirely restricted to fresh water.
Characteristics
The most obvious characteristic of the flatfish is its asymmetry, with both eyes lying on the same side of the head in the adult fish. In some families, the eyes are usually on the right side of the body (dextral or right-eyed flatfish), and in others, they are usually on the left (sinistral or left-eyed flatfish). The primitive spiny turbots include equal numbers of right- and left-sided individuals, and are generally less asymmetrical than the other families. Other distinguishing features of the order are the presence of protrusible eyes, another adaptation to living on the seabed (benthos), and the extension of the dorsal fin onto the head.
The most basal members of the group, the threadfins, do not closely resemble the flatfishes.
The surface of the fish facing away from the sea floor is pigmented, often serving to camouflage the fish, but sometimes with striking coloured patterns. Some flatfishes are also able to change their pigmentation to match the background, in a manner similar to some cephalopods. The side of the body without the eyes, facing the seabed, is usually colourless or very pale.
In general, flatfishes rely on their camouflage for avoiding predators, but some have aposematic traits such as conspicuous eyespots (e.g., Microchirus ocellatus) and several small tropical species (at least Aseraggodes, Pardachirus and Zebrias) are poisonous. Juveniles of Soleichthys maculosus mimic toxic flatworms of the genus Pseudobiceros in both colours and swimming mode. Conversely, a few octopus species have been reported to mimic flatfishes in colours, shape and swimming mode.
The flounders and spiny turbots eat smaller fish, and have well-developed teeth. They sometimes seek prey in the midwater, away from the bottom, and show fewer extreme adaptations than other families. The soles, by contrast, are almost exclusively bottom-dwellers, and feed on invertebrates. They show a more extreme asymmetry, and may lack teeth on one side of the jaw.
Flatfishes range in size from Tarphops oligolepis, measuring about in length, and weighing , to the Atlantic halibut, at and .
Species and species groups
Brill
Dab
Sanddab
Flounder
Halibut
Megrim
Plaice
Sole
Tonguefish
Turbot
Reproduction
Flatfishes lay eggs that hatch into larvae resembling typical, symmetrical, fish. These are initially elongated, but quickly develop into a more rounded form. The larvae typically have protective spines on the head, over the gills, and in the pelvic and pectoral fins. They also possess a swim bladder, and do not dwell on the bottom, instead dispersing from their hatching grounds as plankton.
The length of the planktonic stage varies between different types of flatfishes, but eventually they begin to metamorphose into the adult form. One of the eyes migrates across the top of the head and onto the other side of the body, leaving the fish blind on one side. The larva also loses its swim bladder and spines, and sinks to the bottom, laying its blind side on the underlying surface.
Origin and evolution
Scientists have been proposing since the 1910s that flatfishes evolved from percoid ancestors. There has been some disagreement whether they are a monophyletic group. Some palaeontologists think that some percomorph groups other than flatfishes were "experimenting" with head asymmetry during the Eocene, and certain molecular studies conclude that the primitive family of Psettodidae evolved their flat bodies and asymmetrical head independently of other flatfish groups. Many scientists, however, argue that pleuronectiformes are monophyletic.
The fossil record indicates that flatfishes might have been present before the Eocene, based on fossil otoliths resembling those of modern pleuronectiforms dating back to the Thanetian and Ypresian stages (57-53 million years ago).
Flatfishes have been cited as dramatic examples of evolutionary adaptation. Richard Dawkins, in The Blind Watchmaker, explains the flatfishes' evolutionary history thus:
...bony fish as a rule have a marked tendency to be flattened in a vertical direction.... It was natural, therefore, that when the ancestors of [flatfish] took to the sea bottom, they should have lain on one side.... But this raised the problem that one eye was always looking down into the sand and was effectively useless. In evolution this problem was solved by the lower eye 'moving' round to the upper side.
The origin of the unusual morphology of flatfishes was enigmatic up to the 2000s, and early researchers suggested that it came about as a result of saltation rather than gradual evolution through natural selection, because a partially migrated eye was considered to have been maladaptive. This started to change in 2008 with a study on the two fossil genera Amphistium and Heteronectes, dated to about 50 million years ago. These genera retain primitive features not seen in modern types of flatfishes. In addition, their heads are less asymmetric than those of modern flatfishes, retaining one eye on each side of their heads, although the eye on one side is closer to the top of the head than on the other. The more recently described fossil genera Quasinectes and Anorevus have been proposed to show similar morphologies and have also been classified as "stem pleuronectiforms". Such findings led Friedman to conclude that the evolution of flatfish morphology "happened gradually, in a way consistent with evolution via natural selection—not suddenly, as researchers once had little choice but to believe."
To explain the survival advantage of a partially migrated eye, it has been proposed that primitive flatfishes like Amphistium rested with the head propped up above the seafloor (a behaviour sometimes observed in modern flatfishes), enabling them to use their partially migrated eye to see things closer to the seafloor.
While known basal genera like Amphistium and Heteronectes support a gradual acquisition of the flatfish morphology, they were probably not direct ancestors to living pleuronectiforms, as fossil evidence indicates that most flatfish lineages living today were already present in the Eocene and contemporaneous with them. It has been suggested that the more primitive forms were eventually outcompeted.
As food
Flatfish are considered whitefish because their oil is concentrated in the liver, leaving the flesh lean. The lean flesh makes for a unique flavor that differs from species to species. Methods of cooking include grilling, pan-frying, baking and deep-frying.
Timeline of genera
See also
Sinistral and dextral
References
Further reading
Gibson, Robin N (Ed) (2008) Flatfishes: biology and exploitation. Wiley.
Munroe, Thomas A (2005) "Distributions and biogeography." Flatfishes: Biology and Exploitation: 42–67.
External links
Information on Canadian fisheries of plaice
Commercial fish
Articles which contain graphical timelines
Extant Paleocene first appearances
Asymmetry | Flatfish | [
"Physics"
] | 2,455 | [
"Symmetry",
"Asymmetry"
] |
46,371 | https://en.wikipedia.org/wiki/Phalangeriformes | Phalangeriformes is a paraphyletic suborder of about 70 species of small to medium-sized arboreal marsupials native to Australia, New Guinea, and Sulawesi. The species are commonly known as possums, opossums, gliders, and cuscus. The common name "(o)possum" for various Phalangeriformes species derives from the creatures' resemblance to the opossums of the Americas (the term comes from Powhatan language aposoum "white animal", from Proto-Algonquian *wa·p-aʔɬemwa "white dog"). However, although opossums are also marsupials, Australasian possums are more closely related to other Australasian marsupials such as kangaroos.
Phalangeriformes are quadrupedal diprotodont marsupials with long tails. The smallest species, indeed the smallest diprotodont marsupial, is the Tasmanian pygmy possum, with an adult head-body length of and a weight of . The largest are the two species of bear cuscus, which may exceed . Phalangeriformes species are typically nocturnal and at least partially arboreal. They inhabit most vegetated habitats, and several species have adjusted well to urban settings. Diets range from generalist herbivores or omnivores (the common brushtail possum) to specialist browsers of eucalyptus (greater glider), insectivores (mountain pygmy possum) and nectar-feeders (honey possum).
Classification
About two-thirds of Australian marsupials belong to the order Diprotodontia, which is split into three suborders, namely the Vombatiformes (wombats and the koala, four species in total); the large and diverse Phalangeriformes (the possums and gliders) and Macropodiformes (kangaroos, potoroos, wallabies and the musky rat-kangaroo). Note: this classification is based on Ruedas & Morales 2005. However, Phalangeriformes has been recovered as paraphyletic with respect to Macropodiformes, rendering the latter a subset of the former if Phalangeriformes are to be considered a natural group.
Suborder Phalangeriformes: possums, gliders and allies
Superfamily Phalangeroidea
Family †Ektopodontidae:
Genus †Ektopodon
†Ektopodon serratus
†Ektopodon stirtoni
†Ektopodon ulta
Family Burramyidae: (pygmy possums)
Genus Burramys
Mountain pygmy possum, B. parvus
Genus Cercartetus
Long-tailed pygmy possum, C. caudatus
Southwestern pygmy possum, C. concinnus
Tasmanian pygmy possum, C. lepidus
Eastern pygmy possum, C. nanus
Family Phalangeridae: (brushtail possums and cuscuses)
Subfamily Ailuropinae
Genus Ailurops
Talaud bear cuscus, A. melanotis
Sulawesi bear cuscus, A. ursinus
Genus Strigocuscus
Sulawesi dwarf cuscus, S. celebensis
Banggai cuscus, S. pelegensis
Subfamily Phalangerinae
Tribe Phalangerini
Genus Phalanger
Gebe cuscus, P. alexandrae
Mountain cuscus, P. carmelitae
Ground cuscus, P. gymnotis
Eastern common cuscus, P. intercastellanus
Woodlark cuscus, P. lullulae
Blue-eyed cuscus, P. matabiru
Telefomin cuscus, P. matanim
Southern common cuscus, P. mimicus
Northern common cuscus, P. orientalis
Ornate cuscus, P. ornatus
Rothschild's cuscus, P. rothschildi
Silky cuscus, P. sericeus
Stein's cuscus, P. vestitus
Genus Spilocuscus
Admiralty Island cuscus, S. kraemeri
Common spotted cuscus, S. maculatus
Waigeou cuscus, S. papuensis
Black-spotted cuscus, S. rufoniger
Blue-eyed spotted cuscus, S. wilsoni
Tribe Trichosurini
Genus Trichosurus
Northern brushtail possum, T. arnhemensis
Short-eared possum, T. caninus
Mountain brushtail possum, T. cunninghami
Coppery brushtail possum, T. johnstonii
Common brushtail possum, T. vulpecula
Genus Wyulda
Scaly-tailed possum, W. squamicaudata
Superfamily Petauroidea
Family Pseudocheiridae: (ring-tailed possums and allies)
Subfamily Hemibelideinae
Genus Hemibelideus
Lemur-like ringtail possum, H. lemuroides
Genus Petauroides
Central greater glider, P. armillatus
Northern greater glider, P. minor
Southern greater glider, P. volans
Subfamily Pseudocheirinae
Genus Petropseudes
Rock-haunting ringtail possum, P. dahli
Genus Pseudocheirus
Common ringtail possum, P. peregrinus
Genus Pseudochirulus
Lowland ringtail possum, P. canescens
Weyland ringtail possum, P. caroli
Cinereus ringtail possum, P. cinereus
Painted ringtail possum, P. forbesi
Herbert River ringtail possum, P. herbertensis
Masked ringtail possum, P. larvatus
Pygmy ringtail possum, P. mayeri
Vogelkop ringtail possum, P. schlegeli
Subfamily Pseudochiropsinae
Genus Pseudochirops
D'Albertis' ringtail possum, P. albertisii
Green ringtail possum, P. archeri
Plush-coated ringtail possum, P. corinnae
Reclusive ringtail possum, P. coronatus
Coppery ringtail possum, P. cupreus
Family Petauridae: (striped possum, Leadbeater's possum, yellow-bellied glider, sugar glider, mahogany glider, squirrel glider)
Genus Dactylopsila
Great-tailed triok, D. megalura
Long-fingered triok, D. palpator
Tate's triok, D. tatei
Striped possum, D. trivirgata
Genus Gymnobelideus
Leadbeater's possum, G. leadbeateri
Genus Petaurus
Northern glider, P. abidi
Savanna glider, P. ariel
Yellow-bellied glider, P. australis
Biak glider, P. biacensis
Sugar glider, P. breviceps
Mahogany glider, P. gracilis
Squirrel glider, P. norfolcensis
Krefft's glider, P. notatus
Family Tarsipedidae: (honey possum)
Genus Tarsipes
Honey possum or noolbenger, T. rostratus
Family Acrobatidae: (feathertail glider and feather-tailed possum)
Genus Acrobates
Feathertail glider, A. pygmaeus
Genus Distoechurus
Feather-tailed possum, D. pennatus
See also
Fauna of Australia
References
Further reading
Possums and Gliders – Australia Zoo
Urban Possums – ABC (Science), Australian Broadcasting Corporation
Possums or Opossums? on Museum of New Zealand Te Papa Tongarewa
Marsupials of Oceania
Extant Oligocene first appearances
Diprotodonts
Paraphyletic groups | Phalangeriformes | [
"Biology"
] | 1,652 | [
"Phylogenetics",
"Paraphyletic groups"
] |
46,374 | https://en.wikipedia.org/wiki/Diatom | A diatom (Neo-Latin diatoma) is any member of a large group comprising several genera of algae, specifically microalgae, found in the oceans, waterways and soils of the world. Living diatoms make up a significant portion of the Earth's biomass: they generate about 20 to 50 percent of the oxygen produced on the planet each year, take in over 6.7 billion tonnes of silicon each year from the waters in which they live, and constitute nearly half of the organic material found in the oceans. The shells of dead diatoms can reach as much as a half-mile (800 m) deep on the ocean floor, and the entire Amazon basin is fertilized annually by 27 million tons of diatom shell dust transported by transatlantic winds from the African Sahara, much of it from the Bodélé Depression, which was once made up of a system of fresh-water lakes.
Diatoms are unicellular organisms: they occur either as solitary cells or in colonies, which can take the shape of ribbons, fans, zigzags, or stars. Individual cells range in size from 2 to 2000 micrometers. In the presence of adequate nutrients and sunlight, an assemblage of living diatoms doubles approximately every 24 hours by asexual multiple fission; the maximum life span of individual cells is about six days. Diatoms have two distinct shapes: a few (centric diatoms) are radially symmetric, while most (pennate diatoms) are broadly bilaterally symmetric.
The unique feature of diatoms is that they are surrounded by a cell wall made of silica (hydrated silicon dioxide), called a frustule. These frustules produce structural coloration, prompting them to be described as "jewels of the sea" and "living opals".
Movement in diatoms primarily occurs passively as a result of both ocean currents and wind-induced water turbulence; however, male gametes of centric diatoms have flagella, permitting active movement to seek female gametes. Similar to plants, diatoms convert light energy to chemical energy by photosynthesis, but their chloroplasts were acquired in different ways.
Unusually for autotrophic organisms, diatoms possess a urea cycle, a feature that they share with animals, although this cycle is used to different metabolic ends in diatoms. The family Rhopalodiaceae also possess a cyanobacterial endosymbiont called a spheroid body. This endosymbiont has lost its photosynthetic properties, but has kept its ability to perform nitrogen fixation, allowing the diatom to fix atmospheric nitrogen. Other diatoms in symbiosis with nitrogen-fixing cyanobacteria are among the genera Hemiaulus, Rhizosolenia and Chaetoceros.
Dinotoms are diatoms that have become endosymbionts inside dinoflagellates. Research on the dinoflagellates Durinskia baltica and Glenodinium foliaceum has shown that the endosymbiont event happened so recently, evolutionarily speaking, that their organelles and genome are still intact with minimal to no gene loss. The main difference between these and free living diatoms is that they have lost their cell wall of silica, making them the only known shell-less diatoms.
The study of diatoms is a branch of phycology. Diatoms are classified as eukaryotes, organisms with a nuclear envelope-bound cell nucleus, that separates them from the prokaryotes archaea and bacteria. Diatoms are a type of plankton called phytoplankton, the most common of the plankton types. Diatoms also grow attached to benthic substrates, floating debris, and on macrophytes. They comprise an integral component of the periphyton community. Another classification divides plankton into eight types based on size: in this scheme, diatoms are classed as microalgae. Several systems for classifying the individual diatom species exist.
Fossil evidence suggests that diatoms originated during or before the early Jurassic period, which was about 150 to 200 million years ago. The oldest fossil evidence for diatoms is a specimen of extant genus Hemiaulus in Late Jurassic aged amber from Thailand.
Diatoms are used to monitor past and present environmental conditions, and are commonly used in studies of water quality. Diatomaceous earth (diatomite) is a collection of diatom shells found in the Earth's crust. They are soft, silica-containing sedimentary rocks which are easily crumbled into a fine powder and typically have a particle size of 10 to 200 μm. Diatomaceous earth is used for a variety of purposes including for water filtration, as a mild abrasive, in cat litter, and as a dynamite stabilizer.
Overview
Diatoms are protists that form massive annual spring and fall blooms in aquatic environments and are estimated to be responsible for about half of photosynthesis in the global oceans. This predictable annual bloom dynamic fuels higher trophic levels and initiates delivery of carbon into the deep ocean biome. Diatoms have complex life history strategies that are presumed to have contributed to their rapid genetic diversification into ~200,000 species that are distributed between the two major diatom groups: centrics and pennates.
Morphology
Diatoms are generally 20 to 200 micrometers in size, with a few larger species. Their yellowish-brown chloroplasts, the site of photosynthesis, are typical of heterokonts, having four cell membranes and containing pigments such as the carotenoid fucoxanthin. Individuals usually lack flagella, but they are present in male gametes of the centric diatoms and have the usual heterokont structure, including the hairs (mastigonemes) characteristic in other groups.
Diatoms are often referred to as "jewels of the sea" or "living opals" due to their optical properties. The biological function of this structural coloration is not clear, but it is speculated that it may be related to communication, camouflage, thermal exchange and/or UV protection.
Diatoms build intricate hard but porous cell walls called frustules composed primarily of silica. This siliceous wall can be highly patterned with a variety of pores, ribs, minute spines, marginal ridges and elevations; all of which can be used to delineate genera and species.
The cell itself consists of two halves, each containing an essentially flat plate, or valve, and marginal connecting, or girdle band. One half, the hypotheca, is slightly smaller than the other half, the epitheca. Diatom morphology varies. Although the shape of the cell is typically circular, some cells may be triangular, square, or elliptical. Their distinguishing feature is a hard mineral shell or frustule composed of opal (hydrated, polymerized silicic acid).
Diatoms are divided into two groups that are distinguished by the shape of the frustule: the centric diatoms and the pennate diatoms.
Pennate diatoms are bilaterally symmetric. Each one of their valves have openings that are slits along the raphes and their shells are typically elongated parallel to these raphes. They generate cell movement through cytoplasm that streams along the raphes, always moving along solid surfaces.
Centric diatoms are radially symmetric. They are composed of upper and lower valves – epitheca and hypotheca – each consisting of a valve and a girdle band that can easily slide underneath each other and expand to increase cell content over the diatoms progression. The cytoplasm of the centric diatom is located along the inner surface of the shell and provides a hollow lining around the large vacuole located in the center of the cell. This large, central vacuole is filled by a fluid known as "cell sap" which is similar to seawater but varies with specific ion content. The cytoplasmic layer is home to several organelles, like the chloroplasts and mitochondria. Before the centric diatom begins to expand, its nucleus is at the center of one of the valves and begins to move towards the center of the cytoplasmic layer before division is complete. Centric diatoms have a variety of shapes and sizes, depending on from which axis the shell extends, and if spines are present.
Silicification
Diatom cells are contained within a unique silica cell wall known as a frustule made up of two valves called thecae, that typically overlap one another. The biogenic silica composing the cell wall is synthesised intracellularly by the polymerisation of silicic acid monomers. This material is then extruded to the cell exterior and added to the wall. In most species, when a diatom divides to produce two daughter cells, each cell keeps one of the two-halves and grows a smaller half within it. As a result, after each division cycle, the average size of diatom cells in the population gets smaller. Once such cells reach a certain minimum size, rather than simply divide, they reverse this decline by forming an auxospore, usually through meiosis and sexual reproduction, but exceptions exist. The auxospore expands in size to give rise to a much larger cell, which then returns to size-diminishing divisions.
The exact mechanism of transferring silica absorbed by the diatom to the cell wall is unknown. Much of the sequencing of diatom genes comes from the search for the mechanism of silica uptake and deposition in nano-scale patterns in the frustule. The greatest success in this area has come from two species: Thalassiosira pseudonana, which has become the model species, as the whole genome was sequenced and methods for genetic control were established, and Cylindrotheca fusiformis, in which the important silica deposition proteins silaffins were first discovered. Silaffins, sets of polycationic peptides, were found in C. fusiformis cell walls and can generate intricate silica structures. These structures demonstrated pores of sizes characteristic of diatom patterns. When T. pseudonana underwent genome analysis, it was found to encode a urea cycle, a higher number of polyamine-related genes than most genomes, and three distinct silica transport genes. In a phylogenetic study of silica transport genes from eight diverse groups of diatoms, the silica transport genes were found to group largely by species. This study also found structural differences between the silica transporters of pennate (bilateral symmetry) and centric (radial symmetry) diatoms. The sequences compared in this study were used to create a diverse background in order to identify residues that differentiate function in the silica deposition process. Additionally, the same study found that a number of the regions were conserved within species, likely representing the basic structure of silica transport.
These silica transport proteins are unique to diatoms, with no homologs found in other species, such as sponges or rice. The divergence of these silica transport genes is also indicative of the structure of the protein evolving from two repeated units composed of five membrane-bound segments, which indicates either gene duplication or dimerization. The silica deposition that takes place from the membrane-bound vesicle in diatoms has been hypothesized to be a result of the activity of silaffins and long-chain polyamines. This silica deposition vesicle (SDV) has been characterized as an acidic compartment fused with Golgi-derived vesicles. These two classes of molecules have been shown to create sheets of patterned silica in vivo with irregular pores on the scale of diatom frustules. One hypothesis as to how these molecules work to create complex structure is that residues are conserved within the SDVs, which is unfortunately difficult to identify or observe due to the limited number of diverse sequences available. Though the exact mechanism of the highly uniform deposition of silica is as yet unknown, the Thalassiosira pseudonana genes linked to silaffins are being looked to as targets for genetic control of nanoscale silica deposition.
The ability of diatoms to make silica-based cell walls has been the subject of fascination for centuries. It began with a microscopic observation by an anonymous English country nobleman in 1703, who saw an object that looked like a chain of regular parallelograms and debated whether it was just crystals of salt or a plant. The viewer decided that it was a plant because the parallelograms didn't separate upon agitation, nor did they vary in appearance when dried or subjected to warm water (in an attempt to dissolve the "salt"). Unknowingly, the viewer's confusion captured the essence of diatoms: mineral-utilizing plants. It is not clear when it was determined that diatom cell walls are made of silica, but in 1939 a seminal reference characterized the material as silicic acid in a "subcolloidal" state. Identification of the main chemical component of the cell wall spurred investigations into how it was made. These investigations have involved, and been propelled by, diverse approaches including microscopy, chemistry, biochemistry, material characterisation, molecular biology, 'omics, and transgenic approaches. The results from this work have given a better understanding of cell wall formation processes, establishing fundamental knowledge which can be used to create models that contextualise current findings and clarify how the process works.
The process of building a mineral-based cell wall inside the cell, then exporting it outside, is a massive event that must involve large numbers of genes and their protein products. The act of building and exocytosing this large structural object in a short time period, synched with cell cycle progression, necessitates substantial physical movements within the cell as well as dedication of a significant proportion of the cell's biosynthetic capacities.
The first characterisations of the biochemical processes and components involved in diatom silicification were made in the late 1990s. These were followed by insights into how higher order assembly of silica structures might occur. More recent reports describe the identification of novel components involved in higher order processes, the dynamics documented through real-time imaging, and the genetic manipulation of silica structure. The approaches established in these recent works provide practical avenues to not only identify the components involved in silica cell wall formation but to elucidate their interactions and spatio-temporal dynamics. This type of holistic understanding will be necessary to achieve a more complete understanding of cell wall synthesis.
Behaviour
Most centric and araphid pennate diatoms are nonmotile, and their relatively dense cell walls cause them to readily sink. Planktonic forms in open water usually rely on turbulent mixing of the upper layers of the oceanic waters by the wind to keep them suspended in sunlit surface waters. Many planktonic diatoms have also evolved features that slow their sinking rate, such as spines or the ability to grow in colonial chains. These adaptations increase their surface area to volume ratio and drag, allowing them to stay suspended in the water column longer. Individual cells may regulate buoyancy via an ionic pump.
Some pennate diatoms are capable of a type of locomotion called "gliding", which allows them to move across surfaces via adhesive mucilage secreted through a seamlike structure called the raphe. In order for a diatom cell to glide, it must have a solid substrate for the mucilage to adhere to.
Cells are solitary or united into colonies of various kinds, which may be linked by siliceous structures; mucilage pads, stalks or tubes; amorphous masses of mucilage; or by threads of chitin (polysaccharide), which are secreted through strutted processes of the cell.
Life cycle
Reproduction and cell size
Reproduction among these organisms is asexual by binary fission, during which the diatom divides into two parts, producing two "new" diatoms with identical genes. Each new organism receives one of the two frustules – one larger, the other smaller – possessed by the parent; this inherited frustule is called the epitheca, and is used to construct a second, smaller frustule, the hypotheca. The diatom that received the larger frustule becomes the same size as its parent, but the diatom that received the smaller frustule remains smaller than its parent. This causes the average cell size of this diatom population to decrease. It has been observed, however, that certain taxa have the ability to divide without causing a reduction in cell size. Nonetheless, in order to restore the cell size of a diatom population for those that do endure size reduction, sexual reproduction and auxospore formation must occur.
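A minimal simulation sketch (Python) of the size-reduction dynamic described above; the shrink ratio, critical size, and generation count are arbitrary illustrative values, not measured quantities.

```python
SHRINK = 0.9    # assumed size ratio of the smaller daughter (illustrative)
CRITICAL = 0.4  # assumed critical fraction of initial size (illustrative)

def divide(population):
    """One round of binary fission: each cell yields one daughter at the
    parent's size and one slightly smaller daughter."""
    next_gen = []
    for size in population:
        next_gen.append(size)           # daughter built inside the larger valve
        next_gen.append(size * SHRINK)  # daughter built inside the smaller valve
    return next_gen

population = [1.0]  # one initial cell of maximal relative size
for generation in range(1, 11):
    population = divide(population)
    mean_size = sum(population) / len(population)
    print(f"generation {generation}: {len(population)} cells, mean relative size {mean_size:.3f}")

# Cells falling below the assumed critical size would, in reality, need sexual
# reproduction and auxospore formation to restore the maximal cell size.
below = sum(1 for s in population if s < CRITICAL)
print(f"cells below the assumed critical size: {below} of {len(population)}")
```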
Cell division
Vegetative cells of diatoms are diploid (2N) and so meiosis can take place, producing male and female gametes which then fuse to form the zygote. The zygote sheds its silica theca and grows into a large sphere covered by an organic membrane, the auxospore. A new diatom cell of maximum size, the initial cell, forms within the auxospore thus beginning a new generation. Resting spores may also be formed as a response to unfavourable environmental conditions with germination occurring when conditions improve.
A defining characteristic of all diatoms is their restrictive and bipartite silica cell wall that causes them to progressively shrink during asexual cell division. At a critically small cell size and under certain conditions, auxosporulation restitutes cell size and prevents clonal death. The entire lifecycles of only a few diatoms have been described and rarely have sexual events been captured in the environment.
Sexual reproduction
Most eukaryotes are capable of sexual reproduction involving meiosis. Sexual reproduction appears to be an obligatory phase in the life cycle of diatoms, particularly as cell size decreases with successive vegetative divisions. Sexual reproduction involves production of gametes and the fusion of gametes to form a zygote in which maximal cell size is restored. The signaling that triggers the sexual phase is favored when cells accumulate together, so that the distance between them is reduced and the contacts and/or the perception of chemical cues is facilitated.
An exploration of the genomes of five diatoms and one diatom transcriptome led to the identification of 42 genes potentially involved in meiosis. Thus a meiotic toolkit appears to be conserved in these six diatom species, indicating a central role of meiosis in diatoms as in other eukaryotes.
Sperm motility
Diatoms are mostly non-motile; however, sperm found in some species can be flagellated, though motility is usually limited to a gliding motion. In centric diatoms, the small male gametes have one flagellum while the female gametes are large and non-motile (oogamous). Conversely, in pennate diatoms both gametes lack flagella (isogamous). Certain araphid species, that is pennate diatoms without a raphe (seam), have been documented as anisogamous and are, therefore, considered to represent a transitional stage between centric and raphid pennate diatoms, diatoms with a raphe.
Degradation by microbes
Certain species of bacteria in oceans and lakes can accelerate the rate of dissolution of silica in dead and living diatoms by using hydrolytic enzymes to break down the organic algal material.
Ecology
Distribution
Diatoms are a widespread group and can be found in the oceans, in fresh water, in soils, and on damp surfaces. They are one of the dominant components of phytoplankton in nutrient-rich coastal waters and during oceanic spring blooms, since they can divide more rapidly than other groups of phytoplankton. Most live pelagically in open water, although some live as surface films at the water-sediment interface (benthic), or even under damp atmospheric conditions. They are especially important in oceans, where a 2003 study found that they contribute an estimated 45% of the total oceanic primary production of organic material. However, a more recent 2016 study estimates that the number is closer to 20%. Spatial distribution of marine phytoplankton species is restricted both horizontally and vertically.
Growth
Planktonic diatoms in freshwater and marine environments typically exhibit a "boom and bust" (or "bloom and bust") lifestyle. When conditions in the upper mixed layer (nutrients and light) are favourable (as in the spring), their competitive edge and rapid growth rate enable them to dominate phytoplankton communities ("boom" or "bloom"). As such they are often classed as opportunistic r-strategists (i.e. those organisms whose ecology is defined by a high growth rate, r).
Impact
The freshwater diatom Didymosphenia geminata, commonly known as Didymo, causes severe environmental degradation in water-courses where it blooms, producing large quantities of a brown jelly-like material called "brown snot" or "rock snot". This diatom is native to Europe and is an invasive species both in the antipodes and in parts of North America. The problem is most frequently recorded from Australia and New Zealand.
When conditions turn unfavourable, usually upon depletion of nutrients, diatom cells typically increase in sinking rate and exit the upper mixed layer ("bust"). This sinking is induced by either a loss of buoyancy control, the synthesis of mucilage that sticks diatom cells together, or the production of heavy resting spores. Sinking out of the upper mixed layer removes diatoms from conditions unfavourable to growth, including grazer populations and higher temperatures (which would otherwise increase cell metabolism). Cells reaching deeper water or the shallow seafloor can then rest until conditions become more favourable again. In the open ocean, many sinking cells are lost to the deep, but refuge populations can persist near the thermocline.
Ultimately, diatom cells in these resting populations re-enter the upper mixed layer when vertical mixing entrains them. In most circumstances, this mixing also replenishes nutrients in the upper mixed layer, setting the scene for the next round of diatom blooms. In the open ocean (away from areas of continuous upwelling), this cycle of bloom, bust, then return to pre-bloom conditions typically occurs over an annual cycle, with diatoms only being prevalent during the spring and early summer. In some locations, however, an autumn bloom may occur, caused by the breakdown of summer stratification and the entrainment of nutrients while light levels are still sufficient for growth. Since vertical mixing is increasing, and light levels are falling as winter approaches, these blooms are smaller and shorter-lived than their spring equivalents.
In the open ocean, the diatom (spring) bloom is typically ended by a shortage of silicon. Unlike other minerals, the requirement for silicon is unique to diatoms and it is not regenerated in the plankton ecosystem as efficiently as, for instance, nitrogen or phosphorus nutrients. This can be seen in maps of surface nutrient concentrations – as nutrients decline along gradients, silicon is usually the first to be exhausted (followed normally by nitrogen then phosphorus).
Because of this bloom-and-bust cycle, diatoms are believed to play a disproportionately important role in the export of carbon from oceanic surface waters (see also the biological pump). Significantly, they also play a key role in the regulation of the biogeochemical cycle of silicon in the modern ocean.
Reason for success
Diatoms are ecologically successful, and occur in virtually every environment that contains water – not only oceans, seas, lakes, and streams, but also soil and wetlands. The use of silicon by diatoms is believed by many researchers to be the key to this ecological success. Raven (1983) noted that, relative to organic cell walls, silica frustules require less energy to synthesize (approximately 8% of a comparable organic wall), potentially a significant saving on the overall cell energy budget. In a now classic study, Egge and Aksnes (1992) found that diatom dominance of mesocosm communities was directly related to the availability of silicic acid – when concentrations were greater than 2 μmol m−3, they found that diatoms typically represented more than 70% of the phytoplankton community. Other researchers have suggested that the biogenic silica in diatom cell walls acts as an effective pH buffering agent, facilitating the conversion of bicarbonate to dissolved CO2 (which is more readily assimilated). More generally, notwithstanding these possible advantages conferred by their use of silicon, diatoms typically have higher growth rates than other algae of corresponding size.
Sources for collection
Diatoms can be obtained from multiple sources. Marine diatoms can be collected by direct water sampling, and benthic forms can be secured by scraping barnacles, oyster and other shells. Diatoms are frequently present as a brown, slippery coating on submerged stones and sticks, and may be seen to "stream" with river current. The surface mud of a pond, ditch, or lagoon will almost always yield some diatoms. Living diatoms are often found clinging in great numbers to filamentous algae, or forming gelatinous masses on various submerged plants. Cladophora is frequently covered with Cocconeis, an elliptically shaped diatom; Vaucheria is often covered with small forms. Since diatoms form an important part of the food of molluscs, tunicates, and fishes, the alimentary tracts of these animals often yield forms that are not easily secured in other ways. Diatoms can be made to emerge by filling a jar with water and mud, wrapping it in black paper and letting direct sunlight fall on the surface of the water. Within a day, the diatoms will come to the top in a scum and can be isolated.
Biogeochemistry
Silica cycle
The diagram shows the major fluxes of silicon in the current ocean. Most biogenic silica in the ocean (silica produced by biological activity) comes from diatoms. Diatoms extract dissolved silicic acid from surface waters as they grow, and return it to the water column when they die. Inputs of silicon arrive from above via aeolian dust, from the coasts via rivers, and from below via seafloor sediment recycling, weathering, and hydrothermal activity.
Although diatoms may have existed since the Triassic, the timing of their ascendancy and "take-over" of the silicon cycle occurred more recently. Prior to the Phanerozoic (before 544 Ma), it is believed that microbial or inorganic processes weakly regulated the ocean's silicon cycle. Subsequently, the cycle appears dominated (and more strongly regulated) by the radiolarians and siliceous sponges, the former as zooplankton, the latter as sedentary filter-feeders primarily on the continental shelves. Within the last 100 My, it is thought that the silicon cycle has come under even tighter control, and that this derives from the ecological ascendancy of the diatoms.
However, the precise timing of the "take-over" remains unclear, and different authors have conflicting interpretations of the fossil record. Some evidence, such as the displacement of siliceous sponges from the shelves, suggests that this takeover began in the Cretaceous (146 Ma to 66 Ma), while evidence from radiolarians suggests "take-over" did not begin until the Cenozoic (66 Ma to present).
Carbon cycle
The diagram depicts some mechanisms by which marine diatoms contribute to the biological carbon pump and influence the ocean carbon cycle. The anthropogenic CO2 emission to the atmosphere (mainly generated by fossil fuel burning and deforestation) is nearly 11 gigatonnes of carbon (GtC) per year, of which almost 2.5 GtC is taken up by the surface ocean. In surface seawater (pH 8.1–8.4), bicarbonate and carbonate ions constitute nearly 90% and less than 10% of dissolved inorganic carbon (DIC) respectively, while dissolved CO2 (CO2 aqueous) contributes less than 1%. Despite this low level of CO2 in the ocean and its slow diffusion rate in water, diatoms fix 10–20 GtC annually via photosynthesis thanks to their carbon dioxide concentrating mechanisms, allowing them to sustain marine food chains. In addition, 0.1–1% of this organic material produced in the euphotic layer sinks down as particles, thus transferring the surface carbon toward the deep ocean and sequestering atmospheric CO2 for thousands of years or longer. The remaining organic matter is remineralized through respiration. Thus, diatoms are one of the main players in this biological carbon pump, which is arguably the most important biological mechanism in the Earth System allowing CO2 to be removed from the carbon cycle for very long periods.
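Some back-of-the-envelope arithmetic (Python) using only the figures quoted above; the last step assumes, purely for illustration, that the quoted sinking fraction applies to diatom fixation alone, which the text does not state.

```python
# Figures quoted in the text, in gigatonnes of carbon per year (GtC/yr).
anthropogenic_emissions = 11.0
surface_ocean_uptake = 2.5
diatom_fixation = (10.0, 20.0)    # annual carbon fixation by diatoms
sinking_fraction = (0.001, 0.01)  # 0.1–1% of euphotic-zone production sinks

print(f"ocean uptake as a share of emissions: {surface_ocean_uptake / anthropogenic_emissions:.0%}")
print(f"diatom fixation relative to emissions: "
      f"{diatom_fixation[0] / anthropogenic_emissions:.1f}–"
      f"{diatom_fixation[1] / anthropogenic_emissions:.1f} times")

# Assumption for illustration only: apply the quoted sinking fraction to the
# quoted diatom fixation to get an implied range of particle export.
low = diatom_fixation[0] * sinking_fraction[0]
high = diatom_fixation[1] * sinking_fraction[1]
print(f"implied diatom particle export under this assumption: {low:.2f}–{high:.2f} GtC/yr")
```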
Urea cycle
A feature of diatoms is the urea cycle, which links them evolutionarily to animals. In 2011, Allen et al. established that diatoms have a functioning urea cycle. This result was significant, since prior to this, the urea cycle was thought to have originated with the metazoans which appeared several hundreds of millions of years before the diatoms. Their study demonstrated that while diatoms and animals use the urea cycle for different ends, they are seen to be evolutionarily linked in such a way that animals and plants are not.
While often overlooked in photosynthetic organisms, the mitochondria also play critical roles in energy balance. Two nitrogen-related pathways are relevant, and they may also change under ammonium nutrition compared with nitrate nutrition. First, in diatoms, and likely some other algae, there is a urea cycle. The long-known function of the urea cycle in animals is to excrete excess nitrogen produced by amino acid catabolism; like photorespiration, the urea cycle had long been considered a waste pathway. However, in diatoms the urea cycle appears to play a role in the exchange of nutrients between the mitochondria and the cytoplasm, and potentially the plastid, and may help to regulate ammonium metabolism. Because of this cycle, marine diatoms, in contrast to chlorophytes, have also acquired a mitochondrial urea transporter and, in fact, based on bioinformatics, a complete mitochondrial GS-GOGAT cycle has been hypothesised.
Other
Diatoms are mainly photosynthetic; however a few are obligate heterotrophs and can live in the absence of light provided an appropriate organic carbon source is available.
Photosynthetic diatoms that find themselves in an environment without oxygen and/or sunlight can switch to an anaerobic form of respiration known as nitrate respiration (DNRA), and can remain dormant for months or even decades.
Major pigments of diatoms are chlorophylls a and c, beta-carotene, fucoxanthin, diatoxanthin and diadinoxanthin.
Taxonomy
Diatoms belong to a large group of protists, many of which contain plastids rich in chlorophylls a and c. The group has been variously referred to as heterokonts, chrysophytes, chromists or stramenopiles. Many are autotrophs, such as golden algae and kelp; others are heterotrophs, such as water moulds, opalinids, and actinophryid heliozoa. The classification of this area of protists is still unsettled. In terms of rank, they have been treated as a division, phylum, kingdom, or something intermediate to those. Consequently, diatoms are ranked anywhere from a class, usually called Diatomophyceae or Bacillariophyceae, to a division (=phylum), usually called Bacillariophyta, with corresponding changes in the ranks of their subgroups.
Genera and species
An estimated 20,000 extant diatom species are believed to exist, of which around 12,000 have been named to date according to Guiry, 2012 (other sources give a wider range of estimates). Around 1,000–1,300 diatom genera have been described, both extant and fossil, of which some 250–300 exist only as fossils.
Classes and orders
For many years the diatoms—treated either as a class (Bacillariophyceae) or a phylum (Bacillariophyta)—were divided into just 2 orders, corresponding to the centric and the pennate diatoms (Centrales and Pennales). This classification was extensively overhauled by Round, Crawford and Mann in 1990 who treated the diatoms at a higher rank (division, corresponding to phylum in zoological classification), and promoted the major classification units to classes, maintaining the centric diatoms as a single class Coscinodiscophyceae, but splitting the former pennate diatoms into 2 separate classes, Fragilariophyceae and Bacillariophyceae (the latter older name retained but with an emended definition), between them encompassing 45 orders, the majority of them new.
Today (writing at mid 2020) it is recognised that the 1990 system of Round et al. is in need of revision with the advent of newer molecular work; however, the best system to replace it is unclear, and current systems in widespread use such as AlgaeBase, the World Register of Marine Species and its contributing database DiatomBase, and the system for "all life" represented in Ruggiero et al., 2015, all retain the Round et al. treatment as their basis, albeit with diatoms as a whole treated as a class rather than division/phylum, and Round et al.'s classes reduced to subclasses, for better agreement with the treatment of phylogenetically adjacent groups and their containing taxa. (For references, refer to the individual sections below.)
One proposal, by Linda Medlin and co-workers commencing in 2004, is for some of the centric diatom orders considered more closely related to the pennates to be split off as a new class, Mediophyceae, itself more closely aligned with the pennate diatoms than the remaining centrics. This hypothesis—later designated the Coscinodiscophyceae-Mediophyceae-Bacillariophyceae, or Coscinodiscophyceae+(Mediophyceae+Bacillariophyceae) (CMB) hypothesis—has been accepted by D.G. Mann among others, who uses it as the basis for the classification of diatoms as presented in Adl et al.'s series of syntheses (2005, 2012, 2019), and also in the Bacillariophyta chapter of the 2017 Handbook of the Protists edited by Archibald et al., with some modifications reflecting the apparent non-monophyly of Medlin et al.'s original "Coscinodiscophyceae". Meanwhile, a group led by E.C. Theriot favours a different hypothesis of phylogeny, which has been termed the structural gradation hypothesis (SGH) and does not recognise the Mediophyceae as a monophyletic group, while another analysis, that of Parks et al., 2018, finds that the radial centric diatoms (Medlin et al.'s Coscinodiscophyceae) are not monophyletic, but supports the monophyly of Mediophyceae minus Attheya, which is an anomalous genus. Discussion of the relative merits of these conflicting schemes continues by the various parties involved.
Adl et al., 2019 treatment
In 2019, Adl et al. presented the following classification of diatoms, while noting: "This revision reflects numerous advances in the phylogeny of the diatoms over the last decade. Due to our poor taxon sampling outside of the Mediophyceae and pennate diatoms, and the known and anticipated diversity of all diatoms, many clades appear at a high classification level (and the higher level classification is rather flat)." This classification treats diatoms as a phylum (Diatomeae/Bacillariophyta), accepts the class Mediophyceae of Medlin and co-workers, introduces new subphyla and classes for a number of otherwise isolated genera, and re-ranks a number of previously established taxa as subclasses, but does not list orders or families. Inferred ranks have been added for clarity (Adl et al. do not use ranks, but the intended ones in this portion of the classification are apparent from the choice of endings used, within the system of botanical nomenclature employed).
Clade Diatomista Derelle et al. 2016, emend. Cavalier-Smith 2017 (diatoms plus a subset of other ochrophyte groups)
Phylum Diatomeae Dumortier 1821 [= Bacillariophyta Haeckel 1878] (diatoms)
Subphylum Leptocylindrophytina D.G. Mann in Adl et al. 2019
Class Leptocylindrophyceae D.G. Mann in Adl et al. 2019 (Leptocylindrus, Tenuicylindrus)
Class Corethrophyceae D.G. Mann in Adl et al. 2019 (Corethron)
Subphylum Ellerbeckiophytina D.G. Mann in Adl et al. 2019 (Ellerbeckia)
Subphylum Probosciophytina D.G. Mann in Adl et al. 2019 (Proboscia)
Subphylum Melosirophytina D.G. Mann in Adl et al. 2019 (Aulacoseira, Melosira, Hyalodiscus, Stephanopyxis, Paralia, Endictya)
Subphylum Coscinodiscophytina Medlin & Kaczmarska 2004, emend. (Actinoptychus, Coscinodiscus, Actinocyclus, Asteromphalus, Aulacodiscus, Stellarima)
Subphylum Rhizosoleniophytina D.G. Mann in Adl et al. 2019 (Guinardia, Rhizosolenia, Pseudosolenia)
Subphylum Arachnoidiscophytina D.G. Mann in Adl et al. 2019 (Arachnoidiscus)
Subphylum Bacillariophytina Medlin & Kaczmarska 2004, emend.
Class Mediophyceae Jouse & Proshkina-Lavrenko in Medlin & Kaczmarska 2004
Subclass Chaetocerotophycidae Round & R.M. Crawford in Round et al. 1990, emend.
Subclass Lithodesmiophycidae Round & R.M. Crawford in Round et al. 1990, emend.
Subclass Thalassiosirophycidae Round & R.M. Crawford in Round et al. 1990
Subclass Cymatosirophycidae Round & R.M. Crawford in Round et al. 1990
Subclass Odontellophycidae D.G. Mann in Adl et al. 2019
Subclass Chrysanthemodiscophycidae D.G. Mann in Adl et al. 2019
Class Biddulphiophyceae D.G. Mann in Adl et al. 2019
Subclass Biddulphiophycidae Round and R.M. Crawford in Round et al. 1990, emend.
Biddulphiophyceae incertae sedis (Attheya)
Class Bacillariophyceae Haeckel 1878, emend.
Bacillariophyceae incertae sedis (Striatellaceae)
Subclass Urneidophycidae Medlin 2016
Subclass Fragilariophycidae Round in Round, Crawford & Mann 1990, emend.
Subclass Bacillariophycidae D.G. Mann in Round, Crawford & Mann 1990, emend.
See taxonomy of diatoms for more details.
Gallery
Three diatom species were sent to the International Space Station, including the huge (6 mm length) diatoms of Antarctica and the exclusive colonial diatom, Bacillaria paradoxa. The cells of Bacillaria moved next to each other in partial but opposite synchrony by a microfluidics method.
Evolution and fossil record
Origin
Heterokont chloroplasts appear to derive from those of red algae, rather than directly from prokaryotes as occurred in plants. This suggests they had a more recent origin than many other algae. However, fossil evidence is scant, and only with the evolution of the diatoms themselves do the heterokonts make a serious impression on the fossil record.
Earliest fossils
The earliest known fossil diatoms date from the early Jurassic (~185 Ma ago), although the molecular clock and sedimentary evidence suggest an earlier origin. It has been suggested that their origin may be related to the end-Permian mass extinction (~250 Ma), after which many marine niches were opened. The gap between this event and the time that fossil diatoms first appear may indicate a period when diatoms were unsilicified and their evolution was cryptic. Since the advent of silicification, diatoms have made a significant impression on the fossil record, with major fossil deposits found as far back as the early Cretaceous, and with some rocks, such as diatomaceous earth, being composed almost entirely of them.
Relation to grasslands
The expansion of grassland biomes and the evolutionary radiation of grasses during the Miocene is believed to have increased the flux of soluble silicon to the oceans, and it has been argued that this promoted the diatoms during the Cenozoic era. Recent work suggests that diatom success is decoupled from the evolution of grasses, although both diatom and grassland diversity increased strongly from the middle Miocene.
Relation to climate
Diatom diversity over the Cenozoic has been very sensitive to global temperature, particularly to the equator-pole temperature gradient. Warmer oceans, particularly warmer polar regions, have in the past been shown to have had substantially lower diatom diversity. Future warm oceans with enhanced polar warming, as projected in global-warming scenarios, could thus in theory result in a significant loss of diatom diversity, although from current knowledge it is impossible to say if this would occur rapidly or only over many tens of thousands of years.
Method of investigation
The fossil record of diatoms has largely been established through the recovery of their siliceous frustules in marine and non-marine sediments. Although diatoms have both a marine and non-marine stratigraphic record, diatom biostratigraphy, which is based on time-constrained evolutionary originations and extinctions of unique taxa, is only well developed and widely applicable in marine systems. The duration of diatom species ranges have been documented through the study of ocean cores and rock sequences exposed on land. Where diatom biozones are well established and calibrated to the geomagnetic polarity time scale (e.g., Southern Ocean, North Pacific, eastern equatorial Pacific), diatom-based age estimates may be resolved to within <100,000 years, although typical age resolution for Cenozoic diatom assemblages is several hundred thousand years.
Diatoms preserved in lake sediments are widely used for paleoenvironmental reconstructions of Quaternary climate, especially for closed-basin lakes which experience fluctuations in water depth and salinity.
Isotope records
When diatoms die their shells (frustules) can settle on the seafloor and become microfossils. Over time, these microfossils become buried as opal deposits in the marine sediment. Paleoclimatology is the study of past climates. Proxy data is used in order to relate elements collected in modern-day sedimentary samples to climatic and oceanic conditions in the past. Paleoclimate proxies refer to preserved or fossilized physical markers which serve as substitutes for direct meteorological or ocean measurements. An example of proxies is the use of diatom isotope records of δ13C, δ18O, δ30Si (δ13Cdiatom, δ18Odiatom, and δ30Sidiatom). In 2015, Swann and Snelling used these isotope records to document historic changes in the photic zone conditions of the north-west Pacific Ocean, including nutrient supply and the efficiency of the soft-tissue biological pump, from the modern day back to marine isotope stage 5e, which coincides with the last interglacial period. Peaks in opal productivity in the marine isotope stage are associated with the breakdown of the regional halocline stratification and increased nutrient supply to the photic zone.
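The δ values mentioned above follow standard stable-isotope delta notation, which the text does not spell out; a LaTeX reminder of the usual definition, shown for oxygen:

```latex
% Standard delta notation for stable-isotope ratios, illustrated for oxygen;
% delta-13C and delta-30Si are defined analogously. Values are conventionally
% reported in per mil (parts per thousand) relative to a reference standard.
\delta^{18}\mathrm{O} =
\left(
  \frac{\bigl(^{18}\mathrm{O}/\,^{16}\mathrm{O}\bigr)_{\mathrm{sample}}}
       {\bigl(^{18}\mathrm{O}/\,^{16}\mathrm{O}\bigr)_{\mathrm{standard}}} - 1
\right) \times 1000
```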
The initial development of the halocline and stratified water column has been attributed to the onset of major Northern Hemisphere glaciation at 2.73 Ma, which increased the flux of freshwater to the region, via increased monsoonal rainfall and/or glacial meltwater, and sea surface temperatures. The decrease of abyssal water upwelling associated with this may have contributed to the establishment of globally cooler conditions and the expansion of glaciers across the Northern Hemisphere from 2.73 Ma. While the halocline appears to have prevailed through the late Pliocene and early Quaternary glacial–interglacial cycles, other studies have shown that the stratification boundary may have broken down in the late Quaternary at glacial terminations and during the early part of interglacials.
Diversification
The Cretaceous record of diatoms is limited, but recent studies reveal a progressive diversification of diatom types. The Cretaceous–Paleogene extinction event, which in the oceans dramatically affected organisms with calcareous skeletons, appears to have had relatively little impact on diatom evolution.
Turnover
Although no mass extinctions of marine diatoms have been observed during the Cenozoic, times of relatively rapid evolutionary turnover in marine diatom species assemblages occurred near the Paleocene–Eocene boundary, and at the Eocene–Oligocene boundary. Further turnover of assemblages took place at various times between the middle Miocene and late Pliocene, in response to progressive cooling of polar regions and the development of more endemic diatom assemblages.
A global trend toward more delicate diatom frustules has been noted from the Oligocene to the Quaternary. This coincides with an increasingly more vigorous circulation of the ocean's surface and deep waters brought about by increasing latitudinal thermal gradients at the onset of major ice sheet expansion on Antarctica and progressive cooling through the Neogene and Quaternary towards a bipolar glaciated world. This caused diatoms to take in less silica for the formation of their frustules. Increased mixing of the oceans renews silica and other nutrients necessary for diatom growth in surface waters, especially in regions of coastal and oceanic upwelling.
Genetics
Expressed sequence tagging
In 2002, the first insights into the properties of the Phaeodactylum tricornutum gene repertoire were described using 1,000 expressed sequence tags (ESTs). Subsequently, the number of ESTs was extended to 12,000 and the diatom EST database was constructed for functional analyses. These sequences have been used to make a comparative analysis between P. tricornutum and the putative complete proteomes from the green alga Chlamydomonas reinhardtii, the red alga Cyanidioschyzon merolae, and the diatom Thalassiosira pseudonana. The diatom EST database now consists of over 200,000 ESTs from P. tricornutum (16 libraries) and T. pseudonana (7 libraries) cells grown in a range of different conditions, many of which correspond to different abiotic stresses.
Genome sequencing
In 2004, the entire genome of the centric diatom, Thalassiosira pseudonana (32.4 Mb) was sequenced, followed in 2008 by the sequencing of the pennate diatom, Phaeodactylum tricornutum (27.4 Mb). Comparisons of the two reveal that the P. tricornutum genome includes fewer genes (10,402 as opposed to 11,776) than T. pseudonana; no major synteny (gene order) could be detected between the two genomes. T. pseudonana genes show an average of ~1.52 introns per gene as opposed to 0.79 in P. tricornutum, suggesting recent widespread intron gain in the centric diatom. Despite relatively recent evolutionary divergence (90 million years), the extent of molecular divergence between centrics and pennates indicates rapid evolutionary rates within the Bacillariophyceae compared to other eukaryotic groups. Comparative genomics also established that a specific class of transposable elements, the Diatom Copia-like retrotransposons (or CoDis), has been significantly amplified in the P. tricornutum genome with respect to T. pseudonana, constituting 5.8% and 1% of the respective genomes.
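A small sketch (Python) recomputing simple derived quantities, gene density and approximate intron totals, from the figures quoted above; it introduces no data beyond those figures.

```python
# Figures quoted in the text for the two sequenced diatom genomes.
genomes = {
    "Thalassiosira pseudonana":  {"size_mb": 32.4, "genes": 11_776, "introns_per_gene": 1.52},
    "Phaeodactylum tricornutum": {"size_mb": 27.4, "genes": 10_402, "introns_per_gene": 0.79},
}

for name, g in genomes.items():
    gene_density = g["genes"] / g["size_mb"]            # genes per megabase
    total_introns = g["genes"] * g["introns_per_gene"]  # rough total intron count
    print(f"{name}: ~{gene_density:.0f} genes/Mb, ~{total_introns:,.0f} introns")
```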
Endosymbiotic gene transfer
Diatom genomics brought much information about the extent and dynamics of the endosymbiotic gene transfer (EGT) process. Comparison of the T. pseudonana proteins with homologs in other organisms suggested that hundreds have their closest homologs in the Plantae lineage. EGT towards diatom genomes can be illustrated by the fact that the T. pseudonana genome encodes six proteins which are most closely related to genes encoded by the Guillardia theta (cryptomonad) nucleomorph genome. Four of these genes are also found in red algal plastid genomes, thus demonstrating successive EGT from red algal plastid to red algal nucleus (nucleomorph) to heterokont host nucleus. More recent phylogenomic analyses of diatom proteomes provided evidence for a prasinophyte-like endosymbiont in the common ancestor of chromalveolates, as supported by the fact that 70% of diatom genes of Plantae origin are of green lineage provenance and that such genes are also found in the genome of other stramenopiles. Therefore, it was proposed that chromalveolates are the product of serial secondary endosymbiosis, first with a green alga, followed by a second one with a red alga that conserved the genomic footprints of the previous but displaced the green plastid. However, phylogenomic analyses of diatom proteomes and chromalveolate evolutionary history will likely take advantage of complementary genomic data from under-sequenced lineages such as red algae.
Horizontal gene transfer
In addition to EGT, horizontal gene transfer (HGT) can occur independently of an endosymbiotic event. The publication of the P. tricornutum genome reported that at least 587 P. tricornutum genes appear to be most closely related to bacterial genes, accounting for more than 5% of the P. tricornutum proteome. About half of these are also found in the T. pseudonana genome, attesting their ancient incorporation in the diatom lineage.
Genetic engineering
To understand the biological mechanisms which underlie the great importance of diatoms in geochemical cycles, scientists have used the Phaeodactylum tricornutum and Thalassiosira spp. species as model organisms since the 1990s.
Few molecular biology tools are currently available to generate mutants or transgenic lines: plasmids containing transgenes are inserted into the cells using the biolistic method or transkingdom bacterial conjugation (with yields of about 10−6 and 10−4, respectively), and other classical transfection methods such as electroporation or the use of PEG have been reported to provide results with lower efficiencies.
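A hedged arithmetic sketch (Python) of what the two quoted yields imply in practice; the number of treated cells is a hypothetical figure chosen only to illustrate the order-of-magnitude difference between the methods.

```python
# Yields are the values quoted in the text; the cell count is an assumption.
cells_treated = 1e8  # hypothetical number of cells per experiment (assumption)

yields = {
    "biolistic method": 1e-6,
    "transkingdom bacterial conjugation": 1e-4,
}

for method, y in yields.items():
    expected = cells_treated * y
    print(f"{method}: ~{expected:,.0f} transformants expected from {cells_treated:,.0f} cells")
```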
Transfected plasmids can be either randomly integrated into the diatom's chromosomes or maintained as stable circular episomes (thanks to the CEN6-ARSH4-HIS3 yeast centromeric sequence). The phleomycin/zeocin resistance gene Sh Ble is commonly used as a selection marker, and various transgenes have been successfully introduced and expressed in diatoms with stable transmission through generations, or with the possibility of removing them.
Furthermore, these systems now allow the use of the CRISPR-Cas genome editing tool, enabling rapid production of functional knock-out mutants and a more accurate understanding of the diatoms' cellular processes.
Human uses
Paleontology
Decomposition and decay of diatoms leads to organic and inorganic (in the form of silicates) sediment, the inorganic component of which can lead to a method of analyzing past marine environments by corings of ocean floors or bay muds, since the inorganic matter is embedded in deposition of clays and silts and forms a permanent geological record of such marine strata (see siliceous ooze).
Industrial
Diatoms, and their shells (frustules) as diatomite or diatomaceous earth, are important industrial resources used for fine polishing and liquid filtration. The complex structure of their microscopic shells has been proposed as a material for nanotechnology.
Diatomite is considered to be a natural nano material and has many uses and applications such as: production of various ceramic products, construction ceramics, refractory ceramics, special oxide ceramics, for production of humidity control materials, used as filtration material, material in the cement production industry, initial material for production of prolonged-release drug carriers, absorption material in an industrial scale, production of porous ceramics, glass industry, used as catalyst support, as a filler in plastics and paints, purification of industrial waters, pesticide holder, as well as for improving the physical and chemical characteristics of certain soils, and other uses.
Diatoms are also used to help determine the origin of materials containing them, including seawater.
Nanotechnology
The deposition of silica by diatoms may also prove to be of utility to nanotechnology. Diatom cells repeatedly and reliably manufacture valves of various shapes and sizes, potentially allowing diatoms to manufacture micro- or nano-scale structures which may be of use in a range of devices, including: optical systems; semiconductor nanolithography; and even vehicles for drug delivery. With an appropriate artificial selection procedure, diatoms that produce valves of particular shapes and sizes might be evolved for cultivation in chemostat cultures to mass-produce nanoscale components. It has also been proposed that diatoms could be used as a component of solar cells by substituting photosensitive titanium dioxide for the silicon dioxide that diatoms normally use to create their cell walls. Diatom biofuel producing solar panels have also been proposed.
Forensic
The main goal of diatom analysis in forensics is to differentiate a death by submersion from a post-mortem immersion of a body in water. Laboratory tests may reveal the presence of diatoms in the body. Since the silica-based skeletons of diatoms do not readily decay, they can sometimes be detected even in heavily decomposed bodies. As they do not occur naturally in the body, if laboratory tests show diatoms in the corpse that are of the same species found in the water where the body was recovered, then it may be good evidence of drowning as the cause of death. The blend of diatom species found in a corpse may be the same or different from the surrounding water, indicating whether the victim drowned in the same site in which the body was found.
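An illustrative sketch (Python) of the comparison logic described above; the species names and the bare set-overlap rule are hypothetical simplifications, as real forensic work relies on counts, sampling controls, and expert identification.

```python
# Hypothetical species lists for illustration only.
diatoms_in_body = {"Tabellaria flocculosa", "Cocconeis placentula", "Navicula sp."}
diatoms_in_water = {"Tabellaria flocculosa", "Cocconeis placentula", "Navicula sp.",
                    "Cyclotella meneghiniana"}

overlap = diatoms_in_body & diatoms_in_water
if diatoms_in_body and diatoms_in_body <= diatoms_in_water:
    # Every species recovered from the body also occurs at the recovery site.
    print("Consistent with drowning at the recovery site:", sorted(overlap))
else:
    # Species in the body that are absent from the site argue against it.
    print("Species mismatch with the recovery site:",
          sorted(diatoms_in_body - diatoms_in_water))
```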
History of discovery
The first illustrations of diatoms are found in an article from 1703 in Transactions of the Royal Society showing unmistakable drawings of Tabellaria. Although the publication was authored by an unnamed English gentleman, there is recent evidence that he was Charles King of Staffordshire. The first formally identified diatom, the colonial Bacillaria paxillifera, was discovered and described in 1783 by Danish naturalist Otto Friedrich Müller. Like many others after him, he wrongly thought that it was an animal due to its ability to move. Even Charles Darwin saw diatom remains in dust whilst in the Cape Verde Islands, although he was not sure what they were. It was only later that they were identified for him as siliceous polygastrics. The infusoria that Darwin later noted in the face paint of Fueguinos, native inhabitants of Tierra del Fuego in the southern end of South America, were later identified in the same way. During his lifetime, the siliceous polygastrics were clarified as belonging to the Diatomaceae, and Darwin struggled to understand the reasons underpinning their beauty. He exchanged opinions with the noted cryptogamist G. H. K. Thwaites on the topic. In the fourth edition of On the Origin of Species, he wrote, "Few objects are more beautiful than the minute siliceous cases of the diatomaceae: were these created that they might be examined and admired under the high powers of the microscope?" and reasoned that their exquisite morphologies must have functional underpinnings rather than having been created purely for humans to admire.
See also
Highly branched isoprenoid, long-chain alkenes produced by a small number of marine diatoms
Notes
References
External links
Diatom EST database
Plankton*Net, taxonomic database including images of diatom species
Life History and Ecology of Diatoms, University of California Museum of Paleontology
Diatoms: 'Nature's Marbles', Eureka site, University of Bergen
Diatom life history and ecology, Microfossil Image Recovery and Circulation for Learning and Education (MIRACLE), University College London
Diatom page, Royal Botanic Garden Edinburgh
Geometry and Pattern in Nature 3: The holes in radiolarian and diatom tests
Diatom QuickFacts, Monterey Bay Aquarium Research Institute
Algae image database Academy of Natural Sciences of Philadelphia (ANSP)
Diatom taxa Academy of Natural Sciences of Philadelphia (ANSP)
An Introduction to the Microscopical Study of Diatoms by Robert B. McLaughlin
Algae | Diatom | [ "Biology" ] | 12,245 | [ "Diatoms", "Algae" ] |
46,408 | https://en.wikipedia.org/wiki/Magenta | Magenta () is a purplish-red color. On color wheels of the RGB (additive) and CMY (subtractive) color models, it is located precisely midway between blue and red. It is one of the four colors of ink used in color printing by an inkjet printer, along with yellow, cyan, and black to make all the other colors. The tone of magenta used in printing, printer's magenta, is redder than the magenta of the RGB (additive) model, the former being closer to rose.
Magenta took its name from an aniline dye made and patented in 1859 by the French chemist François-Emmanuel Verguin, who originally called it fuchsine. It was renamed to celebrate the Italian-French victory at the Battle of Magenta fought between the French and Austrians on 4 June 1859 near the Italian town of Magenta in Lombardy. A virtually identical color, called roseine, was created in 1860 by two British chemists, Edward Chambers Nicholson and George Maule.
The web color magenta is also called fuchsia.
In optics and color science
Magenta is an extra-spectral color, meaning that it is not a hue associated with monochromatic visible light. Magenta is associated with perception of spectral power distributions concentrated mostly in two bands: longer wavelength reddish components and shorter wavelength blueish components.
In the RGB color system, used to create all the colors on a television or computer display, magenta is a secondary color, made by combining equal amounts of red and blue light at a high intensity. In this system, magenta is the complementary color of green, and combining green and magenta light on a black screen will create white.
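A minimal sketch (Python) of the additive relationship just described: full-intensity red plus blue gives the RGB magenta triple, and adding green light to it yields white.

```python
# 8-bit RGB triples (0–255 per channel).
red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)

def add_light(a, b):
    """Additive mixing of two light sources, clipped to the 8-bit range."""
    return tuple(min(255, x + y) for x, y in zip(a, b))

magenta = add_light(red, blue)
print("magenta:", magenta)                             # (255, 0, 255)
print("magenta + green:", add_light(magenta, green))   # (255, 255, 255) = white
```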
In the CMYK color model, used in color printing, it is one of the three primary colors, along with cyan and yellow, used to print all the rest of the colors. If magenta, cyan, and yellow are printed on top of each other on a page, they make black. In this model, magenta is the complementary color of green. If combined, green and magenta ink will look dark brown or black. The magenta used in color printing, sometimes called process magenta, is a darker shade than the color used on computer screens.
In terms of physiology, the color is stimulated in the brain when the eye reports input from short wave blue cone cells along with a sub-sensitivity of the long wave cones which respond secondarily to that same deep blue color, but with little or no input from the middle wave cones. The brain interprets that combination as some hue of magenta or purple, depending on the relative strengths of the cone responses.
In the Munsell color system, magenta is called red-purple.
If the spectrum is wrapped to form a color wheel, magenta (additive secondary) appears midway between red and violet. Violet and red, the two components of magenta, are at opposite ends of the visible spectrum and have very different wavelengths. The additive secondary color magenta is made by combining violet and red light at equal intensity; it is not present in the spectrum itself.
Fuchsia and magenta
The web colors fuchsia and magenta are identical, made by mixing the same proportions of blue and red light. In design and printing, there is more variation. The French version of fuchsia in the RGB color model and in printing contains a higher proportion of red than the American version of fuchsia.
Gallery
History
Fuchsine and magenta dye (1859)
The color magenta was the result of the industrial chemistry revolution of the mid-nineteenth century, which began with the invention by William Perkin of mauveine in 1856, which was the first synthetic aniline dye. The enormous commercial success of the dye and the new color it produced, mauve, inspired other chemists in Europe to develop new colors made from aniline dyes.
In France, François-Emmanuel Verguin, the director of the chemical factory of Louis Rafard near Lyon, tried many different formulae before finally in late 1858 or early 1859, mixing aniline with carbon tetrachloride, producing a reddish-purple dye which he called "fuchsine", after the color of the flower of the fuchsia plant. He quit the Rafard factory and took his color to a firm of paint manufacturers, Francisque and Joseph Renard, who began to manufacture the dye in 1859.
In the same year, two British chemists, Edward Chambers Nicholson and George Maule, working at the laboratory of the paint manufacturer George Simpson, located in Walworth, south of London, made another aniline dye with a similar red-purple color, which they began to manufacture in 1860 under the name "roseine". In 1860, they changed the name of the color to "magenta", in honor of the Battle of Magenta fought by the armies of France and Sardinia against Austrians at Magenta, Lombardy the year before, and the new color became a commercial success.
Starting in 1935, the family of quinacridone dyes was developed. These have colors ranging from red to violet, so nowadays a quinacridone dye is often used for magenta. Various tones of magenta—light, bright, brilliant, vivid, rich, or deep—may be formulated by adding varying amounts of white to quinacridone artist's paints.
Another dye used for magenta is Lithol Rubine BK. One of its uses is as a food coloring.
Process magenta (pigment magenta; printer's magenta) (1890s)
In color printing, the color called process magenta, pigment magenta, or printer's magenta is one of the three primary pigment colors which, along with yellow and cyan, constitute the three subtractive primary colors of pigment. (The secondary colors of pigment are blue, green, and red.) As such, the hue magenta is the complement of green: magenta pigments absorb green light; thus magenta and green are opposite colors.
The CMYK printing process was invented in the 1890s, when newspapers began to publish color comic strips.
Process magenta is not an RGB color, and there is no fixed conversion from CMYK primaries to RGB. Different formulations are used for printer's ink, so there may be variations in the printed color that is pure magenta ink.
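One commonly used naive CMYK-to-RGB mapping illustrates the point: it is an idealised formula, not a colorimetric standard, and real ink-and-profile conversions deviate from it, which is why no fixed conversion exists. A Python sketch:

```python
def naive_cmyk_to_rgb(c, m, y, k):
    """Idealised, device-independent CMYK -> RGB mapping (channels in 0–1).
    Real inks and ICC profiles deviate from this simple formula."""
    r = 255 * (1 - c) * (1 - k)
    g = 255 * (1 - m) * (1 - k)
    b = 255 * (1 - y) * (1 - k)
    return round(r), round(g), round(b)

# Pure process-magenta ink under this idealised mapping maps to (255, 0, 255),
# even though printed process magenta looks noticeably less vivid on paper.
print(naive_cmyk_to_rgb(0.0, 1.0, 0.0, 0.0))
```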
Web colors magenta and fuchsia
The web color magenta is one of the three secondary colors in the RGB color model.
On the RGB color wheel, magenta is the color between rose and violet, and halfway between red and blue.
This color is called magenta in X11 and fuchsia in HTML. In the RGB color model, it is created by combining equal intensities of red and blue light. The two web colors magenta and fuchsia are exactly the same color. Sometimes the web color magenta is called electric magenta or electronic magenta.
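A one-line check (Python) of the equivalence stated above; the #FF00FF hex value is the usual sRGB definition assumed here, not read from any particular specification.

```python
# The usual definitions of the two web color names (assumed values).
named = {"magenta": "#FF00FF", "fuchsia": "#FF00FF"}

assert named["magenta"] == named["fuchsia"]
rgb = tuple(int(named["magenta"][i:i + 2], 16) for i in (1, 3, 5))
print("web magenta / fuchsia as an RGB triple:", rgb)  # (255, 0, 255)
```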
While the magenta used in printing and the web color have the same name, they have important differences. Process magenta (the color used for magenta printing ink—also called printer's or pigment magenta) is much less vivid than the color magenta achievable on a computer screen. CMYK printing technology cannot accurately reproduce on paper the color on the computer screen. When the web color magenta is reproduced on paper, it is called fuchsia and it is physically impossible for it to appear on paper as vivid as on a computer screen.
Colored pencils and crayons called "magenta" are usually colored the color of process magenta (printer's magenta).
In science and culture
In art
Paul Gauguin (1848–1903) used a shade of magenta in 1890 in his portrait of Marie Lagadu, and in some of his South Seas paintings.
Henri Matisse and the members of the Fauvist movement used magenta and other non-traditional colors to surprise viewers, and to move their emotions through the use of bold colors.
Since the mid-1960s, water-based fluorescent magenta paint has been available for painting psychedelic black light paintings, alongside other fluorescent colors such as cerise, chartreuse yellow, blue, and green.
In literature
The color plays a central role in Craig Laurance Gidney's novel A Spectral Hue.
In film
The titular alien entity in the 2019 horror film Color Out of Space, an adaptation of the 1927 H. P. Lovecraft short story The Colour Out of Space, is depicted as being magenta due to the color's extra-spectral status.
In astronomy
Astronomers have reported that spectral class T brown dwarfs (the ones with the coolest temperatures except for the recently discovered Y brown dwarfs) are colored magenta because of absorption by sodium and potassium atoms of light in the green portion of the spectrum.
In biology: magenta insects, birds, fish, and mammals
In botany
Magenta is a common color for flowers, particularly in the tropics and sub-tropics. Because magenta is the complementary color of green, magenta flowers have the highest contrast with the green foliage, and therefore are more visible to the animals needed for their pollination.
In business
The German telecommunications company Deutsche Telekom uses a magenta logo. It has sought to prevent use of any similar color by other businesses, even those in unrelated fields, such as the insurance company Lemonade.
In public transport
Magenta was the English name of Tokyo's Oedo subway line color. It was later changed to ruby.
It is also the color of the Metropolitan line of the London Underground.
In transportation
In aircraft autopilot systems, the path that pilot or plane should follow to its destination is usually indicated in cockpit displays using the color magenta.
In numismatics
The Reserve Bank of India (RBI) issued a Magenta colored banknote of ₹2000 denomination on 8 November 2016 under Mahatma Gandhi New Series. This is the highest currency note printed by RBI that is in active circulation in India.
In vexillology and heraldry
Magenta is an extremely rare color to find on heraldic flags and coats of arms, since its adoption dates back to relatively recent times, although some examples of its use exist.
In politics
Throughout much of Europe, magenta (or variants such as pink or amaranth) is used to symbolise social liberalism or classical liberalism.
The color magenta is used to symbolize anti-racism by the Amsterdam-based anti-racism Magenta Foundation.
In Danish politics, magenta is the color of Det Radikale Venstre, the Danish social-liberal party.
In Austrian politics, it is used to represent NEOS – The New Austria and Liberal Forum, a social liberal party.
In Belgium, it is used by DéFI, a social liberal party.
In Germany, Magenta is one of the colors of the Free Democratic Party, or FDP.
See also
Fuchsia (color)
List of colors
Rose
Shades of magenta
References
External links
Pictures of actual aniline dye samples in various shades of magenta.
Magenta is a product of the brain rather than a spectral frequency
Color Mixing and the Mystery of Magenta , Royal Institution video
Primary colors
Secondary colors
Tertiary colors
Optical spectrum
Web colors | Magenta | [ "Physics" ] | 2,330 | [ "Optical spectrum", "Spectrum (physical sciences)", "Electromagnetic spectrum" ] |
46,439 | https://en.wikipedia.org/wiki/Philosophy%20of%20mathematics | Philosophy of mathematics is the branch of philosophy that deals with the nature of mathematics and its relationship to other areas of philosophy, particularly epistemology and metaphysics. Central questions posed include whether or not mathematical objects are purely abstract entities or are in some way concrete, and in what the relationship such objects have with physical reality consists.
Major themes that are dealt with in philosophy of mathematics include:
Reality: The question is whether mathematics is a pure product of the human mind or whether it has some reality by itself.
Logic and rigor
Relationship with physical reality
Relationship with science
Relationship with applications
Mathematical truth
Nature as human activity (science, art, game, or all together)
Major themes
Reality
Logic and rigor
Mathematical reasoning requires rigor. This means that the definitions must be absolutely unambiguous and the proofs must be reducible to a succession of applications of syllogisms or inference rules, without any use of empirical evidence or intuition.
The rules of rigorous reasoning have been established by the ancient Greek philosophers under the name of logic. Logic is not specific to mathematics, but, in mathematics, the standard of rigor is much higher than elsewhere.
For many centuries, logic, although used for mathematical proofs, belonged to philosophy and was not specifically studied by mathematicians. Around the end of the 19th century, several paradoxes called into question the logical foundation of mathematics, and consequently the validity of the whole of mathematics. This has been called the foundational crisis of mathematics. Some of these paradoxes consist of results that seem to contradict common intuition, such as the possibility of constructing valid non-Euclidean geometries in which the parallel postulate does not hold, the Weierstrass function that is continuous but nowhere differentiable, and the study by Georg Cantor of infinite sets, which led to the consideration of several sizes of infinity (infinite cardinals). Even more striking, Russell's paradox shows that the phrase "the set of all sets" is self-contradictory.
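Russell's paradox, mentioned above, can be stated in one line; a LaTeX rendering of the standard argument:

```latex
% Russell's paradox: let R be the set of all sets that are not members of themselves.
R = \{\, x \mid x \notin x \,\}
\qquad \Longrightarrow \qquad
R \in R \iff R \notin R
% Both sides cannot hold, so no such set R can exist.
```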
Several methods have been proposed to solve the problem by changing the logical framework, such as constructive mathematics and intuitionistic logic. Roughly speaking, the first one requires that every existence theorem must provide an explicit example, and the second one excludes from mathematical reasoning the law of excluded middle and double negation elimination.
These logics have fewer inference rules than classical logic. On the other hand, classical logic was a first-order logic, which means roughly that quantifiers can range only over individual objects, not over sets of them. This means, for example, that the sentence "every non-empty set of natural numbers has a least element" cannot be directly expressed in a first-order theory whose quantifiers range only over the natural numbers. This led to the introduction of higher-order logics, which are presently used commonly in mathematics.
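For instance, the least-element property just mentioned quantifies over sets of natural numbers, so it is naturally a second-order statement; in LaTeX (assuming the usual ordering on the natural numbers):

```latex
% The least-element (well-ordering) property of the natural numbers, stated
% with a quantifier ranging over sets -- a second-order statement.
\forall S \subseteq \mathbb{N}\;
\Bigl( S \neq \varnothing \;\longrightarrow\; \exists m \in S\;\; \forall n \in S\; (m \le n) \Bigr)
```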
The problem of the foundations of mathematics was eventually resolved with the rise of mathematical logic as a new area of mathematics. In this framework, a mathematical or logical theory consists of a formal language that defines the well-formed assertions, a set of basic assertions called axioms, and a set of inference rules that allow producing new assertions from one or several known assertions. A theorem of such a theory is either an axiom or an assertion that can be obtained from previously known theorems by the application of an inference rule. The Zermelo–Fraenkel set theory with the axiom of choice, generally called ZFC, is a formal theory in which all of mathematics can be restated; it is used implicitly in all mathematics texts that do not specify explicitly which foundations they are based on. Moreover, the other proposed foundations can be modeled and studied inside ZFC.
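A toy illustration (Python) of the picture just described: a "theory" given by axioms plus a single inference rule, with theorems obtained by closing the axioms under that rule. The propositional assertions and the modus ponens rule below are chosen purely for illustration.

```python
# Toy formal system: assertions are plain strings; the only inference rule is
# modus ponens ("from A and A->B, infer B"); the axioms are arbitrary examples.
axioms = {"p", "p->q", "q->r"}

def modus_ponens(known):
    """Apply the single inference rule once to every implication in `known`."""
    derived = set()
    for assertion in known:
        if "->" in assertion:
            antecedent, _, consequent = assertion.partition("->")
            if antecedent in known:
                derived.add(consequent)
    return derived

# A theorem is an axiom or anything reachable by repeated rule application.
theorems = set(axioms)
while True:
    new = modus_ponens(theorems) - theorems
    if not new:
        break
    theorems |= new

print(sorted(theorems))  # ['p', 'p->q', 'q', 'q->r', 'r']
```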
It follows that "rigor" is no longer a relevant concept for the correctness of mathematics itself, as a proof is either correct or erroneous, and a "rigorous proof" is simply a pleonasm. Where a special concept of rigor comes into play is in the socialized aspects of a proof. In particular, proofs are rarely written in full detail, and some steps of a proof are generally considered as trivial, easy, or straightforward, and therefore left to the reader. As most proof errors occur in these skipped steps, a new proof needs to be verified by other specialists of the subject, and can be considered reliable only after having been accepted by the community of specialists, which may take several years.
Also, the concept of "rigor" may remain useful for teaching beginners what a mathematical proof is.
Relationship with sciences
Mathematics is used in most sciences for modeling phenomena, which then allows predictions to be made from experimental laws. The independence of mathematical truth from any experimentation implies that the accuracy of such predictions depends only on the adequacy of the model. Inaccurate predictions, rather than being caused by invalid mathematical concepts, imply the need to change the mathematical model used. For example, the perihelion precession of Mercury could only be explained after the emergence of Einstein's general relativity, which replaced Newton's law of gravitation as a better mathematical model.
There is still a philosophical debate over whether mathematics is a science. However, in practice, mathematicians are typically grouped with scientists, and mathematics shares much in common with the physical sciences. Like them, it is falsifiable, which means in mathematics that if a result or a theory is wrong, this can be proved by providing a counterexample. As in science, theories and results (theorems) are often obtained from experimentation. In mathematics, the experimentation may consist of computation on selected examples or of the study of figures or other representations of mathematical objects (often mental representations without physical support). For example, when asked how he came about his theorems, Gauss once replied "durch planmässiges Tattonieren" (through systematic experimentation). However, some authors emphasize that mathematics differs from the modern notion of science by not relying on empirical evidence.
Unreasonable effectiveness
The unreasonable effectiveness of mathematics is a phenomenon that was named and first made explicit by physicist Eugene Wigner. It is the fact that many mathematical theories (even the "purest") have applications outside their initial object. These applications may be completely outside their initial area of mathematics, and may concern physical phenomena that were completely unknown when the mathematical theory was introduced. Examples of unexpected applications of mathematical theories can be found in many areas of mathematics.
A notable example is the prime factorization of natural numbers, which was discovered more than 2,000 years before its common use for secure internet communications through the RSA cryptosystem. A second historical example is the theory of ellipses. They were studied by the ancient Greek mathematicians as conic sections (that is, intersections of cones with planes). Almost 2,000 years later, Johannes Kepler discovered that the trajectories of the planets are ellipses.
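As a toy illustration of how prime factorization underlies the RSA cryptosystem (the primes below are far too small to be secure and are chosen only so the arithmetic is easy to follow), a sketch in Python:

```python
# Toy RSA with tiny primes, for illustration only: the security of real RSA
# rests on the difficulty of factoring n = p*q when p and q are very large.
p, q = 61, 53
n = p * q                       # public modulus (3233)
phi = (p - 1) * (q - 1)         # Euler's totient of n (3120)
e = 17                          # public exponent, coprime with phi
d = pow(e, -1, phi)             # private exponent (needs Python 3.8+): 2753

message = 65
ciphertext = pow(message, e, n)      # encryption: m^e mod n  -> 2790
recovered = pow(ciphertext, d, n)    # decryption: c^d mod n  -> 65
print(ciphertext, recovered)
```

Anyone who could factor n back into p and q could recompute d, which is why the difficulty of factorization is what makes the scheme usable for secure communication.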
In the 19th century, the internal development of geometry (pure mathematics) led to the definition and study of non-Euclidean geometries, spaces of dimension higher than three, and manifolds. At that time, these concepts seemed totally disconnected from physical reality, but at the beginning of the 20th century, Albert Einstein developed the theory of relativity, which fundamentally uses these concepts. In particular, the spacetime of special relativity is a non-Euclidean space of dimension four, and the spacetime of general relativity is a (curved) manifold of dimension four.
A striking aspect of the interaction between mathematics and physics is when mathematics drives research in physics. This is illustrated by the discoveries of the positron and of the Ω− baryon. In both cases, the equations of the theories had unexplained solutions, which led to the conjecture of the existence of an unknown particle, and the search for these particles. In both cases, these particles were discovered a few years later by specific experiments.
History
The origin of mathematics is the subject of arguments and disagreements. Whether the birth of mathematics was by chance or induced by necessity during the development of similar subjects, such as physics, remains an area of contention.
Many thinkers have contributed their ideas concerning the nature of mathematics. Today, some philosophers of mathematics aim to give accounts of this form of inquiry and its products as they stand, while others emphasize a role for themselves that goes beyond simple interpretation to critical analysis. There are traditions of mathematical philosophy in both Western philosophy and Eastern philosophy. Western philosophies of mathematics go as far back as Pythagoras, who described the theory "everything is mathematics" (mathematicism), Plato, who paraphrased Pythagoras, and studied the ontological status of mathematical objects, and Aristotle, who studied logic and issues related to infinity (actual versus potential).
Greek philosophy on mathematics was strongly influenced by their study of geometry. For example, at one time, the Greeks held the opinion that 1 (one) was not a number, but rather a unit of arbitrary length. A number was defined as a multitude. Therefore, 3, for example, represented a certain multitude of units, and was thus "truly" a number. At another point, a similar argument was made that 2 was not a number but a fundamental notion of a pair. These views come from the heavily geometric straight-edge-and-compass viewpoint of the Greeks: just as lines drawn in a geometric problem are measured in proportion to the first arbitrarily drawn line, so too are the numbers on a number line measured in proportion to the arbitrary first "number" or "one".
These earlier Greek ideas of numbers were later upended by the discovery of the irrationality of the square root of two. Hippasus, a disciple of Pythagoras, showed that the diagonal of a unit square was incommensurable with its (unit-length) edge: in other words he proved there was no existing (rational) number that accurately depicts the proportion of the diagonal of the unit square to its edge. This caused a significant re-evaluation of Greek philosophy of mathematics. According to legend, fellow Pythagoreans were so traumatized by this discovery that they murdered Hippasus to stop him from spreading his heretical idea. Simon Stevin was one of the first in Europe to challenge Greek ideas in the 16th century. Beginning with Leibniz, the focus shifted strongly to the relationship between mathematics and logic. This perspective dominated the philosophy of mathematics through the time of Boole, Frege and Russell, but was brought into question by developments in the late 19th and early 20th centuries.
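The classical argument behind Hippasus's discovery can be sketched, in modern notation, as a proof by contradiction:

```latex
\text{Assume } \sqrt{2} = \tfrac{p}{q} \text{ with integers } p, q \text{ sharing no common factor.} \\
\text{Then } p^2 = 2q^2, \text{ so } p^2 \text{ is even and hence } p \text{ is even; write } p = 2k. \\
\text{Then } 4k^2 = 2q^2, \text{ so } q^2 = 2k^2 \text{ and } q \text{ is even as well,} \\
\text{contradicting the assumption that } p \text{ and } q \text{ share no common factor.}
```

No ratio of integers can therefore describe the diagonal of the unit square, which is exactly the incommensurability described above.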
Contemporary philosophy
A perennial issue in the philosophy of mathematics concerns the relationship between logic and mathematics at their joint foundations. While 20th-century philosophers continued to ask the questions mentioned at the outset of this article, the philosophy of mathematics in the 20th century was characterized by a predominant interest in formal logic, set theory (both naive set theory and axiomatic set theory), and foundational issues.
It is a profound puzzle that on the one hand mathematical truths seem to have a compelling inevitability, but on the other hand the source of their "truthfulness" remains elusive. Investigations into this issue are known as the foundations of mathematics program.
At the start of the 20th century, philosophers of mathematics were already beginning to divide into various schools of thought about all these questions, broadly distinguished by their pictures of mathematical epistemology and ontology. Three schools, formalism, intuitionism, and logicism, emerged at this time, partly in response to the increasingly widespread worry that mathematics as it stood, and analysis in particular, did not live up to the standards of certainty and rigor that had been taken for granted. Each school addressed the issues that came to the fore at that time, either attempting to resolve them or claiming that mathematics is not entitled to its status as our most trusted knowledge.
Surprising and counter-intuitive developments in formal logic and set theory early in the 20th century led to new questions concerning what was traditionally called the foundations of mathematics. As the century unfolded, the initial focus of concern expanded to an open exploration of the fundamental axioms of mathematics, the axiomatic approach having been taken for granted since the time of Euclid around 300 BCE as the natural basis for mathematics. Notions of axiom, proposition and proof, as well as the notion of a proposition being true of a mathematical object, were formalized, allowing them to be treated mathematically. The Zermelo–Fraenkel axioms for set theory were formulated, providing a conceptual framework in which much mathematical discourse would be interpreted. In mathematics, as in physics, new and unexpected ideas had arisen and significant changes were coming. With Gödel numbering, propositions could be interpreted as referring to themselves or other propositions, enabling inquiry into the consistency of mathematical theories. This reflective critique in which the theory under review "becomes itself the object of a mathematical study" led Hilbert to call such study metamathematics or proof theory.
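The idea behind Gödel numbering, that formulas can be encoded as numbers and thereby become possible subjects of arithmetical statements, can be sketched as follows (the symbol table is invented for the example; the prime-exponent encoding mirrors Gödel's original scheme):

```python
# Sketch of a Goedel-style numbering: each symbol gets a code, and a formula
# s1 s2 ... sk is encoded as 2^c1 * 3^c2 * 5^c3 * ...
# Unique factorization guarantees the formula can be recovered from its number.
# The symbol table below is invented purely for illustration.

SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6, "x": 7}

def primes(k):
    found, n = [], 2
    while len(found) < k:
        if all(n % p for p in found):   # n not divisible by any earlier prime
            found.append(n)
        n += 1
    return found

def godel_number(formula):
    codes = [SYMBOLS[s] for s in formula]
    number = 1
    for base, exponent in zip(primes(len(codes)), codes):
        number *= base ** exponent
    return number

print(godel_number("0=0"))   # 2**1 * 3**3 * 5**1 = 270
```

Because every formula receives a unique number, statements about formulas (including a formula's own provability) can be rephrased as statements about numbers, which is what makes self-reference possible.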
At the middle of the century, a new mathematical theory was created by Samuel Eilenberg and Saunders Mac Lane, known as category theory, and it became a new contender for the natural language of mathematical thinking. As the 20th century progressed, however, philosophical opinions diverged as to just how well-founded were the questions about foundations that were raised at the century's beginning. Hilary Putnam summed up one common view of the situation in the last third of the century by saying:
When philosophy discovers something wrong with science, sometimes science has to be changed—Russell's paradox comes to mind, as does Berkeley's attack on the actual infinitesimal—but more often it is philosophy that has to be changed. I do not think that the difficulties that philosophy finds with classical mathematics today are genuine difficulties; and I think that the philosophical interpretations of mathematics that we are being offered on every hand are wrong, and that "philosophical interpretation" is just what mathematics doesn't need.
Philosophy of mathematics today proceeds along several different lines of inquiry, by philosophers of mathematics, logicians, and mathematicians, and there are many schools of thought on the subject. The schools are addressed separately in the next section, and their assumptions explained.
Contemporary schools of thought
Contemporary schools of thought in the philosophy of mathematics include: artistic, Platonism, mathematicism, logicism, formalism, conventionalism, intuitionism, constructivism, finitism, structuralism, embodied mind theories (Aristotelian realism, psychologism, empiricism), fictionalism, social constructivism, and non-traditional schools.
Artistic
This view holds that mathematics is the aesthetic combination of assumptions, and therefore that mathematics is an art. A famous mathematician who held this view was the British mathematician G. H. Hardy. For Hardy, in his book A Mathematician's Apology, the definition of mathematics was more like the aesthetic combination of concepts.
Platonism
Mathematical Platonism holds that mathematical objects, such as numbers and sets, are abstract entities that exist independently of the human mind and of language, and that mathematical truths are therefore discovered rather than invented.
Mathematicism
Max Tegmark's mathematical universe hypothesis (or mathematicism) goes further than Platonism in asserting that not only do all mathematical objects exist, but nothing else does. Tegmark's sole postulate is: All structures that exist mathematically also exist physically. That is, in the sense that "in those [worlds] complex enough to contain self-aware substructures [they] will subjectively perceive themselves as existing in a physically 'real' world".
Logicism
Logicism is the thesis that mathematics is reducible to logic, and hence nothing but a part of logic. Logicists hold that mathematics can be known a priori, but suggest that our knowledge of mathematics is just part of our knowledge of logic in general, and is thus analytic, not requiring any special faculty of mathematical intuition. In this view, logic is the proper foundation of mathematics, and all mathematical statements are necessary logical truths.
Rudolf Carnap (1931) presents the logicist thesis in two parts:
The concepts of mathematics can be derived from logical concepts through explicit definitions.
The theorems of mathematics can be derived from logical axioms through purely logical deduction.
Gottlob Frege was the founder of logicism. In his seminal Die Grundgesetze der Arithmetik (Basic Laws of Arithmetic) he built up arithmetic from a system of logic with a general principle of comprehension, which he called "Basic Law V" (for concepts F and G, the extension of F equals the extension of G if and only if for all objects a, Fa equals Ga), a principle that he took to be acceptable as part of logic.
Frege's construction was flawed. Bertrand Russell discovered that Basic Law V is inconsistent (this is Russell's paradox). Frege abandoned his logicist program soon after this, but it was continued by Russell and Whitehead. They attributed the paradox to "vicious circularity" and built up what they called ramified type theory to deal with it. In this system, they were eventually able to build up much of modern mathematics but in an altered, and excessively complex form (for example, there were different natural numbers in each type, and there were infinitely many types). They also had to make several compromises in order to develop much of mathematics, such as the "axiom of reducibility". Even Russell said that this axiom did not really belong to logic.
Modern logicists (like Bob Hale, Crispin Wright, and perhaps others) have returned to a program closer to Frege's. They have abandoned Basic Law V in favor of abstraction principles such as Hume's principle (the number of objects falling under the concept F equals the number of objects falling under the concept G if and only if the extension of F and the extension of G can be put into one-to-one correspondence). Frege required Basic Law V to be able to give an explicit definition of the numbers, but all the properties of numbers can be derived from Hume's principle. This would not have been enough for Frege because (to paraphrase him) it does not exclude the possibility that the number 3 is in fact Julius Caesar. In addition, many of the weakened principles that they have had to adopt to replace Basic Law V no longer seem so obviously analytic, and thus purely logical.
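In modern notation, the two principles discussed above can be stated as follows, writing $\varepsilon F$ for the extension of the concept $F$, $\# F$ for the number of $F$s, and $F \approx G$ to mean that the $F$s and the $G$s can be put into one-to-one correspondence:

```latex
\text{Basic Law V:}\qquad \varepsilon F = \varepsilon G \;\leftrightarrow\; \forall a\,(Fa \leftrightarrow Ga) \\[4pt]
\text{Hume's principle:}\qquad \# F = \# G \;\leftrightarrow\; F \approx G
```

The first biconditional is the one Russell showed to be inconsistent; the second is the weaker abstraction principle from which the neo-logicists derive the properties of the numbers.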
Formalism
Formalism holds that mathematical statements may be thought of as statements about the consequences of certain string manipulation rules. For example, in the "game" of Euclidean geometry (which is seen as consisting of some strings called "axioms", and some "rules of inference" to generate new strings from given ones), one can prove that the Pythagorean theorem holds (that is, one can generate the string corresponding to the Pythagorean theorem). According to formalism, mathematical truths are not about numbers and sets and triangles and the like—in fact, they are not "about" anything at all.
Another version of formalism is known as deductivism. In deductivism, the Pythagorean theorem is not an absolute truth, but a relative one: it holds if it follows deductively from the appropriate axioms. The same is held to be true for all other mathematical statements.
Formalism need not mean that mathematics is nothing more than a meaningless symbolic game. It is usually hoped that there exists some interpretation in which the rules of the game hold. (Compare this position to structuralism.) But it does allow the working mathematician to continue in his or her work and leave such problems to the philosopher or scientist. Many formalists would say that in practice, the axiom systems to be studied will be suggested by the demands of science or other areas of mathematics.
A major early proponent of formalism was David Hilbert, whose program was intended to be a complete and consistent axiomatization of all of mathematics. Hilbert aimed to show the consistency of mathematical systems from the assumption that the "finitary arithmetic" (a subsystem of the usual arithmetic of the positive integers, chosen to be philosophically uncontroversial) was consistent. Hilbert's goals of creating a system of mathematics that is both complete and consistent were seriously undermined by the second of Gödel's incompleteness theorems, which states that sufficiently expressive consistent axiom systems can never prove their own consistency. Since any such axiom system would contain the finitary arithmetic as a subsystem, Gödel's theorem implied that it would be impossible to prove the system's consistency relative to that (since it would then prove its own consistency, which Gödel had shown was impossible). Thus, in order to show that any axiomatic system of mathematics is in fact consistent, one needs to first assume the consistency of a system of mathematics that is in a sense stronger than the system to be proven consistent.
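In symbols, the second incompleteness theorem referred to above can be stated as follows, where $T$ is any consistent axiom system containing enough arithmetic and $\operatorname{Cons}(T)$ is the arithmetized statement (obtained via Gödel numbering) that no contradiction is derivable from $T$:

```latex
\text{if } T \text{ is consistent, then } T \nvdash \operatorname{Cons}(T).
```

This is why Hilbert's hoped-for consistency proof cannot be carried out inside any system that is at least as strong as the finitary arithmetic it was meant to secure.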
Hilbert was initially a deductivist, but, as may be clear from above, he considered certain metamathematical methods to yield intrinsically meaningful results and was a realist with respect to the finitary arithmetic. Later, he held the opinion that there was no other meaningful mathematics whatsoever, regardless of interpretation.
Other formalists, such as Rudolf Carnap, Alfred Tarski, and Haskell Curry, considered mathematics to be the investigation of formal axiom systems. Mathematical logicians study formal systems but are just as often realists as they are formalists.
Formalists are relatively tolerant and inviting to new approaches to logic, non-standard number systems, new set theories, etc. The more games we study, the better. However, in all three of these examples, motivation is drawn from existing mathematical or philosophical concerns. The "games" are usually not arbitrary.
The main critique of formalism is that the actual mathematical ideas that occupy mathematicians are far removed from the string manipulation games mentioned above. Formalism is thus silent on the question of which axiom systems ought to be studied, as none is more meaningful than another from a formalistic point of view.
Recently, some formalist mathematicians have proposed that all of our formal mathematical knowledge should be systematically encoded in computer-readable formats, so as to facilitate automated proof checking of mathematical proofs and the use of interactive theorem proving in the development of mathematical theories and computer software. Because of their close connection with computer science, this idea is also advocated by mathematical intuitionists and constructivists in the "computability" tradition.
Conventionalism
The French mathematician Henri Poincaré was among the first to articulate a conventionalist view. Poincaré's use of non-Euclidean geometries in his work on differential equations convinced him that Euclidean geometry should not be regarded as a priori truth. He held that axioms in geometry should be chosen for the results they produce, not for their apparent coherence with human intuitions about the physical world.
Intuitionism
In mathematics, intuitionism is a program of methodological reform whose motto is that "there are no non-experienced mathematical truths" (L. E. J. Brouwer). From this springboard, intuitionists seek to reconstruct what they consider to be the corrigible portion of mathematics in accordance with Kantian concepts of being, becoming, intuition, and knowledge. Brouwer, the founder of the movement, held that mathematical objects arise from the a priori forms of the volitions that inform the perception of empirical objects.
A major force behind intuitionism was L. E. J. Brouwer, who rejected the usefulness of formalized logic of any sort for mathematics. His student Arend Heyting postulated an intuitionistic logic, different from the classical Aristotelian logic; this logic does not contain the law of the excluded middle and therefore frowns upon proofs by contradiction. The axiom of choice is also rejected in most intuitionistic set theories, though in some versions it is accepted.
In intuitionism, the term "explicit construction" is not cleanly defined, and that has led to criticisms. Attempts have been made to use the concepts of Turing machine or computable function to fill this gap, leading to the claim that only questions regarding the behavior of finite algorithms are meaningful and should be investigated in mathematics. This has led to the study of the computable numbers, first introduced by Alan Turing. Not surprisingly, then, this approach to mathematics is sometimes associated with theoretical computer science.
Constructivism
Like intuitionism, constructivism involves the regulative principle that only mathematical entities which can be explicitly constructed in a certain sense should be admitted to mathematical discourse. In this view, mathematics is an exercise of the human intuition, not a game played with meaningless symbols. Instead, it is about entities that we can create directly through mental activity. In addition, some adherents of these schools reject non-constructive proofs, such as using proof by contradiction when showing the existence of an object or when trying to establish the truth of some proposition. Important work was done by Errett Bishop, who managed to prove versions of the most important theorems in real analysis as constructive analysis in his 1967 Foundations of Constructive Analysis.
Finitism
Finitism is an extreme form of constructivism, according to which a mathematical object does not exist unless it can be constructed from natural numbers in a finite number of steps. In her book Philosophy of Set Theory, Mary Tiles characterized those who allow countably infinite objects as classical finitists, and those who deny even countably infinite objects as strict finitists.
The most famous proponent of finitism was Leopold Kronecker, who said: "God made the integers, all else is the work of man."
Ultrafinitism is an even more extreme version of finitism, which rejects not only infinities but finite quantities that cannot feasibly be constructed with available resources. Another variant of finitism is Euclidean arithmetic, a system developed by John Penn Mayberry in his book The Foundations of Mathematics in the Theory of Sets. Mayberry's system is Aristotelian in general inspiration and, despite his strong rejection of any role for operationalism or feasibility in the foundations of mathematics, comes to somewhat similar conclusions, such as, for instance, that super-exponentiation is not a legitimate finitary function.
Structuralism
Structuralism is a position holding that mathematical theories describe structures, and that mathematical objects are exhaustively defined by their places in such structures, consequently having no intrinsic properties. For instance, it would maintain that all that needs to be known about the number 1 is that it is the first whole number after 0. Likewise all the other whole numbers are defined by their places in a structure, the number line. Other examples of mathematical objects might include lines and planes in geometry, or elements and operations in abstract algebra.
Structuralism is an epistemologically realistic view in that it holds that mathematical statements have an objective truth value. However, its central claim only relates to what kind of entity a mathematical object is, not to what kind of existence mathematical objects or structures have (not, in other words, to their ontology). The kind of existence mathematical objects have would clearly be dependent on that of the structures in which they are embedded; different sub-varieties of structuralism make different ontological claims in this regard.
The ante rem structuralism ("before the thing") has a similar ontology to Platonism. Structures are held to have a real but abstract and immaterial existence. As such, it faces the standard epistemological problem of explaining the interaction between such abstract structures and flesh-and-blood mathematicians.
The in re structuralism ("in the thing") is the equivalent of Aristotelian realism. Structures are held to exist inasmuch as some concrete system exemplifies them. This incurs the usual issues that some perfectly legitimate structures might accidentally happen not to exist, and that a finite physical world might not be "big" enough to accommodate some otherwise legitimate structures.
The post rem structuralism ("after the thing") is anti-realist about structures in a way that parallels nominalism. Like nominalism, the post rem approach denies the existence of abstract mathematical objects with properties other than their place in a relational structure. According to this view mathematical systems exist, and have structural features in common. If something is true of a structure, it will be true of all systems exemplifying the structure. However, it is merely instrumental to talk of structures being "held in common" between systems: they in fact have no independent existence.
Embodied mind theories
Embodied mind theories hold that mathematical thought is a natural outgrowth of the human cognitive apparatus, which finds itself in our physical universe. For example, the abstract concept of number springs from the experience of counting discrete objects (requiring human senses such as sight and touch for detecting the objects, and signalling from the brain). It is held that mathematics is not universal and does not exist in any real sense, other than in human brains. Humans construct, but do not discover, mathematics.
The cognitive processes of pattern-finding and distinguishing objects can also be studied by neuroscience, if mathematics is considered to be relevant to a natural world (as in realism or some degree of it, as opposed to pure solipsism).
Its actual relevance to reality, while accepted as a trustworthy approximation (it is also suggested that the evolution of perceptions, the body, and the senses may have been necessary for survival), is not necessarily accurate to a full realism, and remains subject to flaws such as illusions, assumptions (and consequently the foundations and axioms on which humans have built mathematics), generalisations, deception, and hallucinations. As such, this may also raise questions for the modern scientific method and its compatibility with general mathematics: while relatively reliable, it is still limited by what can be measured by empiricism, which may not be as reliable as previously assumed (see also: 'counterintuitive' concepts in physics such as quantum nonlocality and action at a distance).
Another issue is that one numeral system may not necessarily be applicable to problem solving. Subjects such as complex numbers or imaginary numbers require specific changes to more commonly used axioms of mathematics; otherwise they cannot be adequately understood.
Alternatively, computer programmers may use hexadecimal for its 'human-friendly' representation of binary-coded values, rather than decimal (convenient for counting because humans have ten fingers). The axioms or logical rules behind mathematics also vary through time (such as the adoption and invention of zero).
As perceptions from the human brain are subject to illusions, assumptions, deceptions, (induced) hallucinations and cognitive errors in a general context, it can be questioned whether they are accurate or strictly indicative of truth (see also: philosophy of being), as can the nature of empiricism itself in relation to the universe and whether it is independent of the senses and the universe.
The human mind has no special claim on reality or approaches to it built out of math. If such constructs as Euler's identity are true then they are true as a map of the human mind and cognition.
Embodied mind theorists thus explain the effectiveness of mathematics—mathematics was constructed by the brain in order to be effective in this universe.
The most accessible, famous, and infamous treatment of this perspective is Where Mathematics Comes From, by George Lakoff and Rafael E. Núñez. In addition, mathematician Keith Devlin has investigated similar concepts with his book The Math Instinct, as has neuroscientist Stanislas Dehaene with his book The Number Sense.
Aristotelian realism
Aristotelian realism holds that mathematics studies properties such as symmetry, continuity and order that can be literally realized in the physical world (or in any other world there might be). It contrasts with Platonism in holding that the objects of mathematics, such as numbers, do not exist in an "abstract" world but can be physically realized. For example, the number 4 is realized in the relation between a heap of parrots and the universal "being a parrot" that divides the heap into so many parrots. Aristotelian realism is defended by James Franklin and the Sydney School in the philosophy of mathematics and is close to the view of Penelope Maddy that when an egg carton is opened, a set of three eggs is perceived (that is, a mathematical entity realized in the physical world). A problem for Aristotelian realism is what account to give of higher infinities, which may not be realizable in the physical world.
The Euclidean arithmetic developed by John Penn Mayberry in his book The Foundations of Mathematics in the Theory of Sets also falls into the Aristotelian realist tradition. Mayberry, following Euclid, considers numbers to be simply "definite multitudes of units" realized in nature—such as "the members of the London Symphony Orchestra" or "the trees in Birnam wood". Whether or not there are definite multitudes of units for which Euclid's Common Notion 5 (the whole is greater than the part) fails and which would consequently be reckoned as infinite is for Mayberry essentially a question about Nature and does not entail any transcendental suppositions.
Psychologism
Psychologism in the philosophy of mathematics is the position that mathematical concepts and/or truths are grounded in, derived from or explained by psychological facts (or laws).
John Stuart Mill seems to have been an advocate of a type of logical psychologism, as were many 19th-century German logicians such as Sigwart and Erdmann as well as a number of psychologists, past and present: for example, Gustave Le Bon. Psychologism was famously criticized by Frege in his The Foundations of Arithmetic, and many of his works and essays, including his review of Husserl's Philosophy of Arithmetic. Edmund Husserl, in the first volume of his Logical Investigations, called "The Prolegomena of Pure Logic", criticized psychologism thoroughly and sought to distance himself from it. The "Prolegomena" is considered a more concise, fair, and thorough refutation of psychologism than the criticisms made by Frege, and also it is considered today by many as being a memorable refutation for its decisive blow to psychologism. Psychologism was also criticized by Charles Sanders Peirce and Maurice Merleau-Ponty.
Empiricism
Mathematical empiricism is a form of realism that denies that mathematics can be known a priori at all. It says that we discover mathematical facts by empirical research, just like facts in any of the other sciences. It is not one of the classical three positions advocated in the early 20th century, but primarily arose in the middle of the century. However, an important early proponent of a view like this was John Stuart Mill. Mill's view was widely criticized, because, according to critics, such as A.J. Ayer, it makes statements like "2 + 2 = 4" come out as uncertain, contingent truths, which we can only learn by observing instances of two pairs coming together and forming a quartet.
Karl Popper was another philosopher to point out empirical aspects of mathematics, observing that "most mathematical theories are, like those of physics and biology, hypothetico-deductive: pure mathematics therefore turns out to be much closer to the natural sciences whose hypotheses are conjectures, than it seemed even recently." Popper also noted he would "admit a system as empirical or scientific only if it is capable of being tested by experience."
Contemporary mathematical empiricism, formulated by W. V. O. Quine and Hilary Putnam, is primarily supported by the indispensability argument: mathematics is indispensable to all empirical sciences, and if we want to believe in the reality of the phenomena described by the sciences, we ought also believe in the reality of those entities required for this description. That is, since physics needs to talk about electrons to say why light bulbs behave as they do, then electrons must exist. Since physics needs to talk about numbers in offering any of its explanations, then numbers must exist. In keeping with Quine and Putnam's overall philosophies, this is a naturalistic argument. It argues for the existence of mathematical entities as the best explanation for experience, thus stripping mathematics of being distinct from the other sciences.
Putnam strongly rejected the term "Platonist" as implying an over-specific ontology that was not necessary to mathematical practice in any real sense. He advocated a form of "pure realism" that rejected mystical notions of truth and accepted much quasi-empiricism in mathematics. This grew from the increasingly popular assertion in the late 20th century that no one foundation of mathematics could ever be proven to exist. It is also sometimes called "postmodernism in mathematics" although that term is considered overloaded by some and insulting by others. Quasi-empiricism argues that in doing their research, mathematicians test hypotheses as well as prove theorems. A mathematical argument can transmit falsity from the conclusion to the premises just as well as it can transmit truth from the premises to the conclusion. Putnam has argued that any theory of mathematical realism would include quasi-empirical methods. He proposed that an alien species doing mathematics might well rely on quasi-empirical methods primarily, often being willing to forgo rigorous and axiomatic proofs, and still be doing mathematics—at perhaps a somewhat greater risk of failure of their calculations. He gave a detailed argument for this in New Directions. Quasi-empiricism was also developed by Imre Lakatos.
The most important criticism of empirical views of mathematics is approximately the same as that raised against Mill. If mathematics is just as empirical as the other sciences, then this suggests that its results are just as fallible as theirs, and just as contingent. In Mill's case the empirical justification comes directly, while in Quine's case it comes indirectly, through the coherence of our scientific theory as a whole, i.e. consilience after E.O. Wilson. Quine suggests that mathematics seems completely certain because the role it plays in our web of belief is extraordinarily central, and that it would be extremely difficult for us to revise it, though not impossible.
For a philosophy of mathematics that attempts to overcome some of the shortcomings of Quine and Gödel's approaches by taking aspects of each see Penelope Maddy's Realism in Mathematics. Another example of a realist theory is the embodied mind theory.
Fictionalism
Mathematical fictionalism was brought to fame in 1980 when Hartry Field published Science Without Numbers, which rejected and in fact reversed Quine's indispensability argument. Where Quine suggested that mathematics was indispensable for our best scientific theories, and therefore should be accepted as a body of truths talking about independently existing entities, Field suggested that mathematics was dispensable, and therefore should be considered as a body of falsehoods not talking about anything real. He did this by giving a complete axiomatization of Newtonian mechanics with no reference to numbers or functions at all. He started with the "betweenness" of Hilbert's axioms to characterize space without coordinatizing it, and then added extra relations between points to do the work formerly done by vector fields. Hilbert's geometry is mathematical, because it talks about abstract points, but in Field's theory, these points are the concrete points of physical space, so no special mathematical objects at all are needed.
Having shown how to do science without using numbers, Field proceeded to rehabilitate mathematics as a kind of useful fiction. He showed that mathematical physics is a conservative extension of his non-mathematical physics (that is, every physical fact provable in mathematical physics is already provable from Field's system), so that mathematics is a reliable process whose physical applications are all true, even though its own statements are false. Thus, when doing mathematics, we can see ourselves as telling a sort of story, talking as if numbers existed. For Field, a statement like "2 + 2 = 4" is just as fictitious as "Sherlock Holmes lived at 221B Baker Street"—but both are true according to the relevant fictions.
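The conservativity claim can be stated schematically. Writing $P$ for Field's nominalistic (number-free) physics, $M$ for the mathematical apparatus, and $A$ for any assertion in the nominalistic language, the claim is that

```latex
P + M \vdash A \;\Longrightarrow\; P \vdash A ,
```

so adding mathematics yields no purely physical consequences that were not already available without it.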
Another fictionalist, Mary Leng, expresses the perspective succinctly by dismissing any seeming connection between mathematics and the physical world as "a happy coincidence". This rejection separates fictionalism from other forms of anti-realism, which see mathematics itself as artificial but still bounded or fitted to reality in some way.
By this account, there are no metaphysical or epistemological problems special to mathematics. The only worries left are the general worries about non-mathematical physics, and about fiction in general. Field's approach has been very influential, but is widely rejected. This is in part because of the requirement of strong fragments of second-order logic to carry out his reduction, and because the statement of conservativity seems to require quantification over abstract models or deductions.
Social constructivism
Social constructivism sees mathematics primarily as a social construct, as a product of culture, subject to correction and change. Like the other sciences, mathematics is viewed as an empirical endeavor whose results are constantly evaluated and may be discarded. However, while on an empiricist view the evaluation is some sort of comparison with "reality", social constructivists emphasize that the direction of mathematical research is dictated by the fashions of the social group performing it or by the needs of the society financing it. However, although such external forces may change the direction of some mathematical research, there are strong internal constraints—the mathematical traditions, methods, problems, meanings and values into which mathematicians are enculturated—that work to conserve the historically defined discipline.
This runs counter to the traditional beliefs of working mathematicians, that mathematics is somehow pure or objective. But social constructivists argue that mathematics is in fact grounded by much uncertainty: as mathematical practice evolves, the status of previous mathematics is cast into doubt, and is corrected to the degree it is required or desired by the current mathematical community. This can be seen in the development of analysis from reexamination of the calculus of Leibniz and Newton. They argue further that finished mathematics is often accorded too much status, and folk mathematics not enough, due to an overemphasis on axiomatic proof and peer review as practices.
The social nature of mathematics is highlighted in its subcultures. Major discoveries can be made in one branch of mathematics and be relevant to another, yet the relationship goes undiscovered for lack of social contact between mathematicians. Social constructivists argue each speciality forms its own epistemic community and often has great difficulty communicating, or motivating the investigation of unifying conjectures that might relate different areas of mathematics. Social constructivists see the process of "doing mathematics" as actually creating the meaning, while social realists see a deficiency either of human capacity to abstractify, or of human's cognitive bias, or of mathematicians' collective intelligence as preventing the comprehension of a real universe of mathematical objects. Social constructivists sometimes reject the search for foundations of mathematics as bound to fail, as pointless or even meaningless.
Contributions to this school have been made by Imre Lakatos and Thomas Tymoczko, although it is not clear that either would endorse the title. More recently Paul Ernest has explicitly formulated a social constructivist philosophy of mathematics. Some consider the work of Paul Erdős as a whole to have advanced this view (although he personally rejected it) because of his uniquely broad collaborations, which prompted others to see and study "mathematics as a social activity", e.g., via the Erdős number. Reuben Hersh has also promoted the social view of mathematics, calling it a "humanistic" approach, similar to but not quite the same as that associated with Alvin White; one of Hersh's co-authors, Philip J. Davis, has expressed sympathy for the social view as well.
Beyond the traditional schools
Unreasonable effectiveness
Rather than focus on narrow debates about the true nature of mathematical truth, or even on practices unique to mathematicians such as the proof, a growing movement from the 1960s to the 1990s began to question the idea of seeking foundations or finding any one right answer to why mathematics works. The starting point for this was Eugene Wigner's famous 1960 paper "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", in which he argued that the happy coincidence of mathematics and physics being so well matched seemed to be unreasonable and hard to explain.
Popper's two senses of number statements
Realist and constructivist theories are normally taken to be contraries. However, Karl Popper argued that a number statement such as "2 apples + 2 apples = 4 apples" can be taken in two senses. In one sense it is irrefutable and logically true. In the second sense it is factually true and falsifiable. Another way of putting this is to say that a single number statement can express two propositions: one of which can be explained on constructivist lines; the other on realist lines.
Philosophy of language
Innovations in the philosophy of language during the 20th century renewed interest in whether mathematics is, as is often said, the language of science. Although some mathematicians and philosophers would accept the statement "mathematics is a language" (most consider that the language of mathematics is a part of mathematics to which mathematics cannot be reduced), linguists believe that the implications of such a statement must be considered. For example, the tools of linguistics are not generally applied to the symbol systems of mathematics, that is, mathematics is studied in a markedly different way from other languages. If mathematics is a language, it is a different type of language from natural languages. Indeed, because of the need for clarity and specificity, the language of mathematics is far more constrained than natural languages studied by linguists. However, the methods developed by Frege and Tarski for the study of mathematical language have been extended greatly by Tarski's student Richard Montague and other linguists working in formal semantics to show that the distinction between mathematical language and natural language may not be as great as it seems.
Mohan Ganesalingam has analysed mathematical language using tools from formal linguistics. Ganesalingam notes that some features of natural language are not necessary when analysing mathematical language (such as tense), but many of the same analytical tools can be used (such as context-free grammars). One important difference is that mathematical objects have clearly defined types, which can be explicitly defined in a text: "Effectively, we are allowed to introduce a word in one part of a sentence, and declare its part of speech in another; and this operation has no analogue in natural language."
Arguments
Indispensability argument for realism
This argument, associated with Willard Quine and Hilary Putnam, is considered by Stephen Yablo to be one of the most challenging arguments in favor of the acceptance of the existence of abstract mathematical entities, such as numbers and sets. The form of the argument is as follows.
One must have ontological commitments to all entities that are indispensable to the best scientific theories, and to those entities only (commonly referred to as "all and only").
Mathematical entities are indispensable to the best scientific theories. Therefore,
One must have ontological commitments to mathematical entities.
The justification for the first premise is the most controversial. Both Putnam and Quine invoke naturalism to justify the exclusion of all non-scientific entities, and hence to defend the "only" part of "all and only". The assertion that "all" entities postulated in scientific theories, including numbers, should be accepted as real is justified by confirmation holism. Since theories are not confirmed in a piecemeal fashion, but as a whole, there is no justification for excluding any of the entities referred to in well-confirmed theories. This puts the nominalist who wishes to exclude the existence of sets and non-Euclidean geometry, but to include the existence of quarks and other undetectable entities of physics, for example, in a difficult position.
Epistemic argument against realism
The anti-realist "epistemic argument" against Platonism has been made by Paul Benacerraf and Hartry Field. Platonism posits that mathematical objects are abstract entities. By general agreement, abstract entities cannot interact causally with concrete, physical entities ("the truth-values of our mathematical assertions depend on facts involving Platonic entities that reside in a realm outside of space-time"). Whilst our knowledge of concrete, physical objects is based on our ability to perceive them, and therefore to causally interact with them, there is no parallel account of how mathematicians come to have knowledge of abstract objects. Another way of making the point is that if the Platonic world were to disappear, it would make no difference to the ability of mathematicians to generate proofs, etc., which is already fully accountable in terms of physical processes in their brains.
Field developed his views into fictionalism. Benacerraf also developed the philosophy of mathematical structuralism, according to which there are no mathematical objects. Nonetheless, some versions of structuralism are compatible with some versions of realism.
The argument hinges on the idea that a satisfactory naturalistic account of thought processes in terms of brain processes can be given for mathematical reasoning along with everything else. One line of defense is to maintain that this is false, so that mathematical reasoning uses some special intuition that involves contact with the Platonic realm. A modern form of this argument is given by Sir Roger Penrose.
Another line of defense is to maintain that abstract objects are relevant to mathematical reasoning in a way that is non-causal, and not analogous to perception. This argument is developed by Jerrold Katz in his 2000 book Realistic Rationalism.
A more radical defense is denial of physical reality, i.e. the mathematical universe hypothesis. In that case, a mathematician's knowledge of mathematics is one mathematical object making contact with another.
Aesthetics
Many practicing mathematicians have been drawn to their subject because of a sense of beauty they perceive in it. One sometimes hears the sentiment that mathematicians would like to leave philosophy to the philosophers and get back to mathematics—where, presumably, the beauty lies.
In his work on the divine proportion, H.E. Huntley relates the feeling of reading and understanding someone else's proof of a theorem of mathematics to that of a viewer of a masterpiece of art—the reader of a proof has a similar sense of exhilaration at understanding as the original author of the proof, much as, he argues, the viewer of a masterpiece has a sense of exhilaration similar to the original painter or sculptor. Indeed, one can study mathematical and scientific writings as literature.
Philip J. Davis and Reuben Hersh have commented that the sense of mathematical beauty is universal amongst practicing mathematicians. By way of example, they provide two proofs of the irrationality of the square root of 2. The first is the traditional proof by contradiction, ascribed to Euclid; the second is a more direct proof involving the fundamental theorem of arithmetic that, they argue, gets to the heart of the issue. Davis and Hersh argue that mathematicians find the second proof more aesthetically appealing because it gets closer to the nature of the problem.
Paul Erdős was well known for his notion of a hypothetical "Book" containing the most elegant or beautiful mathematical proofs. There is not universal agreement that a result has one "most elegant" proof; Gregory Chaitin has argued against this idea.
Philosophers have sometimes criticized mathematicians' sense of beauty or elegance as being, at best, vaguely stated. By the same token, however, philosophers of mathematics have sought to characterize what makes one proof more desirable than another when both are logically sound.
Another aspect of aesthetics concerning mathematics is mathematicians' views towards the possible uses of mathematics for purposes deemed unethical or inappropriate. The best-known exposition of this view occurs in G. H. Hardy's book A Mathematician's Apology, in which Hardy argues that pure mathematics is superior in beauty to applied mathematics precisely because it cannot be used for war and similar ends.
See also
Definitions of mathematics
Formal language
Foundations of mathematics
Golden ratio
Model theory
Non-standard analysis
Philosophy of language
Philosophy of logic
Philosophy of science
Philosophy of physics
Philosophy of probability
Rule of inference
Science studies
Scientific method
Related works
The Analyst
Euclid's Elements
"On Formally Undecidable Propositions of Principia Mathematica and Related Systems"
"On Computable Numbers, with an Application to the Entscheidungsproblem"
Introduction to Mathematical Philosophy
"New Foundations for Mathematical Logic"
Principia Mathematica
The Simplest Mathematics
Historical topics
History and philosophy of science
History of mathematics
History of philosophy
Journals
Philosophia Mathematica
Philosophy of Mathematics Education Journal
Notes
References
Further reading
External links
Mathematical Structuralism, Internet Encyclopaedia of Philosophy
Abstractionism, Internet Encyclopaedia of Philosophy
The London Philosophy Study Guide offers many suggestions on what to read, depending on the student's familiarity with the subject:
Philosophy of Mathematics
Mathematical Logic
Set Theory & Further Logic
R.B. Jones' philosophy of mathematics page | Philosophy of mathematics | [
"Mathematics"
] | 10,827 | [
"nan"
] |
46,545 | https://en.wikipedia.org/wiki/Telecommunications%20network | A telecommunications network is a group of nodes interconnected by telecommunications links that are used to exchange messages between the nodes. The links may use a variety of technologies based on the methodologies of circuit switching, message switching, or packet switching, to pass messages and signals.
Multiple nodes may cooperate to pass the message from an originating node to the destination node, via multiple network hops. For this routing function, each node in the network is assigned a network address for identification and locating it on the network. The collection of addresses in the network is called the address space of the network.
Examples of telecommunications networks include computer networks, the Internet, the public switched telephone network (PSTN), the global Telex network, the aeronautical ACARS network, and the wireless radio networks of cell phone telecommunication providers.
Network structure
In general, every telecommunications network conceptually consists of three parts, or planes (so called because they can be thought of as being, and often are, separate overlay networks):
The data plane (also user plane, bearer plane, or forwarding plane) carries the network's users' traffic, the actual payload.
The control plane carries control information (also known as signaling).
The management plane carries the operations, administration and management traffic required for network management. The management plane is sometimes considered a part of the control plane.
Data networks
Data networks are used extensively throughout the world for communication between individuals and organizations. Data networks can be connected to allow users seamless access to resources that are hosted outside of the particular provider they are connected to. The Internet is the best example of the internetworking of many data networks from different organizations.
Terminals attached to IP networks like the Internet are addressed using IP addresses. Protocols of the Internet protocol suite (TCP/IP) provide the control and routing of messages across the IP data network; a small addressing sketch follows the list below. There are many different network structures that IP can be used across to efficiently route messages, for example:
Wide area networks (WAN)
Metropolitan area networks (MAN)
Local area networks (LAN)
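As a small, hypothetical illustration of IP addressing and address spaces (the network prefix below is taken from a documentation range and chosen arbitrarily), Python's standard ipaddress module can show how a network groups its addresses:

```python
import ipaddress

# A /29 network carved from the TEST-NET-1 documentation range, chosen
# arbitrarily for illustration: its address space holds 8 addresses.
net = ipaddress.ip_network("192.0.2.0/29")
print(net.num_addresses)                            # 8
print(list(net.hosts()))                            # the 6 usable host addresses
print(ipaddress.ip_address("192.0.2.5") in net)     # True: address lies in this space
```

Routers use exactly this kind of prefix membership test to decide where to forward a packet next.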
There are three features that differentiate MANs from LANs or WANs:
The network size lies between that of LANs and WANs: a MAN will have a physical area between 5 and 50 km in diameter.
MANs do not generally belong to a single organization. The equipment that interconnects the network, the links, and the MAN itself are often owned by an association or a network provider that provides or leases the service to others.
A MAN is a means for sharing resources at high speeds within the network. It often provides connections to WAN networks for access to resources outside the scope of the MAN.
Data center networks also rely highly on TCP/IP for communication across machines. They connect thousands of servers, are designed to be highly robust, provide low latency and high bandwidth. Data center network topology plays a significant role in determining the level of failure resiliency, ease of incremental expansion, communication bandwidth and latency.
Capacity and speed
In analogy to the improvements in the speed and capacity of digital computers, provided by advances in semiconductor technology and expressed in the doubling of transistor density roughly every two years, which is described empirically by Moore's law, the capacity and speed of telecommunications networks have followed similar advances, for similar reasons. In telecommunication, this is expressed in Edholm's law, proposed by and named after Phil Edholm in 2004. This empirical law holds that the bandwidth of telecommunication networks doubles every 18 months, which has proven to be true since the 1970s. The trend is evident in the Internet, cellular (mobile), wireless and wired local area networks (LANs), and personal area networks. This development is the consequence of rapid advances in the development of metal-oxide-semiconductor technology.
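Stated as a formula, Edholm's law describes exponential growth of bandwidth $B$ over time $t$, with a doubling period $T$ of about 18 months:

```latex
B(t) \approx B_0 \cdot 2^{\,t/T}, \qquad T \approx 18 \text{ months},
```

so, for example, a link capacity would grow by a factor of about $2^{10} \approx 1000$ over 15 years under this trend.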
See also
Transcoder free operation
References
Telecommunications engineering
Network architecture
Telecommunications infrastructure | Telecommunications network | [
"Engineering"
] | 782 | [
"Network architecture",
"Electrical engineering",
"Telecommunications engineering",
"Computer networks engineering"
] |
46,553 | https://en.wikipedia.org/wiki/Escherichia%20coli%20O157%3AH7 | Escherichia coli O157:H7 is a serotype of the bacterial species Escherichia coli and is one of the Shiga-like toxin–producing types of E. coli. It is a cause of disease, typically foodborne illness, through consumption of contaminated and raw food, including raw milk and undercooked ground beef. Infection with this type of pathogenic bacteria may lead to hemorrhagic diarrhea, and to kidney failure; these have been reported to cause the deaths of children younger than five years of age, of elderly patients, and of patients whose immune systems are otherwise compromised.
Transmission is via the fecal–oral route, and most illness has been caused by the distribution of contaminated raw leafy green vegetables, undercooked meat, and raw milk.
Signs and symptoms
E. coli O157:H7 infection often causes severe, acute hemorrhagic diarrhea (although nonhemorrhagic diarrhea is also possible) and abdominal cramps. Usually little or no fever is present, and the illness resolves in 5 to 10 days. It can also sometimes be asymptomatic.
In some people, particularly children under five years of age, persons whose immune systems are otherwise compromised, and the elderly, the infection can cause hemolytic–uremic syndrome (HUS), in which the red blood cells are destroyed and the kidneys fail. About 2–7% of infections lead to this complication. In the United States, HUS is the principal cause of acute kidney failure in children, and most cases of HUS are caused by E. coli O157:H7.
Bacteriology
Like other strains of E. coli, O157:H7 is gram-negative and oxidase-negative. Unlike many other strains, it does not ferment sorbitol, which provides a basis for clinical laboratory differentiation of the strain. Strains of E. coli that express Shiga and Shiga-like toxins gained that ability via infection with a prophage containing the structural gene coding for the toxin, and nonproducing strains may become infected and produce Shiga-like toxins after incubation with Shiga toxin–positive strains. The prophage responsible seems to have infected the strain's ancestors fairly recently, as viral particles have been observed to replicate in the host if it is stressed in some way (e.g. by antibiotics).
All clinical isolates of E. coli O157:H7 possess the plasmid pO157. The periplasmic catalase is encoded on pO157 and may enhance the virulence of the bacterium by providing additional oxidative protection when infecting the host. E. coli O157:H7 non-hemorrhagic strains are converted to hemorrhagic strains by lysogenic conversion after bacteriophage infection of non-hemorrhagic cells.
Natural habitat
While it is relatively uncommon, the E. coli serotype O157:H7 can naturally be found in the intestinal contents of some cattle, goats, and even sheep. The digestive tract of cattle lacks the Shiga toxin receptor globotriaosylceramide, and thus these animals can be asymptomatic carriers of the bacterium. The prevalence of E. coli O157:H7 in North American feedlot cattle herds ranges from 0 to 60%.
Some cattle may also be so-called "super-shedders" of the bacterium. Super-shedders may be defined as cattle exhibiting rectoanal junction colonization and excreting more than 10³–10⁴ CFU g⁻¹ of feces. Super-shedders have been found to constitute a small proportion of the cattle in a feedlot (<10%), but they may account for >90% of all E. coli O157:H7 excreted.
Transmission
Infection with E. coli O157:H7 can come from ingestion of contaminated food or water, or oral contact with contaminated surfaces. Examples include undercooked ground beef, leafy vegetables, and raw milk. Fields often become contaminated with the bacterium through irrigation processes or contaminated water naturally entering the soil. It is highly virulent, with a low infectious dose: an inoculation of fewer than 10 to 100 colony-forming units (CFU) of E. coli O157:H7 is sufficient to cause infection, compared to over a million CFU for other pathogenic E. coli strains.
Diagnosis
A stool culture can detect the bacterium. The sample is cultured on sorbitol-MacConkey (SMAC) agar, or the variant cefixime potassium tellurite sorbitol-MacConkey agar (CT-SMAC). On SMAC agar, O157:H7 colonies appear clear due to their inability to ferment sorbitol, while the colonies of the usual sorbitol-fermenting serotypes of E. coli appear red. Sorbitol non-fermenting colonies are tested for the somatic O157 antigen before being confirmed as E. coli O157:H7. As with all culture-based methods, diagnosis is time-consuming; swifter diagnosis is possible using rapid E. coli DNA extraction combined with polymerase chain reaction techniques. Newer technologies using fluorescent and antibody detection are also under development.
Prevention
Avoiding the consumption of, or contact with, unpasteurized dairy products, undercooked beef, uncleaned vegetables, and non-disinfected water reduces the risk of an E. coli infection. Proper hand washing with water that has been treated with adequate levels of chlorine or other effective disinfectants after using the lavatory or changing a diaper, especially among children or those with diarrhea, reduces the risk of transmission.
E. coli O157:H7 infection is a nationally reportable disease in the US, Great Britain, and Germany. It is also reportable in most states of Australia including Queensland.
Treatment
While fluid replacement and blood pressure support may be necessary to prevent death from dehydration, most patients recover without treatment in 5–10 days. There is no evidence that antibiotics improve the course of disease, and treatment with antibiotics may precipitate hemolytic–uremic syndrome (HUS). The antibiotics are thought to trigger prophage induction, and the prophages released by the dying bacteria infect other susceptible bacteria, converting them into toxin-producing forms. Antidiarrheal agents, such as loperamide (Imodium), should also be avoided as they may prolong the duration of the infection.
Certain novel treatment strategies, such as the use of anti-induction strategies to prevent toxin production and the use of anti-Shiga toxin antibodies, have also been proposed.
History
The common ancestor of Escherichia coli O157:H7 is estimated by molecular biologists to have originated in the Netherlands around 1890. It is thought that international spread was through animal movements, such as those of Holstein Friesian cattle. E. coli O157:H7 is thought to have moved from Europe to Australia around 1937, to the United States in 1941, to Canada in 1960, and from Australia to New Zealand in 1966.
The first recorded observation of human E. coli O157:H7 infection was in 1975, in association with a sporadic case of hemorrhagic colitis, but it was not identified as pathogenic then. It was first recognized as a human pathogen following a 1982 hemorrhagic colitis outbreak in Oregon and Michigan, in which at least 47 people were sickened by eating beef hamburger patties from a fast food chain that were found to be contaminated with it.
The United States Department of Agriculture banned the sale of ground beef contaminated with the O157:H7 strain in 1994.
Culture and society
The pathogen results in an estimated 2,100 hospitalizations annually in the United States. The illness is often misdiagnosed; therefore, expensive and invasive diagnostic procedures may be performed. Patients who develop HUS often require prolonged hospitalization, dialysis, and long-term followup.
See also
1993 Jack in the Box E. coli outbreak
1996 Odwalla E. coli outbreak
2011 Germany E. coli O104:H4 outbreak
2024 McDonald's E. coli outbreak
Escherichia coli O104:H4
Escherichia coli O121
Food-induced purpura
List of foodborne illness outbreaks
Walkerton E. coli outbreak
References
External links
Haemolytic Uraemic Syndrome Help (HUSH) – a UK based charity
E. coli: Protecting yourself and your family from a sometimes deadly bacterium
Escherichia coli O157:H7 genomes and related information at PATRIC, a Bioinformatics Resource Center funded by NIAID
For more information about reducing your risk of foodborne illness, visit the US Department of Agriculture's Food Safety and Inspection Service website or The Partnership for Food Safety Education | Fight BAC!
briandeer.com, report from The Sunday Times on a UK outbreak, May 17, 1998
CBS5 report on September 2006 outbreak
Escherichia coli
Bovine diseases
Zoonoses
Foodborne illnesses
Infraspecific bacteria taxa
Pathogenic bacteria | Escherichia coli O157:H7 | [
"Biology"
] | 1,959 | [
"Model organisms",
"Escherichia coli"
] |
46,594 | https://en.wikipedia.org/wiki/Straw | Straw is an agricultural byproduct consisting of the dry stalks of cereal plants after the grain and chaff have been removed. It makes up about half of the yield by weight of cereal crops such as barley, oats, rice, rye and wheat. It has a number of different uses, including fuel, livestock bedding and fodder, thatching and basket making.
Straw is usually gathered and stored in a straw bale, which is a bale, or bundle, of straw tightly bound with twine, wire, or string. Straw bales may be square, rectangular, star-shaped or round, and can be very large, depending on the type of baler used.
Uses
Current and historic uses of straw include:
Animal feed
Straw may be fed as part of the roughage component of the diet to cattle or horses that are on a near maintenance level of energy requirement. It has a low digestible energy and nutrient content (as opposed to hay, which is much more nutritious). The heat generated when microorganisms in a herbivore's gut digest straw can be useful in maintaining body temperature in cold climates. Due to the risk of impaction and its poor nutrient profile, it should always be restricted to part of the diet. It may be fed as it is, or chopped into short lengths, known as chaff.
Basketry
Bee skeps and linen baskets are made from continuous lengths of straw that are coiled and bound together. The technique is known as lip work.
Bedding
Straw is commonly used as bedding for ruminants and horses. It may be used as bedding and food for small animals, but this often leads to injuries to mouth, nose and eyes as straw is quite sharp.
The straw-filled mattress, also known as a palliasse, is still used by people in many parts of the world.
Bioplastic
Rice straw, an agricultural waste which is not usually recovered, can be turned into bioplastic with mechanical properties akin to polystyrene in its dry state.
Chemicals
Straw is being investigated as a source of fine chemicals including alkaloids, flavonoids, lignins, phenols, and steroids.
Construction material
In many parts of the world, straw is used to bind clay and concrete. A mixture of clay and straw, known as cob, can be used as a building material. There are many recipes for making cob.
When baled, straw has moderate insulation characteristics (about R-1.5/inch according to Oak Ridge National Lab and Forest Product Lab testing). It can be used, alone or in a post-and-beam construction, to build straw bale houses. When bales are used to build or insulate buildings, the straw bales are commonly finished with earthen plaster. The plastered walls provide some thermal mass, compressive and ductile structural strength, and acceptable fire resistance as well as thermal resistance (insulation), somewhat in excess of North American building code. Straw is an abundant agricultural waste product, and requires little energy to bale and transport for construction. For these reasons, straw bale construction is gaining popularity as part of passive solar and other renewable energy projects.
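A back-of-envelope check in Python of the insulation figure quoted above (about R-1.5 per inch), applied to some common bale thicknesses. The bale thicknesses and the assumed R-20 code-minimum wall value are illustrative assumptions, not figures from this article.

# Back-of-envelope check of straw bale insulation using the ~R-1.5 per inch
# figure quoted above. Bale thicknesses and the assumed R-20 code-minimum
# wall value are illustrative assumptions, not figures from this article.

R_PER_INCH = 1.5
ASSUMED_CODE_MINIMUM = 20  # rough order of magnitude for a framed wall

for thickness_in in (14, 18, 23):  # bales laid on edge or flat
    r_value = R_PER_INCH * thickness_in
    verdict = "exceeds" if r_value > ASSUMED_CODE_MINIMUM else "falls below"
    print(f"{thickness_in}-inch bale wall: about R-{r_value:g}, "
          f"which {verdict} the assumed R-{ASSUMED_CODE_MINIMUM} minimum")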
Wheat straw can be used as a fibrous filler combined with polymers to produce composite lumber.
Enviroboard can be made from straw.
Strawblocks are strawbales that have been recompressed to the density of woodblocks, for compact cargo container shipment, or for straw-bale construction of load-bearing walls that support roof loads, such as a "living" or green roof.
Crafts
Craft usages of straw include:
Corn dollies
Straw marquetry
Straw mobile (straw art)
Straw painting
Straw plaiting
Scarecrows
Japanese Traditional Cat's House
Japanese wara art
Construction site sediment control
Straw bales are sometimes used for sediment control at construction sites. However, bales are often ineffective in protecting water quality and are maintenance-intensive. For these reasons the U.S. Environmental Protection Agency (EPA) and various state agencies recommend use of alternative sediment control practices where possible, such as silt fences, fiber rolls and geotextiles.
They can also be used as burned area emergency response, as ground cover or as in-stream check dams.
Fuel source
The use of straw as a carbon-neutral energy source is increasing rapidly, especially for biobutanol. Straw or hay briquettes are a biofuel substitute for coal.
Straw, processed first as briquettes, has been fed into a biogas plant in Aarhus University, Denmark, in a test to see if higher gas yields could be attained.
The use of straw in large-scale biomass power plants is becoming mainstream in the EU, with several facilities already online. The straw is either used directly in the form of bales, or densified into pellets which allows for the feedstock to be transported over longer distances. Finally, torrefaction of straw with pelletisation is gaining attention, because it increases the energy density of the resource, making it possible to transport it still further. This processing step also makes storage much easier, because torrefied straw pellets are hydrophobic. Torrefied straw in the form of pellets can be directly co-fired with coal or natural gas at very high rates and make use of the processing infrastructures at existing coal and gas plants. Because the torrefied straw pellets have superior structural, chemical and combustion properties to coal, they can replace all coal and turn a coal plant into an entirely biomass-fed power station. First generation pellets are limited to a co-firing rate of 15% in modern IGCC plants.
Gardening
Straw bale gardening is also popular among gardeners who do not have enough space for soil gardening. When properly conditioned, straw bales can be used as a perfect soil substitute.
Hats
There are several styles of straw hats that are made of woven straw.
Many thousands of women and children in England (primarily in the Luton district of Bedfordshire), and large numbers in the United States (mostly Massachusetts), were employed in plaiting straw for making hats. By the late 19th century, vast quantities of plaits were being imported to England from Canton in China, and in the United States most of the straw plait was imported.
A fiber analogous to straw is obtained from the plant Carludovica palmata, and is used to make Panama hats.
Traditional Japanese rain protection consisted of a straw hat and a mino cape.
Horticulture
Straw is used in cucumber houses and for mushroom growing.
In Japan, certain trees are wrapped with straw to protect them from the effects of a hard winter as well as to use them as a trap for parasite insects. (see Komomaki)
It is also used in ponds to reduce algae by changing the nutrient ratios in the water.
The soil under strawberries is covered with straw to protect the ripe berries from dirt, and straw is also used to cover the plants during winter to prevent the cold from killing them.
Straw also makes an excellent mulch.
Packaging
Straw is resistant to being crushed and therefore makes a good packing material. A company in France makes a straw mat sealed in thin plastic sheets.
Straw envelopes for wine bottles have become rarer, but are still to be found at some wine merchants.
Wheat straw is also used in compostable food packaging such as compostable plates. Packaging made from wheat straw can be certified compostable and will biodegrade in a commercial composting environment.
Paper
Straw can be pulped to make paper.
Rope
Rope made from straw was used by thatchers, in the packaging industry and even in iron foundries.
Saekki is a traditional Korean rope made of woven straw.
Shoes
The Chinese wore cailu or caixie, shoes and sandals made of straw, well into modernity.
Koreans wear jipsin, sandals made of straw.
Several types of traditional Japanese shoes, such as waraji and zōri, are made of straw.
In some parts of Germany like Black Forest and Hunsrück people wear straw shoes at home or at carnival.
Targets
Heavy-gauge straw rope is coiled and sewn tightly together to make archery targets. This is no longer done entirely by hand, but is partially mechanised. Sometimes a paper or plastic target is set up in front of straw bales, which serve to support the target and provide a safe backdrop.
Thatching
Thatching uses straw, reed or similar materials to make a waterproof, lightweight roof with good insulation properties. Straw for this purpose (often wheat straw) is grown specially and harvested using a reaper-binder.
Health and safety
Dried straw presents a fire hazard that can ignite easily if exposed to sparks or an open flame. It can also trigger allergic rhinitis in people who are hypersensitive to airborne allergens such as straw dust.
See also
Corn stover (corn straw)
Crop residue
Drinking straw
Hay
Straw (colour)
Sheaf (agriculture), a bundle of straw
Stook, a stack of straw
Straw dog
Wood wool
Yule Goat
References
External links
Biodegradable materials
Biomass
Packaging materials
Building insulation materials
Soil erosion
Natural materials
By-products | Straw | [
"Physics",
"Chemistry"
] | 1,887 | [
"Natural materials",
"Biodegradable materials",
"Biodegradation",
"Materials",
"Matter"
] |
46,595 | https://en.wikipedia.org/wiki/Loom | A loom is a device used to weave cloth and tapestry. The basic purpose of any loom is to hold the warp threads under tension to facilitate the interweaving of the weft threads. The precise shape of the loom and its mechanics may vary, but the basic function is the same.
Etymology and usage
The word "loom" derives from the Old English geloma, formed from ge- (perfective prefix) and loma, a root of unknown origin; the whole word geloma meant a utensil, tool, or machine of any kind. In 1404 "lome" was used to mean a machine to enable weaving thread into cloth.
By 1838 "loom" had gained the additional meaning of a machine for interlacing thread.
Components and actions
Basic structure
Weaving is done on two sets of threads or yarns, which cross one another. The warp threads are the ones stretched on the loom (from the Proto-Indo-European *werp, "to bend"). Each thread of the weft (i.e. "that which is woven") is inserted so that it passes over and under the warp threads.
The ends of the warp threads are usually fastened to beams. One end is fastened to one beam, the other end to a second beam, so that the warp threads all lie parallel and are all the same length. The beams are held apart to keep the warp threads taut.
The textile is woven starting at one end of the warp threads, and progressing towards the other end. The beam on the finished-fabric end is called the cloth beam. The other beam is called the warp beam.
Beams may be used as rollers to allow the weaver to weave a piece of cloth longer than the loom. As the cloth is woven, the warp threads are gradually unrolled from the warp beam, and the woven portion of the cloth is rolled up onto the cloth beam (which is also called the takeup roll). The portion of the fabric that has already been formed but not yet rolled up on the takeup roll is called the fell.
Not all looms have two beams. For instance, warp-weighted looms have only one beam; the warp yarns hang from this beam. The bottom ends of the warp yarns are tied to dangling loom weights.
Motions
A loom has to perform three principal motions: shedding, picking, and battening.
Shedding. Shedding is pulling part of the warp threads aside to form a shed (the space between the raised and unraised warp yarns). The shed is the space through which the filling yarn, carried by the shuttle, can be inserted, forming the weft.
Sheds may be simple: for instance, lifting all the odd threads and all the even threads alternately produces a tabby weave (the two sheds are called the shed and countershed). More intricate shedding sequences can produce more complex weaves, such as twill.
Picking. A single crossing of the weft thread from one side of the loom to the other, through the shed, is known as a pick. Picking is passing the weft through the shed. A new shed is then formed before a new pick is inserted.
Conventional shuttle looms can operate at speeds of about 150 to 160 picks per minute.
Battening. After the pick, the new pass of weft thread has to be tamped up against the fell, to avoid making a fabric with large, irregular gaps between the weft threads. This compression of the weft threads is called battening.
There are also usually two secondary motions, because the newly constructed fabric must be wound onto the cloth beam. This process is called taking up. At the same time, the warp yarns must be let off or released from the warp beam, unwinding from it. To become fully automatic, a loom needs a tertiary motion, the filling stop motion. This will brake the loom if the weft thread breaks. An automatic loom requires 0.125 hp to 0.5 hp to operate (100 W to 400 W).
Components
A loom, then, usually needs two beams, and some way to hold them apart. It generally has additional components to make shedding, picking, and battening faster and easier. There are also often components to help take up the fell.
The nature of the loom frame and the shedding, picking, and battening devices vary. Looms come in a wide variety of types, many of them specialized for specific types of weaving. They are also specialized for the lifestyle of the weaver. For instance, nomadic weavers tend to use lighter, more portable looms, while weavers living in cramped city dwellings are more likely to use a tall upright loom, or a loom that folds into a narrow space when not in use.
Shedding methods
It is possible to weave by manually threading the weft over and under the warp threads, but this is slow. Some tapestry techniques use manual shedding. Pin looms and peg looms also generally have no shedding devices. Pile carpets generally do not use shedding for the pile, because each pile thread is individually knotted onto the warps, but there may be shedding for the weft holding the carpet together.
Usually weaving uses shedding devices. These devices pull some of the warp threads to each side, so that a shed is formed between them, and the weft is passed through the shed. There are a variety of methods for forming the shed. At least two sheds must be formed, the shed and the countershed. Two sheds is enough for tabby weave; more complex weaves, such as twill weaves, satin weaves, diaper weaves, and figured (picture-forming) weaves, require more sheds.
Heddle-bar and shed-rod
Heddle-rods and shedding-sticks are not the fastest way to weave, but they are very simple to make, needing only sticks and yarn. They are often used on vertical and backstrap looms. They allow the creation of elaborate supplementary-weft brocades. They are also used on modern tapestry looms; the frequent changing of weft colour in tapestry makes weaving tapestry slow, so using faster, more complex shedding systems isn't worthwhile. The same is true of looms for handmade knotted-pile carpet; hand-knotting each pile thread to the warp takes far more time than weaving a couple of weft threads to hold the pile in place.
At its simplest, a heddle-bar is simply a stick placed across the warp and tied to individual warp threads. It is not tied to all of the warp threads; for a plain tabby weave, it is tied to every other thread. The little loops of string used to tie the warps to the heddle bar are called heddles or leashes. When the heddle-bar is pulled perpendicular to the warp, it pulls the warp threads it is tied to out of position, creating a shed.
A warp-weighted loom (see diagram) typically uses a heddle-bar, or several. It has two upright posts (C); they support a horizontal beam (D), which is cylindrical so that the finished cloth can be rolled around it, allowing the loom to be used to weave a piece of cloth taller than the loom, and preserving an ergonomic working height. The warp threads (F, and A and B) hang from the beam and rest against the shed rod (E). The heddle-bar (G) is tied to some of the warp threads (A, but not B), using loops of string called leashes (H). So when the heddle rod is pulled out and placed in the forked sticks protruding from the posts (not lettered, no technical term given in citation), the shed (1) is replaced by the counter-shed (2). By passing the weft through the shed and the counter-shed, alternately, cloth is woven.
Several heddle-bars can be used side-by-side; three or more can be used to weave twill weaves, for instance.
There are also other ways to create counter-sheds. A shed-rod is simpler and easier to set up than a heddle-bar, and can make a counter-shed. A shed-rod (shedding stick, shed roll) is simply a stick woven through the warp threads. When pulled perpendicular to the threads (or rotated to stand on edge, for wide, flat shedding rods), it creates a counter shed. The combination of a heddle-bar and a shedding-stick can create the shed and countershed needed for a plain tabby weave, as in the video.
There are also slitted heddle-rods, which are sawn partway through, with evenly-placed slits. Each warp thread goes in a slit. The odd-numbered slits are at 90 degrees to the even slits. The rod is rotated back and forth to create the shed and countershed, so it is often large-diameter.
Tablet weaving
Tablet weaving uses cards punched with holes. The warp threads pass through the holes, and the cards are twisted and shifted to create varied sheds. This shedding technique is used for narrow work. It is also used to finish edges, weaving decorative selvage bands instead of hemming.
Rotating-hook heddles
There are heddles made of flip-flopping rotating hooks, which raise and lower the warp, creating sheds. The hooks, when vertical, have the warp threads looped around them horizontally. If the hooks are flopped over to one side or the other, the loop of warp twists, raising one or the other side of the loop, which creates the shed and countershed.
Rigid heddles
Rigid heddles are generally used on single-shaft looms. Odd warp threads go through the slots, and even ones through the circular holes, or vice-versa. The shed is formed by lifting the heddle, and the countershed by depressing it. The warp threads in the slots stay where they are, and the ones in the circular holes are pulled back and forth. A single rigid heddle can hold all the warp threads, though sometimes multiple rigid heddles are used.
Treadles may be used to drive the rigid heddle up and down.
Non-rigid heddles
Rigid heddles (above) are called "rigid" to distinguish them from string and wire heddles. Rigid heddles are one-piece, but non-rigid ones are multi-piece. Each warp thread has its own heald (also, confusingly, called a heddle). The heald has an eyelet at each end (for the staves, also called shafts) and one in the middle, called the mail (for the warp thread). A row of these healds is slid onto two staves, the upper and lower staves; the staves together, or the staves together with the healds, may be called a heald frame, which is, confusingly, also called a shaft and a harness. Replaceable, interchangeable healds can be smaller, allowing finer weaves.
Unlike a rigid heddle, a flexible heddle cannot push the warp thread. This means that two heald frames are needed even for a plain tabby weave. Twill weaves require three or more heald frames (depending on the type of twill), and more complex figured weaves require still more frames.
The different heald frames must be controlled by some mechanism, and the mechanism must be able to pull them in both directions. They are mostly controlled by treadles; creating the shed with the feet leaves the hands free to ply the shuttle. However in some tabletop looms, heald frames are also controlled by levers.
Treadle-controlled looms
In treadle looms, the weaver controls the shedding with their feet, by treading on treadles. Different treadles and combinations of treadles produce different sheds. The weaver must remember the sequence of treadling needed to produce the pattern.
The precise mechanism by which the treadles control the heddles varies. Rigid-heddle treadle looms do exist, but the heddles are usually flexible. Sometimes, the treadles are tied directly to the staves (with a Y-shaped bridle so they stay level). Alternately, they may be tied to a stick called a lamm, which in turn is tied to the stave, to make the motion more controlled and regular. The lamm may pivot or slide.
Counterbalance looms are the most common type of treadle loom globally, as they are simple and give a smooth, quiet, quick motion. The heald frames are joined together in pairs, by a cord running over heddle pulleys or a heddle roller. When one heald frame rises, the other falls. It takes a pair of treadles to control a pair of frames. Counterbalance looms are usually used with two or four frames, though some have as many as ten.
In theory, each pair of heald frames has to pull an equal number of warps on each frame, so the patterns that can be made on them are limited. In practice, fairly unbalanced tie-ups just make the shed a bit smaller, and since the shed on a counterbalance loom is adjustable in size and quite large to start with (compared to other types of loom), it is entirely possible to weave good cloth on a counterbalance loom with unbalanced heald frames, unless the loom is extremely shallow (that is, the length of warp being pulled on is short, less than 1 meter or 3 feet), which exacerbates the slightly uneven tension. Limited patterns are not, of course, a disadvantage when weaving plainer patterns, such as tabbies and twills.
Jack looms (also called single-tie-up looms and rising-shed looms) have their treadles connected to jacks, levers that push or pull the heald frames up; the harnesses are weighted to fall back into place by gravity. Several frames can be connected to a single treadle. Frames can also be raised by more than one treadle. This allows treadles to control arbitrary combinations of frames, which vastly increases the number of different sheds that can be created from the same number of frames. Any number of treadles can also be engaged at once, meaning that the number of different sheds that can be selected is two to the power of the number of treadles. Eight is a large but reasonable number of treadles, giving a maximum of 2⁸ = 256 sheds (some of which will probably not have enough threads on one side to be useful). Having more possible sheds allows more complex patterns, such as diaper weaves.
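A small illustrative sketch of this combinatorics in Python, enumerating every combination of pressed treadles for a hypothetical 4-treadle, 4-frame "straight" tie-up. The tie-up is invented for illustration; real tie-ups depend on the pattern being woven.

from itertools import combinations

# Enumerate the sheds selectable on a jack loom by pressing treadles in any
# combination. The 4-treadle, 4-frame "straight" tie-up below is an invented
# example; real tie-ups depend on the pattern being woven.

tie_up = {1: {1}, 2: {2}, 3: {3}, 4: {4}}   # treadle -> frames it raises
n_treadles = len(tie_up)

distinct_sheds = set()
for r in range(1, n_treadles + 1):
    for pressed in combinations(tie_up, r):
        raised = frozenset().union(*(tie_up[t] for t in pressed))
        distinct_sheds.add(raised)

print(f"{2 ** n_treadles} treadle combinations (including pressing none)")
print(f"{len(distinct_sheds)} distinct non-empty sheds with this tie-up")

With a straight tie-up every non-empty set of frames is reachable; with other tie-ups some combinations coincide, and some leave too few threads on one side to give a usable shed.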
Jack looms are easy to make and to tie up (if not quite as easy as counterbalance looms). The gravity return makes jack looms heavy to operate. The shed of a jack loom is smaller for a given length of warp being pulled aside by the heddles (loom depth). The warp threads being pulled up by the jacks are also tauter than the other warp threads (unlike a counterbalance loom, where the threads are pulled an equal amount in opposite directions). Uneven tension makes weaving evenly harder. It also lowers the maximum tension at which one can practically weave. If the threads are rough, closely-spaced, very long or numerous, it can be hard to open the sheds on the jack loom. Jack looms without castles (the superstructure above the weft) have to lift the heald frames from below, and are noisier due to the impact of wood on wood; elastomer pads can reduce the noise.
In countermarch looms, the treadles are tied to lamms, which may pivot at one end or slide up and down. Half of the lamms in turn connect to jacks, which also pivot, and push or pull the staves up or down. Some countermarches have two horizontal jacks per shaft, others a single vertical jack. Each treadle is tied to all of the heald frames, moving some of them up and the rest of them down. This allows the complex combinatorial treadles of a jack loom, with the large shed and balanced, even tension of a counterbalance loom, with its quiet, light operation. Unfortunately, countermarch looms are more complex, harder to build, slower to tie up, and more prone to malfunction.
Figure harness and the drawloom
A drawloom is for weaving figured cloth. In a drawloom, a "figure harness" is used to control each warp thread separately, allowing very complex patterns. A drawloom requires two operators, the weaver, and an assistant called a "drawboy" to manage the figure harness.
The earliest confirmed drawloom fabrics come from the State of Chu and date c. 400 BC. Some scholars speculate an independent invention in ancient Syria, since drawloom fabrics found in Dura-Europas are thought to date before 256 AD. The draw loom was invented in China during the Han dynasty (State of Liu?); foot-powered multi-harness looms and jacquard looms were used for silk weaving and embroidery, both of which were cottage industries with imperial workshops. The drawloom enhanced and sped up the production of silk and played a significant role in Chinese silk weaving. The loom was introduced to Persia, India, and Europe.
Dobby head
A dobby head is a device that replaces the drawboy, the weaver's helper who used to control the warp threads by pulling on draw threads. "Dobby" is a corruption of "draw boy". Mechanical dobbies pull on the draw threads using pegs in bars to lift a set of levers. The placement of the pegs determines which levers are lifted. The sequence of bars (they are strung together) effectively remembers the sequence for the weaver. Computer-controlled dobbies use solenoids instead of pegs.
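The chain of pegged bars thus amounts to a stored lifting plan: one bar per pick, one peg per frame to be lifted. A minimal sketch in Python of such a plan, using a hypothetical 4-frame 2/2 twill sequence; the plan and frame numbers are illustrative, not taken from this article.

# The chain of pegged bars amounts to a stored lifting plan: one bar per pick,
# one peg per frame to be lifted. Hypothetical 4-frame 2/2 twill sequence.

lifting_plan = [
    {1, 2},  # pick 1: lift frames 1 and 2
    {2, 3},  # pick 2
    {3, 4},  # pick 3
    {1, 4},  # pick 4, after which the chain repeats
]

def frames_for_pick(pick_number: int) -> set:
    """Frames the dobby lifts on a given pick (1-based), cycling the chain."""
    return lifting_plan[(pick_number - 1) % len(lifting_plan)]

for pick in range(1, 9):
    print(f"pick {pick}: lift frames {sorted(frames_for_pick(pick))}")

Reading the plan works the same whether the entries are pegs in wooden bars or, on computer-controlled dobbies, bits driving solenoids.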
Jacquard head
The Jacquard loom is a mechanical loom, invented by Joseph Marie Jacquard in 1801, which simplifies the process of manufacturing figured textiles with complex patterns such as brocade, damask, and matelasse. The loom is controlled by punched cards, each row of holes corresponding to one row of the design. Multiple rows of holes are punched on each card and the many cards that compose the design of the textile are strung together in order. It is based on earlier inventions by the Frenchmen Basile Bouchon (1725), Jean Baptiste Falcon (1728), and Jacques Vaucanson (1740). To call it a loom is a misnomer. A Jacquard head could be attached to a power loom or a handloom, the head controlling which warp thread was raised during shedding. Multiple shuttles could be used to control the colour of the weft during picking. The Jacquard loom is the predecessor to the computer punched card readers of the 19th and 20th centuries.
Picking (weft insertion)
The weft may be passed across the shed as a ball of yarn, but usually this is too bulky and unergonomic. Shuttles are designed to be slim, so they pass through the shed; to carry a lot of yarn, so the weaver does not need to refill them too often; and to be an ergonomic size and shape for the particular weaver, loom, and yarn. They may also be designed for low friction.
Stick shuttles
Unnotched stick shuttles
At their simplest, these are just sticks wrapped with yarn. They may be specially shaped, as with the bobbins and bones used in tapestry-making (bobbins are used on vertical warps, and bones on horizontal ones).
Notched stick shuttles, rag shuttles, and ski shuttles
Boat shuttles
Boat shuttles may be closed (central hollow with a solid bottom) or open (central hole goes right through). The yarn may be side-feed or end-feed. They are commonly made for 10-cm (4-inch) and 15-cm (6-inch) bobbin lengths.
Flying shuttle
Hand weavers who threw a shuttle could only weave a cloth as wide as their armspan. If cloth needed to be wider, two people would do the task (often this would be an adult with a child). John Kay (1704–1779) patented the flying shuttle in 1733. The weaver held a picking stick that was attached by cords to a device at both ends of the shed. With a flick of the wrist, one cord was pulled and the shuttle was propelled through the shed to the other end with considerable force, speed and efficiency. A flick in the opposite direction and the shuttle was propelled back. A single weaver had control of this motion but the flying shuttle could weave much wider fabric than an arm's length at much greater speeds than had been achieved with the hand thrown shuttle.
The flying shuttle was one of the key developments in weaving that helped fuel the Industrial Revolution. The whole picking motion no longer relied on manual skill and it was just a matter of time before it could be powered by something other than a human.
Weft insertion in power looms
Different types of power looms are most often defined by the way that the weft, or pick, is inserted into the warp. Many advances in weft insertion have been made in order to make manufactured cloth more cost effective. Weft insertion rate is a limiting factor in production speed; modern industrial looms can weave at up to 2,000 weft insertions per minute.
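A rough, illustrative conversion in Python from weft-insertion rate to woven cloth length. The weft density of 20 picks per centimetre is an assumed value for a medium-weight fabric, not a figure from this article; actual densities vary widely with the cloth being woven.

# Rough conversion from weft-insertion rate to woven cloth length. The weft
# density of 20 picks per centimetre is an assumed value for a medium-weight
# fabric; actual densities vary widely with the cloth being woven.

def cloth_metres_per_hour(picks_per_minute: float, picks_per_cm: float) -> float:
    cm_per_minute = picks_per_minute / picks_per_cm
    return cm_per_minute * 60 / 100

for rate in (160, 300, 1000, 2000):   # hand shuttle loom up to modern machines
    print(f"{rate:>5} picks/min -> {cloth_metres_per_hour(rate, 20):.1f} m of cloth per hour")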
There are five main types of weft insertion and they are as follows:
Shuttle: The first-ever powered looms were shuttle-type looms. Spools of weft are unravelled as the shuttle travels across the shed. This is very similar to projectile methods of weaving, except that the weft spool is stored on the shuttle. These looms are considered obsolete in modern industrial fabric manufacturing because they can only reach a maximum of 300 picks per minute.
Air jet: An air-jet loom uses short quick bursts of compressed air to propel the weft through the shed in order to complete the weave. Air jets are the fastest traditional method of weaving in modern manufacturing and they are able to achieve up to 1,500 picks per minute. However, the amounts of compressed air required to run these looms, as well as the complexity in the way the air jets are positioned, make them more costly than other looms.
Water jet: Water-jet looms use the same principle as air-jet looms, but they take advantage of pressurized water to propel the weft. The advantage of this type of weaving is that water power is cheaper where water is directly available on site. Picks per minute can reach as high as 1,000.
Rapier loom: This type of weaving is very versatile, in that rapier looms can weave using a large variety of threads. There are several types of rapiers, but they all use a hook system attached to a rod or metal band to pass the pick across the shed. These machines regularly reach 700 picks per minute in normal production.
Projectile: Projectile looms utilize an object that is propelled across the shed, usually by spring power, and is guided across the width of the cloth by a series of reeds. The projectile is then removed from the weft fibre and it is returned to the opposite side of the machine so it can be reused. Multiple projectiles are in use in order to increase the pick speed. Maximum speeds on these machines can be as high as 1,050 ppm.
Circular: Modern circular looms use up to ten shuttles, driven in a circular motion from below by electromagnets, for the weft yarns, and cams to control the warp threads. The warps rise and fall with each shuttle passage, unlike the common practice of lifting all of them at once. Circular looms are used to create seamless tubes of fabric for products such as hosiery, sacks, clothing, fabric hoses (such as fire hoses) and the like.
Battening
The newest weft thread must be beaten against the fell. Battening can be done with a long stick placed in the shed parallel to the weft (a sword batten), a shorter stick threaded between the warp threads perpendicular to warp and weft (a pin batten), a comb, or a reed (a comb with both ends closed, so that it has to be sleyed, that is have the warp threads threaded through it, when the loom is warped). For rigid-heddle looms, the heddle may be used as a reed.
Secondary motions
Dandy mechanism
Patented in 1802, dandy looms automatically rolled up the finished cloth, keeping the fell always the same length. They significantly speeded up hand weaving (still a major part of the textile industry in the 1800s). Similar mechanisms were used in power looms.
Temples
The temples act to keep the cloth from shrinking sideways as it is woven. Some warp-weighted looms had temples made of loom weights, suspended by strings so that they pulled the cloth breadthwise. Other looms may have temples tied to the frame, or temples that are hooks with an adjustable shaft between them. Power looms may use temple cylinders. Pins can leave a series of holes in the selvages (these may be from stenter pins used in post-processing).
Frames
Loom frames can be roughly divided, by the orientation of the warp threads, into horizontal looms and vertical looms. There are many finer divisions. Most handloom frame designs can be constructed fairly simply.
Backstrap loom
The back-strap loom (also known as belt loom) is a simple loom with ancient roots, still used in many cultures around the world (such as Andean textiles, and in Central, East and South Asia). It consists of two sticks or bars between which the warps are stretched. One bar is attached to a fixed object and the other to the weaver, usually by means of a strap around the weaver's back. The weaver leans back and uses their body weight to tension the loom.
Both simple and complex textiles can be woven on backstrap looms. They produce narrowcloth: width is limited to the weaver's armspan. They can readily produce warp-faced textiles, often decorated with intricate pick-up patterns woven in complementary and supplementary warp techniques, and brocading. Balanced weaves are also possible on the backstrap loom.
Warp-weighted loom
The warp-weighted loom is a vertical loom that may have originated in the Neolithic period. Its defining characteristic is hanging weights (loom weights) which keep bundles of the warp threads taut. Frequently, extra warp thread is wound around the weights. When a weaver has woven far enough down, the completed section (fell) can be rolled around the top beam, and additional lengths of warp threads can be unwound from the weights to continue. This frees the weaver from vertical size constraint. Horizontally, breadth is limited by armspan; making broadwoven cloth requires two weavers, standing side by side at the loom.
Simple weaves, and complex weaves that need more than two different sheds, can both be woven on a warp-weighted loom. They can also be used to produce tapestries.
Pegged or floor loom
In pegged looms, the beams can be simply held apart by hooking them behind pegs driven into the ground, with wedges or lashings used to adjust the tension. Pegged looms may, however, also have horizontal sidepieces holding the beams apart.
Such looms are easy to set up and dismantle, and are easy to transport, so they are popular with nomadic weavers. They are generally only used for comparatively small woven articles. Urbanites are unlikely to use horizontal floor looms as they take up a lot of floor space, and full-time professional weavers are unlikely to use them as they are unergonomic. Their cheapness and portability is less valuable to urban professional weavers.
Treadle loom
In a treadle loom, the shedding is controlled by the feet, which tread on the treadles.
The earliest evidence of a horizontal loom is found on a pottery dish in ancient Egypt, dated to 4400 BC. It was a frame loom, equipped with treadles to lift the warp threads, leaving the weaver's hands free to pass and beat the weft thread.
A pit loom has a pit for the treadles, reducing the stress transmitted through the much shorter frame.
In a wooden vertical-shaft loom, the heddles are fixed in place in the shaft. The warp threads pass alternately through a heddle, and through a space between the heddles (the shed), so that raising the shaft raises half the threads (those passing through the heddles), and lowering the shaft lowers the same threads — the threads passing through the spaces between the heddles remain in place.
A treadle loom for figured weaving may have a large number of harnesses or a control head. It can, for instance, have a Jacquard machine attached to it.
Tapestry looms
Tapestry can have extremely complex wefts, as different strands of wefts of different colours are used to form the pattern. Speed is lower, and shedding and picking devices may be simpler. Looms used for weaving traditional tapestry are described not as "vertical-warp" and "horizontal-warp", but as "high-warp" or "low-warp" (the French terms haute-lisse and basse-lisse are also used in English).
Ribbon, Band, and Inkle weaving
Inkle looms are narrow looms used for narrow work. They are used to make narrow warp-faced strips such as ribbons, bands, or tape. They are often quite small; some are used on a tabletop, while others are backstrap looms with a rigid heddle, and very portable.
Darning looms
There exist very small hand-held looms known as darning looms. They are made to fit under the fabric being mended, and are often held in place by an elastic band on one side of the cloth and a groove around the loom's darning-egg portion on the other. They may have heddles made of flip-flopping rotating hooks. Other devices sold as darning looms are just a darning egg and a separate comb-like piece with teeth to hook the warp over; these are used for repairing knitted garments and are like a linear knitting spool. Darning looms were sold during World War Two clothing rationing in the United Kingdom and Canada, and some are homemade.
Circular handlooms
Circular looms are used to create seamless tubes of fabric for products such as hosiery, sacks, clothing, fabric hoses (such as fire hoses) and the like. Tablet weaving can be used to weave tubes, including tubes that split and join.
Small jigs used for circular knitting are also sometimes called circular looms, but they are used for knitting, not weaving.
Handlooms to power looms
A power loom is a loom powered by a source of energy other than the weaver's muscles. When power looms were developed, other looms came to be referred to as handlooms. Most cloth is now woven on power looms, but some is still woven on handlooms.
The development of power looms was gradual. The capabilities of power looms gradually expanded, but handlooms remained the most cost-effective way to make some types of textiles for most of the 1800s. Many improvements in loom mechanisms were first applied to hand looms (like the dandy loom), and only later integrated into power looms.
Edmund Cartwright built and patented a power loom in 1785, and it was this that was adopted by the nascent cotton industry in England. The silk loom made by Jacques Vaucanson in 1745 operated on the same principles but was not developed further. The invention of the flying shuttle by John Kay allowed a hand weaver to weave broadwoven cloth without an assistant, and was also critical to the development of a commercially successful power loom. Cartwright's loom was impractical but the ideas behind it were developed by numerous inventors in the Manchester area of England. By 1818, there were 32 factories containing 5,732 looms in the region.
The Horrocks loom was viable, but it was the Roberts Loom in 1830 that marked the turning point. Incremental changes to the three motions continued to be made. The problems of sizing, stop-motions, consistent take-up, and a temple to maintain the width remained. In 1841, Kenworthy and Bullough produced the Lancashire Loom which was self-acting or semi-automatic. This enabled a youngster to run six looms at the same time. Thus, for simple calicos, the power loom became more economical to run than the handloom – with complex patterning that used a dobby or Jacquard head, jobs were still put out to handloom weavers until the 1870s. Incremental changes were made such as the Dickinson Loom, culminating in the fully automatic Northrop Loom, developed by the Keighley-born inventor Northrop, who was working for the Draper Corporation in Hopedale. This loom recharged the shuttle when the pirn was empty. The Draper E and X models became the leading products from 1909. They were challenged by synthetic fibres such as rayon.
By 1942, faster, more efficient, and shuttleless Sulzer and rapier looms had been introduced.
Symbolism and cultural significance
The loom is a symbol of cosmic creation and the structure upon which individual destiny is woven. This symbolism is encapsulated in the classical myth of Arachne who was changed into a spider by the goddess Athena, who was jealous of her skill at the godlike craft of weaving. In Maya civilization the goddess Ixchel taught the first woman how to weave at the beginning of time.
Gallery
See also
Bunkar: The Last of the Varanasi Weavers (documentary film)
Fashion and Textile Museum
Textile manufacturing
Timeline of clothing and textiles technology
Weaving (mythology)
Luddite
References
Bibliography
External links
Loom demonstration video
"Caring for your loom" article
"The Art and History of Weaving"
The Medieval Technology Pages: "The Horizontal Loom"
Articles containing video clips
Egyptian inventions
Han dynasty
Machines
Textile industry
Textile engineering
Weaving equipment | Loom | [
"Physics",
"Technology",
"Engineering"
] | 7,224 | [
"Machines",
"Applied and interdisciplinary physics",
"Weaving equipment",
"Physical systems",
"Mechanical engineering",
"Textile engineering"
] |
46,618 | https://en.wikipedia.org/wiki/Shovelware | Shovelware is a term for individual video games or software bundles known more for the quantity of what is included than for the quality or usefulness.
The metaphor implies that the creators showed little care for the quality of the original software, as if the new compilation or version had been created by indiscriminately adding titles "by the shovel" in the same way someone would shovel bulk material into a pile. The term "shovelware" was coined by semantic analogy to phrases like shareware and freeware, which describe methods of software distribution. It first appeared in the early 1990s when large numbers of shareware demo programs were copied onto CD-ROMs and advertised in magazines or sold at computer flea markets.
Shovelware CD-ROMs
Computer Gaming World wrote in 1990 that for "those who do not wish to wait" for software that used the new CD-ROM format, The Software Toolworks and Access Software planned to release "game packs of several classic titles". By 1993 the magazine referred to software repackaged on CD-ROM as "shovelware", describing one collection from Access as having a "rather dusty menu" and another from The Software Toolworks ("the reigning king of software repackaging efforts") as including games that were "mostly mediocre even in their prime"; the one exception, Chessmaster 2000, used "stunning CGA graphics". In 1994 the magazine described shovelware as "old and/or weak programs shoveled onto a CD to turn a quick buck".
The capacity of a CD-ROM was 450–700 times that of the floppy disk, and 6–16 times larger than the hard disks with which personal computers were commonly outfitted in 1990. This outsized capacity meant that very few users would install the discs' entire contents, encouraging producers to fill them by including as much existing content as possible, often without regard to the quality of the material. Advertising the number of titles on the disc often took precedence over the quality of the content. Software reviewers, displeased with huge collections of inconsistent quality, dubbed this practice "shovelware" in the early 1990s. Additionally, some CD-ROM computer games had software that did not fill the disc to capacity, which enabled game companies to bundle demo versions of other products on the same disc.
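A back-of-envelope check in Python of these ratios, using assumed round figures for the media of the period: a ~650 MB CD-ROM, a 1.44 MB floppy, and 40–110 MB hard disks. These capacities are illustrative assumptions, not values from this article.

# Back-of-envelope check of the capacity comparison above, using assumed round
# figures for the media of the period: a ~650 MB CD-ROM, a 1.44 MB floppy and
# 40-110 MB hard disks. These capacities are assumptions, not article figures.

CD_MB = 650
FLOPPY_MB = 1.44
HDD_MB_LOW, HDD_MB_HIGH = 40, 110

print(f"CD-ROM vs floppy disk: ~{CD_MB / FLOPPY_MB:.0f}x")
print(f"CD-ROM vs hard disk:   ~{CD_MB / HDD_MB_HIGH:.0f}x to ~{CD_MB / HDD_MB_LOW:.0f}x")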
The prevalence of shovelware has decreased due to the practice of downloading individual programs from a crowdsourced or curated app store becoming the predominant mode of software distribution. It continues in some cases with bundled or pre-installed software, where many extra programs of dubious quality and functionality are included with a piece of hardware.
Shovelware video games
Low-budget, poor-quality video games, released in the hopes of being purchased by unsuspecting customers, are often referred to as "shovelware". This can lead to discoverability issues when a platform has no type of quality control.
Some developers and publishers have become well-known as creators of shovelware. Blast! Entertainment, a defunct video game developer and publisher, was known for releasing licensed shovelware games based on movies, television shows and books such as An American Tail, Beverly Hills Cop, Jumanji, and Lassie, the majority of which received negative reception. Another defunct European publisher, Phoenix Games, was known for its line of value-priced titles for the PlayStation 2, Wii, DS, and PC. A number of their in-house games are adaptations of low-budget animated mockbusters, which largely function as interactive "activity centre" games with minimal actual gameplay. Games made by other studios, including Mere Mortals, but published by Phoenix, have a similarly poor reputation.
The Nintendo Wii became known for large amounts of shovelware, including ports of PlayStation 2 games which had previously only been released in Europe. Data Design Interactive became known for creating shovelware for the Wii. Their games Ninjabread Man, Anubis II, Rock 'n' Roll Adventures, and Myth Makers: Trixie in Toyland all used the exact same gameplay and level layouts, but changed the art and character design to make them appear to be unique properties. The eShop on Nintendo's later console, the Nintendo Switch, has also become notorious for featuring an abundance of low-quality games and software.
Asset flips are a subset of shovelware that largely or entirely use pre-made assets in order to release games en masse. Called fake games by Valve Corporation, 173 were removed from Steam in one 2017 purge that included several sock puppets of Silicon Echo Studios.
See also
References
External links
Archive of CD-ROM compilations at Textfiles.com
Alistair B. Fraser on Academic Shovelware
Wired: On Wii Shovelware
PC World: Make your new PC hassle free
Software distribution
Bundled products or services
Computer jargon
Criticisms of software and websites | Shovelware | [
"Technology"
] | 973 | [
"Computing terminology",
"Criticisms of software and websites",
"Computer jargon",
"Natural language and computing"
] |
46,628 | https://en.wikipedia.org/wiki/ATM | An automated teller machine (ATM) is an electronic telecommunications device that enables customers of financial institutions to perform financial transactions, such as cash withdrawals, deposits, funds transfers, balance inquiries or account information inquiries, at any time and without the need for direct interaction with bank staff.
ATMs are known by a variety of other names, including automatic teller machines (ATMs) in the United States (sometimes redundantly as "ATM machine"). In Canada, the term automated banking machine (ABM) is also used, although ATM is also very commonly used in Canada, with many Canadian organizations using ATM rather than ABM. In British English, the terms cashpoint, cash machine and hole in the wall are also used. ATMs that are not operated by a financial institution are known as "white-label" ATMs.
Using an ATM, customers can access their bank deposit or credit accounts in order to make a variety of financial transactions, most notably cash withdrawals and balance checking, as well as transferring credit to and from mobile phones. ATMs can also be used to withdraw cash in a foreign country. If the currency being withdrawn from the ATM is different from that in which the bank account is denominated, the money will be converted at the financial institution's exchange rate. Customers are typically identified by inserting a plastic ATM card (or some other acceptable payment card) into the ATM, with authentication being by the customer entering a personal identification number (PIN), which must match the PIN stored in the chip on the card (if the card is so equipped), or in the issuing financial institution's database.
According to the ATM Industry Association (ATMIA), there were close to 3.5 million ATMs installed worldwide. However, the use of ATMs is gradually declining with the increase in cashless payment systems.
History
The idea of out-of-hours cash distribution was first put into practice in Japan, the United Kingdom and Sweden.
In 1960, Armenian-American inventor Luther Simjian invented an automated deposit machine (accepting coins, cash and cheques) although it did not have cash dispensing features. His US patent was first filed on 30 June 1960 and granted on 26 February 1963. The roll-out of this machine, called Bankograph, was delayed by a couple of years, due in part to Simjian's Reflectone Electronics Inc. being acquired by Universal Match Corporation. An experimental Bankograph was installed in New York City in 1961 by the City Bank of New York, but removed after six months due to the lack of customer acceptance.
In 1962 Adrian Ashfield invented the idea of a card system to securely identify a user and control and monitor the dispensing of goods or services. This was granted UK Patent 959,713 in June 1964 and assigned to Kins Developments Limited.
Invention
A Japanese device called the "Computer Loan Machine" supplied cash as a three-month loan at 5% p.a. after inserting a credit card. The device was operational in 1966. However, little is known about the device.
A cash machine was put into use by Barclays Bank, Enfield, north London in the United Kingdom, on 27 June 1967, which is recognized as the world's first ATM. This machine was inaugurated by English actor Reg Varney. This invention is credited to the engineering team led by John Shepherd-Barron of printing firm De La Rue, who was awarded an OBE in the 2005 New Year Honours. Transactions were initiated by inserting paper cheques issued by a teller or cashier, marked with carbon-14 for machine readability and security, which in a later model were matched with a four-digit personal identification number (PIN). Shepherd-Barron stated:
The Barclays–De La Rue machine (called the De La Rue Automatic Cash System, or DACS) beat the machine developed by the Swedish savings banks and a company called Metior (a device called Bankomat) by a mere nine days, and Westminster Bank's Smith Industries Chubb system (called Chubb MD2) by a month. The online version of the Swedish machine is listed as having been operational on 6 May 1968, and is claimed to be the first online ATM in the world, ahead of similar claims by IBM and Lloyds Bank in 1971 and by Oki in 1970. A fourth machine, developed by a collaboration between a small start-up called Speytec and Midland Bank, was marketed after 1969 in Europe and the US by the Burroughs Corporation. The patent for this device (GB1329964) was filed in September 1969 (and granted in 1973) by John David Edwards, Leonard Perkins, John Henry Donald, Peter Lee Chappell, Sean Benjamin Newcombe, and Malcom David Roe. Both the DACS and the MD2 accepted only a single-use token or voucher, which was retained by the machine, while the Speytec worked with a card bearing a magnetic stripe on the back. Both approaches used principles including carbon-14 marking and low-coercivity magnetism to make fraud more difficult.
The idea of a PIN stored on the card was developed by a group of engineers working at Smiths Group on the Chubb MD2 in 1965; it has been credited to James Goodfellow (patent GB1197183, filed on 2 May 1966 with Anthony Davies). The essence of this system was that it enabled the verification of the customer against the debited account without human intervention. This patent is also the earliest instance of a complete "currency dispenser system" in the patent record. It was filed in the US on 5 March 1968 (US 3543904) and granted on 1 December 1970, and it had a profound influence on the industry as a whole. Not only did future entrants into the cash dispenser market, such as NCR Corporation and IBM, license Goodfellow's PIN system, but a number of later patents reference it as a "Prior Art Device".
Propagation
Devices designed by British (e.g. Chubb, De La Rue) and Swedish (e.g. Asea Metior) manufacturers quickly spread. For example, given its link with Barclays, Bank of Scotland deployed a DACS in 1968 under the 'Scotcash' brand. Customers were given personal code numbers to activate the machines, similar to the modern PIN. They were also supplied with £10 vouchers, which were fed into the machine, and the corresponding amount was debited from the customer's account.
A Chubb-made ATM appeared in Sydney in 1969. This was the first ATM installed in Australia. The machine only dispensed $25 at a time and the bank card itself would be mailed to the user after the bank had processed the withdrawal.
Asea Metior's Bancomat was the first ATM installed in Spain, on 9 January 1969, in central Madrid by Banesto. The device dispensed 1,000-peseta bills (one to five at a time). Each user had to enter a personal security key using a combination of the ten numeric buttons. In March of the same year, an advertisement with instructions on how to use the Bancomat was published in the Spanish press.
In West Germany, the first ATM was installed in the university city of Tübingen (population about 50,000) on May 27, 1968, by Kreissparkasse Tübingen. It was built by the Aalen-based safe builder Ostertag AG in cooperation with AEG-Telefunken. Each of the 1,000 selected users was given a double-bit key to open the safe, which had "Geldausgabe" ("cash dispensing") written on it, a plastic identification card, and ten punched cards. Each punched card functioned as a withdrawal slip for a 100 DM bill; the maximum limit for daily use was 400 DM.
Docutel in the United States
After looking firsthand at the experiences in Europe, Donald Wetzel, a department head at a company called Docutel, pioneered the ATM in the U.S. in 1968. Docutel was a subsidiary of Recognition Equipment Inc of Dallas, Texas, which was producing optical scanning equipment and had instructed Docutel to explore automated baggage handling and automated gasoline pumps.
On 2 September 1969, Chemical Bank installed a prototype ATM in the U.S. at its branch in Rockville Centre, New York. The first ATMs were designed to dispense a fixed amount of cash when a user inserted a specially coded card. A Chemical Bank advertisement boasted "On Sept. 2 our bank will open at 9:00 and never close again." Chemical's ATM, initially known as a Docuteller, was designed by Donald Wetzel and his company Docutel. Chemical executives were initially hesitant about the electronic banking transition given the high cost of the early machines. Additionally, executives were concerned that customers would resist having machines handling their money. In 1995, the Smithsonian National Museum of American History recognised Docutel and Wetzel as the inventors of the networked ATM. To show confidence in Docutel, Chemical installed the first four production machines in a marketing test that proved that they worked reliably, that customers would use them, and that customers would even pay a fee for usage. Based on this, banks around the country began to experiment with ATM installations.
By 1974, Docutel had acquired 70 percent of the U.S. market; but as a result of the early 1970s worldwide recession and its reliance on a single product line, Docutel lost its independence and was forced to merge with the U.S. subsidiary of Olivetti.
In 1973, Wetzel was granted U.S. Patent #3,761,682; the application had been filed in October 1971. However, the U.S. patent record cites at least three previous applications from Docutel, all relevant to the development of the ATM and in which Wetzel does not figure, namely U.S. Patent #3,662,343, U.S. Patent #3,651,976 and U.S. Patent #3,68,569. These patents are all credited to Kenneth S. Goldstein, MR Karecki, TR Barnes, GR Chastian and John D. White.
Further advances
In April 1971, Busicom began to manufacture ATMs based on the first commercial microprocessor, the Intel 4004. Busicom manufactured these microprocessor-based automated teller machines for several buyers, with NCR Corporation as the main customer.
Mohamed Atalla invented the first hardware security module (HSM), dubbed the "Atalla Box", a security system which encrypted PIN and ATM messages, and protected offline devices with an un-guessable PIN-generating key. In March 1972, Atalla filed for his PIN verification system, which included an encoded card reader and described a system that utilized encryption techniques to assure telephone link security while entering personal ID information that was transmitted to a remote location for verification.
He founded Atalla Corporation (now Utimaco Atalla) in 1972, and commercially launched the "Atalla Box" in 1973. The product was released as the Identikey. It was a card reader and customer identification system, providing a terminal with plastic card and PIN capabilities. The Identikey system consisted of a card reader console, two customer PIN pads, an intelligent controller and a built-in electronic interface package. The device consisted of two keypads, one for the customer and one for the teller. It allowed the customer to type in a secret code, which the device transformed, using a microprocessor, into another code for the teller. During a transaction, the customer's account number was read by the card reader. This process replaced manual entry and avoided possible keystroke errors. It allowed users to replace traditional customer verification methods such as signature verification and test questions with a secure PIN system. The success of the "Atalla Box" led to the wide adoption of hardware security modules in ATMs. Its PIN verification process was similar to the later IBM 3624. Atalla's HSM products protect 250 million card transactions every day as of 2013, and secure the majority of the world's ATM transactions as of 2014.
The IBM 2984 was a modern ATM and came into use at Lloyds Bank, High Street, Brentwood, Essex, in the United Kingdom in December 1972. The IBM 2984 was designed at the request of Lloyds Bank. The 2984 Cash Issuing Terminal was a true ATM, similar in function to today's machines and named Cashpoint by Lloyds Bank. Cashpoint is still a registered trademark of Lloyds Banking Group in the UK but is often used as a generic trademark to refer to ATMs of all UK banks. All were online and issued a variable amount which was immediately deducted from the account. A small number of 2984s were supplied to a U.S. bank. Other well-known historical models of ATMs include the Atalla Box, IBM 3614, IBM 3624 and 473x series, Diebold 10xx and TABS 9000 series, NCR 1780 and earlier NCR 770 series.
The first switching system to enable shared automated teller machines between banks went into production operation on 3 February 1979, in Denver, Colorado, in an effort by Colorado National Bank of Denver and Kranzley and Company of Cherry Hill, New Jersey.
In 2012, a new ATM at Royal Bank of Scotland allowed customers to withdraw cash up to £130 without a card by inputting a six-digit code requested through their smartphones.
Location
ATMs can be placed at any location but are most often placed near or inside banks, shopping centers, airports, railway stations, metro stations, grocery stores, gas stations, restaurants, and other locations. ATMs are also found on cruise ships and on some US Navy ships, where sailors can draw out their pay.
ATMs may be on- and off-premises. On-premises ATMs are typically more advanced, multi-function machines that complement a bank branch's capabilities, and are thus more expensive. Off-premises machines are deployed by financial institutions where there is a simple need for cash, so they are generally cheaper single-function devices. Independent ATM deployers unaffiliated with banks install and maintain white-label ATMs.
In the US, Canada and some Gulf countries, banks may have drive-thru lanes providing access to ATMs using an automobile.
In recent times, countries like India and some countries in Africa are installing solar-powered ATMs in rural areas.
The world's highest ATM is located at the Khunjerab Pass in Pakistan. Installed at high altitude by the National Bank of Pakistan, it is designed to work in temperatures as low as −40 degrees Celsius.
Financial networks
Most ATMs are connected to interbank networks, enabling people to withdraw and deposit money from machines not belonging to the bank where they have their accounts or in the countries where their accounts are held (enabling cash withdrawals in local currency). Some examples of interbank networks include NYCE, PULSE, PLUS, Cirrus, AFFN, Interac, Interswitch, STAR, LINK, MegaLink, and BancNet.
ATMs rely on the authorization of a financial transaction by the card issuer or other authorizing institution on a communications network. This is often performed through an ISO 8583 messaging system.
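For illustration, the following minimal sketch shows how an authorization request might be assembled in the spirit of ISO 8583. The message type indicator (0200) and the field numbers used (2, 3, 4, 11, 41) follow the public standard, but the encoding is deliberately simplified and not taken from any particular network; real deployments use certified ISO 8583 libraries and network-specific variants.

    # Simplified sketch of an ISO 8583-style authorization request (illustration only).
    # Real deployments use a certified ISO 8583 library and network-specific variants;
    # the field numbers below match the public standard, but the encoding is minimal.

    def build_bitmap(fields):
        """Return a 64-bit primary bitmap (hex) with a bit set for each present field."""
        bits = 0
        for f in fields:
            bits |= 1 << (64 - f)          # bit 1 is the most significant bit
        return f"{bits:016X}"

    def build_request(pan, amount_cents, stan, terminal_id):
        fields = {
            2: f"{len(pan):02d}{pan}",      # PAN, LLVAR (2-digit length prefix)
            3: "010000",                    # processing code: cash withdrawal
            4: f"{amount_cents:012d}",      # transaction amount, 12 digits
            11: f"{stan:06d}",              # systems trace audit number
            41: f"{terminal_id:<8}",        # terminal ID, 8 characters
        }
        body = "".join(fields[k] for k in sorted(fields))
        return "0200" + build_bitmap(fields) + body   # MTI 0200 = financial request

    if __name__ == "__main__":
        print(build_request("4111111111111111", 5000, 123456, "ATM00001"))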
Many banks charge ATM usage fees. In some cases, these fees are charged solely to users who are not customers of the bank that operates the ATM; in other cases, they apply to all users.
In order to allow a more diverse range of devices to attach to their networks, some interbank networks have passed rules expanding the definition of an ATM to be a terminal that either has the vault within its footprint or utilises the vault or cash drawer within the merchant establishment, which allows for the use of a scrip cash dispenser.
ATMs typically connect directly to their host or ATM controller via either ADSL or a dial-up modem over a telephone line, or directly over a leased line. Leased lines are preferable to plain old telephone service (POTS) lines because they require less time to establish a connection. Less-trafficked machines will usually rely on a dial-up modem on a POTS line rather than a leased line, since a leased line is more expensive to operate than a POTS line. That dilemma may be solved as high-speed Internet VPN connections become more ubiquitous. Common lower-layer communication protocols used by ATMs to communicate back to the bank include SNA over SDLC, a multidrop protocol over Async, X.25, and TCP/IP over Ethernet.
In addition to methods employed for transaction security and secrecy, all communications traffic between the ATM and the Transaction Processor may also be encrypted using methods such as SSL.
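As a sketch of this transport-level protection, the following hypothetical example wraps an ATM-to-processor connection in TLS using Python's standard ssl module; the host name and port are placeholders, not a real endpoint.

    # Minimal sketch: protecting ATM-to-host traffic with TLS (Python standard library).
    # "processor.example.net" and port 7001 are placeholders, not a real endpoint.
    import socket
    import ssl

    context = ssl.create_default_context()      # verifies the server certificate chain
    with socket.create_connection(("processor.example.net", 7001)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="processor.example.net") as tls_sock:
            tls_sock.sendall(b"0200...")        # the encoded authorization request
            response = tls_sock.recv(4096)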
Global use
There are no hard international or government-compiled numbers totaling the complete number of ATMs in use worldwide. Estimates developed by ATMIA placed the number of ATMs in use at 3 million units, or approximately 1 ATM per 3,000 people in the world.
To simplify the analysis of ATM usage around the world, financial institutions generally divide the world into seven regions, based on the penetration rates, usage statistics, and features deployed. Four regions (USA, Canada, Europe, and Japan) have high numbers of ATMs per million people. Despite the large number of ATMs, there is additional demand for machines in the Asia/Pacific area as well as in Latin America. Macau may have the highest density of ATMs at 254 ATMs per 100,000 adults.
With the uptake of cashless payment solutions in the late 2010s, ATM numbers and usage started to decline. This happened first in developed countries, at a time when ATM numbers were still increasing in Asia and Africa. Since then, there has been a global decline in the number of ATMs in use, with the average dropping to 39 per 100,000 adults from a peak of 41 per 100,000 adults in 2020.
Hardware
An ATM is typically made up of the following devices:
CPU (to control the user interface and transaction devices)
Magnetic or chip card reader (to identify the customer)
PIN pad for accepting and encrypting the personal identification number (an encrypting PIN pad such as the EPP4, similar in layout to a touch-tone or calculator keypad), manufactured as part of a secure enclosure
Secure cryptoprocessor, generally within a secure enclosure
Display (used by the customer for performing the transaction)
Function key buttons (usually close to the display) or a touchscreen (used to select the various aspects of the transaction)
Record printer (to provide the customer with a record of the transaction)
Vault (to store the parts of the machinery requiring restricted access)
Housing (for aesthetics and to attach signage to)
Sensors and indicators
Due to heavier computing demands and the falling price of personal computer–like architectures, ATMs have moved away from custom hardware architectures using microcontrollers or application-specific integrated circuits and have adopted the hardware architecture of a personal computer, such as USB connections for peripherals, Ethernet and IP communications, and use personal computer operating systems.
Business owners often lease ATMs from service providers. However, based on the economies of scale, the price of equipment has dropped to the point where many business owners are simply paying for ATMs using a credit card.
New ADA voice and text-to-speech guidelines, imposed in 2010 but required by March 2012, have forced many ATM owners to either upgrade non-compliant machines or dispose of them if they are not upgradable, and purchase new compliant equipment. This has created an avenue for hackers and thieves to obtain ATM hardware at junkyards from improperly disposed-of decommissioned machines.
The vault of an ATM is within the footprint of the device itself and is where items of value are kept. Scrip cash dispensers, which print a receipt or scrip instead of cash, do not incorporate a vault.
Mechanisms found inside the vault may include:
Dispensing mechanism (to provide cash or other items of value)
Deposit mechanism including a cheque processing module and bulk note acceptor (to allow the customer to make deposits)
Security sensors (magnetic, thermal, seismic, gas)
Locks (to control access to the contents of the vault)
Journaling systems; many are electronic (a sealed flash memory device based on in-house standards) or mechanical (an actual printer), and accrue all records of activity including access timestamps, number of notes dispensed, etc. This is considered sensitive data and is secured in a similar fashion to the cash, as it is a similar liability (a minimal sketch of such a journal record follows this list).
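Journal formats are defined by each manufacturer's in-house standards, so the following is only a hypothetical illustration of the kind of record an electronic journal might accrue; chaining each entry to a hash of the previous one is one way to make later tampering detectable.

    # Hypothetical illustration of an electronic journal entry; real journal formats
    # are defined by each manufacturer's in-house standards. Hash-chaining each entry
    # to the previous one makes after-the-fact tampering detectable.
    import hashlib
    import json
    import time

    def append_entry(journal, event, notes_dispensed, prev_hash):
        entry = {
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "event": event,                      # e.g. "WITHDRAWAL", "DOOR_OPENED"
            "notes_dispensed": notes_dispensed,
            "prev_hash": prev_hash,
        }
        entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        journal.append(entry)
        return entry["hash"]

    journal = []
    h = append_entry(journal, "WITHDRAWAL", 5, prev_hash="0" * 64)
    h = append_entry(journal, "DOOR_OPENED", 0, prev_hash=h)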
ATM vaults are supplied by manufacturers in several grades. Factors influencing vault grade selection include cost, weight, regulatory requirements, ATM type, operator risk avoidance practices and internal volume requirements. Industry standard vault configurations include Underwriters Laboratories UL-291 "Business Hours" and Level 1 Safes, RAL TL-30 derivatives, and CEN EN 1143-1 - CEN III and CEN IV.
ATM manufacturers recommend that a vault be attached to the floor to prevent theft, though there is a record of a theft conducted by tunnelling into an ATM floor.
Software
With the migration to commodity Personal Computer hardware, standard commercial "off-the-shelf" operating systems and programming environments can be used inside of ATMs. Typical platforms previously used in ATM development include RMX or OS/2.
Today, the vast majority of ATMs worldwide use Microsoft Windows. In early 2014, 95% of ATMs were running Windows XP. A small number of deployments may still be running older versions of the Windows OS, such as Windows NT, Windows CE, or Windows 2000, even though Microsoft still supports only Windows 10 and Windows 11.
There is a computer industry security view that general public desktop operating systems have greater risks as operating systems for cash dispensing machines than other types of operating systems like (secure) real-time operating systems (RTOS). RISKS Digest has many articles about ATM operating system vulnerabilities.
Linux is also finding some reception in the ATM marketplace. An example of this is Banrisul, the largest bank in the south of Brazil, which has replaced the MS-DOS operating systems in its ATMs with Linux. Banco do Brasil is also migrating ATMs to Linux. Indian-based Vortex Engineering is manufacturing ATMs that operate only with Linux.
Common application layer transaction protocols, such as Diebold 91x (911 or 912) and NCR NDC or NDC+ provide emulation of older generations of hardware on newer platforms with incremental extensions made over time to address new capabilities, although companies like NCR continuously improve these protocols issuing newer versions (e.g. NCR's AANDC v3.x.y, where x.y are subversions). Most major ATM manufacturers provide software packages that implement these protocols. Newer protocols such as IFX have yet to find wide acceptance by transaction processors.
With the move to a more standardised software base, financial institutions have been increasingly interested in the ability to pick and choose the application programs that drive their equipment. WOSA/XFS, now known as CEN XFS (or simply XFS), provides a common API for accessing and manipulating the various devices of an ATM. J/XFS is a Java implementation of the CEN XFS API.
While the perceived benefit of XFS is similar to Java's "write once, run anywhere" mantra, different ATM hardware vendors often have different interpretations of the XFS standard. As a result of these differences in interpretation, ATM applications typically use middleware to even out the differences among the various platforms.
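CEN XFS itself is a C API, and the class and method names in the sketch below are purely illustrative rather than part of the standard, but it shows why such middleware is useful: the application codes against one vendor-neutral interface while each vendor-specific adapter absorbs that vendor's interpretation of the standard.

    # Hypothetical sketch of a middleware layer over vendor XFS differences.
    # Names are illustrative only; CEN XFS is a C API (functions such as WFSExecute).
    from abc import ABC, abstractmethod

    class CashDispenser(ABC):
        @abstractmethod
        def dispense(self, amount_cents: int) -> None: ...

    class VendorADispenser(CashDispenser):
        def dispense(self, amount_cents: int) -> None:
            # would call this vendor's XFS service provider, e.g. via WFSExecute
            print(f"Vendor A dispensing {amount_cents} cents")

    class VendorBDispenser(CashDispenser):
        def dispense(self, amount_cents: int) -> None:
            # a second vendor with its own interpretation of the standard
            print(f"Vendor B dispensing {amount_cents} cents")

    def withdraw(dispenser: CashDispenser, amount_cents: int) -> None:
        dispenser.dispense(amount_cents)    # application logic stays vendor-neutral

    withdraw(VendorADispenser(), 5000)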
With the onset of Windows operating systems and XFS on ATMs, the software applications have the ability to become more intelligent. This has created a new breed of ATM applications commonly referred to as programmable applications. These types of applications allow for an entirely new host of applications in which the ATM terminal can do more than only communicate with the ATM switch. It is now empowered to connect to other content servers and video banking systems.
Notable ATM software that operates on XFS platforms include Triton PRISM, Diebold Agilis EmPower, NCR APTRA Edge, Absolute Systems AbsoluteINTERACT, KAL Kalignite Software Platform, Phoenix Interactive VISTAatm, Wincor Nixdorf ProTopas, Euronet EFTS and Intertech inter-ATM.
With the move of ATMs to industry-standard computing environments, concern has risen about the integrity of the ATM's software stack.
Impact on labor
The number of tellers in the United States increased from approximately 300,000 in 1970 to approximately 600,000 in 2010. A contributing factor may have been the introduction of automated teller machines. ATMs allow a branch to operate with fewer tellers, making it more economical for banks to open more branches, necessitating more tellers to staff those additional branches. Further automation and online banking, however, may reverse this increase resulting in a trend toward fewer bank teller positions.
Security
ATM security has several dimensions. ATMs also provide a practical demonstration of a number of security systems and concepts operating together and how various security concerns are addressed.
Physical
Early ATM security focused on making the terminals invulnerable to physical attack; they were effectively safes with dispenser mechanisms. A number of attacks resulted, with thieves attempting to steal entire machines by ram-raiding. Since the late 1990s, criminal groups operating in Japan have refined ram-raiding by stealing a truck loaded with heavy construction machinery and using it to demolish or uproot an entire ATM, housing and all, in order to steal its cash.
Another attack method, plofkraak (a Dutch term), is to seal all openings of the ATM with silicone and fill the vault with a combustible gas or to place an explosive inside, attached, or near the machine. This gas or explosive is ignited and the vault is opened or distorted by the force of the resulting explosion and the criminals can break in.
ATM bombings began in the Netherlands, but as the nation reduced the number of machines in the country from 20,000 to 5,000 and discouraged cash use, the mostly Moroccan-Dutch gangs expert in the attacks moved elsewhere. Such theft has also occurred in Belgium, France, Denmark, Germany, Australia, and the United Kingdom. When anti-gas explosion prevention devices and reinforced ATMs were installed, criminals began using leaf blowers to remove smoke, and more powerful solid explosives. Despite German banks spending more than €300 million on additional security, the Federal Criminal Police Office estimated that 60% of attacks on ATMs in the country succeeded.
Several attacks in the UK (at least one of which was successful) have involved digging a concealed tunnel under the ATM and cutting through the reinforced base to remove the money.
Modern ATM physical security, as with other modern money-handling security, concentrates on denying a thief the use of the money inside the machine, by using different types of Intelligent Banknote Neutralisation Systems.
A common method is to simply rob the staff filling the machine with money. To avoid this, the schedule for filling them is kept secret, varying and random. The money is often kept in cassettes, which will dye the money if incorrectly opened.
Transactional secrecy and integrity
The security of ATM transactions relies mostly on the integrity of the secure cryptoprocessor: the ATM often uses general commodity components that sometimes are not considered to be "trusted systems".
Encryption of personal information, required by law in many jurisdictions, is used to prevent fraud. Sensitive data in ATM transactions are usually encrypted with DES, but transaction processors now usually require the use of Triple DES. Remote Key Loading techniques may be used to ensure the secrecy of the initialisation of the encryption keys in the ATM. Message Authentication Code (MAC) or Partial MAC may also be used to ensure messages have not been tampered with while in transit between the ATM and the financial network.
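As a concrete illustration of what is encrypted, the following sketch builds the widely documented ISO 9564 format-0 clear PIN block from a dummy PIN and PAN; in a real ATM this block is constructed and immediately encrypted under (Triple) DES inside the encrypting PIN pad, and the key management and encryption steps are omitted here.

    # Minimal sketch of an ISO 9564 format-0 (ISO-0) clear PIN block, using dummy values.
    # In a real ATM this block is built and immediately encrypted under (Triple) DES
    # inside the encrypting PIN pad; key management and the 3DES step are omitted here.
    def iso0_pin_block(pin: str, pan: str) -> bytes:
        pin_field = f"0{len(pin)}{pin}".ljust(16, "F")   # e.g. "041234FFFFFFFFFF"
        pan_field = "0000" + pan[:-1][-12:]              # rightmost 12 digits, excluding check digit
        block = int(pin_field, 16) ^ int(pan_field, 16)  # XOR the two 64-bit fields
        return block.to_bytes(8, "big")

    print(iso0_pin_block("1234", "4111111111111111").hex().upper())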
Customer identity integrity
There have also been a number of incidents of fraud by man-in-the-middle attacks, where criminals have attached fake keypads or card readers to existing machines. These have then been used to record customers' PINs and bank card information in order to gain unauthorised access to their accounts. Various ATM manufacturers have put in place countermeasures to protect the equipment they manufacture from these threats.
Alternative methods to verify cardholder identities have been tested and deployed in some countries, such as finger and palm vein patterns, iris recognition, and facial recognition technologies. Cheaper mass-produced equipment has been developed and is being installed in machines globally that detect the presence of foreign objects on the front of ATMs; current tests have shown 99% detection success for all types of skimming devices.
Device operation integrity
Openings on the customer side of ATMs are often covered by mechanical shutters to prevent tampering with the mechanisms when they are not in use. Alarm sensors are placed inside ATMs and their servicing areas to alert their operators when doors have been opened by unauthorised personnel.
To protect against hackers, ATMs have a built-in firewall. Once the firewall has detected malicious attempts to break into the machine remotely, the firewall locks down the machine.
Rules are usually set by the government or the ATM operating body that dictate what happens when integrity systems fail. Depending on the jurisdiction, a bank may or may not be liable when an attempt is made to dispense a customer's money from an ATM and the money either gets outside of the ATM's vault, or was exposed in a non-secure fashion, or the bank is unable to determine the state of the money after a failed transaction. Customers have often complained that it is difficult to recover money lost in this way, and recovery is often complicated by the policies regarding suspicious activities typical of the criminal element.
Customer security
In some countries, multiple security cameras and security guards are a common feature. In the United States, The New York State Comptroller's Office has advised the New York State Department of Banking to have more thorough safety inspections of ATMs in high crime areas.
Consultants of ATM operators assert that the issue of customer security should receive more attention from the banking industry; it has been suggested that efforts are now more concentrated on the preventive measure of deterrent legislation than on the problem of ongoing forced withdrawals.
At least as far back as 30 July 1986, industry consultants have advocated the adoption of an emergency PIN system for ATMs, where the user is able to send a silent alarm in response to a threat. Legislative efforts to require an emergency PIN system have appeared in Illinois, Kansas and Georgia, but none has succeeded so far. In January 2009, Senate Bill 1355 was proposed in the Illinois Senate to revisit the issue of the reverse emergency PIN system. The bill is again supported by the police and opposed by the banking lobby.
In 1998, three towns outside Cleveland, Ohio, in response to an ATM crime wave, adopted legislation requiring that an emergency telephone number switch be installed at all outdoor ATMs within their jurisdiction. In the wake of a homicide in Sharon Hill, Pennsylvania, the city council passed an ATM security bill as well.
In China and elsewhere, many efforts to promote security have been made. On-premises ATMs are often located inside the bank's lobby, which may be accessible 24 hours a day. These lobbies have extensive security camera coverage, a courtesy telephone for consulting with the bank staff, and a security guard on the premises. Bank lobbies that are not guarded 24 hours a day may also have secure doors that can only be opened from outside by swiping the bank card against a wall-mounted scanner, allowing the bank to identify which card enters the building. Most ATMs will also display on-screen safety warnings and may also be fitted with convex mirrors above the display allowing the user to see what is happening behind them.
As of 2013, the only claim available about the extent of ATM-connected homicides is that they range from 500 to 1,000 per year in the US, covering only cases where the victim had an ATM card and the card was used by the killer after the known time of death.
Jackpotting
Jackpotting is a term used to describe one method criminals use to steal money from an ATM. The thieves gain physical access through a small hole drilled in the machine. They disconnect the existing hard drive and connect an external drive using an industrial endoscope. They then press an internal button that reboots the device so that it is under the control of the external drive, after which they can have the ATM dispense all of its cash.
Encryption
In recent years, many ATMs also encrypt the hard disk. This means that actually creating the software for jackpotting is more difficult, and provides more security for the ATM.
Uses
ATMs were originally developed as cash dispensers, and have evolved to provide many other bank-related functions:
Paying routine bills, fees, and taxes (utilities, phone bills, social security, legal fees, income taxes, etc.)
Printing or ordering bank statements
Updating passbooks
Cash advances
Cheque Processing Module
Paying (in full or partially) the credit balance on a card linked to a specific current account.
Transferring money between linked accounts
Deposit currency recognition, acceptance, and recycling
In some countries, especially those which benefit from a fully integrated cross-bank network (e.g.: Multibanco in Portugal), ATMs include many functions that are not directly related to the management of one's own bank account, such as:
Loading monetary value into stored-value cards
Adding pre-paid cell phone / mobile phone credit
Purchasing
Concert tickets
Gold
Lottery tickets
Movie tickets
Postage stamps
Train tickets
Shopping mall gift certificates
Donating to charities
Increasingly, banks are seeking to use the ATM as a sales device to deliver pre-approved loans and targeted advertising using products such as ITM (the Intelligent Teller Machine) from NCR's Aptra Relate. ATMs can also act as an advertising channel for other companies.
However, several different ATM technologies have not yet reached worldwide acceptance, such as:
Videoconferencing with human tellers, known as video tellers
Biometrics, where authorization of transactions is based on the scanning of a customer's fingerprint, iris, face, etc.
Cheque/cash acceptance, where the machine accepts and recognises cheques and/or currency without using envelopes; this is expected to grow in importance in the US through Check 21 legislation
Bar code scanning
On-demand printing of "items of value" (such as movie tickets, traveler's cheques, etc.)
Dispensing additional media (such as phone cards)
Co-ordination of ATMs with mobile phones
Integration with non-banking equipment
Games and promotional features
CRM through the ATM
Videoconferencing teller machines are currently referred to as Interactive Teller Machines. Benton Smith writes in the Idaho Business Review, "The software that allows interactive teller machines to function was created by a Salt Lake City-based company called uGenius, a producer of video banking software. NCR, a leading manufacturer of ATMs, acquired uGenius in 2013 and married its own ATM hardware with uGenius' video software."
Pharmacy dispensing units
Reliability
Before an ATM is placed in a public place, it typically has undergone extensive testing with both test money and the backend computer systems that allow it to perform transactions. Banking customers also have come to expect high reliability in their ATMs, which provides incentives to ATM providers to minimise machine and network failures. Financial consequences of incorrect machine operation also provide high degrees of incentive to minimise malfunctions.
ATMs and the supporting electronic financial networks are generally very reliable, with industry benchmarks typically producing 98.25% customer availability for ATMs and up to 99.999% availability for host systems that manage the networks of ATMs. If ATM networks do go out of service, customers could be left unable to make transactions until their bank next opens for business.
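Translating these availability figures into downtime makes the difference concrete; the short calculation below uses only the percentages quoted in this section.

    # Translating the availability figures quoted above into yearly downtime.
    def downtime_per_year(availability_percent: float) -> float:
        hours_per_year = 365.25 * 24
        return (1 - availability_percent / 100) * hours_per_year

    print(f"{downtime_per_year(98.25):.1f} hours/year at 98.25% (typical ATM)")
    print(f"{downtime_per_year(99.999) * 60:.1f} minutes/year at 99.999% (host systems)")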
This said, not all errors are to the detriment of customers; there have been cases of machines giving out money without debiting the account, or giving out higher value notes as a result of incorrect denomination of banknote being loaded in the money cassettes. The result of receiving too much money may be influenced by the card holder agreement in place between the customer and the bank.
Errors that can occur may be mechanical (such as card transport mechanisms; keypads; hard disk failures; envelope deposit mechanisms); software (such as operating system; device driver; application); communications; or purely down to operator error.
To aid in reliability, some ATMs print each transaction to a roll-paper journal that is stored inside the ATM, which allows its users and the related financial institutions to settle things based on the records in the journal in case there is a dispute. In some cases, transactions are posted to an electronic journal to remove the cost of supplying journal paper to the ATM and for more convenient searching of data.
Improper money checking can result in a customer receiving counterfeit banknotes from an ATM. While bank personnel are generally better trained at spotting and removing counterfeit cash, ATM money supplies provide no guarantee that every banknote is genuine, as the Federal Criminal Police Office of Germany has confirmed that counterfeit banknotes are regularly dispensed through ATMs. Some ATMs may be stocked and wholly owned by outside companies, which can further complicate this problem.
Bill validation technology can be used by ATM providers to help ensure the authenticity of the cash before it is stocked in the machine; machines with cash recycling capabilities include this technology.
In India, whenever a transaction fails at an ATM due to network or technical issues and the amount is not dispensed even though the account has been debited, the bank is supposed to return the debited amount to the customer within seven working days from the day the complaint is received. Banks are also liable to pay late fees if repayment is delayed beyond seven days.
Fraud
As with any device containing objects of value, ATMs and the systems they depend on to function are the targets of fraud. Fraud against ATMs and people's attempts to use them takes several forms.
The first known instance of a fake ATM was installed at a shopping mall in Manchester, Connecticut, in 1993. By modifying the inner workings of a Fujitsu model 7020 ATM, a criminal gang known as the Bucklands Boys stole information from cards inserted into the machine by customers.
WAVY-TV reported an incident in Virginia Beach in September 2006 where a hacker, who had probably obtained a factory-default administrator password for a filling station's white-label ATM, caused the unit to assume it was loaded with US$5 bills instead of $20s, enabling himself—and many subsequent customers—to walk away with four times the money withdrawn from their accounts. This type of scam was featured on the TV series The Real Hustle.
ATM behaviour can change during what is called "stand-in" time, where the bank's cash dispensing network is unable to access databases that contain account information (possibly for database maintenance). In order to give customers access to cash, customers may be allowed to withdraw cash up to a certain amount that may be less than their usual daily withdrawal limit, but may still exceed the amount of available money in their accounts, which could result in fraud if the customers intentionally withdraw more money than they had in their accounts.
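A rough sketch of this stand-in logic is shown below; the limit values are invented for illustration and do not come from any network's actual rules.

    # Sketch of the "stand-in" logic described above: when the account database is
    # unreachable, the switch approves withdrawals up to a reduced limit instead of
    # checking the real balance. Limit values are illustrative only.
    STAND_IN_LIMIT = 100_00        # cents; lower than the usual daily limit
    DAILY_LIMIT = 500_00

    def authorize(amount, balance=None, database_up=True):
        if database_up:
            return amount <= DAILY_LIMIT and amount <= balance
        # stand-in: the balance cannot be checked, so only the reduced cap applies,
        # which is why intentional over-withdrawal becomes possible
        return amount <= STAND_IN_LIMIT

    print(authorize(80_00, database_up=False))   # approved without a balance check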
Card fraud
In an attempt to prevent criminals from shoulder surfing the customer's personal identification number (PIN), some banks draw privacy areas on the floor.
For a low-tech form of fraud, the easiest is to simply steal a customer's card along with its PIN. A later variant of this approach is to trap the card inside of the ATM's card reader with a device often referred to as a Lebanese loop. When the customer gets frustrated by not getting the card back and walks away from the machine, the criminal is able to remove the card and withdraw cash from the customer's account, using the card and its PIN.
This type of fraud has spread globally. Although somewhat replaced in terms of volume by skimming incidents, a re-emergence of card trapping has been noticed in regions such as Europe, where EMV chip and PIN cards have increased in circulation.
Another simple form of fraud involves attempting to get the customer's bank to issue a new card and its PIN and stealing them from their mail.
By contrast, a newer high-tech method of operating, sometimes called card skimming or card cloning, involves the installation of a magnetic card reader over the real ATM's card slot and the use of a wireless surveillance camera or a modified digital camera or a false PIN keypad to observe the user's PIN. Card data is then cloned into a duplicate card and the criminal attempts a standard cash withdrawal. The availability of low-cost commodity wireless cameras, keypads, card readers, and card writers has made it a relatively simple form of fraud, with comparatively low risk to the fraudsters.
In an attempt to stop these practices, countermeasures against card cloning have been developed by the banking industry, in particular by the use of smart cards which cannot easily be copied or spoofed by unauthenticated devices, and by attempting to make the outside of their ATMs tamper evident. Older chip-card security systems include the French Carte Bleue, Visa Cash, Mondex, Blue from American Express and EMV '96 or EMV 3.11. The most actively developed form of smart card security in the industry today is known as EMV 2000 or EMV 4.x.
EMV is widely used in the UK (Chip and PIN) and other parts of Europe, but when it is not available in a specific area, ATMs must fall back to using the easy-to-copy magnetic stripe to perform transactions. This fallback behaviour can be exploited. However, the fallback option has been removed on the ATMs of some UK banks, meaning that if the chip is not read, the transaction will be declined.
Card cloning and skimming can be detected by the implementation of magnetic card reader heads and firmware that can read a signature embedded in all magnetic stripes during the card production process. This signature, known as a "MagnePrint" or "BluPrint", can be used in conjunction with common two-factor authentication schemes used in ATM, debit/retail point-of-sale and prepaid card applications.
The concept and various methods of copying the contents of an ATM card's magnetic stripe onto a duplicate card to access other people's financial information were well known in the hacking communities by late 1990.
In 1996, Andrew Stone, a computer security consultant from Hampshire in the UK, was convicted of stealing more than £1 million by pointing high-definition video cameras at ATMs from a considerable distance and recording the card numbers, expiry dates, etc. from the embossed detail on the ATM cards along with video footage of the PINs being entered. After getting all the information from the videotapes, he was able to produce clone cards which not only allowed him to withdraw the full daily limit for each account, but also allowed him to sidestep withdrawal limits by using multiple copied cards. In court, it was shown that he could withdraw as much as £10,000 per hour by using this method. Stone was sentenced to five years and six months in prison.
Related devices
A talking ATM is a type of ATM that provides audible instructions so that people who cannot read a screen can use the machine independently, effectively eliminating the need for assistance from an external, potentially malevolent source. All audible information is delivered privately through a standard headphone jack on the face of the machine. Alternatively, some banks, such as Nordea and Swedbank, use a built-in external speaker which may be invoked by pressing the talk button on the keypad. Information is delivered to the customer either through pre-recorded sound files or via text-to-speech speech synthesis.
A postal interactive kiosk may share many components of an ATM (including a vault), but it only dispenses items related to postage.
A scrip cash dispenser or cashless ATM may have many components in common with an ATM, but it lacks the ability to dispense physical cash and consequently requires no vault. Instead, the customer requests a withdrawal transaction from the machine, which prints a receipt or scrip. The customer then takes this receipt to a nearby sales clerk, who then exchanges it for cash from the till.
A teller assist unit (TAU) is distinct in that it is designed to be operated solely by trained personnel and not by the general public, does integrate directly into interbank networks, and usually is controlled by a computer that is not directly integrated into the overall construction of the unit.
A Web ATM is an online interface for ATM card banking that uses a smart card reader. All the usual ATM functions are available, except for withdrawing cash. Most banks in Taiwan provide these online services.
See also
ATM Industry Association (ATMIA)
Automated cash handling
Banknote counter
Bitcoin ATM
Cash register
EFTPOS
Electronic funds transfer
Financial cryptography
Key management
Payroll
Phantom withdrawal
RAS syndrome
Self service
Teller system
Verification and validation
References
Further reading
Ali, Peter Ifeanyichukwu. "Impact of automated teller machine on banking services delivery in Nigeria: a stakeholder analysis." Brazilian Journal of Education, Technology and Society 9.1 (2016): 64–72. online
Bátiz-Lazo, Bernardo. Cash and Dash: How ATMs and Computers Changed Banking (Oxford University Press, 2018). online review
Batiz-Lazo, Bernardo. "Emergence and evolution of ATM networks in the UK, 1967–2000." Business History 51.1 (2009): 1-27. online
Batiz-Lazo, Bernardo, and Gustavo del Angel. The Dawn of the Plastic Jungle: The Introduction of the Credit Card in Europe and North America, 1950-1975 (Hoover Institution, 2016), abstract
Bessen, J. Learning by Doing: The Real Connection between Innovation, Wages, and Wealth (Yale UP, 2015)
Hota, Jyotiranjan, Saboohi Nasim, and Sasmita Mishra. "Drivers and Barriers to Adoption of Multivendor ATM Technology in India: Synthesis of Three Empirical Studies." Journal of Technology Management for Growing Economies 9.1 (2018): 89–102. online
McDysan, David E., and Darren L. Spohn. ATM theory and applications (McGraw-Hill Professional, 1998).
Mkpojiogu, Emmanuel OC, and A. Asuquo. "The user experience of ATM users in Nigeria: a systematic review of empirical papers." Journal of Research in National Development (2018). online
Primary sources
"Interview with Mr. Don Wetzel, Co-Patente of the Automatic Teller Machine" (1995) online
External links
The Money Machines: an account of US cash machine history; by Ellen Florian, Fortune.com
World Map and Chart of Automated Teller Machines per 100,000 Adults by Lebanese-economy-forum, World Bank data
Computer-related introductions in 1967
Banking equipment
Banking technology
Embedded systems
British inventions
Payment systems
Articles containing video clips
1967 in economic history
20th-century inventions | ATM | [
"Technology",
"Engineering"
] | 9,745 | [
"Computer engineering",
"Embedded systems",
"Computer systems",
"Automation",
"Computer science",
"Information systems",
"Self-service",
"Automated teller machines"
] |
46,630 | https://en.wikipedia.org/wiki/Embedded%20system | An embedded system is a specialized computer system—a combination of a computer processor, computer memory, and input/output peripheral devices—that has a dedicated function within a larger mechanical or electronic system. It is embedded as part of a complete device often including electrical or electronic hardware and mechanical parts.
Because an embedded system typically controls physical operations of the machine that it is embedded within, it often has real-time computing constraints. Embedded systems control many devices in common use. It has been estimated that ninety-eight percent of all microprocessors manufactured are used in embedded systems.
Modern embedded systems are often based on microcontrollers (i.e. microprocessors with integrated memory and peripheral interfaces), but ordinary microprocessors (using external chips for memory and peripheral interface circuits) are also common, especially in more complex systems. In either case, the processor(s) used may be types ranging from general purpose to those specialized in a certain class of computations, or even custom designed for the application at hand. A common standard class of dedicated processors is the digital signal processor (DSP).
Since the embedded system is dedicated to specific tasks, design engineers can optimize it to reduce the size and cost of the product and increase its reliability and performance. Some embedded systems are mass-produced, benefiting from economies of scale.
Embedded systems range in size from portable personal devices such as digital watches and MP3 players to bigger machines like home appliances, industrial assembly lines, robots, transport vehicles, traffic light controllers, and medical imaging systems. Often they constitute subsystems of other machines like avionics in aircraft and astrionics in spacecraft. Large installations like factories, pipelines, and electrical grids rely on multiple embedded systems networked together. Generalized through software customization, embedded systems such as programmable logic controllers frequently comprise their functional units.
Embedded systems range from those low in complexity, with a single microcontroller chip, to very high with multiple units, peripherals and networks, which may reside in equipment racks or across large geographical areas connected via long-distance communications lines.
History
Background
The origins of the microprocessor and the microcontroller can be traced back to the MOS integrated circuit, which is an integrated circuit chip fabricated from MOSFETs (metal–oxide–semiconductor field-effect transistors) and was developed in the early 1960s. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor system could be contained on several MOS LSI chips.
The first multi-chip microprocessors, the Four-Phase Systems AL1 in 1969 and the Garrett AiResearch MP944 in 1970, were developed with multiple MOS LSI chips. The first single-chip microprocessor was the Intel 4004, released in 1971. It was developed by Federico Faggin, using his silicon-gate MOS technology, along with Intel engineers Marcian Hoff and Stan Mazor, and Busicom engineer Masatoshi Shima.
Development
One of the first recognizably modern embedded systems was the Apollo Guidance Computer, developed ca. 1965 by Charles Stark Draper at the MIT Instrumentation Laboratory. At the project's inception, the Apollo guidance computer was considered the riskiest item in the Apollo project as it employed the then newly developed monolithic integrated circuits to reduce the computer's size and weight.
An early mass-produced embedded system was the Autonetics D-17 guidance computer for the Minuteman missile, released in 1961. When the Minuteman II went into production in 1966, the D-17 was replaced with a new computer that represented the first high-volume use of integrated circuits.
Since these early applications in the 1960s, embedded systems have come down in price and there has been a dramatic rise in processing power and functionality. An early microprocessor, the Intel 4004 (released in 1971), was designed for calculators and other small systems but still required external memory and support chips. By the early 1980s, memory, input and output system components had been integrated into the same chip as the processor forming a microcontroller. Microcontrollers find applications where a general-purpose computer would be too costly. As the cost of microprocessors and microcontrollers fell, the prevalence of embedded systems increased.
A comparatively low-cost microcontroller may be programmed to fulfill the same role as a large number of separate components. With microcontrollers, it became feasible to replace, even in consumer products, expensive knob-based analog components such as potentiometers and variable capacitors with up/down buttons or knobs read out by a microprocessor. Although in this context an embedded system is usually more complex than a traditional solution, most of the complexity is contained within the microcontroller itself. Very few additional components may be needed and most of the design effort is in the software. Software prototype and test can be quicker compared with the design and construction of a new circuit not using an embedded processor.
Applications
Embedded systems are commonly found in consumer, industrial, automotive, home appliances, medical, telecommunication, commercial, aerospace and military applications.
Telecommunications systems employ numerous embedded systems from telephone switches for the network to cell phones at the end user. Computer networking uses dedicated routers and network bridges to route data.
Consumer electronics include MP3 players, television sets, mobile phones, video game consoles, digital cameras, GPS receivers, and printers. Household appliances, such as microwave ovens, washing machines and dishwashers, include embedded systems to provide flexibility, efficiency and features. Advanced heating, ventilation, and air conditioning (HVAC) systems use networked thermostats to more accurately and efficiently control temperature that can change by time of day and season. Home automation uses wired and wireless networking that can be used to control lights, climate, security, audio/visual, surveillance, etc., all of which use embedded devices for sensing and controlling.
Transportation systems from flight to automobiles increasingly use embedded systems. New airplanes contain advanced avionics such as inertial guidance systems and GPS receivers that also have considerable safety requirements. Spacecraft rely on astrionics systems for trajectory correction. Various electric motors — brushless DC motors, induction motors and DC motors — use electronic motor controllers. Automobiles, electric vehicles, and hybrid vehicles increasingly use embedded systems to maximize efficiency and reduce pollution. Other automotive safety systems using embedded systems include anti-lock braking system (ABS), electronic stability control (ESC/ESP), traction control (TCS) and automatic four-wheel drive.
Medical equipment uses embedded systems for monitoring, and various medical imaging (positron emission tomography (PET), single-photon emission computed tomography (SPECT), computed tomography (CT), and magnetic resonance imaging (MRI) for non-invasive internal inspections. Embedded systems within medical equipment are often powered by industrial computers.
Embedded systems are used for safety-critical systems in aerospace and defense industries. Unless connected to wired or wireless networks via on-chip 3G cellular or other methods for IoT monitoring and control purposes, these systems can be isolated from hacking and thus be more secure. For fire safety, the systems can be designed to have a greater ability to handle higher temperatures and continue to operate. In dealing with security, the embedded systems can be self-sufficient and be able to deal with cut electrical and communication systems.
Miniature wireless devices called motes are networked wireless sensors. Wireless sensor networking makes use of miniaturization made possible by advanced integrated circuit (IC) design to couple full wireless subsystems to sophisticated sensors, enabling people and companies to measure a myriad of things in the physical world and act on this information through monitoring and control systems. These motes are completely self-contained and will typically run off a battery source for years before the batteries need to be changed or charged.
Characteristics
Embedded systems are designed to perform a specific task, in contrast with general-purpose computers designed for multiple tasks. Some have real-time performance constraints that must be met, for reasons such as safety and usability; others may have low or no performance requirements, allowing the system hardware to be simplified to reduce costs.
Embedded systems are not always standalone devices. Many embedded systems are a small part within a larger device that serves a more general purpose. For example, the Gibson Robot Guitar features an embedded system for tuning the strings, but the overall purpose of the Robot Guitar is to play music. Similarly, an embedded system in an automobile provides a specific function as a subsystem of the car itself.
The program instructions written for embedded systems are referred to as firmware, and are stored in read-only memory or flash memory chips. They run with limited computer hardware resources: little memory, small or non-existent keyboard or screen.
User interfaces
Embedded systems range from no user interface at all, in systems dedicated to one task, to complex graphical user interfaces that resemble modern computer desktop operating systems. Simple embedded devices use buttons, light-emitting diodes (LED), graphic or character liquid-crystal displays (LCD) with a simple menu system. More sophisticated devices that use a graphical screen with touch sensing or screen-edge soft keys provide flexibility while minimizing space used: the meaning of the buttons can change with the screen, and selection involves the natural behavior of pointing at what is desired.
Some systems provide a user interface remotely with the help of a serial (e.g. RS-232) or network (e.g. Ethernet) connection. This approach extends the capabilities of the embedded system, avoids the cost of a display, simplifies the board support package (BSP) and allows designers to build a rich user interface on the PC. A good example of this is the combination of an embedded HTTP server running on an embedded device (such as an IP camera or a network router). The user interface is displayed in a web browser on a PC connected to the device.
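A minimal sketch of this remote-UI approach follows, using Python's standard library for brevity (an actual embedded device would more likely run a small HTTP server written in C); the page contents are dummy values.

    # Minimal sketch of the remote-UI idea above: the device runs a tiny HTTP server
    # and the user interface is just a web page rendered by a browser on the PC.
    # The temperature shown is a dummy value for illustration.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class StatusPage(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<html><body><h1>Device status</h1><p>Temperature: 41 C</p></body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), StatusPage).serve_forever()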
Processors in embedded systems
Examples of properties of typical embedded computers when compared with general-purpose counterparts, are low power consumption, small size, rugged operating ranges, and low per-unit cost. This comes at the expense of limited processing resources.
Numerous microcontrollers have been developed for embedded systems use. General-purpose microprocessors are also used in embedded systems, but generally, require more support circuitry than microcontrollers.
Ready-made computer boards
PC/104 and PC/104+ are examples of standards for ready-made computer boards intended for small, low-volume embedded and ruggedized systems. These are mostly x86-based and often physically small compared to a standard PC, although still quite large compared to most simple (8/16-bit) embedded systems. They may use DOS, FreeBSD, Linux, NetBSD, OpenHarmony or an embedded real-time operating system (RTOS) such as MicroC/OS-II, QNX or VxWorks.
In certain applications, where small size or power efficiency are not primary concerns, the components used may be compatible with those used in general-purpose x86 personal computers. Boards such as the VIA EPIA range help to bridge the gap by being PC-compatible but highly integrated, physically smaller or have other attributes making them attractive to embedded engineers. The advantage of this approach is that low-cost commodity components may be used along with the same software development tools used for general software development. Systems built in this way are still regarded as embedded since they are integrated into larger devices and fulfill a single role. Examples of devices that may adopt this approach are automated teller machines (ATM) and arcade machines, which contain code specific to the application.
However, most ready-made embedded systems boards are not PC-centered and do not use the ISA or PCI busses. When a system-on-a-chip processor is involved, there may be little benefit to having a standardized bus connecting discrete components, and the environment for both hardware and software tools may be very different.
One common design style uses a small system module, perhaps the size of a business card, holding high density BGA chips such as an ARM-based system-on-a-chip processor and peripherals, external flash memory for storage, and DRAM for runtime memory. The module vendor will usually provide boot software and make sure there is a selection of operating systems, usually including Linux and some real-time choices. These modules can be manufactured in high volume, by organizations familiar with their specialized testing issues, and combined with much lower volume custom mainboards with application-specific external peripherals. Prominent examples of this approach include Arduino and Raspberry Pi.
ASIC and FPGA SoC solutions
A system on a chip (SoC) contains a complete system on a single chip, consisting of multiple processors, multipliers, caches, different types of memory and commonly various peripherals such as interfaces for wired or wireless communication. Often graphics processing units (GPUs) and DSPs are included in such chips. SoCs can be implemented as an application-specific integrated circuit (ASIC) or using a field-programmable gate array (FPGA), which typically can be reconfigured.
ASIC implementations are common for very-high-volume embedded systems like mobile phones and smartphones. ASIC or FPGA implementations may be used for lower-volume embedded systems with special requirements for signal processing performance, interfaces and reliability, as in avionics.
Peripherals
Embedded systems talk with the outside world via peripherals, such as:
Serial communication interfaces (SCI): RS-232, RS-422, RS-485, etc.
Synchronous Serial Interface: I2C, SPI, SSC and ESSI (Enhanced Synchronous Serial Interface)
Universal Serial Bus (USB)
Media cards (SD cards, CompactFlash, etc.)
Network interface controller: Ethernet, WiFi, etc.
Fieldbuses: CAN bus, LIN-Bus, PROFIBUS, etc.
Timers: Phase-locked loops, programmable interval timers
General Purpose Input/Output (GPIO); a register-level sketch of driving a GPIO pin follows this list
Analog-to-digital and digital-to-analog converters
Debugging: JTAG, In-system programming, background debug mode interface port, BITP, and DB9 ports.
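As referenced in the GPIO item above, the sketch below shows the register-level style in which such peripherals are typically driven from C; the base address, register layout and pin number are hypothetical and would come from the specific microcontroller's reference manual.

#include <stdint.h>

/* Hypothetical memory-mapped GPIO block; real addresses and layouts are
   device-specific. */
#define GPIO_BASE 0x40020000u
#define GPIO_DIR  (*(volatile uint32_t *)(GPIO_BASE + 0x00))  /* direction register */
#define GPIO_OUT  (*(volatile uint32_t *)(GPIO_BASE + 0x04))  /* output data register */
#define LED_PIN   (1u << 5)

static void crude_delay(volatile uint32_t n) { while (n--) { } }

void blink_forever(void) {
    GPIO_DIR |= LED_PIN;          /* configure the pin as an output */
    for (;;) {
        GPIO_OUT ^= LED_PIN;      /* toggle the LED */
        crude_delay(100000);      /* busy-wait; a hardware timer would be used in practice */
    }
}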
Tools
As with other software, embedded system designers use compilers, assemblers, and debuggers to develop embedded system software. However, they may also use more specific tools:
In-circuit debuggers or emulators (see next section).
Utilities to add a checksum or CRC to a program, so the embedded system can check if the program is valid; a minimal CRC sketch follows this list.
For systems using digital signal processing, developers may use a computational notebook to simulate the mathematics.
System-level modeling and simulation tools help designers to construct simulation models of a system with hardware components such as processors, memories, DMA, interfaces, buses and software behavior flow as a state diagram or flow diagram using configurable library blocks. Simulation is conducted to select the right components by performing power vs. performance trade-offs, reliability analysis and bottleneck analysis. Typical reports that help a designer to make architecture decisions include application latency, device throughput, device utilization, power consumption of the full system as well as device-level power consumption.
A model-based development tool creates and simulates graphical data flow and UML state chart diagrams of components like digital filters, motor controllers, communication protocol decoding and multi-rate tasks.
Custom compilers and linkers may be used to optimize specialized hardware.
An embedded system may have its own special language or design tool, or add enhancements to an existing language such as Forth or Basic.
Another alternative is to add an RTOS or embedded operating system.
Modeling and code-generating tools, often based on state machines.
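As an illustration of the checksum utility mentioned in the list above, the sketch below computes a standard CRC-32 (reflected polynomial 0xEDB88320) over a firmware image held in memory; how the result is appended to the image and verified at boot is left out and varies between projects.

#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-32 over an in-memory firmware image. A build-time utility
   would append the value to the image; boot code recomputes it to check
   that the program is valid. */
uint32_t crc32(const uint8_t *data, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : (crc >> 1);
    }
    return ~crc;
}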
Software tools can come from several sources:
Software companies that specialize in the embedded market
Ported from the GNU software development tools
Sometimes, development tools for a personal computer can be used if the embedded processor is a close relative of a common PC processor
As the complexity of embedded systems grows, higher-level tools and operating systems are migrating into machinery where it makes sense. For example, cellphones, personal digital assistants and other consumer computers often need significant software that is purchased or provided by a person other than the manufacturer of the electronics. In these systems, an open programming environment such as Linux, NetBSD, FreeBSD, OSGi or Embedded Java is required so that the third-party software provider can sell to a large market.
Debugging
Embedded debugging may be performed at different levels, depending on the facilities available. Considerations include: whether debugging slows down the main application, how close the debugged system or application is to the actual system or application, how expressive the triggers are that can be set for debugging (e.g., inspecting the memory when a particular program counter value is reached), and what can be inspected in the debugging process (such as only memory, or memory and registers, etc.).
From simplest to most sophisticated, debugging techniques and systems are roughly grouped into the following areas:
Interactive resident debugging, using the simple shell provided by the embedded operating system (e.g. Forth and Basic)
Software-only debuggers have the benefit that they do not need any hardware modification but have to carefully control what they record in order to conserve time and storage space.
External debugging using logging or serial port output to trace operation, using either a monitor in flash or a debug server like the Remedy Debugger that even works for heterogeneous multicore systems.
An in-circuit debugger (ICD), a hardware device that connects to the microprocessor via a JTAG or Nexus interface. This allows the operation of the microprocessor to be controlled externally, but is typically restricted to specific debugging capabilities in the processor.
An in-circuit emulator (ICE) replaces the microprocessor with a simulated equivalent, providing full control over all aspects of the microprocessor.
A complete emulator provides a simulation of all aspects of the hardware, allowing all of it to be controlled and modified, and allowing debugging on a normal PC. The downsides are expense and slow operation, in some cases up to 100 times slower than the final system.
For SoC designs, the typical approach is to verify and debug the design on an FPGA prototype board. Tools such as Certus are used to insert probes in the FPGA implementation that make signals available for observation. This is used to debug hardware, firmware and software interactions across multiple FPGAs in an implementation with capabilities similar to a logic analyzer.
Unless restricted to external debugging, the programmer can typically load and run software through the tools, view the code running in the processor, and start or stop its operation. The view of the code may be as a high-level programming language, assembly code or a mixture of both.
Tracing
Real-time operating systems often support tracing of operating system events. A graphical view is presented by a host PC tool, based on a recording of the system behavior. The trace recording can be performed in software, by the RTOS, or by special tracing hardware. RTOS tracing allows developers to understand timing and performance issues of the software system and gives a good understanding of the high-level system behaviors. Trace recording in embedded systems can be achieved using hardware or software solutions. Software-based trace recording does not require specialized debugging hardware and can be used to record traces in deployed devices, but it can have an impact on CPU and RAM usage. One example of a software-based tracing method used in RTOS environments is the use of empty macros which are invoked by the operating system at strategic places in the code, and can be implemented to serve as hooks.
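A minimal sketch of the empty-macro hook technique described above is shown below; the hook names, event identifiers and the recorder function are illustrative, not the API of any particular RTOS.

#include <stdint.h>

/* Trace hooks that an operating system would invoke at strategic places.
   When tracing is disabled they compile away to nothing. */
#ifdef TRACE_ENABLED
void trace_record(uint8_t event_id, uint32_t arg);   /* hypothetical recorder */
#define TRACE_TASK_SWITCH(task_id)  trace_record(1u, (task_id))
#define TRACE_ISR_ENTER(irq_num)    trace_record(2u, (irq_num))
#else
#define TRACE_TASK_SWITCH(task_id)  ((void)0)
#define TRACE_ISR_ENTER(irq_num)    ((void)0)
#endif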
Reliability
Embedded systems often reside in machines that are expected to run continuously for years without error, and in some cases recover by themselves if an error occurs. Therefore, the software is usually developed and tested more carefully than that for personal computers, and unreliable mechanical moving parts such as disk drives, switches or buttons are avoided.
Specific reliability issues may include:
The system cannot safely be shut down for repair, or it is too inaccessible to repair. Examples include space systems, undersea cables, navigational beacons, bore-hole systems, and automobiles.
The system must be kept running for safety reasons. Reduced functionality in the event of failure may be intolerable. Often backups are selected by an operator. Examples include aircraft navigation, reactor control systems, safety-critical chemical factory controls, train signals.
The system will lose large amounts of money when shut down: Telephone switches, factory controls, bridge and elevator controls, funds transfer and market making, automated sales and service.
A variety of techniques are used, sometimes in combination, to recover from errors—both software bugs such as memory leaks, and also soft errors in the hardware:
A watchdog timer that resets and restarts the system unless the software periodically notifies the watchdog subsystem; a sketch of this technique follows this list
Designing with a trusted computing base (TCB) architecture ensures a highly secure and reliable system environment
A hypervisor designed for embedded systems is able to provide secure encapsulation for any subsystem component so that a compromised software component cannot interfere with other subsystems, or privileged-level system software. This encapsulation keeps faults from propagating from one subsystem to another, thereby improving reliability. This may also allow a subsystem to be automatically shut down and restarted on fault detection.
Immunity-aware programming can help engineers produce more reliable embedded systems code. Guidelines and coding rules such as MISRA C/C++ aim to help developers produce reliable, portable firmware in a number of different ways: typically by advising or mandating against coding practices which may lead to run-time errors (memory leaks, invalid pointer uses), use of run-time checks and exception handling (range/sanity checks, divide-by-zero and buffer index validity checks, default cases in logic checks), loop bounding, production of human-readable, well-commented and well-structured code, and avoidance of language ambiguities which may lead to compiler-induced inconsistencies or side-effects (expression evaluation ordering, recursion, certain types of macro). These rules can often be used in conjunction with static code checkers or bounded model checking for functional verification purposes, and also assist in the determination of code timing properties.
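As a sketch of the watchdog technique listed above, the code below kicks a hardware watchdog from the main loop only after application-level health checks pass, so a hung or corrupted program stops kicking it and the hardware resets the system; the register address, reload value and check functions are hypothetical.

#include <stdbool.h>
#include <stdint.h>

#define WDT_RELOAD     (*(volatile uint32_t *)0x40003000u)  /* hypothetical register */
#define WDT_KICK_VALUE 0x0000AAAAu                          /* hypothetical reload value */

bool system_healthy(void);   /* application-specific sanity checks */
void do_work(void);

void main_loop(void) {
    for (;;) {
        do_work();
        if (system_healthy())
            WDT_RELOAD = WDT_KICK_VALUE;   /* "pet" the watchdog */
        /* If this point is never reached, or the checks fail, the watchdog
           times out and forces a hardware reset. */
    }
}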
High vs. low volume
For high-volume systems such as mobile phones, minimizing cost is usually the primary design consideration. Engineers typically select hardware that is just good enough to implement the necessary functions.
For low-volume or prototype embedded systems, general-purpose computers may be adapted by limiting the programs or by replacing the operating system with an RTOS.
Embedded software architectures
In 1978, the National Electrical Manufacturers Association released ICS 3-1978, a standard for programmable microcontrollers, covering almost any computer-based controller, such as single-board computers and numerical and event-based controllers.
There are several different types of software architecture in common use.
Simple control loop
In this design, the software simply has a loop which monitors the input devices. The loop calls subroutines, each of which manages a part of the hardware or software. Hence it is called a simple control loop or programmed input-output.
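A minimal sketch of such a loop is shown below; the subroutine names are placeholders for the hardware- and application-specific code they would call.

/* The classic "super loop": no operating system, just polling subroutines
   called forever in sequence. */
void read_sensors(void);
void update_outputs(void);
void service_communications(void);

int main(void) {
    for (;;) {
        read_sensors();
        update_outputs();
        service_communications();
    }
}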
Interrupt-controlled system
Some embedded systems are predominantly controlled by interrupts. This means that tasks performed by the system are triggered by different kinds of events; an interrupt could be generated, for example, by a timer at a predefined interval, or by a serial port controller receiving data.
This architecture is used if event handlers need low latency, and the event handlers are short and simple. These systems also run a simple task in a main loop, but this task is not very sensitive to unexpected delays. Sometimes the interrupt handler will add longer tasks to a queue structure. Later, after the interrupt handler has finished, these tasks are executed by the main loop. This method brings the system close to a multitasking kernel with discrete processes.
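The sketch below illustrates the interrupt-plus-queue pattern just described: the interrupt handler does only the urgent work and defers the rest to the main loop through a small ring buffer. The interrupt vector wiring and the UART register access are target-specific and shown only as placeholders.

#include <stdint.h>

#define QUEUE_SIZE 16u
static volatile uint8_t queue[QUEUE_SIZE];
static volatile uint8_t head, tail;

/* Called by hardware for each received byte; kept short. */
void uart_rx_isr(void) {
    uint8_t byte = 0; /* placeholder for reading the UART data register */
    uint8_t next = (uint8_t)((head + 1u) % QUEUE_SIZE);
    if (next != tail) {        /* drop the byte if the queue is full */
        queue[head] = byte;
        head = next;
    }
}

int main(void) {
    for (;;) {
        while (tail != head) { /* drain work deferred by the interrupt handler */
            uint8_t byte = queue[tail];
            tail = (uint8_t)((tail + 1u) % QUEUE_SIZE);
            (void)byte;        /* process the byte outside interrupt context */
        }
        /* the non-time-critical main-loop task runs here */
    }
}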
Cooperative multitasking
Cooperative multitasking is very similar to the simple control loop scheme, except that the loop is hidden in an API. The programmer defines a series of tasks, and each task gets its own environment to run in. When a task is idle, it calls an idle routine which passes control to another task.
The advantages and disadvantages are similar to those of the control loop, except that adding new software is easier, by simply writing a new task or adding it to the queue.
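The sketch below is a simplified run-to-completion variant of this idea, in which "yielding" is simply returning to a tiny scheduler; real cooperative kernels keep a per-task context so a task can yield from the middle of a function. The task names are illustrative.

typedef void (*task_fn)(void);

/* Each task advances its own state machine a little and then returns,
   which is how it cooperates with the others. */
void task_blink(void)  { /* step the LED state machine */ }
void task_serial(void) { /* handle any pending serial bytes */ }

static task_fn tasks[] = { task_blink, task_serial };
#define NUM_TASKS (sizeof tasks / sizeof tasks[0])

int main(void) {
    for (;;) {
        for (unsigned i = 0; i < NUM_TASKS; i++)
            tasks[i]();   /* round-robin: each task must return promptly */
    }
}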
Preemptive multitasking or multi-threading
In this type of system, a low-level piece of code switches between tasks or threads based on a timer invoking an interrupt. This is the level at which the system is generally considered to have an operating system kernel. Depending on how much functionality is required, it introduces more or less of the complexities of managing multiple tasks running conceptually in parallel.
As any code can potentially damage the data of another task (except in systems using a memory management unit), programs must be carefully designed and tested, and access to shared data must be controlled by some synchronization strategy such as message queues, semaphores or a non-blocking synchronization scheme.
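The sketch below shows the shared-data problem and one synchronization strategy (a mutex) using POSIX threads, as found on embedded Linux; an RTOS would expose an equivalent mutex, semaphore or message-queue primitive under different names.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter;                 /* data shared between two tasks */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);          /* serialize access to shared state */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", shared_counter);  /* 200000 with the lock in place */
    return 0;
}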
Because of these complexities, it is common for organizations to use an off-the-shelf RTOS, allowing the application programmers to concentrate on device functionality rather than operating system services. The choice to include an RTOS brings in its own issues, however, as the selection must be made prior to starting the application development process. This timing forces developers to choose the embedded operating system for their device based on current requirements and so restricts future options to a large extent.
The level of complexity in embedded systems is continuously growing as devices are required to manage peripherals and tasks such as serial, USB, TCP/IP, Bluetooth, Wireless LAN, trunk radio, multiple channels, data and voice, enhanced graphics, multiple states, multiple threads, numerous wait states and so on. These trends are leading to the uptake of embedded middleware in addition to an RTOS.
Microkernels and exokernels
A microkernel allocates memory and switches the CPU to different threads of execution. User-mode processes implement major functions such as file systems, network interfaces, etc.
Exokernels communicate efficiently by normal subroutine calls. The hardware and all the software in the system are available to and extensible by application programmers.
Monolithic kernels
A monolithic kernel is a relatively large kernel with sophisticated capabilities adapted to suit an embedded environment. This gives programmers an environment similar to a desktop operating system like Linux or Microsoft Windows, and is therefore very productive for development. On the downside, it requires considerably more hardware resources, is often more expensive, and, because of the complexity of these kernels, can be less predictable and reliable.
Common examples of embedded monolithic kernels are embedded Linux, VxWorks and Windows CE.
Despite the increased cost in hardware, this type of embedded system is increasing in popularity, especially on the more powerful embedded devices such as wireless routers and GPS navigation systems.
Additional software components
In addition to the core operating system, many embedded systems have additional upper-layer software components. These components include networking protocol stacks like CAN, TCP/IP, FTP, HTTP, and HTTPS, and storage capabilities like FAT and flash memory management systems. If the embedded device has audio and video capabilities, then the appropriate drivers and codecs will be present in the system. In the case of the monolithic kernels, many of these software layers may be included in the kernel. In the RTOS category, the availability of additional software components depends upon the commercial offering.
Domain-specific architectures
In the automotive sector, AUTOSAR is a standard architecture for embedded software.
See also
Communications server
Cyber-physical system
Electronic control unit
Information appliance
Integrated development environment
Photonically Optimized Embedded Microprocessors
Silicon compiler
Software engineering
System on module
Ubiquitous computing
Notes
References
Further reading
External links
Embedded Systems course with mbed YouTube, ongoing from 2015
Trends in Cyber Security and Embedded Systems Dan Geer, November 2013
Modern Embedded Systems Programming Video Course YouTube, ongoing from 2013
Embedded Systems Week (ESWEEK) yearly event with conferences, workshops and tutorials covering all aspects of embedded systems and software
Workshop covering educational aspects of embedded systems
Developing Embedded Systems - A Tools Introduction | Embedded system | [
"Technology",
"Engineering"
] | 5,956 | [
"Embedded systems",
"Computer science",
"Computer engineering",
"Computer systems"
] |
46,633 | https://en.wikipedia.org/wiki/Turing%20tarpit | A Turing tarpit (or Turing tar-pit) is any programming language or computer interface that allows for flexibility in function but is difficult to learn and use because it offers little or no support for common tasks. The phrase was coined in 1982 by Alan Perlis in the Epigrams on Programming: "Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy."
In any Turing complete language, it is possible to write any computer program, so in a very rigorous sense nearly all programming languages are equally capable. However, having that theoretical ability is not the same as usefulness in practice. Turing tarpits are characterized by having a simple abstract machine that requires the user to deal with many details in the solution of a problem. At the extreme opposite are interfaces that can perform very complex tasks with little human intervention but become obsolete if requirements change slightly.
Some esoteric programming languages, such as Brainfuck or Malbolge, are specifically referred to as "Turing tarpits" because they deliberately implement the minimum functionality necessary to be classified as Turing complete languages. Using such languages is a form of mathematical recreation: programmers can work out how to achieve basic programming constructs in an extremely difficult but mathematically Turing-equivalent language.
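To show how small the abstract machine of such a tarpit is, the sketch below is a bare-bones Brainfuck interpreter in C (one data pointer, eight single-character commands); bounds checking and error handling are deliberately omitted, and the demo program simply prints the letter 'A'.

#include <stdio.h>

void run(const char *prog) {
    unsigned char tape[30000] = {0};
    unsigned char *p = tape;
    for (const char *ip = prog; *ip; ip++) {
        switch (*ip) {
        case '>': p++;         break;
        case '<': p--;         break;
        case '+': (*p)++;      break;
        case '-': (*p)--;      break;
        case '.': putchar(*p); break;
        case ',': *p = (unsigned char)getchar(); break;
        case '[':
            if (*p == 0) {                 /* skip forward past the matching ']' */
                int depth = 1;
                while (depth) {
                    ip++;
                    if (*ip == '[') depth++;
                    else if (*ip == ']') depth--;
                }
            }
            break;
        case ']':
            if (*p != 0) {                 /* jump back to the matching '[' */
                int depth = 1;
                while (depth) {
                    ip--;
                    if (*ip == ']') depth++;
                    else if (*ip == '[') depth--;
                }
            }
            break;
        }
    }
}

int main(void) {
    run("++++++++[>++++++++<-]>+.");       /* prints 'A' */
    return 0;
}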
See also
Greenspun's tenth rule
Zawinski's law of software envelopment
References
Further reading
G. Fischer, A.C. Lemke, "Constrained Design Processes: Steps Toward Convivial Computing", Technical Report CU-CS-369-87, University of Colorado, USA.
E.L. Hutchins, J.D. Hollan, D.A. Norman, "Direct Manipulation Interfaces", Human-Computer Interaction, 1985.
Esolangs, Turing Tarpit.
Alan Turing
Recreational mathematics
Theory of computation
Software engineering folklore | Turing tarpit | [
"Mathematics",
"Engineering"
] | 346 | [
"Software engineering",
"Recreational mathematics",
"Software engineering folklore"
] |
46,634 | https://en.wikipedia.org/wiki/Xenu | Xenu ( ), also called Xemu, is a figure in the Church of Scientology's secret "Advanced Technology", a sacred and esoteric teaching. According to the "Technology", Xenu was the extraterrestrial ruler of a "Galactic Confederacy" who brought billions of his people to Earth (then known as "Teegeeack") in DC-8-like spacecraft 75 million years ago, stacked them around volcanoes, and killed them with hydrogen bombs. Official Scientology scriptures hold that the thetans (immortal spirits) of these aliens adhere to humans, causing spiritual harm.
These events are known within Scientology as "Incident II", and the traumatic memories associated with them as "The Wall of Fire" or "R6 implant". The narrative of Xenu is part of Scientologist teachings about extraterrestrial civilizations and alien interventions in earthly events, collectively described as "space opera" by L. Ron Hubbard. Hubbard detailed the story in Operating Thetan level III (OT III) in 1967, warning that the "R6 implant" (past trauma) was "calculated to kill (by pneumonia, etc.) anyone who attempts to solve it".
The Church of Scientology normally only reveals the Xenu story to members who have completed a lengthy sequence of courses costing large amounts of money. The church avoids mention of Xenu in public statements and has gone to considerable effort to maintain the story's confidentiality, including legal action on the grounds of copyright and trade secrecy. Officials of the Church of Scientology widely deny or try to hide the Xenu story. Despite this, much material on Xenu has leaked to the public via court documents and copies of Hubbard's notes that have been distributed through the Internet.
In commentary on the impact of the Xenu text, academic scholars have discussed and analyzed Hubbard's writings, their place within Scientology, and relationship to science fiction, UFO religions, Gnosticism, and creation myths.
Summary
The story of Xenu is covered in OT III, part of Scientology's secret "Advanced Technology" doctrines taught only to advanced members who have undergone many hours of auditing and reached the state of Clear followed by Operating Thetan levels 1 and 2. It is described in more detail in the accompanying confidential "Assists" lecture of October 3, 1968, and is dramatized in Revolt in the Stars (a screen-story – in the form of a novel – written by L. Ron Hubbard in 1977).
Hubbard wrote that Xenu was the ruler of a Galactic Confederacy 75 million years ago, which consisted of 26 stars and 76 planets including Earth, which was then known as "Teegeeack". The planets were overpopulated, containing an average population of 178 billion. The Galactic Confederacy's civilization was comparable to our own, with aliens "walking around in clothes which looked very remarkably like the clothes they wear this very minute" and using cars, trains and boats looking exactly the same as those "circa 1950, 1960" on Earth.
Xenu was about to be deposed from power, so he devised a plot to eliminate the excess population from his dominions. With the assistance of psychiatrists, he gathered billions of his citizens under the pretense of income tax inspections, then paralyzed them and froze them in a mixture of alcohol and glycol to capture their souls. The kidnapped populace was loaded into spacecraft for transport to the site of extermination, the planet of Teegeeack (Earth). The appearance of these spacecraft would later be subconsciously expressed in the design of the Douglas DC-8, the only difference being that "the DC8 had fans, propellers on it and the space plane didn't". When they had reached Teegeeack, the paralyzed citizens were off-loaded, and placed around the bases of volcanoes across the planet. Hydrogen bombs were then lowered into the volcanoes and detonated simultaneously, killing all but a few aliens. Hubbard described the scene in his film script, Revolt in the Stars:
The now-disembodied victims' souls, which Hubbard called thetans, were blown into the air by the blast. They were captured by Xenu's forces using an "electronic ribbon" ("which also was a type of standing wave") and sucked into "vacuum zones" around the world. The hundreds of billions of captured thetans were taken to a type of cinema, where they were forced to watch a "three-D, super colossal motion picture" for thirty-six days. This implanted what Hubbard termed "various misleading data" (collectively termed the R6 implant) into the memories of the hapless thetans, "which has to do with God, the Devil, space opera, etcetera". This included all world religions; Hubbard specifically attributed Roman Catholicism and the image of the Crucifixion to the influence of Xenu. The two "implant stations" cited by Hubbard were said to have been located on Hawaii and Las Palmas in the Canary Islands.
In addition to implanting new beliefs in the thetans, the images deprived them of their sense of personal identity. When the thetans left the projection areas, they started to cluster together in groups of a few thousand, having lost the ability to differentiate between each other. Each cluster of thetans gathered into one of the few remaining bodies that survived the explosion. These became what are known as body thetans, which are said to be still clinging to and adversely affecting everyone except Scientologists who have performed the necessary steps to remove them.
A government faction known as the Loyal Officers finally overthrew Xenu and his renegades, and locked him away in "an electronic mountain trap" from which he has not escaped. Although the location of Xenu is sometimes said to be the Pyrenees on Earth, this is actually the location Hubbard gave elsewhere for an ancient "Martian report station". Teegeeack was subsequently abandoned by the Galactic Confederacy and remains a pariah "prison planet" to this day, although it has suffered repeatedly from incursions by alien "Invader Forces" since that time.
In 1988, the cost of learning these secrets from the Church of Scientology was £3,830, or US$6,500. This is in addition to the cost of the prior courses which are necessary to be eligible for OT III, which in 2006 was often well over US$100,000 (roughly £77,000). Belief in Xenu and body thetans is a requirement for a Scientologist to progress further along the Bridge to Total Freedom. Those who do not experience the benefits of the OT III course are expected to take it and pay for it again.
Scientology doctrine
Within Scientology, the Xenu story is referred to as "The Wall of Fire" or "Incident II". Hubbard attached tremendous importance to it, saying that it constituted "the secrets of a disaster which resulted in the decay of life as we know it in this sector of the galaxy". The broad outlines of the story—that 75 million years ago a great catastrophe happened in this sector of the galaxy which caused profoundly negative effects for everyone since then—are told to lower-level Scientologists; but the details are kept strictly confidential.
The OT III document asserts that Hubbard entered the Wall of Fire but emerged alive ("probably the only one ever to do so in 75,000,000 years"). He first publicly announced his "breakthrough" in Ron's Journal 67 (RJ67), a taped lecture recorded on September 20, 1967, to be sent to all Scientologists. According to Hubbard, his research was achieved at the cost of a broken back, knee, and arm. OT III contains a warning that the R6 implant is "calculated to kill (by pneumonia etc.) anyone who attempts to solve it". Hubbard claimed that his "tech development"—i.e. his OT materials—had neutralized this threat, creating a safe path to redemption.
The Church of Scientology forbids individuals from reading the OT III Xenu cosmogony without first having taken prerequisite courses. Scientologists warn that reading the Xenu story without proper authorization could cause pneumonia.
In RJ67, Hubbard alludes to the devastating effect of Xenu's purported genocide:
OT III also deals with Incident I, set four quadrillion years ago. (Scientific consensus places the age of the universe at approximately 13.8 billion years old.) In Incident I, the unsuspecting thetan was subjected to a loud snapping noise followed by a flood of luminescence, then saw a chariot followed by a trumpeting cherub. After a loud set of snaps, the thetan was overwhelmed by darkness. It is described that these traumatic memories alone separate thetans from their static (natural, godlike) state.
Hubbard uses the existence of body thetans to explain many of the physical and mental ailments of humanity which, he says, prevent people from achieving their highest spiritual levels. OT III tells the Scientologist to locate body thetans and release them from the effects of Incidents I and II. This is accomplished in solo auditing, where the Scientologist holds both cans of an E-meter in one hand and asks questions as an auditor. The Scientologist is directed to find a cluster of body thetans, address it telepathically as a cluster, and take first the cluster, then each individual member, through Incident II, then Incident I if needed. Hubbard warns that this is a painstaking procedure, and that OT levels IV to VII are necessary to continue dealing with one's body thetans.
The Church of Scientology has objected to the Xenu story being used to paint Scientology as science fiction fantasy. Hubbard's statements concerning the R6 implant have been a source of contention. Critics and some Christians state that Hubbard's statements regarding R6 prove that Scientology doctrine is incompatible with Christianity, despite the Church's statements to the contrary. In "Assists", Hubbard says:
Origins of the story
Hubbard wrote OT III in late 1966 and early 1967 in North Africa while on his way to Las Palmas to join the Enchanter, the first vessel of his private Scientology fleet. (OT III says "In December 1967 I knew someone had to take the plunge", but the material was publicized well before this.) He emphasized later that OT III was his own personal discovery.
Critics of Scientology have suggested that other factors may have been at work. In a letter of the time to his wife Mary Sue, Hubbard said that, in order to assist his research, he was drinking alcohol and taking stimulants and depressants ("I'm drinking lots of rum and popping pinks and greys"). His assistant at the time, Virginia Downsborough, said that she had to wean him off the diet of drugs to which he had become accustomed. Russell Miller posits in Bare-faced Messiah that it was important for Hubbard to be found in a debilitated condition, so as to present OT III as "a research accomplishment of immense magnitude".
Elements of the Xenu story appeared in Scientology before OT III. Hubbard's descriptions of extraterrestrial conflicts were put forward as early as 1950 in his book Have You Lived Before This Life?, and were enthusiastically endorsed by Scientologists who documented their past lives on other planets.
Influence of OT III on Scientology
The 1968 and subsequent reprints of Dianetics have had covers depicting an exploding volcano, which is reportedly a reference to OT III. In a 1968 lecture, and in instructions to his marketing staff, Hubbard explained that these images would "key in" the submerged memories of Incident II and impel people to buy the books:
Since the 1980s, the volcano has also been depicted in television commercials advertising Dianetics. Scientology's "Sea Org", an elite group within the church that originated with Hubbard's personal staff aboard his fleet of ships, takes many of its symbols from the story of Xenu and OT III. It is explicitly intended to be a revival of the "Loyal Officers" who overthrew Xenu. Its logo, a wreath with 26 leaves, represents the 26 stars of Xenu's Galactic Confederacy. According to an official Scientology dictionary, "the Sea Org symbol, adopted and used as the symbol of a Galactic Confederacy far back in the history of this sector, derives much of its power and authority from that association".
In the Advanced Orgs in Edinburgh and Los Angeles, Scientology staff were at one time ordered to wear all-white uniforms with silver boots, to mimic Xenu's Galactic Patrol as depicted on the cover of Dianetics: The Evolution of a Science. This was reportedly done on the basis of Hubbard's declaration in his Flag Order 652 that mankind would accept regulation from that group which had last betrayed it—hence the imitation of Xenu's henchmen. In Los Angeles, a nightwatch was ordered to watch for returning spaceships.
The Church of Scientology's own organizational structure is said to be based on that of the Galactic Confederacy. The Church's "org board" is "a refined board ... of an old galactic civilization. ... We applied Scientology to it and found out why the civilization eventually failed. They lacked a couple of departments and that was enough to mess it all up. And they only lasted 80 trillion [years]."
Name
The name has been spelled both as Xenu and Xemu. The Class VIII course material includes a three-page text, handwritten by Hubbard, headed "Data", in which the Xenu story is given in detail. Hubbard's indistinct handwriting makes either spelling possible, particularly as the use of the name on the first page of OT III is the only known example of the name in his handwriting. In the "Assists" lecture, Hubbard speaks of "Xenu, ahhh, could be spelled X-E-M-U" and clearly says "Xemu" several times on the recording. The treatment of Revolt in the Stars—which is typewritten—uses Xenu exclusively.
It has been speculated that the name derives from Xemnu, an extraterrestrial comic book villain who first appeared in the story "I Was a Slave of the Living Hulk!" in Journey into Mystery #62 (November 1960). He was created by Stan Lee and Jack Kirby. Xemnu is a giant, hairy intergalactic criminal who escaped a prison planet, traveled to Earth, and hypnotized the entire human population. Upon Xemnu's defeat by electrician Joe Harper, Xemnu is imprisoned in a state of continual electric shock in orbit around the Sun, and humanity is left with no memory of Xemnu's existence.
Church of Scientology's position
In its public statements, the Church of Scientology has been reluctant to allow any mention of Xenu. A passing mention by a trial judge in 1997 prompted the Church's lawyers to have the ruling sealed, although this was reversed. In the relatively few instances in which it has acknowledged Xenu, Scientology has stated the story's true meaning can only be understood after years of study. They complain of critics using it to paint the religion as a science-fiction fantasy.
Senior members of the Church of Scientology have several times publicly denied or minimized the importance of the Xenu story, but others have affirmed its existence. In 1995, Scientology lawyer Earl Cooley hinted at the importance of Xenu in Scientology doctrine by stating that "thousands of articles are written about Coca-Cola, and they don't print the formula for Coca-Cola". Scientology has many graduated levels through which one can progress. Many who remain at lower levels in the church are unaware of much of the Xenu story which is first revealed on Operating Thetan level three, or "OT III". Because the information imparted to members is to be kept secret from others who have not attained that level, the member must publicly deny its existence when asked. OT III recipients must sign an agreement promising never to reveal its contents before they are given the manila envelope containing the Xenu knowledge. Its knowledge is so dangerous, members are told, that anyone learning this material before they are ready could become afflicted with pneumonia.
Religious Technology Center director Warren McShane testified in a 1995 court case that the Church of Scientology receives a significant amount of its revenue from fixed donations paid by Scientologists to study the OT materials. McShane said that Hubbard's work "may seem weird" to those that have not yet completed the prior levels of coursework in Scientology. McShane said the story had never been secret, although maintaining there were nevertheless trade secrets contained in OT III. McShane discussed the details of the story at some length and specifically attributed the authorship of the story to Hubbard.
When John Carmichael, the president of the Church of Scientology of New York, was asked about the Xenu story, he said, as reported in the September 9, 2007, edition of The Daily Telegraph: "That's not what we believe". When asked directly about the Xenu story by Ted Koppel on ABC's Nightline, Scientology leader David Miscavige said that he was taking things Hubbard said out of context. However, in a 2006 interview with Rolling Stone, Mike Rinder, the then-director of the church's Office of Special Affairs, said that "It is not a story, it is an auditing level", when asked about the validity of the Xenu story.
In a BBC Panorama programme that aired on May 14, 2007, senior Scientologist Tommy Davis interrupted when celebrity members were asked about Xenu, saying: "None of us know what you're talking about. It's loony. It's weird." In March 2009, Davis was interviewed by investigative journalist Nathan Baca for KESQ-TV and was again asked about the OT III texts. Davis told Baca "I'm familiar with the material", and called it "the confidential scriptures of the Church". In an interview on ABC News Nightline, October 23, 2009, Davis walked off the set when Martin Bashir asked him about Xenu. He told Bashir, "Martin, I am not going to discuss the disgusting perversions of Scientology beliefs that can be found now commonly on the internet and be put in the position of talking about things, talking about things that are so fundamentally offensive to Scientologists to discuss. ... It is in violation of my religious beliefs to talk about them." When Bashir repeated a question about Xenu, Davis pulled off his microphone and left the set.
In November 2009 the Church of Scientology's representative in New Zealand, Mike Ferris, was asked in a radio interview about Xenu. The radio host asked, "So what you're saying is, Xenu is a part of the religion, but something that you don't want to talk about". Ferris responded, "Sure". Ferris acknowledged that Xenu "is part of the esoterica of Scientology".
Leaking of the story
Despite the Church of Scientology's efforts to keep the story secret, details have been leaked over the years. OT III was first revealed in Robert Kaufman's 1972 book Inside Scientology, in which Kaufman detailed his own experiences of OT III. It was later described in a 1981 Clearwater Sun article, and came to greater public fame in a 1985 court case brought against Scientology by Lawrence Wollersheim. The church failed to have the documents sealed and attempted to keep the case file checked out by a reader at all times, but the story was summarized in the Los Angeles Times and detailed in William Poundstone's Bigger Secrets (1986) from information presented in the Wollersheim case. In 1987, a book by L. Ron Hubbard Jr., L. Ron Hubbard, Messiah or Madman? quoted the first page of OT III and summarized the rest of its content.
Since then, news media have mentioned Xenu in coverage of Scientology or its celebrity proponents such as Tom Cruise. In 1987, the BBC's investigative news series Panorama aired a report titled "The Road to Total Freedom?" which featured an outline of the OT III story in cartoon form.
On December 24, 1994, the Xenu story was published on the Internet for the first time in a posting to the Usenet newsgroup alt.religion.scientology, through an anonymous remailer. This led to an online battle between Church of Scientology lawyers and detractors. Older versions of OT levels I to VII were brought as exhibits attached to a declaration by Steven Fishman on April 9, 1993, as part of Church of Scientology International v. Fishman and Geertz. The text of this declaration and its exhibits, collectively known as the Fishman Affidavit, were posted to the Internet newsgroup alt.religion.scientology in August 1995 by Arnie Lerma and on the World Wide Web by David S. Touretzky. This was a subject of great controversy and legal battles for several years. There was a copyright raid on Lerma's house (leading to massive mirroring of the documents) and a suit against Dutch writer Karin Spaink—the Church bringing suit on copyright violation grounds for reproducing the source material, and also claiming rewordings would reveal a trade secret.
The Church of Scientology's attempts to keep Xenu secret have been cited in court findings against it. In September 2003, a Dutch court, in a ruling in the case against Karin Spaink, stated that one objective in keeping OT II and OT III secret was to wield power over members of the Church of Scientology and prevent discussion about its teachings and practices:
Despite his claims that premature revelation of the OT III story was lethal, L. Ron Hubbard wrote a screenplay version under the title Revolt in the Stars in the 1970s. This revealed that Xenu had been assisted by beings named Chi ("the Galactic Minister of Police") and Chu ("the Executive President of the Galactic Interplanetary Bank"). It has not been officially published, although the treatment was circulated around Hollywood in the early 1980s. Unofficial copies of the screenplay circulate on the Internet.
On March 10, 2001, a user posted the text of OT3 to the online community Slashdot. The site owners took down the comment after the Church of Scientology issued a legal notice under the Digital Millennium Copyright Act. Critics of the Church of Scientology have used public protests to spread the Xenu secret. This has included creating web sites with "xenu" in the domain name, and displaying the name Xenu on banners and protest signs.
In popular culture
Versions of the Xenu story have appeared in both television shows and stage productions. The Off-Broadway satirical musical A Very Merry Unauthorized Children's Scientology Pageant, first staged in 2003 and winner of an Obie Award in 2004, featured children in alien costumes telling the story of Xenu.
The Xenu story was also satirized in a November 2005 episode of the animated television series South Park titled "Trapped in the Closet". The Emmy-nominated episode, which also lampooned Scientologists Tom Cruise and John Travolta as closeted homosexuals, depicted Xenu as a vaguely humanoid alien with tentacles for arms, in a sequence that had the words "This Is What Scientologists Actually Believe" superimposed on screen. The episode became the subject of controversy when the musician Isaac Hayes, the voice of the character "Chef" and a Scientologist, quit the show in March 2006, just prior to the episode's first scheduled re-screening, citing South Parks "inappropriate ridicule" of his religion. Hayes' statement did not mention the episode in particular, but expressed his view that the show's habit of parodying religion was part of a "growing insensitivity toward personal spiritual beliefs" in the media that was also reflected in the Muhammad cartoons controversy: "There is a place in this world for satire, but there is a time when satire ends and intolerance and bigotry towards religious beliefs of others begins." Responding to Hayes' statement, South Park co-creator Matt Stone said his resignation had "nothing to do with intolerance and bigotry and everything to do with the fact that Isaac Hayes is a Scientologist and that we recently featured Scientology in an episode of South Park ... In 10 years and over 150 episodes of South Park, Isaac never had a problem with the show making fun of Christians, Muslims, Mormons and Jews. He got a sudden case of religious sensitivity when it was his religion featured on the show. Of course we will release Isaac from his contract and we wish him well." Comedy Central cancelled the repeat at short notice, choosing instead to screen two episodes featuring Hayes. A spokesman said that "in light of the events of earlier this week, we wanted to give Chef an appropriate tribute by airing two episodes he is most known for." It did eventually rebroadcast the episode on July 19, 2006. Stone and South Park co-creator Trey Parker felt that Comedy Central's owners Viacom had cancelled the repeat because of the upcoming release of the Tom Cruise film Mission: Impossible III by Paramount, another Viacom company: "I only know what we were told, that people involved with MI3 wanted the episode off the air and that is why Comedy Central had to do it. I don't know why else it would have been pulled."
Commentary
Writing in the book Scientology published by Oxford University Press, contributor Mikael Rothstein observes that, "To my knowledge no real analysis of Scientology's Xenu myth has appeared in scholarly publications. The most sober and enlightening text about the Xenu myth is probably the article on Wikipedia (English version) and, even if brief, Andreas Grünschloss's piece on Scientology in Lewis (2000: 266–268)." Rothstein places the Xenu text by L. Ron Hubbard within the context of a creation myth within the Scientology methodology, and characterizes it as "one of Scientology's more important religious narratives, the text that apparently constitutes the basic (sometimes implicit) mythology of the movement, the Xenu myth, which is basically a story of the origin of man on Earth and the human condition." Rothstein describes the phenomenon within a belief system inspired by science fiction, and notes that the "myth about Xenu, ... in the shape of a science fiction-inspired anthropogony, explains the basic Scientological claims about the human condition."
Andreas Grünschloß analyzes the Xenu text in The Oxford Handbook of New Religious Movements, within the context of a discussion on UFO religions. He characterizes the text as "Scientology's secret mythology (contained especially in the OT III teachings)". Grünschloß points out that L. Ron Hubbard, "also wrote a science fiction story called Revolt in the Stars, where he displays this otherwise arcane story about the ancient ruler Xenu in the form of an ordinary science fiction novel". Grünschloß posits, "because of the connections between several motifs in Hubbard's novels and specific Scientology teachings, one might perceive Scientology as one of the rare instances where science fiction (or fantasy literature generally) is related to the successful formation of a new spiritual movement." Comparing the fusion between the two genres of Hubbard's science fiction writing and Scientology creation myth, Grünschloß writes, "Although the science fiction novels are of a different genre than other 'techno-logical' disclosures of Hubbard, they are highly appreciated by participants, and Hubbard's literary output in this realm (including the latest movie, Battlefield Earth) is also well promoted by the organization." Writing in the book UFO Religions edited by Christopher Partridge, Grünschloß observes, "the enthusiasm for ufology and science fiction was cultivated in the formative phase of Scientology. Indeed, even the highly arcane story of the intergalactic ruler Xenu ... is related by Hubbard in the style of a simple science fiction novel".
Several authors have pointed out structural similarities between the Xenu story and the mythology of gnosticism. James A. Herrick, writing about the Xenu text in The Making of the New Spirituality: The Eclipse of the Western Religious Tradition, notes that "Hubbard's gnostic leanings are evident in his account of human origins ... In Hubbard, ideas first expressed in science fiction are seamlessly transformed into a worldwide religion with affinities to gnosticism." Mary Farrell Bednarowski, writing in America's Alternative Religions, similarly states that the outline of the Xenu mythology is "not totally unfamiliar to the historian acquainted with ancient gnosticism", noting that many other religious traditions have the practice of reserving certain texts to high-level initiates. Nevertheless, she writes, the Xenu story arouses suspicion in the public about Scientology and adds fuel to "the claims that Hubbard's system is the product of his creativity as a science fiction writer rather than a theologian."
Authors Michael McDowell and Nathan Robert Brown discuss misconceptions about the Xenu text in their book World Religions at Your Fingertips, and observe, "Probably the most controversial, misunderstood, and frequently misrepresented part of the Scientology religion has to do with a Scientology myth commonly referred to as the Legend of Xenu. While this story has now been undoubtedly proven a part of the religion (despite the fact that church representatives often deny its existence), the story's true role in Scientology is often misrepresented by its critics as proof that they 'believe in alien parasites.' While the story may indeed seem odd, this is simply not the case." The authors write that "The story is actually meant to be a working myth, illustrating the Scientology belief that humans were at one time spiritual beings, existing on infinite levels of intergalactic and interdimensional realities. At some point, the beings that we once were became trapped in physical reality (where we remain to this day). This is supposed to be the underlying message of the Xenu story, not that humans are "possessed by aliens". McDowell and Brown conclude that these inappropriate misconceptions about the Xenu text have had a negative impact, "Such harsh statements are the reason many Scientologists now become passionately offended at even the mention of Xenu by nonmembers."
The free speech lawyer Mike Godwin analyzes actions by the Scientology organization to protect and keep secret the Xenu text, within a discussion in his book Cyber Rights about the application of trade secret law on the Internet. Godwin explains, "trade secret law protects the information itself, not merely its particular expression. Trade secret law, unlike copyright, can protect ideas and facts directly." He puts forth the question, "But did the material really qualify as 'trade secrets'? Among the material the church has been trying to suppress is what might be called a 'genesis myth of Scientology': a story about a galactic despot named Xenu who decided 75 million years ago to kill a bunch of people by chaining them to volcanoes and dropping nuclear bombs on them." Godwin asks, "Does a 'church' normally have 'competitors' in the trade secret sense? If the Catholics got hold of the full facts about Xenu, does this mean they'll get more market share?" He comments on the ability of the Scientology organization to utilize such laws in order to contain its secret texts, "It seems likely, given what we know about the case now, that even a combination of copyright and trade secret law wouldn't accomplish what the church would like to accomplish: the total suppression of any dissemination of church documents or doctrines." The author concludes, "But the fact that the church was unlikely to gain any complete legal victories in its cases didn't mean that they wouldn't litigate. It's indisputable that the mere threat of litigation, or the costs of actual litigation, may accomplish what the legal theories alone do not: the effective silencing of many critics of the church."
See also
Incident (Scientology)
Science fiction
Sinister Barrier, a 1939 novel with similar themes
Notes
References
External links
"OT III Released" in online edition of What Is Scientology
OT III Scholarship Page (David S. Touretzky; includes page scans, commentary, audio files)
Revolt in the Stars summary (Grady Ward)
Xenu Leaflet (Roland Rashleigh-Berry)
The Fishman Affidavit: OT III (extracts and synopsis by Karin Spaink)
A Scientific scrutiny of OT III (Peter Forde, June 1996) Claims about Xenu evaluated against scientific geology
"The History Of Xenu, As Explained By L. Ron Hubbard In 8 Minutes" (Gawker.com) Extract from the "Assists" lecture of October 3, 1968
Scientology and Christianity Examined
Testimony under oath (pp274–275) from Robert Vaughn Young in RTC v. FactNet, Civil Action No. 95B2143, United States Courthouse, Denver, Colorado, September 11, 1995
Alleged extraterrestrial beings
Creation myths
Extraterrestrial life in popular culture
Mythological peoples
Scientology beliefs and practices
Scientology-related controversies
Trade secrets | Xenu | [
"Astronomy"
] | 6,909 | [
"Cosmogony",
"Creation myths"
] |
46,642 | https://en.wikipedia.org/wiki/Grep | grep is a command-line utility for searching plaintext datasets for lines that match a regular expression. Its name comes from the ed command g/re/p (global regular expression search and print), which has the same effect. grep was originally developed for the Unix operating system, but later became available for all Unix-like systems and some others such as OS-9.
History
Before it was named, grep was a private utility written by Ken Thompson to search files for certain patterns. Doug McIlroy, unaware of its existence, asked Thompson to write such a program. Responding that he would think about such a utility overnight, Thompson actually corrected bugs and made improvements for about an hour on his own program called "s" (short for "search"). The next day he presented the program to McIlroy, who said it was exactly what he wanted. Thompson's account may explain the belief that grep was written overnight.
Thompson wrote the first version in PDP-11 assembly language to help Lee E. McMahon analyze the text of The Federalist Papers to determine authorship of the individual papers. The ed text editor (also authored by Thompson) had regular expression support but could not be used to search through such a large amount of text, as it loaded the entire file into memory to enable random access editing, so Thompson excerpted that regexp code into a standalone tool which would instead process arbitrarily long files sequentially without buffering too much into memory. He chose the name because in ed, the command g/re/p would print all lines featuring a specified pattern match. grep was first included in Version 4 Unix. Stating that it is "generally cited as the prototypical software tool", McIlroy credited grep with "irrevocably ingraining" Thompson's tools philosophy in Unix.
Implementations
A variety of grep implementations are available in many operating systems and software development environments. Early variants included egrep and fgrep, introduced in Version 7 Unix. The egrep variant supports an extended regular expression syntax added by Alfred Aho after Ken Thompson's original regular expression implementation. The "fgrep" variant searches for any of a list of fixed strings using the Aho–Corasick string matching algorithm. Binaries of these variants exist in modern systems, usually linking to grep or calling grep as a shell script with the appropriate flag added, e.g. exec grep -E "$@". egrep and fgrep, while commonly deployed on POSIX systems, to the point the POSIX specification mentions their widespread existence, are actually not part of POSIX.
Other commands contain the word "grep" to indicate they are search tools, typically ones that rely on regular expression matches. The pgrep utility, for instance, displays the processes whose names match a given regular expression.
In the Perl programming language, grep is a built-in function that finds elements in a list that satisfy a certain property. This higher-order function is typically named filter or where in other languages.
The pcregrep command is an implementation of grep that uses Perl regular expression syntax. Similar functionality can be invoked in the GNU version of grep with the -P flag.
Ports of grep (within Cygwin and GnuWin32, for example) also run under Microsoft Windows. Some versions of Windows feature the similar qgrep or findstr command.
A grep command is also part of ASCII's MSX-DOS2 Tools for MSX-DOS version 2.
The grep, egrep, and fgrep commands have also been ported to the IBM i operating system.
The software Adobe InDesign has GREP functions: a "GREP" tab in the find/change dialog box (since the CS3 version, 2007) and "GREP styles" in paragraph styles (introduced with InDesign CS4).
agrep
agrep (approximate grep) is an open-source approximate string matching program, developed by Udi Manber and Sun Wu between 1988 and 1991, for use with the Unix operating system. It was later ported to OS/2, DOS, and Windows.
agrep matches even when the text only approximately fits the search pattern.
The following invocation finds netmasks in file myfile, but also any other word that can be derived from it, given no more than two substitutions.
agrep -2 netmasks myfile
This example generates a list of matches ordered by closeness, that is, those with the fewest substitutions are listed first. The command flag -B means "best":
agrep -B netmasks myfile
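The effect of such approximate matching can be illustrated with the Levenshtein edit distance: a word matches the pattern within k errors if the distance between them is at most k. The C sketch below computes that distance with the textbook dynamic-programming table; agrep itself uses Wu and Manber's much faster bit-parallel algorithm rather than this table, and the 64-character limit here is just a simplifying assumption.

#include <string.h>

static int min3(int a, int b, int c) {
    int m = a < b ? a : b;
    return m < c ? m : c;
}

/* Levenshtein distance; assumes both strings are shorter than 64 characters. */
int edit_distance(const char *a, const char *b) {
    size_t la = strlen(a), lb = strlen(b);
    int d[64][64];
    for (size_t i = 0; i <= la; i++) d[i][0] = (int)i;
    for (size_t j = 0; j <= lb; j++) d[0][j] = (int)j;
    for (size_t i = 1; i <= la; i++)
        for (size_t j = 1; j <= lb; j++) {
            int cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
            d[i][j] = min3(d[i - 1][j] + 1,         /* deletion     */
                           d[i][j - 1] + 1,         /* insertion    */
                           d[i - 1][j - 1] + cost); /* substitution */
        }
    return d[la][lb];
}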
Usage as a verb
In December 2003, the Oxford English Dictionary Online added "grep" as both a noun and a verb.
A common verb usage is the phrase "You can't grep dead trees"—meaning one can more easily search through digital media, using tools such as grep, than one could with a hard copy (i.e. one made from "dead trees", which in this context is a dysphemism for paper).
See also
Boyer–Moore string-search algorithm
agrep, an approximate string-matching command
find (Windows) or Findstr, a DOS and Windows command that performs text searches, similar to a simple grep
find (Unix), a Unix command that finds files by attribute, very different from grep
List of Unix commands
vgrep, or "visual grep"
ngrep, the network grep
References
Notes
Hume, Andrew. "Grep wars: The strategic search initiative". In Peter Collinson, editor, Proceedings of the EUUG Spring 88 Conference, pages 237–245, Buntingford, UK, 1988. European UNIX User Group.
External links
GNU Grep official website
GNU Grep manual
"why GNU grep is fast" - implementation details from GNU grep's author.
Command Grep – 25 practical examples
Unix text processing utilities
Unix SUS2008 utilities
Standard Unix programs
Plan 9 commands
Inferno (operating system) commands
IBM i Qshell commands | Grep | [
"Technology"
] | 1,274 | [
"IBM i Qshell commands",
"Standard Unix programs",
"Computing commands",
"Plan 9 commands",
"Inferno (operating system) commands"
] |
46,656 | https://en.wikipedia.org/wiki/Radio%20telescope | A radio telescope is a specialized antenna and radio receiver used to detect radio waves from astronomical radio sources in the sky. Radio telescopes are the main observing instrument used in radio astronomy, which studies the radio frequency portion of the electromagnetic spectrum, just as optical telescopes are used to make observations in the visible portion of the spectrum in traditional optical astronomy. Unlike optical telescopes, radio telescopes can be used in the daytime as well as at night.
Since astronomical radio sources such as planets, stars, nebulas and galaxies are very far away, the radio waves coming from them are extremely weak, so radio telescopes require very large antennas to collect enough radio energy to study them, and extremely sensitive receiving equipment. Radio telescopes are typically large parabolic ("dish") antennas similar to those employed in tracking and communicating with satellites and space probes. They may be used individually or linked together electronically in an array. Radio observatories are preferentially located far from major centers of population to avoid electromagnetic interference (EMI) from radio, television, radar, motor vehicles, and other man-made electronic devices.
Radio waves from space were first detected by engineer Karl Guthe Jansky in 1932 at Bell Telephone Laboratories in Holmdel, New Jersey using an antenna built to study radio receiver noise. The first purpose-built radio telescope was a 9-meter parabolic dish constructed by radio amateur Grote Reber in his back yard in Wheaton, Illinois in 1937. The sky survey he performed is often considered the beginning of the field of radio astronomy.
Early radio telescopes
The first radio antenna used to identify an astronomical radio source was built by Karl Guthe Jansky, an engineer with Bell Telephone Laboratories, in 1932. Jansky was assigned the task of identifying sources of static that might interfere with radiotelephone service. Jansky's antenna was an array of dipoles and reflectors designed to receive short wave radio signals at a frequency of 20.5 MHz (wavelength about 14.6 meters). It was mounted on a turntable that allowed it to rotate in any direction, earning it the name "Jansky's merry-go-round." It had a diameter of approximately 30 meters (100 ft) and stood about 6 meters (20 ft) tall. By rotating the antenna, the direction of the received interfering radio source (static) could be pinpointed. A small shed to the side of the antenna housed an analog pen-and-paper recording system. After recording signals from all directions for several months, Jansky eventually categorized them into three types of static: nearby thunderstorms, distant thunderstorms, and a faint steady hiss above shot noise, of unknown origin. Jansky finally determined that the "faint hiss" repeated on a cycle of 23 hours and 56 minutes. This period is the length of an astronomical sidereal day, the time it takes any "fixed" object located on the celestial sphere to come back to the same location in the sky. Thus Jansky suspected that the hiss originated outside of the Solar System, and by comparing his observations with optical astronomical maps, Jansky concluded that the radiation was coming from the Milky Way Galaxy and was strongest in the direction of the center of the galaxy, in the constellation of Sagittarius.
An amateur radio operator, Grote Reber, was one of the pioneers of what became known as radio astronomy. He built the first parabolic "dish" radio telescope, 9 meters in diameter, in his back yard in Wheaton, Illinois in 1937. He repeated Jansky's pioneering work, identifying the Milky Way as the first off-world radio source, and he went on to conduct the first sky survey at very high radio frequencies, discovering other radio sources. The rapid development of radar during World War II created technology which was applied to radio astronomy after the war, and radio astronomy became a branch of astronomy, with universities and research institutes constructing large radio telescopes.
Types
The range of frequencies in the electromagnetic spectrum that makes up the radio spectrum is very large. As a consequence, the types of antennas that are used as radio telescopes vary widely in design, size, and configuration. At wavelengths of 30 meters to 3 meters (10–100 MHz), they are generally either directional antenna arrays similar to "TV antennas" or large stationary reflectors with movable focal points. Since the wavelengths being observed with these types of antennas are so long, the "reflector" surfaces can be constructed from coarse wire mesh such as chicken wire.
At shorter wavelengths parabolic "dish" antennas predominate. The angular resolution of a dish antenna is determined by the ratio of the diameter of the dish to the wavelength of the radio waves being observed. This dictates the dish size a radio telescope needs for a useful resolution. Radio telescopes that operate at wavelengths of 3 meters to 30 cm (100 MHz to 1 GHz) are usually well over 100 meters in diameter. Telescopes working at wavelengths shorter than 30 cm (above 1 GHz) range in size from 3 to 90 meters in diameter.
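As a rough illustration of the resolution scaling described above, the Python sketch below applies the Rayleigh criterion (angular resolution of roughly 1.22 times wavelength divided by dish diameter, an assumed rule of thumb); the dish sizes and wavelengths are illustrative values only.

import math

def resolution_arcsec(wavelength_m, dish_diameter_m):
    # Rayleigh criterion: diffraction-limited beam width in radians, converted to arcseconds
    theta_rad = 1.22 * wavelength_m / dish_diameter_m
    return math.degrees(theta_rad) * 3600

# A 100 m dish observing the 21 cm hydrogen line, and a 25 m dish at 3 cm
print(round(resolution_arcsec(0.21, 100.0)))  # about 528 arcseconds
print(round(resolution_arcsec(0.03, 25.0)))   # about 302 arcseconds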
Frequencies
The increasing use of radio frequencies for communication makes astronomical observations more and more difficult (see Open spectrum).
Negotiations to defend the frequency allocation for parts of the spectrum most useful for observing the universe are coordinated in the Scientific Committee on Frequency Allocations for Radio Astronomy and Space Science.
Some of the more notable frequency bands used by radio telescopes include:
Every frequency in the United States National Radio Quiet Zone
Channel 37: 608 to 614 MHz
The "Hydrogen line", also known as the "21 centimeter line": 1,420.40575177 MHz, used by many radio telescopes including The Big Ear in its discovery of the Wow! signal
1,406 MHz and 430 MHz
The Waterhole: 1,420 to 1,666 MHz
The Arecibo Observatory had several receivers that together covered the whole 1–10 GHz range.
The Wilkinson Microwave Anisotropy Probe mapped the cosmic microwave background radiation in 5 different frequency bands, centered on 23 GHz, 33 GHz, 41 GHz, 61 GHz, and 94 GHz.
Big dishes
The world's largest filled-aperture (i.e. full dish) radio telescope is the Five-hundred-meter Aperture Spherical Telescope (FAST) completed in 2016 by China. The dish with an area as large as 30 football fields is built into a natural karst depression in the landscape in Guizhou province and cannot move; the feed antenna is in a cabin suspended above the dish on cables. The active dish is composed of 4,450 moveable panels controlled by a computer. By changing the shape of the dish and moving the feed cabin on its cables, the telescope can be steered to point to any region of the sky up to 40° from the zenith. Although the dish is 500 meters in diameter, only a 300-meter circular area on the dish is illuminated by the feed antenna at any given time, so the actual effective aperture is 300 meters. Construction began in 2007 and was completed July 2016 and the telescope became operational September 25, 2016.
The world's second largest filled-aperture telescope was the Arecibo radio telescope located in Arecibo, Puerto Rico, though it suffered catastrophic collapse on 1 December 2020. Arecibo was one of the world's few radio telescopes also capable of active (i.e., transmitting) radar imaging of near-Earth objects (see: radar astronomy); most other telescopes employ passive detection, i.e., receiving only. Arecibo was another stationary dish telescope like FAST. Arecibo's dish was built into a natural depression in the landscape; the antenna was steerable within an angle of about 20° of the zenith by moving the suspended feed antenna, giving use of a 270-meter diameter portion of the dish for any individual observation.
The largest individual radio telescope of any kind is the RATAN-600 located near Nizhny Arkhyz, Russia, which consists of a 576-meter circle of rectangular radio reflectors, each of which can be pointed towards a central conical receiver.
The above stationary dishes are not fully "steerable"; they can only be aimed at points in an area of the sky near the zenith, and cannot receive from sources near the horizon. The largest fully steerable dish radio telescope is the 100 meter Green Bank Telescope in West Virginia, United States, constructed in 2000. The largest fully steerable radio telescope in Europe is the Effelsberg 100-m Radio Telescope near Bonn, Germany, operated by the Max Planck Institute for Radio Astronomy, which also was the world's largest fully steerable telescope for 30 years until the Green Bank antenna was constructed. The third-largest fully steerable radio telescope is the 76-meter Lovell Telescope at Jodrell Bank Observatory in Cheshire, England, completed in 1957. The fourth-largest fully steerable radio telescopes are six 70-meter dishes: three Russian RT-70, and three in the NASA Deep Space Network. The planned Qitai Radio Telescope, at a diameter of 110 meters, is expected to become the world's largest fully steerable single-dish radio telescope when completed in 2028.
A more typical radio telescope has a single antenna of about 25 meters diameter. Dozens of radio telescopes of about this size are operated in radio observatories all over the world.
Gallery of big dishes
Radio Telescopes in space
Since 1965, humans have launched three space-based radio telescopes. The first one, KRT-10, was attached to the Salyut 6 orbital space station in 1979. In 1997, Japan sent the second, HALCA. The last one, Spektr-R, was sent by Russia in 2011.
Radio interferometry
One of the most notable developments came in 1946 with the introduction of the technique called astronomical interferometry, which means combining the signals from multiple antennas so that they simulate a larger antenna, in order to achieve greater resolution. Astronomical radio interferometers usually consist either of arrays of parabolic dishes (e.g., the One-Mile Telescope), arrays of one-dimensional antennas (e.g., the Molonglo Observatory Synthesis Telescope) or two-dimensional arrays of omnidirectional dipoles (e.g., Tony Hewish's Pulsar Array). All of the telescopes in the array are widely separated and are usually connected using coaxial cable, waveguide, optical fiber, or other type of transmission line. Recent advances in the stability of electronic oscillators also now permit interferometry to be carried out by independent recording of the signals at the various antennas, and then later correlating the recordings at some central processing facility. This process is known as Very Long Baseline Interferometry (VLBI). Interferometry does increase the total signal collected, but its primary purpose is to vastly increase the resolution through a process called aperture synthesis. This technique works by superposing (interfering) the signal waves from the different telescopes on the principle that waves that coincide with the same phase will add to each other while two waves that have opposite phases will cancel each other out. This creates a combined telescope that is equivalent in resolution (though not in sensitivity) to a single antenna whose diameter is equal to the spacing of the antennas furthest apart in the array.
A high-quality image requires a large number of different separations between telescopes. Projected separation between any two telescopes, as seen from the radio source, is called a baseline. For example, the Very Large Array (VLA) near Socorro, New Mexico has 27 telescopes with 351 independent baselines at once, which achieves a resolution of 0.2 arc seconds at 3 cm wavelengths. Martin Ryle's group in Cambridge obtained a Nobel Prize for interferometry and aperture synthesis. The Lloyd's mirror interferometer was also developed independently in 1946 by Joseph Pawsey's group at the University of Sydney. In the early 1950s, the Cambridge Interferometer mapped the radio sky to produce the famous 2C and 3C surveys of radio sources. An example of a large physically connected radio telescope array is the Giant Metrewave Radio Telescope, located in Pune, India. The largest array, the Low-Frequency Array (LOFAR), finished in 2012, is located in western Europe and consists of about 81,000 small antennas in 48 stations distributed over an area several hundreds of kilometers in diameter and operates between 1.25 and 30 m wavelengths. VLBI systems using post-observation processing have been constructed with antennas thousands of miles apart. Radio interferometers have also been used to obtain detailed images of the anisotropies and the polarization of the Cosmic Microwave Background, like the CBI interferometer in 2004.
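The baseline count quoted above for the VLA follows from simple pair counting, and the synthesized resolution is set by the longest baseline rather than by any single dish. The Python sketch below illustrates both points; the roughly 36 km maximum baseline is an assumed approximate figure used only for this calculation.

import math

n_antennas = 27
baselines = n_antennas * (n_antennas - 1) // 2
print(baselines)  # 351 independent baselines, as quoted above

# Synthesized angular resolution ~ wavelength / longest baseline
wavelength_m = 0.03         # 3 cm observing wavelength
max_baseline_m = 36_000.0   # assumed ~36 km maximum baseline
theta_arcsec = math.degrees(wavelength_m / max_baseline_m) * 3600
print(round(theta_arcsec, 2))  # about 0.17 arcseconds, close to the 0.2 arcsecond figure above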
The world's largest physically connected telescope, the Square Kilometre Array (SKA), is planned to start operations in 2025.
Astronomical observations
Many astronomical objects are not only observable in visible light but also emit radiation at radio wavelengths. Besides observing energetic objects such as pulsars and quasars, radio telescopes are able to "image" most astronomical objects such as galaxies, nebulae, and even radio emissions from planets.
See also
Aperture synthesis
Astropulse – distributed computing to search data tapes for primordial black holes, pulsars, and ETI
List of astronomical observatories
List of radio telescopes
List of telescope types
Search for extraterrestrial intelligence
Telescope
Radar telescope
References
Further reading
Rohlfs, K., & Wilson, T. L. (2004). Tools of radio astronomy. Astronomy and astrophysics library. Berlin, Germany: Springer.
Asimov, I. (1979). Isaac Asimov's Book of facts; Sky Watchers. New York: Grosset & Dunlap. pp. 390–399. .
External links
PICTOR: A free-to-use radio telescope
American inventions
Astronomical imaging
Astronomical instruments | Radio telescope | [
"Astronomy"
] | 2,811 | [
"Astronomical instruments"
] |
46,675 | https://en.wikipedia.org/wiki/Xylem | Xylem is one of the two types of transport tissue in vascular plants, the other being phloem; both of these are part of the vascular bundle. The basic function of the xylem is to transport water upward from the roots to parts of the plants such as stems and leaves, but it also transports nutrients. The word xylem is derived from the Ancient Greek word, (xylon), meaning "wood"; the best-known xylem tissue is wood, though it is found throughout a plant. The term was introduced by Carl Nägeli in 1858.
Structure
The most distinctive xylem cells are the long tracheary elements that transport water. Tracheids and vessel elements are distinguished by their shape; vessel elements are shorter, and are connected together into long tubes that are called vessels.
Xylem also contains two other types of cells: parenchyma and fibers.
Xylem can be found:
in vascular bundles, present in non-woody plants and non-woody parts of woody plants
in secondary xylem, laid down by a meristem called the vascular cambium in woody plants
as part of a stelar arrangement not divided into bundles, as in many ferns.
In transitional stages of plants with secondary growth, the first two categories are not mutually exclusive, although usually a vascular bundle will contain primary xylem only.
The branching pattern exhibited by xylem follows Murray's law.
Primary and secondary xylem
Primary xylem is formed during primary growth from procambium. It includes protoxylem and metaxylem. Metaxylem develops after the protoxylem but before secondary xylem. Metaxylem has wider vessels and tracheids than protoxylem.
Secondary xylem is formed during secondary growth from vascular cambium. Although secondary xylem is also found in members of the gymnosperm groups Gnetophyta and Ginkgophyta and to a lesser extent in members of the Cycadophyta, the two main groups in which secondary xylem can be found are:
conifers (Coniferae): there are approximately 600 known species of conifers. All species have secondary xylem, which is relatively uniform in structure throughout this group. Many conifers become tall trees: the secondary xylem of such trees is used and marketed as softwood.
angiosperms (Angiospermae): there are approximately 250,000 known species of angiosperms. Within this group secondary xylem is rare in the monocots. Many non-monocot angiosperms become trees, and the secondary xylem of these is used and marketed as hardwood.
Main function – upwards water transport
The xylem, vessels and tracheids of the roots, stems and leaves are interconnected to form a continuous system of water-conducting channels reaching all parts of the plants. The system transports water and soluble mineral nutrients from the roots throughout the plant. It is also used to replace water lost during transpiration and photosynthesis. Xylem sap consists mainly of water and inorganic ions, although it can also contain a number of organic chemicals as well. The transport is passive, not powered by energy spent by the tracheary elements themselves, which are dead by maturity and no longer have living contents. Transporting sap upwards becomes more difficult as the height of a plant increases and upwards transport of water by xylem is considered to limit the maximum height of trees. Three phenomena cause xylem sap to flow:
Pressure flow hypothesis: Sugars produced in the leaves and other green tissues are kept in the phloem system, creating a solute pressure differential versus the xylem system carrying a far lower load of solutes—water and minerals. The phloem pressure can rise to several MPa, far higher than atmospheric pressure. Selective inter-connection between these systems allows this high solute concentration in the phloem to draw xylem fluid upwards by negative pressure.
Transpirational pull: Similarly, the evaporation of water from the surfaces of mesophyll cells to the atmosphere also creates a negative pressure at the top of a plant. This causes millions of minute menisci to form in the mesophyll cell wall. The resulting surface tension causes a negative pressure or tension in the xylem that pulls the water from the roots and soil.
Root pressure: If the water potential of the root cells is more negative than that of the soil, usually due to high concentrations of solute, water can move by osmosis into the root from the soil. This causes a positive pressure that forces sap up the xylem towards the leaves. In some circumstances, the sap will be forced from the leaf through a hydathode in a phenomenon known as guttation. Root pressure is highest in the morning before the opening of stomata and allow transpiration to begin. Different plant species can have different root pressures even in a similar environment; examples include up to 145 kPa in Vitis riparia but around zero in Celastrus orbiculatus.
The primary force that creates the capillary action movement of water upwards in plants is the adhesion between the water and the surface of the xylem conduits. Capillary action provides the force that establishes an equilibrium configuration, balancing gravity. When transpiration removes water at the top, the flow is needed to return to the equilibrium.
Transpirational pull results from the evaporation of water from the surfaces of cells in the leaves. This evaporation causes the surface of the water to recess into the pores of the cell wall. By capillary action, the water forms concave menisci inside the pores. The high surface tension of water pulls the concavity outwards, generating enough force to lift water as high as a hundred meters from ground level to a tree's highest branches.
Transpirational pull requires that the vessels transporting the water be very small in diameter; otherwise, cavitation would break the water column. And as water evaporates from leaves, more is drawn up through the plant to replace it. When the water pressure within the xylem reaches extreme levels due to low water input from the roots (if, for example, the soil is dry), then the gases come out of solution and form a bubble – an embolism forms, which will spread quickly to other adjacent cells, unless bordered pits are present (these have a plug-like structure called a torus, that seals off the opening between adjacent cells and stops the embolism from spreading). Even after an embolism has occurred, plants are able to refill the xylem and restore the functionality.
Cohesion-tension theory
The cohesion-tension theory is a theory of intermolecular attraction that explains the process of water flow upwards (against the force of gravity) through the xylem of plants. It was proposed in 1894 by John Joly and Henry Horatio Dixon. Despite numerous objections, this is the most widely accepted theory for the transport of water through a plant's vascular system based on the classical research of Dixon-Joly (1894), Eugen Askenasy (1845–1903) (1895), and Dixon (1914, 1924).
Water is a polar molecule. When two water molecules approach one another, the slightly negatively charged oxygen atom of one forms a hydrogen bond with a slightly positively charged hydrogen atom in the other. This attractive force, along with other intermolecular forces, is one of the principal factors responsible for the occurrence of surface tension in liquid water. It also allows plants to draw water from the root through the xylem to the leaf.
Water is constantly lost through transpiration from the leaf. When one water molecule is lost another is pulled along by the processes of cohesion and tension. Transpiration pull, utilizing capillary action and the inherent surface tension of water, is the primary mechanism of water movement in plants. However, it is not the only mechanism involved. Any use of water in leaves forces water to move into them.
Transpiration in leaves creates tension (differential pressure) in the cell walls of mesophyll cells. Because of this tension, water is being pulled up from the roots into the leaves, helped by cohesion (the pull between individual water molecules, due to hydrogen bonds) and adhesion (the stickiness between water molecules and the hydrophilic cell walls of plants). This mechanism of water flow works because of water potential (water flows from high to low potential), and the rules of simple diffusion.
Over the past century, there has been a great deal of research regarding the mechanism of xylem sap transport; today, most plant scientists continue to agree that the cohesion-tension theory best explains this process, but multiforce theories that hypothesize several alternative mechanisms have been suggested, including longitudinal cellular and xylem osmotic pressure gradients, axial potential gradients in the vessels, and gel- and gas-bubble-supported interfacial gradients.
Measurement of pressure
Until recently, the differential pressure (suction) of transpirational pull could only be measured indirectly, by applying external pressure with a pressure bomb to counteract it. When the technology to perform direct measurements with a pressure probe was developed, there was initially some doubt about whether the classic theory was correct, because some workers were unable to demonstrate negative pressures. More recent measurements do tend to validate the classic theory, for the most part. Xylem transport is driven by a combination of transpirational pull from above and root pressure from below, which makes the interpretation of measurements more complicated.
Evolution
Xylem appeared early in the history of terrestrial plant life. Fossil plants with anatomically preserved xylem are known from the Silurian (more than 400 million years ago), and trace fossils resembling individual xylem cells may be found in earlier Ordovician rocks. The earliest true and recognizable xylem consists of tracheids with a helical-annular reinforcing layer added to the cell wall. This is the only type of xylem found in the earliest vascular plants, and this type of cell continues to be found in the protoxylem (first-formed xylem) of all living groups of vascular plants. Several groups of plants later developed pitted tracheid cells independently through convergent evolution. In living plants, pitted tracheids do not appear in development until the maturation of the metaxylem (following the protoxylem).
In most plants, pitted tracheids function as the primary transport cells. The other type of vascular element, found in angiosperms, is the vessel element. Vessel elements are joined end to end to form vessels in which water flows unimpeded, as in a pipe. The presence of xylem vessels (also called trachea) is considered to be one of the key innovations that led to the success of the angiosperms. However, the occurrence of vessel elements is not restricted to angiosperms, and they are absent in some archaic or "basal" lineages of the angiosperms: (e.g., Amborellaceae, Tetracentraceae, Trochodendraceae, and Winteraceae), and their secondary xylem is described by Arthur Cronquist as "primitively vesselless". Cronquist considered the vessels of Gnetum to be convergent with those of angiosperms. Whether the absence of vessels in basal angiosperms is a primitive condition is contested, the alternative hypothesis states that vessel elements originated in a precursor to the angiosperms and were subsequently lost.
To photosynthesize, plants must absorb CO2 from the atmosphere. However, this comes at a price: while stomata are open to allow CO2 to enter, water can evaporate. Water is lost much faster than CO2 is absorbed, so plants need to replace it, and have developed systems to transport water from the moist soil to the site of photosynthesis. Early plants sucked water between the walls of their cells, then evolved the ability to control water loss (and acquisition) through the use of stomata. Specialized water transport tissues soon evolved in the form of hydroids, tracheids, then secondary xylem, followed by an endodermis and ultimately vessels.
The high CO2 levels of Silurian-Devonian times, when plants were first colonizing land, meant that the need for water was relatively low. As CO2 was withdrawn from the atmosphere by plants, more water was lost in its capture, and more elegant transport mechanisms evolved. As water transport mechanisms, and waterproof cuticles, evolved, plants could survive without being continually covered by a film of water. This transition from poikilohydry to homoiohydry opened up new potential for colonization. Plants then needed a robust internal structure that held long narrow channels for transporting water from the soil to all the different parts of the above-soil plant, especially to the parts where photosynthesis occurred.
During the Silurian, CO2 was readily available, so little water needed expending to acquire it. By the end of the Carboniferous, when CO2 levels had lowered to something approaching today's, around 17 times more water was lost per unit of CO2 uptake. However, even in these "easy" early days, water was at a premium, and had to be transported to parts of the plant from the wet soil to avoid desiccation. This early water transport took advantage of the cohesion-tension mechanism inherent in water. Water has a tendency to diffuse to areas that are drier, and this process is accelerated when water can be wicked along a fabric with small spaces. In small passages, such as those between the plant cell walls (or in tracheids), a column of water behaves like rubber – when molecules evaporate from one end, they pull the molecules behind them along the channels. Therefore, transpiration alone provided the driving force for water transport in early plants. However, without dedicated transport vessels, the cohesion-tension mechanism cannot transport water more than about 2 cm, severely limiting the size of the earliest plants. This process demands a steady supply of water from one end, to maintain the chains; to avoid exhausting it, plants developed a waterproof cuticle. Early cuticle may not have had pores but did not cover the entire plant surface, so that gas exchange could continue. However, dehydration at times was inevitable; early plants coped with this by storing a lot of water between their cell walls and, when necessary, riding out the tough times by putting life "on hold" until more water was supplied.
To be free from the constraints of small size and constant moisture that the parenchymatic transport system inflicted, plants needed a more efficient water transport system. During the early Silurian, they developed specialized cells, which were lignified (or bore similar chemical compounds) to avoid implosion; this process coincided with cell death, allowing their innards to be emptied and water to be passed through them. These wider, dead, empty cells were a million times more conductive than the inter-cell method, giving the potential for transport over longer distances, and higher diffusion rates.
The earliest macrofossils to bear water-transport tubes are Silurian plants placed in the genus Cooksonia. The early Devonian pretracheophytes Aglaophyton and Horneophyton have structures very similar to the hydroids of modern mosses.
Plants continued to innovate new ways of reducing the resistance to flow within their cells, thereby increasing the efficiency of their water transport. Bands on the walls of tubes, in fact apparent from the early Silurian onwards, are an early improvisation to aid the easy flow of water. Banded tubes, as well as tubes with pitted ornamentation on their walls, were lignified and, when they form single celled conduits, are considered to be tracheids. These, the "next generation" of transport cell design, have a more rigid structure than hydroids, allowing them to cope with higher levels of water pressure. Tracheids may have a single evolutionary origin, possibly within the hornworts, uniting all tracheophytes (but they may have evolved more than once).
Water transport requires regulation, and dynamic control is provided by stomata.
By adjusting the amount of gas exchange, they can restrict the amount of water lost through transpiration. This is an important role where water supply is not constant, and indeed stomata appear to have evolved before tracheids, being present in the non-vascular hornworts.
An endodermis probably evolved during the Silu-Devonian, but the first fossil evidence for such a structure is Carboniferous. This structure in the roots covers the water transport tissue and regulates ion exchange (and prevents unwanted pathogens etc. from entering the water transport system). The endodermis can also provide an upwards pressure, forcing water out of the roots when transpiration is not enough of a driver.
Once plants had evolved this level of controlled water transport, they were truly homoiohydric, able to extract water from their environment through root-like organs rather than relying on a film of surface moisture, enabling them to grow to much greater size. As a result of their independence from their surroundings, they lost their ability to survive desiccation – a costly trait to retain.
During the Devonian, maximum xylem diameter increased with time, with the minimum diameter remaining pretty constant. By the middle Devonian, the tracheid diameter of some plant lineages (Zosterophyllophytes) had plateaued. Wider tracheids allow water to be transported faster, but the overall transport rate depends also on the overall cross-sectional area of the xylem bundle itself. The increase in vascular bundle thickness further seems to correlate with the width of plant axes, and plant height; it is also closely related to the appearance of leaves and increased stomatal density, both of which would increase the demand for water.
While wider tracheids with robust walls make it possible to achieve higher water transport tensions, this increases the likelihood of cavitation. Cavitation occurs when a bubble of air forms within a vessel, breaking the bonds between chains of water molecules and preventing them from pulling more water up with their cohesive tension. A tracheid, once cavitated, cannot have its embolism removed and return to service (except in a few advanced angiosperms which have developed a mechanism of doing so). Therefore, it is well worth plants' while to avoid cavitation occurring. For this reason, pits in tracheid walls have very small diameters, to prevent air entering and allowing bubbles to nucleate. Freeze-thaw cycles are a major cause of cavitation. Damage to a tracheid's wall almost inevitably leads to air leaking in and cavitation, hence the importance of many tracheids working in parallel.
Once cavitation has occurred, plants have a range of mechanisms to contain the damage. Small pits link adjacent conduits to allow fluid to flow between them, but not air – although these pits, which prevent the spread of embolism, are also a major cause of them. These pitted surfaces further reduce the flow of water through the xylem by as much as 30%. The diversification of xylem strand shapes with tracheid network topologies increasingly resistant to the spread of embolism likely facilitated increases in plant size and the colonization of drier habitats during the Devonian radiation. Conifers, by the Jurassic, developed bordered pits that had valve-like structures to isolate cavitated elements. These torus-margo structures have an impermeable disc (torus) suspended by a permeable membrane (margo) between two adjacent pores. When a tracheid on one side depressurizes, the disc is sucked into the pore on that side, and blocks further flow. Other plants simply tolerate cavitation. For instance, oaks grow a ring of wide vessels at the start of each spring, none of which survive the winter frosts. Maples use root pressure each spring to force sap upwards from the roots, squeezing out any air bubbles.
Growing to height also employed another trait of tracheids – the support offered by their lignified walls. Defunct tracheids were retained to form a strong, woody stem, produced in most instances by a secondary xylem. However, in early plants, tracheids were too mechanically vulnerable, and retained a central position, with a layer of tough sclerenchyma on the outer rim of the stems. Even when tracheids do take a structural role, they are supported by sclerenchymatic tissue.
Tracheids end with walls, which impose a great deal of resistance on flow; vessel members have perforated end walls, and are arranged in series to operate as if they were one continuous vessel. The function of end walls, which were the default state in the Devonian, was probably to avoid embolisms. An embolism is where an air bubble is created in a tracheid. This may happen as a result of freezing, or by gases dissolving out of solution. Once an embolism is formed, it usually cannot be removed (but see later); the affected cell cannot pull water up, and is rendered useless.
End walls excluded, the tracheids of prevascular plants were able to operate under the same hydraulic conductivity as those of the first vascular plant, Cooksonia.
The size of tracheids is limited as they comprise a single cell; this limits their length, which in turn limits their maximum useful diameter to 80 μm. Conductivity grows with the fourth power of diameter, so increased diameter has huge rewards; vessel elements, consisting of a number of cells, joined at their ends, overcame this limit and allowed larger tubes to form, reaching diameters of up to 500 μm, and lengths of up to 10 m.
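To put the fourth-power scaling mentioned above in concrete terms, the short Python sketch below compares single conduits at the two quoted diameters; it assumes idealized Hagen-Poiseuille-like behaviour (conductivity proportional to the fourth power of diameter) and is purely illustrative.

# Relative hydraulic conductivity of a single conduit scales with diameter**4.
tracheid_diameter_um = 80.0   # maximum useful tracheid diameter quoted above
vessel_diameter_um = 500.0    # maximum vessel diameter quoted above

ratio = (vessel_diameter_um / tracheid_diameter_um) ** 4
print(round(ratio))  # about 1526 times the conductivity of a single tracheid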
Vessels first evolved during the dry, low-CO2 periods of the late Permian, in the horsetails, ferns and Selaginellales independently, and later appeared in the mid Cretaceous in angiosperms and gnetophytes.
Vessels allow the same cross-sectional area of wood to transport around a hundred times more water than tracheids! This allowed plants to fill more of their stems with structural fibers, and also opened a new niche to vines, which could transport water without being as thick as the tree they grew on. Despite these advantages, tracheid-based wood is a lot lighter, thus cheaper to make, as vessels need to be much more reinforced to avoid cavitation.
Development
Xylem development can be described by four terms: centrarch, exarch, endarch and mesarch. As it develops in young plants, its nature changes from protoxylem to metaxylem (i.e. from first xylem to after xylem). The patterns in which protoxylem and metaxylem are arranged are essential in studying plant morphology.
Protoxylem and metaxylem
As a young vascular plant grows, one or more strands of primary xylem form in its stems and roots. The first xylem to develop is called 'protoxylem'. In appearance, protoxylem is usually distinguished by narrower vessels formed of smaller cells. Some of these cells have walls that contain thickenings in the form of rings or helices. Functionally, protoxylem can extend: the cells can grow in size and develop while a stem or root is elongating. Later, 'metaxylem' develops in the strands of xylem. Metaxylem vessels and cells are usually larger; the cells have thickenings typically either in the form of ladderlike transverse bars (scalariform) or continuous sheets except for holes or pits (pitted). Functionally, metaxylem completes its development after elongation ceases when the cells no longer need to grow in size.
Patterns of protoxylem and metaxylem
There are four primary patterns to the arrangement of protoxylem and metaxylem in stems and roots.
Centrarch refers to the case in which the primary xylem forms a single cylinder in the center of the stem and develops from the center outwards. The protoxylem is thus found in the central core, and the metaxylem is in a cylinder around it. This pattern was common in early land plants, such as "rhyniophytes", but is not present in any living plants.
The other three terms are used where there is more than one strand of primary xylem.
Exarch is used when there is more than one strand of primary xylem in a stem or root, and the xylem develops from the outside inwards towards the center, i.e., centripetally. The metaxylem is thus closest to the center of the stem or root, and the protoxylem is closest to the periphery. The roots of vascular plants are generally considered to have exarch development.
Endarch is used when there is more than one strand of primary xylem in a stem or root, and the xylem develops from the inside outwards towards the periphery, i.e., centrifugally. The protoxylem is thus closest to the center of the stem or root, and the metaxylem is closest to the periphery. The stems of seed plants typically have endarch development.
Mesarch is used when there is more than one strand of primary xylem in a stem or root, and the xylem develops from the middle of a strand in both directions. The metaxylem is thus on both the peripheral and central sides of the strand, with the protoxylem between the metaxylem (possibly surrounded by it). The leaves and stems of many ferns have mesarch development.
History
In his book De plantis libri XVI (On Plants, in 16 books) (1583), the Italian physician and botanist Andrea Cesalpino proposed that plants draw water from soil not by magnetism (ut magnes ferrum trahit, as magnetic iron attracts) nor by suction (vacuum), but by absorption, as occurs in the case of linen, sponges, or powders. The Italian biologist Marcello Malpighi was the first person to describe and illustrate xylem vessels, which he did in his book Anatome plantarum ... (1675). Although Malpighi believed that xylem contained only air, the British physician and botanist Nehemiah Grew, who was Malpighi's contemporary, believed that sap ascended both through the bark and through the xylem. However, according to Grew, capillary action in the xylem would raise the sap by only a few inches; to raise the sap to the top of a tree, Grew proposed that the parenchymal cells become turgid and thereby not only squeeze the sap in the tracheids but force some sap from the parenchyma into the tracheids. In 1727, English clergyman and botanist Stephen Hales showed that transpiration by a plant's leaves causes water to move through its xylem. By 1891, the Polish-German botanist Eduard Strasburger had shown that the transport of water in plants did not require the xylem cells to be alive.
See also
Soil plant atmosphere continuum
Suction
Tylosis
Vascular tissue
Xylem sap
Explanatory notes
References
Citations
General references
is the main source used for the paragraph on recent research.
is the first published independent test showing the Scholander bomb actually does measure the tension in the xylem.
is the second published independent test showing the Scholander bomb actually does measure the tension in the xylem.
recent update of the classic book on xylem transport by the late Martin Zimmermann
External links
Plant anatomy
Plant cells
Plant physiology
Tissues (biology) | Xylem | [
"Biology"
] | 5,810 | [
"Plant physiology",
"Plants"
] |
46,676 | https://en.wikipedia.org/wiki/Banach%20fixed-point%20theorem | In mathematics, the Banach fixed-point theorem (also known as the contraction mapping theorem or contractive mapping theorem or Banach–Caccioppoli theorem) is an important tool in the theory of metric spaces; it guarantees the existence and uniqueness of fixed points of certain self-maps of metric spaces and provides a constructive method to find those fixed points. It can be understood as an abstract formulation of Picard's method of successive approximations. The theorem is named after Stefan Banach (1892–1945) who first stated it in 1922.
Statement
Definition. Let (X, d) be a metric space. Then a map T : X → X is called a contraction mapping on X if there exists q in [0, 1) such that
d(T(x), T(y)) ≤ q d(x, y) for all x, y in X.
Banach fixed-point theorem. Let (X, d) be a non-empty complete metric space with a contraction mapping T : X → X. Then T admits a unique fixed-point x* in X (i.e. T(x*) = x*). Furthermore, x* can be found as follows: start with an arbitrary element x_0 in X and define a sequence (x_n) by x_n = T(x_(n-1)) for n ≥ 1. Then x_n → x*.
Remark 1. The following inequalities are equivalent and describe the speed of convergence:
d(x*, x_n) ≤ (q^n / (1 − q)) d(x_1, x_0),
d(x*, x_(n+1)) ≤ (q / (1 − q)) d(x_(n+1), x_n),
d(x*, x_(n+1)) ≤ q d(x*, x_n).
Any such value of q is called a Lipschitz constant for T, and the smallest one is sometimes called "the best Lipschitz constant" of T.
Remark 2. The condition d(T(x), T(y)) < d(x, y) for all x ≠ y is in general not enough to ensure the existence of a fixed point, as is shown by the map
T : [1, ∞) → [1, ∞), T(x) = x + 1/x,
which lacks a fixed point. However, if X is compact, then this weaker assumption does imply the existence and uniqueness of a fixed point, which can easily be found as a minimizer of the function x → d(x, T(x)); indeed, a minimizer exists by compactness, and has to be a fixed point of T. It then easily follows that the fixed point is the limit of any sequence of iterations of T.
Remark 3. When using the theorem in practice, the most difficult part is typically to define X properly so that T maps X into itself.
Proof
Let x_0 in X be arbitrary and define a sequence (x_n) by setting x_n = T(x_(n-1)). We first note that for all n we have the inequality
d(x_(n+1), x_n) ≤ q^n d(x_1, x_0).
This follows by induction on n, using the fact that T is a contraction mapping. Then we can show that (x_n) is a Cauchy sequence. In particular, let m and n be natural numbers such that m > n:
d(x_m, x_n) ≤ d(x_m, x_(m-1)) + d(x_(m-1), x_(m-2)) + ... + d(x_(n+1), x_n) ≤ (q^(m-1) + q^(m-2) + ... + q^n) d(x_1, x_0) ≤ q^n d(x_1, x_0) / (1 − q).
Let ε > 0 be arbitrary. Since q is in [0, 1), we can find a large N so that
q^N d(x_1, x_0) / (1 − q) < ε.
Therefore, by choosing m and n greater than N we may write:
d(x_m, x_n) ≤ q^n d(x_1, x_0) / (1 − q) ≤ q^N d(x_1, x_0) / (1 − q) < ε.
This proves that the sequence (x_n) is Cauchy. By completeness of (X, d), the sequence has a limit x* in X. Furthermore, x* must be a fixed point of T:
x* = lim x_n = lim T(x_(n-1)) = T(lim x_(n-1)) = T(x*).
As a contraction mapping, T is continuous, so bringing the limit inside T was justified. Lastly, T cannot have more than one fixed point in (X, d), since any pair of distinct fixed points p_1 and p_2 would contradict the contraction of T:
d(p_1, p_2) = d(T(p_1), T(p_2)) ≤ q d(p_1, p_2) < d(p_1, p_2).
Applications
A standard application is the proof of the Picard–Lindelöf theorem about the existence and uniqueness of solutions to certain ordinary differential equations. The sought solution of the differential equation is expressed as a fixed point of a suitable integral operator on the space of continuous functions under the uniform norm. The Banach fixed-point theorem is then used to show that this integral operator has a unique fixed point.
One consequence of the Banach fixed-point theorem is that small Lipschitz perturbations of the identity are bi-Lipschitz homeomorphisms. Let Ω be an open set of a Banach space E; let I : Ω → E denote the identity (inclusion) map and let g : Ω → E be a Lipschitz map of constant k < 1. Then
Ω′ := (I + g)(Ω) is an open subset of E: precisely, for any x in Ω such that the open ball B(x, r) is contained in Ω, one has B((I + g)(x), (1 − k)r) contained in Ω′;
I + g : Ω → Ω′ is a bi-Lipschitz homeomorphism;
precisely, (I + g)−1 is still of the form I + h, with h a Lipschitz map of constant k/(1 − k). A direct consequence of this result yields the proof of the inverse function theorem.
It can be used to give sufficient conditions under which Newton's method of successive approximations is guaranteed to work, and similarly for Chebyshev's third-order method.
It can be used to prove existence and uniqueness of solutions to integral equations.
It can be used to give a proof to the Nash embedding theorem.
It can be used to prove existence and uniqueness of solutions to value iteration, policy iteration, and policy evaluation of reinforcement learning.
It can be used to prove existence and uniqueness of an equilibrium in Cournot competition, and other dynamic economic models.
Converses
Several converses of the Banach contraction principle exist. The following is due to Czesław Bessaga, from 1959:
Let f : X → X be a map of an abstract set such that each iterate f^n has a unique fixed point. Let q be in (0, 1); then there exists a complete metric on X such that f is contractive, and q is the contraction constant.
Indeed, very weak assumptions suffice to obtain such a kind of converse. For example if f : X → X is a map on a T1 topological space with a unique fixed point a, such that for each x in X we have f^n(x) → a, then there already exists a metric on X with respect to which f satisfies the conditions of the Banach contraction principle with contraction constant 1/2. In this case the metric is in fact an ultrametric.
Generalizations
There are a number of generalizations (some of which are immediate corollaries).
Let T : X → X be a map on a complete non-empty metric space. Then, for example, some generalizations of the Banach fixed-point theorem are:
Assume that some iterate T^n of T is a contraction. Then T has a unique fixed point.
Assume that for each n, there exists c_n such that d(T^n(x), T^n(y)) ≤ c_n d(x, y) for all x and y, and that the series c_1 + c_2 + ... converges.
Then T has a unique fixed point.
In applications, the existence and uniqueness of a fixed point often can be shown directly with the standard Banach fixed point theorem, by a suitable choice of the metric that makes the map T a contraction. Indeed, the above result by Bessaga strongly suggests to look for such a metric. See also the article on fixed point theorems in infinite-dimensional spaces for generalizations.
A different class of generalizations arise from suitable generalizations of the notion of metric space, e.g. by weakening the defining axioms for the notion of metric. Some of these have applications, e.g., in the theory of programming semantics in theoretical computer science.
Example
An application of the Banach fixed-point theorem and fixed-point iteration can be used to quickly obtain an approximation of π with high accuracy. Consider the function f(x) = sin(x) + x. It can be verified that π is a fixed point of f, and that f maps the closed interval [3, π + 1] to itself. Moreover, f'(x) = 1 + cos(x), and it can be verified that 0 ≤ f'(x) < 1 on this interval. Therefore, by an application of the mean value theorem, f has a Lipschitz constant less than 1 on this interval. Applying the Banach fixed-point theorem shows that the fixed point π is the unique fixed point on the interval, allowing for fixed-point iteration to be used.
For example, the value 3 may be chosen to start the fixed-point iteration, as it lies in the interval. The Banach fixed-point theorem may be used to conclude that the iterates 3, f(3), f(f(3)), ... converge to π.
Applying f to 3 only three times already yields an expansion of π accurate to 33 digits:
3.141592653589793238462643383279502...
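As a quick numerical check, the minimal Python sketch below (added here for illustration) repeats the same iteration at double precision, which limits the attainable accuracy to far fewer than 33 digits but already shows the rapid convergence.

import math

def f(x):
    # The contraction used in the example above; its fixed point on the interval is pi.
    return math.sin(x) + x

x = 3.0  # starting value in the interval
for _ in range(3):
    x = f(x)
print(x)                 # 3.141592653589793
print(abs(x - math.pi))  # 0.0 at double precision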
See also
Brouwer fixed-point theorem
Caristi fixed-point theorem
Contraction mapping
Fichera's existence principle
Fixed-point iteration
Fixed-point theorems
Infinite compositions of analytic functions
Kantorovich theorem
Notes
References
See chapter 7.
Articles containing proofs
Eponymous theorems of mathematics
Fixed-point theorems
Metric geometry
Topology | Banach fixed-point theorem | [
"Physics",
"Mathematics"
] | 1,556 | [
"Theorems in mathematical analysis",
"Articles containing proofs",
"Fixed-point theorems",
"Theorems in topology",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
46,680 | https://en.wikipedia.org/wiki/Entropy%20coding | In information theory, an entropy coding (or entropy encoding) is any lossless data compression method that attempts to approach the lower bound declared by Shannon's source coding theorem, which states that any lossless data compression method must have an expected code length greater than or equal to the entropy of the source.
More precisely, the source coding theorem states that for any source distribution, the expected code length satisfies E[ l(d(x)) ] ≥ E[ −log_b P(x) ] (with the expectation taken over the source distribution), where l(d(x)) is the number of symbols in a code word, d is the coding function, b is the number of symbols used to make output codes and P(x) is the probability of the source symbol. An entropy coding attempts to approach this lower bound.
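To make the bound concrete, the Python sketch below computes the Shannon entropy of a small example distribution in bits per symbol (base b = 2); the probabilities are arbitrary illustrative values.

import math

# Example source distribution over four symbols (illustrative values only)
probabilities = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# Shannon entropy: the smallest achievable expected code length, in bits per symbol
entropy_bits = -sum(p * math.log2(p) for p in probabilities.values())
print(entropy_bits)  # 1.75 bits per symbol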
Two of the most common entropy coding techniques are Huffman coding and arithmetic coding.
If the approximate entropy characteristics of a data stream are known in advance (especially for signal compression), a simpler static code may be useful.
These static codes include universal codes (such as Elias gamma coding or Fibonacci coding) and Golomb codes (such as unary coding or Rice coding).
Since 2014, data compressors have started using the asymmetric numeral systems family of entropy coding techniques, which allows combination of the compression ratio of arithmetic coding with a processing cost similar to Huffman coding.
Entropy as a measure of similarity
Besides using entropy coding as a way to compress digital data, an entropy encoder can also be used to measure the amount of similarity between streams of data and already existing classes of data. This is done by generating an entropy coder/compressor for each class of data; unknown data is then classified by feeding the uncompressed data to each compressor and seeing which compressor yields the highest compression. The coder with the best compression is probably the coder trained on the data that was most similar to the unknown data.
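A minimal sketch of this classification idea follows in Python. It uses zlib (a DEFLATE compressor combining LZ77 with Huffman entropy coding) as a stand-in for a per-class coder, and the reference texts are invented for illustration; a real system would train a dedicated entropy coder on each class of data.

import zlib

# Reference data standing in for the "training" material of each class
classes = {
    "english": b"the quick brown fox jumps over the lazy dog " * 20,
    "digits": b"0123456789 9876543210 1122334455 " * 20,
}

def best_class(unknown: bytes) -> str:
    # Pick the class whose reference data lets the unknown sample compress the most:
    # measure how many extra compressed bytes the unknown sample adds to each reference.
    extra = {}
    for name, reference in classes.items():
        base = len(zlib.compress(reference))
        combined = len(zlib.compress(reference + unknown))
        extra[name] = combined - base
    return min(extra, key=extra.get)

print(best_class(b"a lazy dog jumps over a quick fox"))  # expected: 'english'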
See also
Arithmetic coding
Asymmetric numeral systems (ANS)
Context-adaptive binary arithmetic coding (CABAC)
Huffman coding
Range coding
References
External links
Information Theory, Inference, and Learning Algorithms, by David MacKay (2003), gives an introduction to Shannon theory and data compression, including the Huffman coding and arithmetic coding.
Source Coding, by T. Wiegand and H. Schwarz (2011).
Entropy coding
Entropy and information
Data compression | Entropy coding | [
"Physics",
"Mathematics"
] | 453 | [
"Dynamical systems",
"Entropy",
"Physical quantities",
"Entropy and information"
] |
46,696 | https://en.wikipedia.org/wiki/Machaeridia%20%28annelid%29 | Machaeridia is an extinct group of armoured, segmented annelid worms, known from the Early Ordovician (Late Tremadoc) to Carboniferous. It consists of three distinct families: the plumulitids, turrilepadids and lepidocoleids.
Fossils
Only the calcitic sclerites ("armour plates") of these worms tend to be preserved in the fossil record. These are tiny, and usually found disarticulated: articulated specimens reach about a centimeter in length, and are incredibly rare – hence the limited degree of study since their description in 1857.
The machaeridians are characterized by having serialized rows of calcitic shell plates. The dorsal sclerites were convex and almost isometric; lateral sclerites were flatter and longer. The plates comprised two calcite layers: the outer layer is thin and formed by lamellar deposition, whereas new elements were added to the thicker inner layer as it grew. Scales are ridged with growth lines, implying that they grew episodically. A few taxa experimented with different approaches to scale formation; some were only very weakly calcified and may have mainly been organic in nature. They were never moulted, and each scale could be moved with an attached muscle.
The front two segments of the machaeridians were commonly different from the rest, bearing fewer spiny projections.
The plumulitids are flattened from above and looks much like the coat of mail armour of chitons. The two other families are laterally compressed and some lepidocoleids formed a dorsal hinge, which make these machaeridians look like a string of bivalves.
Ecology
Machaeridians are often found in association with stylophorans - the cornutes and mitrates. This suggests that they possessed a similar ecology. They probably fed on organic detritus, perhaps even the faeces of the accompanying stylophorans.
Their scales almost certainly performed a defensive role.
The organisms would have had limited ability to flex to the right and left (in the sagittal plane), but would have been able to roll up. While most possessed bilateral symmetry, the scales on the right and left side of Turrilepas wrightiana are different in shape and form.
The Plumulitid machaeridians would have moved across the surface of the sea floor using parapodia, whereas the fully armoured Turrelepids and Lepidocoelids burrowed in a peristaltic fashion reminiscent of their evolutionary cousins, the earthworms. This burrowing role has subjected them to the same evolutionary pressures which affect burrowing bivalves; convergent evolution as a result of their shared function probably contributed to early suggestions that the machaeridians should be classified with the molluscs.
Taxonomic affinity
Historically the group has been assigned to the echinoderms, barnacles, annelids and mollusks. Relationships to other Cambrian forms (such as the Halkieriids) have been proposed and discounted. In 2008, the discovery of a fossil preserving soft tissue (including chaetae and parapodia) established an annelid affinity. Machaeridians represent the only instance of this group developing calcitic armour (notwithstanding certain polychaetes that integrate calcite into their chaetae). The exact position within annelids remains unresolved, though some characters indicate a relationship to Aphroditacean annelids (Vinther et al. 2008). In an accompanying commentary, Jean-Bernard Caron suggested that machaeridians must be a stem group based on a number of specialised features. However, one cannot assess crown group/stem group affinities based on autapomorphies, but on shared morphological traits or the lack thereof. He also suggested that machaeridians might be polyphyletic, but machaeridians are a well-defined group with a number of shared characters and morphological gradations among all three families.
A study in 2019 recognized machaeridians as phyllodocids based on their jaws.
Articulated specimens
Articulated machaeridians are known from:
... and possibly elsewhere
References
Annelids
Controversial taxa
Prehistoric annelids
† | Machaeridia (annelid) | [
"Biology"
] | 878 | [
"Biological hypotheses",
"Controversial taxa"
] |
46,698 | https://en.wikipedia.org/wiki/Property%20damage | Property damage (sometimes called damage to property), is the damage or destruction of real or tangible personal property, caused by negligence, willful destruction, or an act of nature. Destruction of property (sometimes called property destruction, or criminal damage in England and Wales) is a sub-type of property damage that involves damage to property that results from willful misconduct and is punishable as a crime.
Destruction of property encompasses vandalism (deliberate damage, destruction, or defacement), building implosion (destroying property with explosives), and arson (destroying property with fire), and similar crimes that involve unlawful infliction of damage to or destruction of personal property or real property.
See also
Criminal mischief
Criminal damage in English law
Arson
Building implosion
Mischief
Vandalism
References
Criminal law
Property insurance
Vandalism
Problem behavior
Property crimes | Property damage | [
"Biology"
] | 172 | [
"Behavior",
"Problem behavior",
"Human behavior"
] |
46,740 | https://en.wikipedia.org/wiki/Euler%27s%20identity | In mathematics, Euler's identity (also known as Euler's equation) is the equality
$$e^{i\pi} + 1 = 0,$$
where
$e$ is Euler's number, the base of natural logarithms,
$i$ is the imaginary unit, which by definition satisfies $i^2 = -1$, and
$\pi$ is pi, the ratio of the circumference of a circle to its diameter.
Euler's identity is named after the Swiss mathematician Leonhard Euler. It is a special case of Euler's formula when evaluated for $x = \pi$. Euler's identity is considered to be an exemplar of mathematical beauty as it shows a profound connection between the most fundamental numbers in mathematics. In addition, it is directly used in a proof that $\pi$ is transcendental, which implies the impossibility of squaring the circle.
Mathematical beauty
Euler's identity is often cited as an example of deep mathematical beauty. Three of the basic arithmetic operations occur exactly once each: addition, multiplication, and exponentiation. The identity also links five fundamental mathematical constants:
The number 0, the additive identity
The number 1, the multiplicative identity
The number $\pi$ ($\pi$ = 3.14159...), the fundamental circle constant
The number $e$ ($e$ = 2.71828...), also known as Euler's number, which occurs widely in mathematical analysis
The number $i$, the imaginary unit such that $i^2 = -1$
The equation is often given in the form of an expression set equal to zero, which is common practice in several areas of mathematics.
Stanford University mathematics professor Keith Devlin has said, "like a Shakespearean sonnet that captures the very essence of love, or a painting that brings out the beauty of the human form that is far more than just skin deep, Euler's equation reaches down into the very depths of existence". And Paul Nahin, a professor emeritus at the University of New Hampshire, who has written a book dedicated to Euler's formula and its applications in Fourier analysis, describes Euler's identity as being "of exquisite beauty".
Mathematics writer Constance Reid has opined that Euler's identity is "the most famous formula in all mathematics". And Benjamin Peirce, a 19th-century American philosopher, mathematician, and professor at Harvard University, after proving Euler's identity during a lecture, stated that the identity "is absolutely paradoxical; we cannot understand it, and we don't know what it means, but we have proved it, and therefore we know it must be the truth".
A poll of readers conducted by The Mathematical Intelligencer in 1990 named Euler's identity as the "most beautiful theorem in mathematics". In another poll of readers that was conducted by Physics World in 2004, Euler's identity tied with Maxwell's equations (of electromagnetism) as the "greatest equation ever".
At least three books in popular mathematics have been published about Euler's identity:
Dr. Euler's Fabulous Formula: Cures Many Mathematical Ills, by Paul Nahin (2011)
A Most Elegant Equation: Euler's formula and the beauty of mathematics, by David Stipp (2017)
Euler's Pioneering Equation: The most beautiful theorem in mathematics, by Robin Wilson (2018).
Explanations
Imaginary exponents
Euler's identity asserts that $e^{i\pi}$ is equal to −1. The expression $e^{i\pi}$ is a special case of the expression $e^z$, where $z$ is any complex number. In general, $e^z$ is defined for complex $z$ by extending one of the definitions of the exponential function from real exponents to complex exponents. For example, one common definition is:
$$e^z = \lim_{n \to \infty} \left(1 + \frac{z}{n}\right)^n.$$
Euler's identity therefore states that the limit, as $n$ approaches infinity, of $\left(1 + \frac{i\pi}{n}\right)^n$ is equal to −1.
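The limit above can be checked numerically. The short Python sketch below is an illustration added here (not part of the original article; the variable names are arbitrary) that evaluates $\left(1 + \frac{i\pi}{n}\right)^n$ for increasing $n$ and shows it drifting toward −1.

```python
# Numerical illustration of the limit definition discussed above.
# Assumption: plain Python complex arithmetic is sufficient for this check.
import math

for n in (10, 1_000, 100_000):
    value = (1 + 1j * math.pi / n) ** n   # (1 + i*pi/n)^n
    print(n, value)

# As n grows, the printed values approach -1 + 0i, in line with e^{i*pi} = -1.
```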
Euler's identity is a special case of Euler's formula, which states that for any real number $x$,
$$e^{ix} = \cos x + i\sin x,$$
where the inputs of the trigonometric functions sine and cosine are given in radians.
In particular, when $x = \pi$,
$$e^{i\pi} = \cos \pi + i\sin \pi.$$
Since
$$\cos \pi = -1$$
and
$$\sin \pi = 0,$$
it follows that
$$e^{i\pi} = -1 + 0i = -1,$$
which yields Euler's identity:
$$e^{i\pi} + 1 = 0.$$
Geometric interpretation
Any complex number $z = x + iy$ can be represented by the point $(x, y)$ on the complex plane. This point can also be represented in polar coordinates as $(r, \theta)$, where $r$ is the absolute value of $z$ (distance from the origin), and $\theta$ is the argument of $z$ (angle counterclockwise from the positive x-axis). By the definitions of sine and cosine, this point has cartesian coordinates of $(r\cos\theta, r\sin\theta)$, implying that $z = r(\cos\theta + i\sin\theta)$. According to Euler's formula, this is equivalent to saying $z = re^{i\theta}$.
Euler's identity says that $-1 = e^{i\pi}$. Since $e^{i\pi}$ is $re^{i\theta}$ for $r = 1$ and $\theta = \pi$, this can be interpreted as a fact about the number −1 on the complex plane: its distance from the origin is 1, and its angle from the positive x-axis is $\pi$ radians.
Additionally, when any complex number $z$ is multiplied by $e^{i\theta}$, it has the effect of rotating $z$ counterclockwise by an angle of $\theta$ on the complex plane. Since multiplication by −1 reflects a point across the origin, Euler's identity can be interpreted as saying that rotating any point $\pi$ radians around the origin has the same effect as reflecting the point across the origin. Similarly, setting $\theta$ equal to $2\pi$ yields the related equation $e^{2\pi i} = 1$, which can be interpreted as saying that rotating any point by one turn around the origin returns it to its original position.
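As a small illustration of the rotation reading above, the following Python sketch (added here, not from the article; the sample point 3 + 4i is an arbitrary choice) multiplies a complex number by $e^{i\theta}$ for a quarter turn and a half turn.

```python
# Multiplying by e^{i*theta} rotates a point counterclockwise by theta;
# theta = pi reflects it through the origin, matching Euler's identity.
import cmath, math

z = 3 + 4j
quarter_turn = cmath.exp(1j * math.pi / 2)   # 90-degree rotation
half_turn = cmath.exp(1j * math.pi)          # 180-degree rotation, i.e. -1

print(z * quarter_turn)   # approximately -4 + 3j
print(z * half_turn)      # approximately -3 - 4j, which is -z
```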
Generalizations
Euler's identity is also a special case of the more general identity that the $n$th roots of unity, for $n > 1$, add up to 0:
$$\sum_{k=0}^{n-1} e^{\frac{2\pi i k}{n}} = 0.$$
Euler's identity is the case where $n = 2$.
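A quick numerical check of the roots-of-unity identity is sketched below (an illustration added here; the function name is an arbitrary choice). For each $n$ the $n$th roots of unity sum to zero up to floating-point error, and $n = 2$ reduces to Euler's identity.

```python
# Sum of the n-th roots of unity, which should be (numerically) zero.
import cmath, math

def roots_of_unity_sum(n):
    return sum(cmath.exp(2j * math.pi * k / n) for k in range(n))

for n in (2, 3, 5, 8):
    print(n, roots_of_unity_sum(n))   # each result is ~0 apart from rounding error
```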
A similar identity also applies to the quaternion exponential: let $\{i, j, k\}$ be the basis quaternions; then,
$$e^{\frac{1}{\sqrt{3}}(i \pm j \pm k)\pi} + 1 = 0.$$
More generally, let $q$ be a quaternion with a zero real part and a norm equal to 1; that is, $q = ai + bj + ck$ with $a^2 + b^2 + c^2 = 1$. Then one has
$$e^{q\pi} + 1 = 0.$$
The same formula applies to octonions with a zero real part and a norm equal to 1. These formulas are a direct generalization of Euler's identity, since $i$ and $-i$ are the only complex numbers with a zero real part and a norm (absolute value) equal to 1.
History
While Euler's identity is a direct result of Euler's formula, published in his monumental work of mathematical analysis in 1748, Introductio in analysin infinitorum, it is questionable whether the particular concept of linking five fundamental constants in a compact form can be attributed to Euler himself, as he may never have expressed it.
Robin Wilson states the following.
See also
De Moivre's formula
Exponential function
Gelfond's constant
Notes
References
Sources
Conway, John H., and Guy, Richard K. (1996), The Book of Numbers, Springer
Crease, Robert P. (10 May 2004), "The greatest equations ever", Physics World [registration required]
Dunham, William (1999), Euler: The Master of Us All, Mathematical Association of America
Euler, Leonhard (1922), Leonhardi Euleri opera omnia. 1, Opera mathematica. Volumen VIII, Leonhardi Euleri introductio in analysin infinitorum. Tomus primus, Leipzig: B. G. Teubneri
Kasner, E., and Newman, J. (1940), Mathematics and the Imagination, Simon & Schuster
Maor, Eli (1998), e: The Story of a Number, Princeton University Press
Nahin, Paul J. (2006), Dr. Euler's Fabulous Formula: Cures Many Mathematical Ills, Princeton University Press
Paulos, John Allen (1992), Beyond Numeracy: An Uncommon Dictionary of Mathematics, Penguin Books
Reid, Constance (various editions), From Zero to Infinity, Mathematical Association of America
Sandifer, C. Edward (2007), Euler's Greatest Hits, Mathematical Association of America
External links
Intuitive understanding of Euler's formula
Exponentials
Mathematical identities
E (mathematical constant)
Theorems in complex analysis
Leonhard Euler
de:Eulersche Formel#Eulersche Identit.C3.A4t
pl:Wzór Eulera#Tożsamość Eulera | Euler's identity | [
"Mathematics"
] | 1,663 | [
"Theorems in mathematical analysis",
"Theorems in complex analysis",
"E (mathematical constant)",
"Exponentials",
"Mathematical problems",
"Mathematical identities",
"Mathematical theorems",
"Algebra"
] |
46,754 | https://en.wikipedia.org/wiki/Sacred%20geometry | Sacred geometry ascribes symbolic and sacred meanings to certain geometric shapes and certain geometric proportions. It is associated with the belief in a divine creator, the universal geometer. The geometry used in the design and construction of religious structures such as churches, temples, mosques, religious monuments, altars, and tabernacles has sometimes been considered sacred. The concept applies also to sacred spaces such as temenoi, sacred groves, village greens, pagodas and holy wells, Mandala Gardens and the creation of religious and spiritual art.
As worldview and cosmology
The belief that a god created the universe according to a geometric plan has ancient origins. Plutarch attributed the belief to Plato, writing that "Plato said God geometrizes continually" (Convivialium disputationum, liber 8,2). In modern times, the mathematician Carl Friedrich Gauss adapted this quote, saying "God arithmetizes".
Johannes Kepler (1571–1630) believed in the geometric underpinnings of the cosmos. Harvard mathematician Shing-Tung Yau expressed a belief in the centrality of geometry in 2010:
"Lest one conclude that geometry is little more than a well-calibrated ruler – and this is no knock against the ruler, which happens to be a technology I admire – geometry is one of the main avenues available to us for probing the universe. Physics and cosmology have been, almost by definition, absolutely crucial for making sense of the universe. Geometry's role in this may be less obvious, but is equally vital. I would go so far as to say that geometry not only deserves a place at the table alongside physics and cosmology, but in many ways, it is the table."
Natural forms
According to Stephen Skinner, the study of sacred geometry has its roots in the study of nature, and the mathematical principles at work therein. Many forms observed in nature can be related to geometry; for example, the chambered nautilus grows at a constant rate and so its shell forms a logarithmic spiral to accommodate that growth without changing shape. Also, honeybees construct hexagonal cells to hold their honey. These and other correspondences are sometimes interpreted in terms of sacred geometry and considered to be further proof of the natural significance of geometric forms.
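The logarithmic spiral mentioned above can be written as $r = a e^{b\theta}$, so the radius grows by a constant factor with each turn. The Python sketch below is a minimal illustration added here, assuming arbitrary values for the constants a and b rather than measurements of any actual shell.

```python
# Sample points on a logarithmic spiral r = a * exp(b * theta).
# The constants a and b are illustrative choices, not nautilus data.
import math

a, b = 1.0, 0.18
for step in range(8):
    theta = step * math.pi / 2                 # sample every quarter turn
    r = a * math.exp(b * theta)                # radius grows by a fixed factor per turn
    x, y = r * math.cos(theta), r * math.sin(theta)
    print(f"theta={theta:5.2f}  r={r:6.2f}  point=({x:7.2f}, {y:7.2f})")
```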
Representations in art and architecture
Geometric ratios, and geometric figures were often employed in the designs of ancient Egyptian, ancient Indian, Greek and Roman architecture. Medieval European cathedrals also incorporated symbolic geometry. Indian and Himalayan spiritual communities often constructed temples and fortifications on design plans of mandala and yantra. Mandala Vaatikas or Sacred Gardens were designed using the same principles.
Many of the sacred geometry principles of the human body and of ancient architecture were compiled into the Vitruvian Man drawing by Leonardo da Vinci. The latter drawing was itself based on the much older writings of the Roman architect Vitruvius.
In Buddhism
Mandalas are made up of a compilation of geometric shapes. In Buddhism, it is made up of concentric circles and squares that are equally placed from the center. Located within the geometric configurations are deities or suggestions of the deity, such as in the form of a symbol. This is because Buddhists believe that deities can actually manifest inside the mandala. Mandalas can be created with a variety of mediums. Tibetan Buddhists create mandalas out of sand that are then ritually destroyed. In order to create the mandala, two lines are first drawn on a predetermined grid. The lines, known as Brahman lines, must overlap at the precisely calculated center of the grid. The mandala is then divided into thirteen equal parts not by a mathematical calculation, but through trial and error. Next, monks purify the grid to prepare it for the constructing of the deities before sand is finally added. Tibetan Buddhists believe that anyone who looks at the mandala will receive positive energy and be blessed. Due to the Buddhist belief in impermanence, the mandala is eventually dismantled and is ritualistically released into the world.
In Chinese spiritual traditions
One of the cornerstones of Chinese folk religion is the relationship between man and nature. This is epitomized in feng shui, a set of architectural principles outlining the design plans of buildings in order to optimize the harmony of man and nature through the movement of Chi, or “life-generating energy.” In order to maximize the flow of Chi throughout a building, its design plan must utilize specific shapes. Rectangles and squares are considered to be the best shapes to use in feng shui design. This is because other shapes may obstruct the flow of Chi from one room to the next due to what are considered to be unnatural angles. Room layout is also an important element, as doors should be proportional to one another and located at appropriate positions throughout the house. Typically, doors are not situated across from one another because it may cause Chi to flow too fast from one room to the next.
The Forbidden City is an example of a building that uses sacred geometry through the principles of feng shui in its design plan. It is laid out in the shape of a rectangle that measures over half a mile long and about half a mile wide. Furthermore, the Forbidden City constructed its most important buildings on a central axis. The Hall of Supreme Harmony, which was the Emperor’s throne room, is located at the midpoint or “epicenter” of the central axis. This was done intentionally, as it was meant to show that when the Emperor entered this room, he would be ceremonially transformed into the center of the universe.
In Islam
The geometric designs in Islamic art are often built on combinations of repeated squares and circles, which may be overlapped and interlaced, as can arabesques (with which they are often combined), to form intricate and complex patterns, including a wide variety of tessellations. These may constitute the entire decoration, may form a framework for floral or calligraphic embellishments, or may retreat into the background around other motifs. The complexity and variety of patterns used evolved from simple stars and lozenges in the ninth century, through a variety of 6- to 13-point patterns by the 13th century, and finally to include also 14- and 16-point stars in the sixteenth century.
Geometric patterns occur in a variety of forms in Islamic art and architecture including kilim carpets, Persian girih and Moroccan/Algerian zellige tilework, muqarnas decorative vaulting, jali pierced stone screens, ceramics, leather, stained glass, woodwork, and metalwork.
Islamic geometric patterns are used in the Quran, Mosques and even in the calligraphies.
In Hinduism/Indic Religion
The Agamas are a collection of Sanskrit, Tamil, and Grantha scriptures chiefly constituting the methods of temple construction and creation of idols, worship means of deities, philosophical doctrines, meditative practices, attainment of sixfold desires, and four kinds of yoga.
Elaborate rules are laid out in the Agamas for Shilpa (the art of sculpture) describing the quality requirements of such matters as the places where temples are to be built, the kinds of image to be installed, the materials from which they are to be made, their dimensions, proportions, air circulation, and lighting in the temple complex. The Manasara and Silpasara are works that deal with these rules. The rituals of daily worship at the temple also follow rules laid out in the Agamas.
In Hindu temples, the symbolic representation of the cosmic model is projected onto the structure using the Vastu Shastra principle of Sukha Darshan, which states that smaller parts of the temple should be self-similar replicas of the whole. The repetition of these parts symbolizes the fractal patterns found in nature, and these patterns make up the exterior of Hindu temples. Each element and detail is proportional to every other; this correspondence is also known as sacred geometry.
In Christianity
The construction of Medieval European cathedrals was often based on geometries intended to make the viewer see the world through mathematics, and through this understanding, gain a better understanding of the divine. These churches frequently featured a Latin Cross floor-plan.
At the beginning of the Renaissance in Europe, views shifted to favor simple and regular geometries. The circle in particular became a central and symbolic shape for the base of buildings, as it represented the perfection of nature and the centrality of man's place in the universe. The use of the circle and other simple and symmetrical geometric shapes was solidified as a staple of Renaissance sacred architecture in Leon Battista Alberti's architectural treatise, which described the ideal church in terms of spiritual geometry.
In the High Middle Ages, leading Christian philosophers explained the layout of the universe in terms of a microcosm analogy. In her book describing the divine visions she witnessed, Hildegard of Bingen explains that she saw an outstretched human figure located within a circular orb. When interpreted by theologians, the human figure was Christ and mankind showing the Earthly realm and the circumference of the circle was a representation of the universe. Some images also show above the universe a depiction of God. This is thought to later have inspired Da Vinci’s Vitruvian Man.
Dante uses circles to make up the nine layers of hell categorized in his book, The Divine Comedy. “Celestial spheres” are also utilized to make up the nine layers of Paradise. He further creates a cosmic order of circular forms that stretches from Jerusalem in the Earthly realm up to God in Heaven. This cosmology is believed to have been inspired by the ancient astronomer Ptolemy.
Unanchored geometry
Stephen Skinner criticizes the tendency of some writers to place a geometric diagram over virtually any image of a natural object or human created structure, find some lines intersecting the image and declare it based on sacred geometry. If the geometric diagram does not intersect major physical points in the image, the result is what Skinner calls "unanchored geometry".
Notable artists
Hildegard of Bingen
Hilma af Klint
Olga Fröbe-Kapteyn
Carl Jung
See also
Circle dance
Golden Ratio
Harmony of the spheres
Lu Ban and Feng shui
Magic circle
Numerology
Shield of the Trinity
108 (number)
References
Further reading
Bain, George. Celtic Art: The Methods of Construction. Dover, 1973. .
Bamford, Christopher, Homage to Pythagoras: Rediscovering Sacred Science, Lindisfarne Press, 1994,
Johnson, Anthony: Solving Stonehenge, the New Key to an Ancient Enigma. Thames & Hudson 2008
Lawlor, Robert. Sacred Geometry: Philosophy and practice (Art and Imagination). Thames & Hudson, 1989 (1st edition 1979, 1980, or 1982). .
Lippard, Lucy R. Overlay: Contemporary Art and the Art of Prehistory. Pantheon Books New York 1983
Mann, A. T. Sacred Architecture, Element Books, 1993, .
Michell, John. City of Revelation. Abacus, 1972. .
Schneider, Michael S. A Beginner's Guide to Constructing the Universe: Mathematical Archetypes of Nature, Art, and Science. Harper, 1995.
The Golden Mean, Parabola magazine, v.16, n.4 (1991)
West, John Anthony, Inaugural Lines: Sacred geometry at St. John the Divine, Parabola magazine, v.8, n.1, Spring 1983.
External links | Sacred geometry | [
"Engineering"
] | 2,343 | [
"Sacred geometry",
"Architecture"
] |
46,770 | https://en.wikipedia.org/wiki/Fixed-wing%20aircraft | A fixed-wing aircraft is a heavier-than-air aircraft, such as an airplane, which is capable of flight using aerodynamic lift. Fixed-wing aircraft are distinct from rotary-wing aircraft (in which a rotor mounted on a spinning shaft generates lift), and ornithopters (in which the wings oscillate to generate lift). The wings of a fixed-wing aircraft are not necessarily rigid; kites, hang gliders, variable-sweep wing aircraft, and airplanes that use wing morphing are all classified as fixed wing.
Gliding fixed-wing aircraft, including free-flying gliders and tethered kites, can use moving air to gain altitude. Powered fixed-wing aircraft (airplanes) that gain forward thrust from an engine include powered paragliders, powered hang gliders and ground effect vehicles. Most fixed-wing aircraft are operated by a pilot, but some are unmanned and controlled either remotely or autonomously.
History
Kites
Kites were used approximately 2,800 years ago in China, where kite-building materials were readily available. Leaf kites may have been flown even earlier in what is now Sulawesi, based on interpretations of cave paintings on nearby Muna Island. Paper kites were being flown by at least 549 AD, when, as recorded that year, a paper kite was used to carry a message for a rescue mission. Ancient and medieval Chinese sources report kites used for measuring distances, testing the wind, lifting men, signaling, and communication for military operations.
Kite stories were brought to Europe by Marco Polo towards the end of the 13th century, and kites were brought back by sailors from Japan and Malaysia in the 16th and 17th centuries. Although initially regarded as curiosities, by the 18th and 19th centuries kites were used for scientific research.
Gliders and powered devices
Around 400 BC in Greece, Archytas was reputed to have designed and built the first self-propelled flying device, shaped like a bird and propelled by a jet of what was probably steam, said to have flown some distance. This machine may have been suspended during its flight.
One of the earliest attempts with gliders was made by the 11th-century monk Eilmer of Malmesbury, and it failed. A 17th-century account states that the 9th-century poet Abbas Ibn Firnas made a similar attempt, though no earlier sources record this event.
In 1799, Sir George Cayley laid out the concept of the modern airplane as a fixed-wing machine with systems for lift, propulsion, and control. Cayley was building and flying models of fixed-wing aircraft as early as 1803, and built a successful passenger-carrying glider in 1853. In 1856, Frenchman Jean-Marie Le Bris made the first powered flight by having his glider L'Albatros artificiel towed by a horse along a beach. In 1884, American John J. Montgomery made controlled flights in a glider as part of a series of gliders he built between 1883 and 1886. Other aviators who made similar flights at that time were Otto Lilienthal, Percy Pilcher, and protégés of Octave Chanute.
In the 1890s, Lawrence Hargrave conducted research on wing structures and developed a box kite that lifted the weight of a man. His designs were widely adopted. He also developed a type of rotary aircraft engine, but did not create a powered fixed-wing aircraft.
Powered flight
Sir Hiram Maxim built a craft that weighed 3.5 tons, with a 110-foot (34-meter) wingspan powered by two 360-horsepower (270-kW) steam engines driving two propellers. In 1894, his machine was tested with overhead rails to prevent it from rising. The test showed that it had enough lift to take off. The craft was uncontrollable, and Maxim abandoned work on it.
The Wright brothers' flights in 1903 with their Flyer I are recognized by the Fédération Aéronautique Internationale (FAI), the standard setting and record-keeping body for aeronautics, as "the first sustained and controlled heavier-than-air powered flight". By 1905, the Wright Flyer III was capable of fully controllable, stable flight for substantial periods.
In 1906, Brazilian inventor Alberto Santos Dumont designed, built and piloted an aircraft that set the first world record recognized by the Aéro-Club de France by flying the 14 bis in less than 22 seconds. The flight was certified by the FAI.
The Bleriot VIII design of 1908 was an early aircraft design that had the modern monoplane tractor configuration. It had movable tail surfaces controlling both yaw and pitch, a form of roll control supplied either by wing warping or by ailerons and controlled by its pilot with a joystick and rudder bar. It was an important predecessor of his later Bleriot XI Channel-crossing aircraft of the summer of 1909.
World War I
World War I initiated the use of aircraft as weapons and observation platforms. The earliest known aerial victory with a synchronized machine gun-armed fighter aircraft occurred in 1915, flown by German Luftstreitkräfte Lieutenant Kurt Wintgens. Fighter aces appeared; the greatest (by number of air victories) was Manfred von Richthofen.
Alcock and Brown crossed the Atlantic non-stop for the first time in 1919. The first commercial flights traveled between the United States and Canada in 1919.
Interwar aviation; the "Golden Age"
The so-called Golden Age of Aviation occurred between the two World Wars, during which earlier breakthroughs were refined and extended. Innovations included Hugo Junkers' all-metal air frames in 1915, leading by the early 1930s to multi-engine aircraft with wingspans of 60 meters and more; adoption of the mostly air-cooled radial engine as a practical aircraft power plant alongside V-12 liquid-cooled aviation engines; and ever longer flights – as with a Vickers Vimy in 1919, followed months later by the U.S. Navy's NC-4 transatlantic flight – culminating in May 1927 with Charles Lindbergh's solo trans-Atlantic flight in the Spirit of St. Louis, which spurred still longer flight attempts.
World War II
Airplanes had a presence in the major battles of World War II. They were an essential component of military strategies, such as the German Blitzkrieg or the American and Japanese aircraft carrier campaigns of the Pacific.
Military gliders were developed and used in several campaigns, but were limited by the high casualty rate encountered. The Focke-Achgelis Fa 330 Bachstelze (Wagtail) rotor kite of 1942 was notable for its use by German U-boats.
Before and during the war, British and German designers worked on jet engines. The first jet aircraft to fly, in 1939, was the German Heinkel He 178. In 1943, the first operational jet fighter, the Messerschmitt Me 262, went into service with the German Luftwaffe. Later in the war the British Gloster Meteor entered service, but never saw action – the top air speeds of that era were reached in the early July 1944 unofficial record flight of the German Me 163B V18 rocket fighter prototype.
Postwar
In October 1947, the Bell X-1 was the first aircraft to exceed the speed of sound, flown by Chuck Yeager.
In 1948–49, aircraft transported supplies during the Berlin Blockade. New aircraft types, such as the B-52, were produced during the Cold War.
The first jet airliner, the de Havilland Comet, was introduced in 1952, followed by the Soviet Tupolev Tu-104 in 1956. The Boeing 707, the first widely successful commercial jet, was in commercial service for more than 50 years, from 1958 to 2010. The Boeing 747 was the world's largest passenger aircraft from 1970 until it was surpassed by the Airbus A380 in 2005. The most successful aircraft is the Douglas DC-3 and its military version, the C-47, a medium sized twin engine passenger or transport aircraft that has been in service since 1936 and is still used throughout the world. Some of the hundreds of versions found other purposes, like the AC-47, a Vietnam War era gunship, which is still used in the Colombian Air Force.
Types
Airplane/aeroplane
An airplane (aeroplane or plane) is a powered fixed-wing aircraft propelled by thrust from a jet engine or propeller. Planes come in many sizes, shapes, and wing configurations. Uses include recreation, transportation of goods and people, military, and research.
Seaplane
A seaplane (hydroplane) is capable of taking off and landing (alighting) on water. Seaplanes that can also operate from dry land are a subclass called amphibian aircraft. Seaplanes and amphibians divide into two categories: float planes and flying boats.
A float plane is similar to a land-based airplane. The fuselage is not specialized; the wheels are replaced or enveloped by floats, allowing the craft to land on and remain afloat on water.
A flying boat is a seaplane with a watertight hull for the lower (ventral) areas of its fuselage. The fuselage lands and then rests directly on the water's surface, held afloat by the hull. It does not need additional floats for buoyancy, although small underwing floats or fuselage-mounted sponsons may be used to stabilize it. Large seaplanes are usually flying boats, embodying most classic amphibian aircraft designs.
Powered gliders
Many forms of glider may include a small power plant. These include:
Motor glider – a conventional glider or sailplane with an auxiliary power plant that may be used when in flight to increase performance.
Powered hang glider – a hang glider with a power plant added.
Powered parachute – a paraglider type of parachute with an integrated air frame, seat, undercarriage and power plant hung beneath.
Powered paraglider or paramotor – a paraglider with a power plant suspended behind the pilot.
Ground effect vehicle
A ground effect vehicle (GEV) flies close to the terrain, making use of the ground effect – the interaction between the wings and the surface. Some GEVs are able to fly higher out of ground effect (OGE) when required – these are classed as powered fixed-wing aircraft.
Glider
A glider is a heavier-than-air craft whose free flight does not require an engine. A sailplane is a fixed-wing glider designed for soaring – gaining height using updrafts of air and to fly for long periods.
Gliders are mainly used for recreation but have found use for purposes such as aerodynamics research, warfare and spacecraft recovery.
Motor gliders are equipped with a limited propulsion system for takeoff, or to extend flight duration.
As is the case with planes, gliders come in diverse forms with varied wings, aerodynamic efficiency, pilot location, and controls.
Large gliders are most commonly borne aloft by a tow-plane or by a winch. Military gliders have been used in combat to deliver troops and equipment, while specialized gliders have been used in atmospheric and aerodynamic research. Rocket-powered aircraft and spaceplanes have made unpowered landings similar to a glider.
Gliders and sailplanes that are used for the sport of gliding have high aerodynamic efficiency. The highest lift-to-drag ratio is 70:1, though 50:1 is common. After take-off, further altitude can be gained through the skillful exploitation of rising air. Flights of thousands of kilometers at average speeds over 200 km/h have been achieved.
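A lift-to-drag ratio translates directly into still-air glide range: each unit of height lost yields that many units of forward travel. The sketch below is a back-of-the-envelope illustration; the 1,000 m starting altitude is an assumed example value, not a figure from the article, and wind and thermals are ignored.

```python
# Still-air glide range implied by a given lift-to-drag (glide) ratio.
def glide_range_km(altitude_m, lift_to_drag):
    """Horizontal distance covered while descending from altitude_m, in km."""
    return altitude_m * lift_to_drag / 1000.0

for ratio in (20, 50, 70):
    print(f"L/D {ratio}:1 -> about {glide_range_km(1000, ratio):.0f} km from 1,000 m")
```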
One small-scale example of a glider is the paper airplane. An ordinary sheet of paper can be folded into an aerodynamic shape fairly easily; its low mass relative to its surface area reduces the required lift for flight, allowing it to glide some distance.
Gliders and sailplanes share many design elements and aerodynamic principles with powered aircraft. For example, the Horten H.IV was a tailless flying wing glider, and the delta-winged Space Shuttle orbiter glided during its descent phase. Many gliders adopt similar control surfaces and instruments as airplanes.
Types
The main application of modern glider aircraft is sport and recreation.
Sailplane
Gliders were developed in the 1920s for recreational purposes. As pilots began to understand how to use rising air, sailplane gliders were developed with a high lift-to-drag ratio. These allowed the craft to glide to the next source of "lift", increasing their range. This gave rise to the popular sport of gliding.
Early gliders were built mainly of wood and metal, later replaced by composite materials incorporating glass, carbon or aramid fibers. To minimize drag, these types have a streamlined fuselage and long narrow wings incorporating a high aspect ratio. Single-seat and two-seat gliders are available.
Initially, training was done by short "hops" in primary gliders, which have no cockpit and minimal instruments. Since shortly after World War II, training is done in two-seat dual control gliders, but high-performance two-seaters can make long flights. Originally skids were used for landing, later replaced by wheels, often retractable. Gliders known as motor gliders are designed for unpowered flight, but can deploy piston, rotary, jet or electric engines. Gliders are classified by the FAI for competitions into glider competition classes mainly on the basis of wingspan and flaps.
A class of ultralight sailplanes, including some known as microlift gliders and some known as airchairs, has been defined by the FAI based on weight. They are light enough to be transported easily, and can be flown without licensing in some countries. Ultralight gliders have performance similar to hang gliders, but offer some crash safety as the pilot can strap into an upright seat within a deform-able structure. Landing is usually on one or two wheels which distinguishes these craft from hang gliders. Most are built by individual designers and hobbyists.
Military gliders
Military gliders were used during World War II for carrying troops (glider infantry) and heavy equipment to combat zones. The gliders were towed into the air and most of the way to their target by transport planes, e.g. C-47 Dakota, or by one-time bombers that had been relegated to secondary activities, e.g. Short Stirling. The advantages over paratroopers were that heavy equipment could be landed and that troops were quickly assembled rather than dispersed over a parachute drop zone. The gliders were treated as disposable, constructed from inexpensive materials such as wood, though a few were re-used. By the time of the Korean War, transport aircraft had become larger and more efficient, so that even light tanks could be dropped by parachute, rendering gliders obsolete.
Research gliders
Even after the development of powered aircraft, gliders continued to be used for aviation research. The NASA Paresev Rogallo flexible wing was developed to investigate alternative methods of recovering spacecraft. Although this application was abandoned, publicity inspired hobbyists to adapt the flexible-wing airfoil for hang gliders.
Initial research into many types of fixed-wing craft, including flying wings and lifting bodies was also carried out using unpowered prototypes.
Hang glider
A hang glider is a glider aircraft in which the pilot is suspended in a harness suspended from the air frame, and exercises control by shifting body weight in opposition to a control frame. Hang gliders are typically made of an aluminum alloy or composite-framed fabric wing. Pilots can soar for hours, gain thousands of meters of altitude in thermal updrafts, perform aerobatics, and glide cross-country for hundreds of kilometers.
Paraglider
A paraglider is a lightweight, free-flying, foot-launched glider with no rigid body. The pilot is suspended in a harness below a hollow fabric wing whose shape is formed by its suspension lines. Air entering vents in the front of the wing and the aerodynamic forces of the air flowing over the outside power the craft. Paragliding is most often a recreational activity.
Unmanned gliders
A paper plane is a toy aircraft (usually a glider) made out of paper or paperboard.
Model glider aircraft are models of aircraft using lightweight materials such as polystyrene and balsa wood. Designs range from simple glider aircraft to accurate scale models, some of which can be very large.
Glide bombs are bombs with aerodynamic surfaces to allow a gliding flight path rather than a ballistic one. This enables stand-off aircraft to attack a target from a distance.
Kite
A kite is a tethered aircraft held aloft by wind that blows over its wing(s). High pressure below the wing deflects the airflow downwards. This deflection generates horizontal drag in the direction of the wind. The resultant force vector from the lift and drag force components is opposed by the tension of the tether.
Kites are mostly flown for recreational purposes, but have many other uses. Early pioneers such as the Wright Brothers and J.W. Dunne sometimes flew an aircraft as a kite in order to confirm its flight characteristics, before adding an engine and flight controls.
Applications
Military
Kites have been used for signaling, for delivery of munitions, and for observation, by lifting an observer above the field of battle, and by using kite aerial photography.
Science and meteorology
Kites have been used for scientific purposes, such as Benjamin Franklin's famous experiment proving that lightning is electricity. Kites were the precursors to the traditional aircraft, and were instrumental in the development of early flying craft. Alexander Graham Bell experimented with large man-lifting kites, as did the Wright brothers and Lawrence Hargrave. Kites had a historical role in lifting scientific instruments to measure atmospheric conditions for weather forecasting.
Radio aerials and light beacons
Kites can be used to carry radio antennas. This method was used for the reception station of the first transatlantic transmission by Marconi. Captive balloons may be more convenient for such experiments, because kite-carried antennas require strong wind, which may be not always available with heavy equipment and a ground conductor.
Kites can be used to carry light sources such as light sticks or battery-powered lights.
Kite traction
Kites can be used to pull people and vehicles downwind. Efficient foil-type kites such as power kites can also be used to sail upwind under the same principles as used by other sailing craft, provided that lateral forces on the ground or in the water are redirected as with the keels, center boards, wheels and ice blades of traditional sailing craft. In the last two decades, kite sailing sports have become popular, such as kite buggying, kite landboarding, kite boating and kite surfing. Snow kiting is also popular.
Kite sailing opens several possibilities not available in traditional sailing:
Wind speeds are greater at higher altitudes
Kites may be maneuvered dynamically, which dramatically increases the available force
Mechanical structures are not needed to withstand bending forces; vehicles/hulls can be light or eliminated.
Power generation
Research and development projects investigate kites for harnessing high altitude wind currents for electricity generation.
Cultural uses
Kite festivals are a popular form of entertainment throughout the world. They include local events, traditional festivals and major international festivals.
Designs
Bermuda kite
Bowed kite, e.g. Rokkaku
Cellular or box kite
Chapi-chapi
Delta kite
Foil, parafoil or bow kite
Malay kite see also wau bulan
Tetrahedral kite
Types
Expanded polystyrene kite
Fighter kite
Indoor kite
Inflatable single-line kite
Kytoon
Man-lifting kite
Rogallo parawing kite
Stunt (sport) kite
Water kite
Characteristics
Air frame
The structural element of a fixed-wing aircraft is the air frame. It varies according to the aircraft's type, purpose, and technology. Early airframes were made of wood with fabric wing surfaces. When engines became available for powered flight, their mounts were made of metal. As speeds increased, metal was used more widely, and by the end of World War II all-metal (and glass) aircraft were common. In modern times, composite materials have become more common.
Typical structural elements include:
One or more mostly horizontal wings, often with an airfoil cross-section. The wing deflects air downward as the aircraft moves forward, generating lifting force to support it in flight. The wing also provides lateral stability to keep the aircraft level in steady flight. Other roles are to hold the fuel and mount the engines.
A fuselage, typically a long, thin body, usually with tapered or rounded ends to make its shape aerodynamically slippery. The fuselage joins the other parts of the air frame and contains the payload, and flight systems.
A vertical stabilizer or fin is a rigid surface mounted at the rear of the plane and typically protruding above it. The fin stabilizes the plane's yaw (turn left or right) and mounts the rudder which controls its rotation along that axis.
A horizontal stabilizer, usually mounted at the tail near the vertical stabilizer. The horizontal stabilizer is used to stabilize the plane's pitch (tilt up or down) and mounts the elevators that provide pitch control.
Landing gear, a set of wheels, skids, or floats that support the plane while it is not in flight. On seaplanes, the bottom of the fuselage or floats (pontoons) support it while on the water. On some planes, the landing gear retracts during the flight to reduce drag.
Wings
The wings of a fixed-wing aircraft are static planes extending to either side of the aircraft. When the aircraft travels forwards, air flows over the wings that are shaped to create lift.
Structure
Kites and some lightweight gliders and airplanes have flexible wing surfaces that are stretched across a frame and made rigid by the lift forces exerted by the airflow over them. Larger aircraft have rigid wing surfaces.
Whether flexible or rigid, most wings have a strong frame to give them shape and to transfer lift from the wing surface to the rest of the aircraft. The main structural elements are one or more spars running from root to tip, and ribs running from the leading (front) to the trailing (rear) edge.
Early airplane engines had little power and light weight was critical. Also, early airfoil sections were thin, and could not support a strong frame. Until the 1930s, most wings were so fragile that external bracing struts and wires were added. As engine power increased, wings could be made heavy and strong enough that bracing was unnecessary. Such an unbraced wing is called a cantilever wing.
Configuration
The number and shape of wings vary widely. Some designs blend the wing with the fuselage, while left and right wings separated by the fuselage are more common.
Occasionally more wings have been used, such as the three-winged triplane from World War I. Four-winged quadruplanes and other multiplane designs have had little success.
Most planes are monoplanes, with one or two parallel wings. Biplanes and triplanes stack one wing above the other. Tandem wings place one wing behind the other, possibly joined at the tips. When the available engine power increased during the 1920s and 1930s and bracing was no longer needed, the unbraced or cantilever monoplane became the most common form.
The planform is the shape when seen from above/below. To be aerodynamically efficient, wings are straight with a long span but a short chord (high aspect ratio). To be structurally efficient, and hence lightweight, the span should be as short as possible while still offering enough area to provide lift.
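Aspect ratio is conventionally computed as span squared divided by wing area. The snippet below is an illustrative comparison; the span and area figures are rough, assumed values for a sailplane-like and a fighter-like wing, not data from this article.

```python
# Aspect ratio = span^2 / wing area; higher values mean long, narrow wings.
def aspect_ratio(span_m, area_m2):
    return span_m ** 2 / area_m2

print("sailplane-like wing:", round(aspect_ratio(18, 11), 1))   # ~29.5, long and slender
print("fighter-like wing:  ", round(aspect_ratio(10, 38), 1))   # ~2.6, short and broad
```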
To travel at transonic speeds, variable geometry wings change orientation, angling backward to reduce drag from supersonic shock waves. The variable-sweep wing transforms between an efficient straight configuration for takeoff and landing, to a low-drag swept configuration for high-speed flight. Other forms of variable planform have been flown, but none have gone beyond the research stage. The swept wing is a straight wing swept backward or forwards.
The delta wing is a triangular shape that serves various purposes. As a flexible Rogallo wing, it allows a stable shape under aerodynamic forces, and is often used for kites and other ultralight craft. It is supersonic capable, combining high strength with low drag.
Wings are typically hollow, also serving as fuel tanks. They are equipped with hinged surfaces: flaps, which increase or decrease lift and drag for take-off and landing, and ailerons, which act in opposition to roll the aircraft and change direction.
Fuselage
The fuselage is typically long and thin, usually with tapered or rounded ends to make its shape aerodynamically smooth. Most fixed-wing aircraft have a single fuselage. Others may have multiple fuselages, or the fuselage may be fitted with booms on either side of the tail to allow the extreme rear of the fuselage to be utilized.
The fuselage typically carries the flight crew, passengers, cargo, and sometimes fuel and engine(s). Gliders typically omit fuel and engines, although some variations such as motor gliders and rocket gliders have them for temporary or optional use.
Pilots of manned commercial fixed-wing aircraft control them from inside a cockpit within the fuselage, typically located at the front/top, equipped with controls, windows, and instruments, and separated from passengers by a secure door. In small aircraft, the passengers typically sit behind the pilot(s) in the cabin; occasionally, a passenger may sit beside or in front of the pilot. Larger passenger aircraft have a separate passenger cabin or occasionally cabins that are physically separated from the cockpit.
Aircraft often have two or more pilots, with one in overall command (the "pilot") and one or more "co-pilots". On larger aircraft a navigator is typically also seated in the cockpit as well. Some military or specialized aircraft may have other flight crew members in the cockpit as well.
Wings vs. bodies
Flying wing
A flying wing is a tailless aircraft with no distinct fuselage; the crew, payload, and equipment are housed inside the main wing structure.
The flying wing configuration was studied extensively in the 1930s and 1940s, notably by Jack Northrop and Cheston L. Eshelman in the United States, and Alexander Lippisch and the Horten brothers in Germany. After the war, numerous experimental designs were based on the flying wing concept. General interest continued into the 1950s, but designs did not offer a great advantage in range and presented technical problems. The flying wing is most practical for designs in the slow-to-medium speed range, and drew continual interest as a tactical airlifter design.
Interest in flying wings reemerged in the 1980s due to their potentially low radar cross-sections. Stealth technology relies on shapes that reflect radar waves only in certain directions, thus making it harder to detect. This approach eventually led to the Northrop B-2 Spirit stealth bomber (pictured). The flying wing's aerodynamics are not the primary concern. Computer-controlled fly-by-wire systems compensated for many of the aerodynamic drawbacks, enabling an efficient and stable long-range aircraft.
Blended wing body
Blended wing body aircraft have a flattened airfoil-shaped body, which produces most of the lift to keep itself aloft, and distinct and separate wing structures, though the wings are blended with the body.
Blended wing bodied aircraft incorporate design features from both fuselage and flying wing designs. The purported advantages of the blended wing body approach are efficient, high-lift wings and a wide, airfoil-shaped body. This enables the entire craft to contribute to lift generation with potentially increased fuel economy.
Lifting body
A lifting body is a configuration in which the body produces lift. In contrast to a flying wing, which is a wing with minimal or no conventional fuselage, a lifting body can be thought of as a fuselage with little or no conventional wing. Whereas a flying wing seeks to maximize cruise efficiency at subsonic speeds by eliminating non-lifting surfaces, lifting bodies generally minimize the drag and structure of a wing for subsonic, supersonic, and hypersonic flight, or, spacecraft re-entry. All of these flight regimes pose challenges for flight stability.
Lifting bodies were a major area of research in the 1960s and 1970s as a means to build small and lightweight manned spacecraft. The US built lifting body rocket planes to test the concept, as well as several rocket-launched re-entry vehicles. Interest waned as the US Air Force lost interest in the manned mission, and major development ended during the Space Shuttle design process when it became clear that highly shaped fuselages made it difficult to fit fuel tanks.
Empennage and foreplane
The classic airfoil section wing is unstable in flight. Flexible-wing planes often rely on an anchor line or the weight of a pilot hanging beneath to maintain the correct attitude. Some free-flying types use an adapted airfoil that is stable, or other mechanisms including electronic artificial stability.
In order to achieve trim, stability, and control, most fixed-wing types have an empennage comprising a fin and rudder that act horizontally, and a tailplane and elevator that act vertically. This is so common that it is known as the conventional layout. Sometimes two or more fins are spaced out along the tailplane.
Some types have a horizontal "canard" foreplane ahead of the main wing, instead of behind it. This foreplane may contribute to the trim, stability or control of the aircraft, or to several of these.
Aircraft controls
Kite control
Kites are controlled by one or more tethers.
Free-flying aircraft controls
Gliders and airplanes have sophisticated control systems, especially if they are piloted.
The controls allow the pilot to direct the aircraft in the air and on the ground. Typically these are:
The yoke or joystick controls rotation of the plane about the pitch and roll axes. A yoke resembles a steering wheel. The pilot can pitch the plane down by pushing on the yoke or joystick, and pitch the plane up by pulling on it. Rolling the plane is accomplished by turning the yoke in the direction of the desired roll, or by tilting the joystick in that direction.
Rudder pedals control rotation of the plane about the yaw axis. Two pedals pivot so that when one is pressed forward the other moves backward, and vice versa. The pilot presses on the right rudder pedal to make the plane yaw to the right, and pushes on the left pedal to make it yaw to the left. The rudder is used mainly to balance the plane in turns, or to compensate for winds or other effects that push the plane about the yaw axis.
On powered types, an engine stop control ("fuel cutoff", for example) and, usually, a Throttle or thrust lever and other controls, such as a fuel-mixture control (to compensate for air density changes with altitude change).
Other common controls include:
Flap levers, which are used to control the deflection position of flaps on the wings.
Spoiler levers, which are used to control the position of spoilers on the wings, and to arm their automatic deployment in planes designed to deploy them upon landing. The spoilers reduce lift for landing.
Trim controls, which usually take the form of knobs or wheels and are used to adjust pitch, roll, or yaw trim. These are often connected to small airfoils on the trailing edge of the control surfaces and are called "trim tabs". Trim is used to reduce the amount of pressure on the control forces needed to maintain a steady course.
On wheeled types, brakes are used to slow and stop the plane on the ground, and sometimes for turns on the ground.
A craft may have two pilot seats with dual controls, allowing two to take turns.
The control system may allow full or partial automation, such as an autopilot, a wing leveler, or a flight management system. An unmanned aircraft has no pilot and is controlled remotely or via gyroscopes, computers/sensors or other forms of autonomous control.
Cockpit instrumentation
On manned fixed-wing aircraft, instruments provide information to the pilots, including flight, engines, navigation, communications, and other aircraft systems that may be installed.
The six basic instruments, sometimes referred to as the six pack, are:
The airspeed indicator (ASI) shows the speed at which the plane is moving through the air.
The attitude indicator (AI), sometimes called the artificial horizon, indicates the exact orientation of the aircraft about its pitch and roll axes.
The altimeter indicates the altitude or height of the plane above mean sea level (AMSL).
The vertical speed indicator (VSI), or variometer, shows the rate at which the plane is climbing or descending.
The heading indicator (HI), sometimes called the directional gyro (DG), shows the magnetic compass orientation of the fuselage. The direction is affected by wind conditions and magnetic declination.
The turn coordinator (TC), or turn and bank indicator, helps the pilot to control the plane in a coordinated attitude while turning.
Other cockpit instruments include:
A two-way radio, to enable communications with other planes and with air traffic control.
A horizontal situation indicator (HSI) indicates the position and movement of the plane as seen from above with respect to the ground, including course/heading and other information.
Instruments showing the status of the plane's engines (operating speed, thrust, temperature, and other variables).
Combined display systems such as primary flight displays or navigation aids.
Information displays such as onboard weather radar displays.
A radio direction finder (RDF), to indicate the direction to one or more radio beacons, which can be used to determine the plane's position.
A satellite navigation (satnav) system, to provide an accurate position.
Some or all of these instruments may appear on a computer display and be operated by touch, in the manner of a smartphone.
See also
Aircraft flight mechanics
Airliner
Aviation
Aviation and the environment
Aviation history
Fuel efficiency
List of altitude records reached by different aircraft types
Maneuvering speed
Rotorcraft
References
Notes
In 1903, when the Wright brothers used the word, "aeroplane" (a British English term that can also mean airplane in American English) meant wing, not the whole aircraft. See text of their patent. Patent 821,393 – Wright brothers' patent for "Flying Machine"
Citations
Bibliography
Blatner, David. The Flying Book: Everything You've Ever Wondered About Flying on Airplanes.
External links
The airplane centre
Airliners.net
Aerospaceweb.org
How Airplanes Work – Howstuffworks.com
Smithsonian National Air and Space Museum's How Things Fly website
"Hops and Flights – a Roll Call of Early Powered Take-offs" a 1959 Flight article
Aircraft configurations
Articles containing video clips | Fixed-wing aircraft | [
"Engineering"
] | 7,028 | [
"Aircraft configurations",
"Aerospace engineering"
] |
46,784 | https://en.wikipedia.org/wiki/Daniel%20Bernoulli | Daniel Bernoulli (1700 – 27 March 1782) was a Swiss mathematician and physicist and was one of the many prominent mathematicians in the Bernoulli family from Basel. He is particularly remembered for his applications of mathematics to mechanics, especially fluid mechanics, and for his pioneering work in probability and statistics. His name is commemorated in Bernoulli's principle, a particular example of the conservation of energy, which describes the mathematics of the mechanism underlying the operation of two important technologies of the 20th century: the carburetor and the aeroplane wing.
Early life
Daniel Bernoulli was born in Groningen, in the Netherlands, into a family of distinguished mathematicians. The Bernoulli family came originally from Antwerp, at that time in the Spanish Netherlands, but emigrated to escape the Spanish persecution of the Protestants. After a brief period in Frankfurt the family moved to Basel, in Switzerland.
Daniel was the son of Johann Bernoulli (one of the early developers of calculus) and a nephew of Jacob Bernoulli (an early researcher in probability theory and the discoverer of the mathematical constant e). He had two brothers, Niklaus and Johann II. Daniel Bernoulli was described by W. W. Rouse Ball as "by far the ablest of the younger Bernoullis".
He is said to have had a bad relationship with his father. Both of them entered and tied for first place in a scientific contest at the University of Paris. Johann banned Daniel from his house, allegedly being unable to bear the "shame" of Daniel being considered his equal. Johann allegedly plagiarized key ideas from Daniel's book Hydrodynamica in his book Hydraulica and backdated them to before Hydrodynamica. Daniel's attempts at reconciliation with his father were unsuccessful.
When he was in school, Johann encouraged Daniel to study business citing poor financial compensation for mathematicians. Daniel initially refused but later relented and studied both business and medicine at his father's behest under the condition that his father would teach him mathematics privately. Daniel studied medicine at Basel, Heidelberg, and Strasbourg, and earned a PhD in anatomy and botany in 1721.
He was a contemporary and close friend of Leonhard Euler. He went to St. Petersburg in 1724 as professor of mathematics, but was very unhappy there. A temporary illness together with the censorship by the Russian Orthodox Church and disagreements over his salary gave him an excuse for leaving St. Petersburg in 1733. He returned to the University of Basel, where he successively held the chairs of medicine, metaphysics, and natural philosophy until his death.
In May 1750 he was elected a Fellow of the Royal Society.
Mathematical work
His earliest mathematical work was the Exercitationes (Mathematical Exercises), published in 1724 with the help of Goldbach. Two years later he pointed out for the first time the frequent desirability of resolving a compound motion into motions of translation and rotation. His chief work is Hydrodynamica, published in 1738. It resembles Joseph Louis Lagrange's Mécanique Analytique in being arranged so that all the results are consequences of a single principle, namely, conservation of energy. This was followed by a memoir on the theory of the tides, to which, conjointly with the memoirs by Euler and Colin Maclaurin, a prize was awarded by the French Academy: these three memoirs contain all that was done on this subject between the publication of Isaac Newton's Philosophiae Naturalis Principia Mathematica and the investigations of Pierre-Simon Laplace. Bernoulli also wrote a large number of papers on various mechanical questions, especially on problems connected with vibrating strings and the solutions given by Brook Taylor and by Jean le Rond d'Alembert.
Economics and statistics
In his 1738 work Specimen theoriae novae de mensura sortis (Exposition of a New Theory on the Measurement of Risk), Bernoulli offered a solution to the St. Petersburg paradox that became the basis of the economic theory of risk aversion, risk premium, and utility. Bernoulli observed that when making decisions under uncertainty, people do not always try to maximize their possible monetary gain, but rather try to maximize "utility", an economic term encompassing their personal satisfaction and benefit. He realized that while utility rises as money is gained, the additional utility provided by each extra unit of money diminishes as wealth increases. For example, to a person whose income is $10,000 per year, an additional $100 in income will provide more utility than it would to a person whose income is $50,000 per year.
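Bernoulli's own resolution used a logarithmic utility of wealth; a compact modern restatement (the notation and the doubling-game payoffs below are the standard textbook form, not taken from the article above) runs as follows. With

$$u(w) = \ln w, \qquad u'(w) = \frac{1}{w},$$

each additional unit of wealth adds less utility than the one before. The St. Petersburg game pays $2^k$ with probability $2^{-k}$, so its expected payoff diverges, while the expected utility of the prize,

$$\sum_{k=1}^{\infty} 2^{-k}\,\ln\!\left(2^{k}\right) \;=\; \ln 2 \sum_{k=1}^{\infty} \frac{k}{2^{k}} \;=\; 2\ln 2,$$

is finite, which is why a player who values outcomes by utility rather than by expected payoff will offer only a modest stake.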
One of the earliest attempts to analyze a statistical problem involving censored data was Bernoulli's 1766 analysis of smallpox morbidity and mortality data to demonstrate the efficacy of inoculation.
Physics
In Hydrodynamica (1738) he laid the basis for the kinetic theory of gases, and applied the idea to explain Boyle's law.
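In modern notation (which is not Bernoulli's own), the heart of that argument is that the pressure exerted by $N$ molecules of mass $m$ moving randomly in a container of volume $V$ is

$$p = \frac{1}{3}\,\frac{N}{V}\,m\,\overline{v^{2}},$$

so at a fixed temperature, where the mean-square speed $\overline{v^{2}}$ is constant, the product $pV$ is constant as well, which is Boyle's law.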
He worked with Euler on elasticity and the development of the Euler–Bernoulli beam equation. Bernoulli's principle is of critical use in aerodynamics.
According to Léon Brillouin, the principle of superposition was first stated by Daniel Bernoulli in 1753: "The general motion of a vibrating system is given by a superposition of its proper vibrations."
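For an ideal string of length $L$ fixed at both ends, the statement can be written in modern form (the notation is not Bernoulli's) as a sum over the string's normal modes,

$$y(x,t) = \sum_{n=1}^{\infty} A_{n}\,\sin\!\left(\frac{n\pi x}{L}\right)\cos\!\left(\omega_{n} t + \varphi_{n}\right),$$

where each term is one of the proper vibrations and the coefficients $A_{n}$ and $\varphi_{n}$ are fixed by the initial shape and velocity of the string.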
Works
Legacy
In 2002, Bernoulli was inducted into the International Air & Space Hall of Fame at the San Diego Air & Space Museum.
See also
Hydrodynamica
Mathematical modelling of infectious diseases
References
Footnotes
Works cited
(Original entry based on the public domain Rouse History of Mathematics)
External links
Rothbard, Murray. Daniel Bernoulli and the Founding of Mathematical Economics , Mises Institute (excerpted from An Austrian Perspective on the History of Economic Thought'')
1700 births
1782 deaths
Daniel
Heidelberg University alumni
18th-century Swiss physicists
18th-century writers in Latin
18th-century male writers
18th-century Swiss mathematicians
Swiss Calvinist and Reformed Christians
Mathematical analysts
Fluid dynamicists
Probability theorists
Fellows of the Royal Society
Full members of the Saint Petersburg Academy of Sciences
Swiss expatriates in the Dutch Republic
Scientists from Groningen (city)
Swiss expatriates in Germany
People associated with the University of Basel | Daniel Bernoulli | [
"Chemistry",
"Mathematics"
] | 1,284 | [
"Mathematical analysis",
"Fluid dynamicists",
"Mathematical analysts",
"Fluid dynamics"
] |
46,790 | https://en.wikipedia.org/wiki/Desert%20varnish | Desert varnish or rock varnish is an orange-yellow to black coating found on exposed rock surfaces in arid environments. Desert varnish is approximately one micrometer thick and exhibits nanometer-scale layering. Rock rust and desert patina are other, less commonly used terms for the coating.
Formation
Desert varnish forms only on physically stable rock surfaces that are no longer subject to frequent precipitation, fracturing or wind abrasion. The varnish is primarily composed of particles of clay along with oxides of iron and manganese. There is also a host of trace elements and almost always some organic matter. The color of the varnish varies from shades of brown to black.
It has been suggested that desert varnish should be investigated as a potential candidate for a "shadow biosphere". However, a 2008 microscopy study posited that desert varnish has already been reproduced in the laboratory with chemistry not involving life, and that its main component is actually silica and not clay as previously thought. The study notes that desert varnish is an excellent fossilizer for microbes and an indicator of water. Desert varnish appears to have been observed by rovers on Mars, and if examined it may contain fossilized life from Mars's wet period.
Composition
Originally scientists thought that the varnish was made from substances drawn out of the rocks it coats. Microscopic and microchemical observations, however, show that a major part of varnish is clay, which could only arrive by wind. Clay, then, acts as a substrate to catch additional substances that chemically react together when the rock reaches high temperatures in the desert sun. Wetting by dew is also important in the process.
An important characteristic of black desert varnish is that it has an unusually high concentration of manganese. Manganese is relatively rare in the Earth's crust, making up only 0.12% of its weight. In black desert varnish, however, manganese is 50 to 60 times more abundant. One proposal for a mechanism of desert varnish formation is that it is caused by manganese-oxidizing microbes (mixotrophs) which are common in environments poor in organic nutrients. A micro-environment pH above 7.5 is inhospitable for manganese-concentrating microbes. In such conditions, orange varnishes develop, poor in manganese (Mn) but rich in iron (Fe). An alternative hypothesis for Mn/Fe fluctuation has been proposed that considers Mn-rich and Fe-rich varnishes to be related to humid and arid climates, respectively.
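A quick back-of-the-envelope check of what that enrichment implies, using only the figures quoted above (a sketch, not a measured varnish composition):

```python
# Rough manganese content implied by the enrichment factors quoted above.
# Assumes the stated crustal abundance (~0.12 wt% Mn) and a 50-60x enrichment.
crustal_mn_wt_pct = 0.12

for enrichment in (50, 60):
    varnish_mn_wt_pct = crustal_mn_wt_pct * enrichment
    print(f"{enrichment}x enrichment -> ~{varnish_mn_wt_pct:.1f} wt% Mn in black varnish")
```

This gives roughly 6 to 7 wt% manganese, i.e. a major constituent of the coating rather than a trace component.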
Even though it contains high concentrations of iron and manganese, there are no significant modern uses of desert varnish. However, some Native American peoples created petroglyphs by scraping or chipping away the dark varnish to expose the lighter rock beneath.
Desert varnish often obscures the identity of the underlying rock, and different rocks have varying abilities to accept and retain varnish. Limestones, for example, typically do not have varnish because they are too water-soluble and therefore do not provide a stable surface for varnish to form. Shiny, dense and black varnishes form on basalt, fine quartzites and metamorphosed shales due to these rocks' relatively high resistance to weathering.
Its presence has been cited as a key factor in the preservation of a large number of petroglyphs dating back to the Iron Age and earlier in areas such as the Wadi Saham in the United Arab Emirates.
See also
Gallery
References
External links
Desert Varnish
Rock Varnish (desert varnish): An Internet Primer for Rock Art Research by Ronald I. Dorn, Professor of Geography Arizona State University
DESERT VARNISH (rock varnish)
New Way Suggested to Search for Life on Mars — Space.com
Researcher: Mars rock varnish hints of life July 2, 2001 By Richard Stenger CNN
Rock Varnish As A Habitat For Extant Life On Mars
Deposition (geology)
Sedimentology
Deserts
Rocks
Hypothetical life forms | Desert varnish | [
"Physics",
"Biology"
] | 812 | [
"Hypothetical life forms",
"Ecosystems",
"Physical objects",
"Deserts",
"Rocks",
"Matter",
"Biological hypotheses"
] |
46,793 | https://en.wikipedia.org/wiki/Death%20Valley%20National%20Park | Death Valley National Park is a national park of the United States that straddles the California–Nevada border, east of the Sierra Nevada. The park boundaries include Death Valley, the northern section of Panamint Valley, the southern section of Eureka Valley and most of Saline Valley.
The park occupies an interface zone between the arid Great Basin and Mojave deserts, protecting the northwest corner of the Mojave Desert and its diverse environment of salt-flats, sand dunes, badlands, valleys, canyons and mountains.
Death Valley is the largest national park in the contiguous United States, as well as the hottest, driest and lowest of all the national parks in the United States. It contains Badwater Basin, the second-lowest point in the Western Hemisphere and the lowest in North America at 282 feet (86 m) below sea level. More than 93% of the park is a designated wilderness area.
The park is home to many species of plants and animals which have adapted to the harsh desert environment including creosote bush, Joshua tree, bighorn sheep, coyote, and the endangered Death Valley pupfish, a survivor from much wetter times. UNESCO included Death Valley as the principal feature of its Mojave and Colorado Deserts Biosphere Reserve in 1984.
A series of Native American groups inhabited the area from as early as 7000 BC, most recently the Timbisha around 1000 AD who migrated between winter camps in the valleys and summer grounds in the mountains. A group of European-Americans, lost in the valley in 1849 while looking for a shortcut to the gold fields of California, gave this valley its grim name, even though only one of their group died there.
Several short-lived boom towns sprang up during the late 19th and early 20th centuries to mine gold and silver. The only long-term profitable ore to be mined was borax, which was transported out of the valley with twenty-mule teams. The valley later became the subject of books, radio programs, television series, and movies. Tourism expanded in the 1920s when resorts were built around Stovepipe Wells and Furnace Creek. Death Valley National Monument was declared in 1933 and the park was substantially expanded and became a national park in 1994.
The natural environment of the area has been shaped largely by its geology. The valley is actually a graben with the oldest rocks being extensively metamorphosed and at least 1.7 billion years old. Ancient, warm, shallow seas deposited marine sediments until rifting opened the Pacific Ocean. Additional sedimentation occurred until a subduction zone formed off the coast. The subduction uplifted the region out of the sea and created a line of volcanoes. Later the crust started to pull apart, creating the current Basin and Range landform. Valleys filled with sediment and, during the wet times of glacial periods, with lakes, such as Lake Manly.
Death Valley is the fifth-largest American national park and the largest in the contiguous United States. It is also larger than the states of Rhode Island and Delaware combined, and nearly as large as Puerto Rico. In 2013, Death Valley National Park was designated as a dark sky park by the International Dark-Sky Association.
Geographic setting
There are two major valleys in the park, Death Valley and Panamint Valley. Both of these valleys were formed within the last few million years and both are bounded by north–south-trending mountain ranges. These and adjacent valleys follow the general trend of Basin and Range topography with one modification: there are parallel strike-slip faults that perpendicularly bound the central extent of Death Valley. The result of this shearing action is additional extension in the central part of Death Valley which causes a slight widening and more subsidence there.
Uplift of surrounding mountain ranges and subsidence of the valley floor are both occurring. The uplift on the Black Mountains is so fast that the alluvial fans (fan-shaped deposits at the mouth of canyons) there are small and steep compared to the huge alluvial fans coming off the Panamint Range. Fast uplift of a mountain range in an arid environment often does not allow its canyons enough time to cut a classic V-shape all the way down to the stream bed. Instead, a V-shape ends at a slot canyon halfway down, forming a 'wine glass canyon.' Sediment is deposited on a small and steep alluvial fan.
At below sea level at its lowest point, Badwater Basin on Death Valley's floor is the second-lowest depression in the Western Hemisphere (behind Laguna del Carbón in Argentina), while Mount Whitney, only to the west, rises to and is the tallest mountain in the contiguous United States. This topographic relief is the greatest elevation gradient in the contiguous United States and is the terminus point of the Great Basin's southwestern drainage. Although the extreme lack of water in the Great Basin makes this distinction of little current practical use, it does mean that in wetter times the lake that once filled Death Valley (Lake Manly) was the last stop for water flowing in the region, meaning the water there was saturated in dissolved materials. Thus, the salt pans in Death Valley are among the largest in the world and are rich in minerals, such as borax and various salts and hydrates. The largest salt pan in the park extends from the Ashford Mill Site to the Salt Creek Hills, covering some of the valley floor. The best known playa in the park is the Racetrack, known for its moving rocks.
Climate
According to the Köppen climate classification system, Death Valley National Park has a hot desert climate (BWh). The plant hardiness zone at Badwater Basin is 9b with an average annual extreme minimum temperature of .
Death Valley is the hottest and driest place in North America due to its lack of surface water and low relief. It is so frequently the hottest spot in the United States that many tabulations of the highest daily temperatures in the country omit Death Valley as a matter of course.
On the afternoon of July 10, 1913, the United States Weather Bureau recorded a high temperature of 134 °F (56.7 °C) at Greenland Ranch (now Furnace Creek) in Death Valley. This temperature stands as the highest ambient air temperature ever recorded at the surface of the Earth. (A report of a temperature of recorded in Libya in 1922 was later determined to be inaccurate.) Daily summer temperatures of or greater are common, as are below-freezing nightly temperatures in the winter. July is the hottest month, with an average high of and an average low of . December is the coldest month, with an average high of and an average low of . The record low is . There are an average of 197.3 days annually with highs of or higher and 146.9 days annually with highs of or higher. Freezing temperatures of or lower occur on an average of 8.6 days annually.
Several of the larger Death Valley springs derive their water from a regional aquifer, which extends as far east as southern Nevada and Utah. Much of the water in this aquifer has been there for many thousands of years, since the Pleistocene ice ages, when the climate was cooler and wetter. Today's drier climate does not provide enough precipitation to recharge the aquifer at the rate at which water is being withdrawn.
The highest range within the park is the Panamint Range, with Telescope Peak being its highest point at . The Death Valley region is a transitional zone in the northernmost part of the Mojave Desert and lies five mountain ranges removed from the Pacific Ocean. Three of these are significant barriers: the Sierra Nevada, the Argus Range, and the Panamint Range. Air masses tend to lose moisture as they are forced up over mountain ranges, in what climatologists call a rain shadow effect.
The exaggerated rain shadow effect for the Death Valley area makes it North America's driest spot, receiving about of rainfall annually at Badwater, and some years fail to register any measurable rainfall. Annual average precipitation varies from overall below sea level to over in the higher mountains that surround the valley. When rain does arrive it often does so in intense storms that cause flash floods which remodel the landscape and sometimes create very shallow ephemeral lakes.
The hot, dry climate makes it difficult for soil to form. Mass wasting, the down-slope movement of loose rock, is therefore the dominant erosive force in mountainous areas, resulting in "skeletonized" ranges (mountains with very little soil on them). Sand dunes in the park, while famous, are not nearly as widespread as their fame or the dryness of the area may suggest. The Mesquite Flat dune field is the most easily accessible from the paved road just east of Stovepipe Wells in the north-central part of the valley and is primarily made of quartz sand. Another dune field is just to the north but is instead mostly composed of travertine sand. The highest dunes in the park, and some of the highest in North America, are located in the Eureka Valley about to the north of Stovepipe Wells, while the Panamint Valley dunes and the Saline Valley dunes are located west and northwest of the town, respectively. The Ibex dune field is near the seldom-visited Ibex Hill in the southernmost part of the park, just south of the Saratoga Springs marshland. All the latter four dune fields are accessible only via unpaved roads. Prevailing winds in the winter come from the north, and prevailing winds in the summer come from the south. Thus, the overall position of the dune fields remains more or less fixed.
There are rare exceptions to the dry nature of the area. In 2005, an unusually wet winter created a 'lake' in the Badwater Basin and led to the greatest wildflower season in the park's history. In October 2015, a "1000 year flood event" with over three inches of rain caused major damage in Death Valley National Park. A similar widespread storm in August 2022 damaged pavement and deposited debris on nearly every road, trapping 1,000 residents and visitors overnight.
Human history
Early inhabitants and transient populations
Four Native American cultures are known to have lived in the area during the last 10,000 years. The first known group, the Nevares Spring People, were hunters and gatherers who arrived in the area perhaps 9,000 years ago (7000 BC) when there were still small lakes in Death Valley and neighboring Panamint Valley. A much milder climate persisted at that time, and large game animals were still plentiful. By 5,000 years ago (3000 BC) the Mesquite Flat People displaced the Nevares Spring People. Around 2,000 years ago the Saratoga Spring People moved into the area, which by then was probably already a hot, dry desert. This culture was more advanced at hunting and gathering and was skillful at handcrafts. They also left mysterious stone patterns in the valley.
One thousand years ago, the nomadic Timbisha (formerly called Shoshone and also known as Panamint or Koso) moved into the area and hunted game and gathered mesquite beans along with pinyon pine nuts. Because of the wide altitude differential between the valley bottom and the mountain ridges, especially on the west, the Timbisha practiced a vertical migration pattern. Their winter camps were located near water sources in the valley bottoms. As the spring and summer progressed and the weather warmed, grasses and other plant food sources ripened at progressively higher altitudes. November found them at the very top of the mountain ridges where they harvested pine nuts before moving back to the valley bottom for winter.
The California Gold Rush brought the first people of European descent known to visit the immediate area. In December 1849 two groups of California Gold Country-bound travelers with perhaps 100 wagons total stumbled into Death Valley after getting lost on what they thought was a shortcut off the Old Spanish Trail. Called the Bennett-Arcane Party, they were unable to find a pass out of the valley for weeks; they were able to find fresh water at various springs in the area, but were forced to eat several of their oxen to survive. They used the wood of their wagons to cook the meat and make jerky. The place where they did this is today referred to as "Burnt Wagons Camp" and is located near Stovepipe Wells.
After abandoning their wagons, they eventually were able to hike out of the valley. Just after leaving the valley, one of the women in the group turned and said, "Goodbye Death Valley," giving the valley its name. Included in the party was William Lewis Manly whose autobiographical book Death Valley in '49 detailed this trek and popularized the area (geologists later named the prehistoric lake that once filled the valley after him).
Boom and bust
The ores that are most famously associated with the area were also the easiest to collect and the most profitable: evaporite deposits such as salts, borate, and talc. Borax was found by Rosie and Aaron Winters near The Ranch at Death Valley (then called Greenland) in 1881. Later that same year, the Eagle Borax Works became Death Valley's first commercial borax operation. William Tell Coleman built the Harmony Borax Works plant and began to process ore in late 1883 or early 1884, continuing until 1888. This mining and smelting company produced borax to make soap and for industrial uses. The end product was shipped out of the valley to the Mojave railhead in 10-ton-capacity wagons pulled by "twenty-mule teams" that were actually teams of 18 mules and two horses each.
The teams averaged an hour and required about 30 days to complete a round trip. The trade name 20-Mule Team Borax was established by Francis Marion Smith's Pacific Coast Borax Company after Smith acquired Coleman's borax holdings in 1890. A memorable advertising campaign used the wagon's image to promote the Boraxo brand of granular hand soap and the Death Valley Days radio and television programs. In 1914, the Death Valley Railroad was built to serve mining operations on the east side of the valley. Mining continued after the collapse of Coleman's empire, and by the late 1920s the area was the world's number one source of borax. Some four to six million years old, the Furnace Creek Formation is the primary source of borate minerals gathered from Death Valley's playas.
Other visitors stayed to prospect for and mine deposits of copper, gold, lead, and silver. These sporadic mining ventures were hampered by their remote location and the harsh desert environment. In December 1903, two men from Ballarat were prospecting for silver. One was an out-of-work Irish miner named Jack Keane and the other was a one-eyed Basque butcher named Domingo Etcharren. Quite by accident, Keane discovered an immense ledge of free-milling gold by the duo's work site and named the claim the Keane Wonder Mine. This started a minor and short-lived gold rush into the area. The Keane Wonder Mine, along with mines at Rhyolite, Skidoo and Harrisburg, were the only ones to extract enough metal ore to make them worthwhile. Outright shams such as Leadfield also occurred, but most ventures quickly ended after a short series of prospecting mines failed to yield evidence of significant ore (these mines now dot the entire area and are a significant hazard to anyone who enters them). The boom towns which sprang up around these mines flourished during the first decade of the 1900s, but soon declined after the Panic of 1907.
Early tourism
The first documented tourist facilities in Death Valley were a set of tent houses built in the 1920s where Stovepipe Wells is now located. People flocked to resorts built around natural springs thought to have curative and restorative properties. In 1927, Pacific Coast Borax turned the crew quarters of its Furnace Creek Ranch into a resort, creating the Furnace Creek Inn and resort. The spring at Furnace Creek was harnessed to develop the resort, and as the water was diverted, the surrounding marshes and wetlands started to shrink.
Soon the valley was a popular winter destination. Other facilities started off as private getaways but were later opened to the public. Most notable among these was Death Valley Ranch, better known as Scotty's Castle. This large ranch home built in the Spanish Revival style became a hotel in the late 1930s and, largely because of the fame of Death Valley Scotty, a tourist attraction. Death Valley Scotty, whose real name was Walter Scott, was a gold miner who pretended to be the owner of "his castle", which he claimed to have built with profits from his gold mine. Neither claim was true, but the real owner, Chicago millionaire Albert Mussey Johnson, encouraged the myth. When asked by reporters what his connection was to Walter Scott's castle, Johnson replied that he was Mr. Scott's banker.
Protection and later history
President Herbert Hoover proclaimed a national monument in and around Death Valley on February 11, 1933, setting aside almost of southeastern California and small parts of Nevada.
The Civilian Conservation Corps (CCC) developed infrastructure in Death Valley National Monument during the Great Depression and into the early 1940s. The CCC built barracks, graded of roads, installed water and telephone lines, and constructed a total of 76 buildings. Trails in the Panamint Range were built to points of scenic interest, and an adobe village, laundry and trading post were constructed for the Timbisha Shoshone Tribe. Five campgrounds, restrooms, an airplane landing field and picnic facilities were also built.
The creation of the monument resulted in a temporary closing of the lands to prospecting and mining. However, Death Valley was quickly reopened to mining by Congressional action in June 1933. As improvements in mining technology allowed lower grades of ore to be processed, and new heavy equipment allowed greater amounts of rock to be moved, mining in Death Valley changed. Gone were the days of the "single-blanket, jackass prospector" long associated with the romantic west. Open pit and strip mines scarred the landscape as international mining corporations bought claims in highly visible areas of the national monument. The public outcry that ensued led to greater protection for all national park and monument areas in the United States. In 1976, Congress passed the Mining in the Parks Act, which closed Death Valley National Monument to the filing of new mining claims, banned open-pit mining and required the National Park Service to examine the validity of tens of thousands of pre-1976 mining claims. Mining was allowed to resume on a limited basis in 1980 with stricter environmental standards. The last mine in the park, Billie Mine, closed in 2005.
In 1952 President Harry Truman added the Devils Hole to Death Valley National Monument; it is the only habitat of the Devils Hole pupfish.
Death Valley National Monument was designated a biosphere reserve in 1984. On October 31, 1994, the monument was expanded by and re-designated as a national park, via congressional passage of the California Desert Protection Act (Public Law 103–433). Consequently, the elevated status for Death Valley made it the largest national park in the contiguous United States. On March 12, 2019, the John D. Dingell, Jr. Conservation, Management, and Recreation Act added to the park.
Many of the larger cities and towns within the boundary of the regional groundwater flow system that the park and its plants and animals rely upon are experiencing some of the fastest growth rates of any place in the United States. Notable examples within a radius of Death Valley National Park include Las Vegas and Pahrump, Nevada. In the case of Las Vegas, the local Chamber of Commerce estimates that 6,000 people are moving to the city every month. Between 1985 and 1995, the population of the Las Vegas Valley increased from 550,700 to 1,138,800.
In 1977, parts of Death Valley were used by director George Lucas as a filming location for Star Wars, providing the setting for the fictional planet Tatooine.
Geologic history
The park has a diverse and complex geologic history. Since its formation, the area that comprises the park has experienced at least four major periods of extensive volcanism, three or four periods of major sedimentation, and several intervals of major tectonic deformation where the crust has been reshaped. Two periods of glaciation (a series of ice ages) have also had effects on the area, although no glaciers ever existed in the ranges now in the park.
Basement and Pahrump Group
Little is known about the history of the oldest exposed rocks in the area due to extensive metamorphism (alteration of rock by heat and pressure). Radiometric dating gives an age of 1,700 million years for the metamorphism during the Proterozoic. About 1,400 million years ago a mass of granite now in the Panamint Range intruded this complex. Uplift later exposed these rocks to nearly 500 million years of erosion.
The Proterozoic sedimentary formations of the Pahrump Group were deposited on these basement rocks. This occurred following uplift and erosion of any earlier sediments from the Proterozoic basement rocks. The Pahrump is composed of arkose conglomerate (quartz clasts in a concrete-like matrix) and mudstone in its lower part, followed by dolomite from carbonate banks topped by algal mats as stromatolites, and finished with basin-filling sediment derived from the above, including possible glacial till from the hypothesized Snowball Earth glaciation. The very youngest rocks in the Pahrump Group are basaltic lava flows.
Rifting and deposition
A rift opened and subsequently flooded the region as part of the breakup of the supercontinent Rodinia in the Neoproterozoic (by about 755 million years ago) and the creation of the Pacific Ocean. A shoreline similar to the present Atlantic Ocean margin of the United States lay to the east. An algal mat-covered carbonate bank was deposited, forming the Noonday Dolomite. Subsidence of the region occurred as the continental crust thinned and the newly formed Pacific widened, forming the Ibex Formation. An angular unconformity (an uneven gap in the geologic record) followed.
A true ocean basin developed to the west, breaking all the earlier formations along a steep front. A wedge of clastic sediment then began to accumulate at the base of the two underwater precipices, starting the formation of opposing continental shelves. Three formations developed from sediment that accumulated on the wedge. The region's first known fossils of complex life are found in the resulting formations. Notable among these are the Ediacara fauna and trilobites, the evolution of the latter being part of the Cambrian Explosion of life.
The sandy mudflats gave way about 550 million years ago to a carbonate platform (similar to the one around the present-day Bahamas), which lasted for the next 300 million years of Paleozoic time (refer to the middle of the timescale image). Death Valley's position was then within ten or twenty degrees of the Paleozoic equator. Thick beds of carbonate-rich sediments were periodically interrupted by periods of emergence. Although details of the geography varied during this immense interval of time, a north-northeastern coastline trend generally ran from Arizona up through Utah. The resulting eight formations and one group are thick and underlie much of the Cottonwood, Funeral, Grapevine, and Panamint ranges.
Compression and uplift
In the early-to-mid- Mesozoic the western edge of the North American continent was pushed against the oceanic plate under the Pacific Ocean, creating a subduction zone. A subduction zone is a type of contact between different crustal plates where heavier crust slides below lighter crust. Erupting volcanoes and uplifting mountains were created as a result, and the coastline was pushed to the west. The Sierran Arc started to form to the northwest from heat and pressure generated from subduction, and compressive forces caused thrust faults to develop.
A long period of uplift and erosion was concurrent with and followed the above events, creating a major unconformity, which is a large gap in the geologic record. Sediments worn off the Death Valley region were carried both east and west by wind and water. No Jurassic- to Eocene-aged sedimentary formations exist in the area, except for some possibly Jurassic-age volcanic rocks (see the top of the timescale image).
Stretching and lakes
Basin and Range-associated stretching of large parts of the crust below the southwestern United States and northwestern Mexico started around 16 million years ago, and the region is still spreading. This stretching began to affect the Death and Panamint valleys area by 3 million years ago. Before this, rocks now in the Panamint Range were on top of rocks that would become the Black Mountains and the Cottonwood Mountains. Lateral and vertical transport of these blocks was accomplished by movement on normal faults. Right-lateral movement along strike-slip faults that run parallel to and at the base of the ranges also helped to develop the area. Torsional forces, probably associated with northwesterly movement of the Pacific plate along the San Andreas Fault (west of the region), are responsible for the lateral movement.
Igneous activity associated with this stretching occurred from 12 million to 4 million years ago. Sedimentation is concentrated in valleys (basins) from material eroded from adjacent ranges. The amount of sediment deposited has roughly kept up with this subsidence, resulting in the retention of more or less the same valley floor elevation over time.
Pleistocene ice ages started 2 million years ago, and melt from alpine glaciers on the nearby Sierra Nevada Mountains fed a series of lakes that filled Death and Panamint valleys and surrounding basins (see the top of the timescale image). The lake that filled Death Valley was the last of a chain of lakes fed by the Amargosa and Mojave Rivers, and possibly also the Owens River. The large lake that covered much of Death Valley's floor, which geologists call Lake Manly, started to dry up 10,500 years ago. Salt pans and playas were created as ice age glaciers retreated, thus drastically reducing the lakes' water source. Only faint shorelines are left.
Biology
Habitat varies from salt pan at below sea level to the sub-alpine conditions found on the summit of Telescope Peak, which rises to . Vegetation zones include creosote bush, desert holly, and mesquite at the lower elevations and sage up through shadscale, blackbrush, Joshua tree, pinyon-juniper, to limber pine and bristlecone pine woodlands. The salt pan is devoid of vegetation, and the rest of the valley floor and lower slopes have sparse cover, although where water is available, an abundance of vegetation is usually present.
These zones and the adjacent desert support a variety of wildlife species, including 51 species of native mammals, 307 species of birds, 36 species of reptiles, 3 species of amphibians, and 2 species of native fish.
Small mammals are more numerous than large mammals, such as bighorn sheep, coyotes, bobcats, kit foxes, cougars, and mule deer. Mule deer are present in the pinyon/juniper associations of the Grapevine, Cottonwood, and Panamint ranges. Bighorn sheep are a rare species of mountain-dwelling sheep that exist in isolated bands in the Sierra and in Death Valley. These are highly adaptable animals and can eat almost any plant. They have no known predators, but humans and burros compete with them for habitat.
The ancestors of the Death Valley pupfish swam to the area from the Colorado River via a long-since dried-up system of rivers and lakes (see Lake Manly). They now live in two separate populations: one in Salt Creek and another in Cottonball Marsh. Death Valley is one of the hottest and driest places in North America, yet it is home to over 1,000 species of plants; 23 of which, including the very rare rock lady (Holmgrenanthe), are not found anywhere else.
Adaptation to the dry environment is key. For example, creosote bush and mesquite have tap-root systems that can extend down in order to take advantage of a year-round supply of ground water. The diversity of Death Valley's plant communities results partly from the region's location in a transition zone between the Mojave Desert, the Great Basin Desert and the Sonoran Desert. This location, combined with the great relief found within the park, supports vegetation typical of three biotic life zones: the lower Sonoran, the Canadian, and the arctic/alpine in portions of the Panamint Range. Based on the Munz and Keck (1968) classifications, seven plant communities can be categorized within these life zones, each characterized by dominant vegetation and representative of three vegetation types: scrub, desert woodland, and coniferous forest. Microhabitats further subdivide some communities into zones, especially on the valley floor.
Unlike more typical locations across the Mojave Desert, many of the water-dependent Death Valley habitats possess a diversity of plant and animal species that are not found anywhere else in the world. The existence of these species is due largely to a unique geologic history and the process of evolution that has progressed in habitats that have been isolated from one another since the Pleistocene epoch.
Activities
Sightseeing is available by personal automobile, four-wheel drive, bicycle, mountain bike (on established roadways only), and hiking. Riding through the park on motorcycle is also a popular pastime. State Route 190, the Badwater Road, the Scotty's Castle Road, and paved roads to Dante's View and Wildrose provide access to the major scenic viewpoints and historic points of interest. More than of unpaved and four-wheel-drive roads provide access to wilderness hiking, camping, and historical sites. All vehicles must be licensed and street legal. Unlike many other national parks in the U.S. there are no formal entrance stations, and instead entry fees can be paid at the visitor centers, ranger stations, or various fee machines around the park. There are hiking trails of varying lengths and difficulties, but most backcountry areas are accessible only by cross-country hiking. There are thousands of hiking possibilities. The normal season for visiting the park is from October 15 to May 15, avoiding summer extremes in temperature. Costumed living history tours of the historic Death Valley Scotty's Castle were conducted for a fee, but were suspended in October 2015 due to extensive flood damage to the buildings and grounds. It remains closed to the public.
There are nine designated campgrounds within the park, and overnight backcountry camping permits are available at the Visitor Center. Xanterra Parks & Resorts owns and operates a private resort, the Oasis at Death Valley, which comprises two separate and distinct hotels: the Inn at Death Valley is a four-star historic hotel, and the Ranch at Death Valley is a three-star ranch-style property reminiscent of the mining and prospecting days. Panamint Springs Resort is in the western part of the park. Death Valley Lodging Company operates the Stovepipe Wells Resort under a concession permit. There are a few motels near entrances to the park, in Shoshone, Death Valley Junction, Beatty, and Pahrump.
Furnace Creek Visitor Center is located on CA-190. A 22-minute introductory slide program is shown every 30 minutes. During the winter season—November through April—rangers offer interpretive tours and a wide variety of walks, talks, and slide presentations about Death Valley's cultural and natural history. The visitor center has displays dealing with the park's geology, climate, wildlife and natural history. There are also specific sections dealing with the human history and pioneer experience. The park's nonprofit cooperating association maintains a bookstore specifically geared to the natural and cultural history of the park.
The northeast corner of Saline Valley has several developed hot spring pools. The pools can be accessed by driving on the unpaved Saline Valley Road for several hours, or by flying a personal aircraft to the Chicken Strip—an uncharted airstrip a short walk from the springs.
Death Valley National Park is a popular location for stargazing as it has one of the darkest night skies in the United States. Despite its remote location, air quality and night visibility are threatened by civilization. In particular, light pollution is introduced by nearby Las Vegas. The darkest skies are, in general, located in the northwest of the park. The northwestern area of the park, including sites such as Ubehebe Crater, is a Bortle class 1 or "excellent dark sky" site. The Andromeda Galaxy and the Triangulum Galaxy are visible to the unaided eye under these conditions, and the Milky Way casts shadows; optical phenomena such as zodiacal light or "false dawn" and gegenschein are also visible to the unaided eye under these conditions. Most southern regions of the park are Bortle class 2 or "average dark sky" sites.
See also
Henry Wade Exit Route California Historic Landmark
List of national parks of the United States
List of nationally protected areas of the United States
National parks in California
National Register of Historic Places listings in Death Valley National Park
Notes
References
Explanatory notes
Citations
Bibliography
(adapted public domain text)
Rothman, Hal K., and Char Miller. Death Valley National Park: A History (University of Nevada Press; 2013) 216 pages; an environmental and human history
(adapted public domain text)
External links
Death Valley National Park by the National Park Service
1920s images of Death Valley and Surrounding Locales from the Death Valley Region Photographs Digital Collection: Utah State University
Death Valley National Park by the Death Valley Conservancy
(Archived version)
Historic American Engineering Record in California
National parks in California
Protected areas of the Mojave Desert
Protected areas of the Great Basin
Sandboarding locations
Parks in San Bernardino County, California
National parks in Nevada
Protected areas of Nye County, Nevada
Protected areas of Esmeralda County, Nevada
Protected areas established in 1994
Civilian Conservation Corps in California
Parks in Southern California
Dark-sky preserves in the United States
1994 establishments in California
1994 establishments in Nevada | Death Valley National Park | [
"Astronomy"
] | 6,953 | [
"Dark-sky preserves in the United States",
"Dark-sky preserves"
] |
46,795 | https://en.wikipedia.org/wiki/Mono%20Lake | Mono Lake is a saline soda lake in Mono County, California, formed at least 760,000 years ago as a terminal lake in an endorheic basin. The lack of an outlet causes high levels of salts to accumulate in the lake, which make its water alkaline.
The desert lake has an unusually productive ecosystem based on brine shrimp, which thrive in its waters, and provides critical habitat for two million annual migratory birds that feed on the shrimp and alkali flies (Ephydra hians). Historically, the native Kutzadika'a people ate the alkali flies' pupae, which live in the shallow waters around the edge of the lake.
When the city of Los Angeles diverted water from the freshwater streams flowing into the lake, it lowered the lake level, which imperiled the migratory birds. The Mono Lake Committee formed in response and won a legal battle that forced Los Angeles to partially replenish the lake level.
Geology
Mono Lake occupies part of the Mono Basin, an endorheic basin that has no outlet to the ocean. Dissolved salts in the runoff thus remain in the lake and raise the water's pH levels and salt concentration. The tributaries of Mono Lake include Lee Vining Creek, Rush Creek and Mill Creek which flows through Lundy Canyon.
The basin was formed by geological forces over the last five million years: basin and range crustal stretching and associated volcanism and faulting at the base of the Sierra Nevada.
From 4.5 to 2.6 million years ago, large volumes of basalt were extruded around what is now Cowtrack Mountain (east and south of Mono Basin); eventually covering and reaching a maximum thickness of . Later volcanism in the area occurred 3.8 million to 250,000 years ago. This activity was northwest of Mono Basin and included the formation of Aurora Crater, Beauty Peak, Cedar Hill (later an island in the highest stands of Mono Lake), and Mount Hicks.
Lake Russell was the prehistoric predecessor to Mono Lake, during the Pleistocene. Its shoreline reached the modern-day elevation of , about higher than the present-day lake. As of 1.6 million years ago, Lake Russell discharged to the northeast, into the Walker River drainage. After the Long Valley Caldera eruption 760,000 years ago, Lake Russell discharged into Adobe Lake to the southeast, then into the Owens River, and eventually into Lake Manly in Death Valley. Prominent shore lines of Lake Russell, called strandlines by geologists, can be seen west of Mono Lake.
The area around Mono Lake is currently geologically active. Volcanic activity is related to the Mono–Inyo Craters: the most recent eruption occurred 350 years ago, resulting in the formation of Paoha Island. Panum Crater (on the south shore of the lake) is an example of a combined rhyolite dome and cinder cone.
Tufa towers
Many columns of limestone rise above the surface of Mono Lake. These limestone towers consist primarily of calcium carbonate minerals such as calcite (CaCO3). This type of limestone rock is referred to as tufa, which is a term used for limestone that forms in low to moderate temperatures.
Tufa tower formation
Mono Lake is a highly alkaline soda lake. Alkalinity is a measure of the concentration of bases in a solution and of its capacity to neutralize acids. Carbonate (CO32-) and bicarbonate (HCO3−) are both bases, so Mono Lake has a very high content of dissolved inorganic carbon. When calcium ions (Ca2+) are supplied, the water precipitates carbonate minerals such as calcite (CaCO3). Subsurface waters enter the bottom of Mono Lake through small springs. High concentrations of dissolved calcium ions in these subsurface waters cause large amounts of calcite to precipitate around the spring orifices.
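The precipitation described above can be summarized schematically (a minimal sketch of the net reactions; the actual carbonate speciation in the lake water is more complicated):

$$\mathrm{Ca^{2+} + CO_3^{2-} \;\longrightarrow\; CaCO_3\,(calcite)}$$

$$\mathrm{Ca^{2+} + 2\,HCO_3^{-} \;\longrightarrow\; CaCO_3 + CO_2 + H_2O}$$

Calcium delivered by the springs is consumed quickly by the carbonate-rich lake water, which is why the towers grow around the spring orifices rather than throughout the lake.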
The tufa originally formed at the bottom of the lake; it took decades or even centuries to build the well-recognized tufa towers. When lake levels fell, the towers emerged above the water surface and now stand as the pillars seen today (see the Lake-level history section below).
Tufa morphology
Description of the Mono Lake tufa dates back to the 1880s, when Edward S. Dana and Israel C. Russell made the first systematic descriptions of it. The tufa occurs both as "modern" tufa towers and as sections along old shorelines formed when lake levels were higher. These pioneering descriptions of tufa morphology are still cited by researchers and were confirmed by James R. Dunn in 1953. The tufa types can roughly be divided into three main categories based on morphology:
Lithoid tufa - massive and porous with a rock-like appearance
Dendritic tufa - branching structures that look similar to small shrubs
Thinolitic tufa - large well-formed crystals of several centimeters
Through time, many hypotheses were developed regarding the formation of the large thinolite crystals (also referred to as glendonite) in thinolitic tufa. It was relatively clear that the thinolites represented a calcite pseudomorph after some unknown original crystal. The original crystal was only identified when the mineral ikaite was discovered in 1963. Ikaite, a hexahydrate of calcium carbonate (CaCO3·6H2O), is metastable and only crystallizes at near-freezing temperatures. It is also believed that calcite crystallization inhibitors such as phosphate, magnesium, and organic carbon may aid in the stabilization of ikaite. When heated, ikaite breaks down and is replaced by smaller crystals of calcite. In the Ikka Fjord of Greenland, ikaite has also been observed to grow in columns similar to the tufa towers of Mono Lake. This has led scientists to believe that thinolitic tufa is an indicator of past climate at Mono Lake, because it reflects very cold temperatures.
Tufa chemistry
Russell (1883) studied the chemical composition of the different tufa types in Lake Lahontan, a large Pleistocene system of multiple lakes in California, Nevada, and Oregon. Not surprisingly, the tufas were found to consist primarily of CaO and CO2 (calcium carbonate expressed as its component oxides). However, they also contain minor constituents of MgO (~2 wt%), Fe/Al oxides (0.25–1.29 wt%), and P2O5 (0.3 wt%).
Climate
Limnology
The lake contains approximately 280 million tons of dissolved salts, and its salinity varies with the amount of water in the lake at any given time. Before 1941, the average salinity was approximately 50 grams per liter (g/L) (compared to a value of 31.5 g/L for the world's oceans). In January 1982, when the lake reached its lowest level of , the salinity had nearly doubled to 99 g/L. In 2002, it was measured at 78 g/L and is expected to stabilize at an average of 69 g/L as the lake replenishes over the next 20 years.
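Because the dissolved salt mass is roughly fixed while the water volume changes, the salinities quoted above imply an approximate lake volume at each date. A small consistency check (a sketch that assumes the ~280-million-ton figure applies throughout and ignores any salt that precipitates out):

```python
# Implied Mono Lake water volume from a fixed dissolved-salt mass and a measured salinity.
SALT_MASS_G = 280e6 * 1e6  # ~280 million metric tons of dissolved salts, in grams

for label, salinity_g_per_l in [("pre-1941", 50), ("1982", 99), ("2002", 78), ("projected", 69)]:
    volume_l = SALT_MASS_G / salinity_g_per_l   # liters of lake water holding that salt
    volume_km3 = volume_l / 1e12                # 1 cubic kilometer = 1e12 liters
    print(f"{label}: {salinity_g_per_l} g/L -> about {volume_km3:.1f} km^3 of water")
```

The implied volumes (roughly 5.6, 2.8, 3.6 and 4.1 cubic kilometers) track the history of the diversions: the lake lost about half its water between 1941 and the early 1980s and has partially recovered since.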
An unintended consequence of ending the water diversions was the onset of a period of "meromixis" in Mono Lake. Before this, Mono Lake was typically "monomictic", which means that at least once each year the deeper waters and the shallower waters of the lake mixed thoroughly, bringing oxygen and other nutrients to the deep waters. In meromictic lakes, the deeper waters do not undergo this mixing; the deeper layers are more saline than the water near the surface and are typically nearly devoid of oxygen. As a result, becoming meromictic greatly changes a lake's ecology.
Mono Lake has experienced meromictic periods in the past; this most recent episode of meromixis, brought on by the end of the water diversions, commenced in 1994 and had ended by 2004.
Lake-level history
An important characteristic of Mono Lake is that it is a closed lake, meaning it has no outflow. Water can only escape the lake if it evaporates or is lost to groundwater, which is why closed lakes tend to become very saline. The reconstruction of historical Mono Lake levels through carbon and oxygen isotopes has also revealed a correlation with well-documented changes in climate.
In the recent past, Earth experienced periods of increased glaciation known as ice ages. This geological period of ice ages is known as the Pleistocene, which lasted until ~11 ka. Lake levels in Mono Lake can reveal how the climate fluctuated. For example, during the cold climate of the Pleistocene the lake level was higher because there was less evaporation and more precipitation. Following the Pleistocene, the lake level was generally lower due to increased evaporation and decreased precipitation associated with a warmer climate.
The lake level has fluctuated during the Holocene, since the end of the ice ages. The Holocene high point is at elevation , reached in approximately 1820 BCE. The low point before modern diversions is at elevation , reached in 143 CE. The lowest modern level due to diversions is at , reached in 1980.
Ecology
Aquatic life
The lake's hypersalinity and high alkalinity (pH = 10, equivalent to about 4 milligrams of NaOH per liter of water) mean that no fish are native to the lake. An attempt by the California Department of Fish and Game to stock the lake with fish failed.
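The parenthetical equivalence is easy to verify (a sketch assuming a dilute, ideal solution at 25 °C, so that pH + pOH = 14, and complete dissociation of NaOH):

```python
# Check that pH 10 corresponds to roughly 4 mg of NaOH per liter of water.
pH = 10.0
pOH = 14.0 - pH                   # = 4.0 for an ideal solution at 25 degC
oh_mol_per_l = 10 ** (-pOH)       # hydroxide concentration: 1e-4 mol/L
NAOH_MOLAR_MASS_G = 40.0          # grams per mole of NaOH
naoh_mg_per_l = oh_mol_per_l * NAOH_MOLAR_MASS_G * 1000.0
print(f"{naoh_mg_per_l:.1f} mg NaOH per liter")  # -> 4.0
```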
The whole food chain of the lake is based on the high population of single-celled planktonic algae present in the photic zone of the lake. These algae reproduce rapidly during winter and early spring after winter runoff brings nutrients to the surface layer of water. By March the lake is "as green as pea soup" with photosynthesizing algae.
The lake is famous for the Mono Lake brine shrimp, Artemia monica, a tiny species of brine shrimp, no bigger than a thumbnail, that are endemic to the lake. During the warmer summer months, an estimated 4–6 trillion brine shrimp inhabit the lake. Brine shrimp have no food value for humans, but are a staple for birds of the region. The brine shrimp feed on microscopic algae.
Alkali flies, Ephydra hians, live along the shores of the lake and walk underwater, encased in small air bubbles, for grazing and to lay eggs. These flies are an important source of food for migratory and nesting birds.
Eight nematode species were found living in the littoral sediment:
Auanema spec., which is notable for its extreme arsenic resistance (it survives concentrations 500 times higher than humans can), for having three sexes, and for being viviparous.
Pellioditis spec.
Mononchoides americanus
Diplogaster rivalis
species of the family Mermithidae
Prismatolaimus dolichurus
2 species of the order Monhysterida
Birds
Mono Lake is a vital resting and eating stop for migratory shorebirds and has been recognized as a site of international importance by the Western Hemisphere Shorebird Reserve Network.
Nearly 2,000,000 waterbirds, including 35 species of shorebirds, use Mono Lake to rest and eat for at least part of the year. Some shorebirds that depend on the resources of Mono Lake include American avocets, killdeer, and sandpipers. One to two million eared grebes and phalaropes use Mono Lake during their long migrations.
Late every summer tens of thousands of Wilson's phalaropes and red-necked phalaropes arrive from their nesting grounds, and feed until they continue their migration to South America or the tropical oceans respectively.
In addition to migratory birds, a few species spend several months nesting at Mono Lake. Mono Lake hosts the second-largest nesting population of California gulls, Larus californicus, after the Great Salt Lake in Utah. Since abandoning the land-bridged Negit Island in the late 1970s, California gulls have moved to some nearby islets and have established new, if less protected, nesting sites. Cornell University and Point Blue Conservation Science have continued the study of nesting populations on Mono Lake that was begun 35 years ago. Snowy plovers also arrive at Mono Lake each spring to nest along the northern and eastern shores.
History
Native Americans
The indigenous people of Mono Lake are from a band of the Northern Paiute, called the Kutzadika'a. They speak the Northern Paiute language. The Kutzadika'a traditionally forage alkali fly pupae, called kutsavi in their language.
The term "Mono" is derived from "Monachi", a Yokuts term for the tribes that live on both the east and west side of the Sierra Nevada.
During early contact, the first known Mono Lake Paiute chief was Captain John.
The Mono tribe has two bands: Eastern and Western. The Eastern Mono joined the Western Mono bands' villages annually at Hetch Hetchy Valley, Yosemite Valley, and along the Merced River to gather acorns and other plants, and to trade. The Western Mono and Eastern Mono traditionally lived in the south-central Sierra Nevada foothills, including the historical Yosemite Valley.
Present day Mono Reservations are currently located in Big Pine, Bishop, and several in Madera County and Fresno County, California.
Conservation efforts
The city of Los Angeles diverted water from the Owens River into the Los Angeles Aqueduct in 1913. In 1941, the Los Angeles Department of Water and Power extended the Los Angeles Aqueduct system farther northward into the Mono Basin with the completion of the Mono Craters Tunnel between the Grant Lake Reservoir on Rush Creek and the Upper Owens River. So much water was diverted that evaporation soon exceeded inflow and the surface level of Mono Lake fell rapidly. By 1982 the lake was reduced to , 69 percent of its 1941 surface area. By 1990, the lake had dropped 45 vertical feet and had lost half its volume relative to the 1941 pre-diversion water level. As a result, alkaline sands and formerly submerged tufa towers became exposed, the water salinity doubled, and Negit Island became a peninsula, exposing the nests of California gulls to predators (such as coyotes), and forcing the gull colony to abandon this site.
In 1974, ecologist David Gaines and his student David Winkler studied the Mono Lake ecosystem and became instrumental in alerting the public of the effects of the lower water level with Winkler's 1976 ecological inventory of the Mono Basin. The National Science Foundation funded the first comprehensive ecological study of Mono Lake, conducted by Gaines and undergraduate students. In June 1977, the Davis Institute of Ecology of the University of California published a report, "An Ecological Study of Mono Lake, California," which alerted California to the ecological dangers posed by the redirection of water away from the lake for municipal uses.
Gaines formed the Mono Lake Committee in 1978. He and Sally Judy, a UC Davis student, led the committee and pursued an informational tour of California. They joined with the Audubon Society to fight a now famous court battle, the National Audubon Society v. Superior Court, to protect Mono Lake through state public trust laws. While these efforts have resulted in positive change, the surface level is still below historical levels, and exposed shorelines are a source of significant alkaline dust during periods of high winds.
Owens Lake, the once-navigable terminus of the Owens River which had sustained a healthy ecosystem, is now a dry lake bed during dry years due to water diversion beginning in the 1920s. Mono Lake was spared this fate when the California State Water Resources Control Board (after over a decade of litigation) issued an order (SWRCB Decision 1631) to protect Mono Lake and its tributary streams on September 28, 1994. SWRCB Board Vice-chair Marc Del Piero was the sole Hearing Officer (see D-1631). In 1941 the surface level was at above sea level. As of October 2022, Mono Lake was at above sea level. The goal is a lake level of above sea level, generally agreed to be the minimum needed to keep the ecosystem healthy. Reaching that level has been more difficult during years of drought in the American West.
In popular culture
Artwork
In 1968, the artist Robert Smithson made Mono Lake Non-Site (Cinders near Black Point) using pumice collected while visiting Mono on July 27, 1968, with his wife Nancy Holt and Michael Heizer (both prominent visual artists). In 2004, Nancy Holt made a short film entitled Mono Lake using Super 8 footage and photographs of this trip. An audio recording by Smithson and Heizer, two songs by Waylon Jennings, and Michel Legrand's Le Jeu, the main theme of Jacques Demy's film Bay of Angels (1963), were used for the soundtrack.
The Diver, a photo taken by Aubrey Powell of Hipgnosis for Pink Floyd's album Wish You Were Here (1975), features what appears to be a man diving into a lake, creating no ripples. The photo was taken at Mono Lake, and the tufa towers are a prominent part of the landscape. The effect was actually created when the diver performed a handstand underwater until the ripples dissipated.
In print
Mark Twain's Roughing It, published in 1872, provides an informative early description of Mono Lake in its natural condition in the 1860s. Twain found the lake to be lying "in a lifeless, treeless, hideous desert... the loneliest place on earth."
In film
A scene featuring a volcano in the film Fair Wind to Java (1953) was shot at Mono Lake.
Most of Clint Eastwood's film High Plains Drifter (1973) was shot on the southern shore of Mono Lake. An entire town was built there for the film and removed when shooting was complete.
In music
The music video for glam metal band Cinderella's 1988 power ballad "Don't Know What You Got ('Till It's Gone)" was filmed by the lake.
See also
Bodie, a nearby ghost town
List of lakes in California
Mono Lake Tufa State Reserve
Mono Basin National Scenic Area
GFAJ-1, an organism from Mono Lake that has been at the center of a scientific controversy over hypothetical arsenic in DNA.
List of drying lakes
Whoa Nellie Deli, located in Lee Vining, California, overlooking Mono Lake
Monolake, a Berlin-based electronic music project named after the lake
References
Bibliography
Jayko, A.S., et al. (2013). Methods and Spatial Extent of Geophysical Investigations, Mono Lake, California, 2009 to 2011. Reston, Va.: U.S. Department of the Interior, U.S. Geological Survey.
External links
Mono Lake Area Visitor Information
Mono Lake Tufa State Nature Reserve
Mono Lake Committee website
Mono Lake Visitor Guide
Landsat image of Mono Lake
Roadside Geology and Mining History of the Owens Valley and Mono Basin
Saline lakes of the United States
Shrunken lakes
Lakes of Mono County, California
California placenames of Native American origin
Inyo National Forest
Mono people
Native American history of California
Lakes of the Sierra Nevada (United States)
Lakes of the Great Basin
Environment of California
Tourist attractions in Mono County, California
Endorheic lakes of California
Environmental controversies
Lakes of California
Lakes of Northern California
Geological type localities
Eutrophication | Mono Lake | [
"Chemistry",
"Environmental_science"
] | 3,989 | [
"Eutrophication",
"Environmental chemistry",
"Water pollution"
] |
46,802 | https://en.wikipedia.org/wiki/Simple%20continued%20fraction | A simple or regular continued fraction is a continued fraction whose numerators are all equal to one and whose denominators are built from a sequence of integers. The sequence can be finite or infinite, resulting in a finite (or terminated) continued fraction like
or an infinite continued fraction like
Typically, such a continued fraction is obtained through an iterative process of representing a number as the sum of its integer part and the reciprocal of another number, then writing this other number as the sum of its integer part and another reciprocal, and so on. In the finite case, the iteration/recursion is stopped after finitely many steps by using an integer in lieu of another continued fraction. In contrast, an infinite continued fraction is an infinite expression. In either case, all integers in the sequence, other than the first, must be positive. The integers are called the coefficients or terms of the continued fraction.
Simple continued fractions have a number of remarkable properties related to the Euclidean algorithm for integers or real numbers. Every rational number has two closely related expressions as a finite continued fraction, whose coefficients can be determined by applying the Euclidean algorithm to . The numerical value of an infinite continued fraction is irrational; it is defined from its infinite sequence of integers as the limit of a sequence of values for finite continued fractions. Each finite continued fraction of the sequence is obtained by using a finite prefix of the infinite continued fraction's defining sequence of integers. Moreover, every irrational number is the value of a unique infinite regular continued fraction, whose coefficients can be found using the non-terminating version of the Euclidean algorithm applied to the incommensurable values and 1. This way of expressing real numbers (rational and irrational) is called their continued fraction representation.
Motivation and notation
Consider, for example, the rational number , which is around 4.4624. As a first approximation, start with 4, which is the integer part; . The fractional part is the reciprocal of which is about 2.1628. Use the integer part, 2, as an approximation for the reciprocal to obtain a second approximation of ;
the remaining fractional part, , is the reciprocal of , and is around 6.1429. Use 6 as an approximation for this to obtain as an approximation for and , about 4.4615, as the third approximation. Further, . Finally, the fractional part, , is the reciprocal of 7, so its approximation in this scheme, 7, is exact () and produces the exact expression for .
That expression is called the continued fraction representation of . This can be represented by the abbreviated notation = [4; 2, 6, 7]. (It is customary to replace only the first comma by a semicolon to indicate that the preceding number is the whole part.) Some older textbooks use all commas in the -tuple, for example, [4, 2, 6, 7].
If the starting number is rational, then this process exactly parallels the Euclidean algorithm applied to the numerator and denominator of the number. In particular, it must terminate and produce a finite continued fraction representation of the number. The sequence of integers that occur in this representation is the sequence of successive quotients computed by the Euclidean algorithm. If the starting number is irrational, then the process continues indefinitely. This produces a sequence of approximations, all of which are rational numbers, and these converge to the starting number as a limit. This is the (infinite) continued fraction representation of the number. Examples of continued fraction representations of irrational numbers are:
. The pattern repeats indefinitely with a period of 6.
. The pattern repeats indefinitely with a period of 3 except that 2 is added to one of the terms in each cycle.
. No pattern has ever been found in this representation.
. The golden ratio, the irrational number that is the "most difficult" to approximate rationally .
. The Euler–Mascheroni constant, which is expected but not known to be irrational, and whose continued fraction has no apparent pattern.
Continued fractions are, in some ways, more "mathematically natural" representations of a real number than other representations such as decimal representations, and they have several desirable properties:
The continued fraction representation for a real number is finite if and only if it is a rational number. In contrast, the decimal representation of a rational number may be finite, for example , or infinite with a repeating cycle, for example
Every rational number has an essentially unique simple continued fraction representation. Each rational can be represented in exactly two ways, since . Usually the first, shorter one is chosen as the canonical representation.
The simple continued fraction representation of an irrational number is unique. (However, additional representations are possible when using generalized continued fractions; see below.)
The real numbers whose continued fraction eventually repeats are precisely the quadratic irrationals. For example, the repeating continued fraction is the golden ratio, and the repeating continued fraction is the square root of 2. In contrast, the decimal representations of quadratic irrationals are apparently random. The square roots of all (positive) integers that are not perfect squares are quadratic irrationals, and hence are unique periodic continued fractions.
The successive approximations generated in finding the continued fraction representation of a number, that is, by truncating the continued fraction representation, are in a certain sense (described below) the "best possible".
Formulation
A continued fraction in canonical form is an expression of the form
where ai are integer numbers, called the coefficients or terms of the continued fraction.
When the expression contains finitely many terms, it is called a finite continued fraction.
When the expression contains infinitely many terms, it is called an infinite continued fraction.
When the terms eventually repeat from some point onwards, the continued fraction is called periodic.
Thus, all of the following illustrate valid finite simple continued fractions:
For simple continued fractions of the form
the term can be calculated using the following recursive formula:
where and
from which it can be understood that the sequence stops if .
Notations
Consider a continued fraction expressed as
Because such a continued fraction expression may take a significant amount of vertical space, a number of methods have been tried to shrink it.
Gottfried Leibniz sometimes used the notation
and later the same idea was taken even further with the nested fraction bars drawn aligned, for example by Alfred Pringsheim as
or in more common related notations as
or
Carl Friedrich Gauss used a notation reminiscent of summation notation,
or in cases where the numerator is always 1, eliminated the fraction bars altogether, writing a list-style
Sometimes list-style notation uses angle brackets instead,
The semicolon in the square and angle bracket notations is sometimes replaced by a comma.
One may also define infinite simple continued fractions as limits:
This limit exists for any choice of and positive integers .
Calculating continued fraction representations
Consider a real number .
Let and let .
When , the continued fraction representation of is
, where is the continued fraction representation of . When , then is the integer part of , and is the fractional part of .
In order to calculate a continued fraction representation of a number , write down the floor of . Subtract this value from . If the difference is 0, stop; otherwise find the reciprocal of the difference and repeat. The procedure will halt if and only if is rational. This process can be efficiently implemented using the Euclidean algorithm when the number is rational.
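A short sketch of this procedure in Python (an illustration added here, not part of the original article; the function name cf_terms is this sketch's own choice). With exact Fraction input the loop halts exactly when the procedure halts; for floating-point input the max_terms cap guards against the round-off tail. The value 415/93 is the fraction whose expansion [4; 2, 6, 7] is quoted in the Motivation section above.

from fractions import Fraction
from math import floor

def cf_terms(x, max_terms=20):
    """Simple continued fraction coefficients [a0; a1, a2, ...] of x."""
    terms = []
    for _ in range(max_terms):
        a = floor(x)               # write down the floor
        terms.append(a)
        frac = x - a               # subtract it
        if frac == 0:              # difference is 0: stop (x was rational)
            break
        x = 1 / frac               # otherwise take the reciprocal and repeat
    return terms

print(cf_terms(Fraction(415, 93)))   # [4, 2, 6, 7]
print(cf_terms(Fraction(27, 32)))    # [0, 1, 5, 2, 2], i.e. 0.84375, used later in the article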
The table below shows an implementation of this procedure for the number :
{| class="wikitable"
|-
!Step
!Real number
!Integer part
!Fractional part
!Simplified
!Reciprocal of fractional part
|-
!1
|
|
|
|
|
|-
!2
|
|
|
|
|
|-
!3
|
|
|
|
|
|-
!4
|
|
|
|
|colspan="2"|STOP
|}
The continued fraction for is thus or, expanded:
Reciprocals
The continued fraction representations of a positive rational number and its reciprocal are identical except for a shift one place left or right depending on whether the number is less than or greater than one respectively. In other words, the numbers represented by
and are reciprocals.
For instance if is an integer and then
and .
If then
and .
The last number that generates the remainder of the continued fraction is the same for both and its reciprocal.
For example,
and .
Finite continued fractions
Every finite continued fraction represents a rational number, and every rational number can be represented in precisely two different ways as a finite continued fraction, with the conditions that the first coefficient is an integer and the other coefficients are positive integers. These two representations agree except in their final terms. In the longer representation the final term in the continued fraction is 1; the shorter representation drops the final 1, but increases the new final term by 1. The final element in the short representation is therefore always greater than 1, if present. In symbols:
.
.
Infinite continued fractions and convergents
Every infinite continued fraction is irrational, and every irrational number can be represented in precisely one way as an infinite continued fraction.
An infinite continued fraction representation for an irrational number is useful because its initial segments provide rational approximations to the number. These rational numbers are called the convergents of the continued fraction. The larger a term is in the continued fraction, the closer the corresponding convergent is to the irrational number being approximated. Numbers like π have occasional large terms in their continued fraction, which makes them easy to approximate with rational numbers. Other numbers like e have only small terms early in their continued fraction, which makes them more difficult to approximate rationally. The golden ratio Φ has terms equal to 1 everywhere—the smallest values possible—which makes Φ the most difficult number to approximate rationally. In this sense, therefore, it is the "most irrational" of all irrational numbers. Even-numbered convergents are smaller than the original number, while odd-numbered ones are larger.
For a continued fraction , the first four convergents (numbered 0 through 3) are
The numerator of the third convergent is formed by multiplying the numerator of the second convergent by the third coefficient, and adding the numerator of the first convergent. The denominators are formed similarly. Therefore, each convergent can be expressed explicitly in terms of the continued fraction as the ratio of certain multivariate polynomials called continuants.
If successive convergents are found, with numerators , , ... and denominators , , ... then the relevant recursive relation is that of Gaussian brackets:
The successive convergents are given by the formula
Thus to incorporate a new term into a rational approximation, only the two previous convergents are necessary. The initial "convergents" (required for the first two terms) are 0⁄1 and 1⁄0. For example, here are the convergents for [0;1,5,2,2].
{| class="wikitable"
|- align="right"
!
| −2|| −1|| 0 || 1 || 2 || 3 || 4
|- align="right"
!
| || || 0 || 1 || 5 || 2 || 2
|- align="right"
!
| 0 || 1 || 0 || 1 || 5 || 11 || 27
|- align="right"
!
| 1 || 0 || 1 || 1 || 6 || 13 || 32
|}
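The recurrence above is easy to run mechanically. The following is a minimal Python sketch (illustrative only, not from the article): the numerators and denominators are seeded with the initial "convergents" 0⁄1 and 1⁄0, and each new coefficient a gives a new numerator a times the previous numerator plus the one before it, and likewise for denominators.

from fractions import Fraction

def convergents(terms):
    """Yield the convergents of the simple continued fraction [a0; a1, a2, ...]."""
    h_prev, h = 0, 1          # numerators of the initial "convergents":   h = 0 then 1
    k_prev, k = 1, 0          # denominators of the initial "convergents": k = 1 then 0
    for a in terms:
        h_prev, h = h, a * h + h_prev
        k_prev, k = k, a * k + k_prev
        yield Fraction(h, k)

print(list(convergents([0, 1, 5, 2, 2])))
# [Fraction(0, 1), Fraction(1, 1), Fraction(5, 6), Fraction(11, 13), Fraction(27, 32)]
# matching the columns 0/1, 1/1, 5/6, 11/13, 27/32 of the table above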
When using the Babylonian method to generate successive approximations to the square root of an integer, if one starts with the lowest integer as first approximant, the rationals generated all appear in the list of convergents for the continued fraction. Specifically, the approximants will appear on the convergents list in positions 0, 1, 3, 7, 15, ... , , ... For example, the continued fraction expansion for is . Comparing the convergents with the approximants derived from the Babylonian method:
{| class="wikitable"
|- align="right"
!
| −2|| −1|| 0 || 1 || 2 || 3 || 4 || 5 || 6 || 7
|- align="right"
!
| || || 1 || 1 || 2 || 1 || 2 || 1 || 2 || 1
|- align="right"
!
| 0 || 1 || 1 || 2 || 5 || 7 || 19 || 26 || 71 || 97
|- align="right"
!
| 1 || 0 || 1 || 1 || 3 || 4 || 11 || 15 || 41 || 56
|}
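A quick illustration of this correspondence (a sketch, not part of the article), assuming the usual Babylonian/Heron iteration x ← (x + S/x)/2 started from x = 1 and carried out in exact rational arithmetic for S = 3:

from fractions import Fraction

def babylonian_iterates(s, steps, x0=1):
    """First few Babylonian (Heron) approximations to sqrt(s), kept as exact fractions."""
    x = Fraction(x0)
    out = [x]
    for _ in range(steps):
        x = (x + Fraction(s) / x) / 2
        out.append(x)
    return out

print(babylonian_iterates(3, 3))
# [Fraction(1, 1), Fraction(2, 1), Fraction(7, 4), Fraction(97, 56)]
# i.e. the convergents in positions 0, 1, 3 and 7 of the table above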
Properties
The Baire space is a topological space on infinite sequences of natural numbers. The infinite continued fraction provides a homeomorphism from the Baire space to the space of irrational real numbers (with the subspace topology inherited from the usual topology on the reals). The infinite continued fraction also provides a map between the quadratic irrationals and the dyadic rationals, and from other irrationals to the set of infinite strings of binary numbers (i.e. the Cantor set); this map is called the Minkowski question-mark function. The mapping has interesting self-similar fractal properties; these are given by the modular group, which is the subgroup of Möbius transformations having integer values in the transform. Roughly speaking, continued fraction convergents can be taken to be Möbius transformations acting on the (hyperbolic) upper half-plane; this is what leads to the fractal self-symmetry.
The limit probability distribution of the coefficients in the continued fraction expansion of a random variable uniformly distributed in (0, 1) is the Gauss–Kuzmin distribution.
Some useful theorems
If is an infinite sequence of positive integers, define the sequences and recursively:
Theorem 1. For any positive real number
Theorem 2. The convergents of are given by
or in matrix form,
Theorem 3. If the th convergent to a continued fraction is then
or equivalently
Corollary 1: Each convergent is in its lowest terms (for if and had a nontrivial common divisor it would divide which is impossible).
Corollary 2: The difference between successive convergents is a fraction whose numerator is unity:
Corollary 3: The continued fraction is equivalent to a series of alternating terms:
Corollary 4: The matrix
has determinant , and thus belongs to the group of
unimodular matrices
Corollary 5: The matrix
has determinant , or equivalently, meaning that the odd terms monotonically decrease, while the even terms monotonically increase.
Corollary 6: The denominator sequence satisfies the recurrence relation , and grows at least as fast as the Fibonacci sequence, which itself grows like where is the golden ratio.
Theorem 4. Each (th) convergent is nearer to a subsequent (th) convergent than any preceding (th) convergent is. In symbols, if the th convergent is taken to be then
for all
Corollary 1: The even convergents (before the th) continually increase, but are always less than
Corollary 2: The odd convergents (before the th) continually decrease, but are always greater than
Theorem 5.
Corollary 1: A convergent is nearer to the limit of the continued fraction than any fraction whose denominator is less than that of the convergent.
Corollary 2: A convergent obtained by terminating the continued fraction just before a large term is a close approximation to the limit of the continued fraction.
Theorem 6: Consider the set of all open intervals with end-points . Denote it as . Any open subset of is a disjoint union of sets from .
Corollary: The infinite continued fraction provides a homeomorphism from the Baire space to .
Semiconvergents
If
are consecutive convergents, then any fractions of the form
where is an integer such that , are called semiconvergents, secondary convergents, or intermediate fractions. The -st semiconvergent equals the mediant of the -th one and the convergent . Sometimes the term is taken to mean that being a semiconvergent excludes the possibility of being a convergent (i.e., ), rather than that a convergent is a kind of semiconvergent.
It follows that semiconvergents represent a monotonic sequence of fractions between the convergents (corresponding to ) and (corresponding to ). The consecutive semiconvergents and satisfy the property .
If a rational approximation to a real number is such that the value is smaller than that of any approximation with a smaller denominator, then is a semiconvergent of the continued fraction expansion of . The converse is not true, however.
Best rational approximations
One can choose to define a best rational approximation to a real number as a rational number , , that is closer to than any approximation with a smaller or equal denominator. The simple continued fraction for can be used to generate all of the best rational approximations for by applying these three rules:
Truncate the continued fraction, and reduce its last term by a chosen amount (possibly zero).
The reduced term cannot have less than half its original value.
If the final term is even, half its value is admissible only if the corresponding semiconvergent is better than the previous convergent. (See below.)
For example, 0.84375 has continued fraction [0;1,5,2,2]. Here are all of its best rational approximations.
{| class="wikitable"
|- align="center"
! Continued fraction
| [0;1] || [0;1,3] || [0;1,4] || [0;1,5] || [0;1,5,2] || [0;1,5,2,1] || [0;1,5,2,2]
|- align="center"
! Rational approximation
| 1 || || || || || ||
|- align="center"
! Decimal equivalent
| 1 || 0.75 || 0.8 || ~0.83333 || ~0.84615 || ~0.84211 || 0.84375
|- align="center"
! Error
| +18.519% || −11.111% || −5.1852% || −1.2346% || +0.28490% || −0.19493% || 0%
|}
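The same list can be reproduced by a brute-force check of the definition (a sketch; it does not apply the three rules above, it simply keeps every fraction that is strictly closer than all approximations of smaller or equal denominator):

from fractions import Fraction

def best_rational_approximations(x, max_q):
    """All p/q with q <= max_q that beat every fraction of smaller or equal denominator."""
    x = Fraction(x)
    best, record = [], None
    for q in range(1, max_q + 1):
        p = round(x * q)                     # nearest numerator for this denominator
        err = abs(x - Fraction(p, q))
        if record is None or err < record:   # strictly better than everything seen so far
            best.append(Fraction(p, q))
            record = err
    return best

print(best_rational_approximations(Fraction(27, 32), 32))
# [Fraction(1, 1), Fraction(3, 4), Fraction(4, 5), Fraction(5, 6),
#  Fraction(11, 13), Fraction(16, 19), Fraction(27, 32)]
# the same fractions 1, 3/4, 4/5, 5/6, 11/13, 16/19, 27/32 as in the table above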
The strictly monotonic increase in the denominators as additional terms are included permits an algorithm to impose a limit, either on size of denominator or closeness of approximation.
The "half rule" mentioned above requires that when is even, the halved term /2 is admissible if and only if This is equivalent to: .
.
The convergents to are "best approximations" in a much stronger sense than the one defined above. Namely, / is a convergent for if and only if has the smallest value among the analogous expressions for all rational approximations / with ; that is, we have so long as . (Note also that as .)
Best rational within an interval
A rational that falls within the interval , for , can be found with the continued fractions for and . When both and are irrational and
where and have identical continued fraction expansions up through , a rational that falls within the interval is given by the finite continued fraction,
This rational will be best in the sense that no other rational in will have a smaller numerator or a smaller denominator.
If is rational, it will have two continued fraction representations that are finite, and , and similarly a rational will have two representations, and . The coefficients beyond the last in any of these representations should be interpreted as ; and the best rational will be one of , , , or .
For example, the decimal representation 3.1416 could be rounded from any number in the interval . The continued fraction representations of 3.14155 and 3.14165 are
and the best rational between these two is
Thus, is the best rational number corresponding to the rounded decimal number 3.1416, in the sense that no other rational number that would be rounded to 3.1416 will have a smaller numerator or a smaller denominator.
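A minimal sketch of this search (an illustration, not the article's method; it scans denominators directly instead of merging the two continued fraction expansions, and it treats the interval as open):

from fractions import Fraction

def simplest_rational_between(x, y):
    """A fraction with the smallest possible denominator strictly between x and y (x < y)."""
    x, y = Fraction(x), Fraction(y)
    q = 1
    while True:
        p = x.numerator * q // x.denominator + 1   # smallest integer p with p/q > x
        if Fraction(p, q) < y:
            return Fraction(p, q)
        q += 1

print(simplest_rational_between(Fraction("3.14155"), Fraction("3.14165")))   # 355/113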
Interval for a convergent
A rational number, which can be expressed as finite continued fraction in two ways,
will be one of the convergents for the continued fraction expansion of a number, if and only if the number is strictly between (see this proof)
and
The numbers and are formed by incrementing the last coefficient in the two representations for . It is the case that when is even, and when is odd.
For example, the number has the continued fraction representations
= [3; 7, 15, 1] = [3; 7, 16]
and thus is a convergent of any number strictly between
{| cellpadding="2" cellspacing="0"
| align="right" | ||||
|-
| align="right" | ||||
|}
Legendre's theorem on continued fractions
In his Essai sur la théorie des nombres (1798), Adrien-Marie Legendre derives a necessary and sufficient condition for a rational number to be a convergent of the continued fraction of a given real number. A consequence of this criterion, often called Legendre's theorem within the study of continued fractions, is as follows:
Theorem. If α is a real number and p, q are positive integers such that , then p/q is a convergent of the continued fraction of α.
Proof. We follow the proof given in An Introduction to the Theory of Numbers by G. H. Hardy and E. M. Wright.
Suppose α, p, q are such that , and assume that α > p/q. Then we may write , where 0 < θ < 1/2. We write p/q as a finite continued fraction [a0; a1, ..., an], where due to the fact that each rational number has two distinct representations as finite continued fractions differing in length by one (namely, one where an = 1 and one where an ≠ 1), we may choose n to be even. (In the case where α < p/q, we would choose n to be odd.)
Let p0/q0, ..., pn/qn = p/q be the convergents of this continued fraction expansion. Set , so that and thus, where we have used the fact that p_{n−1} q_n − p_n q_{n−1} = (−1)^n and that n is even.
Now, this equation implies that α = [a0; a1, ..., an, ω]. Since the fact that 0 < θ < 1/2 implies that ω > 1, we conclude that the continued fraction expansion of α must be [a0; a1, ..., an, b0, b1, ...], where [b0; b1, ...] is the continued fraction expansion of ω, and therefore that pn/qn = p/q is a convergent of the continued fraction of α.
This theorem forms the basis for Wiener's attack, a polynomial-time exploit of the RSA cryptographic protocol that can occur for an injudicious choice of public and private keys (specifically, this attack succeeds if the prime factors of the public key n = pq satisfy p < q < 2p and the private key d is less than (1/3)n^(1/4)).
Comparison
Consider and . If is the smallest index for which is unequal to then if and otherwise.
If there is no such , but one expansion is shorter than the other, say and with for , then if is even and if is odd.
Continued fraction expansion of and its convergents
To calculate the convergents of we may set , define and , and , . Continuing like this, one can determine the infinite continued fraction of as
[3;7,15,1,292,1,1,...] .
The fourth convergent of is [3;7,15,1] = = 3.14159292035..., sometimes called Milü, which is fairly close to the true value of .
Let us suppose that the quotients found are, as above, [3;7,15,1]. The following is a rule by which we can write down at once the convergent fractions which result from these quotients without developing the continued fraction.
The first quotient, supposed divided by unity, will give the first fraction, which will be too small, namely, . Then, multiplying the numerator and denominator of this fraction by the second quotient and adding unity to the numerator, we shall have the second fraction, , which will be too large. Multiplying in like manner the numerator and denominator of this fraction by the third quotient, and adding to the numerator the numerator of the preceding fraction, and to the denominator the denominator of the preceding fraction, we shall have the third fraction, which will be too small. Thus, the third quotient being 15, we have for our numerator , and for our denominator, . The third convergent, therefore, is . We proceed in the same manner for the fourth convergent. The fourth quotient being 1, we say 333 times 1 is 333, and this plus 22, the numerator of the fraction preceding, is 355; similarly, 106 times 1 is 106, and this plus 7 is 113.
In this manner, by employing the four quotients [3;7,15,1], we obtain the four fractions:
, , , , ....
To sum up, the pattern is
These convergents are alternately smaller and larger than the true value of , and approach nearer and nearer to . The difference between a given convergent and is less than the reciprocal of the product of the denominators of that convergent and the next convergent. For example, the fraction is greater than , but − is less than = (in fact, − is just more than = ).
The demonstration of the foregoing properties is deduced from the fact that if we seek the difference between one of the convergent fractions and the next adjacent to it we shall obtain a fraction of which the numerator is always unity and the denominator the product of the two denominators. Thus the difference between and is , in excess; between and , , in deficit; between and , , in excess; and so on. The result being, that by employing this series of differences we can express in another and very simple manner the fractions with which we are here concerned, by means of a second series of fractions of which the numerators are all unity and the denominators successively be the product of every two adjacent denominators. Instead of the fractions written above, we have thus the series:
+ − + − ...
The first term, as we see, is the first fraction; the first and second together give the second fraction, ; the first, the second and the third give the third fraction , and so on with the rest; the result being that the series entire is equivalent to the original value.
Non-simple continued fraction
A non-simple continued fraction is an expression of the form
where the an (n > 0) are the partial numerators, the bn are the partial denominators, and the leading term b0 is called the integer part of the continued fraction.
To illustrate the use of non-simple continued fractions, consider the following example. The sequence of partial denominators of the simple continued fraction of does not show any obvious pattern:
or
However, several non-simple continued fractions for have a perfectly regular structure, such as:
The first two of these are special cases of the arctangent function with = 4 arctan (1) and the fourth and fifth one can be derived using the Wallis product.
The continued fraction of above consisting of cubes uses the Nilakantha series and an exploit from Leonhard Euler.
Other continued fraction expansions
Periodic continued fractions
The numbers with periodic continued fraction expansion are precisely the irrational solutions of quadratic equations with rational coefficients; rational solutions have finite continued fraction expansions as previously stated. The simplest examples are the golden ratio φ = [1;1,1,1,1,1,...] and = [1;2,2,2,2,...], while = [3;1,2,1,6,1,2,1,6...] and = [6;2,12,2,12,2,12...]. All irrational square roots of integers have a special form for the period; a symmetrical string, like the empty string (for ) or 1,2,1 (for ), followed by the double of the leading integer.
A property of the golden ratio φ
Because the continued fraction expansion for φ doesn't use any integers greater than 1, φ is one of the most "difficult" real numbers to approximate with rational numbers. Hurwitz's theorem states that any irrational number can be approximated by infinitely many rational with
While virtually all real numbers will eventually have infinitely many convergents whose distance from is significantly smaller than this limit, the convergents for φ (i.e., the numbers , , , , etc.) consistently "toe the boundary", keeping a distance of almost exactly away from φ, thus never producing an approximation nearly as impressive as, for example, for . It can also be shown that every real number of the form , where , , , and are integers such that , shares this property with the golden ratio φ; and that all other real numbers can be more closely approximated.
Regular patterns in continued fractions
While there is no discernible pattern in the simple continued fraction expansion of , there is one for , the base of the natural logarithm:
which is a special case of this general expression for positive integer :
Another, more complex pattern appears in this continued fraction expansion for positive odd :
with a special case for :
Other continued fractions of this sort are
where is a positive integer; also, for integer :
with a special case for :
If is the modified, or hyperbolic, Bessel function of the first kind, we may define a function on the rationals by
which is defined for all rational numbers, with and in lowest terms. Then for all nonnegative rationals, we have
with similar formulas for negative rationals; in particular we have
Many of the formulas can be proved using Gauss's continued fraction.
Typical continued fractions
Most irrational numbers do not have any periodic or regular behavior in their continued fraction expansion. Nevertheless, almost all numbers on the unit interval share the same limiting behavior.
The arithmetic average diverges: , and so the coefficients grow arbitrarily large: . In particular, this implies that almost all numbers are well-approximable, in the sense that
Khinchin proved that the geometric mean of tends to a constant (known as Khinchin's constant):
Paul Lévy proved that the th root of the denominator of the th convergent converges to Lévy's constant
Lochs' theorem states that the convergents converge exponentially at the rate of
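A rough empirical check of Khinchin's result (a sketch; it uses pseudo-random 200-bit rationals as stand-ins for "typical" numbers, extracts their early coefficients with exact arithmetic, and the sample geometric mean should land near Khinchin's constant, about 2.685):

import random
from fractions import Fraction
from math import floor, log, exp

def cf_digits(x, n):
    """First n continued fraction coefficients a1, a2, ... of x in (0, 1)."""
    out = []
    for _ in range(n):
        if x == 0:
            break
        x = 1 / x
        a = floor(x)
        out.append(a)
        x -= a
    return out

random.seed(1)
logs = []
for _ in range(300):
    x = Fraction(random.getrandbits(200) | 1, 2 ** 200)   # pseudo-random rational in (0, 1)
    logs.extend(log(a) for a in cf_digits(x, 40))
print(exp(sum(logs) / len(logs)))   # geometric mean of the sampled coefficients; roughly 2.68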
Applications
Pell's equation
Continued fractions play an essential role in the solution of Pell's equation. For example, for positive integers and , and non-square , it is true that if , then is a convergent of the regular continued fraction for . The converse holds if the period of the regular continued fraction for is 1, and in general the period describes which convergents give solutions to Pell's equation.
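A sketch of this use of convergents (illustrative; it assumes D is a positive integer that is not a perfect square, otherwise the loop would not terminate). It steps through the periodic continued fraction of sqrt(D) with the standard integer recurrence and stops at the first convergent h/k with h² − D·k² = 1:

from math import isqrt

def pell_fundamental_solution(d):
    """Smallest (x, y) with x*x - d*y*y == 1, found among the convergents of sqrt(d)."""
    a0 = isqrt(d)                      # d must not be a perfect square
    m, den, a = 0, 1, a0               # state of the continued fraction algorithm for sqrt(d)
    h_prev, h = 1, a0                  # convergent numerators
    k_prev, k = 0, 1                   # convergent denominators
    while h * h - d * k * k != 1:
        m = den * a - m
        den = (d - m * m) // den
        a = (a0 + m) // den
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
    return h, k

print(pell_fundamental_solution(7))    # (8, 3):  8*8 - 7*3*3 == 1
print(pell_fundamental_solution(61))   # (1766319049, 226153980)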
Dynamical systems
Continued fractions also play a role in the study of dynamical systems, where they tie together the Farey fractions which are seen in the Mandelbrot set with Minkowski's question-mark function and the modular group Gamma.
The backwards shift operator for continued fractions is the map called the Gauss map, which lops off digits of a continued fraction expansion: . The transfer operator of this map is called the Gauss–Kuzmin–Wirsing operator. The distribution of the digits in continued fractions is given by the zeroth eigenvector of this operator, and is called the Gauss–Kuzmin distribution.
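A small sketch of the Gauss map acting as a shift (illustrative only): each application strips the leading coefficient from the expansion of the fractional part.

from fractions import Fraction
from math import floor

def gauss_map(x):
    """G(x) = 1/x - floor(1/x) for x in (0, 1); by convention G(0) = 0."""
    if x == 0:
        return Fraction(0)
    r = 1 / Fraction(x)
    return r - floor(r)

x = Fraction(27, 32)                 # 0.84375 = [0; 1, 5, 2, 2]
digits = []
while x != 0:
    digits.append(floor(1 / x))      # leading digit of the current expansion
    x = gauss_map(x)                 # shift the expansion one place to the left
print(digits)                        # [1, 5, 2, 2]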
History
300 BCE Euclid's Elements contains an algorithm for the greatest common divisor, whose modern version generates a continued fraction as the sequence of quotients of successive Euclidean divisions that occur in it.
499 The Aryabhatiya contains the solution of indeterminate equations using continued fractions
1572 Rafael Bombelli, L'Algebra Opera – method for the extraction of square roots which is related to continued fractions
1613 Pietro Cataldi, Trattato del modo brevissimo di trovar la radice quadra delli numeri – first notation for continued fractions
Cataldi represented a continued fraction as & & & with the dots indicating where the following fractions went.
1695 John Wallis, Opera Mathematica – introduction of the term "continued fraction"
1737 Leonhard Euler, De fractionibus continuis dissertatio – provided the first comprehensive account of the properties of continued fractions, and included the first proof that the number e is irrational.
1748 Euler, Introductio in analysin infinitorum. Vol. I, Chapter 18 – proved the equivalence of a certain form of continued fraction and a generalized infinite series, proved that every rational number can be written as a finite continued fraction, and proved that the continued fraction of an irrational number is infinite.
1761 Johann Lambert – gave the first proof of the irrationality of using a continued fraction for tan(x).
1768 Joseph-Louis Lagrange – provided the general solution to Pell's equation using continued fractions similar to Bombelli's
1770 Lagrange – proved that quadratic irrationals expand to periodic continued fractions.
1813 Carl Friedrich Gauss, Werke, Vol. 3, pp. 134–138 – derived a very general complex-valued continued fraction via a clever identity involving the hypergeometric function
1892 Henri Padé defined Padé approximant
1972 Bill Gosper – First exact algorithms for continued fraction arithmetic.
See also
Notes
References
External links
Linas Vepstas Continued Fractions and Gaps (2004) reviews chaotic structures in continued fractions.
Continued Fractions on the Stern-Brocot Tree at cut-the-knot
The Antikythera Mechanism I: Gear ratios and continued fractions
Continued fraction calculator, WIMS.
Continued Fraction Arithmetic Gosper's first continued fractions paper, unpublished. Cached on the Internet Archive's Wayback Machine
Continued Fractions by Stephen Wolfram and Continued Fraction Approximations of the Tangent Function by Michael Trott, Wolfram Demonstrations Project.
A view into "fractional interpolation" of a continued fraction }
Best rational approximation through continued fractions
CONTINUED FRACTIONS by C. D. Olds
Mathematical analysis | Simple continued fraction | [
"Mathematics"
] | 7,348 | [
"Mathematical analysis",
"Continued fractions",
"Number theory"
] |
46,825 | https://en.wikipedia.org/wiki/Otto%20Hahn | Otto Hahn (; 8 March 1879 – 28 July 1968) was a German chemist who was a pioneer in the field of radiochemistry. He is referred to as the father of nuclear chemistry and discoverer of nuclear fission, the science behind nuclear reactors and nuclear weapons. Hahn and Lise Meitner discovered isotopes of the radioactive elements radium, thorium, protactinium and uranium. He also discovered the phenomena of atomic recoil and nuclear isomerism, and pioneered rubidium–strontium dating. In 1938, Hahn, Meitner and Fritz Strassmann discovered nuclear fission, for which Hahn alone was awarded the 1944 Nobel Prize in Chemistry.
A graduate of the University of Marburg, which awarded him a doctorate in 1901, Hahn studied under Sir William Ramsay at University College London and at McGill University in Montreal under Ernest Rutherford, where he discovered several new radioactive isotopes. He returned to Germany in 1906; Emil Fischer let him use a former woodworking shop in the basement of the Chemical Institute at the University of Berlin as a laboratory. Hahn completed his habilitation in early 1907 and became a Privatdozent. In 1912, he became head of the Radioactivity Department of the newly founded Kaiser Wilhelm Institute for Chemistry (KWIC). Working with the Austrian physicist Lise Meitner in the building that now bears their names, they made a series of groundbreaking discoveries, culminating with her isolation of the longest-lived isotope of protactinium in 1918.
During World War I he served with a Landwehr regiment on the Western Front, and with the chemical warfare unit headed by Fritz Haber on the Western, Eastern and Italian fronts, earning the Iron Cross (2nd Class) for his part in the First Battle of Ypres. After the war he became the head of the KWIC, while remaining in charge of his own department. Between 1934 and 1938, he worked with Strassmann and Meitner on the study of isotopes created by neutron bombardment of uranium and thorium, which led to the discovery of nuclear fission. He was an opponent of Nazism and the persecution of Jews by the Nazi Party that caused the removal of many of his colleagues, including Meitner, who was forced to flee Germany in 1938. During World War II, he worked on the German nuclear weapons program, cataloguing the fission products of uranium. At the end of the war he was arrested by the Allied forces and detained in Farm Hall with nine other German scientists, from July 1945 to January 1946.
Hahn served as the last president of the Kaiser Wilhelm Society for the Advancement of Science in 1946 and as the founding president of its successor, the Max Planck Society from 1948 to 1960. In 1959 in Berlin he co-founded the Federation of German Scientists, a non-governmental organisation committed to the ideal of responsible science. As he worked to rebuild German science, he became one of the most influential and respected citizens of post-war West Germany.
Early life and education
Otto Hahn was born in Frankfurt am Main on 8 March 1879, the youngest son of Heinrich Hahn (1845–1922), a prosperous glazier (and founder of the Glasbau Hahn company), and Charlotte Hahn née Giese (1845–1905). He had an older half-brother Karl, his mother's son from her previous marriage, and two older brothers, Heiner and Julius. The family lived above his father's workshop. The younger three boys were educated at the Klinger Oberrealschule in Frankfurt. At the age of 15, he began to take a special interest in chemistry, and carried out simple experiments in the laundry room of the family home. His father wanted Otto to study architecture, as he had built or acquired several residential and business properties, but Otto persuaded him that his ambition was to become an industrial chemist.
In 1897, after passing his Abitur, Hahn began to study chemistry at the University of Marburg. His subsidiary subjects were mathematics, physics, mineralogy and philosophy. Hahn joined the Students' Association of Natural Sciences and Medicine, a student fraternity and a forerunner of today's Landsmannschaft Nibelungi (Coburger Convent der akademischen Landsmannschaften und Turnerschaften). He spent his third and fourth semesters at the University of Munich, studying organic chemistry under Adolf von Baeyer, physical chemistry under , and inorganic chemistry under Karl Andreas Hofmann. In 1901, Hahn received his doctorate in Marburg for a dissertation entitled "On Bromine Derivates of Isoeugenol", a topic in classical organic chemistry. He completed his one-year military service (instead of the usual two because he had a doctorate) in the 81st Infantry Regiment, but unlike his brothers, did not apply for a commission. He then returned to the University of Marburg, where he worked for two years as assistant to his doctoral supervisor, Geheimrat professor Theodor Zincke.
Early career in London and Canada
Discovery of radiothorium and other "new elements"
Hahn's intention was still to work in industry. He received an offer of employment from Eugen Fischer, the director of Kalle & Co. (and the father of organic chemist Hans Fischer), but a condition of employment was that Hahn had to have lived in another country and have a reasonable command of another language. With this in mind, and to improve his knowledge of English, Hahn took up a post at University College London in 1904, working under Sir William Ramsay, who was known for having discovered the noble gases. Here Hahn worked on radiochemistry, at that time a very new field. In early 1905, in the course of his work with salts of radium, Hahn discovered a new substance he called radiothorium (thorium-228), which at that time was believed to be a new radioactive element. In fact, it was an isotope of the known element thorium; the concept of an isotope, along with the term, was coined in 1913 by the British chemist Frederick Soddy.
Ramsay was enthusiastic when yet another new element was found in his institute, and he intended to announce the discovery in a correspondingly suitable way. In accordance with tradition this was done before the committee of the venerable Royal Society. At the session of the Royal Society on 16 March 1905 Ramsay communicated Hahn's discovery of radiothorium. The Daily Telegraph informed its readers:
Hahn published his results in the Proceedings of the Royal Society on 24 May 1905. It was the first of more than 250 scientific publications in the field of radiochemistry. At the end of his time in London, Ramsay asked Hahn about his plans for the future, and Hahn told him about the job offer from Kalle & Co. Ramsay told him radiochemistry had a bright future, and that someone who had discovered a new radioactive element should go to the University of Berlin. Ramsay wrote to Emil Fischer, the head of the chemistry institute there, who replied that Hahn could work in his laboratory, but could not be a Privatdozent because radiochemistry was not taught there. At this point, Hahn decided that he first needed to know more about the subject, so he wrote to the leading expert on the field, Ernest Rutherford. Rutherford agreed to take Hahn on as an assistant, and Hahn's parents undertook to pay Hahn's expenses.
From September 1905 until mid-1906, Hahn worked with Rutherford's group in the basement of the Macdonald Physics Building at McGill University in Montreal. There was some scepticism about the existence of radiothorium, which Bertram Boltwood memorably described as a compound of thorium X and stupidity. Boltwood was soon convinced that it did exist, although he and Hahn differed on what its half-life was. William Henry Bragg and Richard Kleeman had noted that the alpha particles emitted from radioactive substances always had the same energy, providing a second way of identifying them, so Hahn set about measuring the alpha particle emissions of radiothorium. In the process, he found that a precipitation of thorium A (polonium-216) and thorium B (lead-212) also contained a short-lived "element", which he named thorium C (which was later identified as polonium-212). Hahn was unable to separate it, and concluded that it had a very short half-life (it is about 300 ns). He also identified radioactinium (thorium-227) and radium D (later identified as lead-210). Rutherford remarked that: "Hahn has a special nose for discovering new elements."
Chemical Institute in Berlin
Discovery of mesothorium I
In 1906, Hahn returned to Germany, where Fischer placed at his disposal a former woodworking shop (Holzwerkstatt) in the basement of the Chemical Institute to use as a laboratory. Hahn equipped it with electroscopes to measure alpha and beta particles and gamma rays. In Montreal these had been made from discarded coffee tins; Hahn made the ones in Berlin from brass, with aluminium strips insulated with amber. These were charged with hard rubber sticks that he rubbed against the sleeves of his suit. It was not possible to conduct research in the wood shop, but Alfred Stock, the head of the inorganic chemistry department, let Hahn use a space in one of his two private laboratories. Hahn purchased two milligrams of radium from Friedrich Oskar Giesel, the discoverer of emanium (radon), for 100 marks a milligram (), and obtained thorium for free from Otto Knöfler, whose Berlin firm was a major producer of thorium products.
In the space of a few months Hahn discovered mesothorium I (radium-228), mesothorium II (actinium-228), and – independently from Boltwood – the mother substance of radium, ionium (later identified as thorium-230). In subsequent years, mesothorium I assumed great importance because, like radium-226 (discovered by Pierre and Marie Curie), it was ideally suited for use in medical radiation treatment, but cost only half as much to manufacture. Along the way, Hahn determined that just as he was unable to separate thorium from radiothorium, so he could not separate mesothorium I from radium.
In Canada there had been no requirement to be circumspect when addressing the egalitarian New Zealander Rutherford, but many people in Germany found Hahn's manner off-putting, and characterised him as an "Anglicised Berliner". Hahn completed his habilitation in early 1907, and became a Privatdozent. A thesis was not required; the Chemical Institute accepted one of his publications on radioactivity instead. Most of the organic chemists at the Chemical Institute did not regard Hahn's work as real chemistry. Fischer objected to Hahn's contention in his habilitation colloquium that many radioactive substances existed in such tiny amounts that they could only be detected by their radioactivity, venturing that he had always been able to detect substances with his keen sense of smell, but soon gave in. One department head remarked: "it is incredible what one gets to be a Privatdozent these days!"
Physicists were more accepting of Hahn's work, and he began attending a colloquium at the Physics Institute conducted by Heinrich Rubens. It was at one of these colloquia where, on 28 September 1907, he made the acquaintance of the Austrian physicist Lise Meitner. Almost the same age as himself, she was only the second woman to receive a doctorate from the University of Vienna, and had already published two papers on radioactivity. Rubens suggested her as a possible collaborator. So began the thirty-year collaboration and lifelong close friendship between the two scientists.
In Montreal, Hahn had worked with physicists including at least one woman, Harriet Brooks, but it was difficult for Meitner at first. Women were not yet admitted to universities in Prussia. Meitner was allowed to work in the wood shop, which had its own external entrance, but could not enter the rest of the institute, including Hahn's laboratory space upstairs. If she wanted to go to the toilet, she had to use one at the restaurant down the street. The following year, women were admitted to universities, and Fischer lifted the restrictions and had women's toilets installed in the building.
Discovery of radioactive recoil
Harriet Brooks had observed radioactive recoil in 1904, but misinterpreted it. Hahn and Meitner succeeded in demonstrating the radioactive recoil that accompanies alpha particle emission and interpreted it correctly. Hahn pursued a report by Stefan Meyer and Egon Schweidler of a decay product of actinium with a half-life of about 11.8 days, and determined that it was actinium X (radium-223). He also discovered that at the moment when a radioactinium (thorium-227) atom emits an alpha particle, it does so with great force, and the actinium X experiences a recoil. The recoil is enough to free the atom from its chemical bonds; since it carries a positive charge, it can be collected on a negative electrode.
Hahn was thinking only of actinium, but on reading his paper, Meitner told him that he had found a new way of detecting radioactive substances. They set up some tests, and soon found actinium C (thallium-207) and thorium C (thallium-208). The physicist Walther Gerlach described radioactive recoil as "a profoundly significant discovery in physics with far-reaching consequences".
Kaiser Wilhelm Institute for Chemistry
In 1910, Hahn was appointed professor by the Prussian Minister of Culture and Education, August von Trott zu Solz. Two years later, Hahn became head of the Radioactivity Department of the newly founded Kaiser Wilhelm Institute for Chemistry (KWIC) in Berlin-Dahlem (in what is today the Hahn-Meitner-Building of the Free University of Berlin). This came with an annual salary of 5,000 marks (). In addition, he received 66,000 marks in 1914 () from Knöfler for the mesothorium process, of which he gave 10 per cent to Meitner. The new institute was inaugurated on 23 October 1912 in a ceremony presided over by Kaiser Wilhelm II. The Kaiser was shown glowing radioactive substances in a dark room.
The move to new accommodation was fortuitous, as the wood shop had become heavily contaminated by radioactive liquids that had been spilt, and radioactive gases that had vented and then decayed and settled as radioactive dust, making sensitive measurements impossible. To ensure that their clean new laboratories stayed that way, Hahn and Meitner instituted strict procedures. Chemical and physical measurements were conducted in different rooms, people handling radioactive substances had to follow protocols that included not shaking hands, and rolls of toilet paper were hung next to every telephone and door handle. Strongly radioactive substances were stored in the old wood shop, and later in a purpose-built radium house on the institute grounds.
World War I
In July 1914—shortly before the outbreak of World War I—Hahn was recalled to active duty with the army in a Landwehr regiment. They marched through Belgium, where the platoon he commanded was armed with captured machine guns. He was awarded the Iron Cross (2nd Class) for his part in the First Battle of Ypres. He was a joyful participant in the Christmas truce of 1914, and was commissioned as a lieutenant. In mid-January 1915, he was summoned to meet chemist Fritz Haber, who explained his plan to break the trench deadlock with chlorine gas. Hahn raised the issue that the Hague Convention banned the use of projectiles containing poison gases, but Haber explained that the French had already initiated chemical warfare with tear gas grenades, and he planned to get around the letter of the convention by releasing gas from cylinders instead of shells.
Haber's new unit was called Pioneer Regiment 35. After brief training in Berlin, Hahn, together with physicists James Franck and Gustav Hertz, was sent to Flanders again to scout for a site for a first gas attack. He did not witness the attack because he and Franck were off selecting a position for the next attack. Transferred to Poland, at the Battle of Bolimów on 12 June 1915, they released a mixture of chlorine and phosgene gas. Some German troops were reluctant to advance when the gas started to blow back, so Hahn led them across No Man's land. He witnessed the death agonies of Russians they had poisoned, and unsuccessfully attempted to revive some with gas masks. He was transferred to Berlin as a human guinea pig testing poisonous gases and gas masks. On their next attempt on 7 July, the gas again blew back on German lines, and Hertz was poisoned. This assignment was interrupted by a mission at the front in Flanders and again in 1916 by a mission to Verdun to introduce shells filled with phosgene to the Western Front. Then once again he was hunting along both fronts for sites for gas attacks. In December 1916 he joined the new gas command unit at Imperial Headquarters.
Between operations, Hahn returned to Berlin, where he was able to slip back to his old laboratory and work with Meitner, continuing with their research. In September 1917 he was one of three officers, disguised in Austrian uniforms, sent to the Isonzo front in Italy to find a suitable location for an attack, using newly developed rifled minenwerfers that simultaneously hurled hundreds of containers of poison gas onto enemy targets. They selected a site where the Italian trenches were sheltered in a deep valley so that a gas cloud would persist. The following Battle of Caporetto broke the Italian lines, and the Central Powers overran much of northern Italy. That summer Hahn was accidentally poisoned by phosgene while testing a new model of gas mask. At the end of the war he was in the field in mufti on a secret mission to test a pot that heated and released a cloud of arsenicals.
Discovery of protactinium
In 1913, chemists Frederick Soddy and Kasimir Fajans independently observed that alpha decay caused atoms to move down two places on the periodic table, while the loss of two beta particles restored it to its original position. Under the resulting reorganisation of the periodic table, radium was placed in group II, actinium in group III, thorium in group IV and uranium in group VI. This left a gap between thorium and uranium. Soddy predicted that this unknown element, which he referred to (after Dmitri Mendeleev) as "ekatantalium", would be an alpha emitter with chemical properties similar to tantalium. It was not long before Fajans and Oswald Helmuth Göhring discovered it as a decay product of a beta-emitting product of thorium. Based on the radioactive displacement law of Fajans and Soddy, this was an isotope of the missing element, which they named "brevium" after its short half life. However, it was a beta emitter, and therefore could not be the mother isotope of actinium. This had to be another isotope of the same element.
Hahn and Meitner set out to find the missing mother isotope. They developed a new technique for separating the tantalum group from pitchblende, which they hoped would speed the isolation of the new isotope. The work was interrupted by the First World War. Meitner became an X-ray nurse, working in Austrian Army hospitals, but she returned to the Kaiser Wilhelm Institute in October 1916. Hahn joined the new gas command unit at Imperial Headquarters in Berlin in December 1916 after travelling between the western and eastern front, Berlin and Leverkusen between mid-1914 and late 1916.
Most of the students, laboratory assistants and technicians had been called up, so Hahn, who was stationed in Berlin between January and September 1917, and Meitner had to do everything themselves. By December 1917 she was able to isolate the substance, and after further work they were able to prove that it was indeed the missing isotope. Meitner submitted her and Hahn's findings for publication in March 1918 to the scientific journal Physikalische Zeitschrift under the title ("The Mother Substance of Actinium; A New Radioactive Element with a Long Lifetime"). Although Fajans and Göhring had been the first to discover the element, custom required that an element was represented by its longest-lived and most abundant isotope, and while brevium had a half-life of 1.7 minutes, Hahn and Meitner's isotope had one of 32,500 years. The name brevium no longer seemed appropriate. Fajans agreed to Meitner and Hahn naming the element "protoactinium".
In June 1918, Soddy and John Cranston announced that they had extracted a sample of the isotope, but unlike Hahn and Meitner were unable to describe its characteristics. They acknowledged Hahn's and Meitner's priority, and agreed to the name. The connection to uranium remained a mystery, as neither of the known isotopes of uranium decayed into protactinium. It remained unsolved until the mother isotope, uranium-235, was discovered in 1929. For their discovery Hahn and Meitner were repeatedly nominated for the Nobel Prize in Chemistry in the 1920s by several scientists, among them Max Planck, Heinrich Goldschmidt, and Fajans himself. In 1949, the International Union of Pure and Applied Chemistry (IUPAC) named the new element definitively protactinium, and confirmed Hahn and Meitner as discoverers.
Discovery of nuclear isomerism
With the discovery of protactinium, most of the decay chains of uranium had been mapped. When Hahn returned to his work after the war, he looked back over his 1914 results, and considered some anomalies that had been dismissed or overlooked. He dissolved uranium salts in a hydrofluoric acid solution with tantalic acid. First the tantalum in the ore was precipitated, then the protactinium. In addition to the uranium X1 (thorium-234) and uranium X2 (protactinium-234), Hahn detected traces of a radioactive substance with a half-life of between 6 and 7 hours. One isotope was known to have a half-life of 6.2 hours: mesothorium II (actinium-228). This was not in any probable decay chain, but it could have been contamination, as the KWIC had experimented with it. Hahn and Meitner had demonstrated in 1919 that when actinium is treated with hydrofluoric acid, it remains in the insoluble residue. Since mesothorium II was an isotope of actinium, and the new substance had instead followed the soluble protactinium fraction, it could not be mesothorium II; it was an isotope of protactinium. Hahn was now confident enough that he had found something new, and named the isotope "uranium Z". In February 1921, he published the first report on his discovery.
Hahn determined that uranium Z had a half-life of around 6.7 hours (with a two per cent margin of error) and that when uranium X1 decayed, it became uranium X2 about 99.75 per cent of the time, and uranium Z around 0.25 per cent of the time. He found that the proportion of uranium X to uranium Z extracted from several kilograms of uranyl nitrate remained constant over time, strongly indicating that uranium X was the mother of uranium Z. To prove this, Hahn obtained a hundred kilograms of uranyl nitrate; separating the uranium X from it took weeks. He found that the half-life of the parent of uranium Z differed from the known 24-day half-life of uranium X1 by no more than two or three days, but was unable to get a more accurate value. Hahn concluded that uranium Z and uranium X2 were the same isotope of protactinium (protactinium-234), both decaying into uranium II (uranium-234) but with different half-lives.
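In modern notation, the branching that Hahn had uncovered can be summarised as follows (a sketch based on the identifications given in the text; the metastable-state label m on uranium X2 is a later convention, and the branching fractions are Hahn's measured values):

$$
\begin{aligned}
{}^{234}_{90}\mathrm{Th}\ (\text{uranium X}_1) &\xrightarrow{\ \beta^-,\ \sim 99.75\%\ } {}^{234m}_{91}\mathrm{Pa}\ (\text{uranium X}_2) \xrightarrow{\ \beta^-\ } {}^{234}_{92}\mathrm{U}\ (\text{uranium II})\\
{}^{234}_{90}\mathrm{Th}\ (\text{uranium X}_1) &\xrightarrow{\ \beta^-,\ \sim 0.25\%\ } {}^{234}_{91}\mathrm{Pa}\ (\text{uranium Z}) \xrightarrow{\ \beta^-\ } {}^{234}_{92}\mathrm{U}\ (\text{uranium II})
\end{aligned}
$$

The two protactinium-234 states are the isomers: the same nuclide decaying to uranium II with different half-lives.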
Uranium Z was the first example of nuclear isomerism. Walther Gerlach later remarked that this was "a discovery that was not understood at the time but later became highly significant for nuclear physics". Not until 1936 was Carl Friedrich von Weizsäcker able to provide a theoretical explanation of the phenomenon. For this discovery, whose full significance was recognised by very few, Hahn was again proposed for the Nobel Prize in Chemistry by Bernhard Naunyn, Goldschmidt and Planck.
Applied Radiochemistry
In 1924, Hahn was elected to full membership of the Prussian Academy of Sciences in Berlin, by a vote of thirty white balls to two black. While still remaining the head of his own department, he became Deputy Director of the KWIC in 1924, and succeeded Alfred Stock as the director in 1928. Meitner became the director of the Physical Radioactivity Division, while Hahn headed the Chemical Radioactivity Division.
In the early 1920s, Hahn created a new line of research. Using the "emanation method", which he had recently developed, and the "emanation ability", he founded what became known as "applied radiochemistry" for researching general chemical and physico-chemical questions. In 1936 Cornell University Press published a book in English (and later in Russian) titled Applied Radiochemistry, which contained the lectures given by Hahn when he was a visiting professor at Cornell University in Ithaca, New York, in 1933. This publication had a major influence on almost all nuclear chemists and physicists in the United States, the United Kingdom, France, and the Soviet Union during the 1930s and 1940s. Hahn is referred to as the father of nuclear chemistry, which emerged from applied radiochemistry.
National Socialist Germany
Impact of National Socialism
Fritz Strassmann had come to the KWIC to study under Hahn to improve his employment prospects. After the Nazi Party (NSDAP) came to power in Germany in 1933, Strassmann declined a lucrative offer of employment because it required political training and Nazi Party membership. Later, rather than become a member of a Nazi-controlled organisation, Strassmann resigned from the Society of German Chemists when it became part of the Nazi German Labour Front. As a result, he could neither work in the chemical industry nor receive his habilitation, the prerequisite for an academic position. Meitner persuaded Hahn to hire Strassmann as an assistant. Soon he would be credited as a third collaborator on the papers they produced, and would sometimes even be listed first.
Hahn spent February to June 1933 in the United States and Canada as a visiting professor at Cornell University. He gave an interview to the Toronto Star Weekly in which he painted a flattering portrait of Adolf Hitler.
The April 1933 Law for the Restoration of the Professional Civil Service banned Jews and communists from academia. Meitner was exempt from its impact because she was an Austrian rather than a German citizen. Haber was likewise exempt as a veteran of World War I, but chose to resign his directorship of the Kaiser Wilhelm Institute of Physical Chemistry and Electrochemistry in protest on 30 April 1933. The directors of the other Kaiser Wilhelm Institutes, even the Jewish ones, complied with the new law, which applied to the KWS as a whole and to those Kaiser Wilhelm institutes with more than 50 per cent state support; the KWI for Chemistry, which received less, was exempt. Hahn therefore did not have to fire any of his own full-time staff, but as the interim director of Haber's institute, he dismissed a quarter of its staff, including three department heads. Gerhart Jander was appointed the new director of Haber's old institute, and reoriented it towards chemical warfare research.
Like most KWS institute directors, Haber had accrued a large discretionary fund. It was his wish that it be distributed to the dismissed staff to facilitate their emigration. Hahn brokered a deal whereby 10 per cent of the funds would be allocated to Haber's people and the rest to the KWS, but the Rockefeller Foundation insisted that the funds be used for their original scientific research or else be returned. In August 1933 the administrators of the KWS were alerted that several boxes of Rockefeller Foundation-funded equipment were about to be shipped to Herbert Freundlich, one of the department heads that Hahn had dismissed, who was now working in England. The official in charge while Planck, the president of the KWS since 1930, was on vacation was a Nazi Party member, and he ordered the shipment halted. Hahn complied, but he disagreed with the decision on the grounds that funds from abroad should not be diverted to military research, which the KWS was increasingly undertaking. When Planck returned from vacation, he ordered Hahn to expedite the shipment.
Haber died on 29 January 1934. A memorial service was held on the first anniversary of his death. University professors were forbidden to attend, so they sent their wives in their place. Hahn, Planck and Joseph Koeth attended, and gave speeches. The aging Planck did not seek re-election, and was succeeded in 1937 as president by Carl Bosch, a winner of the Nobel Prize in Chemistry and the chairman of the board of IG Farben, a company which had bankrolled the Nazi Party since 1932. Telschow became Secretary of the KWS. He was an enthusiastic supporter of the Nazis, but was also loyal to Hahn, being one of his former students, and Hahn welcomed his appointment. Hahn's chief assistant, Otto Erbacher, became the KWI for Chemistry's party steward (Vertrauensmann).
Rubidium–strontium dating
While Hahn was in North America in 1905–1906, his attention had been drawn to a mica-like mineral from Manitoba that contained rubidium. He had studied the radioactive decay of rubidium-87, and had estimated its half-life at 2 × 10^11 years. It occurred to him that by comparing the quantity of strontium in the mineral (which had once been rubidium) with that of the remaining rubidium, he could measure the age of the mineral, assuming that his original calculation of the half-life was reasonably accurate. This would be a superior dating method to studying the decay of uranium, because some of the uranium turns into helium, which then escapes, resulting in rocks appearing to be younger than they really were. Jacob Papish helped Hahn obtain several kilograms of the mineral.
In 1937, Strassmann and Ernst Walling extracted 253.4 milligrams of strontium carbonate from 1,012 grams of the mineral; all of the strontium in it was the strontium-87 isotope, indicating that it had all been produced by the radioactive decay of rubidium-87. The age of the mineral had been estimated at 1,975 million years from uranium minerals in the same deposit, which implied that the half-life of rubidium-87 was 2.3 × 10^11 years: quite close to Hahn's original calculation. Rubidium–strontium dating became a widely used technique for dating rocks in the 1950s, when mass spectrometry became common.
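The arithmetic behind this comparison can be sketched as follows. This is a minimal illustration, not Hahn's actual procedure: the decay law used is the standard one, but because the text does not give the rubidium content of the mineral, the nuclide amounts in the example call are hypothetical values chosen only to show that a ratio of this order reproduces a half-life near 2.3 × 10^11 years for a mineral 1,975 million years old.

```python
import math

def rb_sr_age(rb87, sr87, half_life_years):
    """Age of a mineral from its present Rb-87 and radiogenic Sr-87 content,
    assuming all of the Sr-87 was produced in place by decay of Rb-87."""
    decay_constant = math.log(2) / half_life_years
    # Initially N_Rb(0) = rb87 + sr87 and now N_Rb(t) = rb87,
    # so t = ln(N_Rb(0) / N_Rb(t)) / lambda.
    return math.log(1.0 + sr87 / rb87) / decay_constant

def implied_half_life(rb87, sr87, known_age_years):
    """Invert the same relation: given an independently dated mineral,
    recover the Rb-87 half-life (the check made against the
    uranium-based age of the deposit)."""
    return known_age_years * math.log(2) / math.log(1.0 + sr87 / rb87)

# Hypothetical nuclide amounts (any consistent units); only the ratio matters.
rb87, sr87 = 1.0, 0.006
print(implied_half_life(rb87, sr87, 1.975e9))   # roughly 2.3e11 years
print(rb_sr_age(rb87, sr87, 2.3e11))            # roughly 2.0e9 years
```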
Discovery of nuclear fission
After James Chadwick discovered the neutron in 1932, Irène Curie and Frédéric Joliot irradiated aluminium foil with alpha particles. They found that this results in a short-lived radioactive isotope of phosphorus. They noted that positron emission continued after the neutron emissions ceased. Not only had they discovered a new form of radioactive decay, they had transmuted an element into a hitherto unknown radioactive isotope of another, thereby inducing radioactivity where there had been none before. Radiochemistry was now no longer confined to certain heavy elements, but extended to the entire periodic table. Chadwick noted that being electrically neutral, neutrons could penetrate the atomic nucleus more easily than protons or alpha particles. Enrico Fermi and his colleagues in Rome picked up on this idea, and began irradiating elements with neutrons.
The radioactive displacement law of Fajans and Soddy said that beta decay causes isotopes to move one element up on the periodic table, and alpha decay causes them to move two down. When Fermi's group bombarded uranium atoms with neutrons, they found a complex mix of half-lives that did not match any known elements near uranium. Fermi therefore concluded that new elements with atomic numbers greater than 92 (known as transuranium elements) had been created. Meitner and Hahn had not collaborated for many years, but Meitner was eager to investigate Fermi's results. Hahn, initially, was not, but he changed his mind when Aristid von Grosse suggested that what Fermi had found was an isotope of protactinium. They set out to determine whether the 13-minute activity Fermi had observed was indeed an isotope of protactinium.
Between 1934 and 1938, Hahn, Meitner and Strassmann found a great number of radioactive transmutation products, all of which they regarded as transuranic. At that time, the existence of actinides was not yet established, and uranium was wrongly believed to be a group 6 element similar to tungsten. It followed that the first transuranic elements would be similar to group 7 to 10 elements, i.e. rhenium and platinoids. They established the presence of multiple isotopes of at least four such elements, and (mistakenly) identified them as elements with atomic numbers 93 through 96. They were the first scientists to measure the 23-minute half-life of uranium-239 and to establish chemically that it was an isotope of uranium, but were unable to continue this work to its logical conclusion and identify the real element 93. They identified ten different half-lives, with varying degrees of certainty. To account for them, Meitner had to hypothesise a new class of reaction and the alpha decay of uranium, neither of which had ever been reported before, and for which physical evidence was lacking. Hahn and Strassmann refined their chemical procedures, while Meitner devised new experiments to shine more light on the reaction processes.
In May 1937, they issued parallel reports, one in the Zeitschrift für Physik with Meitner as the principal author, and one in the Chemische Berichte with Hahn as the principal author. Hahn concluded his by stating emphatically: "Above all, their chemical distinction from all previously known elements needs no further discussion." Meitner, however, was increasingly uncertain. She considered the possibility that the reactions were from different isotopes of uranium; three were known: uranium-238, uranium-235 and uranium-234. However, when she calculated the neutron cross section, it was too large to be anything other than the most abundant isotope, uranium-238. She concluded that it must be another case of the nuclear isomerism that Hahn had discovered in protactinium. She therefore ended her report on a very different note from Hahn, reporting: "The process must be neutron capture by uranium-238, which leads to three isomeric nuclei of uranium-239. This result is very difficult to reconcile with current concepts of the nucleus."
With the Anschluss, Germany's annexation of Austria on 12 March 1938, Meitner lost her Austrian citizenship, and fled to Sweden. She carried only a little money, but before she left, Hahn gave her a diamond ring he had inherited from his mother. Meitner continued to correspond with Hahn by mail. In late 1938 Hahn and Strassmann found evidence of isotopes of an alkaline earth metal in their sample. Finding a group 2 metal was problematic, because it did not logically fit with the other elements found thus far. Hahn initially suspected it to be radium, produced by splitting off two alpha particles from the uranium nucleus, but it was considered unlikely that two alpha particles could be chipped off in this way. The idea of turning uranium into barium (by removing around 100 nucleons) was seen as preposterous.
During a visit to Copenhagen on 10 November, Hahn discussed these results with Niels Bohr, Meitner, and Otto Robert Frisch. Further refinements of the technique, leading to the decisive experiment on 16–17 December 1938, produced puzzling results: the three isotopes consistently behaved not as radium, but as barium. Hahn, who did not inform the physicists in his Institute, described the results exclusively in a letter to Meitner on 19 December.
In her reply, Meitner concurred. "At the moment, the interpretation of such a thoroughgoing breakup seems very difficult to me, but in nuclear physics we have experienced so many surprises, that one cannot unconditionally say: 'it is impossible'." On 22 December 1938, Hahn sent a manuscript to Naturwissenschaften reporting their radiochemical results, which were published on 6 January 1939. On 27 December, Hahn telephoned the editor of the Naturwissenschaften and requested an addition to the article, speculating that some platinum group elements previously observed in irradiated uranium, which were originally interpreted as transuranium elements, could in fact be technetium (then called "masurium"), mistakenly believing that the atomic masses had to add up rather than the atomic numbers. By January 1939, he was sufficiently convinced of the formation of light elements that he published a new revision of the article, retracting former claims of observing transuranic elements and neighbours of uranium.
As a chemist, Hahn was reluctant to propose a revolutionary discovery in physics, but Meitner and Frisch worked out a theoretical interpretation of nuclear fission, a term appropriated by Frisch from biology. In January and February they published two articles discussing and experimentally confirming their theory. In their second publication on nuclear fission, Hahn and Strassmann used the term Uranspaltung (uranium fission) for the first time, and predicted the existence and liberation of additional neutrons during the fission process, opening up the possibility of a nuclear chain reaction. This was shown to be the case by Frédéric Joliot and his team in March 1939. Edwin McMillan and Philip Abelson used the cyclotron at the Berkeley Radiation Laboratory to bombard uranium with neutrons, and were able to identify an isotope with a 23-minute half-life that was the daughter of uranium-239, and therefore the real element 93, which they named neptunium. "There goes a Nobel Prize", Hahn remarked.
At the KWIC, Kurt Starke independently produced element 93, using only the weak neutron sources available there. Hahn and Strassmann then began researching its chemical properties. They knew that it should decay into the real element 94, which according to the latest version of the liquid drop model of the nucleus propounded by Bohr and John Archibald Wheeler, would be even more fissile than uranium-235, but were unable to detect its radioactive decay. They concluded that it must have an extremely long half-life, perhaps millions of years. Part of the problem was that they still believed that element 94 was a platinoid, which confounded their attempts at chemical separation.
World War II
On 24 April 1939, Paul Harteck and his assistant, Wilhelm Groth, had written to the Armed Forces High Command (OKW), alerting it to the possibility of the development of an atomic bomb. In response, the Army Weapons Branch (HWA) had established a physics section under the nuclear physicist Kurt Diebner. After World War II broke out on 1 September 1939, the HWA moved to control the German nuclear weapons program. From then on, Hahn participated in a ceaseless series of meetings related to the project. After the Director of the Kaiser Wilhelm Institute for Physics, Peter Debye, left for the United States in 1940 and never returned, Diebner was installed as its director. Hahn reported to the HWA on the progress of his research. Together with his assistants, Hans-Joachim Born, Siegfried Flügge, Hans Götte, Walter Seelmann-Eggebert and Strassmann, he catalogued about one hundred fission product isotopes. They also investigated means of isotope separation; the chemistry of element 93; and methods for purifying uranium oxides and salts.
On the night of 15 February 1944, the KWIC building was struck by a bomb. Hahn's office was destroyed, along with his correspondence with Rutherford and other researchers, and many of his personal possessions. The institute was the intended target of the raid, which had been ordered by Brigadier General Leslie Groves, the director of the Manhattan Project, in the hope of disrupting the German uranium project. Albert Speer, the Reich Minister of Armaments and War Production, arranged for the institute to move to Tailfingen in southern Germany. All work in Berlin ceased by July. Hahn and his family moved to the house of a textile manufacturer there.
Life became precarious for those married to Jewish women. One was Philipp Hoernes, a chemist working for Auergesellschaft, the firm that mined the uranium ore used by the project. After the firm let him go in 1944, Hoernes faced being conscripted for forced labour, which, at the age of 60, he was unlikely to survive. Hahn and Nikolaus Riehl arranged for Hoernes to work at the KWIC, claiming that his work was essential to the uranium project and that uranium was highly toxic, making it hard to find people to work with it. Hahn was aware that uranium ore was fairly safe to handle in the laboratory, although not so much for the 2,000 female slave labourers from the Sachsenhausen concentration camp who processed it in Oranienburg. Another physicist with a Jewish wife faced a similar predicament. Hahn certified that his work was important to the war effort, and that his wife Maria, who had a doctorate in physics, was required as his assistant. After the physicist died on 19 September 1944, Maria faced being sent to a concentration camp. Hahn mounted a lobbying campaign to get her released, but to no avail, and she was sent to the Theresienstadt Ghetto in January 1945. She survived the war and was reunited with her daughters in England.
Post-war
Incarceration in Farm Hall
On 25 April 1945, an armoured task force from the British/American Alsos Mission arrived in Tailfingen, and surrounded the KWIC. Hahn was informed that he was under arrest. When asked about reports related to his secret work on uranium, Hahn replied: "I have them all here", and handed over 150 reports. He was taken to Hechingen, where he joined Erich Bagge, Horst Korsching, Max von Laue, Carl Friedrich von Weizsäcker and Karl Wirtz. They were then taken to a dilapidated château in Versailles, where they heard about the signing of the German Instrument of Surrender at Reims on 7 May. Over the following days they were joined by Kurt Diebner, Walther Gerlach, Paul Harteck and Werner Heisenberg. All were physicists except Hahn and Harteck, who were chemists, and all had worked on the German nuclear weapons program except von Laue, although he was well aware of it.
They were relocated to the Château de Facqueval in Modave, Belgium, where Hahn used the time to work on his memoirs. On 3 July they were flown to England and taken to Farm Hall, Godmanchester, near Cambridge. While they were there, all their conversations, indoors and out, were covertly recorded with hidden microphones. They were given British newspapers, which Hahn was able to read. He was greatly disturbed by their reports of the Potsdam Conference, where German territory was ceded to Poland and the USSR. In August 1945, the German scientists were informed of the atomic bombing of Hiroshima. Up to this point the scientists, except Harteck, had been completely certain that their project was further advanced than any in other countries, and the Alsos Mission's chief scientist, Samuel Goudsmit, did nothing to correct this impression. Now the reason for their incarceration in Farm Hall suddenly became apparent.
As they recovered from the shock of the announcement, they began to rationalise what had happened. Hahn noted that he was glad that they had not succeeded, and von Weizsäcker suggested that they should claim that they had not wanted to. They drafted a memorandum on the project, noting that fission was discovered by Hahn and Strassmann. The revelation that Nagasaki had been destroyed by a plutonium bomb came as another shock, as it meant that the Allies had not only been able to successfully conduct uranium enrichment, but had mastered nuclear reactor technology as well. The memorandum became the first draft of a postwar apologia. The idea that Germany had lost the war but that its scientists had been morally superior was as outrageous as it was unbelievable, but it struck a chord in postwar German academia. It infuriated Goudsmit, whose parents had been murdered in Auschwitz. On 3 January 1946, exactly six months after they had arrived at Farm Hall, the group was allowed to return to Germany. Hahn, Heisenberg, von Laue and von Weizsäcker were brought to Göttingen, which was controlled by the British occupation authorities.
The Nobel Prize in Chemistry 1944
On 16 November 1945 the Royal Swedish Academy of Sciences announced that Hahn had been awarded the 1944 Nobel Prize in Chemistry "for his discovery of the fission of heavy atomic nuclei." Hahn was still at Farm Hall when the announcement was made; thus, his whereabouts were a secret, and it was impossible for the Nobel committee to send him a congratulatory telegram. Instead, he learned about his award on 18 November through the Daily Telegraph. His fellow interned scientists celebrated his award by giving speeches, making jokes, and composing songs.
Hahn had been nominated for the chemistry and the physics Nobel Prizes many times even before the discovery of nuclear fission, and several more nominations followed for the discovery of fission. The Nobel Prize nominations were vetted by committees of five, one for each award. Although Hahn and Meitner received nominations for physics, radioactivity and radioactive elements had traditionally been seen as the domain of chemistry, and so the Nobel Committee for Chemistry evaluated the nominations. The committee received reports from Theodor Svedberg and another assessor. These chemists were impressed by Hahn's work, but felt that the work of Meitner and Frisch was not extraordinary, and did not understand why the physics community regarded it as seminal. As for Strassmann, although his name was on the papers, there was a long-standing policy of conferring awards on the most senior scientist in a collaboration. The committee therefore recommended that Hahn alone be given the chemistry prize.
Under Nazi rule, Germans had been forbidden to accept Nobel prizes after the Nobel Peace Prize had been awarded to Carl von Ossietzky in 1936. The Nobel Committee for Chemistry's recommendation was therefore rejected by the Royal Swedish Academy of Sciences in 1944, which also decided to defer the award for one year. When the Academy reconsidered the award in September 1945, the war was over and thus the German boycott had ended. Also, the chemistry committee had now become more cautious, as it was apparent that much research had taken place in the United States in secret, and suggested deferring for another year, but the Academy was swayed by Göran Liljestrand, who argued that it was important for the Academy to assert its independence from the Allies of World War II, and award the prize to a German, as it had done after World War I when it had awarded it to Fritz Haber. Hahn therefore became the sole recipient of the 1944 Nobel Prize for Chemistry.
The invitation to attend the Nobel festivities was transmitted via the British Embassy in Stockholm. On 4 December, Hahn was persuaded by two of his Alsos captors, American Lieutenant Colonel Horace K. Calvert and British Lieutenant Commander Eric Welsh, to write a letter to the Nobel committee accepting the prize but stating that he would not be able to attend the award ceremony on 10 December since his captors would not allow him to leave Farm Hall. When Hahn protested, Welsh reminded him that Germany had lost the war. Under the Nobel Foundation statutes, Hahn had six months to deliver the Nobel Prize lecture, and until 1 October 1946 to cash the 150,000 Swedish krona cheque.
Hahn was repatriated from Farm Hall on 3 January 1946, but it soon became apparent that difficulties obtaining permission to travel from the British government meant that he would be unable to travel to Sweden before December 1946. Accordingly, the Academy of Sciences and the Nobel Foundation obtained an extension from the Swedish government, and Hahn attended the festivities the year after he was awarded the prize. On 10 December 1946, the anniversary of the death of Alfred Nobel, King Gustav V of Sweden presented him with his Nobel Prize medal and diploma. Hahn gave 10,000 krona of his prize to Strassmann, who refused to use it.
Founder and President of the Max Planck Society
The suicide of Albert Vögler on 14 April 1945 left the KWS without a president. The British chemist Bertie Blount was placed in charge of its affairs while the Allies decided what to do with it, and he decided to install Max Planck as an interim president. Now aged 87, Planck was in the small town of Rogätz, in an area that the Americans were preparing to hand over to the Soviet Union. The Dutch astronomer Gerard Kuiper from the Alsos Mission fetched Planck in a jeep and brought him to Göttingen on 16 May. Planck wrote to Hahn, who was still in captivity in England, on 25 July, and informed Hahn that the directors of the KWS had voted to make him the next president, and asked if he would accept the position. Hahn did not receive the letter until September, and did not think he was a good choice, as he regarded himself as a poor negotiator, but his colleagues persuaded him to accept. After his return to Germany, he assumed the office on 1 April 1946.
Allied Control Council Law No. 25 on the control of scientific research, dated 29 April 1946, restricted German scientists to conducting basic research only, and on 11 July the Allied Control Council dissolved the KWS at the insistence of the Americans, who considered that it had been too close to the National Socialist regime and was a threat to world peace. However, the British, who had voted against the dissolution, were more sympathetic, and offered to let the Kaiser Wilhelm Society continue in the British Zone, on one condition: that the name be changed. Hahn and Heisenberg were distraught at this prospect. To them the name was an international brand that represented political independence and scientific research of the highest order. Hahn noted that a change of name had been suggested during the Weimar Republic, but that the Social Democratic Party of Germany had been persuaded to drop the idea. To Hahn, the name represented the good old days of the German Empire, however authoritarian and undemocratic it was, before the hated Weimar Republic. Heisenberg asked Niels Bohr for support, but Bohr recommended that the name be changed, and Lise Meitner wrote to Hahn to the same effect.
In September 1946, a new Max Planck Society was established at Bad Driburg in the British Zone. On 26 February 1948, after the US and British zones were fused into Bizonia, it was dissolved and re-founded as the Max Planck Society for the combined zones, with Hahn as the founding president. It took over the 29 institutes of the former Kaiser Wilhelm Society that were located in the British and American zones. When the Federal Republic of Germany (West Germany) was formed in 1949, the five institutes located in the French zone joined them. The KWIC, now under Strassmann, built and renovated new accommodation in Mainz, but work proceeded slowly, and it did not relocate from Tailfingen until 1949. Hahn's insistence on retaining Telschow as the general secretary nearly caused a rebellion against his presidency. In his efforts to rebuild German science, Hahn was generous in issuing Persilscheine (whitewash certificates), writing one for Gottfried von Droste, who had joined the Sturmabteilung (SA) in 1933 and the NSDAP in 1937, and had worn his SA uniform at the KWIC, and for Heinrich Hörlein and Fritz ter Meer from IG Farben. Hahn served as president of the Max Planck Society until 1960, and succeeded in regaining the renown that had once been enjoyed by the Kaiser Wilhelm Society. New institutes were founded and old ones expanded, the budget rose from 12 million Deutsche Marks in 1949 to 47 million in 1960, and the workforce grew from 1,400 to nearly 3,000.
Spokesman for social responsibility
After the Second World War, Hahn came out strongly against the use of nuclear energy for military purposes. He saw the application of his scientific discoveries to such ends as a misuse, or even a crime. The historian Lawrence Badash wrote: "His wartime recognition of the perversion of science for the construction of weapons, and his postwar activity in planning the direction of his country's scientific endeavours now inclined him increasingly toward being a spokesman for social responsibility."
In early 1954, he wrote the article "Cobalt 60 – Danger or Blessing for Mankind?", about the misuse of atomic energy, which was widely reprinted and broadcast on the radio in Germany, Norway, Austria, and Denmark, and in an English version worldwide via the BBC. The international reaction was encouraging. The following year he initiated and organised the Mainau Declaration of 1955, in which he and other international Nobel Prize winners called attention to the dangers of atomic weapons and urgently warned the nations of the world against the use of "force as a final resort"; it was issued a week after the similar Russell-Einstein Manifesto. In 1956, Hahn repeated his appeal with the signatures of 52 of his Nobel colleagues from all parts of the world.
Hahn was also instrumental in, and one of the authors of, the Göttingen Manifesto of 13 April 1957, in which, together with 17 leading German atomic scientists, he protested against a proposed nuclear arming of the West German armed forces (Bundeswehr). This resulted in Hahn receiving an invitation to meet the Chancellor of Germany, Konrad Adenauer, and other senior officials, including the defence minister, Franz Josef Strauss, and Generals Hans Speidel and Adolf Heusinger (both of whom had been generals in the Nazi era). The two generals argued that the Bundeswehr needed nuclear weapons, and Adenauer accepted their advice. A communiqué was drafted stating that the Federal Republic did not manufacture nuclear weapons, and would not ask its scientists to do so. Instead, the German forces were equipped with US nuclear weapons.
On 13 November 1957, in the Konzerthaus (Concert Hall) in Vienna, Hahn warned of the "dangers of A- and H-bomb experiments", and declared that "today war is no means of politics anymore – it will only destroy all countries in the world". His highly acclaimed speech was transmitted internationally by the Austrian radio, Österreichischer Rundfunk (ÖR). On 28 December 1957, Hahn repeated his appeal in an English translation for the Bulgarian Radio in Sofia, which was broadcast in all Warsaw Pact states.
In 1959 Hahn co-founded in Berlin the Federation of German Scientists (VDW), a non-governmental organisation committed to the ideal of responsible science. Its members undertake to consider the possible military, political, and economic implications, and the possibilities of misuse, of their scientific research and teaching. With the results of its interdisciplinary work, the VDW addresses not only the general public but also decision-makers at all levels of politics and society. Right up to his death, Otto Hahn never tired of warning of the dangers of the nuclear arms race between the great powers and of the radioactive contamination of the planet.
Hahn was one of the signatories of the agreement to convene a convention for drafting a world constitution. As a result, for the first time in human history, a World Constituent Assembly convened to draft and adopt a Constitution for the Federation of Earth.
Private life
In June 1911, while attending a conference in Stettin, Hahn met Edith Junghans (1887–1968), a student at the Royal School of Art in Berlin. They saw each other again in Berlin, and became engaged in November 1912. On 22 March 1913 the couple were married in Stettin, where Edith's father, Paul Ferdinand Junghans, was a high-ranking law officer and President of the City Parliament until his death in 1915. After a honeymoon at Punta San Vigilio on Lake Garda in Italy, they visited Vienna, and then Budapest, where they stayed with George de Hevesy.
They had one child, Hanno, who was born on 9 April 1922. Hanno enlisted in the army in 1942, and served on the Eastern Front in World War II as a panzer commander. He lost an arm in combat. After the war he became an art historian and architectural researcher (at the Bibliotheca Hertziana in Rome), known for his discoveries in the early Cistercian architecture of the 12th century. In August 1960, while on a study trip in France, Hanno died in a car accident, together with his wife and assistant Ilse Hahn née Pletz. They left a fourteen-year-old son, Dietrich.
In 1990, the Hanno and Ilse Hahn Prize for outstanding contributions to Italian art history was established in memory of Hanno and Ilse Hahn to support young and talented art historians. It is awarded biennially by the Bibliotheca Hertziana – Max Planck Institute for Art History in Rome.
Death and legacy
Death
Hahn was shot in the back in October 1951 by a disgruntled inventor who wished to highlight the neglect of his ideas by mainstream scientists. Hahn was injured in a motor vehicle accident in 1952, and had a minor heart attack the following year. In 1962, he published a book, Vom Radiothor zur Uranspaltung (From Radiothorium to Uranium Fission). It was released in English in 1966 with the title Otto Hahn: A Scientific Autobiography, with an introduction by Glenn Seaborg. The success of this book may have prompted him to write another, fuller autobiography, Otto Hahn. Mein Leben, but before it could be published, he fractured one of the vertebrae in his neck while getting out of a car. He gradually became weaker and died in Göttingen on 28 July 1968. His wife Edith survived him by only a fortnight. He was buried in the Stadtfriedhof in Göttingen.
The day after his death, the Max Planck Society published an obituary notice; tributes and recollections followed from Fritz Strassmann and Otto Robert Frisch, and the Royal Society in London published an obituary.
Legacy
Hahn is considered the father of radiochemistry and nuclear chemistry. He is chiefly remembered for the discovery of nuclear fission, the basis of nuclear power and nuclear weapons. Glenn Seaborg wrote that "it has been given to very few men to make contributions to science and to humanity of the magnitude of those made by Otto Hahn". His award of the 1944 Nobel Prize for Chemistry was in recognition of this discovery, but it was tainted by the sexism and antisemitism that saw Meitner overlooked. Conflict between chemists and physicists, and between theorists and experimentalists, also played a role. Hahn's efforts to rehabilitate the image of Germany after the war also became problematic. He was no Nazi, but tolerated those who were. He was not culpable, but was complicit. Meitner expressed these misgivings in a letter to James Franck dated 22 February 1946.
Honours and awards
During his lifetime Hahn was awarded orders, medals, scientific prizes, and fellowships of academies, societies, and institutions from all over the world. At the end of 1999, the German news magazine Focus published a poll of 500 leading natural scientists, engineers, and physicians on the most important scientists of the 20th century. In this poll Hahn was ranked third (with 81 points), after the theoretical physicists Albert Einstein and Max Planck, making him the most significant chemist of his time in the respondents' judgement.
As well as the Nobel Prize in Chemistry (1944), Hahn was awarded:
the Emil Fischer Medal of the Society of German Chemists (1922),
the Cannizzaro Prize of the Royal Academy of Science in Rome (1938),
the Copernicus Prize of the University of Königsberg (1941),
the Cothenius Medal of the Akademie der Naturforscher (1943),
the Max Planck Medal of the German Physical Society, with Lise Meitner (1949),
the Goethe Medal of the city of Frankfurt am Main (1949),
the Golden Paracelsus Medal of the Swiss Chemical Society (1953),
the Faraday Lectureship Prize with Medal from the Royal Society of Chemistry (1956),
the Grotius Medal of the Hugo Grotius Foundation (1956),
the Wilhelm Exner Medal of the Austrian Industry Association (1958),
the Helmholtz Medal of the Berlin-Brandenburg Academy of Sciences and Humanities (1959),
and the Harnack Medal in Gold from the Max Planck Society (1959).
Hahn became the honorary president of the Max Planck Society in 1962.
He was elected a Foreign Member of the Royal Society (1957).
His honorary memberships of foreign academies and scientific societies included:
the Romanian Physical Society in Bucharest,
the Royal Spanish Society for Chemistry and Physics and the Spanish National Research Council,
and the Academies in Allahabad, Bangalore, Berlin, Boston, Bucharest, Copenhagen, Göttingen, Halle, Helsinki, Lisbon, Madrid, Mainz, Munich, Rome, Stockholm, the Vatican, and Vienna.
He was an honorary fellow of University College London,
and an honorary citizen of the cities of Frankfurt am Main and Göttingen in 1959, and of Berlin (1968).
Hahn was made an Officer of the Ordre National de la Légion d'Honneur of France (1959),
and was awarded the Grand Cross First Class of the Order of Merit of the Federal Republic of Germany (1959).
In 1966, US President Lyndon B. Johnson and the United States Atomic Energy Commission (AEC) awarded Hahn, Lise Meitner and Fritz Strassmann the Enrico Fermi Award. The diploma for Hahn bore the words: "For pioneering research in the naturally occurring radioactivities and extensive experimental studies culminating in the discovery of fission."
He received honorary doctorates from
the University of Göttingen,
the Technische Universität Darmstadt,
the Goethe University Frankfurt in 1949,
and the University of Cambridge in 1957.
Objects named after Hahn include:
NS Otto Hahn, one of the world's few nuclear-powered civilian ships (1964);
a crater on the Moon (shared with his namesake Friedrich von Hahn);
the asteroid 19126 Ottohahn;
the Otto Hahn Prize of both the German Chemical and Physical Societies and the city of Frankfurt/Main;
the Otto Hahn Medal – An Incentive for Young Scientists – and the Otto Hahn Award of the Max Planck Society;
and the Otto Hahn Peace Medal in Gold of the United Nations Association of Germany (DGVN) in Berlin (1988).
Proposals were made at various times, first in 1971 by American chemists, that the newly synthesised element 105 should be named hahnium in Hahn's honour, but in 1997 the IUPAC named it dubnium, after the Russian research centre in Dubna. In 1992 element 108 was discovered by a German research team, and they proposed the name hassium (after Hesse). In spite of the long-standing convention to give the discoverer the right to suggest a name, a 1994 IUPAC committee recommended that it be named hahnium. After protests from the German discoverers, the name hassium (Hs) was adopted internationally in 1997.
See also
List of peace activists
Publications in English
Notes
References
Further reading
External links
Otto Hahn – winner of the Enrico Fermi Award 1966, U.S. Government, Department of Energy
including the Nobel Lecture on 13 December 1946 From the Natural Transmutations of Uranium to Its Artificial Fission
Award Ceremony Speech honoring Otto Hahn by Professor Arne Westgren, Stockholm.
Otto Hahn and the Discovery of Nuclear Fission BR, 2008
Otto Hahn – Discoverer of Nuclear Fission Author: Dr. Anne Hardy (Pro-Physik, 2004)
Otto Hahn (1879–1968) – The discovery of fission Visit Berlin, 2011.
Otto Hahn – Discoverer of nuclear fission
Otto Hahn – Founder of the Atomic Age Author: Dr Edmund Neubauer (Translation: Brigitte Hippmann) – Website of the Otto Hahn Gymnasium (OHG), 2007.
Otto Hahn Award
Otto Hahn Peace Medal in Gold Website of the United Nations Association of Germany (DGVN) in Berlin
Otto Hahn Medal
The history of the Hahn Meitner Institute (HMI) Helmholtz-Zentrum, Berlin 2011.
Otto Hahn heads a delegation to Israel 1959 Website of the Max Planck Society, 2011.
Biography Otto Hahn 1879–1968
Otto Hahn – A Life for Science, Humanity and Peace Hiroshima University Peace Lecture, held by Dietrich Hahn, 2 October 2013.
Otto Hahn – Discoverer of nuclear fission, grandfather of the Atombomb GMX, Switzerland, 17 December 2013. Author: Marinus Brandl.
Otto Hahn
1879 births
1968 deaths
20th-century German chemists
Cornell University faculty
Discoverers of chemical elements
Enrico Fermi Award recipients
Fellows of the American Academy of Arts and Sciences
Foreign fellows of the Indian National Science Academy
Foreign members of the Royal Society
Academic staff of the Free University of Berlin
German anti–nuclear weapons activists
German autobiographers
German Army personnel of World War I
German Nobel laureates
German pacifists
Grand Crosses 1st class of the Order of Merit of the Federal Republic of Germany
Honorary members of the Romanian Academy
Honorary officers of the Order of the British Empire
Humboldt University of Berlin alumni
Academic staff of the Humboldt University of Berlin
Ludwig Maximilian University of Munich alumni
Max Planck Society people
Members of the Austrian Academy of Sciences
Members of the Bavarian Academy of Sciences
Members of the Finnish Academy of Science and Letters
Members of the German Academy of Sciences at Berlin
Members of the Pontifical Academy of Sciences
Members of the Prussian Academy of Sciences
Members of the Royal Danish Academy of Sciences and Letters
Members of the Royal Swedish Academy of Sciences
Nobel laureates in Chemistry
Nuclear chemists
Officers of the Legion of Honour
Operation Epsilon
Scientists from Frankfurt
People from Hesse-Nassau
Recipients of the Pour le Mérite (civil class)
University of Marburg alumni
Winners of the Max Planck Medal
Rare earth scientists
Max Planck Institute directors
World Constitutional Convention call signatories
Recipients of the Cothenius Medal | Otto Hahn | [
"Chemistry"
] | 13,643 | [
"Nuclear chemists"
] |
46,828 | https://en.wikipedia.org/wiki/Fertilisation | Fertilisation or fertilization (see spelling differences), also known as generative fertilisation, syngamy and impregnation, is the fusion of gametes to give rise to a zygote and initiate its development into a new individual organism or offspring. While processes such as insemination or pollination, which happen before the fusion of gametes, are also sometimes informally referred to as fertilisation, these are technically separate processes. The cycle of fertilisation and development of new individuals is called sexual reproduction. During double fertilisation in angiosperms, the haploid male gamete combines with two haploid polar nuclei to form a triploid primary endosperm nucleus by the process of vegetative fertilisation.
History
In antiquity, Aristotle conceived of the formation of new individuals through the fusion of male and female fluids, with form and function emerging gradually, in a mode he called epigenetic.
In 1784, Spallanzani established the need for interaction between a female's ovum and a male's sperm to form a zygote, in experiments on frogs. In 1827, Karl Ernst von Baer observed a therian mammalian egg for the first time. In 1876, Oscar Hertwig, in Germany, described the fusion of the nuclei of spermatozoa and ova of the sea urchin.
Evolution
The evolution of fertilisation is related to the origin of meiosis, as both are part of sexual reproduction, which originated in eukaryotes. One hypothesis states that meiosis originated from mitosis.
Fertilisation in plants
The gametes that participate in fertilisation of plants are the sperm (male) and the egg (female) cell. Various plant groups have differing methods by which the gametes produced by the male and female gametophytes come together and are fertilised. In bryophytes and pteridophytic land plants, fertilisation of the egg by the sperm takes place within the archegonium. In seed plants, the male gametophyte is formed within a pollen grain. After pollination, the pollen grain germinates, and a pollen tube grows and penetrates the ovule through a tiny pore called a micropyle. The sperm are transferred from the pollen through the pollen tube to the ovule, where the egg is fertilised. In flowering plants, two sperm cells are released from the pollen tube, and a second fertilisation event occurs involving the second sperm cell and the central cell of the ovule, which is a second female gamete.
Pollen tube growth
Unlike animal sperm, which is motile, the sperm of most seed plants is immotile and relies on the pollen tube to carry it to the ovule, where the sperm is released. The pollen tube penetrates the stigma and elongates through the extracellular matrix of the style before reaching the ovary. Near the receptacle, it breaks into the ovule through the micropyle (an opening in the ovule wall) and "bursts" into the embryo sac, releasing the sperm. The growth of the pollen tube had long been believed to depend on chemical cues from the pistil, but these mechanisms were poorly understood until 1995. Work done on tobacco plants revealed a family of glycoproteins called TTS proteins that enhanced the growth of pollen tubes. Pollen tubes grew both in a sugar-free pollen germination medium and in a medium containing purified TTS proteins; however, in the TTS medium the tubes grew at three times the rate seen in the sugar-free medium. TTS proteins were also placed at various locations on semi-in vivo pollinated pistils, and pollen tubes were observed to immediately extend toward the proteins. Transgenic plants lacking the ability to produce TTS proteins showed slower pollen tube growth and reduced fertility.
Rupture of pollen tube
The rupture of the pollen tube to release sperm in Arabidopsis has been shown to depend on a signal from the female gametophyte. Specific proteins called FER protein kinases present in the ovule control the production of highly reactive derivatives of oxygen called reactive oxygen species (ROS). ROS levels have been shown via GFP to be at their highest during floral stages when the ovule is the most receptive to pollen tubes, and lowest during times of development and following fertilisation. High amounts of ROS activate calcium ion channels in the pollen tube, causing these channels to take up calcium ions in large amounts. This increased uptake of calcium causes the pollen tube to rupture and release its sperm into the ovule. Pistil feeding assays in which plants were fed diphenyl iodonium chloride (DPI) suppressed ROS concentrations in Arabidopsis, which in turn prevented pollen tube rupture.
Flowering plants
After being fertilised, the ovary starts to swell and develop into the fruit. With multi-seeded fruits, multiple grains of pollen are necessary for syngamy with each ovule. The growth of the pollen tube is controlled by the vegetative (or tube) cytoplasm. Hydrolytic enzymes are secreted by the pollen tube that digest the female tissue as the tube grows down the stigma and style; the digested tissue is used as a nutrient source for the pollen tube as it grows. During pollen tube growth towards the ovary, the generative nucleus divides to produce two separate sperm nuclei (each with a haploid number of chromosomes) – a growing pollen tube therefore contains three separate nuclei, two sperm and one tube nucleus. The sperm cells are interconnected and dimorphic; in a number of plants the larger one is also linked to the tube nucleus, and the interconnected sperm cells and the tube nucleus form the "male germ unit".
Double fertilisation is the process in angiosperms (flowering plants) in which two sperm from each pollen tube fertilise two cells in a female gametophyte (sometimes called an embryo sac) that is inside an ovule. After the pollen tube enters the gametophyte, the pollen tube nucleus disintegrates and the two sperm cells are released; one of the two sperm cells fertilises the egg cell (at the bottom of the gametophyte near the micropyle), forming a diploid (2n) zygote. This is the point when fertilisation actually occurs; pollination and fertilisation are two separate processes. The nucleus of the other sperm cell fuses with two haploid polar nuclei (contained in the central cell) in the centre of the gametophyte. The resulting cell is triploid (3n). This triploid cell divides through mitosis and forms the endosperm, a nutrient-rich tissue, inside the seed. The two central-cell maternal nuclei (polar nuclei) that contribute to the endosperm arise by mitosis from the single meiotic product that also gave rise to the egg. Therefore, maternal contribution to the genetic constitution of the triploid endosperm is double that of the embryo.
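Restating the ploidy arithmetic of the paragraph above in symbols (n denotes one haploid genome; the two polar nuclei are maternal and the sperm nuclei paternal):

$$
\begin{aligned}
\text{zygote:}\quad & n_{\text{egg}} + n_{\text{sperm 1}} = 2n, && \text{maternal}:\text{paternal} = 1:1,\\
\text{endosperm:}\quad & n_{\text{polar}} + n_{\text{polar}} + n_{\text{sperm 2}} = 3n, && \text{maternal}:\text{paternal} = 2:1.
\end{aligned}
$$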
One primitive species of flowering plant, Nuphar polysepala, has endosperm that is diploid, resulting from the fusion of a sperm with one, rather than two, maternal nuclei. It is believed that early in the development of angiosperm lineages, there was a duplication in this mode of reproduction, producing seven-celled/eight-nucleate female gametophytes, and triploid endosperms with a 2:1 maternal to paternal genome ratio.
In many plants, the development of the flesh of the fruit is proportional to the percentage of fertilised ovules. For example, with watermelon, about a thousand grains of pollen must be delivered and spread evenly on the three lobes of the stigma to make a normal sized and shaped fruit.
Self-pollination and outcrossing
Outcrossing, or cross-fertilisation, and self-fertilisation represent different strategies with differing benefits and costs. An estimated 48.7% of plant species are either dioecious or self-incompatible obligate outcrossers. It is also estimated that about 42% of flowering plants exhibit a mixed mating system in nature.
In the most common kind of mixed mating system, individual plants produce a single type of flower and fruits may contain self-fertilised, outcrossed or a mixture of progeny types. The transition from cross-fertilisation to self-fertilisation is the most common evolutionary transition in plants, and has occurred repeatedly in many independent lineages. About 10-15% of flowering plants are predominantly self-fertilising.
Under circumstances where pollinators or mates are rare, self-fertilisation offers the advantage of reproductive assurance. Self-fertilisation can therefore result in improved colonisation ability. In some species, self-fertilisation has persisted over many generations. Capsella rubella is a self-fertilising species that became self-compatible 50,000 to 100,000 years ago. Arabidopsis thaliana is a predominantly self-fertilising plant with an out-crossing rate in the wild of less than 0.3%; a study suggested that self-fertilisation evolved roughly a million years ago or more in A. thaliana. In long-established self-fertilising plants, the masking of deleterious mutations and the production of genetic variability is infrequent and thus unlikely to provide a sufficient benefit over many generations to maintain the meiotic apparatus. Consequently, one might expect self-fertilisation to be replaced in nature by an ameiotic asexual form of reproduction that would be less costly. However the actual persistence of meiosis and self-fertilisation as a form of reproduction in long-established self-fertilising plants may be related to the immediate benefit of efficient recombinational repair of DNA damage during formation of germ cells provided by meiosis at each generation.
Fertilisation in animals
The mechanics behind fertilisation have been studied extensively in sea urchins and mice. This research addresses the question of how the sperm and the appropriate egg find each other, and the question of how only one sperm gets into the egg and delivers its contents. There are three steps to fertilisation that ensure species-specificity:
Chemotaxis
Sperm activation/acrosomal reaction
Sperm/egg adhesion
Internal vs. external
Whether an animal (more specifically, a vertebrate) uses internal or external fertilisation often depends on its method of birth. Oviparous animals laying eggs with thick calcium shells, such as chickens, or thick leathery shells generally reproduce via internal fertilisation so that the sperm fertilises the egg without having to pass through the thick, protective, tertiary layer of the egg. Ovoviviparous and viviparous animals also use internal fertilisation. Although some organisms reproduce via amplexus, they may still use internal fertilisation, as with some salamanders. Advantages of internal fertilisation include minimal waste of gametes, a greater chance of individual egg fertilisation, a longer period of egg protection, and selective fertilisation. Many females have the ability to store sperm for extended periods of time and can fertilise their eggs at a time of their choosing.
Oviparous animals producing eggs with thin tertiary membranes or no membranes at all, on the other hand, use external fertilisation methods. Such animals may be more precisely termed ovuliparous. External fertilisation is advantageous in that it minimises contact between individuals (which decreases the risk of disease transmission) and allows for greater genetic variation.
Sea urchins
Sperm find the eggs via chemotaxis, a type of ligand/receptor interaction. Resact is a 14-amino-acid peptide purified from the jelly coat of Arbacia punctulata that attracts the migration of sperm.
After finding the egg, the sperm penetrates the jelly coat through a process called sperm activation. In another ligand/receptor interaction, an oligosaccharide component of the egg binds and activates a receptor on the sperm and causes the acrosomal reaction. The acrosomal vesicles of the sperm fuse with the plasma membrane and are released. In this process, molecules bound to the acrosomal vesicle membrane, such as bindin, are exposed on the surface of the sperm. These contents digest the jelly coat and eventually the vitelline membrane. In addition to the release of acrosomal vesicles, there is explosive polymerisation of actin to form a thin spike at the head of the sperm called the acrosomal process.
The sperm binds to the egg through another ligand/receptor interaction: the sperm surface protein bindin binds to a receptor on the vitelline membrane identified as EBR1.
Fusion of the plasma membranes of the sperm and egg is likely mediated by bindin. At the site of contact, fusion causes the formation of a fertilisation cone.
Mammals
Male mammals internally fertilise females and ejaculate semen through the penis during copulation. After ejaculation, many sperm move to the upper vagina (via contractions from the vagina) through the cervix and across the length of the uterus to meet the ovum. In cases where fertilisation occurs, the female usually ovulates during a period that extends from hours before copulation to a few days after; therefore, in most mammals, it is more common for ejaculation to precede ovulation than vice versa.
When sperm are deposited into the anterior vagina, they are not capable of fertilisation (i.e., non-capacitated) and are characterised by slow linear motility patterns. This motility, combined with muscular contractions, enables sperm transport towards the uterus and oviducts. There is a pH gradient within the micro-environment of the female reproductive tract such that the pH near the vaginal opening is lower (approximately 5) than in the oviducts (approximately 8). The sperm-specific pH-sensitive calcium transport protein called CatSper increases the sperm cell permeability to calcium as it moves further into the reproductive tract. Intracellular calcium influx contributes to sperm capacitation and hyperactivation, causing a more violent and rapid non-linear motility pattern as sperm approach the oocyte. The capacitated spermatozoon and the oocyte meet and interact in the ampulla of the fallopian tube. Rheotaxis, thermotaxis and chemotaxis are known mechanisms that guide sperm towards the egg during the final stage of sperm migration. Spermatozoa respond (see Sperm thermotaxis) to the temperature gradient of ~2 °C between the oviduct and the ampulla, and chemotactic gradients of progesterone have been confirmed as the signal emanating from the cumulus oophorus cells surrounding rabbit and human oocytes. Capacitated and hyperactivated sperm respond to these gradients by changing their behaviour and moving towards the cumulus-oocyte complex. Other chemotactic signals such as formyl Met-Leu-Phe (fMLF) may also guide spermatozoa.
The zona pellucida, a thick layer of extracellular matrix that surrounds the egg and plays a role similar to that of the vitelline membrane in sea urchins, binds the sperm. Unlike sea urchins, the sperm binds to the egg before the acrosomal reaction. ZP3, a glycoprotein in the zona pellucida, is responsible for egg/sperm adhesion in humans. The receptor galactosyltransferase (GalT) binds to the N-acetylglucosamine residues on the ZP3 and is important for binding with the sperm and activating the acrosome reaction. ZP3 is sufficient though unnecessary for sperm/egg binding. Two additional sperm receptors exist: a 250kD protein that binds to an oviduct secreted protein, and SED1, which independently binds to the zona. After the acrosome reaction, the sperm is believed to remain bound to the zona pellucida through exposed ZP2 receptors. These receptors are unknown in mice but have been identified in guinea pigs.
In mammals, the binding of the spermatozoon to the GalT initiates the acrosome reaction. This process releases the hyaluronidase that digests the matrix of hyaluronic acid in the vestments around the oocyte. Additionally, heparin-like glycosaminoglycans (GAGs) are released near the oocyte that promote the acrosome reaction. Fusion between the oocyte plasma membranes and sperm follows and allows the sperm nucleus, the typical centriole, and atypical centriole that is attached to the flagellum, but not the mitochondria, to enter the oocyte. The protein CD9 likely mediates this fusion in mice (the binding homolog). The egg "activates" itself upon fusing with a single sperm cell and thereby changes its cell membrane to prevent fusion with other sperm. Zinc atoms are released during this activation.
This process ultimately leads to the formation of a diploid cell called a zygote. The zygote divides to form a blastocyst and, upon entering the uterus, implants in the endometrium, beginning pregnancy. Embryonic implantation not in the uterine wall results in an ectopic pregnancy that can kill the mother.
In such animals as rabbits, coitus induces ovulation by stimulating the release of the pituitary hormone gonadotropin; this release greatly increases the likelihood of pregnancy.
Humans
Fertilisation in humans is the union of a human egg and sperm, usually occurring in the ampulla of the fallopian tube, producing a single-celled zygote, the first stage of life in the development of a genetically unique organism, and initiating embryonic development. Scientists discovered the dynamics of human fertilisation in the nineteenth century.
The term conception commonly refers to "the process of becoming pregnant involving fertilisation or implantation or both". Its use makes it a subject of semantic arguments about the beginning of pregnancy, typically in the context of the abortion debate.
Upon gastrulation, which occurs around 16 days after fertilisation, the implanted blastocyst develops three germ layers, the endoderm, the ectoderm and the mesoderm, and the genetic code of the father becomes fully involved in the development of the embryo; later twinning is impossible. Additionally, interspecies hybrids survive only until gastrulation and cannot further develop.
However, some human developmental biology literature refers to the conceptus and such medical literature refers to the "products of conception" as the post-implantation embryo and its surrounding membranes. The term "conception" is not usually used in scientific literature because of its variable definition and connotation.
Insects
Insects in different groups, including the Odonata (dragonflies and damselflies) and the Hymenoptera (ants, bees, and wasps) practise delayed fertilisation. Among the Odonata, females may mate with multiple males, and store sperm until the eggs are laid. The male may hover above the female during egg-laying (oviposition) to prevent her from mating with other males and replacing his sperm; in some groups such as the darters, the male continues to grasp the female with his claspers during egg-laying, the pair flying around in tandem. Among social Hymenoptera, honeybee queens mate only on mating flights, in a short period lasting some days; a queen may mate with eight or more drones. She then stores the sperm for the rest of her life, perhaps for five years or more.
Fertilisation in fungi
In many fungi (except chytrids), as in some protists, fertilisation is a two step process. First, the cytoplasms of the two gamete cells fuse (called plasmogamy), producing a dikaryotic or heterokaryotic cell with multiple nuclei. This cell may then divide to produce dikaryotic or heterokaryotic hyphae. The second step of fertilisation is karyogamy, the fusion of the nuclei to form a diploid zygote.
In chytrid fungi, fertilisation occurs in a single step with the fusion of gametes, as in animals and plants.
Fertilisation in protists
Fertilisation in protozoa
There are three types of fertilisation processes in protozoa:
gametogamy;
autogamy;
gamontogamy.
Fertilisation in algae
Algae, like some land plants, undergo alternation of generations. Some algae are isomorphic, where both the sporophyte (2n) and the gametophyte (n) are morphologically the same. When algal reproduction is described as oogamous, the male and female gametes differ morphologically: the female gamete is a large non-motile egg, and the male gametes are uniflagellate (motile). Via the process of syngamy, these will form a new zygote, regenerating the sporophyte generation again.
Fertilisation and genetic recombination
Meiosis results in a random segregation of the genes that each parent contributes. Each parent organism is usually identical save for a fraction of their genes; each gamete is therefore genetically unique. At fertilisation, parental chromosomes combine. In humans, (2²²)² = 17.6×10¹² chromosomally different zygotes are possible for the non-sex chromosomes, even assuming no chromosomal crossover. If crossover occurs once, then on average (4²²)² = 309×10²⁴ genetically different zygotes are possible for every couple, not considering that crossover events can take place at most points along each chromosome. The X and Y chromosomes undergo no crossover events and are therefore excluded from the calculation. The mitochondrial DNA is only inherited from the maternal parent.
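As a quick arithmetic check of the figures above, the following short Python sketch simply recomputes the two counts from the stated assumption of 22 non-sex chromosome pairs per parent; it is only arithmetic on the numbers quoted in the text, not a model of recombination itself.

```python
# Arithmetic check of the zygote counts quoted above.
# Assumption: 22 non-sex chromosome pairs per parent, as stated in the text.
PAIRS = 22

# No crossover: each parent can form 2**22 distinct gametes,
# so a couple can form (2**22)**2 chromosomally distinct zygotes.
no_crossover = (2 ** PAIRS) ** 2

# One crossover per chromosome on average: 4**22 gametes per parent.
one_crossover = (4 ** PAIRS) ** 2

print(f"no crossover : {no_crossover:.3e}")   # ~1.76e+13  (17.6 x 10^12)
print(f"one crossover: {one_crossover:.3e}")  # ~3.09e+26  (309 x 10^24)
```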
The sperm aster and zygote centrosomes
Shortly after the sperm fuses with the egg, the two sperm centrioles form the embryo's first centrosome and microtubule aster. The sperm centriole, found near the male pronucleus, recruits egg pericentriolar material proteins, forming the zygote's first centrosome. This centrosome nucleates microtubules in the shape of stars, called astral microtubules. The microtubules span the whole volume of the egg, allowing the egg pronucleus to use these cables to reach the male pronucleus. As the male and female pronuclei approach each other, the single centrosome splits into two centrosomes located at the interface between the pronuclei. Then, via the astral microtubules, the centrosomes polarise the genome inside the pronuclei.
Parthenogenesis
Organisms that normally reproduce sexually can also reproduce via parthenogenesis, wherein an unfertilised female gamete produces viable offspring. These offspring may be clones of the mother, or in some cases genetically differ from her but inherit only part of her DNA. Parthenogenesis occurs in many plants and animals and may be induced in others through a chemical or electrical stimulus to the egg cell. In 2004, Japanese researchers led by Tomohiro Kono succeeded, after 457 attempts, in merging the ova of two mice by blocking certain proteins that would normally prevent the possibility; the resulting embryo developed normally into a mouse.
Allogamy and autogamy
Allogamy, which is also known as cross-fertilisation, refers to the fertilisation of an egg cell from one individual with the male gamete of another.
Autogamy, which is also known as self-fertilisation, occurs in such hermaphroditic organisms as plants and flatworms; therein, two gametes from one individual fuse.
Other variants of bisexual reproduction
Some relatively unusual forms of reproduction are:
Gynogenesis: A sperm stimulates the egg to develop without fertilisation or syngamy. The sperm may enter the egg.
Hybridogenesis: One genome is eliminated to produce haploid eggs.
Canina meiosis: (sometimes called "permanent odd polyploidy") one genome is transmitted in the Mendelian fashion, others are transmitted clonally.
Benefits of cross-fertilisation
The major benefit of cross-fertilisation is generally thought to be the avoidance of inbreeding depression. Charles Darwin, in his 1876 book The Effects of Cross and Self Fertilisation in the Vegetable Kingdom (pages 466-467) summed up his findings in the following way.
"It has been shown in the present volume that the offspring from the union of two distinct individuals, especially if their progenitors have been subjected to very different conditions, have an immense advantage in height, weight, constitutional vigour and fertility over the self-fertilised offspring from one of the same parents. And this fact is amply sufficient to account for the development of the sexual elements, that is, for the genesis of the two sexes."
In addition, it is thought by some that a long-term advantage of out-crossing in nature is increased genetic variability that promotes adaptation or avoidance of extinction (see Genetic variability).
See also
Cell fusion
Conception cap
Conception device
Female sperm
Fetal development
In vitro fertilisation
Kaguya (mouse)
Parthenogenesis, a type of reproduction that does not involve fertilisation
Pollination
Pre-embryo
Pronucleus
Superfecundation
Superfetation
Symmetry breaking and cortical rotation
Cortical reaction
Polyspermy
References
External links
Fertilisation (Conception) video
Reproduction
Fertility
Pollination | Fertilisation | [
"Biology"
] | 5,517 | [
"Biological interactions",
"Behavior",
"Reproduction"
] |
46,860 | https://en.wikipedia.org/wiki/Catalan%27s%20constant | In mathematics, Catalan's constant G is the alternating sum of the reciprocals of the odd square numbers, being defined by: G = β(2) = Σ_{n=0}^∞ (−1)ⁿ/(2n+1)² = 1/1² − 1/3² + 1/5² − 1/7² + ⋯
where β is the Dirichlet beta function. Its numerical value is approximately 0.9159655941772190…
Catalan's constant was named after Eugène Charles Catalan, who found quickly-converging series for its calculation and published a memoir on it in 1865.
Uses
In low-dimensional topology, Catalan's constant is 1/4 of the volume of an ideal hyperbolic octahedron, and therefore 1/4 of the hyperbolic volume of the complement of the Whitehead link. It is 1/8 of the volume of the complement of the Borromean rings.
In combinatorics and statistical mechanics, it arises in connection with counting domino tilings, spanning trees, and Hamiltonian cycles of grid graphs.
In number theory, Catalan's constant appears in a conjectured formula for the asymptotic number of primes of the form n² + 1, according to Hardy and Littlewood's Conjecture F. However, it is an unsolved problem (one of Landau's problems) whether there are even infinitely many primes of this form.
Catalan's constant also appears in the calculation of the mass distribution of spiral galaxies.
Properties
It is not known whether G is irrational, let alone transcendental. G has been called "arguably the most basic constant whose irrationality and transcendence (though strongly suspected) remain unproven".
There exist, however, partial results. It is known that infinitely many of the numbers β(2n) are irrational, where β(s) is the Dirichlet beta function. In particular, at least one of β(2), β(4), β(6), β(8), β(10) and β(12) must be irrational, where β(2) is Catalan's constant. These results by Wadim Zudilin and Tanguy Rivoal are related to similar ones given for the odd zeta constants ζ(2n+1).
Catalan's constant is known to be an algebraic period, which follows from some of the double integrals given below.
Series representations
Catalan's constant appears in the evaluation of several rational series including:
The following two formulas involve quickly converging series, and are thus appropriate for numerical computation:
and
The theoretical foundations for such series are given by Broadhurst, for the first formula, and Ramanujan, for the second formula. The algorithms for fast evaluation of the Catalan constant were constructed by E. Karatsuba. Using these series, calculating Catalan's constant is now about as fast as calculating Apéry's constant, ζ(3).
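As a loose numerical illustration of how such series are used in practice, the Python sketch below sums one well-known rapidly convergent series for G (a different formula from the Broadhurst and Ramanujan series referred to above, chosen only because it is short); a few dozen terms already exhaust double precision.

```python
# Compute Catalan's constant G from one well-known rapidly convergent series
# (used here only for illustration, not the formulas cited in the text):
#   G = (pi/8) * ln(2 + sqrt(3)) + (3/8) * sum_{n>=0} 1 / ((2n+1)^2 * C(2n, n))
from math import comb, log, pi, sqrt

def catalan_constant(terms: int = 30) -> float:
    s = sum(1.0 / ((2 * n + 1) ** 2 * comb(2 * n, n)) for n in range(terms))
    return pi / 8 * log(2 + sqrt(3)) + 3 * s / 8

print(catalan_constant())  # ≈ 0.9159655941772190
```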
Other quickly converging series, due to Guillera and Pilehrood and employed by the y-cruncher software, include:
All of these series have time complexity .
Integral identities
As Seán Stewart writes, "There is a rich and seemingly endless source of definite integrals that can be equated to or expressed in terms of Catalan's constant." Some of these expressions include:
where the last three formulas are related to Malmsten's integrals.
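As an illustrative numerical cross-check, one widely quoted integral representation (which may or may not be among the expressions originally listed here) is G = −∫₀¹ ln(x)/(1+x²) dx; it can be verified with a standard quadrature routine.

```python
# Numerical check of a well-known integral representation of Catalan's constant:
#   G = -∫_0^1 ln(x) / (1 + x^2) dx
# The integrable singularity of ln(x) at 0 is handled by the adaptive quadrature.
from math import log
from scipy.integrate import quad

value, abserr = quad(lambda x: log(x) / (1.0 + x * x), 0.0, 1.0)
print(-value)   # ≈ 0.9159655941772190
```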
If K(k) is the complete elliptic integral of the first kind, as a function of the elliptic modulus k, then
If E(k) is the complete elliptic integral of the second kind, as a function of the elliptic modulus k, then
With the gamma function
The integral Ti₂(x) = ∫₀ˣ (arctan t)/t dt is a known special function, called the inverse tangent integral, and was extensively studied by Srinivasa Ramanujan; Catalan's constant equals Ti₂(1).
Relation to special functions
G appears in values of the second polygamma function, also called the trigamma function, at fractional arguments:
Simon Plouffe gives an infinite collection of identities between the trigamma function, π² and Catalan's constant; these are expressible as paths on a graph.
Catalan's constant occurs frequently in relation to the Clausen function, the inverse tangent integral, the inverse sine integral, the Barnes G-function, as well as integrals and series summable in terms of the aforementioned functions.
As a particular example, by first expressing the inverse tangent integral in its closed form – in terms of Clausen functions – and then expressing those Clausen functions in terms of the Barnes G-function, the following expression is obtained (see Clausen function for more):
If one defines the Lerch transcendent by Φ(z, s, α) = Σ_{n=0}^∞ zⁿ/(n + α)ˢ,
then
Continued fraction
G can be expressed in the following form:
The simple continued fraction is given by:
This continued fraction would have infinitely many terms if and only if G is irrational, which is still unresolved.
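As a sketch of how the leading terms of the simple continued fraction can be recovered numerically, the snippet below expands a truncated decimal approximation of G (hard-coded here). Terms computed this way are only trustworthy while the working precision holds out; whether the expansion of the exact constant terminates is, as the text notes, unresolved.

```python
# Recover leading simple-continued-fraction terms of G from a truncated
# decimal approximation, using exact rational arithmetic on the approximation.
from fractions import Fraction

G_APPROX = Fraction("0.91596559417721901505460351493238411")  # truncated value of G

def continued_fraction(x: Fraction, n_terms: int = 8) -> list[int]:
    terms = []
    for _ in range(n_terms):
        a = x.numerator // x.denominator   # integer part a_k
        terms.append(a)
        x -= a
        if x == 0:
            break
        x = 1 / x                          # invert the fractional part
    return terms

print(continued_fraction(G_APPROX))  # [0, 1, 10, 1, 8, 1, ...]
```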
Known digits
The number of known digits of Catalan's constant has increased dramatically during the last decades. This is due both to the increase of performance of computers as well as to algorithmic improvements.
See also
Gieseking manifold
List of mathematical constants
Mathematical constant
Particular values of Riemann zeta function
References
Further reading
External links
(Provides over one hundred different identities).
(Provides a graphical interpretation of the relations)
(Provides the first 300,000 digits of Catalan's constant)
Combinatorics
Mathematical constants | Catalan's constant | [
"Mathematics"
] | 1,023 | [
"Discrete mathematics",
"Mathematical objects",
"Combinatorics",
"nan",
"Mathematical constants",
"Numbers"
] |
46,863 | https://en.wikipedia.org/wiki/Asymmetric%20warfare | Asymmetric warfare (or asymmetric engagement) is a type of war between belligerents whose relative military power, strategy or tactics differ significantly. This type of warfare often, but not necessarily, involves insurgents, terrorist groups, or resistance militias operating within territory mostly controlled by the superior force.
Asymmetrical warfare can also describe a conflict in which belligerents' resources are uneven, and consequently, they both may attempt to exploit each other's relative weaknesses. Such struggles often involve unconventional warfare, with the weaker side attempting to use strategy to offset deficiencies in the quantity or quality of their forces and equipment. Such strategies may not necessarily be militarized. This is in contrast to symmetrical warfare, where two powers have comparable military power and resources and rely on similar tactics.
Asymmetric warfare is a form of irregular warfare – conflicts in which enemy combatants are not regular military forces of nation-states. The term is frequently used to describe what is also called guerrilla warfare, insurgency, counterinsurgency, rebellion, terrorism, and counterterrorism.
Definition and differences
The popularity of the term dates from Andrew J. R. Mack's 1975 article "Why Big Nations Lose Small Wars" in World Politics, in which "asymmetric" referred simply to a significant disparity in power between opposing actors in a conflict. "Power," in this sense, is broadly understood to mean material power, such as a large army, sophisticated weapons, an advanced economy, and so on. Mack's analysis was largely ignored in its day, but the end of the Cold War sparked renewed interest among academics. By the late 1990s, new research building off Mack's works was beginning to mature; after 9/11, the U.S. military began once again to grapple with asymmetric warfare strategy.
Since 2004, the discussion of asymmetric warfare has been complicated by the tendency of academic and military officials to use the term in different ways, as well as by its close association with guerrilla warfare, insurgency, terrorism, counterinsurgency, and counterterrorism.
Academic authors tend to focus on explaining two puzzles in asymmetric conflict. First, if "power" determines victory, there must be reasons why weaker actors decide to fight more powerful actors. Key explanations include:
Weaker actors may have secret weapons.
Weaker actors may have powerful allies.
Stronger actors are unable to make threats credible.
The demands of a stronger actor are extreme.
The weaker actor must consider its regional rivals when responding to threats from powerful actors.
Second, if "power," as generally understood, leads to victory in war, then there must be an explanation for why the "weak" can defeat the "strong." Key explanations include:
Strategic interaction.
Willingness of the weak to suffer more or bear higher costs.
External support of weak actors.
Reluctance to escalating violence on the part of strong actors.
Internal group dynamics.
Inflated strong actor war aims.
Evolution of asymmetric rivals' attitudes towards time.
Asymmetric conflicts include interstate and civil wars, and, over the past two hundred years, have generally been won by strong actors. Since 1950, however, weak actors have won the majority of asymmetric conflicts. In asymmetric conflicts, escalation can be rational for one side.
Strategic basis
In most conventional warfare, the belligerents deploy forces of a similar type, and the outcome can be predicted by the quantity or quality of the opposing forces, for example by better command and control (C2). There are times when this is not the case, because the conventional forces are not easily compared, making it difficult for the opposing sides to engage each other. An example of this is the standoff between the continental land forces of the French Army and the maritime forces of the United Kingdom's Royal Navy during the French Revolutionary and Napoleonic Wars. In the words of Admiral Jervis during the campaigns of 1801, "I do not say, my Lords, that the French will not come. I say only they will not come by sea" – a confrontation that Napoleon Bonaparte described as that between the elephant and the whale.
Tactical basis
The tactical success of asymmetric warfare is dependent on at least some of the following assumptions:
One side can have a technological advantage that outweighs the numerical advantage of the enemy; the English longbow at the Battle of Crécy is an example.
Technological superiority is usually cancelled out by more vulnerable infrastructure, which can be targeted with devastating results. Destruction of multiple electric lines, roads, or water supply systems in highly populated areas could devastate the economy and morale. In contrast, the weaker side may not have these structures at all.
Training, tactics, and technology can prove decisive and allow a smaller force to overcome a much larger one. For example, for several centuries, the Greek hoplites' (heavy infantry) use of the phalanx made them far superior to their enemies. The Battle of Thermopylae, which also involved good use of terrain, is a well-known example.
If the inferior power is in a position of self-defense, i.e., under attack or occupation, it may be possible to use unconventional tactics, such as hit-and-run and selective battles in which the superior power is weaker, as an effective means of harassment without violating the laws of war. Perhaps the classic historical examples of this doctrine may be found in the American Revolutionary War and in resistance movements during World War II, such as the French Resistance and the Soviet and Yugoslav partisans. Against democratic aggressor nations, this strategy can be used to play on the electorate's patience with the conflict (as in the Vietnam War and others since), provoking protests and consequent disputes among elected legislators.
However, if the weaker power is in an aggressive position or turns to tactics prohibited by the laws of war (jus in bello), its success depends on the superior power's refraining from like tactics. For example, the law of land warfare prohibits the use of a flag of truce or marked medical vehicles as cover for an attack or ambush. Still, an asymmetric combatant using this prohibited tactic to its advantage depends on the superior power's obedience to the corresponding law. Similarly, warfare laws prohibit combatants from using civilian settlements, populations or facilities as military bases, but when an inferior force uses this tactic, it depends on the premise that the superior one will respect the law that the other is violating, and will not attack that civilian target, or if they do the propaganda advantage will outweigh the material loss.
Terrorism
There are two opposing viewpoints on the relationship between asymmetric warfare and terrorism. In the modern context, asymmetric warfare is increasingly considered a component of fourth generation warfare. When practiced outside the laws of war, it is often defined as terrorism, though rarely by its practitioners or their supporters. The other view is that asymmetric warfare does not coincide with terrorism.
Use of terrain
Terrain that limits mobility, such as forests and mountains, can be used as a force multiplier by the smaller force and as a force inhibitor against the larger one, especially one operating far from its logistical base. Such terrain is called difficult terrain. Urban areas, though generally having good transport access, provide innumerable ready-made defensible positions with simple escape routes and can also become rough terrain if prolonged combat fills the streets with rubble:
In the 12th century, irregulars known as the Assassins were successful in the Nizari Ismaili state. The "state" consisted of fortresses (such as the Alamut Castle) built on strategic mountaintops and highlands with difficult access, surrounded by hostile lands. The Assassins developed tactics to eliminate high-value targets, threatening their security, including the Crusaders.
In the American Revolutionary War, Patriot Lieutenant Colonel Francis Marion, known as the "Swamp Fox," took advantage of irregular tactics, interior lines, and the wilderness of colonial South Carolina to hinder larger British regular forces.
Yugoslav Partisans, starting as small detachments around mountain villages in 1941, fought the German and other Axis occupation forces, successfully taking advantage of the rough terrain to survive despite their small numbers. Over the next four years, they slowly forced their enemies back, recovering population centers and resources, eventually growing into the regular Yugoslav Army.
The Vietnam War is a classic example of the use of terrain to fight an asymmetric war. The North Vietnamese Army (NVA) and Viet Cong (VC) used the dense jungles, mountains, and river systems of Vietnam to conceal troop movements effectively in spite of superior enemy air power. This made it possible to supply troops without incurring heavy losses from American airstrikes, as aircraft could not effectively identify or track their movements from the air. This was true to such an extent that the US employed defoliation methods, such as the use of Agent Orange, and extensive napalm use to make forested areas visible from the air. The NVA and VC also used intricate tunnel systems, such as the Củ Chi tunnels, which enabled them to move undetected, store supplies, and evade U.S. search-and-destroy missions.
Role of civilians
Civilians can play a vital role in determining the outcome of an asymmetric war. In such conflicts, when it is easy for insurgents to assimilate into the population quickly after an attack, tips on the timing or location of insurgent activity can severely undermine the resistance. An information-central framework, in which civilians are seen primarily as sources of strategic information rather than resources, provides a paradigm to understand better the dynamics of such conflicts where civilian information-sharing is vital. The framework assumes that:
The consequential action of non-combatants (civilians) is information sharing rather than supplying resources, recruits, or shelter to combatants.
Information can be shared anonymously without endangering the civilian who relays it.
Given the additional assumption that the larger or dominant force is the government, the framework suggests the following implications:
Civilians receive services from government and rebel forces as an incentive to share valuable information.
Rebel violence can be reduced if the government provides services.
Provision of security and services are complementary in reducing violence.
Civilian casualties reduce civilian support to the perpetrating group.
Provision of information is strongly correlated with the level of anonymity that can be ensured.
A survey of the empirical literature on conflict does not provide conclusive evidence for these claims. But the framework gives a starting point to explore the role of civilian information sharing in asymmetric warfare.
War by proxy
Where asymmetric warfare is carried out (generally covertly) by allegedly non-governmental actors who are connected to or sympathetic to a particular nation's (the "state actor's") interest, it may be deemed war by proxy. This is typically done to give the state actor deniability. The deniability can be crucial to keep the state actor from being tainted by the actions, to allow the state actor to negotiate in apparent good faith by claiming they are not responsible for the actions of parties who are merely sympathizers, or to avoid being accused of belligerent actions or war crimes. If proof emerges of the true extent of the state actor's involvement, this strategy can backfire; for example, see Iran-contra and Philip Agee.
Examples
American Indian Wars
Benjamin Church designed his force primarily to emulate Native American patterns of war. Toward this end, Church endeavored to learn to fight like Native Americans from Native Americans. Americans became rangers exclusively under the tutelage of their Native American allies. (Until the end of the colonial period, rangers depended on Native Americans as both allies and teachers.)
Church developed a special full-time unit mixing white colonists selected for frontier skills with friendly Native Americans to carry out offensive strikes against hostile Native Americans in terrain where normal militia units were ineffective. Church paid special care to outfitting, supplying and instructing his troops in ways inspired by indigenous methods of warfare and ways of living. He emphasized the adoption of indigenous techniques, which prioritized small, mobile and flexible units which used the countryside for cover, in lieu of massed frontal assaults by large formations. Benjamin Church is sometimes referred to as the father of Unconventional warfare.
American Revolutionary War
From its initiation, the American Revolutionary War was, necessarily, a showcase for asymmetric techniques. In the 1920s, Harold Murdock of Boston attempted to solve the puzzle of the first shots fired on Lexington Green and came to the suspicion that the few score militiamen who gathered before sunrise to await the arrival of hundreds of well-prepared British soldiers were sent to provoke an incident which could be used for Patriot propaganda purposes. The return of the British force to Boston following the search operations at Concord was subject to constant skirmishing by Patriot forces gathered from communities all along the route, making maximum use of the terrain (particularly, trees and stone field walls) to overcome the limitations of their weapons – muskets with an effective range of only about 50–70 meters. Throughout the war, skirmishing tactics against British troops on the move continued to be a key factor in the Patriots' success; particularly in the Western theater of the American Revolutionary War.
Another feature of the long march from Concord was the urban warfare technique of using buildings along the route as additional cover for snipers. When revolutionary forces forced their way into Norfolk, Virginia and used waterfront buildings as cover for shots at British vessels out in the river, the response of destruction of those buildings was ingeniously used to the advantage of the rebels, who encouraged the spread of fire throughout the largely Loyalist town and spread propaganda blaming it on the British. Shortly afterwards, they destroyed the remaining houses because they might provide cover for British soldiers.
The rebels also adopted a form of asymmetric sea warfare by using small, fast vessels to avoid the Royal Navy and to capture or sink large numbers of merchant ships; however the Crown responded by issuing letters of marque permitting private armed vessels to undertake similar attacks on Patriot shipping. John Paul Jones became notorious in Britain for his expedition from France in the sloop of war Ranger in April 1778, during which, in addition to his attacks on merchant shipping, he made two landings on British soil. The effect of these raids, particularly when coupled with his capture of a Royal Navy warship – the first such success in British waters, but not Jones' last – was to force the British government to increase resources for coastal defense, and to create a climate of fear among the British public which was subsequently fed by press reports of his preparations for the 1779 Bonhomme Richard mission.
From 1776, the conflict turned increasingly into a proxy war on behalf of France, following a strategy proposed in the 1760s but initially resisted by the idealistic young King Louis XVI, who came to the throne at the age of 19 a few months before Lexington. France ultimately drove Great Britain to the brink of defeat by entering the war(s) directly on several fronts throughout the world.
American Civil War
The American Civil War saw the rise of asymmetric warfare in the Border States, and in particular on the US Western Territorial Border after the Kansas-Nebraska Act of 1854 opened the territories to vote on the expansion of slavery beyond the Missouri Compromise lines. Political implications of this broken 1820's compromise were nothing less than the potential expansion of slavery all across the North American continent, including the northern reaches of the annexed Mexican territories to California and Oregon. So the stakes were high, and it caused a flood of immigration to the border: some to grab land and expand slavery west, others to grab land and vote down the expansion of slavery. The pro-slavery land grabbers began asymmetric, violent attacks against the more pacifist abolitionists who had settled Lawrence and other territorial towns to suppress slavery. John Brown, the abolitionist, travelled to Osawatomie in the Kansas Territory expressly to foment retaliatory attacks back against the pro-slavery guerrillas who, by 1858, had twice ransacked both Lawrence and Osawatomie (where one of Brown's sons was shot dead).
The abolitionists would not return the attacks and Brown theorized that a violent spark set off on "the Border" would be a way to finally ignite his long hoped-for slave rebellion. Brown had broad-sworded slave owners at Potawatomi Creek, so the bloody civilian violence was initially symmetrical; however, once the American Civil War ignited in 1861, and when the state of Missouri voted overwhelmingly not to secede from the Union, the pro-slavers on the MO-KS border were driven either south to Arkansas and Texas, or underground—where they became guerrilla fighters and "Bushwhackers" living in the bushy ravines throughout northwest Missouri across the (now) state line from Kansas. The bloody "Border War" lasted all during the Civil War (and long after with guerrilla partisans like the James brothers cynically robbing and murdering, aided and abetted by lingering lost causers). Tragically the Western Border War was an asymmetric war: pro-slavery guerrillas and paramilitary partisans on the pro-Confederate side attacked pro-Union townspeople and commissioned Union military units, with the Union army trying to keep both in check: blocking Kansans and pro-Union Missourians from organizing militarily against the marauding Bushwhackers.
The worst act of domestic terror in U.S. history came in August 1863 when paramilitary guerrillas amassed 350 strong and rode all night 50 miles across eastern Kansas to the abolitionist stronghold of Lawrence (a political target) and destroyed the town, gunning down 150 civilians. The Confederate officer whose company had joined Quantrill's Raiders that day witnessed the civilian slaughter and forbade his soldiers from participating in the carnage. The commissioned officer refused to participate in Quantrill's asymmetric warfare on civilians.
Philippine–American War
The Philippine–American War (1899–1902) was an armed conflict between the United States and Filipino revolutionaries. Estimates of the Filipino forces vary between 100,000 and 1,000,000, with tens of thousands of auxiliaries. Lack of weapons and ammunition was a significant impediment to the Filipinos, so most of the forces were only armed with bolo knives, bows and arrows, spears and other primitive weapons that, in practice, proved vastly inferior to U.S. firepower.
The goal, or end-state, sought by the First Philippine Republic was a sovereign, independent, socially stable Philippines led by the ilustrado (intellectual) oligarchy. Local chieftains, landowners, and businessmen were the principales who controlled local politics. The war was strongest when ilustrados, principales, and peasants were unified in opposition to annexation. The peasants, who provided the bulk of guerrilla forces, had interests different from their ilustrado leaders and the principales of their villages. Coupled with the ethnic and geographic fragmentation, unity was a daunting task. The challenge for Aguinaldo and his generals was to sustain unified Filipino public opposition; this was the revolutionaries' strategic center of gravity. The Filipino operational center of gravity was the ability to sustain its force of 100,000 irregulars in the field. The Filipino General Francisco Macabulos described the Filipinos' war aim as "not to vanquish the U.S. Army but to inflict on them constant losses." They initially sought to use conventional tactics and an increasing toll of U.S. casualties to contribute to McKinley's defeat in the 1900 presidential election. Their hope was that as president the avowedly anti-imperialist future Secretary of State William Jennings Bryan would withdraw from the Philippines. They pursued this short-term goal with guerrilla tactics better suited to a protracted struggle. While targeting McKinley motivated the revolutionaries in the short term, his victory demoralized them and convinced many undecided Filipinos that the United States would not depart precipitously. For most of 1899, the revolutionary leadership had viewed guerrilla warfare strategically only as a tactical option of final recourse, not as a means of operation which better suited their disadvantaged situation. On 13 November 1899, Emilio Aguinaldo decreed that guerrilla war would henceforth be the strategy. This made the American occupation of the Philippine archipelago more difficult over the next few years. In fact, during just the first four months of the guerrilla war, the Americans had nearly 500 casualties. The Philippine Revolutionary Army began staging bloody ambushes and raids, such as the guerrilla victories at Paye, Catubig, Makahambus, Pulang Lupa, Balangiga and Mabitac. At first, it seemed like the Filipinos would fight the Americans to a stalemate and force them to withdraw. President McKinley even considered this at the beginning of the phase. The shift to guerrilla warfare drove the U.S. Army to adopt counterinsurgency tactics.
20th century
Second Boer War
Asymmetric warfare featured prominently during the Second Boer War. After an initial phase, which was fought by both sides as a conventional war, the British captured Johannesburg, the Boers' largest city, and captured the capitals of the two Boer Republics. The British then expected the Boers to accept peace as dictated in the traditional European manner. However, the Boers fought a protracted guerrilla war instead of capitulating. 20,000-30,000 Boer guerrillas were only defeated after the British brought to bear 450,000 imperial troops, about ten times as many as were used in the conventional phase of the war. The British began constructing blockhouses built within machine gun range of one another and flanked by barbed wire to slow the Boers' movement across the countryside and block paths to valuable targets. Such tactics eventually evolved into today's counterinsurgency tactics.
The Boer commando raids deep into the Cape Colony, which were organized and commanded by Jan Smuts, resonated throughout the century as the British adopted and adapted the tactics first used against them by the Boers.
World War I
T. E. Lawrence and British support for the Arab uprising against the Ottoman Empire. The Ottomans were the stronger power, and the Arab coalition were the weaker.
Austria-Hungary's invasion of Serbia, August 1914. Austria-Hungary was the stronger power, and Serbia was the weaker.
Germany's invasion of Belgium, August 1914. Germany was the stronger power, Belgium the weaker.
Between the World Wars
Abd el-Krim led resistance in Morocco from 1920 to 1924 against French and Spanish colonial armies, led by General Philippe Pétain, that were ten times as strong as the guerrilla force.
TIGR, the first anti-fascist national-defensive organization in Europe, fought against Benito Mussolini's regime in Northeast Italy.
Anglo-Irish War (Irish War of Independence) fought between the Irish Republican Army and the Black and Tans/Auxiliaries. Though Lloyd George (Prime Minister at the time) attempted to persuade other nations that it was not a war by refusing to use the army and using the Black and Tans instead, the conflict was conducted as an asymmetric guerrilla war and was registered as a war with the League of Nations by the Irish Free State.
World War II
Philippine resistance against Japan: During the Japanese occupation in World War II, there was an extensive Philippine resistance movement, which opposed the Japanese with an active underground and guerrilla activity that increased over the years.
Winter War: Finland was invaded by the much larger mechanized military units of the Soviet Union. Although the Soviets captured 8% of Finland, they suffered enormous casualties versus much lower losses for the Finns. Soviet vehicles were confined to narrow forest roads by terrain and snow, while the Finns used ski tactics around them unseen through the trees. They cut the advancing Soviet column into what they called motti (a cubic metre of firewood) and then destroyed the cut-off sections one by one. Many Soviets were shot, had their throats cut from behind, or froze to death due to inadequate clothing and lack of camouflage and shelter. The Finns also devised a petrol bomb they called the Molotov cocktail to destroy Soviet tanks.
Soviet partisans: a resistance movement which fought in the German-occupied parts of the Soviet Union.
Warsaw Uprising: Poland (Home Army, Armia Krajowa) rose up against the German occupation.
Germany's occupation of Yugoslavia, 1941–45 (Germany vs. Tito's Partisans and Mihailović's Chetniks).
Britain
British Commandos and European coastal raids. German countermeasures and the notorious Commando Order.
Long Range Desert Group and the Special Air Service in Africa and later in Europe.
South East Asian Theater: Wingate, Chindits, Force 136, V Force
Special Operations Executive (SOE)
Provisional Irish Republican Army against British security forces in the Northern Campaign.
United States
Office of Strategic Services (OSS)
China Burma India Theater: Merrill's Marauders and OSS Detachment 101.
After World War II
First Indochina War (1946-1954) and Algerian War of Independence (1954-1962); both against France
The Cuban Revolution of 1953-1958 became a template of asymmetric warfare.
The Hungarian Revolution of 1956 (or "Russo-Hungarian" war) saw makeshift forces improvising lopsided tactics against Soviet tanks.
Libyan support to the Provisional Irish Republican Army during the Troubles (1960s to 1998) and collusion between British security forces and Ulster loyalist paramilitaries.
United States Military Assistance Command Studies and Observations Group (US MAC-V SOG) (1964-1972) and Viet Cong in Vietnam.
The South African Border War, otherwise known as the Namibian War of Independence (1966-1990) between the South African Defense Force and People's Liberation Army of Namibia.
United States support of the Nicaraguan Contras (1979-1990).
Cold War (1945–1992)
The end of World War II established the two strongest victors, the United States of America (the United States, or just the U.S.) and the Union of Soviet Socialist Republics (USSR, or just the Soviet Union) as the two dominant global superpowers.
Cold War examples of proxy wars
In Southeast Asia, specifically Vietnam, the Viet Minh, NLF and other insurgencies engaged in asymmetrical guerrilla warfare with France. The war between the Mujahideen and the Soviet Armed Forces during the Soviet–Afghan War of 1979 to 1989, though claimed as a source of the term "asymmetric warfare," occurred years after Mack wrote of "asymmetric conflict." (Note that the term "asymmetric warfare" became well-known in the West only in the 1990s.) The aid given by the U.S. to the Mujahideen during the war was only covert at the tactical level; the Reagan Administration told the world that it was helping the "freedom-loving people of Afghanistan." Many countries, including the U.S., participated in this proxy war against the USSR during the Cold War.
Post-Cold War
The Kosovo War, which pitted Yugoslav security forces (Serbian police and Yugoslav army) against Albanian separatists of the guerrilla Kosovo Liberation Army, is an example of asymmetric warfare, due to Yugoslav forces' superior firepower and manpower, and due to the nature of insurgency/counter-insurgency operations. The NATO bombing of Yugoslavia (1999), which pitted NATO air power against the Yugoslav armed forces during the Kosovo war, can also be classified as asymmetric, exemplifying international conflict with asymmetry in weapons and strategy/tactics.
21st century
Israel/Palestine
The ongoing conflict between Israel and some Palestinian organizations (such as Hamas and PIJ) is a classic case of asymmetric warfare. Israel has a powerful army, air force and navy, while the Palestinian organizations have no access to large-scale military equipment with which to conduct operations; instead, they utilize asymmetric tactics, such as taking hostages, paragliding, small gunfights, cross-border sniping, indiscriminate mortar/rocket attacks, and others.
Sri Lanka
The Sri Lankan Civil War, which raged on and off from 1983 to 2009 between the Sri Lankan government and the Liberation Tigers of Tamil Eelam (LTTE), saw large-scale asymmetric warfare. The war started as an insurgency and progressed to a large-scale conflict mixing guerrilla and conventional warfare. It saw the LTTE use suicide bombing (with male and female suicide bombers) both on and off the battlefield; use of explosive-filled boats for suicide attacks on military shipping; and use of light aircraft targeting military installations.
Iraq
The victory by the US-led coalition forces in the 1991 Persian Gulf War and the 2003 invasion of Iraq demonstrated that training, tactics and technology could provide overwhelming victories in the field of battle during modern conventional warfare. After Saddam Hussein's regime was removed from power, the Iraq campaign moved into a different type of asymmetric warfare where the coalition's use of superior conventional warfare training, tactics and technology was of much less use against continued opposition from the various partisan groups operating inside Iraq.
Syria
Much of the 2012–present Syrian Civil War has been asymmetrical. The Syrian National Coalition, Mujahideen, and Kurdish Democratic Union Party have been engaging with the forces of the Syrian government through asymmetric means. The conflict has seen large-scale asymmetric warfare across the country, with the forces opposed to the government unable to engage symmetrically with the Syrian government and resorting instead to other asymmetric tactics such as suicide bombings and targeted assassinations.
Ukraine
The 2022 Russian invasion of Ukraine has resulted in what could be described in some respects as an asymmetrical warfare scenario. Russia has a much larger economy, population, and has superior military might to Ukraine. The use of MAGURA V5 unmanned surface vehicles (USVs) to attack Russian Black Sea Fleet ships such as the Tsezar Kunikov has been cited as example of asymmetrical warfare by analysts.
Semi-symmetric warfare
A new understanding of warfare has emerged amidst the 2022 Russian invasion of Ukraine. Although this type of warfare does not oppose an insurgency to a counter-insurgency force, it does involve two actors with substantially asymmetrical means of waging war. Notably, as technology has improved war-fighting capabilities, it has also made them more complex, thus requiring greater expertise, training, flexibility and decentralization. The nominally weaker military can exploit those complexities and seek to eliminate the asymmetry. This has been observed in Ukraine, as defending forces used a rich arsenal of anti-tank and anti-air missiles to negate the invading forces' apparent mechanized and aerial superiority, thus denying their ability to conduct combined arms operations. The success of this strategy will be compounded by access to real-time intelligence and the adversary's inability to utilize its forces to the maximum of their potential due to factors such as the inability to plan, brief and execute complex, full-spectrum operations.
See also
References
Further reading
Bibliographies
Compiled by Joan T. Phillips Bibliographer at Air University Library: A Bibliography of Asymmetric Warfare, August 2005.
Asymmetric Warfare and the Revolution in Military Affairs (RMA) Debate sponsored by the Project on Defense Alternatives
Books
Articles and papers
A mathematical approach to the concept.
Warfare by type
Military strategy
Military science
Military doctrines
Warfare | Asymmetric warfare | [
"Physics"
] | 6,386 | [
"Symmetry",
"Asymmetry"
] |
46,890 | https://en.wikipedia.org/wiki/Frequency-hopping%20spread%20spectrum | Frequency-hopping spread spectrum (FHSS) is a method of transmitting radio signals by rapidly changing the carrier frequency among many frequencies occupying a large spectral band. The changes are controlled by a code known to both transmitter and receiver. FHSS is used to avoid interference, to prevent eavesdropping, and to enable code-division multiple access (CDMA) communications.
The frequency band is divided into smaller sub-bands. Signals rapidly change ("hop") their carrier frequencies among the center frequencies of these sub-bands in a determined order. Interference at a specific frequency will affect the signal only during a short interval.
FHSS offers four main advantages over a fixed-frequency transmission:
FHSS signals are highly resistant to narrowband interference because the signal hops to a different frequency band.
Signals are difficult to intercept if the frequency-hopping pattern is not known.
Jamming is also difficult if the pattern is unknown; the signal can be jammed only for a single hopping period if the spreading sequence is unknown.
FHSS transmissions can share a frequency band with many types of conventional transmissions with minimal mutual interference. FHSS signals add minimal interference to narrowband communications, and vice versa.
Usage
Military
Spread-spectrum signals are highly resistant to deliberate jamming unless the adversary has knowledge of the frequency-hopping pattern. Military radios generate the frequency-hopping pattern under the control of a secret Transmission Security Key (TRANSEC) that the sender and receiver share in advance. This key is generated by devices such as the KY-57 Speech Security Equipment. United States military radios that use frequency hopping include the JTIDS/MIDS family (Link-16), the HAVE QUICK Aeronautical Mobile communications system, and the SINCGARS Combat Net Radio.
Civilian
In the US, since the Federal Communications Commission (FCC) amended rules to allow FHSS systems in the unregulated 2.4 GHz band, many consumer devices in that band have employed various FHSS modes. FCC CFR 47 part 15.247 covers the regulations in the US for the 902–928 MHz, 2400–2483.5 MHz, and 5725–5850 MHz bands, and the requirements for frequency hopping.
Some walkie-talkies that employ FHSS technology have been developed for unlicensed use on the 900 MHz band. FHSS technology is also used in many hobby transmitters and receivers used for radio-controlled model cars, airplanes, and drones. A type of multiple access is achieved allowing hundreds of transmitter/receiver pairs to be operated simultaneously on the same band, in contrast to previous FM or AM radio-controlled systems that had limited simultaneous channels.
Technical considerations
The overall bandwidth required for frequency hopping is much wider than that required to transmit the same information using only one carrier frequency. But because transmission occurs only on a small portion of this bandwidth at any given time, the instantaneous interference bandwidth is really the same. While providing no extra protection against wideband thermal noise, the frequency-hopping approach reduces the degradation caused by narrowband interference sources.
One of the challenges of frequency-hopping systems is to synchronize the transmitter and receiver. One approach is to have a guarantee that the transmitter will use all the channels in a fixed period of time. The receiver can then find the transmitter by picking a random channel and listening for valid data on that channel. The transmitter's data is identified by a special sequence of data that is unlikely to occur over the segment of data for this channel, and the segment can also have a checksum for integrity checking and further identification. The transmitter and receiver can use fixed tables of frequency-hopping patterns, so that once synchronized they can maintain communication by following the table.
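As an illustrative sketch of the shared-table approach (not the scheme of any particular standard), both ends can derive the same pseudo-random hop sequence from a pre-shared seed; the channel plan, seed, and hop count below are assumptions made only for the example.

```python
# Minimal sketch: deriving a shared frequency-hopping table from a pre-shared
# seed. Illustrative only; the 79 x 1 MHz channel plan starting at 2402 MHz
# and the hop count are assumptions, not any standard's parameters.
import random

CHANNELS_MHZ = [2402 + k for k in range(79)]  # assumed channel center frequencies

def hop_table(shared_seed: int, hops: int) -> list[int]:
    """Deterministically derive a hop sequence of channel center frequencies.

    Transmitter and receiver seed identical PRNGs with the shared secret,
    so they compute the same table and stay synchronized by stepping
    through it together.
    """
    rng = random.Random(shared_seed)
    return [rng.choice(CHANNELS_MHZ) for _ in range(hops)]

tx_sequence = hop_table(shared_seed=0xC0FFEE, hops=8)
rx_sequence = hop_table(shared_seed=0xC0FFEE, hops=8)
assert tx_sequence == rx_sequence   # same seed -> same hopping pattern
print(tx_sequence)                  # the agreed sequence of center frequencies (MHz)
```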
In the US, FCC part 15 on unlicensed spread spectrum systems in the 902–928 MHz and 2.4 GHz bands permits more power than is allowed for non-spread-spectrum systems. Both FHSS and direct-sequence spread-spectrum (DSSS) systems can transmit at 1 watt, a thousandfold increase from the 1 milliwatt limit on non-spread-spectrum systems. The FCC also prescribes a minimum number of frequency channels and a maximum dwell time for each channel.
Origins
In 1899, Guglielmo Marconi experimented with frequency-selective reception in an attempt to minimise interference.
The earliest mentions of frequency hopping in open literature are in US patent 725,605, awarded to Nikola Tesla on March 17, 1903, and in radio pioneer Jonathan Zenneck's book Wireless Telegraphy (German, 1908, English translation McGraw Hill, 1915), although Zenneck writes that Telefunken had already tried it. Nikola Tesla doesn't mention the phrase "frequency hopping" directly, but certainly alludes to it. Entitled Method of Signaling, the patent describes a system that would enable radio communication without any danger of the signals or messages being disturbed, intercepted, interfered with in any way.
The German military made limited use of frequency hopping for communication between fixed command points in World War I to prevent eavesdropping by British forces, who did not have the technology to follow the sequence. Jonathan Zenneck's book Wireless Telegraphy was originally published in German in 1908, but was translated into English in 1915 as the enemy started using frequency hopping on the front line.
In 1920, Otto B. Blackwell, De Loss K. Martin, and Gilbert S. Vernam filed a patent application for a "Secrecy Communication System", granted as U.S. Patent 1,598,673 in 1926. This patent described a method of transmitting signals on multiple frequencies in a random manner for secrecy, anticipating key features of later frequency hopping systems.
A Polish engineer and inventor, Leonard Danilewicz, claimed to have suggested the concept of frequency hopping in 1929 to the Polish General Staff, but it was rejected.
In 1932, a patent was awarded to Willem Broertjes, named "Method of maintaining secrecy in the transmission of wireless telegraphic messages", which describes a system where "messages are transmitted by means of a group of frequencies... known to the sender and receiver alone, and alternated at will during transmission of the messages".
During World War II, the US Army Signal Corps was inventing a communication system called SIGSALY, which incorporated spread spectrum in a single frequency context. But SIGSALY was a top-secret communications system, so its existence was not known until the 1980s.
In 1942, actress Hedy Lamarr and composer George Antheil received a patent for their "Secret Communications System", an early version of frequency hopping that used a piano roll to switch among 88 frequencies, intended to make radio-guided torpedoes harder for enemies to detect or jam. They then donated the patent to the U.S. Navy.
Frequency-hopping ideas may have been rediscovered in the 1950s during patent searches when private companies were independently developing direct-sequence Code Division Multiple Access, a non-frequency-hopping form of spread-spectrum. In 1957, engineers at Sylvania Electronic Systems Division adopted a similar idea, using the recently invented transistor instead of Lamarr's and Antheil's clockwork technology. In 1962, the US Navy utilized Sylvania Electronic Systems Division's work during the Cuban Missile Crisis.
A practical application of frequency hopping was developed by Ray Zinn, co-founder of Micrel Corporation. Zinn developed a method allowing radio devices to operate without the need to synchronize a receiver with a transmitter. Using frequency hopping and sweep modes, Zinn's method is primarily applied in low-data-rate wireless applications such as utility metering, machine and equipment monitoring and metering, and remote control. In 2006, Zinn received a patent for his "Wireless device and method using frequency hopping and sweep modes."
Variations
Adaptive frequency-hopping spread spectrum (AFH) as used in Bluetooth improves resistance to radio frequency interference by avoiding crowded frequencies in the hopping sequence. This sort of adaptive transmission is easier to implement with FHSS than with DSSS.
The key idea behind AFH is to use only the "good" frequencies and avoid the "bad" ones—those experiencing frequency selective fading, those on which a third party is trying to communicate, or those being actively jammed. Therefore, AFH should be complemented by a mechanism for detecting good and bad channels.
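A minimal sketch of the "bad channel removal" idea, assuming a hypothetical per-channel error-rate measurement; the threshold, minimum channel count, and channel grid are illustrative assumptions rather than values from the Bluetooth specification.

```python
# Channels whose measured error rate exceeds a threshold are dropped from the
# hop set and the radio keeps hopping over the remaining "good" channels.
def adapt_hop_set(all_channels, error_rate, threshold=0.1, min_channels=20):
    good = [ch for ch in all_channels if error_rate.get(ch, 0.0) <= threshold]
    # Keep at least a minimum number of channels even if many are classified bad.
    if len(good) < min_channels:
        ranked = sorted(all_channels, key=lambda ch: error_rate.get(ch, 0.0))
        good = ranked[:min_channels]
    return good

channels = list(range(79))                     # e.g. a Bluetooth-style channel grid
measured = {7: 0.4, 8: 0.35, 9: 0.5}           # crowded channels seen by the radio
print(len(adapt_hop_set(channels, measured)))  # -> 76 remaining "good" channels
```

The min_channels guard reflects the kind of requirement mentioned above that hopping systems use at least a minimum number of frequency channels.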
But if the radio frequency interference is itself dynamic, then AFH's strategy of "bad channel removal" may not work well. For example, if there are several colocated frequency-hopping networks (as Bluetooth Piconet), they are mutually interfering and AFH's strategy fails to avoid this interference.
The problem of dynamic interference, gradual reduction of available hopping channels and backward compatibility with legacy Bluetooth devices was resolved in version 1.2 of the Bluetooth Standard (2003). Such a situation can often happen in the scenarios that use unlicensed spectrum.
In addition, dynamic radio frequency interference is expected to occur in the scenarios related to cognitive radio, where the networks and the devices should exhibit frequency-agile operation.
Chirp modulation can be seen as a form of frequency-hopping that simply scans through the available frequencies in consecutive order to communicate.
Frequency hopping can be superimposed on other modulations or waveforms to enhance the system performance.
See also
Dynamic frequency hopping
List of multiple discoveries
Maximum length sequence
Orthogonal frequency-division multiplexing
Radio-frequency sweep
Notes
References
Bibliography
Computer network technology
Multiplexing
Quantized radio modulation modes
Radio frequency propagation
Radio resource management
Military radio systems
| Frequency-hopping spread spectrum | [
"Physics"
] | 1,960 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves"
] |
46,895 | https://en.wikipedia.org/wiki/Euler%20numbers | In mathematics, the Euler numbers are a sequence En of integers defined by the Taylor series expansion
$$\frac{1}{\cosh t} = \frac{2}{e^{t} + e^{-t}} = \sum_{n=0}^{\infty} \frac{E_n}{n!}\, t^n,$$
where $\cosh t$ is the hyperbolic cosine function. The Euler numbers are related to a special value of the Euler polynomials, namely:
$$E_n = 2^n E_n\!\left(\tfrac{1}{2}\right).$$
The Euler numbers appear in the Taylor series expansions of the secant and hyperbolic secant functions. The latter is the function in the definition. They also occur in combinatorics, specifically when counting the number of alternating permutations of a set with an even number of elements.
Examples
The odd-indexed Euler numbers are all zero. The even-indexed ones have alternating signs. Some values are:
E0 = 1
E2 = −1
E4 = 5
E6 = −61
E8 = 1385
E10 = −50521
E12 = 2702765
E14 = −199360981
E16 = 19391512145
E18 = −2404879675441
Some authors re-index the sequence in order to omit the odd-numbered Euler numbers with value zero, or change all signs to positive. This article adheres to the convention adopted above.
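The defining expansion above implies the recurrence $\sum_{k=0,2,4,\ldots}^{n} \binom{n}{k} E_{n-k} = 0$ for $n > 0$ (with $E_0 = 1$), which the following sketch uses to reproduce the values above; it is offered only as a straightforward illustration, not an optimized algorithm.

```python
from math import comb

# Euler numbers from 1/cosh(t) = sum E_n t^n / n!, which gives the recurrence
# E_n = -sum_{k=2,4,...<=n} C(n, k) * E_{n-k}, with E_0 = 1.
def euler_numbers(n_max):
    E = [0] * (n_max + 1)
    E[0] = 1
    for n in range(1, n_max + 1):
        E[n] = -sum(comb(n, k) * E[n - k] for k in range(2, n + 1, 2))
    return E

print(euler_numbers(10))   # [1, 0, -1, 0, 5, 0, -61, 0, 1385, 0, -50521]
```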
Explicit formulas
In terms of Stirling numbers of the second kind
The following two formulas express the Euler numbers in terms of Stirling numbers of the second kind:
where denotes the Stirling numbers of the second kind, and denotes the rising factorial.
As a double sum
The following two formulas express the Euler numbers as double sums
As an iterated sum
An explicit formula for Euler numbers is:
where $i$ denotes the imaginary unit with $i^2 = -1$.
As a sum over partitions
The Euler number can be expressed as a sum over the even partitions of ,
as well as a sum over the odd partitions of ,
where in both cases and
is a multinomial coefficient. The Kronecker deltas in the above formulas restrict the sums over the s to and to , respectively.
As an example,
As a determinant
is given by the determinant
As an integral
is also given by the following integrals:
Congruences
W. Zhang obtained the following combinatorial identities concerning the Euler numbers. For any prime , we have
W. Zhang and Z. Xu proved that, for any prime and integer , we have
where $\phi$ denotes Euler's totient function.
Lower bound
The Euler numbers grow quite rapidly for large indices, as they have the lower bound
Euler zigzag numbers
The Taylor series of $\sec x + \tan x$ is
$$\sec x + \tan x = \sum_{n=0}^{\infty} \frac{A_n}{n!}\, x^n,$$
where $A_n$ denotes the Euler zigzag numbers, beginning with
1, 1, 1, 2, 5, 16, 61, 272, 1385, 7936, 50521, 353792, 2702765, 22368256, 199360981, 1903757312, 19391512145, 209865342976, 2404879675441, 29088885112832, ...
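A short sketch computing these zigzag numbers with the boustrophedon (Entringer) recurrence $E(0,0)=1$, $E(n,0)=0$, $E(n,k)=E(n,k-1)+E(n-1,n-k)$, $A_n = E(n,n)$; this is one standard way to generate the sequence, shown here only as an illustration.

```python
# Euler zigzag numbers A_n (they count alternating permutations) via the
# Entringer recurrence, one triangle row at a time.
def zigzag_numbers(n_max):
    A = []
    prev = [1]                           # row n = 0
    A.append(prev[-1])
    for n in range(1, n_max + 1):
        row = [0]                        # E(n, 0) = 0
        for k in range(1, n + 1):
            row.append(row[k - 1] + prev[n - k])
        A.append(row[-1])                # A_n = E(n, n)
        prev = row
    return A

print(zigzag_numbers(10))
# [1, 1, 1, 2, 5, 16, 61, 272, 1385, 7936, 50521]
```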
For all even $n$,
$$A_n = (-1)^{n/2} E_n,$$
where $E_n$ is the Euler number, and for all odd $n$,
$$A_n = (-1)^{(n-1)/2} \frac{2^{n+1}\left(2^{n+1}-1\right)}{n+1} B_{n+1},$$
where $B_n$ is the Bernoulli number.
For every n,
See also
Bell number
Bernoulli number
Dirichlet beta function
Euler–Mascheroni constant
References
External links
Eponymous numbers in mathematics
Integer sequences
Leonhard Euler | Euler numbers | [
"Mathematics"
] | 752 | [
"Sequences and series",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Mathematical objects",
"Combinatorics",
"Numbers",
"Number theory"
] |
46,943 | https://en.wikipedia.org/wiki/Audio%20time%20stretching%20and%20pitch%20scaling | Time stretching is the process of changing the speed or duration of an audio signal without affecting its pitch. Pitch scaling is the opposite: the process of changing the pitch without affecting the speed. Pitch shift is pitch scaling implemented in an effects unit and intended for live performance. Pitch control is a simpler process which affects pitch and speed simultaneously by slowing down or speeding up a recording.
These processes are often used to match the pitches and tempos of two pre-recorded clips for mixing when the clips cannot be reperformed or resampled. Time stretching is often used to adjust radio commercials and the audio of television advertisements to fit exactly into the 30 or 60 seconds available. It can be used to conform longer material to a designated time slot, such as a 1-hour broadcast.
Resampling
The simplest way to change the duration or pitch of an audio recording is to change the playback speed. For a digital audio recording, this can be accomplished through sample rate conversion. When using this method, the frequencies in the recording are always scaled at the same ratio as the speed, transposing its perceived pitch up or down in the process. Slowing down the recording to increase duration also lowers the pitch, while speeding it up for a shorter duration respectively raises the pitch, creating the so-called Chipmunk effect. When resampling audio to a notably lower pitch, it may be preferred that the source audio is of a higher sample rate, as slowing down the playback rate will reproduce an audio signal of a lower resolution, and therefore reduce the perceived clarity of the sound. On the contrary, when resampling audio to a notably higher pitch, it may be preferred to incorporate an interpolation filter, as frequencies that surpass the Nyquist frequency (determined by the sampling rate of the audio reproduction software or device) will create usually undesired sound distortions, a phenomenon that is also known as aliasing.
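A minimal illustration of the playback-speed change described above, using simple linear interpolation; a real sample rate converter would, as noted, use a proper interpolation (anti-aliasing) filter, and the tone and speed factor here are arbitrary.

```python
import numpy as np

# Speed change by resampling: playing a recording at `speed` times the original
# rate shortens it and raises its pitch by the same factor.
def resample_speed(x, speed):
    n_out = int(len(x) / speed)
    positions = np.arange(n_out) * speed
    return np.interp(positions, np.arange(len(x)), x)   # linear interpolation

sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 441 * t)        # 1 second of a 441 Hz tone
faster = resample_speed(tone, 2.0)        # 0.5 seconds, perceived as 882 Hz
print(len(tone), len(faster))             # 44100 22050
```

Doubling the speed halves the duration and doubles the perceived pitch, which is exactly the coupling that the time-stretching methods below are designed to break.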
Frequency domain
Phase vocoder
One way of stretching the length of a signal without affecting the pitch is to build a phase vocoder after Flanagan, Golden, and Portnoff.
Basic steps (a minimal code sketch follows the list):
compute the instantaneous frequency/amplitude relationship of the signal using the STFT, which is the discrete Fourier transform of a short, overlapping and smoothly windowed block of samples;
apply some processing to the Fourier transform magnitudes and phases (like resampling the FFT blocks); and
perform an inverse STFT by taking the inverse Fourier transform on each chunk and adding the resulting waveform chunks, also called overlap and add (OLA).
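A simplified Python sketch of these three steps, assuming a Hann window, a 2048-sample FFT and a 512-sample hop (all illustrative); it propagates each bin's phase at the new hop size but makes no attempt at the transient handling discussed below, so it should be read as an outline rather than a production-quality implementation.

```python
import numpy as np

def phase_vocoder_stretch(x, stretch, n_fft=2048, hop=512):
    window = np.hanning(n_fft)
    # Step 1: analysis STFT frames.
    frames = np.array([np.fft.rfft(window * x[i:i + n_fft])
                       for i in range(0, len(x) - n_fft, hop)])

    syn_hop = int(round(hop * stretch))
    # Expected phase advance of each bin's center frequency over one analysis hop.
    omega = 2 * np.pi * np.arange(n_fft // 2 + 1) * hop / n_fft
    out = np.zeros(len(frames) * syn_hop + n_fft)
    phase = np.angle(frames[0])

    for k in range(1, len(frames)):
        # Step 2: estimate each bin's instantaneous frequency from the measured
        # phase increment, then propagate the phase at the new (synthesis) hop.
        dphi = np.angle(frames[k]) - np.angle(frames[k - 1]) - omega
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))      # wrap to [-pi, pi]
        phase += (omega + dphi) * stretch
        # Step 3: inverse FFT of the modified frame and overlap-add (OLA).
        grain = np.fft.irfft(np.abs(frames[k]) * np.exp(1j * phase))
        start = k * syn_hop
        out[start:start + n_fft] += window * grain
    return out

sr = 44100
x = np.sin(2 * np.pi * 220 * np.arange(2 * sr) / sr)
y = phase_vocoder_stretch(x, stretch=1.5)      # ~3 s long, pitch unchanged
print(len(x) / sr, round(len(y) / sr, 2))
```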
The phase vocoder handles sinusoid components well, but early implementations introduced considerable smearing on transient ("beat") waveforms at all non-integer compression/expansion rates, which renders the results phasey and diffuse. Recent improvements allow better quality results at all compression/expansion ratios but a residual smearing effect still remains.
The phase vocoder technique can also be used to perform pitch shifting, chorusing, timbre manipulation, harmonizing, and other unusual modifications, all of which can be changed as a function of time.
Sinusoidal spectral modeling
Another method for time stretching relies on a spectral model of the signal. In this method, peaks are identified in frames using the STFT of the signal, and sinusoidal "tracks" are created by connecting peaks in adjacent frames. The tracks are then re-synthesized at a new time scale. This method can yield good results on both polyphonic and percussive material, especially when the signal is separated into sub-bands. However, this method is more computationally demanding than other methods.
Time domain
SOLA
Rabiner and Schafer in 1978 put forth an alternate solution that works in the time domain: attempt to find the period (or equivalently the fundamental frequency) of a given section of the wave using some pitch detection algorithm (commonly the peak of the signal's autocorrelation, or sometimes cepstral processing), and crossfade one period into another.
This is called time-domain harmonic scaling or the synchronized overlap-add method (SOLA) and performs somewhat faster than the phase vocoder on slower machines but fails when the autocorrelation mis-estimates the period of a signal with complicated harmonics (such as orchestral pieces).
Adobe Audition (formerly Cool Edit Pro) seems to solve this by looking for the period closest to a center period that the user specifies, which should be an integer multiple of the tempo, and between 30 Hz and the lowest bass frequency.
This is much more limited in scope than the phase vocoder based processing, but can be made much less processor intensive, for real-time applications. It provides the most coherent results for single-pitched sounds like voice or musically monophonic instrument recordings.
High-end commercial audio processing packages either combine the two techniques (for example by separating the signal into sinusoid and transient waveforms), or use other techniques based on the wavelet transform, or artificial neural network processing, producing the highest-quality time stretching.
Frame-based approach
In order to preserve an audio signal's pitch when stretching or compressing its duration, many time-scale modification (TSM) procedures follow a frame-based approach. Given an original discrete-time audio signal, this strategy's first step is to split the signal into short analysis frames of fixed length. The analysis frames are spaced by a fixed number of samples, called the analysis hopsize $H_a$. To achieve the actual time-scale modification, the analysis frames are then temporally relocated to have a synthesis hopsize $H_s$. This frame relocation results in a modification of the signal's duration by a stretching factor of $\alpha = H_s / H_a$. However, simply superimposing the unmodified analysis frames typically results in undesired artifacts such as phase discontinuities or amplitude fluctuations. To prevent these kinds of artifacts, the analysis frames are adapted to form synthesis frames, prior to the reconstruction of the time-scale modified output signal. The strategy of how to derive the synthesis frames from the analysis frames is a key difference among different TSM procedures.
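As a hypothetical worked example, if the analysis hopsize is $H_a = 512$ samples and the synthesis hopsize is $H_s = 768$ samples, the stretching factor is $\alpha = H_s / H_a = 768 / 512 = 1.5$, so the output is half again as long as the input while the content of each frame, and hence the pitch, is unchanged.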
Speed hearing and speed talking
For the specific case of speech, time stretching can be performed using PSOLA.
Time-compressed speech is the representation of verbal text in compressed time. While one might expect speeding up to reduce comprehension, Herb Friedman says that "Experiments have shown that the brain works most efficiently if the information rate through the ears—via speech—is the 'average' reading rate, which is about 200–300 wpm (words per minute), yet the average rate of speech is in the neighborhood of 100–150 wpm."
Listening to time-compressed speech is seen as the equivalent of speed reading.
Pitch scaling
These techniques can also be used to transpose an audio sample while holding speed or duration constant. This may be accomplished by time stretching and then resampling back to the original length. Alternatively, the frequency of the sinusoids in a sinusoidal model may be altered directly, and the signal reconstructed at the appropriate time scale.
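A self-contained sketch of the first approach (time stretch, then resample back), using a crude windowed overlap-add stretcher rather than a phase vocoder, so audible artifacts are expected; the frame size, hop, and test tone are illustrative assumptions.

```python
import numpy as np

# Crude time stretch: windowed grains are copied from analysis positions (hop)
# to synthesis positions (hop * stretch) with no phase handling.
def ola_stretch(x, stretch, frame=2048, hop=512):
    window = np.hanning(frame)
    syn_hop = int(round(hop * stretch))
    n_frames = (len(x) - frame) // hop
    out = np.zeros(n_frames * syn_hop + frame)
    for k in range(n_frames):
        grain = window * x[k * hop: k * hop + frame]
        out[k * syn_hop: k * syn_hop + frame] += grain
    return out

def resample(x, speed):
    pos = np.arange(int(len(x) / speed)) * speed
    return np.interp(pos, np.arange(len(x)), x)

def pitch_scale(x, factor):
    # Stretch duration by `factor`, then speed playback up by `factor`:
    # duration returns to roughly the original, pitch is multiplied by `factor`.
    return resample(ola_stretch(x, factor), factor)

sr = 44100
x = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
up_a_fifth = pitch_scale(x, 1.5)          # ~330 Hz, roughly the original length
print(len(x), len(up_a_fifth))
```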
Transposing can be called frequency scaling or pitch shifting, depending on perspective.
For example, one could move the pitch of every note up by a perfect fifth, keeping the tempo the same.
One can view this transposition as "pitch shifting", "shifting" each note up 7 keys on a piano keyboard, or adding a fixed amount on the Mel scale, or adding a fixed amount in linear pitch space.
One can view the same transposition as "frequency scaling", "scaling" (multiplying) the frequency of every note by 3/2.
Musical transposition preserves the ratios of the harmonic frequencies that determine the sound's timbre, unlike the frequency shift performed by amplitude modulation, which adds a fixed frequency offset to the frequency of every note. (In theory one could perform a literal pitch scaling in which the musical pitch space location is scaled [a higher note would be shifted at a greater interval in linear pitch space than a lower note], but that is highly unusual, and not musical.)
Time domain processing works much better here, as smearing is less noticeable, but scaling vocal samples distorts the formants into a sort of Alvin and the Chipmunks-like effect, which may be desirable or undesirable.
A process that preserves the formants and character of a voice involves analyzing the signal with a channel vocoder or LPC vocoder plus any of several pitch detection algorithms and then resynthesizing it at a different fundamental frequency.
A detailed description of older analog recording techniques for pitch shifting has been published elsewhere.
In consumer software
Pitch-corrected audio timestretch is found in every modern web browser as part of the HTML standard for media playback. Similar controls are ubiquitous in media applications and frameworks such as GStreamer and Unity.
See also
Beatmatching
Dynamic tonality — real-time changes of tuning and timbre
Pitch correction
Scrubbing (audio)
Nightcore
References
External links
Time Stretching and Pitch Shifting Overview A comprehensive overview of current time and pitch modification techniques by Stephan Bernsee
Stephan Bernsee's smbPitchShift C source code C source code for doing frequency domain pitch manipulation
pitchshift.js from KievII A Javascript pitchshifter based on smbPitchShift code, from the open source KievII library
The Phase Vocoder: A Tutorial - A good description of the phase vocoder
New Phase-Vocoder Techniques for Pitch-Shifting, Harmonizing and Other Exotic Effects
A new Approach to Transient Processing in the Phase Vocoder
PICOLA and TDHS
How to build a pitch shifter Theory, equations, figures and performances of a real-time guitar pitch shifter running on a DSP chip
ZTX Time Stretching Library Free and commercial versions of a popular 3rd party time stretching library for iOS, Linux, Windows and Mac OS X
Elastique by zplane commercial cross-platform library, mainly used by DJ and DAW manufacturers
Voice Synth from Qneo - specialized synthesizer for creative voice sculpting
TSM toolbox Free MATLAB implementations of various Time-Scale Modification procedures
, a well-known algorithm for extreme (>10×) time stretching
Bungee open source and commercial libraries for real time audio stretching
Rubber Band — open source library for time stretching and pitch shifting
SoundTouch — open-source library for changing the tempo, pitch and playback rate
Audio engineering
Digital signal processing
Sound effects | Audio time stretching and pitch scaling | [
"Engineering"
] | 2,126 | [
"Electrical engineering",
"Audio engineering"
] |
46,955 | https://en.wikipedia.org/wiki/Identity%20map%20pattern | In the design of DBMS, the identity map pattern is a database access design pattern used to improve performance by providing a context-specific, in-memory cache to prevent duplicate retrieval of the same object data from the database.
If the requested data has already been loaded from the database, the identity map returns the same instance of the already instantiated object, but if it has not been loaded yet, it loads it and stores the new object in the map. In this way, it follows a similar principle to lazy loading.
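A minimal sketch of the pattern, assuming a hypothetical load_from_db access function and an integer primary key; repeated lookups for the same key return the same in-memory instance and hit the database only once.

```python
# Identity map keyed by primary key.
class PersonIdentityMap:
    def __init__(self):
        self._cache = {}          # primary key -> already-instantiated object

    def get(self, person_id):
        if person_id in self._cache:
            return self._cache[person_id]     # same instance, no second query
        person = load_from_db(person_id)      # load lazily, only on first request
        self._cache[person_id] = person
        return person

def load_from_db(person_id):
    # Stand-in for a real query such as "SELECT ... WHERE id = %s".
    return {"id": person_id, "name": f"person-{person_id}"}

pmap = PersonIdentityMap()
assert pmap.get(7) is pmap.get(7)    # repeated lookups return the same instance
```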
There are four types of identity maps:
Explicit
Generic
Session
Class
See also
Active record
Identity function
Map (mathematics)
Lazy loading
References
Architectural pattern (computer science)
Software design patterns | Identity map pattern | [
"Technology"
] | 140 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
46,956 | https://en.wikipedia.org/wiki/Cepstrum | In Fourier analysis, the cepstrum (; plural cepstra, adjective cepstral) is the result of computing the inverse Fourier transform (IFT) of the logarithm of the estimated signal spectrum. The method is a tool for investigating periodic structures in frequency spectra. The power cepstrum has applications in the analysis of human speech.
The term cepstrum was derived by reversing the first four letters of spectrum. Operations on cepstra are labelled quefrency analysis (or quefrency alanysis), liftering, or cepstral analysis. It may be pronounced in the two ways given, the second having the advantage of avoiding confusion with kepstrum.
Origin
The concept of the cepstrum was introduced in 1963 by B. P. Bogert, M. J. Healy, and J. W. Tukey. It serves as a tool to investigate periodic structures in frequency spectra. Such effects are related to noticeable echos or reflections in the signal, or to the occurrence of harmonic frequencies (partials, overtones). Mathematically it deals with the problem of deconvolution of signals in the frequency space.
References to the Bogert paper, in a bibliography, are often edited incorrectly. The terms "quefrency", "alanysis", "cepstrum" and "saphe" were invented by the authors by rearranging the letters in frequency, analysis, spectrum, and phase. The invented terms are defined in analogy to the older terms.
General definition
The cepstrum is the result of the following sequence of mathematical operations (a minimal code sketch follows the list):
transformation of a signal from the time domain to the frequency domain
computation of the logarithm of the spectral amplitude
inverse transformation back to a time-like domain (the quefrency domain), where the final independent variable, the quefrency, has a time scale.
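A minimal sketch of these steps for the real cepstrum of a block of samples; the small epsilon added before the logarithm is an implementation detail to avoid log(0), not part of the definition.

```python
import numpy as np

# Real cepstrum: FFT -> log magnitude -> inverse FFT.
def real_cepstrum(x):
    spectrum = np.fft.fft(x)
    log_magnitude = np.log(np.abs(spectrum) + 1e-12)
    return np.real(np.fft.ifft(log_magnitude))
```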
Types
The cepstrum is used in many variants. Most important are:
power cepstrum: The logarithm is taken from the "power spectrum"
complex cepstrum: The logarithm is taken from the spectrum, which is calculated via Fourier analysis
The following abbreviations are used in the formulas to explain the cepstrum:
Power cepstrum
The "cepstrum" was originally defined as power cepstrum by the following relationship:
The power cepstrum has main applications in analysis of sound and vibration signals. It is a complementary tool to spectral analysis.
Sometimes it is also defined as:
Due to this formula, the cepstrum is also sometimes called the spectrum of a spectrum. It can be shown that both formulas are consistent with each other as the frequency spectral distribution remains the same, the only difference being a scaling factor which can be applied afterwards. Some articles prefer the second formula.
Other notations are possible due to the fact that the log of the power spectrum is equal to the log of the spectrum if a scaling factor 2 is applied:
and therefore:
which provides a relationship to the real cepstrum (see below).
Further, it shall be noted, that the final squaring operation in the formula for the power spectrum is sometimes called unnecessary and therefore sometimes omitted.
The real cepstrum is directly related to the power cepstrum:
It is derived from the complex cepstrum (defined below) by discarding the phase information (contained in the imaginary part of the complex logarithm). It has a focus on periodic effects in the amplitudes of the spectrum:
Complex cepstrum
The complex cepstrum was defined by Oppenheim in his development of homomorphic system theory as
$$\mathcal{F}^{-1}\!\left\{ \log\!\left( \mathcal{F}\{ f(t) \} \right) \right\}.$$
The formula is also provided in other literature.
As is complex the log-term can be also written with as a product of magnitude and phase, and subsequently as a sum. Further simplification is obvious, if log is a natural logarithm with base e:
Therefore: The complex cepstrum can be also written as:
The complex cepstrum retains the information about the phase. Thus it is always possible to return from the quefrency domain to the time domain by the inverse operation:
where b is the base of the used logarithm.
Main application is the modification of the signal in the quefrency domain (liftering) as an analog operation to filtering in the spectral frequency domain. An example is the suppression of echo effects by suppression of certain quefrencies.
The phase cepstrum (after phase spectrum) is related to the complex cepstrum as
phase spectrum = (complex cepstrum − time reversal of complex cepstrum)².
Related concepts
The independent variable of a cepstral graph is called the quefrency. The quefrency is a measure of time, though not in the sense of a signal in the time domain. For example, if the sampling rate of an audio signal is 44100 Hz and there is a large peak in the cepstrum whose quefrency is 100 samples, the peak indicates the presence of a fundamental frequency that is 44100/100 = 441 Hz. This peak occurs in the cepstrum because the harmonics in the spectrum are periodic and the period corresponds to the fundamental frequency, since harmonics are integer multiples of the fundamental frequency.
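A short sketch reproducing this worked example, using a harmonic-rich 441 Hz test tone (a pure sine would show no quefrency peak, as noted further below); the number of harmonics and the search range are illustrative choices.

```python
import numpy as np

# A 441 Hz tone with many harmonics, sampled at 44.1 kHz, should give a
# cepstral peak near quefrency 100 samples (44100 / 441 = 100).
sr, f0 = 44100, 441
t = np.arange(sr) / sr
x = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 21))

spectrum = np.fft.fft(x * np.hanning(len(x)))
cepstrum = np.real(np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)))

search = cepstrum[50:1000]                 # skip the low-quefrency envelope region
peak = 50 + int(np.argmax(search))
print(peak, sr / peak)                     # expected: ~100 samples, ~441 Hz
```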
The kepstrum, which stands for "Kolmogorov-equation power-series time response", is similar to the cepstrum and has the same relation to it as expected value has to statistical average, i.e. cepstrum is the empirically measured quantity, while kepstrum is the theoretical quantity. It was in use before the cepstrum.
The autocepstrum is defined as the cepstrum of the autocorrelation. The autocepstrum is more accurate than the cepstrum in the analysis of data with echoes.
Playing further on the anagram theme, a filter that operates on a cepstrum might be called a lifter. A low-pass lifter is similar to a low-pass filter in the frequency domain. It can be implemented by multiplying by a window in the quefrency domain and then converting back to the frequency domain, resulting in a modified signal, i.e. with signal echo being reduced.
Interpretation
The cepstrum can be seen as information about the rate of change in the different spectrum bands. It was originally invented for characterizing the seismic echoes resulting from earthquakes and bomb explosions. It has also been used to determine the fundamental frequency of human speech and to analyze radar signal returns. Cepstrum pitch determination is particularly effective because the effects of the vocal excitation (pitch) and vocal tract (formants) are additive in the logarithm of the power spectrum and thus clearly separate.
The cepstrum is a representation used in homomorphic signal processing, to convert signals combined by convolution (such as a source and filter) into sums of their cepstra, for linear separation. In particular, the power cepstrum is often used as a feature vector for representing the human voice and musical signals. For these applications, the spectrum is usually first transformed using the mel scale. The result is called the mel-frequency cepstrum or MFC (its coefficients are called mel-frequency cepstral coefficients, or MFCCs). It is used for voice identification, pitch detection and much more. The cepstrum is useful in these applications because the low-frequency periodic excitation from the vocal cords and the formant filtering of the vocal tract, which convolve in the time domain and multiply in the frequency domain, are additive and in different regions in the quefrency domain.
Note that a pure sine wave can not be used to test the cepstrum for its pitch determination from quefrency as a pure sine wave does not contain any harmonics and does not lead to quefrency peaks. Rather, a test signal containing harmonics should be used (such as the sum of at least two sines where the second sine is some harmonic (multiple) of the first sine, or better, a signal with a square or triangle waveform, as such signals provide many overtones in the spectrum.).
An important property of the cepstral domain is that the convolution of two signals can be expressed as the addition of their complex cepstra:
$$\mathcal{F}^{-1}\!\left\{\log\left(\mathcal{F}\{x_1 * x_2\}\right)\right\} = \mathcal{F}^{-1}\!\left\{\log\left(\mathcal{F}\{x_1\}\right)\right\} + \mathcal{F}^{-1}\!\left\{\log\left(\mathcal{F}\{x_2\}\right)\right\}$$
Applications
The concept of the cepstrum has led to numerous applications:
dealing with reflection inference (radar, sonar applications, earth seismology)
estimation of speaker fundamental frequency (pitch)
speech analysis and recognition
medical applications in analysis of electroencephalogram (EEG) and brain waves
machine vibration analysis based on harmonic patterns (gearbox faults, turbine blade failures, ...)
Recently, cepstrum-based deconvolution was used on surface electromyography signals, to remove the effect of the stochastic impulse train, which originates an sEMG signal, from the power spectrum of the sEMG signal itself. In this way, only information about the motor unit action potential (MUAP) shape and amplitude was maintained, which was then used to estimate the parameters of a time-domain model of the MUAP itself.
A short-time cepstrum analysis was proposed by Schroeder and Noll in the 1960s for application to pitch determination of human speech.
References
Further reading
"Speech Signal Analysis"
"Speech analysis: Cepstral analysis vs. LPC", www.advsolned.com
"A tutorial on Cepstrum and LPCCs"
Frequency-domain analysis
Signal processing | Cepstrum | [
"Physics",
"Technology",
"Engineering"
] | 1,943 | [
"Telecommunications engineering",
"Computer engineering",
"Spectrum (physical sciences)",
"Signal processing",
"Frequency-domain analysis"
] |
46,961 | https://en.wikipedia.org/wiki/Thomas%20Crapper | Thomas Crapper (baptised 28 September 1836; died 27 January 1910) was an English plumber and businessman. He founded Thomas Crapper & Co in London, a plumbing equipment company. His notability with regard to toilets has often been overstated, mostly due to the publication in 1969 of a fictional biography by New Zealand satirist Wallace Reyburn.
Crapper held nine patents, three of them for water closet improvements such as the floating ballcock. He improved the S-bend plumbing trap in 1880 by inventing the U-bend. The firm's lavatorial equipment was manufactured at premises in nearby Marlborough Road (now Draycott Avenue). The company owned the world's first bath, toilet and sink showroom in King's Road. Crapper was noted for the quality of his products and received several royal warrants.
Manhole covers with Crapper's company's name on them in Westminster Abbey have become one of London's minor tourist attractions.
Life
Thomas Crapper was born in Thorne, West Riding of Yorkshire, in 1836; the exact date is unknown, but he was baptised on 28 September 1836. His father, Charles, was a sailor. In 1853, he was apprenticed to his brother George, a master plumber in Chelsea, and thereafter spent three years as a journeyman plumber.
In 1861 Crapper set himself up as a sanitary engineer with his own brass foundry and workshops in nearby Marlborough Road.
In the 1880s Prince Albert (later Edward VII) purchased his country seat of Sandringham House in Norfolk and asked Thomas Crapper & Co. to supply the plumbing, including thirty lavatories with cedarwood seats and enclosures, thus giving Crapper his first Royal Warrant. The firm received further warrants from Edward as king and from George V, both as Prince of Wales and as king.
In 1904 Crapper retired, passing the firm to his nephew George and his business partner Robert Marr Wharam. Crapper lived at 12 Thornsett Road, Anerley, for the last six years of his life and died on 27 January 1910. He was buried in the nearby Elmers End Cemetery.
Posthumous fate of the Crapper company
In 1966 the Crapper company was sold by then-owner Robert G. Wharam (son of Robert Marr Wharam) upon his retirement to its rival John Bolding & Sons. Bolding went into liquidation in 1969. The company name fell out of use until it was acquired by Simon Kirby, a historian and collector of antique bathroom fittings, who relaunched the company in Stratford-upon-Avon, producing authentic reproductions of Crapper's original Victorian bathroom fittings.
Achievements
As the first man to set up public showrooms for displaying sanitary ware, Crapper became known as an advocate of sanitary plumbing, popularising the notion of installation inside people's homes. He also helped refine and develop improvements to existing plumbing and sanitary fittings. As a part of his business he maintained a foundry and metal shop, which enabled him to try out new designs and develop more efficient plumbing solutions.
Crapper improved the S-bend trap in 1880. The new U-bend plumbing trap was a significant improvement on the "S" as it could not jam, and unlike the S-bend, it did not have a tendency to dry out and did not need an overflow. The BBC nominated the S-bend as one of the 50 Things That Made the Modern Economy.
Crapper held nine patents, three of them for water closet improvements such as the floating ballcock, but none for the flush toilet itself.
Crapper's advertisements implied the siphonic flush was his invention. One such advertisement read, "Crapper's Valveless Water Waste Preventer (Patent #4,990) One movable part only", even though patent 4,990 (for a minor improvement to the water waste preventer) was not his, but that of Albert Giblin in 1898. However, Crapper's nephew, George, did improve the siphon mechanism by which the water flow starts. A patent for this development was awarded in 1897.
Origin of the word "crap"
It has often been claimed in popular culture that the vulgar slang term for human bodily waste, crap, originated with Thomas Crapper because of his association with lavatories. A common version of this story is that American servicemen stationed in England during World War I saw his name on cisterns and used it as Army slang, i.e., "I'm going to the crapper".
The word crap is actually of Middle English origin and predates its application to bodily waste. Its most likely etymological origin is a combination of two older words: the Dutch krappen (to pluck off, cut off, or separate) and the Old French crappe (siftings, waste or rejected matter, from the medieval Latin crappa). In English, it was used to refer to chaff and also to weeds or other rubbish. Its first recorded application to bodily waste, according to the Oxford English Dictionary, appeared in 1846, 10 years after Crapper was born, under a reference to a crapping ken, or a privy, where ken means a house.
References
Further reading
(fiction)
External links
Thomas Crapper at Snopes.com
Thomas Crapper & Co. Ltd. – the plumbing company founded by Thomas Crapper
Thomas Crapper Water Closet Products Advertisement
1836 births
1910 deaths
British chief executives
British plumbers
British royal warrant holders
People from Thorne, South Yorkshire
Toilets
19th-century British businesspeople
King's Road, Chelsea, London | Thomas Crapper | [
"Biology"
] | 1,152 | [
"Excretion",
"Toilets"
] |
46,966 | https://en.wikipedia.org/wiki/Sleep%20disorder | A sleep disorder, or somnipathy, is a medical disorder affecting an individual's sleep patterns, sometimes impacting physical, mental, social, and emotional functioning. Polysomnography and actigraphy are tests commonly ordered for diagnosing sleep disorders.
Sleep disorders are broadly classified into dyssomnias, parasomnias, circadian rhythm sleep disorders involving the timing of sleep, and other disorders, including those caused by medical or psychological conditions. When a person struggles to fall asleep or stay asleep without any obvious cause, it is referred to as insomnia, which is the most common sleep disorder. Other sleep disorders include sleep apnea, narcolepsy, hypersomnia (excessive sleepiness at inappropriate times), sleeping sickness (disruption of the sleep cycle due to infection), sleepwalking, and night terrors.
Sleep disruptions can be caused by various issues, including teeth grinding (bruxism) and night terrors. Managing sleep disturbances that are secondary to mental, medical, or substance abuse disorders should focus on addressing the underlying conditions.
Sleep disorders are common in both children and adults. However, there is a significant lack of awareness about sleep disorders in children, with many cases remaining unidentified. Several common factors involved in the onset of a sleep disorder include increased medication use, age-related changes in circadian rhythms, environmental changes, lifestyle changes, pre-diagnosed physiological problems, and stress. Among the elderly, the risk of developing sleep-disordered breathing, periodic limb movements, restless legs syndrome, REM sleep behavior disorders, insomnia, and circadian rhythm disturbances is especially high.
Causes
A systematic review found that traumatic childhood experiences, such as family conflict or sexual trauma, significantly increase the risk of several sleep disorders in adulthood, including sleep apnea, narcolepsy, and insomnia.
An evidence-based synopsis suggests that idiopathic REM sleep behavior disorder (iRBD) may have a hereditary component. A total of 632 participants, half with iRBD and half without, completed self-report questionnaires. The study results suggest that people with iRBD are more likely to report having a first-degree relative with the same sleep disorder than people of the same age and sex who do not have the disorder. More research is needed to further understand the hereditary nature of sleep disorders.
A population susceptible to the development of sleep disorders includes people who have experienced a traumatic brain injury (TBI). Due to the significant research focus on this issue, a systematic review was conducted to synthesize the findings. The results indicate that individuals who have experienced a TBI are most disproportionately at risk for developing narcolepsy, obstructive sleep apnea, excessive daytime sleepiness, and insomnia.
Sleep disorders and neurodegenerative diseases
Neurodegenerative diseases are often associated with sleep disorders, particularly when characterized by the abnormal accumulation of alpha-synuclein, as seen in multiple system atrophy (MSA), Parkinson's disease (PD), and Lewy body disease (LBD). For example, individuals diagnosed with PD frequently experience various sleep issues, such as insomnia (affecting approximately 70% of the PD population), hypersomnia (over 50%), and REM sleep behavior disorder (RBD) (around 40%), which is linked to increased motor symptoms. Moreover, RBD has been identified as a significant precursor for the future development of these neurodegenerative diseases over several years, presenting a promising opportunity for improving treatments.
Neurodegenerative conditions are commonly related to structural brain impairments, which may disrupt sleep and wakefulness, circadian rhythm, and motor or non-motor functioning. Conversely, sleep disturbances are often linked to worsening patients' cognitive functioning, emotional state, and quality of life. Additionally, these abnormal behavioral symptoms can place a significant burden on their relatives and caregivers. The limited research in this area, coupled with increasing life expectancy, highlights the need for a deeper understanding of the relationship between sleep disorders and neurodegenerative diseases.
Sleep disturbances and Alzheimer's disease
Sleep disturbances have also been observed in Alzheimer's disease (AD), affecting about 45% of its population. When based on caregiver reports, this percentage increases to about 70%. As in the PD population, insomnia and hypersomnia are frequently recognized in AD patients. These disturbances have been associated with the accumulation of beta-amyloid, circadian rhythm sleep disorders (CRSD), and melatonin alteration. Additionally, changes in sleep architecture are observed in AD. Although sleep architecture seems to naturally change with age, its development appears aggravated in AD patients. Slow-wave sleep (SWS) potentially decreases (and is sometimes absent), spindles and the length of time spent in REM sleep are also reduced, while its latency increases. Poor sleep onset in AD has been associated with dream-related hallucinations, increased restlessness, wandering, and agitation related to sundowning—a typical chronobiological phenomenon in the disease.
In Alzheimer's disease, in addition to cognitive decline and memory impairment, there are also significant sleep disturbances with modified sleep architecture. These disturbances may consist of sleep fragmentation, reduced sleep duration, insomnia, increased daytime napping, decreased quantity of some sleep stages, and a growing resemblance between some sleep stages (N1 and N2). More than 65% of people with Alzheimer's disease experience this type of sleep disturbance.
One factor that could explain this change in sleep architecture is a disruption in the circadian rhythm, which regulates sleep. This disruption can lead to sleep disturbances. Some studies show that people with Alzheimer's disease have a delayed circadian rhythm, whereas in normal aging, an advanced circadian rhythm is present.
In addition to these psychological symptoms, there are two main neurological symptoms of Alzheimer's disease. The first is the accumulation of beta-amyloid waste, forming aggregate "plaques". The second is the accumulation of tau protein.
It has been shown that the sleep-wake cycle influences the beta-amyloid burden, a central component found in Alzheimer's disease (AD). As individuals awaken, the production of beta-amyloid protein becomes more consistent compared to its production during sleep. This phenomenon can be explained by two factors. First, metabolic activity is higher during waking hours, resulting in greater secretion of beta-amyloid protein. Second, oxidative stress increases during waking hours, which leads to greater beta-amyloid production.
On the other hand, it is during sleep that beta-amyloid residues are degraded to prevent plaque formation. The glymphatic system is responsible for this through the phenomenon of glymphatic clearance. Thus, during wakefulness, the AB burden is greater because the metabolic activity and oxidative stress are higher, and there is no protein degradation by the glymphatic clearance. During sleep, the burden is reduced as there is less metabolic activity and oxidative stress (in addition to the glymphatic clearance that occurs).
Glymphatic clearance occurs during the NREM SWS sleep. This sleep stage decreases in normal aging, resulting in less glymphatic clearance and increased AB burden that will form AB plaques. Therefore, sleep disturbances in individuals with AD will amplify this phenomenon.
The decrease in the quantity and quality of NREM SWS, as well as the disturbances of sleep, will therefore increase the formation of beta-amyloid plaques. This initially occurs in the hippocampus, a brain structure integral to long-term memory formation. Hippocampal cell death occurs, which contributes to the diminished memory performance and cognitive decline found in AD.
Although the causal relationship is unclear, the development of AD correlates with the development of prominent sleep disorders. In the same way, sleep disorders exacerbate disease progression, forming a positive feedback relationship. As a result, sleep disturbances are no longer only a symptom of AD; the relationship between sleep disturbances and AD is bidirectional.
At the same time, it has been shown that memory consolidation in long-term memory (which depends on the hippocampus) occurs during NREM sleep. This indicates that a decrease in the NREM sleep will result in less consolidation, resulting in poorer memory performances in hippocampal-dependent long-term memory. This drop in performance is one of the central symptoms of AD.
Recent studies have also linked sleep disturbances, neurogenesis and AD. The subgranular zone and the subventricular zone continue to produce new neurons in adult brains. These new cells are then incorporated into neuronal circuits; the subgranular zone lies within the hippocampus. These new cells contribute to learning and memory, playing an essential role in hippocampal-dependent memory.
However, recent studies have shown that several factors can interrupt neurogenesis, including stress and prolonged sleep deprivation (more than one day). The sleep disturbances encountered in AD could therefore suppress neurogenesis—and thus impair hippocampal functions. This would contribute to diminished memory performances and the progression of AD, and the progression of AD would aggravate sleep disturbances.
Changes in sleep architecture found in patients with AD occur during the preclinical phase of AD. These changes could be used to detect those most at risk of developing AD. However, this is still only theoretical.
While the exact mechanisms and the causal relationship between sleep disturbances and AD remains unclear, these findings already provide a better understanding and offer possibilities to improve targeting of at-risk populations—and the implementation of treatments to curb the cognitive decline of AD patients.
Sleep disorder symptoms in psychiatric illnesses
Schizophrenia
In individuals with psychiatric illnesses sleep disorders may include a variety of clinical symptoms, including but not limited to: excessive daytime sleepiness, difficulty falling asleep, difficulty staying asleep, nightmares, sleep talking, sleepwalking, and poor sleep quality. Sleep disturbances - insomnia, hypersomnia and delayed sleep-phase disorder - are quite prevalent in severe mental illnesses such as psychotic disorders. In those with schizophrenia, sleep disorders contribute to cognitive deficits in learning and memory. Sleep disturbances often occur before the onset of psychosis.
Sleep deprivation can also produce hallucinations, delusions and depression. A 2019 study investigated the three above-mentioned sleep disturbances in schizophrenia-spectrum (SCZ) and bipolar (BP) disorders in 617 SCZ individuals, 440 BP individuals, and 173 healthy controls (HC). Sleep disturbances were identified using the Inventory for Depressive Symptoms - clinician rated scale (IDS-C). Results suggested that at least one type of sleep disturbance was reported in 78% of the SCZ population, in 69% of individuals with BD, and in 39% of healthy controls. The SCZ group reported the greatest number of sleep disturbances compared to the BD and HC groups; specifically, hypersomnia was more frequent among individuals with SCZ, and delayed sleep phase disorder was three times more common in the SCZ group compared to the BD group. Insomnia was the most frequently reported sleep disturbance across all three groups.
Bipolar disorder
One of the main behavioral symptoms of bipolar disorder is abnormal sleep. Studies have suggested that 23-78% of individuals with bipolar disorders consistently report symptoms of excessive time spent sleeping, or hypersomnia. The pathogenesis of bipolar disorder, including the higher risk of suicidal ideation, could possibly be linked to circadian rhythm variability, and sleep disturbances are a good predictor of mood swings. The most common sleep-related symptom of bipolar disorder is insomnia, in addition to hypersomnia, nightmares, poor sleep quality, OSA, extreme daytime sleepiness, etc. Moreover, animal models have shown that sleep debt can induce episodes of bipolar mania in laboratory mice, but these models are still limited in their potential to explain bipolar disease in humans with all its multifaceted symptoms, including those related to sleep disturbances.
Major depressive disorder (MDD)
Sleep disturbances (insomnia or hypersomnia) are not a necessary diagnostic criterion—but one of the most frequent symptoms of individuals with major depressive disorder (MDD). Among individuals with MDD, insomnia and hypersomnia have prevalence estimates of 88% and 27%, respectively, whereas individuals with insomnia have a threefold increased risk of developing MDD. Depressed mood and sleep efficiency strongly co-vary, and while sleep regulation problems may precede depressive episodes, such depressive episodes may also precipitate sleep deprivation. Fatigue, as well as sleep disturbances such as irregular and excessive sleepiness, are linked to symptoms of depression. Recent research has even pointed to sleep problems and fatigues as potential driving forces bridging MDD symptoms to those of co-occurring generalized anxiety disorder.
Treatment
Treatments for sleep disorders generally can be grouped into four categories:
Behavioral and psychotherapeutic treatment
Rehabilitation and management
Medication
Other somatic treatment
None of these general approaches are sufficient for all patients with sleep disorders. Rather, the choice of a specific treatment depends on the patient's diagnosis, medical and psychiatric history, and preferences, as well as the expertise of the treating clinician. Often, behavioral/psychotherapeutic and pharmacological approaches may be compatible, and can effectively be combined to maximize therapeutic benefits.
Management of sleep disturbances that are secondary to mental, medical, or substance abuse disorders should focus on the underlying conditions. Medications and somatic treatments may provide the most rapid symptomatic relief from certain disorders, such as narcolepsy, which is best treated with prescription drugs such as modafinil. Others, such as chronic and primary insomnia, may be more amenable to behavioral interventions—with more durable results.
Chronic sleep disorders in childhood, which affect some 70% of children with developmental or psychological disorders, are under-reported and under-treated. Sleep-phase disruption is also common among adolescents, whose school schedules are often incompatible with their natural circadian rhythm. Effective treatment begins with careful diagnosis using sleep diaries and perhaps sleep studies. Modifications in sleep hygiene may resolve the problem, but medical treatment is often warranted.
Special equipment may be required for treatment of several disorders such as obstructive apnea, circadian rhythm disorders and bruxism. In severe cases, it may be necessary for individuals to accept living with the disorder, however well managed.
Some sleep disorders have been found to compromise glucose metabolism.
Allergy treatment
Histamine plays a role in wakefulness in the brain. An allergic reaction overproduces histamine, causing wakefulness and inhibiting sleep. Sleep problems are common in people with allergic rhinitis. A study from the N.I.H. found that sleep is dramatically impaired by allergic symptoms, and that the degree of impairment is related to the severity of those symptoms. Treatment of allergies has also been shown to help sleep apnea.
Acupuncture
A review of the evidence in 2012 concluded that current research is not rigorous enough to make recommendations around the use of acupuncture for insomnia. The pooled results of two trials on acupuncture showed a moderate likelihood that there may be some improvement to sleep quality for individuals with insomnia. This form of treatment for sleep disorders is generally studied in adults, rather than children. Further research would be needed to study the effects of acupuncture on sleep disorders in children.
Hypnosis
Research suggests that hypnosis may be helpful in alleviating some types and manifestations of sleep disorders in some patients. "Acute and chronic insomnia often respond to relaxation and hypnotherapy approaches, along with sleep hygiene instructions." Hypnotherapy has also helped with nightmares and sleep terrors. There are several reports of successful use of hypnotherapy for parasomnias specifically for head and body rocking, bedwetting and sleepwalking.
Hypnotherapy has been studied in the treatment of sleep disorders in both adults and children.
Music therapy
Although more research should be done to increase the reliability of this method of treatment, research suggests that music therapy can improve sleep quality in acute and chronic sleep disorders. In one particular study, participants (18 years or older) who had experienced acute or chronic sleep disorders were put in a randomized controlled trial, and their sleep efficiency, in the form of overall time asleep, was observed. In order to assess sleep quality, researchers used subjective measures (i.e. questionnaires) and objective measures (i.e. polysomnography). The results of the study suggest that music therapy did improve sleep quality in subjects with acute or chronic sleep disorders, though only when tested subjectively. Although these results are not fully conclusive and more research should be conducted, it still provides evidence that music therapy can be an effective treatment for sleep disorders.
In another study specifically looking to help people with insomnia, similar results were seen. The participants that listened to music experienced better sleep quality than those who did not listen to music. Listening to slower pace music before bed can help decrease the heart rate, making it easier to transition into sleep. Studies have indicated that music helps induce a state of relaxation that shifts an individual's internal clock towards the sleep cycle. This is said to have an effect on children and adults with various cases of sleep disorders. Music is most effective before bed once the brain has been conditioned to it, helping to achieve sleep much faster.
Melatonin
Research suggests that melatonin is useful in helping people fall asleep faster (decreased sleep latency), stay asleep longer, and experience improved sleep quality. To test this, a study was conducted that compared subjects who had taken melatonin to subjects with primary sleep disorders who had taken a placebo. Researchers assessed sleep onset latency, total minutes slept, and overall sleep quality in the melatonin and placebo groups to note the differences. In the end, researchers found that melatonin decreased sleep onset latency and increased total sleep time but had an insignificant and inconclusive impact on the quality of sleep compared to the placebo group.
Sleep medicine
Due to rapidly increasing knowledge and understanding of sleep in the 20th century, including the discovery of REM sleep in the 1950s and circadian rhythm disorders in the 70s and 80s, the medical importance of sleep was recognized. By the 1970s in the US, clinics and laboratories devoted to the study of sleep and sleep disorders had been founded, and a need for standards arose. The medical community began paying more attention to primary sleep disorders, such as sleep apnea, as well as the role and quality of sleep in other conditions.
Specialists in sleep medicine were originally and continue to be certified by the American Board of Sleep Medicine. Those passing the Sleep Medicine Specialty Exam received the designation "diplomate of the ABSM". Sleep medicine is now a recognized subspecialty within internal medicine, family medicine, pediatrics, otolaryngology, psychiatry and neurology in the United States. Certification in Sleep medicine shows that the specialist:
Competence in sleep medicine requires an understanding of a myriad of very diverse disorders, many of which present with similar symptoms such as excessive daytime sleepiness, which, in the absence of volitional sleep deprivation, "is almost inevitably caused by an identifiable and treatable sleep disorder", such as sleep apnea, narcolepsy, idiopathic hypersomnia, Kleine–Levin syndrome, menstrual-related hypersomnia, idiopathic recurrent stupor, or circadian rhythm disturbances. Another common complaint is insomnia, a set of symptoms which can have a great many different causes, physical and mental. Management in the varying situations differs greatly and cannot be undertaken without a correct diagnosis.
Sleep dentistry (bruxism, snoring and sleep apnea), while not recognized as one of the nine dental specialties, qualifies for board-certification by the American Board of Dental Sleep Medicine (ABDSM). The qualified dentists collaborate with sleep physicians at accredited sleep centers, and can provide oral appliance therapy and upper airway surgery to treat or manage sleep-related breathing disorders. The resulting diplomate status is recognized by the American Academy of Sleep Medicine (AASM), and these dentists are organized in the Academy of Dental Sleep Medicine (USA).
Occupational therapy is an area of medicine that can also address a diagnosis of sleep disorder, as rest and sleep is listed in the Occupational Therapy Practice Framework (OTPF) as its own occupation of daily living. Rest and sleep are described as restorative in order to support engagement in other occupational therapy occupations. In the OTPF, the occupation of rest and sleep is broken down into rest, sleep preparation, and sleep participation. Occupational therapists have been shown to help improve restorative sleep through the use of assistive devices/equipment, cognitive behavioral therapy for Insomnia, therapeutic activities, and lifestyle interventions.
In the UK, knowledge of sleep medicine and possibilities for diagnosis and treatment seem to lag. The Imperial College Healthcare shows attention to obstructive sleep apnea syndrome (OSA) and very few other sleep disorders. Some NHS trusts have specialist clinics for respiratory and neurological sleep medicine.
Epidemiology
Children and young adults
According to one meta-analysis of sleep disorders in children, confusional arousals and sleepwalking are the two most common sleep disorders among children. An estimated 17.3% of kids between 3 and 13 years old experience confusional arousals. About 17% of children sleepwalk, with the disorder being more common among boys than girls; the peak ages of sleepwalking are from 8 to 12 years old.
A different systematic review reports a wide range of prevalence rates of sleep bruxism for children. Parasomnias like sleepwalking and sleep talking typically occur during the first part of an individual's sleep cycle, the first period of slow-wave sleep. During this period of the sleep cycle, the mind and body slow down, causing one to feel drowsy and relaxed. At this stage it is easiest to wake up; therefore, many children do not remember what happened during this time.
Nightmares are also considered a parasomnia among children, who typically remember what took place during the nightmare. However, nightmares only occur during the last stage of sleep - Rapid Eye Movement (REM) sleep. REM is the deepest stage of sleep, it is named for the host of neurological and physiological responses an individual can display during this period of the sleep cycle which are similar to being awake.
Between 15.29% and 38.6% of preschoolers grind their teeth at least one night a week. All but one of the included studies reported decreasing bruxism prevalence with increasing age, as well as a higher prevalence among boys than girls.
Another systematic review noted that 7-16% of young adults have delayed sleep phase disorder. This disorder reaches peak prevalence when people are in their 20s. Between 20% and 26% of adolescents report a sleep onset latency of greater than 30 minutes. Also, 7-36% have difficulty initiating sleep. Asian teens tend to have a higher prevalence of all of these adverse sleep outcomes than their North American and European counterparts.
By adulthood, parasomnias can normally be resolved due to a person's growth; however, 4% of people have recurring symptoms.
Effects of untreated sleep disorders
Children and young adults who do not get enough sleep due to sleep disorders are also prone to other health problems, such as obesity and physical issues that can interfere with everyday life. It is recommended that children and young adults get the hours of sleep recommended by the CDC, as adequate sleep supports mental health, physical health, and more.
Insomnia
Insomnia is a prevalent form of sleep deprivation. Individuals with insomnia may have problems falling asleep, staying asleep, or a combination of both, resulting in hyposomnia, i.e. insufficient quantity and poor quality of sleep.
Combining results from 17 studies on insomnia in China, a pooled prevalence of 15.0% is reported for the country. This result is consistent among other East Asian countries; however, it is considerably lower than in a number of Western countries (50.5% in Poland, 37.2% in France and Italy, 27.1% in the USA). Men and women residing in China experience insomnia at similar rates.
A separate meta-analysis focusing on this sleeping disorder in the elderly mentions that those with more than one physical or psychiatric malady experience it at a 60% higher rate than those with one condition or less. It also notes a higher prevalence of insomnia in women over the age of 50 than their male counterparts.
A study resulting from a collaboration between Massachusetts General Hospital and Merck describes the development of an algorithm to identify patients with sleep disorders using electronic medical records. The algorithm, which incorporated a combination of structured and unstructured variables, identified more than 36,000 individuals with physician-documented insomnia.
Insomnia can begin at a mild level, but about 40% of people who struggle with insomnia develop worse symptoms. Treatments that can help with insomnia include medication, planning out a sleep schedule, limiting caffeine intake, and cognitive behavioral therapy.
Obstructive sleep apnea
Obstructive sleep apnea (OSA) affects around 4% of men and 2% of women in the United States. In general, this disorder is more prevalent among men. However, this difference tends to diminish with age. Women experience the highest risk for OSA during pregnancy, and tend to report experiencing depression and insomnia in conjunction with obstructive sleep apnea.
In a meta-analysis of the various Asian countries, India and China present the highest prevalence of the disorder. Specifically, about 13.7% of the Indian population and 7% of Hong Kong's population is estimated to have OSA. The two groups in the study experience daytime OSA symptoms such as difficulties concentrating, mood swings, or high blood pressure, at similar rates (prevalence of 3.5% and 3.57%, respectively).
Obesity and sleep apnea
The worldwide incidence of obstructive sleep apnea (OSA) is on the rise, largely due to the increasing prevalence of obesity in society. In individuals who are obese, excess fat deposits in the upper respiratory tract can lead to breathing difficulties during sleep, giving rise to OSA. There is a strong connection between obesity and OSA, making it essential to screen obese individuals for OSA and related disorders. Moreover, both obesity and OSA patients are at higher risk of developing metabolic syndrome. Implementing dietary control in obese individuals can have a positive impact on sleep problems and can help alleviate associated issues such as depression, anxiety, and insomnia. Obesity can disturb sleep patterns, contributing to OSA; it is a risk factor for OSA because fat accumulating around the muscles of the upper airway can obstruct breathing during sleep. In turn, OSA can exacerbate obesity by causing sleepiness throughout the day, leading to reduced physical activity and an inactive lifestyle.
Sleep paralysis
A systematic review states 7.6% of the general population experiences sleep paralysis at least once in their lifetime. Its prevalence among men is 15.9%, while 18.9% of women experience it.
When considering specific populations, 28.3% of students and 31.9% of psychiatric patients have experienced this phenomenon at least once in their lifetime. Of those psychiatric patients, 34.6% have panic disorder. Sleep paralysis in students is slightly more prevalent for those of Asian descent (39.9%) than other ethnicities (Hispanic: 34.5%, African descent: 31.4%, Caucasian 30.8%).
Restless legs syndrome
According to one meta-analysis, the average prevalence rate for North America and Western Europe is estimated to be 14.5±8.0%. Specifically in the United States, the prevalence of restless legs syndrome is estimated to be between 5% and 15.7% when using strict diagnostic criteria. RLS is over 35% more prevalent in American women than in their male counterparts. Restless legs syndrome (RLS) is a sensorimotor disorder characterized by discomfort in the lower limbs; typically, symptoms worsen in the evening, improve with movement, and are exacerbated by rest.
List of conditions
There are numerous sleep disorders. The following list includes some of them:
Bruxism, involuntary grinding or clenching of the teeth while sleeping
Catathrenia, nocturnal groaning during prolonged exhalation
Delayed sleep phase disorder (DSPD), inability to awaken and fall asleep at socially acceptable times but no problem with sleep maintenance, a disorder of circadian rhythms. Other such disorders are advanced sleep phase disorder (ASPD), non-24-hour sleep–wake disorder (non-24) in the sighted or in the blind, and irregular sleep wake rhythm, all much less common than DSPD, as well as the situational shift work sleep disorder.
Fatal familial insomnia, an extremely rare and universally-fatal prion disease that causes a complete cessation of sleep.
Hypopnea syndrome, abnormally shallow breathing or slow respiratory rate while sleeping
Idiopathic hypersomnia, a primary, neurologic cause of long-sleeping, sharing many similarities with narcolepsy
Insomnia disorder (primary insomnia), chronic difficulty in falling asleep or maintaining sleep when no other cause is found for these symptoms. Insomnia can also be comorbid with or secondary to other disorders.
Kleine–Levin syndrome, a rare disorder characterized by persistent episodic hypersomnia and cognitive or mood changes
Narcolepsy, characterized by excessive daytime sleepiness (EDS) and so-called "sleep attacks", relatively sudden-onset, irresistible urges to sleep, which may interfere with occupational and social commitments. About 70% of those who have narcolepsy also have cataplexy, a sudden weakness in the motor muscles that can result in collapse to the floor while retaining full conscious awareness.
Night terror, Pavor nocturnus, sleep terror disorder, an abrupt awakening from sleep with behavior consistent with terror
Nocturia, a frequent need to get up and urinate at night. It differs from enuresis, or bed-wetting, in which the person does not arouse from sleep, but the bladder nevertheless empties.
Parasomnias, disruptive sleep-related events involving inappropriate actions during sleep, for example sleepwalking, night-terrors and catathrenia.
Periodic limb movements in sleep (PLMS), sudden involuntary movement of the arms or legs during sleep. In the absence of other sleep disorders, PLMS may cause sleep disruption and impair sleep quality, leading to periodic limb movement disorder (PLMD).
Other limb movements in sleep, including hypnic jerks and nocturnal myoclonus.
Rapid eye movement sleep behavior disorder (RBD), acting out violent or dramatic dreams while in REM sleep, sometimes injuring bed partner or self (REM sleep disorder or RSD)
Restless legs syndrome (RLS), an irresistible urge to move legs.
Shift work sleep disorder (SWSD), a situational circadian rhythm sleep disorder. (Jet lag was previously included as a situational circadian rhythm sleep disorder, but it does not appear in DSM-5, see Diagnostic and Statistical Manual of Mental Disorders for more).
Sleep apnea, obstructive sleep apnea, obstruction of the airway during sleep, causing lack of sufficient deep sleep, often accompanied by snoring. Other forms of sleep apnea are less common. Obstructive sleep apnea (OSA) is a medical disorder that is caused by repetitive collapse of the upper airway (back of the throat) during sleep. For the purposes of sleep studies, episodes of full upper airway collapse for at least ten seconds are called apneas.
Sleep paralysis, characterized by temporary paralysis of the body shortly before or after sleep. Sleep paralysis may be accompanied by visual, auditory or tactile hallucinations. It is not a disorder unless severe, and is often seen as part of narcolepsy.
Sleepwalking or somnambulism, engaging in activities normally associated with wakefulness (such as eating or dressing), which may include walking, without the conscious knowledge of the subject.
Somniphobia, one cause of sleep deprivation, a dread or fear of falling asleep or going to bed. Signs of the illness include anxiety and panic attacks before and during attempts to sleep.
Types
Dyssomnias – A broad category of sleep disorders characterized by either hypersomnia or insomnia. The three major subcategories include intrinsic (i.e., arising from within the body), extrinsic (secondary to environmental conditions or various pathologic conditions), and disturbances of circadian rhythm.
Insomnia: Insomnia may be primary or it may be comorbid with or secondary to another disorder such as a mood disorder (i.e., emotional stress, anxiety, depression) or underlying health condition (i.e., asthma, diabetes, heart disease, pregnancy or neurological conditions).
Primary hypersomnia: Hypersomnia of central or brain origin
Narcolepsy: A chronic neurological disorder (or dyssomnia), which is caused by the brain's inability to control sleep and wakefulness.
Idiopathic hypersomnia: A chronic neurological disease similar to narcolepsy, in which there is an increased amount of fatigue and sleep during the day. Patients who have idiopathic hypersomnia cannot obtain a healthy amount of sleep for a regular day of activities. This hinders the patients' ability to perform well, and patients have to deal with this for the rest of their lives.
Recurrent hypersomnia, including Kleine–Levin syndrome
Post traumatic hypersomnia
Menstrual-related hypersomnia
Sleep disordered breathing (SDB), including (non-exhaustive):
Several types of sleep apnea
Snoring
Upper airway resistance syndrome
Restless leg syndrome
Periodic limb movement disorder
Circadian rhythm sleep disorders
Delayed sleep phase disorder
Advanced sleep phase disorder
Non-24-hour sleep–wake disorder
Parasomnias – A category of sleep disorders that involve abnormal and unnatural movements, behaviors, emotions, perceptions, and dreams in connection with sleep.
Bedwetting or sleep enuresis
Bruxism (Tooth-grinding)
Catathrenia – nocturnal groaning
Exploding head syndrome – Waking up in the night hearing loud noises.
Sleep terror (or Pavor nocturnus) – Characterized by a sudden arousal from deep sleep with a scream or cry, accompanied by some behavioral manifestations of intense fear.
REM sleep behavior disorder
Sleepwalking (or somnambulism)
Sleep talking (or somniloquy)
Sleep sex (or sexsomnia)
Medical or psychiatric conditions that may produce sleep disorders
22q11.2 deletion syndrome
Alcoholism
Mood disorders
Depression
Anxiety disorder
Nightmare disorder
Panic
Dissociative identity disorder
Psychosis (such as Schizophrenia)
Sleeping sickness – a parasitic disease which can be transmitted by the Tsetse fly.
Jet lag disorder – Jet lag disorder is a type of circadian rhythm sleep disorder that results from rapid travel across multiple time zones. Individuals experiencing jet lag may encounter symptoms such as excessive sleepiness, fatigue, insomnia, irritability, and gastrointestinal disturbances upon reaching their destination. These symptoms arise due to the mismatch between the body's circadian rhythm, synchronized with the departure location, and the new sleep/wake cycle needed at the destination.
See also
References
External links
Sleep Problems – information leaflet from mental health charity The Royal College of Psychiatrists
WebMD Sleep Disorders Health Center | Sleep disorder | [
"Biology"
] | 7,407 | [
"Behavior",
"Sleep",
"Sleep disorders"
] |
46,980 | https://en.wikipedia.org/wiki/Pollen | Pollen is a powdery substance produced by most types of flowers of seed plants for the purpose of sexual reproduction. It consists of pollen grains (highly reduced microgametophytes), which produce male gametes (sperm cells).
Pollen grains have a hard coat made of sporopollenin that protects the gametophytes during the process of their movement from the stamens to the pistil of flowering plants, or from the male cone to the female cone of gymnosperms. If pollen lands on a compatible pistil or female cone, it germinates, producing a pollen tube that transfers the sperm to the ovule containing the female gametophyte. Individual pollen grains are small enough to require magnification to see detail. The study of pollen is called palynology and is highly useful in paleoecology, paleontology, archaeology, and forensics.
Pollen in plants is used for transferring haploid male genetic material from the anther of a single flower to the stigma of another in cross-pollination. In a case of self-pollination, this process takes place from the anther of a flower to the stigma of the same flower.
Pollen is infrequently used as food and food supplement. Because of agricultural practices, it is often contaminated by agricultural pesticides.
Structure and formation
Pollen itself is not the male gamete. It is a gametophyte, something that could be considered an entire organism, which then produces the male gamete. Each pollen grain contains vegetative (non-reproductive) cells (only a single cell in most flowering plants but several in other seed plants) and a generative (reproductive) cell. In flowering plants the vegetative tube cell produces the pollen tube, and the generative cell divides to form the two sperm nuclei.
Pollen grains come in a wide variety of shapes, sizes, and surface markings characteristic of the species (see electron micrograph, right). Pollen grains of pines, firs, and spruces are winged. The smallest pollen grain, that of the forget-me-not (Myosotis spp.), is 2.5–5 μm (0.005 mm) in diameter. Corn pollen grains are large, about 90–100 μm. Most grass pollen is around 20–25 μm. Some pollen grains are based on geodesic polyhedra like a soccer ball.
Formation
Pollen is produced in the microsporangia in the male cone of a conifer or other gymnosperm or in the anthers of an angiosperm flower.
In angiosperms, during flower development the anther is composed of a mass of cells that appear undifferentiated, except for a partially differentiated dermis. As the flower develops, fertile sporogenous cells, the archespore, form within the anther. The sporogenous cells are surrounded by layers of sterile cells that grow into the wall of the pollen sac. Some of the cells grow into nutritive cells that supply nutrition for the microspores that form by meiotic division from the sporogenous cells. The archespore cells divide by mitosis and differentiate to form pollen mother cells (microsporocyte, meiocyte).
In a process called microsporogenesis, four haploid microspores are produced from each diploid pollen mother cell, after meiotic division. After the formation of the four microspores, which are contained by callose walls, the development of the pollen grain walls begins. The callose wall is broken down by an enzyme called callase and the freed pollen grains grow in size and develop their characteristic shape and form a resistant outer wall called the exine and an inner wall called the intine. The exine is what is preserved in the fossil record.
Two basic types of microsporogenesis are recognised, simultaneous and successive. In simultaneous microsporogenesis meiotic steps I and II are completed before cytokinesis, whereas in successive microsporogenesis cytokinesis follows. While there may be a continuum with intermediate forms, the type of microsporogenesis has systematic significance. The predominant form amongst the monocots is successive, but there are important exceptions.
During microgametogenesis, the unicellular microspores undergo mitosis and develop into mature microgametophytes containing the gametes. In some flowering plants, germination of the pollen grain may begin even before it leaves the microsporangium, with the generative cell forming the two sperm cells.
Structure
Except in the case of some submerged aquatic plants, the mature pollen grain has a double wall. The vegetative and generative cells are surrounded by a thin delicate wall of unaltered cellulose called the endospore or intine, and a tough resistant outer cuticularized wall composed largely of sporopollenin called the exospore or exine. The exine often bears spines or warts, or is variously sculptured, and the character of the markings is often of value for identifying genus, species, or even cultivar or individual.
The spines may be less than a micron in length (spinulus, plural spinuli) referred to as spinulose (scabrate), or longer than a micron (echina, echinae) referred to as echinate. Various terms also describe the sculpturing such as reticulate, a net like appearance consisting of elements (murus, muri) separated from each other by a lumen (plural lumina). These reticulations may also be referred to as brochi.
The pollen wall protects the sperm while the pollen grain is moving from the anther to the stigma; it protects the vital genetic material from drying out and solar radiation. The pollen grain surface is covered with waxes and proteins, which are held in place by structures called sculpture elements on the surface of the grain. The outer pollen wall, which prevents the pollen grain from shrinking and crushing the genetic material during desiccation, is composed of two layers. These two layers are the tectum and the foot layer, which is just above the intine. The tectum and foot layer are separated by a region called the columella, which is composed of strengthening rods. The outer wall is constructed with a resistant biopolymer called sporopollenin.
Pollen apertures are regions of the pollen wall that may involve exine thinning or a significant reduction in exine thickness. They allow shrinking and swelling of the grain caused by changes in moisture content. The process of shrinking the grain is called harmomegathy. Elongated apertures or furrows in the pollen grain are called colpi (singular: colpus) or sulci (singular: sulcus). Apertures that are more circular are called pores. Colpi, sulci and pores are major features in the identification of classes of pollen. Pollen may be referred to as inaperturate (apertures absent) or aperturate (apertures present).
The aperture may have a lid (operculum), hence is described as operculate. However, the term inaperturate covers a wide range of morphological types, such as functionally inaperturate (cryptoaperturate) and omniaperturate. Inaperturate pollen grains often have thin walls, which facilitates pollen tube germination at any position. Terms such as uniaperturate and triaperturate refer to the number of apertures present (one and three respectively). Spiraperturate refers to one or more apertures being spirally shaped.
The orientation of furrows (relative to the original tetrad of microspores) classifies the pollen as sulcate or colpate. Sulcate pollen has a furrow across the middle of what was the outer face when the pollen grain was in its tetrad. If the pollen has only a single sulcus, it is described as monosulcate, has two sulci, as bisulcate, or more, as polysulcate. Colpate pollen has furrows other than across the middle of the outer faces, and similarly may be described as polycolpate if more than two. Syncolpate pollen grains have two or more colpi that are fused at the ends. Eudicots have pollen with three colpi (tricolpate) or with shapes that are evolutionarily derived from tricolpate pollen. The evolutionary trend in plants has been from monosulcate to polycolpate or polyporate pollen.
Additionally, gymnosperm pollen grains often have air bladders, or vesicles, called sacci. The sacci are not actually balloons, but are sponge-like, and increase the buoyancy of the pollen grain and help keep it aloft in the wind, as most gymnosperms are anemophilous. Pollen can be monosaccate, (containing one saccus) or bisaccate (containing two sacci). Modern pine, spruce, and yellowwood trees all produce saccate pollen.
Pollination
The transfer of pollen grains to the female reproductive structure (pistil in angiosperms) is called pollination. Pollen transfer is frequently portrayed as a sequential process that begins with placement on the vector, moves through travel, and ends with deposition. This transfer can be mediated by the wind, in which case the plant is described as anemophilous (literally wind-loving). Anemophilous plants typically produce great quantities of very lightweight pollen grains, sometimes with air-sacs.
Non-flowering seed plants (e.g., pine trees) are characteristically anemophilous. Anemophilous flowering plants generally have inconspicuous flowers. Entomophilous (literally insect-loving) plants produce pollen that is relatively heavy, sticky and protein-rich, for dispersal by insect pollinators attracted to their flowers. Many insects and some mites are specialized to feed on pollen, and are called palynivores.
In non-flowering seed plants, pollen germinates in the pollen chamber, located beneath the micropyle, underneath the integuments of the ovule. A pollen tube is produced, which grows into the nucellus to provide nutrients for the developing sperm cells. Sperm cells of Pinophyta and Gnetophyta are without flagella, and are carried by the pollen tube, while those of Cycadophyta and Ginkgophyta have many flagella.
When placed on the stigma of a flowering plant, under favorable circumstances, a pollen grain puts forth a pollen tube, which grows down the tissue of the style to the ovary, and makes its way along the placenta, guided by projections or hairs, to the micropyle of an ovule. The nucleus of the tube cell has meanwhile passed into the tube, as does also the generative nucleus, which divides (if it has not already) to form two sperm cells. The sperm cells are carried to their destination in the tip of the pollen tube. Double-strand breaks in DNA that arise during pollen tube growth appear to be efficiently repaired in the generative cell that carries the male genomic information to be passed on to the next plant generation. However, the vegetative cell that is responsible for tube elongation appears to lack this DNA repair capability.
In the fossil record
The sporopollenin outer sheath of pollen grains affords them some resistance to the rigours of the fossilisation process that destroy weaker objects; it is also produced in huge quantities. There is an extensive fossil record of pollen grains, often disassociated from their parent plant. The discipline of palynology is devoted to the study of pollen, which can be used both for biostratigraphy and to gain information about the abundance and variety of plants alive — which can itself yield important information about paleoclimates. Also, pollen analysis has been widely used for reconstructing past changes in vegetation and their associated drivers.
Pollen is first found in the fossil record in the late Devonian period, but at that time it is indistinguishable from spores. It increases in abundance until the present day.
Allergy to pollen
Nasal allergy to pollen is called pollinosis, and allergy specifically to grass pollen is called hay fever. Generally, pollens that cause allergies are those of anemophilous plants (pollen is dispersed by air currents.) Such plants produce large quantities of lightweight pollen (because wind dispersal is random and the likelihood of one pollen grain landing on another flower is small), which can be carried for great distances and are easily inhaled, bringing it into contact with the sensitive nasal passages.
Pollen allergies are common in polar and temperate climate zones, where production of pollen is seasonal. In the tropics pollen production varies less by the season, and allergic reactions less.
In northern Europe, common pollens for allergies are those of birch and alder, and in late summer wormwood and different forms of hay. Grass pollen is also associated with asthma exacerbations in some people, a phenomenon termed thunderstorm asthma.
In the US, people often mistakenly blame the conspicuous goldenrod flower for allergies. Since this plant is entomophilous (its pollen is dispersed by animals), its heavy, sticky pollen does not become independently airborne. Most late summer and fall pollen allergies are probably caused by ragweed, a widespread anemophilous plant.
Arizona was once regarded as a haven for people with pollen allergies, although several ragweed species grow in the desert. However, as suburbs grew and people began establishing irrigated lawns and gardens, more irritating species of ragweed gained a foothold and Arizona lost its claim of freedom from hay fever.
Anemophilous spring blooming plants such as oak, birch, hickory, pecan, and early summer grasses may also induce pollen allergies. Most cultivated plants with showy flowers are entomophilous and do not cause pollen allergies.
Symptoms of pollen allergy include sneezing, itchy, or runny nose, nasal congestion, red, itchy, and watery eyes. Substances, including pollen, that cause allergies can trigger asthma. A study found a 54% increased chance of asthma attacks when exposed to pollen.
The number of people in the United States affected by hay fever is between 20 and 40 million, including around 6.1 million children and such allergy has proven to be the most frequent allergic response in the nation. Hay fever affects about 20% of Canadians and the prevalence is increasing. There are certain evidential suggestions pointing out hay fever and similar allergies to be of hereditary origin. Individuals who suffer from eczema or are asthmatic tend to be more susceptible to developing long-term hay fever.
Since 1990, pollen seasons have gotten longer and more pollen-filled, and climate change is responsible, according to a new study. The researchers attributed roughly half of the lengthening pollen seasons and 8% of the trend in pollen concentrations to climate changes driven by human activity.
In Denmark, decades of rising temperatures cause pollen to appear earlier and in greater amounts, exacerbated by the introduction of new species such as ragweed.
The most efficient way to handle a pollen allergy is by preventing contact with the material. Individuals carrying the ailment may at first believe that they have a simple summer cold, but hay fever becomes more evident when the apparent cold does not disappear. The confirmation of hay fever can be obtained after examination by a general physician.
Treatment
Antihistamines are effective at treating mild cases of pollinosis; this type of non-prescribed drugs includes loratadine, cetirizine and chlorpheniramine. They do not prevent the discharge of histamine, but it has been proven that they do prevent a part of the chain reaction activated by this biogenic amine, which considerably lowers hay fever symptoms.
Decongestants can be administered in different ways such as tablets and nasal sprays.
Allergy immunotherapy (AIT) treatment involves administering doses of allergens to accustom the body to pollen, thereby inducing specific long-term tolerance. Allergy immunotherapy can be administered orally (as sublingual tablets or sublingual drops), or by injections under the skin (subcutaneous). Discovered by Leonard Noon and John Freeman in 1911, allergy immunotherapy represents the only causative treatment for respiratory allergies.
Nutrition
Most major classes of predatory and parasitic arthropods contain species that eat pollen, despite the common perception that bees are the primary pollen-consuming arthropod group. Many Hymenoptera other than bees consume pollen as adults, though only a small number feed on pollen as larvae (including some ant larvae). Spiders are normally considered carnivores but pollen is an important source of food for several species, particularly for spiderlings, which catch pollen on their webs. It is not clear how spiderlings manage to eat pollen however, since their mouths are not large enough to consume pollen grains. Some predatory mites also feed on pollen, with some species being able to subsist solely on pollen, such as Euseius tularensis, which feeds on the pollen of dozens of plant species. Members of some beetle families such as Mordellidae and Melyridae feed almost exclusively on pollen as adults, while various lineages within larger families such as Curculionidae, Chrysomelidae, Cerambycidae, and Scarabaeidae are pollen specialists even though most members of their families are not (e.g., only 36 of 40,000 species of ground beetles, which are typically predatory, have been shown to eat pollen—but this is thought to be a severe underestimate as the feeding habits are only known for 1,000 species). Similarly, Ladybird beetles mainly eat insects, but many species also eat pollen, as either part or all of their diet. Hemiptera are mostly herbivores or omnivores but pollen feeding is known (and has only been well studied in the Anthocoridae). Many adult flies, especially Syrphidae, feed on pollen, and three UK syrphid species feed strictly on pollen (syrphids, like all flies, cannot eat pollen directly due to the structure of their mouthparts, but can consume pollen contents that are dissolved in a fluid). Some species of fungus, including Fomes fomentarius, are able to break down grains of pollen as a secondary nutrition source that is particularly high in nitrogen. Pollen may be valuable diet supplement for detritivores, providing them with nutrients needed for growth, development and maturation. It was suggested that obtaining nutrients from pollen, deposited on the forest floor during periods of pollen rains, allows fungi to decompose nutritionally scarce litter.
Some species of Heliconius butterflies consume pollen as adults, which appears to be a valuable nutrient source, and these species are more distasteful to predators than the non-pollen consuming species.
Although bats, butterflies, and hummingbirds are not pollen eaters per se, their consumption of nectar in flowers is an important aspect of the pollination process.
In humans
Bee pollen for human consumption is marketed as a food ingredient and as a dietary supplement. The largest constituent is carbohydrates, with protein content ranging from 7 to 35 percent depending on the plant species collected by bees.
Honey produced by bees from natural sources contains pollen derived p-coumaric acid, an antioxidant and natural bactericide that is also present in a wide variety of plants and plant-derived food products.
The U.S. Food and Drug Administration (FDA) has not found any harmful effects of bee pollen consumption, except for the usual allergies. However, the FDA does not allow bee pollen marketers in the United States to make health claims about their produce, as no scientific basis for these has ever been proven. Furthermore, there are possible dangers not only from allergic reactions but also from contaminants such as pesticides and from fungal and bacterial growth related to poor storage procedures. Manufacturers' claims that pollen collecting helps the bee colonies are also controversial.
Pine pollen is traditionally consumed in Korea as an ingredient in sweets and beverages. Māori of precolonial New Zealand would gather pollen of Typha orientalis to make a special bread called pungapunga.
Parasites
The growing industries in pollen harvesting for human and bee consumption rely on harvesting pollen baskets from honey bees as they return to their hives using a pollen trap. When this pollen has been tested for parasites, it has been found that a multitude of viruses and eukaryotic parasites are present in the pollen. It is currently unclear if the parasites are introduced by the bee that collected the pollen or if they come from the flower. Though this is not likely to pose a risk to humans, it is a major issue for the bumblebee rearing industry that relies on thousands of tonnes of honey bee-collected pollen per year. Several sterilization methods have been employed, though no method has been 100% effective at sterilisation without reducing the nutritional value of the pollen.
Forensic palynology
In forensic biology, pollen can tell a lot about where a person or object has been, because regions of the world, or even more particular locations such as a certain set of bushes, will have a distinctive collection of pollen species. Pollen evidence can also reveal the season in which a particular object picked up the pollen. Pollen has been used to trace activity at mass graves in Bosnia, catch a burglar who brushed against a Hypericum bush during a crime, and has even been proposed as an additive for bullets to enable tracking them.
Spiritual purposes
In some Native American religions, pollen was used in prayers and rituals to symbolize life and renewal by sanctifying objects, dancing grounds, trails, and sandpaintings. It may also be sprinkled over heads or in mouths. Many Navajo people believed the body became holy when it traveled over a trail sprinkled with pollen.
Pollen grain staining
For agricultural research purposes, assessing the viability of pollen grains can be necessary and illuminating. A very common, efficient method to do so is known as Alexander's stain. This differential stain consists of ethanol, malachite green, distilled water, glycerol, phenol, chloral hydrate, acid fuchsin, orange G, and glacial acetic acid. (A less-toxic variation omits the phenol and chloral hydrate.) In angiosperms and gymnosperms non-aborted pollen grain will appear red or pink, and aborted pollen grains will appear blue or slightly green.
See also
European Pollen Database
Evolution of sex
Honeybee starvation
Pollen calendar
Pollen count
Pollen DNA barcoding
Pollen source
Polyphenol antioxidant
Bee pollen
References
Bibliography
External links
Pollen and Spore Identification Literature
Pollen micrographs at SEM and confocal microscope
The flight of a pollen cloud
PalDat (database comprising palynological data from a variety of plant families)
Pollen-Wiki - A digital Pollen-Atlas, retrieved 9 February 2018.
YouTube video of pollen clouds from Juncus gerardii plants
Plant anatomy
Plant morphology
Pollination
Allergology | Pollen | [
"Biology"
] | 4,842 | [
"Plant morphology",
"Plants"
] |
46,981 | https://en.wikipedia.org/wiki/Scutum%20%28constellation%29 | Scutum is a small constellation. Its name is Latin for shield, and it was originally named Scutum Sobiescianum by Johannes Hevelius in 1684. Located just south of the celestial equator, its four brightest stars form a narrow diamond shape. It is one of the 88 IAU designated constellations defined in 1922.
History
Scutum was named in 1684 by Polish astronomer Johannes Hevelius (Jan Heweliusz), who originally named it Scutum Sobiescianum (Shield of Sobieski) to commemorate the victory of the Christian forces led by Polish King John III Sobieski (Jan III Sobieski) in the Battle of Vienna in 1683. Later, the name was shortened to Scutum.
Five bright stars of Scutum (α Sct, β Sct, δ Sct, ε Sct and η Sct) were previously known as 1, 6, 2, 3, and 9 Aquilae respectively.
The constellation of Scutum was adopted by the International Astronomical Union in 1922 as one of the 88 constellations covering the entire sky, with the official abbreviation of "Sct". The constellation boundaries are defined by a quadrilateral. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −3.83° and −15.94°.
Coincidentally, the Chinese also associated these stars with battle armor, incorporating them into the larger asterism known as Tien Pien, i.e., the Heavenly Casque (or Helmet).
Features
Stars
Scutum is not a bright constellation, with the brightest star, Alpha Scuti, being a K-type giant star at magnitude 3.85. However, some stars are notable in the constellation. Beta Scuti, also known as 6 Aquilae, is the second brightest at magnitude 4.22, followed by Delta Scuti at magnitude 4.72. Beta Scuti is a binary system; its primary has a spectral type similar to the Sun, although it is 1,270 times brighter. Delta Scuti is a bluish-white giant star that is currently approaching the Solar System. Within 1.3 million years it will pass within about 10 light-years of Earth and will then appear much brighter than Sirius.
UY Scuti is a red supergiant and is also one of the largest stars currently known with a radius over 900 times that of the Sun. RSGC1-F01 is another red supergiant whose radius is over 1,450 times that of the Sun. Scutum contains several clusters of supergiant stars, including RSGC1, Stephenson 2 and RSGC3.
Deep sky objects
Although not a large constellation, Scutum contains several open clusters, as well as a globular cluster and a planetary nebula. The two best known deep sky objects in Scutum are M11 (the Wild Duck Cluster) and the open cluster M26 (NGC 6694). The globular cluster NGC 6712 and the planetary nebula IC 1295 can be found in the eastern part of the constellation, only 24 arcminutes apart.
The most prominent open cluster in Scutum is the Wild Duck Cluster, M11. It was named by William Henry Smyth in 1844 for its resemblance in the eyepiece to a flock of ducks in flight. The cluster, 6200 light-years from Earth and 20 light-years in diameter, contains approximately 3000 stars, making it a particularly rich cluster. It is around 220 million years old, although some studies give older estimates. Estimates for the mass of the star cluster range from to .
Space exploration
The space probe Pioneer 11 is moving in the direction of this constellation. It will not come near the closest star in this constellation for over a million years at its present speed, by which time its batteries will be long dead.
See also
Scutum (Chinese astronomy)
Taurus Poniatovii - a constellation created by the Polish astronomer Marcin Odlanicki Poczobutt in 1777 to honor King of Poland Stanisław August Poniatowski.
References
Sources
Ian Ridpath and Wil Tirion (2017). Stars and Planets Guide (5th ed.). Collins, London; Princeton University Press, Princeton.
External links
The Deep Photographic Guide to the Constellations: Scutum
Southern constellations
Constellations listed by Johannes Hevelius | Scutum (constellation) | [
"Astronomy"
] | 918 | [
"Scutum (constellation)",
"Southern constellations",
"Constellations",
"Constellations listed by Johannes Hevelius"
] |
46,982 | https://en.wikipedia.org/wiki/Transmission%20system | In telecommunications, a transmission system is a system that transmits a signal from one place to another. The signal can be an electrical, optical or radio signal. The goal of a transmission system is to transmit data accurately and efficiently from point A to point B over a distance, using a variety of technologies such as copper cable and fiber-optic cables, satellite links, and wireless communication technologies.
The International Telecommunication Union (ITU) and the European Telecommunications Standards Institute (ETSI) define a transmission system as the interface and medium through which peer physical layer entities transfer bits. It encompasses all the components and technologies involved in transmitting digital data from one location to another, including modems, cables, and other networking equipment.
Some transmission systems contain multipliers, which amplify a signal prior to re-transmission, or regenerators, which attempt to reconstruct and re-shape the coded message before re-transmission.
One of the most widely used transmission system technologies in the Internet and the public switched telephone network (PSTN) is synchronous optical networking (SONET).
A transmission system is also described as the medium through which data is transmitted from one point to another. Common transmission systems that people use every day include the internet, mobile networks, and wireless links.
Digital transmission system
The ITU defines a digital transmission system as a system that uses digital signals to transmit information. In a digital transmission system, the data is first converted into a digital format and then transmitted over a communication channel. The digital format provides a number of benefits over analog transmission systems, including improved signal quality, reduced noise and interference, and increased data accuracy.
The ITU defines a digital transmission system (DTS) as follows: "A specific means of providing a digital section." The ITU sets global standards for digital transmission systems, including the encoding and decoding methods used, the data rates and transmission speeds, and the types of communication channels used. These standards ensure that digital transmission systems are compatible and interoperable with each other, regardless of the type of data being transmitted or the geographical location of the sender and receiver.
Basic components of a DTS
Point-to-point links are communication systems between two endpoints, usually a sender (transmitter) and a receiver.
System performance analysis:
Link power budget is a power loss model for a point-to-point link.
Rise time budget is an analysis method used to measure the amount of dispersion present in a link.
Line coding is the process of transforming data into digital signals for transmission over a point-to-point link. A line-coding subsystem can include a binary data source, a multiplexer, and a line coder; two common schemes are illustrated in the sketch after this list.
Non-return-to-zero (NRZ)
Return-to-zero (RZ)
Phase-encoded (PE)
Block codes
Error correction techniques are used to detect and correct errors that occur during transmission.
Automatic repeat request (ARQ)
Forward error correction (FEC)
Noise effects on system performance can be minimized by using signal conditioning techniques such as signal amplification and filtering.
These techniques are used to improve signal-to-noise ratio, which helps to maintain the integrity of the signal during transmission.
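As a rough illustration of the first two line-coding schemes named above, the following Python sketch maps a bit sequence onto a polar NRZ waveform and a unipolar RZ waveform. It is a minimal sketch rather than any standardized implementation; the signal levels and the number of samples per bit are arbitrary assumptions chosen for readability.

```python
# Illustrative sketch of two simple line codes: polar NRZ and unipolar RZ.
# Signal levels and samples-per-bit are arbitrary choices for demonstration.

def nrz_encode(bits, samples_per_bit=4):
    """Non-return-to-zero: hold +1 for a 1-bit and -1 for a 0-bit over the whole bit period."""
    signal = []
    for b in bits:
        signal.extend([1.0 if b else -1.0] * samples_per_bit)
    return signal

def rz_encode(bits, samples_per_bit=4):
    """Return-to-zero: a 1-bit is +1 for the first half of the bit period and
    returns to 0 for the second half; a 0-bit stays at 0 throughout."""
    half = samples_per_bit // 2
    signal = []
    for b in bits:
        if b:
            signal.extend([1.0] * half + [0.0] * (samples_per_bit - half))
        else:
            signal.extend([0.0] * samples_per_bit)
    return signal

if __name__ == "__main__":
    data = [1, 0, 1, 1, 0]
    print("NRZ:", nrz_encode(data))
    print("RZ: ", rz_encode(data))
```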
See also
Signal transmission
Communications satellite
Communications system
Submarine communications cable – a cable on the sea bed
References
Telecommunications systems | Transmission system | [
"Technology"
] | 665 | [
"Telecommunications systems"
] |
46,999 | https://en.wikipedia.org/wiki/Buffer%20solution | A buffer solution is a solution where the pH does not change significantly on dilution or if an acid or base is added at constant temperature. Its pH changes very little when a small amount of strong acid or base is added to it. Buffer solutions are used as a means of keeping pH at a nearly constant value in a wide variety of chemical applications. In nature, there are many living systems that use buffering for pH regulation. For example, the bicarbonate buffering system is used to regulate the pH of blood, and bicarbonate also acts as a buffer in the ocean.
Principles of buffering
Buffer solutions resist pH change because of a chemical equilibrium between the weak acid HA and its conjugate base A−:
HA ⇌ H+ + A−
When some strong acid is added to an equilibrium mixture of the weak acid and its conjugate base, hydrogen ions (H+) are added, and the equilibrium is shifted to the left, in accordance with Le Chatelier's principle. Because of this, the hydrogen ion concentration increases by less than the amount expected for the quantity of strong acid added.
Similarly, if strong alkali is added to the mixture, the hydrogen ion concentration decreases by less than the amount expected for the quantity of alkali added. In Figure 1, the effect is illustrated by the simulated titration of a weak acid with pKa = 4.7. The relative concentration of undissociated acid is shown in blue, and of its conjugate base in red. The pH changes relatively slowly in the buffer region, pH = pKa ± 1, centered at pH = 4.7, where [HA] = [A−]. The hydrogen ion concentration decreases by less than the amount expected because most of the added hydroxide ion is consumed in the reaction
HA + OH− → A− + H2O,
and only a little is consumed in the neutralization reaction H+ + OH− → H2O (which is the reaction that results in an increase in pH).
Once the acid is more than 95% deprotonated, the pH rises rapidly because most of the added alkali is consumed in the neutralization reaction.
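The simulated titration described above can be reproduced numerically. The following Python sketch is an illustration, not the calculation behind the original figure: it ignores volume changes on addition of base, assumes a 0.1 M weak acid with pKa = 4.7 as example values, and finds the hydrogen-ion concentration by bisection on the charge balance. Its output shows the slow pH change in the buffer region and the rapid rise once the acid is nearly fully deprotonated.

```python
import math

# Sketch of the titration of a weak acid with strong base, ignoring volume changes.
# pKa = 4.7 matches the example in the text; the 0.1 M acid concentration is an
# arbitrary assumption for illustration.
PKA = 4.7
KA = 10.0 ** -PKA
KW = 1.0e-14
C0 = 0.1  # total weak acid, mol/L

def charge_balance(h, cb):
    """Na+ + H+ - A- - OH- for cb mol/L of added strong base; zero at equilibrium."""
    return cb + h - C0 * KA / (KA + h) - KW / h

def ph_after_base(cb):
    """Find [H+] by bisection on a logarithmic scale, then return the pH."""
    lo, hi = 1e-14, 1.0
    for _ in range(100):
        mid = math.sqrt(lo * hi)
        if charge_balance(mid, cb) > 0:
            hi = mid
        else:
            lo = mid
    return -math.log10(math.sqrt(lo * hi))

if __name__ == "__main__":
    for frac in [0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 0.95, 0.99]:
        print(f"{100 * frac:5.1f}% neutralized: pH = {ph_after_base(frac * C0):.2f}")
```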
Buffer capacity
Buffer capacity is a quantitative measure of the resistance to change of pH of a solution containing a buffering agent with respect to a change of acid or alkali concentration. It can be defined as follows:
β = dn/d(pH), where dn is an infinitesimal amount of added base, or
β = −dn/d(pH), where dn is an infinitesimal amount of added acid. pH is defined as −log10[H+], and d(pH) is an infinitesimal change in pH.
With either definition the buffer capacity for a weak acid HA with dissociation constant Ka can be expressed as
β = 2.303 ( [H+] + CA Ka [H+] / (Ka + [H+])^2 + Kw / [H+] )
where [H+] is the concentration of hydrogen ions, and CA is the total concentration of added acid. Kw is the equilibrium constant for self-ionization of water, equal to 1.0 × 10^−14. Note that in solution H+ exists as the hydronium ion H3O+, and further aquation of the hydronium ion has negligible effect on the dissociation equilibrium, except at very high acid concentration.
This equation shows that there are three regions of raised buffer capacity (see figure 2).
In the central region of the curve (coloured green on the plot), the second term is dominant, and buffer capacity rises to a local maximum at pH = pKa. The height of this peak depends on the value of pKa. Buffer capacity is negligible when the concentration [HA] of buffering agent is very small and increases with increasing concentration of the buffering agent. Some authors show only this region in graphs of buffer capacity. Buffer capacity falls to 33% of the maximum value at pH = pKa ± 1, to 10% at pH = pKa ± 1.5 and to 1% at pH = pKa ± 2. For this reason the most useful range is approximately pKa ± 1. When choosing a buffer for use at a specific pH, it should have a pKa value as close as possible to that pH.
With strongly acidic solutions, pH less than about 2 (coloured red on the plot), the first term in the equation dominates, and buffer capacity rises exponentially with decreasing pH: β ≈ 2.303 [H+]. This results from the fact that the second and third terms become negligible at very low pH. This term is independent of the presence or absence of a buffering agent.
With strongly alkaline solutions, pH more than about 12 (coloured blue on the plot), the third term in the equation dominates, and buffer capacity rises exponentially with increasing pH: β ≈ 2.303 Kw / [H+] = 2.303 [OH−]. This results from the fact that the first and second terms become negligible at very high pH. This term is also independent of the presence or absence of a buffering agent.
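A short numerical sketch of the buffer-capacity expression given above is shown below; the pKa of 4.7 and the total buffer concentration of 0.1 M are illustrative assumptions. Evaluating it over a range of pH values reproduces the three regions just described: the local maximum at pH = pKa and the exponential rise at very low and very high pH.

```python
# Buffer capacity of a weak-acid buffer as a function of pH:
# beta = 2.303 * ([H+] + C_A*Ka*[H+]/(Ka + [H+])**2 + Kw/[H+])
# pKa = 4.7 and C_A = 0.1 M are illustrative assumptions.
KW = 1.0e-14
KA = 10.0 ** -4.7
C_A = 0.1

def buffer_capacity(ph):
    h = 10.0 ** -ph  # hydrogen-ion concentration
    return 2.303 * (h + C_A * KA * h / (KA + h) ** 2 + KW / h)

if __name__ == "__main__":
    for ph in [1, 2, 3, 4.7, 6, 7, 10, 12, 13]:
        print(f"pH {ph:>4}: beta = {buffer_capacity(ph):.4f} mol/L per pH unit")
```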
Applications of buffers
The pH of a solution containing a buffering agent can only vary within a narrow range, regardless of what else may be present in the solution. In biological systems this is an essential condition for enzymes to function correctly. For example, in human blood a mixture of carbonic acid (H2CO3) and bicarbonate (HCO3−) is present in the plasma fraction; this constitutes the major mechanism for maintaining the pH of blood between 7.35 and 7.45. Outside this narrow range (7.40 ± 0.05 pH unit), acidosis and alkalosis metabolic conditions rapidly develop, ultimately leading to death if the correct buffering capacity is not rapidly restored.
If the pH value of a solution rises or falls too much, the effectiveness of an enzyme decreases in a process, known as denaturation, which is usually irreversible. The majority of biological samples that are used in research are kept in a buffer solution, often phosphate buffered saline (PBS) at pH 7.4.
In industry, buffering agents are used in fermentation processes and in setting the correct conditions for dyes used in colouring fabrics. They are also used in chemical analysis and calibration of pH meters.
Simple buffering agents
{| class="wikitable"
! Buffering agent !! pKa !! Useful pH range
|-
| Citric acid || 3.13, 4.76, 6.40 || 2.1–7.4
|-
| Acetic acid || 4.8 || 3.8–5.8
|-
| KH2PO4 || 7.2 || 6.2–8.2
|-
| CHES || 9.3 || 8.3–10.3
|-
| Borate || 9.24 || 8.25–10.25
|}
For buffers in acid regions, the pH may be adjusted to a desired value by adding a strong acid such as hydrochloric acid to the particular buffering agent. For alkaline buffers, a strong base such as sodium hydroxide may be added. Alternatively, a buffer mixture can be made from a mixture of an acid and its conjugate base. For example, an acetate buffer can be made from a mixture of acetic acid and sodium acetate. Similarly, an alkaline buffer can be made from a mixture of the base and its conjugate acid.
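For a buffer made from an acid and its conjugate base, the ratio of the two components needed to reach a target pH follows from the Henderson–Hasselbalch equation, pH = pKa + log10([A−]/[HA]). The sketch below applies it to an acetate buffer; the target pH of 5.0 is an arbitrary example and the pKa of 4.76 is taken from the table above.

```python
import math

# Ratio of conjugate base to acid needed to reach a target pH, from the
# Henderson-Hasselbalch equation: pH = pKa + log10([A-]/[HA]).
# Example values: acetic acid pKa = 4.76 (from the table above), target pH = 5.0.

def base_to_acid_ratio(target_ph, pka):
    return 10.0 ** (target_ph - pka)

if __name__ == "__main__":
    ratio = base_to_acid_ratio(5.0, 4.76)
    print(f"[acetate]/[acetic acid] = {ratio:.2f}")  # roughly 1.7
    # Sanity check in the other direction:
    print(f"pH = {4.76 + math.log10(ratio):.2f}")    # 5.00
```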
"Universal" buffer mixtures
By combining substances with pKa values differing by only two or less and adjusting the pH, a wide range of buffers can be obtained. Citric acid is a useful component of a buffer mixture because it has three pKa values, separated by less than two. The buffer range can be extended by adding other buffering agents. The following mixtures (McIlvaine's buffer solutions) have a buffer range of pH 3 to 8.
{| class="wikitable"
! 0.2 M Na2HPO4 (mL)
! 0.1 M citric acid (mL)
! pH
|-
| 20.55
| 79.45
| style="background:#ff0000; color:white" | 3.0
|-
| 38.55
| 61.45
| style="background:#ff7777; color:white" |4.0
|-
| 51.50
| 48.50
| style="background:#ff7700;" | 5.0
|-
| 63.15
| 36.85
| style="background:#ffff00;" |6.0
|-
| 82.35
| 17.65
| style="background:#007777; color:white" | 7.0
|-
| 97.25
| 2.75
|style="background:#0077ff; color:white" | 8.0
|}
A mixture containing citric acid, monopotassium phosphate, boric acid, and diethyl barbituric acid can be made to cover the pH range 2.6 to 12.
Other universal buffers are the Carmody buffer and the Britton–Robinson buffer, developed in 1931.
Common buffer compounds used in biology
For effective range see Buffer capacity, above. Also see Good's buffers for the historic design principles and favourable properties of these buffer substances in biochemical applications.
Calculating buffer pH
Monoprotic acids
First write down the equilibrium expression:
HA ⇌ H+ + A−
This shows that when the acid dissociates, equal amounts of hydrogen ion and anion are produced. The equilibrium concentrations of these three components can be calculated in an ICE table (ICE standing for "initial, change, equilibrium").
{| class="wikitable"
|+ ICE table for a monoprotic acid
|-
!
! [HA] !! [A−] !! [H+]
|-
! I
| C0 || 0 || y
|-
! C
| −x || x || x
|-
! E
| C0 − x || x || x + y
|}
The first row, labelled I, lists the initial conditions: the concentration of acid is C0, initially undissociated, so the concentrations of A− and H+ would be zero; y is the initial concentration of added strong acid, such as hydrochloric acid. If strong alkali, such as sodium hydroxide, is added, then y will have a negative sign because alkali removes hydrogen ions from the solution. The second row, labelled C for "change", specifies the changes that occur when the acid dissociates. The acid concentration decreases by an amount −x, and the concentrations of A− and H+ both increase by an amount +x. This follows from the equilibrium expression. The third row, labelled E for "equilibrium", adds together the first two rows and shows the concentrations at equilibrium.
To find x, use the formula for the equilibrium constant in terms of concentrations:
Ka = [H+][A−] / [HA]
Substitute the concentrations with the values found in the last row of the ICE table:
Ka = x(x + y) / (C0 − x)
Simplify to
x^2 + (Ka + y)x − KaC0 = 0
With specific values for C0, Ka and y, this equation can be solved for x. Assuming that pH = −log10[H+], the pH can be calculated as pH = −log10(x + y).
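A minimal numerical sketch of this monoprotic calculation is shown below. The quadratic derived above is solved directly for x, and the example values of C0, pKa, and y are arbitrary assumptions; a negative y represents added strong base, as noted in the ICE table discussion.

```python
import math

# Solve x**2 + (Ka + y)*x - Ka*C0 = 0 for x, then pH = -log10(x + y).
# C0, pKa and y are illustrative example values; y > 0 means strong acid added,
# y < 0 means strong base added (as in the ICE table discussion).

def buffer_ph(c0, pka, y):
    ka = 10.0 ** -pka
    b = ka + y
    x = (-b + math.sqrt(b * b + 4.0 * ka * c0)) / 2.0  # positive root of the quadratic
    return -math.log10(x + y)

if __name__ == "__main__":
    print(buffer_ph(0.10, 4.7, 0.02))   # 0.02 M strong acid added: roughly pH 1.7
    print(buffer_ph(0.10, 4.7, -0.04))  # 0.04 M strong base added: roughly pH 4.5
```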
Polyprotic acids
Polyprotic acids are acids that can lose more than one proton. The constant for dissociation of the first proton may be denoted as Ka1, and the constants for dissociation of successive protons as Ka2, etc. Citric acid is an example of a polyprotic acid H3A, as it can lose three protons.
{| class="wikitable" style="width: 230px;
|+ Stepwise dissociation constants
|-
! |Equilibrium!!Citric acid
|-
| H3A ⇌ H2A− + H+ || pKa1 = 3.13
|-
| H2A− ⇌ HA2− + H+ || pKa2 = 4.76
|-
| HA2− ⇌ A3− + H+ || pKa3 = 6.40
|}
When the difference between successive pKa values is less than about 3, there is overlap between the pH range of existence of the species in equilibrium. The smaller the difference, the more the overlap. In the case of citric acid, the overlap is extensive and solutions of citric acid are buffered over the whole range of pH 2.5 to 7.5.
Calculation of the pH with a polyprotic acid requires a speciation calculation to be performed. In the case of citric acid, this entails the solution of the two equations of mass balance:
CA = [A3−] + β1[A3−][H+] + β2[A3−][H+]^2 + β3[A3−][H+]^3
CH = [H+] − Kw/[H+] + β1[A3−][H+] + 2β2[A3−][H+]^2 + 3β3[A3−][H+]^3
CA is the analytical concentration of the acid, CH is the analytical concentration of added hydrogen ions, βq are the cumulative association constants, and Kw is the constant for self-ionization of water. There are two non-linear simultaneous equations in the two unknown quantities [A3−] and [H+]. Many computer programs are available to do this calculation. The speciation diagram for citric acid was produced with the program HySS.
N.B. The numbering of cumulative, overall constants is the reverse of the numbering of the stepwise, dissociation constants.
{| class="wikitable"
|+ Relationship between cumulative association constant (β) values and stepwise dissociation constant (K) values for a tribasic acid.
! Equilibrium!! Relationship
|-
| A3− + H+ ⇌ HA2− || log β1 = pKa3
|-
| A3− + 2H+ ⇌ H2A− || log β2 = pKa2 + pKa3
|-
| A3− + 3H+ ⇌ H3A || log β3 = pKa1 + pKa2 + pKa3
|}
Cumulative association constants are used in general-purpose computer programs such as the one used to obtain the speciation diagram above.
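As a rough sketch of such a speciation calculation, the following Python code solves the citric acid mass balances numerically with SciPy's brentq root finder (SciPy is assumed to be available). The pKa values are those quoted above; the analytical concentration is an arbitrary assumption, and the first mass balance is substituted into the second so that only one equation in [H+] remains.

```python
import math
from scipy.optimize import brentq

# Speciation of citric acid (H3A) from the mass-balance equations.
# pKa values are those quoted above; the concentration is an illustrative assumption.
PKA1, PKA2, PKA3 = 3.13, 4.76, 6.40
KW = 1.0e-14

# Cumulative association constants: log beta1 = pKa3, log beta2 = pKa2 + pKa3, ...
B1 = 10.0 ** PKA3
B2 = 10.0 ** (PKA2 + PKA3)
B3 = 10.0 ** (PKA1 + PKA2 + PKA3)

def free_citrate(h, c_a):
    """[A3-] from the citrate mass balance, for a given [H+]."""
    return c_a / (1.0 + B1 * h + B2 * h ** 2 + B3 * h ** 3)

def proton_balance(h, c_a, c_h):
    """Hydrogen mass balance; zero at the equilibrium [H+]."""
    a = free_citrate(h, c_a)
    bound = a * (B1 * h + 2.0 * B2 * h ** 2 + 3.0 * B3 * h ** 3)
    return h - KW / h + bound - c_h

if __name__ == "__main__":
    c_a = 0.01            # total citrate, mol/L (assumed)
    c_h = 3.0 * c_a       # pure citric acid supplies three acidic protons per molecule
    h = brentq(proton_balance, 1e-13, 1.0, args=(c_a, c_h))
    print(f"pH of {c_a} M citric acid: {-math.log10(h):.2f}")
```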
See also
Henderson–Hasselbalch equation
Good's buffers
Common-ion effect
Metal ion buffer
Mineral redox buffer
References
External links
Acid–base chemistry
Acid–base physiology
Equilibrium chemistry | Buffer solution | [
"Chemistry"
] | 2,859 | [
"Acid–base chemistry",
"Buffer solutions",
"Acid–base physiology",
"Equilibrium chemistry",
"nan"
] |
47,000 | https://en.wikipedia.org/wiki/Colocation%20centre | A colocation centre (also spelled co-location, or shortened to colo) or "carrier hotel", is a type of data centre where equipment, space, and bandwidth are available for rental to retail customers. Colocation facilities provide space, power, cooling, and physical security for the server, storage, and networking equipment of other firms and also connect them to a variety of telecommunications and network service providers with a minimum of cost and complexity.
Configuration
Many colocation providers sell to a wide range of customers, ranging from large enterprises to small companies. Typically, the customer owns the information technology (IT) equipment and the facility provides power and cooling. Customers retain control over the design and usage of their equipment, but daily management of the data centre and facility are overseen by the multi-tenant colocation provider.
Cabinets – A cabinet is a locking unit that holds a server rack. In a multi-tenant data centre, servers within cabinets share raised-floor space with other tenants, in addition to sharing power and cooling infrastructure.
Cages – A cage is dedicated server space within a traditional raised-floor data centre; it is surrounded by mesh walls and entered through a locking door. Cages share power and cooling infrastructure with other data centre tenants.
Suites – A suite is a dedicated, private server space within a traditional raised-floor data centre; it is fully enclosed by solid partitions and entered through a locking door. Suites may share power and cooling infrastructure with other data center tenants, or have these resources provided on a dedicated basis.
Modules – data center modules are purpose-engineered modules and components to offer scalable data center capacity. They typically use standardized components, which make them easily added, integrated or retrofitted into existing data centers, and cheaper and easier to build. In a colocation environment, the data center module is a data center within a data center, with its own steel walls and security protocol, and its own cooling and power infrastructure. "A number of colocation companies have praised the modular approach to data centers to better match customer demand with physical build outs, and allow customers to buy a data center as a service, paying only for what they consume."
Building features
Buildings with data centres inside them are often easy to recognize by the amount of cooling equipment located outside or on the roof.
Colocation facilities have many other special characteristics:
Fire protection systems, including passive and active elements, as well as implementation of fire prevention programmes in operations. Smoke detectors are usually installed to provide early warning of a developing fire by detecting particles generated by smouldering components prior to the development of flame. This allows investigation, interruption of power, and manual fire suppression using hand held fire extinguishers before the fire grows to a large size. A fire sprinkler system is often provided to control a full scale fire if it develops. Clean agent fire suppression gaseous systems are sometimes installed to suppress a fire earlier than the fire sprinkler system. Passive fire protection elements include the installation of fire walls around the space, so a fire can be restricted to a portion of the facility for a limited time in the event of the failure of the active fire protection systems, or if they are not installed.
19-inch racks for data equipment and servers, 23-inch racks for telecommunications equipment
Cabinets and cages for physical access control over tenants' equipment. Depending on one's needs a cabinet can house individual or multiple racks.
Overhead or underfloor cable rack (tray) and fibreguide, power cables usually on separate rack from data
Air conditioning is used to control the temperature and humidity in the space. ASHRAE recommends a temperature range and humidity range for optimal electronic equipment conditions versus environmental issues. The electrical power used by the electronic equipment is converted to heat, which is rejected to the ambient air in the data centre space. Unless the heat is removed, the ambient temperature will rise, resulting in electronic equipment malfunction. By controlling the space air temperature, the server components at the board level are kept within the manufacturer's specified temperature and humidity range. Air conditioning systems help keep equipment space humidity within acceptable parameters by cooling the return space air below the dew point. Too much humidity and water may begin to condense on internal components. In case of a dry atmosphere, ancillary humidification systems may add water vapour to the space if the humidity is too low, to avoid static electricity discharge problems which may damage components.
Low-impedance electrical ground
Few, if any, windows
Colocation data centres are often audited to prove that they attain certain standards and levels of reliability; the most commonly seen systems are SSAE 16 SOC 1 Type I and Type II (formerly SAS 70 Type I and Type II) and the tier system by the Uptime Institute or TIA. For service organizations today, SSAE 16 calls for a description of its "system". This is far more detailed and comprehensive than SAS 70's description of "controls". Other data center compliance standards include Health Insurance Portability and Accountability Act (HIPAA) audit and PCI DSS Standards.
Power
Colocation facilities generally have generators that start automatically when utility power fails, usually running on diesel fuel. These generators may have varying levels of redundancy, depending on how the facility is built. Generators do not start instantaneously, so colocation facilities usually have battery backup systems. In many facilities, the operator of the facility provides large inverters to provide AC power from the batteries. In other cases, customers may install smaller UPSes in their racks.
Some customers choose to use equipment that is powered directly by 48 VDC (nominal) battery banks. This may provide better energy efficiency, and may reduce the number of parts that can fail, though the reduced voltage greatly increases necessary current, and thus the size (and cost) of power delivery wiring. An alternative to batteries is a motor–generator connected to a flywheel and diesel engine.
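As a back-of-the-envelope sketch of that trade-off, the following calculation compares the current needed to deliver the same power at 48 VDC and at a representative 230 V AC feed; the 5 kW load is an assumed example figure, not a value from this article.

```python
# Why low-voltage DC distribution needs heavier conductors: I = P / V.
# The 5 kW rack load and 230 V mains figure are assumed examples.
load_w = 5000.0  # W, assumed rack load

for volts in (230.0, 48.0):
    amps = load_w / volts  # ignoring power factor and conversion losses
    print(f"{volts:>5.0f} V -> {amps:6.1f} A")

# 230 V draws roughly 22 A, while 48 V draws roughly 104 A for the same load,
# so cabling, busbars and connectors must be sized up accordingly.
```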
Many colocation facilities can provide redundant, A and B power feeds to customer equipment, and high end servers and telecommunications equipment often can have two power supplies installed.
Colocation facilities are sometimes connected to multiple sections of the utility power grid for additional reliability.
Internal connections
Colocation facility owners have differing rules regarding cross-connects between their customers, some of whom may be carriers. These rules may allow customers to run such connections at no charge, or allow customers to order such connections for a monthly fee. They may allow customers to order cross-connects to carriers, but not to other customers. Some colocation centres feature a "meet-me-room" where the different carriers housed in the centre can efficiently exchange data.
Most peering points sit in colocation centres and because of the high concentration of servers inside larger colocation centres, most carriers will be interested in bringing direct connections to such buildings. In many cases, there will be a larger Internet exchange point hosted inside a colocation centre, where customers can connect for peering.
See also
Carrier-neutral data center
References
External links
Build Or Colocate? The ROI Of Your Next Data Center
DCK Guide To Modular Data Centers: The Modular Market
Data centers
Internet architecture
Internet hosting
Servers (computing)
Web hosting | Colocation centre | [
"Technology"
] | 1,446 | [
"Data centers",
"Internet architecture",
"IT infrastructure",
"Computers"
] |
47,011 | https://en.wikipedia.org/wiki/Arrhenius%20equation | In physical chemistry, the Arrhenius equation is a formula for the temperature dependence of reaction rates. The equation was proposed by Svante Arrhenius in 1889, based on the work of Dutch chemist Jacobus Henricus van 't Hoff who had noted in 1884 that the van 't Hoff equation for the temperature dependence of equilibrium constants suggests such a formula for the rates of both forward and reverse reactions. This equation has a vast and important application in determining the rate of chemical reactions and for calculation of energy of activation. Arrhenius provided a physical justification and interpretation for the formula. Currently, it is best seen as an empirical relationship. It can be used to model the temperature variation of diffusion coefficients, population of crystal vacancies, creep rates, and many other thermally induced processes and reactions. The Eyring equation, developed in 1935, also expresses the relationship between rate and energy.
Formulation
The Arrhenius equation describes the exponential dependence of the rate constant of a chemical reaction on the absolute temperature as
k = A e^(−Ea/(RT))
where
k is the rate constant (frequency of collisions resulting in a reaction),
T is the absolute temperature,
A is the pre-exponential factor or Arrhenius factor or frequency factor. Arrhenius originally considered A to be a temperature-independent constant for each chemical reaction. However more recent treatments include some temperature dependence – see below.
Ea is the molar activation energy for the reaction,
R is the universal gas constant.
Alternatively, the equation may be expressed as
k = A e^(−Ea/(kB T))
where
Ea is the activation energy for the reaction (in the same unit as kB T),
kB is the Boltzmann constant.
The only difference is the unit of Ea: the former form uses energy per mole, which is common in chemistry, while the latter form uses energy per molecule directly, which is common in physics.
The different units are accounted for in using either the gas constant, R, or the Boltzmann constant, kB, as the multiplier of temperature T.
The units of the pre-exponential factor A are identical to those of the rate constant and will vary depending on the order of the reaction. If the reaction is first order it has the unit s−1, and for that reason it is often called the frequency factor or attempt frequency of the reaction. Most simply, k is the number of collisions that result in a reaction per second, A is the number of collisions (leading to a reaction or not) per second occurring with the proper orientation to react, and e^(−Ea/(RT)) is the probability that any given collision will result in a reaction. It can be seen that either increasing the temperature or decreasing the activation energy (for example through the use of catalysts) will result in an increase in rate of reaction.
Given the small temperature range of kinetic studies, it is reasonable to approximate the activation energy as being independent of the temperature. Similarly, under a wide range of practical conditions, the weak temperature dependence of the pre-exponential factor is negligible compared to the temperature dependence of the factor e^(−Ea/(RT)), except in the case of "barrierless" diffusion-limited reactions, in which case the pre-exponential factor is dominant and is directly observable.
With this equation it can be roughly estimated that the rate of reaction increases by a factor of about 2 to 3 for every 10 °C rise in temperature, for common values of activation energy and temperature range.
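As a numerical check of that rule of thumb, the sketch below evaluates the ratio of rate constants across a 10 °C rise for an assumed activation energy of 50 kJ/mol; this value is a typical order of magnitude chosen for illustration, not a figure from this article.

```python
# Rough check of the "2-3x per 10 degC" rule of thumb for an assumed Ea.
import math

R = 8.314        # J/(mol*K), universal gas constant
Ea = 50e3        # J/mol, assumed activation energy (illustrative)
T1, T2 = 298.0, 308.0  # K, a 10 degC rise near room temperature

# Pre-exponential factor cancels in the ratio k(T2)/k(T1).
ratio = math.exp(-Ea / (R * T2)) / math.exp(-Ea / (R * T1))
print(f"k(T2)/k(T1) = {ratio:.2f}")  # about 1.9 for these values
```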
The factor e^(−Ea/(RT)) denotes the fraction of molecules with energy greater than or equal to Ea.
Derivation
Van 't Hoff argued that the temperature T of a reaction and the standard equilibrium constant Keq exhibit the relation:
d(ln Keq)/dT = ΔU°/(RT^2)
where ΔU° denotes the apposite standard internal energy change value.
Let kf and kb respectively denote the forward and backward reaction rates of the reaction of interest, then
Keq = kf/kb, an equation from which ln Keq = ln kf − ln kb naturally follows.
Substituting the expression for ln Keq in the relation above, we obtain d(ln kf)/dT − d(ln kb)/dT = ΔU°/(RT^2).
The preceding equation can be broken down into the following two equations:
d(ln kf)/dT = constant + Ef/(RT^2)
and
d(ln kb)/dT = constant + Eb/(RT^2)
where Ef and Eb are the activation energies associated with the forward and backward reactions respectively, with ΔU° = Ef − Eb.
Experimental findings suggest that the constants in these two equations can be treated as being equal to zero, so that
d(ln kf)/dT = Ef/(RT^2) and d(ln kb)/dT = Eb/(RT^2).
Integrating these equations and taking the exponential yields the results kf = Af e^(−Ef/(RT)) and kb = Ab e^(−Eb/(RT)), where each pre-exponential factor Af or Ab is mathematically the exponential of the constant of integration for the respective indefinite integral in question.
Arrhenius plot
Taking the natural logarithm of the Arrhenius equation yields:
ln k = ln A − Ea/(RT)
Rearranging yields:
ln k = (−Ea/R)(1/T) + ln A
This has the same form as an equation for a straight line:
y = mx + c
where x is the reciprocal of T.
So, when a reaction has a rate constant obeying the Arrhenius equation, a plot of ln k versus T−1 gives a straight line, whose slope and intercept can be used to determine Ea and A respectively. This procedure is common in experimental chemical kinetics. The activation energy is simply obtained by multiplying by (−R) the slope of the straight line drawn from a plot of ln k versus (1/T):
Ea = −R × [slope of ln k versus (1/T)]
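As an illustration of this procedure, the sketch below fits ln k against 1/T for a set of made-up rate constants and recovers Ea and A from the slope and intercept; the data are invented for demonstration only, not measurements referenced by this article.

```python
# Extracting Ea and A from an Arrhenius plot by linear regression of ln k vs 1/T.
import numpy as np

R = 8.314                                        # J/(mol*K)
T = np.array([300.0, 310.0, 320.0, 330.0])       # K
k = np.array([1.2e-3, 3.0e-3, 7.1e-3, 1.6e-2])   # s^-1, hypothetical first-order data

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -R * slope        # activation energy, J/mol
A = np.exp(intercept)  # pre-exponential factor, s^-1
print(f"Ea = {Ea/1000:.1f} kJ/mol, A = {A:.2e} s^-1")
```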
Modified Arrhenius equation
The modified Arrhenius equation makes explicit the temperature dependence of the pre-exponential factor. The modified equation is usually of the form
k = A T^n e^(−Ea/(RT))
The original Arrhenius expression above corresponds to n = 0. Fitted rate constants typically lie in the range −1 < n < 1. Theoretical analyses yield various predictions for n. It has been pointed out that "it is not feasible to establish, on the basis of temperature studies of the rate constant, whether the predicted T1/2 dependence of the pre-exponential factor is observed experimentally". However, if additional evidence is available, from theory and/or from experiment (such as density dependence), there is no obstacle to incisive tests of the Arrhenius law.
Another common modification is the stretched exponential form
k = A exp[−(Ea/(RT))^β]
where β is a dimensionless number of order 1. This is typically regarded as a purely empirical correction or fudge factor to make the model fit the data, but can have theoretical meaning, for example showing the presence of a range of activation energies or in special cases like the Mott variable range hopping.
Theoretical interpretation
Arrhenius's concept of activation energy
Arrhenius argued that for reactants to transform into products, they must first acquire a minimum amount of energy, called the activation energy Ea. At an absolute temperature T, the fraction of molecules that have a kinetic energy greater than Ea can be calculated from statistical mechanics. The concept of activation energy explains the exponential nature of the relationship, and in one way or another, it is present in all kinetic theories.
The calculations for reaction rate constants involve an energy averaging over a Maxwell–Boltzmann distribution with Ea as lower bound and so are often of the type of incomplete gamma functions, which turn out to be proportional to e^(−Ea/(kB T)).
Collision theory
One approach is the collision theory of chemical reactions, developed by Max Trautz and William Lewis in the years 1916–18. In this theory, molecules are supposed to react if they collide with a relative kinetic energy along their line of centers that exceeds Ea. The number of binary collisions between two unlike molecules per second per unit volume is found to be
where NA is the Avogadro constant, dAB is the average diameter of A and B, T is the temperature which is multiplied by the Boltzmann constant kB to convert to energy, and μAB is the reduced mass.
The rate constant is then calculated as k = zAB e^(−Ea/(RT)), so that the collision theory predicts that the pre-exponential factor is equal to the collision number zAB. However for many reactions this agrees poorly with experiment, so the rate constant is written instead as k = ρ zAB e^(−Ea/(RT)). Here ρ is an empirical steric factor, often much less than 1.00, which is interpreted as the fraction of sufficiently energetic collisions in which the two molecules have the correct mutual orientation to react.
Transition state theory
The Eyring equation, another Arrhenius-like expression, appears in the "transition state theory" of chemical reactions, formulated by Eugene Wigner, Henry Eyring, Michael Polanyi and M. G. Evans in the 1930s. The Eyring equation can be written:
k = (kB T / h) e^(−ΔG‡/(RT)) = (kB T / h) e^(ΔS‡/R) e^(−ΔH‡/(RT))
where ΔG‡ is the Gibbs energy of activation, ΔS‡ is the entropy of activation, ΔH‡ is the enthalpy of activation, kB is the Boltzmann constant, and h is the Planck constant.
At first sight this looks like an exponential multiplied by a factor that is linear in temperature. However, free energy is itself a temperature dependent quantity. The free energy of activation is the difference of an enthalpy term and an entropy term multiplied by the absolute temperature. The pre-exponential factor depends primarily on the entropy of activation. The overall expression again takes the form of an Arrhenius exponential (of enthalpy rather than energy) multiplied by a slowly varying function of T. The precise form of the temperature dependence depends upon the reaction, and can be calculated using formulas from statistical mechanics involving the partition functions of the reactants and of the activated complex.
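To make the "slowly varying function of T" concrete, the following sketch evaluates the Eyring expression at two temperatures for assumed activation parameters (ΔH‡ = 80 kJ/mol, ΔS‡ = −50 J/(mol·K)); these values are illustrative, not taken from this article.

```python
# Evaluating k = (kB*T/h) * exp(dS/R) * exp(-dH/(R*T)) with assumed parameters.
import math

kB, h, R = 1.380649e-23, 6.62607015e-34, 8.314  # SI constants
dH, dS = 80e3, -50.0                            # assumed activation enthalpy (J/mol) and entropy (J/(mol*K))

def eyring_k(T):
    return (kB * T / h) * math.exp(dS / R) * math.exp(-dH / (R * T))

for T in (298.0, 308.0):
    print(f"T = {T:.0f} K, k = {eyring_k(T):.3e} s^-1")

# The linear (kB*T/h) prefactor changes by only a few percent over this range,
# while the exponential term roughly triples, consistent with the Arrhenius-like form.
```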
Limitations of the idea of Arrhenius activation energy
Both the Arrhenius activation energy and the rate constant k are experimentally determined, and represent macroscopic reaction-specific parameters that are not simply related to threshold energies and the success of individual collisions at the molecular level. Consider a particular collision (an elementary reaction) between molecules A and B. The collision angle, the relative translational energy, the internal (particularly vibrational) energy will all determine the chance that the collision will produce a product molecule AB. Macroscopic measurements of E and k are the result of many individual collisions with differing collision parameters. To probe reaction rates at molecular level, experiments are conducted under near-collisional conditions and this subject is often called molecular reaction dynamics.
Another situation where the explanation of the Arrhenius equation parameters falls short is in heterogeneous catalysis, especially for reactions that show Langmuir-Hinshelwood kinetics. Clearly, molecules on surfaces do not "collide" directly, and a simple molecular cross-section does not apply here. Instead, the pre-exponential factor reflects the travel across the surface towards the active site.
There are deviations from the Arrhenius law during the glass transition in all classes of glass-forming matter. The Arrhenius law predicts that the motion of the structural units (atoms, molecules, ions, etc.) should slow down at a slower rate through the glass transition than is experimentally observed. In other words, the structural units slow down at a faster rate than is predicted by the Arrhenius law. This observation is made reasonable assuming that the units must overcome an energy barrier by means of a thermal activation energy. The thermal energy must be high enough to allow for translational motion of the units which leads to viscous flow of the material.
See also
Accelerated aging
Eyring equation
Q10 (temperature coefficient)
Van 't Hoff equation
Clausius–Clapeyron relation
Gibbs–Helmholtz equation
Cherry blossom front – predicted using the Arrhenius equation
References
Bibliography
External links
Carbon Dioxide solubility in Polyethylene – Using Arrhenius equation for calculating species solubility in polymers
Chemical kinetics
Eponymous equations of physics
Statistical mechanics | Arrhenius equation | [
"Physics",
"Chemistry"
] | 2,243 | [
"Chemical reaction engineering",
"Equations of physics",
"Eponymous equations of physics",
"Statistical mechanics",
"Chemical kinetics"
] |
47,012 | https://en.wikipedia.org/wiki/Peering | In computer networking, peering is a voluntary interconnection of administratively separate Internet networks for the purpose of exchanging traffic between the "down-stream" users of each network. Peering is settlement-free, also known as "bill-and-keep" or "sender keeps all", meaning that neither party pays the other in association with the exchange of traffic; instead, each derives and retains revenue from its own customers.
An agreement by two or more networks to peer is instantiated by a physical interconnection of the networks, an exchange of routing information through the Border Gateway Protocol (BGP) routing protocol, tacit agreement to norms of conduct and, in some extraordinarily rare cases (0.07%), a formalized contractual document.
In 0.02% of cases the word "peering" is used to describe situations where there is some settlement involved. Because these outliers can be viewed as creating ambiguity, the phrase "settlement-free peering" is sometimes used to explicitly denote normal cost-free peering.
History
The first Internet exchange point was the Commercial Internet eXchange (CIX), formed by Alternet/UUNET (now Verizon Business), PSI, and CERFNET to exchange traffic without regard for whether the traffic complied with the acceptable use policy (AUP) of the NSFNet or ANS' interconnection policy. The CIX infrastructure consisted of a single router, managed by PSI, and was initially located in Santa Clara, California. Paying CIX members were allowed to attach to the router directly or via leased lines. After some time, the router was also attached to the Pacific Bell SMDS cloud. The router was later moved to the Palo Alto Internet Exchange, or PAIX, which was developed and operated by Digital Equipment Corporation (DEC). Because the CIX operated at OSI layer 3, rather than OSI layer 2, and because it was not neutral, in the sense that it was operated by one of its participants rather than by all of them collectively, and it conducted lobbying activities supported by some of its participants and not by others, it would not today be considered an Internet exchange point. Nonetheless, it was the first thing to bear that name.
The first exchange point to resemble modern, neutral, Ethernet-based exchanges was the Metropolitan Area Ethernet, or MAE, in Tysons Corner, Virginia. When the United States government de-funded the NSFNET backbone, Internet exchange points were needed to replace its function, and initial governmental funding was used to aid the preexisting MAE and bootstrap three other exchanges, which they dubbed NAPs, or "Network Access Points," in accordance with the terminology of the National Information Infrastructure document. All four are now defunct or no longer functioning as Internet exchange points:
MAE-East – Located in Tysons Corner, Virginia, and later relocated to Ashburn, Virginia
Chicago NAP – Operated by Ameritech and located in Chicago, Illinois
New York NAP – Operated by Sprint and located in Pennsauken, New Jersey
San Francisco NAP – Operated by PacBell and located in the Bay Area
As the Internet grew, and traffic levels increased, these NAPs became a network bottleneck. Most of the early NAPs utilized FDDI technology, which provided only 100 Mbit/s of capacity to each participant. Some of these exchanges upgraded to ATM technology, which provided OC-3 (155 Mbit/s) and OC-12 (622 Mbit/s) of capacity.
Other prospective exchange point operators moved directly into offering Ethernet technology, such as gigabit Ethernet (1,000 Mbit/s), which quickly became the predominant choice for Internet exchange points due to the reduced cost and increased capacity offered. Today, almost all significant exchange points operate solely over Ethernet, and most of the largest exchange points offer 10, 40, and even 100 gigabit service.
During the dot-com boom, many exchange point and carrier-neutral colocation providers had plans to build as many as 50 locations to promote carrier interconnection in the United States alone. Essentially all of these plans were abandoned following the dot-com bust, and today it is considered both economically and technically infeasible to support this level of interconnection among even the largest of networks.
How peering works
The Internet is a collection of separate and distinct networks referred to as autonomous systems, each one consisting of a set of globally unique IP addresses and a unique global BGP routing policy.
The interconnection relationships between Autonomous Systems are of exactly two types:
Peering - Two networks exchange traffic between their users freely, and for mutual benefit.
Transit – One network pays another network for access to the Internet.
Therefore, in order for a network to reach any specific other network on the Internet, it must either:
Sell transit service to that network or a chain of resellers ending at that network (making them a 'customer'),
Peer with that network or with a network which sells transit service to that network, or
Buy transit service from any other network (which is then responsible for providing interconnection to the rest of the Internet).
The Internet is based on the principle of global or end-to-end reachability, which means that any Internet user can transparently exchange traffic with any other Internet user. Therefore, a network is connected to the Internet if and only if it buys transit, or peers with every other network which also does not purchase transit (which together constitute a "default free zone" or "DFZ").
Public peering is done at Internet exchange points (IXPs), while private peering can be done with direct links between networks.
Motivations for peering
Peering involves two networks coming together to exchange traffic with each other freely, and for mutual benefit. This 'mutual benefit' is most often the motivation behind peering, which is often described solely by "reduced costs for transit services". Other less tangible motivations can include:
Increased redundancy (by reducing dependence on one or more transit providers).
Increased capacity for extremely large amounts of traffic (distributing traffic across many networks).
Increased routing control over one's traffic.
Improved performance (attempting to bypass potential bottlenecks with a "direct" path).
Improved perception of one's network (being able to claim a "higher tier").
Ease of requesting for emergency aid (from friendly peers).
Physical interconnections for peering
The physical interconnections used for peering are categorized into two types:
Public peering – Interconnection utilizing a multi-party shared switch fabric such as an Ethernet switch.
Private peering – Interconnection utilizing a point-to-point link between two parties.
Public peering
Public peering is accomplished across a Layer 2 access technology, generally called a shared fabric. At these locations, multiple carriers interconnect with one or more other carriers across a single physical port. Historically, public peering locations were known as network access points (NAPs). Today they are most often called exchange points or Internet exchanges ("IXP"). Many of the largest exchange points in the world can have hundreds of participants, and some span multiple buildings and colocation facilities across a city.
Since public peering allows networks interested in peering to interconnect with many other networks through a single port, it is often considered to offer "less capacity" than private peering, but to a larger number of networks. Many smaller networks, or networks which are just beginning to peer, find that public peering exchange points provide an excellent way to meet and interconnect with other networks which may be open to peering with them. Some larger networks utilize public peering as a way to aggregate a large number of "smaller peers", or as a location for conducting low-cost "trial peering" without the expense of provisioning private peering on a temporary basis, while other larger networks are not willing to participate at public exchanges at all.
A few exchange points, particularly in the United States, are operated by commercial carrier-neutral third parties, which are critical for achieving cost-effective data center connectivity.
Private peering
Private peering is the direct interconnection between only two networks, across a Layer 1 or 2 medium that offers dedicated capacity that is not shared by any other parties. Early in the history of the Internet, many private peers occurred across "telco" provisioned SONET circuits between individual carrier-owned facilities. Today, most private interconnections occur at carrier hotels or carrier neutral colocation facilities, where a direct crossconnect can be provisioned between participants within the same building, usually for a much lower cost than telco circuits.
Most of the traffic on the Internet, especially traffic between the largest networks, occurs via private peering. However, because of the resources required to provision each private peer, many networks are unwilling to provide private peering to "small" networks, or to "new" networks which have not yet proven that they will provide a mutual benefit.
Peering agreement
Throughout the history of the Internet, there have been a spectrum of kinds of agreements between peers, ranging from handshake agreements to written contracts as required by one or more parties. Such agreements set forth the details of how traffic is to be exchanged, along with a list of expected activities which may be necessary to maintain the peering relationship, a list of activities which may be considered abusive and result in termination of the relationship, and details concerning how the relationship can be terminated. Detailed contracts of this type are typically used between the largest ISPs, as well as the ones operating in the most heavily regulated economies. As of 2011, such contracts account for less than 0.5% of all peering agreements.
Depeering
By definition, peering is the voluntary and free exchange of traffic between two networks, for mutual benefit. If one or both networks believes that there is no longer a mutual benefit, they may decide to cease peering: this is known as depeering. Some of the reasons why one network may wish to depeer another include:
A desire that the other network pay settlement, either in exchange for continued peering or for transit services.
A belief that the other network is "profiting unduly" from the no-settlement interconnection.
Concern over traffic ratios, which is related to the fair sharing of cost for the interconnection.
A desire to peer with the upstream transit provider of the peered network.
Abuse of the interconnection by the other party, such as pointing default or utilizing the peer for transit.
Instability of the peered network, repeated routing leaks, lack of response to network abuse issues, etc.
The inability or unwillingness of the peered network to provision additional capacity for peering.
The belief that the peered network is unduly peering with one's customers.
Various external political factors (including personal conflicts between individuals at each network).
In some situations, networks which are being depeered have been known to attempt to fight to keep the peering by intentionally breaking the connectivity between the two networks when the peer is removed, either through a deliberate act or an act of omission. The goal is to force the depeering network to have so many customer complaints that they are willing to restore peering. Examples of this include forcing traffic via a path that does not have enough capacity to handle the load, or intentionally blocking alternate routes to or from the other network. Some notable examples of these situations have included:
BBN Planet vs Exodus Communications
PSINet vs Cable & Wireless
AOL Transit Data Network (ATDN) vs Cogent Communications
France Telecom vs Cogent Communications
France Telecom (Wanadoo) vs Proxad (Free)
Level 3 Communications vs XO Communications
Level 3 Communications vs Cogent Communications
Telecom/Telefónica/Impsat/Prima vs CABASE (Argentina)
Cogent Communications vs TeliaSonera
Sprint-Nextel vs Cogent Communications
SFR vs OVH
The French ISP 'Free' vs YouTube
Modern peering
Donut peering model
The "donut peering" model describes the intensive interconnection of small and medium-sized regional networks that make up much of the Internet. Traffic between these regional networks can be modeled as a toroid, with a core "donut hole" that is poorly interconnected to the networks around it.
As detailed above, some carriers attempted to form a cartel of self-described Tier 1 networks, nominally refusing to peer with any networks outside the oligopoly. Seeking to reduce transit costs, connections between regional networks bypass those "core" networks. Data takes a more direct path, reducing latency and packet loss. This also improves resiliency between consumers and content providers via multiple connections in many locations around the world, in particular during business disputes between the core transit providers.
Multilateral peering
The majority of BGP AS-AS adjacencies are the product of multilateral peering agreements, or MLPAs. In multilateral peering, an unlimited number of parties agree to exchange traffic on common terms, using a single agreement to which they each accede. The multilateral peering is typically technically instantiated in a route server or route reflector (which differ from looking glasses in that they serve routes back out to participants, rather than just listening to inbound routes) to redistribute routes via a BGP hub-and-spoke topology, rather than a partial-mesh topology. The two primary criticisms of multilateral peering are that it breaks the shared fate of the forwarding and routing planes, since the layer-2 connection between two participants could hypothetically fail while their layer-2 connections with the route server remained up, and that they force all participants to treat each other with the same, undifferentiated, routing policy. The primary benefit of multilateral peering is that it minimizes configuration for each peer, while maximizing the efficiency with which new peers can begin contributing routes to the exchange. While optional multilateral peering agreements and route servers are now widely acknowledged to be a good practice, mandatory multilateral peering agreements (MMLPAs) have long been agreed to not be a good practice.
Peering locations
The modern Internet operates with significantly more peering locations than at any time in the past, resulting in improved performance and better routing for the majority of the traffic on the Internet. However, in the interests of reducing costs and improving efficiency, most networks have attempted to standardize on relatively few locations within these individual regions where they will be able to quickly and efficiently interconnect with their peering partners.
Exchange points
As of 2021, the largest exchange points in the world are Ponto de Troca de Tráfego Metro São Paulo, in São Paulo, with 2,289 peering networks; OpenIXP in Jakarta, with 1,097 peering networks; and DE-CIX in Frankfurt, with 1,050 peering networks. The United States, with a historically larger focus on private peering and commercial public peering, has much less traffic visible on public peering switch-fabrics compared to other regions that are dominated by non-profit membership exchange points. Collectively, the many exchange points operated by Equinix are generally considered to be the largest, though traffic figures are not generally published. Other important but smaller exchange points include AMS-IX in Amsterdam, LINX and LONAP in London, and NYIIX in New York.
URLs to some public traffic statistics of exchange points include:
AMS-IX
DE-CIX
LINX
MSK-IX
TORIX
NYIIX
LAIIX
TOP-IX
Netnod
Mix Milano
ix.br SP
SFMIX
Peering and BGP
A great deal of the complexity in the BGP routing protocol exists to aid the enforcement and fine-tuning of peering and transit agreements. BGP allows operators to define a policy that determines where traffic is routed. Three things are commonly used to determine routing: local-preference, multi exit discriminators (MEDs) and AS-Path. Local-preference is used internally within a network to differentiate classes of networks. For example, a particular network will have a higher preference set on internal and customer advertisements. Settlement free peering is then configured to be preferred over paid IP transit.
Networks that speak BGP to each other can engage in multi exit discriminator exchange with each other, although most do not. When networks interconnect in several locations, MEDs can be used to reference that network's interior gateway protocol cost. This results in both networks sharing the burden of transporting each other's traffic on their own network (or cold potato). Hot-potato or nearest-exit routing, which is typically the normal behavior on the Internet, is where traffic destined to another network is delivered to the closest interconnection point.
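As a simplified illustration of how local-preference dominates path selection, the sketch below implements a toy best-path comparison (highest local-preference, then shortest AS-path, then lowest MED); the preference values, prefixes and AS numbers are assumed examples, and real BGP implementations apply a longer decision process.

```python
# Toy sketch of policy-driven BGP best-path selection.
from dataclasses import dataclass, field

@dataclass
class Route:
    prefix: str
    local_pref: int                       # higher wins; e.g. customer=300, peer=200, transit=100
    as_path: list = field(default_factory=list)
    med: int = 0                          # lower wins; in practice only compared between
                                          # routes learned from the same neighbouring AS

def best_path(routes):
    # Comparison order mirrors the simplified decision process described above.
    return min(routes, key=lambda r: (-r.local_pref, len(r.as_path), r.med))

candidates = [
    Route("203.0.113.0/24", local_pref=100, as_path=[64500, 64501, 64510]),  # via paid transit
    Route("203.0.113.0/24", local_pref=200, as_path=[64502, 64510]),         # via settlement-free peer
]
print(best_path(candidates))  # the peer-learned route wins on local-preference
```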
Law and policy
Internet interconnection is not regulated in the same way that public telephone network interconnection is regulated. Nevertheless, Internet interconnection has been the subject of several areas of federal policy in the United States. Perhaps the most dramatic example of this is the attempted MCI Worldcom/Sprint merger. In this case, the Department of Justice blocked the merger specifically because of the impact of the merger on the Internet backbone market (thereby requiring MCI to divest itself of its successful "internetMCI" business to gain approval). In 2001, the Federal Communications Commission's advisory committee, the Network Reliability and Interoperability Council recommended that Internet backbones publish their peering policies, something that they had been hesitant to do beforehand. The FCC has also reviewed competition in the backbone market in its Section 706 proceedings which review whether advanced telecommunications are being provided to all Americans in a reasonable and timely manner.
Finally, Internet interconnection has become an issue in the international arena under something known as the International Charging Arrangements for Internet Services (ICAIS). In the ICAIS debate, countries underserved by Internet backbones have complained that it is unfair that they must pay the full cost of connecting to an Internet exchange point in a different country, frequently the United States. These advocates argue that Internet interconnection should work like international telephone interconnection, with each party paying half of the cost. Those who argue against ICAIS point out that much of the problem would be solved by building local exchange points. A significant amount of the traffic, it is argued, that is brought to the US and exchanged then leaves the US, using US exchange points as switching offices but not terminating in the US. In some worst-case scenarios, traffic from one side of a street is brought all the way to a distant exchange point in a foreign country, exchanged, and then returned to another side of the street. Countries with liberalized telecommunications and open markets, where competition between backbone providers occurs, tend to oppose ICAIS.
See also
Autonomous system
Default-free zone
Interconnect agreement
Internet traffic engineering
Net neutrality
North American Network Operators' Group (NANOG)
References
External links
PeeringDB: A free database of peering locations and participants
The peering Playbook (PDF): Strategies of peering networks
Example Tier 1 Peering Requirements: AT&T (AS7018)
Example Tier 1 Peering Requirements: AOL Transit Data Network (AS1668)
Example Tier 2 Peering Requirements: Entanet (AS8468)
Cybertelecom :: Backbones – Federal Internet Law and Policy
How the 'Net works: an introduction into Peering and Transit, Ars Technica
Internet architecture
Net neutrality | Peering | [
"Technology",
"Engineering"
] | 3,955 | [
"Net neutrality",
"Internet architecture",
"IT infrastructure",
"Computer networks engineering"
] |
47,014 | https://en.wikipedia.org/wiki/Metric%20time | Metric time is the measure of time intervals using the metric system. The modern SI system defines the second as the base unit of time, and forms multiples and submultiples with metric prefixes such as kiloseconds and milliseconds. Other units of time – minute, hour, and day – are accepted for use with SI, but are not part of it. Metric time is a measure of time intervals, while decimal time is a means of recording time of day.
History
The second is derived from the sexagesimal system, which originated with the Sumerians and Babylonians. This system divides a base unit into sixty minutes, each minute into sixty seconds, and each second into sixty tierces. The word "minute" comes from the Latin pars minuta prima, meaning "first small part", and "second" from pars minuta secunda or "second small part". Angular measure also uses sexagesimal units; there, it is the degree that is subdivided into minutes and seconds, while in time, it is the hour.
In 1790, French diplomat Charles Maurice de Talleyrand-Périgord proposed that the fundamental unit of length for the metric system should be the length of a pendulum with a one-second period, measured at sea level on the 45th parallel (50 grades in the new angular measures), thus basing the metric system on the value of the second. A Commission of Weights and Measures was formed within the French Academy of Sciences to develop the system. The commission rejected the seconds-pendulum definition of the metre the following year because the second of time was an arbitrary period equal to 1/86,400 day, rather than a decimal fraction of a natural unit. Instead, the metre would be defined as a decimal fraction of the length of the Paris Meridian between the equator and the North Pole.
The commission initially proposed the decimal time units later enacted as part of the new Republican calendar. In January, 1791, Jean-Charles de Borda commissioned Louis Berthoud to manufacture a decimal chronometer displaying these units. On March 28, 1794, the commission's president, Joseph Louis Lagrange, proposed using the day (French jour) as the base unit of time, with divisions déci-jour and centi-jour, and suggested representing 4 déci-jours and 5 centi-jours as "4,5", "4/5", or just "45". The final system, as introduced in 1795, included units for length, area, dry volume, liquid capacity, weight or mass, and currency, but not time. Decimal time of day had been introduced in France two years earlier, but mandatory use was suspended at the same time the metric system was inaugurated, and did not follow the metric pattern of a base unit and prefixed units.
Base units equivalent to decimal divisions of the day, such as 1/10, 1/100, 1/1,000, or 1/100,000 day, or other divisions of the day, such as 1/20 or 1/40 day, have also been proposed, with various names. Such alternative units did not gain any notable acceptance. In China, during the Song dynasty, a day was divided into smaller units, called kè. One kè was usually defined as 1/100 of a day until 1628, though there were short periods before then where days had 96, 108 or 120 kè. A kè is about 14.4 minutes, or 14 minutes 24 seconds. In the 19th century, Joseph Charles François de Rey-Pailhade endorsed Lagrange’s proposal of using centijours, abbreviated cé, and divided it into 10 decicés, 100 centicés, 1,000 millicés, and 10,000 dimicés.
James Clerk Maxwell and Elihu Thomson (through the British Association for the Advancement of Science, or BAAS) introduced the Centimetre gram second system of units in 1874 to derive electric and magnetic metric units, following the recommendation of Carl Friedrich Gauss in 1832.
In 1897, the Commission de décimalisation du temps was created by the French Bureau of Longitude, with the mathematician Henri Poincaré as secretary. The commission proposed making the standard hour the base unit of metric time, but the proposal did not gain acceptance and was eventually abandoned.
When the modern SI system was defined at the 10th General Conference on Weights and Measures (CGPM) in 1954, the second, then defined as 1/86,400 of a mean solar day, was made one of the system's base units. Because the Earth's rotation is slowly decelerating at an irregular rate and was thus unsuitable as a reference point for precise measurements, the SI second was later redefined more precisely as the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom. The international standard atomic clocks use caesium-133 measurements as their main benchmark.
In computing
In computing, at least internally, metric time gained widespread use for ease of computation. Unix time gives date and time as the number of seconds since January 1, 1970, and Microsoft's NTFS FILETIME as multiples of 100 ns since January 1, 1601. VAX/VMS uses the number of 100 ns since November 17, 1858, and RISC OS the number of centiseconds since January 1, 1900. Microsoft Excel uses number of days (with decimals, floating point) since January 1, 1900.
All these systems present time for the user using traditional units. None of these systems is strictly linear, as they each have discontinuities at leap seconds.
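As a small illustration of this pattern, the Python snippet below reads a seconds-based system clock and converts it to traditional calendar units only for display; it assumes a POSIX-style (Unix time) clock.

```python
# Internally, these systems keep time as a single metric count of seconds or
# subseconds; conversion to hours, minutes and dates happens at display time.
import time
from datetime import datetime, timezone

now = time.time()  # seconds since 1970-01-01 00:00:00 UTC (Unix time), as a float
print(f"{now:.3f} seconds since the Unix epoch")
print(datetime.fromtimestamp(now, tz=timezone.utc))  # rendered in traditional units
```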
Prefixes
Metric prefixes for subdivisions of a second are commonly used in science and technology. Milliseconds and microseconds are particularly common. Prefixes for multiples of a second, such as the kilosecond and megasecond, are rarely used.
See also
List of unusual units of measurement#Time, under which prefixed multiples of the second are included
Soviet calendar
References
External links
Metric unit of time (second) Official text of SI brochure from International Bureau of Weights and Measures
Metric Time? University of Illinois Physics Department
International System of Units
Time measurement systems
Decimal time | Metric time | [
"Physics"
] | 1,289 | [
"Spacetime",
"Time measurement systems",
"Physical quantities",
"Time"
] |
47,096 | https://en.wikipedia.org/wiki/Friedrich%20Engels | Friedrich Engels ( ; ; 28 November 1820 – 5 August 1895; in English also spelled as "Frederick Engels") was a German philosopher, political theorist, historian, journalist, and revolutionary socialist. He was also a businessman and Karl Marx's lifelong friend and closest collaborator, serving as a leading authority on Marxism.
Engels, the son of a wealthy textile manufacturer, met Marx in 1844. They jointly authored works including The Holy Family (1844), The German Ideology (written 1846), and The Communist Manifesto (1848), and worked as political organisers and activists in the Communist League and First International. Engels also supported Marx financially for much of his life, enabling him to continue writing after he moved to London in 1849. After Marx's death in 1883, Engels edited from manuscript and completed Volumes II and III of his Das Kapital (1885 and 1894).
Engels wrote several important works of his own, including The Condition of the Working Class in England (1845), Anti-Dühring (1878), Dialectics of Nature (1878–1882), The Origin of the Family, Private Property and the State (1884), and Ludwig Feuerbach and the End of Classical German Philosophy (1886).
Life and work
Early life
Friedrich Engels was born on 28 November 1820 in Barmen, Jülich-Cleves-Berg, Prussia (now Wuppertal, Germany), as the eldest son of Friedrich Engels Sr. (1796–1860) and of Elisabeth "Elise" Franziska Mauritia van Haar (1797–1873). The wealthy Engels family owned large cotton-textile mills in Barmen and Salford, England, both expanding industrial cities. Friedrich's parents were devout Calvinists and raised their children accordingly—he was baptised in the Calvinist Reformed Evangelical Parish of Elberfeld.
At the age of 13, Engels attended secondary school (Gymnasium) in the adjacent city of Elberfeld but had to leave at 17 due to pressure from his father, who wanted him to become a businessman and work as a mercantile apprentice in the family firm. After a year in Barmen, the young Engels was, in 1838, sent by his father to undertake an apprenticeship at a trading house in Bremen. His parents expected that he would follow his father into a career in the family business. Their son's revolutionary activities disappointed them. It would be some years before he joined the family firm.
While at Bremen, Engels began reading the philosophy of Georg Wilhelm Friedrich Hegel, whose teachings dominated German philosophy at that time. In September 1838 he published his first work, a poem entitled "The Bedouin", in the Bremisches Conversationsblatt No. 40. He also engaged in other literary work and began writing newspaper articles critiquing the societal ills of industrialisation. He wrote under the pseudonym "Friedrich Oswald" to avoid connecting his family with his provocative writings.
In 1841, Engels performed his military service in the Prussian Army as a member of the Household Artillery. Assigned to Berlin, he attended university lectures at the University of Berlin and began to associate with groups of Young Hegelians. He anonymously published articles in the Rheinische Zeitung, exposing the poor employment and living conditions endured by factory workers. The editor of the Rheinische Zeitung was Karl Marx, but Engels would not meet Marx until late November 1842. Engels acknowledged the influence of German philosophy on his intellectual development throughout his career. In 1840, he also wrote: "To get the most out of life you must be active, you must live and you must have the courage to taste the thrill of being young."
Engels developed atheistic beliefs and his relationship with his parents became strained.
Manchester and Salford
In 1842, his parents sent the 22-year-old Engels to Salford, England, a manufacturing centre where industrialisation was on the rise. He was to work in Weaste, Salford, in the offices of Ermen and Engels's Victoria Mill, which made sewing threads. Engels's father thought that working at the Salford firm might make his son reconsider some of his radical opinions. On his way to Salford and Manchester, Engels visited the office of the Rheinische Zeitung in Cologne and met Karl Marx for the first time. Initially they were not impressed with each other. Marx mistakenly thought that Engels was still associated with the Young Hegelians of Berlin, with whom Marx had just broken off ties.
In Manchester, Engels met Mary Burns, a fierce young Irish woman with radical opinions who worked in the Engels factory. They began a relationship that lasted 20 years until her death in 1863. The two never married, as both were against the institution of marriage. While Engels regarded stable monogamy as a virtue, he considered the current state and church-regulated marriage as a form of class oppression. Burns guided Engels through Manchester and Salford, showing him the worst districts for his research.
Engels was often described as a man with a very strong libido and not much restraint. He had many lovers and despite his condemnation of prostitution as "exploitation of the proletariat by the bourgeoisie" he also occasionally paid for sex. In 1846 he wrote to Marx: "If I had an income of 5000 francs I would do nothing but work and amuse myself with women until I went to pieces. If there were no Frenchwomen, life wouldn't be worth living. But so long as there are grisettes, well and good!" At a Workers' Union meeting in Brussels, Engels's friend turned rival Moses Hess accused Engels of raping his wife Sibylle. Engels vehemently denied the charge, writing in a letter to Marx that Sibylle's "rage with me is unrequited love, pure and simple."
While in Manchester between October and November 1843, Engels wrote his first critique of political economy, entitled "Umrisse zu einer Kritik der Nationalökonomie" (Outlines of a Critique of Political Economy). Engels sent the article to Paris, where Marx and Arnold Ruge published it in the Deutsch–Französische Jahrbücher in 1844.
Engels observed the slums of Manchester in close detail, and took notes of its horrors, such as child labour, the despoiled environment, and overworked and impoverished labourers. He sent a trilogy of articles to Marx. These were published in the Rheinische Zeitung and then in the Deutsch–Französische Jahrbücher, chronicling the conditions among the working class in Manchester. He later collected these articles for his influential first book, The Condition of the Working Class in England (1845). Written between September 1844 and March 1845, the book was published in German in 1845. In the book, Engels described the "grim future of capitalism and the industrial age", noting the details of the squalor in which the working people lived. The book was published in English in 1887. Archival resources contemporary to Engels's stay in Manchester shed light on some of the conditions he describes, including a manuscript (MMM/10/1) held by special collections at the University of Manchester. This recounts cases seen in the Manchester Royal Infirmary, where industrial accidents dominated, and which resonate with Engels's comments on the disfigured persons seen walking round Manchester as a result of such accidents.
Engels continued his involvement with radical journalism and politics. He frequented areas popular among members of the English labour and Chartist movements, whom he met. He also wrote for several journals, including The Northern Star, Robert Owen's New Moral World, and the Democratic Review newspaper.
Paris
Engels returned to Germany in 1844. On the way, he stopped in Paris to meet Karl Marx, with whom he had an earlier correspondence. Marx had been living in Paris since late October 1843, after the Rheinische Zeitung was banned in March 1843 by the Prussian government. Prior to meeting Marx, Engels had become established as a fully developed materialist and scientific socialist, independent of Marx's philosophical development.
In Paris, Marx and Arnold Ruge were publishing the Deutsch–Französische Jahrbücher, of which only one issue appeared (in 1844), and in which Engels wrote Outlines of a Critique of Political Economy. Engels met Marx for a second time at the Café de la Régence on the Place du Palais, on 28 August 1844. The two quickly became close friends and remained so their entire lives. Marx had read and was impressed by Engels's articles on The Condition of the Working Class in England in which he had written that "[a] class which bears all the disadvantages of the social order without enjoying its advantages, [...] Who can demand that such a class respect this social order?" Marx adopted Engels's idea that the working class would lead the revolution against the bourgeoisie as society advanced toward socialism, and incorporated this as part of his own philosophy.
Engels stayed in Paris to help Marx write The Holy Family. It was an attack on the Young Hegelians and the Bauer brothers and was published in late February 1845. Engels's earliest contribution to Marx's work was writing for the Deutsch–Französische Jahrbücher, edited by both Marx and Arnold Ruge, in Paris in 1844. During this time in Paris, both Marx and Engels began their association with and then joined the secret revolutionary society called the League of the Just. The League of the Just had been formed in 1837 in France to promote an egalitarian society through the overthrow of the existing governments. In 1839, the League participated in the 1839 rebellion fomented by the French utopian revolutionary socialist, Louis Auguste Blanqui; as Ruge remained a Young Hegelian in his belief, Marx and Ruge soon split and Ruge left the Deutsch–Französische Jahrbücher. Following the split, Marx remained friendly enough with Ruge that he sent Ruge a warning on 15 January 1845 that the Paris police were going to execute orders against him, Marx and others at the Deutsch–Französische Jahrbücher requiring all to leave Paris within 24 hours. Marx was expelled from Paris by French authorities on 3 February 1845 and settled in Brussels with his wife and one daughter. Having left Paris on 6 September 1844, Engels returned to his home in Barmen to work on his The Condition of the Working Class in England, which was published in late May 1845. Even before the publication of his book, Engels moved to Brussels in late April 1845, to collaborate with Marx on another book, German Ideology. While living in Barmen, Engels began making contact with Socialists in the Rhineland to raise money for Marx's publication efforts in Brussels; these contacts became more important as both Marx and Engels began political organising for the Social Democratic Workers' Party of Germany.
Brussels
The nation of Belgium, founded in 1830, had one of the most liberal constitutions in Europe and functioned as refuge for progressives from other countries. From 1845 to 1848, Engels and Marx lived in Brussels, spending much of their time organising the city's German workers. Shortly after their arrival, they contacted and joined the underground German Communist League. The Communist League was the successor organisation to the League of the Just which had been founded in 1837 but had recently disbanded. Influenced by Wilhelm Weitling, the Communist League was an international society of proletarian revolutionaries with branches in various European cities.
The Communist League also had contacts with the underground conspiratorial organisation of Louis Auguste Blanqui. Many of Marx's and Engels's current friends became members of the Communist League. Old friends like Georg Friedrich Herwegh, who had worked with Marx on the Rheinische Zeitung, Heinrich Heine, the famous poet, a young physician by the name of Roland Daniels, Heinrich Bürgers and August Herman Ewerbeck, all maintained their contacts with Marx and Engels in Brussels. Georg Weerth, who had become a friend of Engels in England in 1843, now settled in Brussels. Carl Wallau and Stephen Born (real name Simon Buttermilch) were both German immigrant typesetters who settled in Brussels to help Marx and Engels with their Communist League work. Marx and Engels made many new important contacts through the Communist League. One of the first was Wilhelm Wolff, who soon became one of Marx's and Engels's closest collaborators. Others were Joseph Weydemeyer and Ferdinand Freiligrath, a famous revolutionary poet. While most of the associates of Marx and Engels were German immigrants living in Brussels, some were Belgians. Philippe Gigot, a Belgian philosopher, and Victor Tedesco, a lawyer from Liège, both joined the Communist League. Joachim Lelewel, a prominent Polish historian and participant in the Polish uprising of 1830–1831, was also a frequent associate.
The Communist League commissioned Marx and Engels to write a pamphlet explaining the principles of communism. This became the Manifesto of the Communist Party, better known as The Communist Manifesto. It was first published on 21 February 1848 and ends with the world-famous phrase: "Let the ruling classes tremble at a Communistic revolution. The proletarians have nothing to lose but their chains. They have a world to win. Working Men of All Countries, Unite!"
Engels's mother wrote in a letter to him of her concerns, commenting that he had "really gone too far" and "begged" him "to proceed no further". She further stated: "You have paid more heed to other people, to strangers, and have taken no account of your mother's pleas. God alone knows what I have felt and suffered of late. I was trembling when I picked up the newspaper and saw therein that a warrant was out for my son's arrest."
Return to Prussia
There was a revolution in France in 1848 that soon spread to other Western European countries. These events caused Engels and Marx to return to Cologne in their homeland of Prussia. While living there, they created and served as editors for a new daily newspaper called the Neue Rheinische Zeitung. Besides Marx and Engels, other frequent contributors to the Neue Rheinische Zeitung included Karl Schapper, Wilhelm Wolff, Ernst Dronke, Peter Nothjung, Heinrich Bürgers, Ferdinand Wolff and Carl Cramer. Engels's mother gave unwitting witness to the effect of the Neue Rheinische Zeitung on the revolutionary uprising in Cologne in 1848. Criticising his involvement in the uprising, she stated in a 5 December 1848 letter to Friedrich that "nobody, ourselves included, doubted that the meetings at which you and your friends spoke, and also the language of (Neue) Rh.Z. were largely the cause of these disturbances."
Engels's parents hoped that young Engels would "decide to turn to activities other than those which you have been pursuing in recent years and which have caused so much distress". At this point, his parents felt the only hope for their son was to emigrate to America and start his life over. They told him that he should do this or he would "cease to receive money from us"; however, the problem in the relationship between Engels and his parents was worked out without Engels having to leave England or being cut off from financial assistance from his parents. In July 1851, Engels's father arrived to visit him in Manchester, England. During the visit, his father arranged for Engels to meet Peter Ermen of the office of Ermen & Engels, to move to Liverpool and to take over sole management of the office in Manchester.
In 1849, Engels travelled to Bavaria for the Baden and Palatinate revolutionary uprising, an even more dangerous involvement. Starting with an article called "The Magyar Struggle", written on 8 January 1849, Engels himself began a series of reports on the Revolution and War for Independence of the newly founded Hungarian Republic. Engels's articles on the Hungarian Republic became a regular feature in the Neue Rheinische Zeitung under the heading "From the Theatre of War"; however, the newspaper was suppressed during the June 1849 Prussian coup d'état. After the coup, Marx lost his Prussian citizenship, was deported and fled to Paris, then London. Engels stayed in Prussia and took part in an armed uprising in South Germany as an aide-de-camp in the volunteer corps of August Willich. Engels also took two cases of rifle cartridges with him when he went to join the uprising in Elberfeld on 10 May 1849. Later, when Prussian troops came to Kaiserslautern to suppress an uprising there, Engels joined a group of volunteers under the command of August Willich, who were going to fight the Prussian troops. When the uprising was crushed, Engels was one of the last members of Willich's volunteers to escape by crossing the Swiss border. Marx and others became concerned for Engels's life until they heard from him.
Engels travelled through Switzerland as a refugee and eventually made it to safety in England. On 6 June 1849 Prussian authorities issued an arrest warrant for him which contained a physical description: "height: 5 feet 6 inches; hair: blond; forehead: smooth; eyebrows: blond; eyes: blue; nose and mouth: well proportioned; beard: reddish; chin: oval; face: oval; complexion: healthy; figure: slender. Special characteristics: speaks very rapidly and is short-sighted". As to his "short-sightedness", Engels admitted as much in a letter written to Joseph Weydemeyer on 19 June 1851 in which he said he was not worried about being selected for the Prussian military because of "my eye trouble, as I have now found out once and for all which renders me completely unfit for active service of any sort". Once he was safe in Switzerland, Engels began to write down all his memories of the recent military campaign against the Prussians. This writing eventually became the article published as "The Campaign for the German Imperial Constitution".
Back in Britain
To help Marx with the Neue Rheinische Zeitung Politisch-ökonomische Revue, the new publishing effort in London, Engels sought ways to escape the continent and travel to London. On 5 October 1849, Engels arrived in the Italian port city of Genoa. There, Engels booked passage on the English schooner Cornish Diamond, under the command of a Captain Stevens. The voyage across the western Mediterranean and around the Iberian Peninsula by sailing schooner took about five weeks. Finally, the Cornish Diamond sailed up the River Thames to London on 10 November 1849 with Engels on board.
Upon his return to Britain, Engels re-entered the Manchester company in which his father held shares in order to support Marx financially as he worked on Das Kapital. Unlike his first period in England (1843), Engels was now under police surveillance. He had "official" and "unofficial" homes all over Salford, Weaste and other inner-city Manchester districts, where he lived with Mary Burns under false names to confuse the police. Little more is known, as Engels destroyed over 1,500 letters between himself and Marx after the latter's death so as to conceal the details of their secretive lifestyle.
Despite his work at the mill, Engels found time to write a book on Martin Luther, the Protestant Reformation and the 1525 revolutionary war of the peasants, entitled The Peasant War in Germany. He also wrote a number of newspaper articles including "The Campaign for the German Imperial Constitution" which he finished in February 1850 and "On the Slogan of the Abolition of the State and the German 'Friends of Anarchy'" written in October 1850. In April 1851, he wrote the pamphlet "Conditions and Prospects of a War of the Holy Alliance against France".
Marx and Engels denounced Louis Bonaparte when he carried out a coup against the French government and made himself president for life on 2 December 1851. Engels wrote to Marx on 3 December 1851, characterising the coup as "comical" and referring to it as occurring on "the 18th Brumaire", the date of Napoleon I's coup of 1799 according to the French Republican Calendar. Marx later incorporated this comically ironic characterisation of the coup into his essay about it. He called the essay The Eighteenth Brumaire of Louis Bonaparte, using Engels's suggested characterisation. Marx also borrowed Engels's characterisation of Hegel's notion of the World Spirit that history occurred twice, "once as a tragedy and secondly as a farce", in the first paragraph of his new essay.
Meanwhile, Engels started working at the mill owned by his father in Manchester as an office clerk, the same position he had held in his teens while in Germany, where his father's company was based. Engels worked his way up to become a partner of the firm in 1864. Five years later, Engels retired from the business and could focus more on his studies. At this time, Marx was living in London but they were able to exchange ideas through daily correspondence. One of the ideas that Engels and Marx contemplated was the possibility and character of a potential revolution in Russia. As early as April 1853, Engels and Marx anticipated an "aristocratic-bourgeois revolution in Russia" which would begin in St. Petersburg with a resulting civil war in the interior. The model for this type of aristocratic-bourgeois revolution in Russia against the autocratic Tsarist government in favour of a constitutional government had been provided by the Decembrist Revolt of 1825.
Despite the unsuccessful revolt against the Tsarist government in favour of a constitutional government, both Engels and Marx anticipated a bourgeois revolution in Russia would occur, which would bring about a bourgeois stage in Russian development to precede a communist stage. By 1881, both Marx and Engels began to contemplate a course of development in Russia that would lead directly to the communist stage without the intervening bourgeois stage. This analysis was based on what Marx and Engels saw as the exceptional characteristics of the Russian village commune or obshchina. While doubt was cast on this theory by Georgi Plekhanov, Plekhanov's reasoning was based on the first edition of Das Kapital (1867) which predated Marx's interest in Russian peasant communes by two years. Later editions of the text demonstrate Marx's sympathy for the argument of Nikolay Chernyshevsky, that it should be possible to establish socialism in Russia without an intermediary bourgeois stage provided that the peasant commune were used as the basis for the transition.
In 1870, Engels moved to London where he and Marx lived until Marx's death in 1883. Engels's London home from 1870 to 1894 was at 122 Regent's Park Road. In October 1894 he moved to 41 Regent's Park Road, Primrose Hill, NW1, where he died the following year.
Marx's first London residence was a cramped flat at 28 Dean Street, Soho. From 1856, he lived at 9 Grafton Terrace, Kentish Town, and then in a tenement at 41 Maitland Park Road in Belsize Park from 1875 until his death in March 1883.
Mary Burns died suddenly of heart disease in 1863, after which Engels became close with her younger sister Lydia ("Lizzie"). They lived openly as a couple in London and married on 11 September 1878, hours before Lizzie's death.
Later years
Later in their lives, Marx and Engels came to argue that in some countries workers might be able to achieve their aims through peaceful means. Following this, Engels argued that socialists were evolutionists, although they remained committed to social revolution. Similarly, Tristram Hunt argues that Engels was sceptical of "top-down revolutions" and later in life advocated "a peaceful, democratic road to socialism". Engels also wrote in his introduction to the 1891 edition of Marx's The Class Struggles in France that "rebellion in the old style, street fighting with barricades, which decided the issue everywhere up to 1848, was to a considerable extent obsolete", although some such as David W. Lowell emphasised the statement's cautionary and tactical meaning, arguing that "Engels questions only rebellion 'in the old style', that is, insurrection: he does not renounce revolution. The reason for Engels' caution is clear: he candidly admits that ultimate victory for any insurrection is rare, simply on military and tactical grounds".
In his introduction to the 1895 edition of Marx's The Class Struggles in France, Engels attempted to resolve the division between reformists and revolutionaries in the Marxist movement by declaring that he was in favour of short-term tactics of electoral politics that included gradualist and evolutionary socialist measures while maintaining his belief that revolutionary seizure of power by the proletariat should remain a goal. In spite of this attempt by Engels to merge gradualism and revolution, his effort only diluted the distinction between gradualism and revolution and had the effect of strengthening the position of the revisionists. Engels's statements in the French newspaper Le Figaro, in which he wrote that "revolution" and the "so-called socialist society" were not fixed concepts, but rather constantly changing social phenomena, and argued that this made "us socialists all evolutionists", increased the public perception that Engels was gravitating towards evolutionary socialism. Engels also argued that it would be "suicidal" to talk about a revolutionary seizure of power at a time when the historical circumstances favoured a parliamentary road to power that he predicted could bring "social democracy into power as early as 1898". Engels's stance of openly accepting gradualist, evolutionary and parliamentary tactics while claiming that the historical circumstances did not favour revolution caused confusion. Marxist revisionist Eduard Bernstein interpreted this as indicating that Engels was moving towards accepting parliamentary reformist and gradualist stances, but he ignored that Engels's stances were tactical responses to the particular circumstances and that Engels was still committed to revolutionary socialism. Engels was deeply distressed when he discovered that his introduction to a new edition of The Class Struggles in France had been edited by Bernstein and orthodox Marxist Karl Kautsky in a manner which left the impression that he had become a proponent of a peaceful road to socialism. On 1 April 1895, four months before his death, Engels responded to Kautsky:
I was amazed to see today in the Vorwärts an excerpt from my 'Introduction' that had been printed without my knowledge and tricked out in such a way as to present me as a peace-loving proponent of legality [at all costs]. Which is all the more reason why I should like it to appear in its entirety in the Neue Zeit in order that this disgraceful impression may be erased. I shall leave Liebknecht in no doubt as to what I think about it and the same applies to those who, irrespective of who they may be, gave him this opportunity of perverting my views and, what's more, without so much as a word to me about it.
After Marx's death, Engels devoted much of his remaining years to editing Marx's unfinished volumes of Das Kapital. He is credited with preventing the work from being lost due to Marx's "incredibly difficult handwriting". He had to provide it with structure and develop its lines of thought, so that the second and third volumes of Capital are effectively joint in authorship and their content (except for the extensive forewords added by Engels) cannot be attributed exclusively to either author. Some scholars, notably , thought that Engels had altered the course of Marx's analysis, but van Holthoon argues that the shift in focus from the exploitation of labourers to the accumulation of capital, and the introduction of the possibility that capitalism could survive the tendency of the rate of profit to fall, were already Marx's, with the latter notion present in the long-unpublished Grundrisse.
While the task of editing Capital forced Engels to abandon his unfinished Dialectics of Nature, he still completed two other works of his own in the years following Marx's death. In The Origin of the Family, Private Property and the State (1884), he made an argument using anthropological evidence of the time to show that family structures changed over history, and that the concept of monogamous marriage came from the necessity within class society for men to control women to ensure their own children would inherit their property. He argued that a future communist society would allow people to make decisions about their relationships free of economic constraints. Ludwig Feuerbach and the End of Classical German Philosophy saw publication in 1886. On 5 August 1895, Engels died of throat cancer in London, aged 74. Following cremation at Woking Crematorium, his ashes were scattered off Beachy Head, near Eastbourne, as he had requested. He left a considerable estate to Eduard Bernstein and Louise Freyberger (wife of Ludwig Freyberger), valued for probate at £25,265 0s. 11d.
Personality
Engels's interests included poetry, fox hunting and hosting regular Sunday parties for London's left-wing intelligentsia where, as one regular put it, "no one left before two or three in the morning". His stated personal motto was "take it easy" while "jollity" was listed as his favourite virtue.
Of Engels's personality and appearance, Robert Heilbroner described him in The Worldly Philosophers as "tall and rather elegant, he had the figure of a man who liked to fence and to ride to hounds and who had once swum the Weser River four times without a break" as well as having been "gifted with a quick wit and facile mind" and of a gay temperament, being able to "stutter in twenty languages". He had a great enjoyment of wine and other "bourgeois pleasures". Engels favoured forming romantic relationships with women of the proletariat and found a long-term partner in a working-class woman named Mary Burns, although they never married. After her death, Engels was romantically involved with her younger sister Lydia Burns.
Historian and former Labour MP Tristram Hunt, author of The Frock-Coated Communist: The Revolutionary Life of Friedrich Engels, argues that Engels "almost certainly was, in other words, the kind of man Stalin would have had shot". Hunt sums up the disconnect between Engels's personality and the Soviet Union which later utilised his works, stating:
As to the religious persuasion attributable to Engels, Hunt writes:
Engels was a polyglot and was able to write and speak in numerous languages, including Russian, Italian, Portuguese, Irish, Spanish, Polish, French, English, German and the Milanese dialect.
Legacy
In his biography of Engels, Vladimir Lenin wrote: "After his friend Karl Marx (who died in 1883), Engels was the finest scholar and teacher of the modern proletariat in the whole civilised world. [...] In their scientific works, Marx and Engels were the first to explain that socialism is not the invention of dreamers, but the final aim and necessary result of the development of the productive forces in modern society. All recorded history hitherto has been a history of class struggle, of the succession of the rule and victory of certain social classes over others." According to Paul Kellogg, there is "some considerable controversy" regarding "the place of Frederick Engels in the canon of 'classical Marxism'". While some such as Terrell Carver dispute "Engels' claim that Marx agreed with the views put forward in Engels' major theoretical work, Anti-Dühring", others such as E. P. Thompson "identified a tendency to make 'old Engels' into a whipping boy, and to impugn to him any sin that one chooses to impugn to subsequent Marxisms".
Tristram Hunt argues that Engels has become a convenient scapegoat, too easily blamed for the state crimes of Communist regimes such as China, the Soviet Union and those in Africa and Southeast Asia, among others. Hunt writes that "Engels is left holding the bag of 20th century ideological extremism" while Karl Marx "is rebranded as the acceptable, post–political seer of global capitalism". Hunt largely exonerates Engels, stating that "[i]n no intelligible sense can Engels or Marx bear culpability for the crimes of historical actors carried out generations later, even if the policies were offered up in their honor". Andrew Lipow describes Marx and Engels as "the founders of modern revolutionary democratic socialism".
While admitting the distance between Marx and Engels on one hand and Joseph Stalin on the other, some writers such as Robert Service are less charitable, noting that the anarchist Mikhail Bakunin predicted the oppressive potential of their ideas, arguing that "[i]t is a fallacy that Marxism's flaws were exposed only after it was tried out in power. [...] [Marx and Engels] were centralisers. While talking about 'free associations of producers', they advocated discipline and hierarchy". Paul Thomas, of the University of California, Berkeley, claims that while Engels had been the most important and dedicated facilitator and diffuser of Marx's writings, he significantly altered Marx's intents as he held, edited and released them in a finished form and commented on them. Engels attempted to fill gaps in Marx's system and extend it to other fields. In particular, Engels is said to have stressed historical materialism, assigning it a character of scientific discovery and a doctrine, forming Marxism as such. A case in point is Anti-Dühring, which both supporters and detractors of socialism treated as an encompassing presentation of Marx's thought. While in his extensive correspondence with German socialists Engels modestly presented his own secondary place in the couple's intellectual relationship and always emphasised Marx's outstanding role, Russian communists such as Lenin raised Engels up with Marx and conflated their thoughts as if they were necessarily congruous. Soviet Marxists then developed this tendency into the state doctrine of dialectical materialism.
Since 1931, Engels has had a Russian city named after him: Engels, Saratov Oblast. It served as the capital of the Volga German Republic within Soviet Russia and is now part of Saratov Oblast. A town named Marx is located to the northeast.
In July 2017, as part of the Manchester International Festival, a Soviet-era statue of Engels was installed by the artist Phil Collins at Tony Wilson Place in Manchester. It was transported from the village of Mala Pereshchepina in Eastern Ukraine, after the statue had been deposed from its central position in the village in the wake of laws outlawing communist symbols in Ukraine introduced in 2015. In recognition of the important influence Manchester had on his work, the 3.5-metre statue now stands on Manchester's First Street. The installation of what was originally an instrument of propaganda drew criticism from Kevin Bolton in The Guardian.
The Friedrich Engels Guards Regiment (also known as NVA Guard Regiment 1) was a special guard unit of the East German National People's Army (NVA). The guard regiment was established in 1962 from parts of the Hugo Eberlein Guards Regiment and given the title "Friedrich Engels" in 1970.
Influences
According to Norman Levine, in spite of his criticism of the utopian socialists, Engels's beliefs were influenced by the French socialist Charles Fourier. From Fourier, he derives four main points that characterise the social conditions of a communist state. The first point maintains that every individual would be able to fully develop their talents by eliminating the specialisation of production. Without specialisation, every individual would be permitted to exercise any vocation of their choosing for as long or as little as they would like. If talents permitted it, one could be a baker for a year and an engineer the next. The second point builds upon the first, as with the ability of workers to cycle through different jobs of their choosing, the fundamental basis of the social division of labour is destroyed and the social division of labour will disappear as a result. If anyone can be employed at any job that they wish, then there are clearly no longer any divisions or barriers to entry for labour, otherwise such fluidity between entirely different jobs would not exist. The third point continues from the second as once the social division of labour is gone, the division of social classes based on property ownership will fade with it. If labour division puts a man in charge of a farm, that farmer owns the productive resources of that farm. The same applies to the ownership of a factory or a bank. Without labour division, no single social class may claim exclusive rights to a particular means of production since the absence of labour division allows all to use it. Finally, the fourth point concludes that the elimination of social classes destroys the sole purpose of the state and it will cease to exist. As Engels stated in his own writing, the only purpose of the state is to abate the effects of class antagonisms. With the elimination of social classes based on property, the state becomes obsolete and a communist society, at least in the eyes of Engels, is achieved.
Major works
The Holy Family (1844)
This book was written by Marx and Engels in November 1844. It is a critique of the Young Hegelians and their trend of thought, which was very popular in academic circles at the time. The title was suggested by the publisher and is meant as a sarcastic reference to the Bauer Brothers and their supporters.
The book created a controversy with much of the press and caused Bruno Bauer to attempt to refute the book in an article published in Vierteljahrsschrift in 1845. Bauer claimed that Marx and Engels misunderstood what he was trying to say. Marx later replied to his response with his own article published in the journal in January 1846. Marx also discussed the argument in chapter 2 of The German Ideology.
The Condition of the Working Class in England (1845)
A study of the deprived conditions of the working class in Manchester and Salford, based on Engels's personal observations. The work also contains seminal thoughts on the state of socialism and its development. Originally published in German and only translated into English in 1887, the work initially had little impact in England; however, it was very influential with historians of British industrialisation throughout the twentieth century.
The Peasant War in Germany (1850)
An account of the early 16th-century uprising known as the German Peasants' War, with a comparison with the recent revolutionary uprisings of 1848–1849 across Europe.
Herr Eugen Dühring's Revolution in Science (1878)
Popularly known as Anti-Dühring, this book is a detailed critique of the philosophical positions of Eugen Dühring, a German philosopher and critic of Marxism. In the course of replying to Dühring, Engels reviews recent advances in science and mathematics seeking to demonstrate the way in which the concepts of dialectics apply to natural phenomena. Many of these ideas were later developed in the unfinished work, Dialectics of Nature. Three chapters of Anti-Dühring were later edited and published under the separate title, Socialism: Utopian and Scientific.
Socialism: Utopian and Scientific (1880)
In this work, one of the best-selling socialist books of the era, Engels briefly described and analyzed the ideas of notable utopian socialists such as Charles Fourier and Robert Owen. Engels pointed out their strong points and shortcomings, provided an explanation of the scientific socialist framework for understanding capitalism, and outlined the progression of social and economic development from the perspective of historical materialism.
Dialectics of Nature (1883)
Dialectics of Nature (German: "Dialektik der Natur") is an unfinished 1883 work by Engels that applies Marxist ideas, particularly those of dialectical materialism, to science. It was first published in the Soviet Union in 1925.
The Origin of the Family, Private Property and the State (1884)
In this work, Engels argues that the family is an ever-changing institution that has been shaped by capitalism. It contains a historical view of the family in relation to issues of class, female subjugation and private property.
References
Sources
Further reading
Studies
Commentaries on Engels
Royle, Camilla (2020), A Rebel's Guide to Engels, London: Bookmarks.
Fiction works
Square Enix (2017), Nier: Automata, in which a machine named Engels tries to overthrow the android species for the machines' sake.
Graphic novel
A biographical German graphic novel called Engels – Unternehmer und Revolutionär ("Engels – Businessman and Revolutionary") was published in 2020.
External links
Marx/Engels Biographical Archive
The Legend of Marx, or "Engels the founder" by Maximilien Rubel
Reason in Revolt: Marxism and Modern Science
Engels: The Che Guevara of his Day
The Brave New World: Tristram Hunt On Marx and Engels' Revolutionary Vision
German Biography from dhm.de
Frederick Engels: A Biography (Soviet work)
Frederick Engels: A Biography (East German work)
Engels was Right: Early Human Kinship was Matrilineal
Archive of Karl Marx / Friedrich Engels Papers at the International Institute of Social History
Libcom.org/library Friedrich Engels archive
Works by Friedrich Engels at Zeno.org
Pathfinder Press
Friedrich Engels, "On Rifled Cannon", articles from the New York Tribune, April, May and June 1860, reprinted in Military Affairs 21, no. 4 (Winter 1957) ed. Morton Borden, 193–198.
Marx and Engels in their native German language
Engels in Eastbourne – Commemorating the life, work and legacy of Friedrich Engels in Eastbourne
1820 births
1895 deaths
19th-century atheists
19th-century German businesspeople
19th-century German economists
19th-century German male writers
19th-century German non-fiction writers
19th-century German historians
19th-century German philosophers
19th-century German writers
19th-century Prussian people
Atheist philosophers
Businesspeople from Wuppertal
Critics of political economy
Deaths from throat cancer in England
Economists from the Kingdom of Prussia
European democratic socialists
German communist writers
German anti-capitalists
German atheism activists
German writers on atheism
German emigrants to England
German industrialists
German journalists
German Marxist writers
German political philosophers
German revolutionaries
Karl Marx
Marxist theorists
Materialists
Members of the International Workingmen's Association
Orthodox Marxists
People from the Province of Jülich-Cleves-Berg
People from the Rhine Province
German people of the Revolutions of 1848
German philosophers of culture
Philosophers of economics
German philosophers of history
Prussian Army personnel
Socialist economists
Theoretical historians
Theorists on Western civilization
Urban theorists
Writers from Wuppertal | Friedrich Engels | ["Physics"] | 8,801 | ["Materialism", "Matter", "Materialists"] |
47,102 | https://en.wikipedia.org/wiki/IP%20over%20Avian%20Carriers | In computer networking, IP over Avian Carriers (IPoAC) is an ostensibly functional proposal to carry Internet Protocol (IP) traffic by birds such as homing pigeons. IP over Avian Carriers was initially described in RFC 1149, issued by the Internet Engineering Task Force, written by David Waitzman, and released on April 1, 1990. It is one of several April Fools' Day Requests for Comments.
Waitzman described an improvement of his protocol in RFC 2549, IP over Avian Carriers with Quality of Service (1 April 1999). Later, in RFC 6214—released on 1 April 2011, and 13 years after the introduction of IPv6—Brian Carpenter and Robert Hinden published Adaptation of RFC 1149 for IPv6.
IPoAC has been successfully implemented, but for only nine packets of data, with a packet loss ratio of 55% (due to operator error), and a response time ranging from roughly 54 minutes to about 1.8 hours. Thus, this technology suffers from high latency.
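RFC 1149 specifies that each IP datagram is printed in hexadecimal on a small scroll of paper wrapped around the carrier's leg. The short Python sketch below illustrates that encapsulation step only in spirit; the helper name and the sample bytes are invented for illustration and are not a valid IP packet.
# Toy "encapsulation" in the spirit of RFC 1149: hex-dump a datagram for printing on a scroll
def to_scroll(datagram: bytes, octets_per_line: int = 16) -> str:
    octets = datagram.hex(" ").split()  # one two-digit hex group per octet (Python 3.8+)
    rows = [" ".join(octets[i:i + octets_per_line]) for i in range(0, len(octets), octets_per_line)]
    return "\n".join(rows)

sample = bytes(range(24))  # placeholder payload, not a real IP datagram
print(to_scroll(sample))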
Real-life implementation
On 28 April 2001, IPoAC was implemented by the Bergen Linux user group, under the name CPIP (for Carrier Pigeon Internet Protocol). They sent nine packets over a distance of approximately , each carried by an individual pigeon and containing one ping (ICMP echo request), and received four responses.
Script started on Sat Apr 28 11:24:09 2001
$ /sbin/ifconfig tun0
tun0 Link encap:Point-to-Point Protocol
inet addr:10.0.3.2 P-t-P:10.0.3.1 Mask:255.255.255.255
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:150 Metric:1
RX packets:1 errors:0 dropped:0 overruns:0 frame:0
TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
collisions:0
RX bytes:88 (88.0 b) TX bytes:168 (168.0 b)
$ ping -c 9 -i 900 10.0.3.1
PING 10.0.3.1 (10.0.3.1): 56 data bytes
64 bytes from 10.0.3.1: icmp_seq=0 ttl=255 time=6165731.1 ms
64 bytes from 10.0.3.1: icmp_seq=4 ttl=255 time=3211900.8 ms
64 bytes from 10.0.3.1: icmp_seq=2 ttl=255 time=5124922.8 ms
64 bytes from 10.0.3.1: icmp_seq=1 ttl=255 time=6388671.9 ms
--- 10.0.3.1 ping statistics ---
9 packets transmitted, 4 packets received, 55% packet loss
round-trip min/avg/max = 3211900.8/5222806.6/6388671.9 ms
Script done on Sat Apr 28 14:14:28 2001
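As a sanity check, the summary statistics in the transcript follow directly from the four reply times; the short Python sketch below simply recomputes them, with the values copied from the output above.
# Recompute the ping summary statistics from the four round-trip times shown above
times_ms = [6165731.1, 3211900.8, 5124922.8, 6388671.9]
transmitted, received = 9, len(times_ms)
loss_pct = 100 * (transmitted - received) // transmitted   # integer truncation, as ping reports: 55
rtt_min, rtt_avg, rtt_max = min(times_ms), sum(times_ms) / received, max(times_ms)
print(f"{transmitted} packets transmitted, {received} packets received, {loss_pct}% packet loss")
print(f"round-trip min/avg/max = {rtt_min}/{rtt_avg:.1f}/{rtt_max} ms")   # compare with the summary line above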
This real-life implementation was mentioned by the French member of parliament Martine Billard in the French National Assembly during debates about HADOPI.
Risks
In December 2005, a Gartner report on bird flu that concluded "A pandemic wouldn't affect IT systems directly" was humorously criticized for neglecting to consider RFC 1149 and RFC 2549 in its analysis.
Known risks to the protocol include:
Carriers being attacked by birds of prey. RFC2549: "Unintentional encapsulation in hawks has been known to occur, with decapsulation being messy and the packets mangled."
Carriers being blown off course. RFC1149: "While broadcasting is not specified, storms can cause data loss."
The absence of viable local carriers. RFC6214: "In some locations, such as New Zealand, a significant proportion of carriers are only able to execute short hops, and only at times when the background level of photon emission is extremely low." This describes the flightless and nocturnal nature of kiwi.
Loss of availability of species, such as the extinction of the passenger pigeon.
Disease affecting the carriers. RFC6214: "There is a known risk of infection by the so-called H5N1 virus."
The network topologies supported for multicast communication are limited by the homing abilities of carriers. RFC6214: "... [carriers] prove to have no talent for multihoming, and in fact enter a routing loop whenever multihoming is attempted."
Other avian data transfer methods
Rafting photographers already use pigeons as a sneakernet to transport digital photos on flash media from the camera to the tour operator. Over a distance, a single pigeon may be able to carry tens of gigabytes of data in around an hour, which on an average bandwidth basis compares very favorably to early ADSL standards, even when accounting for lost drives.
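As a rough back-of-the-envelope check on such bandwidth claims, effective throughput is simply the bits carried divided by the travel time. The Python sketch below uses illustrative assumed values (a 32 GB card, a one-hour flight, and an assumed 8 Mbit/s ADSL line), not figures from any of the experiments described here; the same arithmetic applies to the pigeon races described below.
# Effective "sneakernet" throughput: bits carried divided by travel time
def throughput_mbit_per_s(gigabytes: float, seconds: float) -> float:
    return gigabytes * 8000.0 / seconds   # 1 GB = 8000 Mbit (decimal units)

pigeon = throughput_mbit_per_s(32.0, 3600.0)   # assumed: 32 GB card, one-hour flight, roughly 71 Mbit/s
adsl = 8.0                                     # assumed early-ADSL line rate in Mbit/s
print(f"pigeon: {pigeon:.0f} Mbit/s, assumed ADSL: {adsl:.0f} Mbit/s")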
On March 12, 2004, Yossi Vardi, Ami Ben-Bassat, and Guy Vardi sent three homing pigeons a distance of , "each carrying 20–22 tiny memory cards containing 1.3 GB, amounting in total of 4 GB of data." An effective throughput of was achieved. The purpose of the test was to measure and confirm an improvement over RFC 2549. Since the developers used flash memory instead of paper notes as specified by RFC 2549, the experiment was widely criticized as an example in which an optimized implementation breaks an official standard.
Inspired by RFC 2549, on 9 September 2009, the marketing team of The Unlimited, a regional company in South Africa, decided to host a tongue-in-cheek pigeon race between their pet pigeon Winston and local telecom company Telkom SA. The race was to send 4 gigabytes of data from Howick to Hillcrest, approximately apart. The pigeon carried a microSD card and competed against a Telkom ADSL line. Winston beat the data transfer over Telkom's ADSL line, with a total time of two hours, six minutes and 57 seconds from uploading data on the microSD card to completion of download from the card. At the time of Winston's victory, the ADSL transfer was just under 4% complete.
In November 2009, the Australian comedy/current-affairs television program Hungry Beast repeated this experiment. The Hungry Beast team took up the challenge after a fiery parliament session wherein the government of the time blasted the opposition for not supporting telecommunications investments, saying that if the opposition had their way, Australians would be doing data transfer over carrier pigeons. The Hungry Beast team had read about the South African experiment and assumed that, as a developed Western country, Australia would have higher speeds. The experiment had the team transfer a 700 MB file via three delivery methods to determine which was the fastest: a carrier pigeon with a microSD card, a car carrying a USB stick, and a Telstra (Australia's largest telecom provider) ADSL line. The data was to be transferred from Tarana in rural New South Wales to the western-Sydney suburb of Prospect, New South Wales, a distance of by road. Approximately halfway through the race, the internet connection unexpectedly dropped and the transfer had to be restarted. The pigeon won the race with a time of approximately 1 hour 5 minutes, the car came in second at 2 hours 10 minutes, while the internet transfer did not finish, having dropped out a second time and not come back. The estimated time to upload completion at one point was as high as 9 hours, and at no point did the estimated upload time fall below 4 hours.
A similar pigeon race was conducted in September 2010 by Trefor Davies, tech blogger (trefor.net) and CTO of the ISP Timico, with farmer Michelle Brumfield in rural Yorkshire, England: delivering a five-minute video to a BBC correspondent 75 miles away in Skegness. The pigeon (carrying a memory card with a 300 MB HD video of Davies having a haircut) was pitted against an upload to YouTube via British Telecom broadband; the pigeon was released at 11:05 am and arrived in the loft one hour and fifteen minutes later while the upload was still incomplete, having failed once in the interim.
See also
Hyper Text Coffee Pot Control Protocol
Pigeon post
Semaphore Flag Signaling System
Sneakernet
References
External links
"Carrier Pigeons Bringing Contraband into Prisons", Bruce Schneier, www.schneier.com (blog), June 27, 2008
Pigeon-powered Internet takes flight, Stephen Shankland, CNET News, May 4, 2001
"The Unlimited"
Pigeon carries data bundles faster than Telkom, 10 Sep 2009, M&G
RFC1149 Game
April Fools' Day jokes
Computer humour
Internet architecture
Link protocols
Physical layer protocols
Wireless networking
Domestic pigeons | IP over Avian Carriers | ["Technology", "Engineering"] | 1,819 | ["Wireless networking", "Internet architecture", "IT infrastructure", "Computer networks engineering"] |
47,107 | https://en.wikipedia.org/wiki/Phosgene | Phosgene is an organic chemical compound with the formula COCl2. It is a toxic, colorless gas; in low concentrations, its musty odor resembles that of freshly cut hay or grass. It can be thought of chemically as the double acyl chloride analog of carbonic acid, or structurally as formaldehyde with the hydrogen atoms replaced by chlorine atoms. In 2013, about 75–80% of global phosgene was consumed for isocyanates, 18% for polycarbonates and about 5% for other fine chemicals.
Phosgene is extremely poisonous and was used as a chemical weapon during World War I, when it was responsible for 85,000 deaths. It is a highly potent pulmonary irritant and, being denser than air, quickly filled enemy trenches.
It is classified as a Schedule 3 substance under the Chemical Weapons Convention. In addition to its industrial production, small amounts occur from the breakdown and the combustion of organochlorine compounds, such as chloroform.
Structure and basic properties
Phosgene is a planar molecule as predicted by VSEPR theory. The C=O distance is 1.18 Å, the C−Cl distance is 1.74 Å and the Cl−C−Cl angle is 111.8°. Phosgene is a carbon oxohalide and it can be considered one of the simplest acyl chlorides, being formally derived from carbonic acid.
Production
Industrially, phosgene is produced by passing purified carbon monoxide and chlorine gas through a bed of porous activated carbon, which serves as a catalyst:
CO + Cl2 → COCl2 (ΔHrxn = −107.6 kJ/mol)
This reaction is exothermic and is typically performed between 50 and 150 °C. Above 200 °C, phosgene reverts to carbon monoxide and chlorine, Keq(300 K) = 0.05. World production of this compound was estimated to be 2.74 million tonnes in 1989.
Phosgene is fairly simple to produce, but is listed as a Schedule 3 substance under the Chemical Weapons Convention. As such, it is usually considered too dangerous to transport in bulk quantities. Instead, phosgene is usually produced and consumed within the same plant, as part of an "on demand" process. This involves maintaining equivalent rates of production and consumption, which keeps the amount of phosgene in the system at any one time fairly low, reducing the risks in the event of an accident. Some batch production does still take place, but efforts are made to reduce the amount of phosgene stored.
Inadvertent generation
Atmospheric chemistry
Simple organochlorides slowly convert into phosgene when exposed to ultraviolet (UV) irradiation in the presence of oxygen. Before the discovery of the ozone hole in the late 1970s, large quantities of organochlorides were routinely used by industry, which inevitably led to them entering the atmosphere. In the 1970s–80s, phosgene levels in the troposphere were around 20–30 pptv (peak 60 pptv). These levels had not decreased significantly nearly 30 years later, despite organochloride production becoming restricted under the Montreal Protocol.
Phosgene in the troposphere can last up to about 70 days and is removed primarily by hydrolysis with ambient humidity or cloudwater. Less than 1% makes it to the stratosphere, where it is expected to have a lifetime of several years, since this layer is much drier and phosgene decomposes slowly through UV photolysis. It plays a minor part in ozone depletion.
Combustion
Carbon tetrachloride (CCl4) can turn into phosgene when exposed to heat in air. This was a problem as carbon tetrachloride is an effective fire suppressant and was formerly in widespread use in fire extinguishers. There are reports of fatalities caused by its use to fight fires in confined spaces. Carbon tetrachloride's generation of phosgene and its own toxicity mean it is no longer used for this purpose.
Biologically
Phosgene is also formed as a metabolite of chloroform, likely via the action of cytochrome P-450.
History
Phosgene was synthesized by the Cornish chemist John Davy (1790–1868) in 1812 by exposing a mixture of carbon monoxide and chlorine to sunlight. He named it "phosgene" from Greek (, light) and (, to give birth) in reference of the use of light to promote the reaction. It gradually became important in the chemical industry as the 19th century progressed, particularly in dye manufacturing.
Reactions and uses
The reaction of an organic substrate with phosgene is called phosgenation. Phosgenation of diols gives carbonates (R = H, alkyl, aryl), which can be either linear or cyclic.
An example is the reaction of phosgene with bisphenol A to form polycarbonates. Phosgenation of diamines gives di-isocyanates, like toluene diisocyanate (TDI), methylene diphenyl diisocyanate (MDI), hexamethylene diisocyanate (HDI), and isophorone diisocyanate (IPDI). In these conversions, phosgene is used in excess to increase yield and minimize side reactions. The phosgene excess is separated during the work-up of resulting end products and recycled into the process, with any remaining phosgene decomposed in water using activated carbon as the catalyst. Diisocyanates are precursors to polyurethanes. More than 90% of the phosgene is used in these processes, with the biggest production units located in the United States (Texas and Louisiana), Germany, Shanghai, Japan, and South Korea. The most important producers are Dow Chemical, Covestro, and BASF. Phosgene is also used to produce monoisocyanates, used as pesticide precursors (e.g. methyl isocyanate (MIC)).
Aside from the widely used reactions described above, phosgene is also used to produce acyl chlorides from carboxylic acids: RCO2H + COCl2 → RCOCl + CO2 + HCl
For this application, thionyl chloride is commonly used instead of phosgene.
Laboratory uses
The synthesis of isocyanates from amines illustrates the electrophilic character of this reagent and its use in introducing the equivalent synthon "CO2+":
RNH2 + COCl2 → RNCO + 2 HCl, where R = alkyl, aryl
Such reactions are conducted on laboratory scale in the presence of a base such as pyridine that neutralizes the hydrogen chloride side-product.
Phosgene is used to produce chloroformates such as benzyl chloroformate: C6H5CH2OH + COCl2 → C6H5CH2OC(O)Cl + HCl
In these syntheses, phosgene is used in excess to prevent formation of the corresponding carbonate ester.
With amino acids, phosgene (or its trimer) reacts to give amino acid N-carboxyanhydrides. More generally, phosgene acts to link two nucleophiles by a carbonyl group. For this purpose, alternatives to phosgene such as carbonyldiimidazole (CDI) are safer, albeit expensive. CDI itself is prepared by reacting phosgene with imidazole.
Phosgene is stored in metal cylinders. In the US, the cylinder valve outlet is a tapered thread known as "CGA 160" that is used only for phosgene.
Alternatives to phosgene
In the research laboratory, due to safety concerns phosgene nowadays finds limited use in organic synthesis. A variety of substitutes have been developed, notably trichloromethyl chloroformate ("diphosgene"), a liquid at room temperature, and bis(trichloromethyl) carbonate ("triphosgene"), a crystalline substance.
Other reactions
Phosgene reacts with water to release hydrogen chloride and carbon dioxide: COCl2 + H2O → CO2 + 2 HCl
Analogously, upon contact with ammonia, it converts to urea: COCl2 + 4 NH3 → CO(NH2)2 + 2 NH4Cl
Halide exchange with nitrogen trifluoride and aluminium tribromide gives and , respectively.
Chemical warfare
It is listed on Schedule 3 of the Chemical Weapons Convention: All production sites manufacturing more than 30 tonnes per year must be declared to the OPCW. Although less toxic than many other chemical weapons such as sarin, phosgene is still regarded as a viable chemical warfare agent because of its simpler manufacturing requirements when compared to that of more technically advanced chemical weapons such as tabun, a first-generation nerve agent.
Phosgene was first deployed as a chemical weapon by the French in 1915 in World War I. It was also used in a mixture with an equal volume of chlorine, with the chlorine helping to spread the denser phosgene. Phosgene was more potent than chlorine, though some symptoms took 24 hours or more to manifest.
Following the extensive use of phosgene during World War I, it was stockpiled by various countries.
Phosgene was then only infrequently used by the Imperial Japanese Army against the Chinese during the Second Sino-Japanese War. Gas weapons, such as phosgene, were produced by the IJA's Unit 731.
Toxicology and safety
Phosgene is an insidious poison as the odor may not be noticed and symptoms may be slow to appear.
At low concentrations, phosgene may have a pleasant odor of freshly mown hay or green corn, but has also been described as sweet, like rotten banana peels.
The odor detection threshold for phosgene is 0.4 ppm, four times the threshold limit value (time weighted average). Its high toxicity arises from the action of the phosgene on the hydroxyl (−OH), amino (−NH2) and thiol (−SH) groups of the proteins in pulmonary alveoli (the site of gas exchange), respectively forming ester, amide and thioester functional groups in accord with the reactions discussed above. This results in disruption of the blood–air barrier, eventually causing pulmonary edema. The extent of damage in the alveoli does not primarily depend on phosgene concentration in the inhaled air, with the dose (amount of inhaled phosgene) being the critical factor. Dose can be approximately calculated as "concentration" × "duration of exposure". Therefore, persons in workplaces where there exists risk of accidental phosgene release usually wear indicator badges close to the nose and mouth. Such badges indicate the approximate inhaled dose, which allows for immediate treatment if the monitored dose rises above safe limits.
In the case of low or moderate quantities of inhaled phosgene, the exposed person is to be monitored and given precautionary therapy, then released after several hours. For higher doses of inhaled phosgene (above 150 ppm × min), a pulmonary edema often develops, which can be detected by X-ray imaging and by declining blood oxygen concentration. Inhalation of such high doses can eventually result in fatality within hours to 2–3 days of the exposure.
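The dose arithmetic described above can be sketched in a few lines of Python; the 10 ppm and 20 minute exposure values below are invented for illustration, while the 150 ppm × min threshold is the figure for likely pulmonary edema quoted above.
# Approximate inhaled dose as concentration (ppm) multiplied by exposure time (min)
def phosgene_dose(concentration_ppm: float, minutes: float) -> float:
    return concentration_ppm * minutes

EDEMA_THRESHOLD_PPM_MIN = 150.0          # threshold quoted above for likely pulmonary edema

dose = phosgene_dose(10.0, 20.0)         # hypothetical exposure: 10 ppm for 20 minutes gives 200 ppm*min
print(f"dose = {dose} ppm*min ({'above' if dose > EDEMA_THRESHOLD_PPM_MIN else 'below'} the edema threshold)")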
The risk connected to phosgene inhalation is based not so much on its toxicity (which is much lower in comparison to modern chemical weapons like sarin or tabun) but rather on its typical effects: the affected person may not develop any symptoms for hours until an edema appears, at which point it could be too late for medical treatment to assist. Nearly all fatalities resulting from accidental releases during the industrial handling of phosgene occurred in this fashion. On the other hand, pulmonary edemas treated in a timely manner usually heal in the mid and long term, without major consequences once some days or weeks after exposure have passed. Nonetheless, the detrimental health effects on pulmonary function from untreated, chronic low-level exposure to phosgene should not be ignored; although not exposed to concentrations high enough to immediately cause an edema, many synthetic chemists (e.g. Leonidas Zervas) working with the compound were reported to experience chronic respiratory health issues and eventual respiratory failure from continuous low-level exposure.
If accidental release of phosgene occurs in an industrial or laboratory setting, it can be mitigated with ammonia gas; in the case of liquid spills (e.g. of diphosgene or phosgene solutions) an absorbent and sodium carbonate can be applied.
Accidents
The first major phosgene-related incident happened in May 1928 when eleven tons of phosgene escaped from a war surplus store in central Hamburg. Three hundred people were poisoned, of whom ten died.
In the second half of the 20th century, several fatal incidents implicating phosgene occurred in Europe, Asia and the US. Most of them have been investigated by the authorities and the outcomes made accessible to the public. For example, phosgene was initially blamed for the Bhopal disaster, but investigations proved methyl isocyanate to be responsible for the numerous poisonings and fatalities.
Recent major incidents happened in January 2010 and May 2016. An accidental release of phosgene gas at a DuPont facility in West Virginia killed one employee in 2010. The US Chemical Safety Board released a video detailing the accident. Six years later, a phosgene leak occurred in a BASF plant in South Korea, where a contractor inhaled a lethal dose of phosgene.
2023 Ohio train derailment: A freight train carrying vinyl chloride derailed and burned in East Palestine, Ohio, releasing phosgene and hydrogen chloride into the air and contaminating the Ohio River.
See also
Carbonyl bromide
Carbonyl fluoride
Oxalyl chloride
Thiophosgene
Thionyl chloride
Perfluoroisobutene
Bis(trifluoromethyl) disulfide
References
External links
Davy's account of his discovery of phosgene
International Chemical Safety Card 0007
CDC - Phosgene - NIOSH Workplace Safety and Health Topic
NIOSH Pocket Guide to Chemical Hazards
U.S. CDC Emergency Preparedness & Response
U.S. EPA Acute Exposure Guideline Levels
Regime For Schedule 3 Chemicals And Facilities Related To Such Chemicals, OPCW website
CBWInfo website
Use of Phosgene in WWII and in modern-day warfare
US Chemical Safety Board Video on accidental release at DuPont facility in West Virginia
Acyl chlorides
Inorganic carbon compounds
Nonmetal halides
Oxychlorides
Carbon oxohalides
Pulmonary agents
Reagents for organic chemistry
World War I chemical weapons | Phosgene | ["Chemistry"] | 3,050 | ["Inorganic compounds", "Chemical weapons", "Inorganic carbon compounds", "Reagents for organic chemistry", "World War I chemical weapons", "Pulmonary agents"] |
47,172 | https://en.wikipedia.org/wiki/Misanthropy | Misanthropy is the general hatred, dislike, or distrust of the human species, human behavior, or human nature. A misanthrope or misanthropist is someone who holds such views or feelings. Misanthropy involves a negative evaluative attitude toward humanity that is based on humankind's flaws. Misanthropes hold that these flaws characterize all or at least the greater majority of human beings. They claim that there is no easy way to rectify them short of a complete transformation of the dominant way of life. Various types of misanthropy are distinguished in the academic literature based on what attitude is involved, at whom it is directed, and how it is expressed. Either emotions or theoretical judgments can serve as the foundation of the attitude. It can be directed toward all humans without exception or exclude a few idealized people. In this regard, some misanthropes condemn themselves while others consider themselves superior to everyone else. Misanthropy is sometimes associated with a destructive outlook aiming to hurt other people or an attempt to flee society. Other types of misanthropic stances include activism by trying to improve humanity, quietism in the form of resignation, and humor mocking the absurdity of the human condition.
The negative misanthropic outlook is based on different types of human flaws. Moral flaws and unethical decisions are often seen as the foundational factor. They include cruelty, selfishness, injustice, greed, and indifference to the suffering of others. They may result in harm to humans and animals, such as genocides and factory farming of livestock. Other flaws include intellectual flaws, like dogmatism and cognitive biases, as well as aesthetic flaws concerning ugliness and lack of sensitivity to beauty. Many debates in the academic literature discuss whether misanthropy is a valid viewpoint and what its implications are. Proponents of misanthropy usually point to human flaws and the harm they have caused as a sufficient reason for condemning humanity. Critics have responded to this line of thought by claiming that severe flaws concern only a few extreme cases, like mentally ill perpetrators, but not humanity at large. Another objection is based on the claim that humans also have virtues besides their flaws and that a balanced evaluation might be overall positive. A further criticism rejects misanthropy because of its association with hatred, which may lead to violence, and because it may make people friendless and unhappy. Defenders of misanthropy have responded by claiming that this applies only to some forms of misanthropy but not to misanthropy in general.
A related issue concerns the question of the psychological and social factors that cause people to become misanthropes. They include socio-economic inequality, living under an authoritarian regime, and undergoing personal disappointments in life. Misanthropy is relevant in various disciplines. It has been discussed and exemplified by philosophers throughout history, like Heraclitus, Diogenes, Thomas Hobbes, Jean-Jacques Rousseau, Arthur Schopenhauer, and Friedrich Nietzsche. Misanthropic outlooks form part of some religious teachings discussing the deep flaws of human beings, like the Christian doctrine of original sin. Misanthropic perspectives and characters are also found in literature and popular culture. They include William Shakespeare's portrayal of Timon of Athens, Molière's play The Misanthrope, and Gulliver's Travels by Jonathan Swift. Misanthropy is closely related to but not identical to philosophical pessimism. Some misanthropes promote antinatalism, the view that humans should abstain from procreation.
Definition
Misanthropy is traditionally defined as hatred or dislike of humankind. The word originated in the 17th century and has its roots in the Greek words μῖσος mīsos 'hatred' and ἄνθρωπος ānthropos 'man, human'. In contemporary philosophy, the term is usually understood in a wider sense as a negative evaluation of humanity as a whole based on humanity's vices and flaws. This negative evaluation can express itself in various forms, hatred being only one of them. In this sense, misanthropy has a cognitive component based on a negative assessment of humanity and is not just a blind rejection. Misanthropy is usually contrasted with philanthropy, which refers to the love of humankind and is linked to efforts to increase human well-being, for example, through good will, charitable aid, and donations. Both terms have a range of meanings and do not necessarily contradict each other. In this regard, the same person may be a misanthrope in one sense and a philanthrope in another sense.
One central aspect of all forms of misanthropy is that their target is not local but ubiquitous. This means that the negative attitude is not just directed at some individual persons or groups but at humanity as a whole. In this regard, misanthropy is different from other forms of negative discriminatory attitudes directed at a particular group of people. This distinguishes it from the intolerance exemplified by misogynists, misandrists, and racists, who hold a negative attitude toward women, men, or certain races. According to literature theorist Andrew Gibson, misanthropy does not need to be universal in the sense that a person literally dislikes every human being. Instead, it depends on the person's horizon. For instance, a villager who loathes every other villager without exception is a misanthrope if their horizon is limited to only this village.
Both misanthropes and their critics agree that negative features and failings are not equally distributed, i.e. that the vices and bad traits are exemplified much more strongly in some than in others. But for misanthropy, the negative assessment of humanity is not based on a few extreme and outstanding cases: it is a condemnation of humanity as a whole that is not just directed at exceptionally bad individuals but includes regular people as well. Because of this focus on the ordinary, it is sometimes held that these flaws are obvious and trivial, but people may ignore them due to intellectual flaws. Some see the flaws as part of human nature as such. Others also base their view on non-essential flaws, i.e. what humanity has come to be. This includes flaws seen as symptoms of modern civilization in general. Nevertheless, both groups agree that the relevant flaws are "entrenched". This means that there is no easy way, if any, to rectify them: nothing short of a complete transformation of the dominant way of life would suffice, if such a transformation is possible at all.
Types
Various types of misanthropy are distinguished in the academic literature. They are based on what attitude is involved, how it is expressed, and whether the misanthropes include themselves in their negative assessment. The differences between them often matter for assessing the arguments for and against misanthropy. An early categorization suggested by Immanuel Kant distinguishes between positive and negative misanthropes. Positive misanthropes are active enemies of humanity. They wish harm to other people and undertake attempts to hurt them in one form or another. Negative misanthropy, by contrast, is a form of peaceful anthropophobia that leads people to isolate themselves. They may wish others well despite seeing serious flaws in them and prefer to not involve themselves in the social context of humanity. Kant associates negative misanthropy with moral disappointment due to previous negative experiences with others.
Another distinction focuses on whether the misanthropic condemnation of humanity is only directed at other people or at everyone including oneself. In this regard, self-inclusive misanthropes are consistent in their attitude by including themselves in their negative assessment. This type is contrasted with self-aggrandizing misanthropes, who either implicitly or explicitly exclude themselves from the general condemnation and see themselves instead as superior to everyone else. In this regard, it may be accompanied by an exaggerated sense of self-worth and self-importance. According to literature theorist Joseph Harris, the self-aggrandizing type is more common. He states that this outlook seems to undermine its own position by constituting a form of hypocrisy. A closely related categorization developed by Irving Babbitt distinguishes misanthropes based on whether they allow exceptions in their negative assessment. In this regard, misanthropes of the naked intellect regard humanity as a whole as hopeless. Tender misanthropes exclude a few idealized people from their negative evaluation. Babbitt cites Rousseau and his fondness for natural uncivilized man as an example of tender misanthropy and contrasts it with Jonathan Swift's thorough dismissal of all of humanity.
A further way to categorize forms of misanthropy is in relation to the type of attitude involved toward humanity. In this regard, philosopher Toby Svoboda distinguishes the attitudes of dislike, hate, contempt, and judgment. A misanthrope based on dislike harbors a distaste in the form of negative feelings toward other people. Misanthropy focusing on hatred involves an intense form of dislike. It includes the additional component of wishing ill upon others and at times trying to realize this wish. In the case of contempt, the attitude is not based on feelings and emotions but on a more theoretical outlook. It leads misanthropes to see other people as worthless and look down on them while excluding themselves from this assessment. If the misanthropic attitude has its foundation in judgment, it is also theoretical but does not distinguish between self and others. It is the view that humanity is in general bad without implying that the misanthrope is in any way better than the rest. According to Svoboda, only misanthropy based on judgment constitutes a serious philosophical position. He holds that misanthropy focusing on contempt is biased against other people while misanthropy in the form of dislike and hate is difficult to assess since these emotional attitudes often do not respond to objective evidence.
Misanthropic forms of life
Misanthropy is usually not restricted to a theoretical opinion but involves an evaluative attitude that calls for a practical response. It can express itself in different forms of life. They come with different dominant emotions and practical consequences for how to lead one's life. These responses to misanthropy are sometimes presented through simplified archetypes that may be too crude to accurately capture the mental life of any single person. Instead, they aim to portray common attitudes among groups of misanthropes. The two responses most commonly linked to misanthropy involve either destruction or fleeing from society. The destructive misanthrope is said to be driven by a hatred of humankind and aims at tearing it down, with violence if necessary. For the fugitive misanthrope, fear is the dominant emotion and leads the misanthrope to seek a secluded place in order to avoid the corrupting contact with civilization and humanity as much as possible.
The contemporary misanthropic literature has also identified further, less well-known types of misanthropic lifestyles. The activist misanthrope is driven by hope despite their negative appraisal of humanity. This hope is a form of meliorism based on the idea that it is possible and feasible for humanity to transform itself, and the activist tries to realize this ideal. A weaker version of this approach is to try to improve the world incrementally to avoid some of the worst outcomes without the hope of fully solving the basic problem. Activist misanthropes differ from quietist misanthropes, who take a pessimistic approach toward what a person can do to bring about a transformation or significant improvements. In contrast to the more drastic reactions of the other responses mentioned, they resign themselves to quiet acceptance and small-scale avoidance. A further approach centers on humor, using mockery and ridicule to respond to the absurdity of the human condition. An example is that humans hurt each other and risk future self-destruction for trivial concerns like a marginal increase in profit. This way, humor can act both as a mirror to portray the terrible truth of the situation and as its palliative at the same time.
Forms of human flaws
A core aspect of misanthropy is that its negative attitude toward humanity is based on human flaws. Various misanthropes have provided extensive lists of flaws, including cruelty, greed, selfishness, wastefulness, dogmatism, self-deception, and insensitivity to beauty. These flaws can be categorized in many ways. It is often held that moral flaws constitute the most serious case. Other flaws discussed in the contemporary literature include intellectual flaws, aesthetic flaws, and spiritual flaws.
Moral flaws are usually understood as tendencies to violate moral norms or as mistaken attitudes toward what is the good. They include cruelty, indifference to the suffering of others, selfishness, moral laziness, cowardice, injustice, greed, and ingratitude. The harm done because of these flaws can be divided into three categories: harm done directly to humans, harm done directly to other animals, and harm done indirectly to both humans and other animals by harming the environment. Examples of these categories include the Holocaust, factory farming of livestock, and pollution causing climate change. In this regard, it is not just relevant that human beings cause these forms of harm but also that they are morally responsible for them. This is based on the idea that they can understand the consequences of their actions and could act differently. However, they decide not to, for example, because they ignore the long-term well-being of others in order to get short-term personal benefits.
Intellectual flaws concern cognitive capacities. They can be defined as what leads to false beliefs, what obstructs knowledge, or what violates the demands of rationality. They include intellectual vices, like arrogance, wishful thinking, and dogmatism. Further examples are stupidity, gullibility, and cognitive biases, like the confirmation bias, the self-serving bias, the hindsight bias, and the anchoring bias. Intellectual flaws can work in tandem with all kinds of vices: for instance, they may deceive someone about having a vice, as when mindlessness keeps a person from recognizing it, which prevents the affected person from addressing it and improving themselves. They also include forms of self-deceit, wilful ignorance, and being in denial about something. Similar considerations have prompted some traditions to see intellectual failings, like ignorance, as the root of all evil.
Aesthetic flaws are usually not given the same importance as moral and intellectual flaws, but they also carry some weight for misanthropic considerations. These flaws relate to beauty and ugliness. They concern ugly aspects of human life itself, like defecation and aging. Other examples are ugliness caused by human activities, like pollution and litter, and inappropriate attitudes toward aesthetic aspects, like being insensitive to beauty.
Causes
Various psychological and social factors have been identified in the academic literature as possible causes of misanthropic sentiments. The individual factors by themselves may not be able to fully explain misanthropy but can show instead how it becomes more likely. For example, disappointments and disillusionments in life can cause a person to adopt a misanthropic outlook. In this regard, the more idealistic and optimistic the person initially was, the stronger this reversal and the following negative outlook tend to be. This type of psychological explanation is found as early as Plato's Phaedo. In it, Socrates considers a person who trusts and admires someone without knowing them sufficiently well. He argues that misanthropy may arise if it is discovered later that the admired person has serious flaws. In this case, the initial attitude is reversed and universalized to apply to all others, leading to general distrust and contempt toward other humans. Socrates argues that this becomes more likely if the admired person is a close friend and if it happens more than once. This form of misanthropy may be accompanied by a feeling of moral superiority in which the misanthrope considers themselves to be better than everyone else.
Other types of negative personal experiences in life may have a similar effect. Andrew Gibson uses this line of thought to explain why some philosophers became misanthropes. He uses the example of Thomas Hobbes to explain how a politically unstable environment and the frequent wars can foster a misanthropic attitude. Regarding Arthur Schopenhauer, he states that being forced to flee one's home at an early age and never finding a place to call home afterward can have a similar effect. Another psychological factor concerns negative attitudes toward the human body, especially in the form of general revulsion from sexuality.
Besides the psychological causes, some wider social circumstances may also play a role. Generally speaking, the more negative the circumstances are, the more likely misanthropy becomes. For instance, according to political scientist Eric M. Uslaner, socio-economic inequality in the form of unfair distribution of wealth increases the tendency to adopt a misanthropic perspective. This has to do with the fact that inequality tends to undermine trust in the government and others. Uslaner suggests that it may be possible to overcome or reduce this source of misanthropy by implementing policies that build trust and promote a more equal distribution of wealth. The political regime is another relevant factor. This specifically concerns authoritarian regimes using all means available to repress their population and stay in power. For example, it has been argued that the severe forms of repression of the Ancien Régime in the late 17th century made it more likely for people to adopt a misanthropic outlook because their freedom was denied. Democracy may have the opposite effect since it allows more personal freedom due to its more optimistic outlook on human nature.
Empirical studies often use questions related to trust in other people to measure misanthropy. This concerns specifically whether the person believes that others would be fair and helpful. In an empirical study on misanthropy in American society, Tom W. Smith concludes that factors responsible for an increased misanthropic outlook are low socioeconomic status, being from racial and ethnic minorities, and having experienced recent negative events in one's life. In regard to religion, misanthropy is higher for people who do not attend church and for fundamentalists. Some factors seem to play no significant role, like gender, having undergone a divorce, and never having been married. Another study by Morris Rosenberg finds that misanthropy is linked to certain political outlooks. They include being skeptical about free speech and a tendency to support authoritarian policies. This concerns, for example, tendencies to suppress political and religious liberties.
Arguments
Various discussions in the academic literature concern the question of whether misanthropy is an accurate assessment of humanity and what the consequences of adopting it are. Many proponents of misanthropy focus on human flaws together with examples of when they exercise their negative influences. They argue that these flaws are so severe that misanthropy is an appropriate response.
Special importance in this regard is usually given to moral faults. This is based on the idea that humans do not merely cause a great deal of suffering and destruction but are also morally responsible for them. The reason is that they are intelligent enough to understand the consequences of their actions and could potentially make balanced long-term decisions instead of focusing on personal short-term gains.
Proponents of misanthropy sometimes focus on extreme individual manifestations of human flaws, like mass killings ordered by dictators. Others emphasize that the problem is not limited to a few cases, for example, that many ordinary people are complicit in their manifestation by supporting the political leaders committing them. A closely related argument is to claim that the underlying flaws are there in everyone, even if they reach their most extreme manifestation only in a few. Another approach is to focus not on the grand extreme cases but on the ordinary small-scale manifestations of human flaws in everyday life, such as lying, cheating, breaking promises, and being ungrateful.
Some arguments for misanthropy focus not only on general tendencies but on actual damage caused by humans in the past. This concerns, for instance, damages done to the ecosystem, like ecological catastrophes resulting in mass extinctions.
Criticism
Various theorists have criticized misanthropy. Some opponents acknowledge that there are extreme individual manifestations of human flaws, like mentally ill perpetrators, but claim that these cases do not reflect humanity at large and cannot justify the misanthropic attitude. For instance, while there are cases of extreme human brutality, like the mass killings committed by dictators and their forces, listing such cases is not sufficient for condemning humanity at large.
Some critics of misanthropy acknowledge that humans have various flaws but state that they present just one side of humanity while evaluative attitudes should take all sides into account. This line of thought is based on the idea that humans possess equally important virtues that make up for their shortcomings. For example, accounts that focus only on the great wars, cruelties, and tragedies in human history ignore its positive achievements in the sciences, arts, and humanities.
Another explanation given by critics is that the negative assessment should not be directed at humanity but at some social forces. These forces can include capitalism, communism, patriarchy, racism, religious fundamentalism, or imperialism. Supporters of this argument would adopt an opposition to one of these social forces rather than a misanthropic opposition to humanity.
Some objections to misanthropy are based not on whether this attitude appropriately reflects the negative value of humanity but on the costs of accepting such a position. The costs can affect both the individual misanthrope and the society at large. This is especially relevant if misanthropy is linked to hatred, which may turn easily into violence against social institutions and other humans and may result in harm. Misanthropy may also deprive the person of most pleasures by making them miserable and friendless.
Another form of criticism focuses more on the theoretical level and claims that misanthropy is an inconsistent and self-contradictory position. An example of this inconsistency is the misanthrope's tendency to denounce the social world while still being engaged in it and being unable to fully leave it behind. This criticism applies specifically to misanthropes who exclude themselves from the negative evaluation and look down on others with contempt from an arrogant position of inflated ego but it may not apply to all types of misanthropy. A closely related objection is based on the claim that misanthropy is an unnatural attitude and should therefore be seen as an aberration or a pathological case.
In various disciplines
History of philosophy
Misanthropy has been discussed and exemplified by philosophers throughout history. One of the earliest cases was the pre-Socratic philosopher Heraclitus. He is often characterized as a solitary person who was not fond of social interaction. A central factor in his negative outlook on human beings was their lack of comprehension of the true nature of reality. This concerns especially cases in which they remain in a state of ignorance despite having received a thorough explanation of the issue in question. Another early discussion is found in Plato's Phaedo, where misanthropy is characterized as the result of frustrated expectations and excessively naïve optimism.
Various reflections on misanthropy are also found in the cynic school of philosophy. There it is argued, for instance, that humans keep on reproducing and multiplying the evils they are attempting to flee. An example given by the first-century philosopher Dio Chrysostom is that humans move to cities to defend themselves against outsiders but this process thwarts their initial goal by leading to even more violence due to high crime rates within the city. Diogenes is a well-known cynic misanthrope. He saw other people as hypocritical and superficial. He openly rejected all kinds of societal norms and values, often provoking others by consciously breaking conventions and behaving rudely.
Thomas Hobbes is an example of misanthropy in early modern philosophy. His negative outlook on humanity is reflected in many of his works. For him, humans are egoistic and violent: they act according to their self-interest and are willing to pursue their goals at the expense of others. In their natural state, this leads to a never-ending war in which "every man to every man ... is an enemy". He saw the establishment of an authoritative state characterized by the strict enforcement of laws to maintain order as the only way to tame the violent human nature and avoid perpetual war.
A further type of misanthropy is found in Jean-Jacques Rousseau. He idealizes the harmony and simplicity found in nature and contrasts them with the confusion and disorder found in humanity, especially in the form of society and institutions. For instance, he claims that "Man is born free; and everywhere he is in chains". This negative outlook was also reflected in his lifestyle: he lived in solitude and preferred the company of plants to that of humans.
Arthur Schopenhauer is often mentioned as a prime example of misanthropy. According to him, everything in the world, including humans and their activities, is an expression of one underlying will. This will is blind, which causes it to continuously engage in futile struggles. On the level of human life, this "presents itself as a continual deception" since it is driven by pointless desires. They are mostly egoistic and often result in injustice and suffering to others. Once they are satisfied, they only give rise to new pointless desires and more suffering. In this regard, Schopenhauer dismisses most things that are typically considered precious or meaningful in human life, like romantic love, individuality, and liberty. He holds that the best response to the human condition is a form of asceticism by denying the expression of the will. This is only found in rare humans and "the dull majority of men" does not live up to this ideal.
Friedrich Nietzsche, who was strongly influenced by Schopenhauer, is also often cited as an example of misanthropy. He saw man as a decadent and "sick animal" that shows no progress over other animals. He even expressed a negative attitude toward apes since they are more similar to human beings than other animals, for example, with regard to cruelty. For Nietzsche, a noteworthy flaw of human beings is their tendency to create and enforce systems of moral rules that favor weak people and suppress true greatness. He held that the human being is something to be overcome and used the term Übermensch to describe an ideal individual who has transcended traditional moral and societal norms.
Religion
Some misanthropic views are also found in religious teachings. In Christianity, for instance, this is linked to the sinful nature of humans and the widespread manifestation of sin in everyday life. Common forms of sin are discussed in terms of the seven deadly sins. Examples are an excessive sense of self-importance in the form of pride and strong sexual cravings constituting lust. They also include the tendency to follow greed for material possessions as well as being envious of the possessions of others. According to the doctrine of original sin, this flaw is found in every human being since the doctrine states that human nature is already tainted by sin from birth by inheriting it from Adam and Eve's rebellion against God's authority. John Calvin's theology of Total depravity has been described by some theologians as misanthropic.
Misanthropic perspectives can also be discerned in various Buddhist teachings. For example, Buddha had a negative outlook on the widespread flaws of human beings, including lust, hatred, delusion, sorrow, and despair. These flaws are identified with some form of craving or attachment (taṇhā) and cause suffering (dukkha). Buddhists hold that it is possible to overcome these failings in the process of achieving Buddhahood or enlightenment. However, this is seen as a difficult achievement, meaning that these failings apply to most human beings.
However, there are also many religious teachings opposed to misanthropy, such as the emphasis on kindness and helping others. In Christianity, this is found in the concept of agape, which involves selfless and unconditional love in the form of compassion and a willingness to help others. Buddhists see the practice of loving kindness (metta) as a central aspect that implies a positive intention of compassion and the expression of kindness toward all sentient beings.
Literature and popular culture
Many examples of misanthropy are also found in literature and popular culture. Timon of Athens by William Shakespeare is a famous portrayal of the life of the Ancient Greek Timon, who is widely known for his extreme misanthropic attitude. Shakespeare depicts him as a wealthy and generous gentleman. However, he becomes disillusioned with his ungrateful friends and humanity at large. This way, his initial philanthropy turns into an unrestrained hatred of humanity, which prompts him to leave society in order to live in a forest. Molière's play The Misanthrope is another famous example. Its protagonist, Alceste, has a low opinion of the people around him. He tends to focus on their flaws and openly criticizes them for their superficiality, insincerity, and hypocrisy. He rejects most social conventions and thereby often offends others, for example, by refusing to engage in social niceties like polite small talk.
The author Jonathan Swift had a reputation for being misanthropic. In some statements, he openly declares that he hates and detests "that animal called man". Misanthropy is also found in many of his works. An example is Gulliver's Travels, which tells the adventures of the protagonist Gulliver, who journeys to various places, like an island inhabited by tiny people and a land ruled by intelligent horses. Through these experiences of the contrast between humans and other species, he comes to see more and more the deep flaws of humanity, leading him to develop a revulsion toward other human beings. Ebenezer Scrooge from Charles Dickens's A Christmas Carol is an often-cited example of misanthropy. He is described as a cold-hearted, solitary miser who detests Christmas. He is greedy, selfish, and has no regard for the well-being of others. Other writers associated with misanthropy include Gustave Flaubert and Philip Larkin.
The Joker from the DC Universe is an example of misanthropy in popular culture. He is one of the main antagonists of Batman and acts as an agent of chaos. He believes that people are selfish, cruel, irrational, and hypocritical. He is usually portrayed as a sociopath with a twisted sense of humor who uses violent means to expose and bring down organized society.
Related concepts
Philosophical pessimism
Misanthropy is closely related but not identical to philosophical pessimism. Philosophical pessimism is the view that life is not worth living or that the world is a bad place, for example, because it is meaningless and full of suffering. This view is exemplified by Arthur Schopenhauer and Philipp Mainländer. Philosophical pessimism is often accompanied by misanthropy if the proponent holds that humanity is also bad and partially responsible for the negative value of the world. However, the two views do not require each other and can be held separately. A non-misanthropic pessimist may hold, for instance, that humans are just victims of a terrible world but not to blame for it. Eco-misanthropists, by contrast, may claim that the world and its nature are valuable but that humanity exerts a negative and destructive influence.
Antinatalism and human extinction
Antinatalism is the view that coming into existence is bad and that humans have a duty to abstain from procreation. A central argument for antinatalism is called the misanthropic argument. It sees the deep flaws of humans and their tendency to cause harm as a reason for avoiding the creation of more humans. These harms include wars, genocides, factory farming, and damages done to the environment. This argument contrasts with philanthropic arguments, which focus on the future suffering of the human about to come into existence. They argue that the only way to avoid their future suffering is to prevent them from being born. The Voluntary Human Extinction Movement and the Church of Euthanasia are well-known examples of social movements in favor of antinatalism and human extinction.
Antinatalism is commonly endorsed by misanthropic thinkers. However, there are numerous other ways that could lead to the involuntary extinction of the human species; suggested threats to its long-term survival include nuclear wars, self-replicating nanorobots, and super-pathogens. While such cases can be seen as terrible scenarios for all life, misanthropes may instead interpret them as reasons for hope that the dominance of humanity in history will eventually come to an end. A similar sentiment is expressed by Bertrand Russell. He states in relation to the existence of human life on earth and its misdeeds that they are "a passing nightmare; in time the earth will become again incapable of supporting life, and peace will return."
Human exceptionalism and deep ecology
Human exceptionalism is the claim that human beings have unique importance and are exceptional compared to all other species. It is often based on the claim that they stand out because of their special capacities, like intelligence, rationality, and autonomy. In religious contexts, it is frequently explained in relation to a unique role that God foresaw for them or that they were created in God's image. Human exceptionalism is usually combined with the claim that human well-being matters more than the well-being of other species. This line of thought can be used to draw various ethical conclusions. One is the claim that humans have the right to rule the planet and impose their will on other species. Another is that inflicting harm on other species may be morally acceptable if it is done with the purpose of promoting human well-being and excellence.
Generally speaking, the position of human exceptionalism is at odds with misanthropy in relation to the value of humanity. But this is not necessarily the case and it may be possible to hold both positions at the same time. One way to do this is to claim that humanity is exceptional because of a few rare individuals but that the average person is bad. Another approach is to hold that human beings are exceptional in a negative sense: given their destructive and harmful history, they are much worse than any other species.
Theorists in the field of deep ecology are also often critical of human exceptionalism and tend to favor a misanthropic perspective. Deep ecology is a philosophical and social movement that stresses the inherent value of nature and advocates a radical change in human behavior toward nature. Various theorists have criticized deep ecology based on the claim that it is misanthropic by privileging other species over humans. For example, the deep ecology movement Earth First! faced severe criticism when they praised the AIDS epidemic in Africa as a solution to the problem of human overpopulation in their newsletter.
See also
Asociality – lack of motivation to engage in social interaction
Antihumanism – rejection of humanism
Antisocial personality disorder
Cosmicism
Emotional isolation
Hatred (video game)
Nihilism
Social alienation
References
Citations
Sources
External links
Anti-social behaviour
Concepts in social philosophy
Human behavior
Philosophical pessimism
Philosophy of life
Psychological attitude
Social emotions | Misanthropy | [
"Biology"
] | 7,434 | [
"Anti-social behaviour",
"Behavior",
"Human behavior"
] |
47,200 | https://en.wikipedia.org/wiki/4%20Vesta | Vesta (minor-planet designation: 4 Vesta) is one of the largest objects in the asteroid belt, with a mean diameter of . It was discovered by the German astronomer Heinrich Wilhelm Matthias Olbers on 29 March 1807 and is named after Vesta, the virgin goddess of home and hearth from Roman mythology.
Vesta is thought to be the second-largest asteroid, both by mass and by volume, after the dwarf planet Ceres. Measurements give it a nominal volume only slightly larger than that of Pallas (about 5% greater), but it is 25% to 30% more massive. It constitutes an estimated 9% of the mass of the asteroid belt. Vesta is the only known remaining rocky protoplanet (with a differentiated interior) of the kind that formed the terrestrial planets. Numerous fragments of Vesta were ejected by collisions one and two billion years ago that left two enormous craters occupying much of Vesta's southern hemisphere. Debris from these events has fallen to Earth as howardite–eucrite–diogenite (HED) meteorites, which have been a rich source of information about Vesta.
Vesta is the brightest asteroid visible from Earth. It is regularly as bright as magnitude 5.1, at which times it is faintly visible to the naked eye. Its maximum distance from the Sun is slightly greater than the minimum distance of Ceres from the Sun, although its orbit lies entirely within that of Ceres.
NASA's Dawn spacecraft entered orbit around Vesta on 16 July 2011 for a one-year exploration and left the orbit of Vesta on 5 September 2012 en route to its final destination, Ceres. Researchers continue to examine data collected by Dawn for additional insights into the formation and history of Vesta.
History
Discovery
Heinrich Olbers discovered Pallas in 1802, the year after the discovery of Ceres. He proposed that the two objects were the remnants of a destroyed planet. He sent a letter with his proposal to the British astronomer William Herschel, suggesting that a search near the locations where the orbits of Ceres and Pallas intersected might reveal more fragments. These orbital intersections were located in the constellations of Cetus and Virgo. Olbers commenced his search in 1802, and on 29 March 1807 he discovered Vesta in the constellation Virgo—a coincidence, because Ceres, Pallas, and Vesta are not fragments of a larger body. Because the asteroid Juno had been discovered in 1804, this made Vesta the fourth object to be identified in the region that is now known as the asteroid belt. The discovery was announced in a letter addressed to German astronomer Johann H. Schröter dated 31 March. Because Olbers already had credit for discovering a planet (Pallas; at the time, the asteroids were considered to be planets), he gave the honor of naming his new discovery to German mathematician Carl Friedrich Gauss, whose orbital calculations had enabled astronomers to confirm the existence of Ceres, the first asteroid, and who had computed the orbit of the new planet in the remarkably short time of 10 hours. Gauss decided on the Roman virgin goddess of home and hearth, Vesta.
Name and symbol
Vesta was the fourth asteroid to be discovered, hence the number 4 in its formal designation. The name Vesta, or national variants thereof, is in international use with two exceptions: Greece and China. In Greek, the name adopted was the Hellenic equivalent of Vesta, Hestia; in English, that name is used for a different asteroid (Greeks use the name "Hestia" for both, with the minor-planet numbers used for disambiguation). In Chinese, Vesta is called the 'hearth-god(dess) star', naming the asteroid for Vesta's role, similar to the Chinese names of Uranus, Neptune, and Pluto.
Upon its discovery, Vesta was, like Ceres, Pallas, and Juno before it, classified as a planet and given a planetary symbol. The symbol represented the altar of Vesta with its sacred fire and was designed by Gauss. In Gauss's conception, now obsolete, this was drawn . His form is in the pipeline for Unicode 17.0 as U+1F777 .
The asteroid symbols were gradually retired from astronomical use after 1852, but the symbols for the first four asteroids were resurrected for astrology in the 1970s. The abbreviated modern astrological variant of the Vesta symbol is .
After the discovery of Vesta, no further objects were discovered for 38 years, and during this time the Solar System was thought to have eleven planets. However, in 1845, new asteroids started being discovered at a rapid pace, and by 1851 there were fifteen, each with its own symbol, in addition to the eight major planets (Neptune had been discovered in 1846). It soon became clear that it would be impractical to continue inventing new planetary symbols indefinitely, and some of the existing ones proved difficult to draw quickly. That year, the problem was addressed by Benjamin Apthorp Gould, who suggested numbering asteroids in their order of discovery, and placing this number in a disk (circle) as the generic symbol of an asteroid. Thus, the fourth asteroid, Vesta, acquired the generic symbol . This was soon coupled with the name into an official number–name designation, as the number of minor planets increased. By 1858, the circle had been simplified to parentheses, which were easier to typeset. Other punctuation, such as and was also briefly used, but had more or less completely died out by 1949.
Early measurements
Photometric observations of Vesta were made at the Harvard College Observatory in 1880–1882 and at the Observatoire de Toulouse in 1909. These and other observations allowed the rotation rate of Vesta to be determined by the 1950s. However, the early estimates of the rotation rate came into question because the light curve included variations in both shape and albedo.
Early estimates of the diameter of Vesta ranged from in 1825, to . E.C. Pickering produced an estimated diameter of in 1879, which is close to the modern value for the mean diameter, but the subsequent estimates ranged from a low of up to a high of during the next century. The measured estimates were based on photometry. In 1989, speckle interferometry was used to measure a dimension that varied between during the rotational period. In 1991, an occultation of the star SAO 93228 by Vesta was observed from multiple locations in the eastern United States and Canada. Based on observations from 14 different sites, the best fit to the data was an elliptical profile with dimensions of about . Dawn confirmed this measurement. These measurements will help determine the thermal history, size of the core, role of water in asteroid evolution and what meteorites found on Earth come from these bodies, with the ultimate goal of understanding the conditions and processes present at the solar system's earliest epoch and the role of water content and size in planetary evolution.
Vesta became the first asteroid to have its mass determined. Every 18 years, the asteroid 197 Arete approaches within of Vesta. In 1966, based upon observations of Vesta's gravitational perturbations of Arete, Hans G. Hertz estimated the mass of Vesta at (solar masses). More refined estimates followed, and in 2001 the perturbations of 17 Thetis were used to calculate the mass of Vesta to be . Dawn determined it to be .
Orbit
Vesta orbits the Sun between Mars and Jupiter, within the asteroid belt, with a period of 3.6 Earth years, specifically in the inner asteroid belt, interior to the Kirkwood gap at 2.50 AU. Its orbit is moderately inclined (i = 7.1°, compared to 7° for Mercury and 17° for Pluto) and moderately eccentric (e = 0.09, about the same as for Mars).
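As a rough cross-check on these figures (a worked example, not a value taken from this article), Kepler's third law, with the period in years and the semi-major axis in astronomical units, recovers Vesta's mean distance from the 3.63-year period quoted later in the article:

$$a \approx P^{2/3} = 3.63^{2/3} \approx 2.36\ \text{AU},$$

which indeed places the orbit interior to the 3:1 Kirkwood gap at 2.50 AU.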
True orbital resonances between asteroids are considered unlikely. Because of their small masses relative to their large separations, such relationships should be very rare. Nevertheless, Vesta is able to capture other asteroids into temporary 1:1 resonant orbital relationships (for periods up to 2 million years or more) and about forty such objects have been identified. Decameter-sized objects detected in the vicinity of Vesta by Dawn may be such quasi-satellites rather than proper satellites.
Rotation
Vesta's rotation is relatively fast for an asteroid (5.342 h) and prograde, with the north pole pointing in the direction of right ascension 20 h 32 min, declination +48° (in the constellation Cygnus) with an uncertainty of about 10°. This gives an axial tilt of 29°.
Coordinate systems
Two longitudinal coordinate systems are used for Vesta, with prime meridians separated by 150°. The IAU established a coordinate system in 1997 based on Hubble photos, with the prime meridian running through the center of Olbers Regio, a dark feature 200 km across. When Dawn arrived at Vesta, mission scientists found that the location of the pole assumed by the IAU was off by 10°, so that the IAU coordinate system drifted across the surface of Vesta at 0.06° per year, and also that Olbers Regio was not discernible from up close, and so was not adequate to define the prime meridian with the precision they needed. They corrected the pole, but also established a new prime meridian 4° from the center of Claudia, a sharply defined crater 700 meters across, which they say results in a more logical set of mapping quadrangles. All NASA publications, including images and maps of Vesta, use the Claudian meridian, which is unacceptable to the IAU. The IAU Working Group on Cartographic Coordinates and Rotational Elements recommended a coordinate system, correcting the pole but rotating the Claudian longitude by 150° to coincide with Olbers Regio. It was accepted by the IAU, although it disrupts the maps prepared by the Dawn team, which had been positioned so they would not bisect any major surface features.
Physical characteristics
Vesta is the second most massive body in the asteroid belt, although it is only 28% as massive as Ceres, the most massive body. Vesta is however the most massive body that formed in the asteroid belt, as Ceres is believed to have formed between Jupiter and Saturn. Vesta's density is lower than those of the four terrestrial planets but is higher than those of most asteroids, as well as all of the moons in the Solar System except Io. Vesta's surface area is about the same as the land area of Pakistan, Venezuela, Tanzania, or Nigeria; slightly under . It has a differentiated interior. Vesta is only slightly larger () than 2 Pallas () in mean diameter, but is about 25% more massive.
Vesta's shape is close to a gravitationally relaxed oblate spheroid, but the large concavity and protrusion at the southern pole (see 'Surface features' below) combined with a mass less than precluded Vesta from automatically being considered a dwarf planet under International Astronomical Union (IAU) Resolution XXVI 5. A 2012 analysis of Vesta's shape and gravity field using data gathered by the Dawn spacecraft has shown that Vesta is currently not in hydrostatic equilibrium.
Temperatures on the surface have been estimated to lie between about with the Sun overhead, dropping to about at the winter pole. Typical daytime and nighttime temperatures are and , respectively. This estimate is for 6 May 1996, very close to perihelion, although details vary somewhat with the seasons.
Surface features
Before the arrival of the Dawn spacecraft, some Vestan surface features had already been resolved using the Hubble Space Telescope and ground-based telescopes (e.g., the Keck Observatory). The arrival of Dawn in July 2011 revealed the complex surface of Vesta in detail.
Rheasilvia and Veneneia
The most prominent of these surface features are two enormous impact basins, the -wide Rheasilvia, centered near the south pole; and the wide Veneneia. The Rheasilvia impact basin is younger and overlies the Veneneia. The Dawn science team named the younger, more prominent crater Rheasilvia, after the mother of Romulus and Remus and a mythical vestal virgin. Its width is 95% of the mean diameter of Vesta. The crater is about deep. A central peak rises above the lowest measured part of the crater floor and the highest measured part of the crater rim is above the crater floor low point. It is estimated that the impact responsible excavated about 1% of the volume of Vesta, and it is likely that the Vesta family and V-type asteroids are the products of this collision. If this is the case, then the fact that fragments have survived bombardment until the present indicates that the crater is at most only about 1 billion years old. It would also be the site of origin of the HED meteorites. All the known V-type asteroids taken together account for only about 6% of the ejected volume, with the rest presumably either in small fragments, ejected by approaching the 3:1 Kirkwood gap, or perturbed away by the Yarkovsky effect or radiation pressure. Spectroscopic analyses of the Hubble images have shown that this crater has penetrated deep through several distinct layers of the crust, and possibly into the mantle, as indicated by spectral signatures of olivine.
The large peak at the center of Rheasilvia is high and wide, and is possibly a result of a planetary-scale impact.
Other craters
Several old, degraded craters approach Rheasilvia and Veneneia in size, although none are quite so large. They include Feralia Planitia, which is across. More recent, sharper craters range up to Varronilla and Postumia.
Dust fills some craters, creating so-called dust ponds: smooth deposits of dust that accumulate in depressions (such as craters) on bodies without a significant atmosphere, contrasting with the rockier terrain around them. On Vesta, both type 1 (formed from impact melt) and type 2 (electrostatically formed) dust ponds have been identified in the equatorial region, within 0°–30° N/S; ten craters have been identified with such formations.
"Snowman craters"
The "snowman craters" are a group of three adjacent craters in Vesta's northern hemisphere. Their official names, from largest to smallest (west to east), are Marcia, Calpurnia, and Minucia. Marcia is the youngest and cross-cuts Calpurnia. Minucia is the oldest.
Troughs
The majority of the equatorial region of Vesta is sculpted by a series of parallel troughs designated Divalia Fossae; its longest trough is wide and long. Despite the fact that Vesta is only one-seventh the size of the Moon, Divalia Fossae dwarfs the Grand Canyon. A second series, inclined to the equator, is found further north. This northern trough system is named Saturnalia Fossae, with its largest trough being roughly 40 km wide and over 370 km long. These troughs are thought to be large-scale graben resulting from the impacts that created the Rheasilvia and Veneneia craters, respectively. They are some of the longest chasms in the Solar System, nearly as long as Ithaca Chasma on Tethys. The troughs may be graben that formed after another asteroid collided with Vesta, a process that can happen only in a body that, like Vesta, is differentiated. Vesta's differentiation is one of the reasons why scientists consider it a protoplanet. Alternatively, it is proposed that the troughs may be radial sculptures created by secondary cratering from Rheasilvia.
Surface composition
Compositional information from the visible and infrared spectrometer (VIR), gamma-ray and neutron detector (GRaND), and framing camera (FC), all indicate that the majority of the surface composition of Vesta is consistent with the composition of the howardite, eucrite, and diogenite meteorites. The Rheasilvia region is richest in diogenite, consistent with the Rheasilvia-forming impact excavating material from deeper within Vesta. The presence of olivine within the Rheasilvia region would also be consistent with excavation of mantle material. However, olivine has only been detected in localized regions of the northern hemisphere, not within Rheasilvia. The origin of this olivine is currently unclear. Though olivine was expected by astronomers to have originated from Vesta's mantle prior to the arrival of the Dawn orbiter, the lack of olivine within the Rheasilvia and Veneneia impact basins complicates this view. Both impact basins excavated Vestian material down to 60–100 km, far deeper than the expected thickness of ~30–40 km for Vesta's crust. Vesta's crust may be far thicker than expected or the violent impact events that created Rheasilvia and Veneneia may have mixed material enough to obscure olivine from observations. Alternatively, Dawn observations of olivine could instead be due to delivery by olivine-rich impactors, unrelated to Vesta's internal structure.
Features associated with volatiles
Pitted terrain has been observed in four craters on Vesta: Marcia, Cornelia, Numisia and Licinia. The formation of the pitted terrain is proposed to be degassing of impact-heated volatile-bearing material. Along with the pitted terrain, curvilinear gullies are found in Marcia and Cornelia craters. The curvilinear gullies end in lobate deposits, which are sometimes covered by pitted terrain, and are proposed to form by the transient flow of liquid water after buried deposits of ice were melted by the heat of the impacts. Hydrated materials have also been detected, many of which are associated with areas of dark material. Consequently, dark material is thought to be largely composed of carbonaceous chondrite, which was deposited on the surface by impacts. Carbonaceous chondrites are comparatively rich in mineralogically bound OH.
Geology
A large collection of potential samples from Vesta is accessible to scientists, in the form of over 1200 HED meteorites (Vestan achondrites), giving insight into Vesta's geologic history and structure. NASA Infrared Telescope Facility (NASA IRTF) studies of one asteroid suggest that it originated from deeper within Vesta than the HED meteorites.
Vesta is thought to consist of a metallic iron–nickel core 214–226 km in diameter, an overlying rocky olivine mantle, and a surface crust. From the first appearance of calcium–aluminium-rich inclusions (the first solid matter in the Solar System, forming about 4.567 billion years ago), a likely timeline for Vesta's formation and differentiation has been proposed.
Vesta is the only known intact asteroid that has been resurfaced in this manner. Because of this, some scientists refer to Vesta as a protoplanet. However, the presence of iron meteorites and achondritic meteorite classes without identified parent bodies indicates that there once were other differentiated planetesimals with igneous histories, which have since been shattered by impacts.
On the basis of the sizes of V-type asteroids (thought to be pieces of Vesta's crust ejected during large impacts), and the depth of Rheasilvia crater (see below), the crust is thought to be roughly thick.
Findings from the Dawn spacecraft provide evidence that the troughs that wrap around Vesta could be graben formed by impact-induced faulting (see Troughs section above), meaning that Vesta has more complex geology than other asteroids. Vesta's differentiated interior implies that it was in hydrostatic equilibrium and thus a dwarf planet in the past, but it is not today. The impacts that created the Rheasilvia and Veneneia craters occurred when Vesta was no longer warm and plastic enough to return to an equilibrium shape, distorting its once rounded shape and prohibiting it from being classified as a dwarf planet today.
Regolith
Vesta's surface is covered by regolith distinct from that found on the Moon or asteroids such as Itokawa. This is because space weathering acts differently. Vesta's surface shows no significant trace of nanophase iron because the impact speeds on Vesta are too low to make rock melting and vaporization an appreciable process. Instead, regolith evolution is dominated by brecciation and subsequent mixing of bright and dark components. The dark component is probably due to the infall of carbonaceous material, whereas the bright component is the original Vesta basaltic soil.
Fragments
Some small Solar System bodies are suspected to be fragments of Vesta caused by impacts. The Vestian asteroids and HED meteorites are examples. The V-type asteroid 1929 Kollaa has been determined to have a composition akin to cumulate eucrite meteorites, indicating its origin deep within Vesta's crust.
Vesta is currently one of only eight identified Solar System bodies of which we have physical samples, coming from a number of meteorites suspected to be Vestan fragments. It is estimated that 1 out of 16 meteorites originated from Vesta. The other identified Solar System samples are from Earth itself, meteorites from Mars, meteorites from the Moon, and samples returned from the Moon, the comet Wild 2, and the asteroids 25143 Itokawa, 162173 Ryugu, and 101955 Bennu.
Exploration
In 1981, a proposal for an asteroid mission was submitted to the European Space Agency (ESA). Named the Asteroidal Gravity Optical and Radar Analysis (AGORA), this spacecraft was to launch some time in 1990–1994 and perform two flybys of large asteroids. The preferred target for this mission was Vesta. AGORA would reach the asteroid belt either by a gravitational slingshot trajectory past Mars or by means of a small ion engine. However, the proposal was refused by the ESA. A joint NASA–ESA asteroid mission was then drawn up for a Multiple Asteroid Orbiter with Solar Electric Propulsion (MAOSEP), with one of the mission profiles including an orbit of Vesta. NASA indicated they were not interested in an asteroid mission. Instead, the ESA set up a technological study of a spacecraft with an ion drive. Other missions to the asteroid belt were proposed in the 1980s by France, Germany, Italy and the United States, but none were approved. Exploration of Vesta by fly-by and impacting penetrator was the second main target of the first plan of the multi-aimed Soviet Vesta mission, developed in cooperation with European countries for realisation in 1991–1994 but canceled due to the dissolution of the Soviet Union.
In the early 1990s, NASA initiated the Discovery Program, which was intended to be a series of low-cost scientific missions. In 1996, the program's study team recommended a mission to explore the asteroid belt using a spacecraft with an ion engine as a high priority. Funding for this program remained problematic for several years, but by 2004 the Dawn vehicle had passed its critical design review and construction proceeded.
It launched on 27 September 2007 as the first space mission to Vesta. On 3 May 2011, Dawn acquired its first targeting image 1.2 million kilometers from Vesta. On 16 July 2011, NASA confirmed that it had received telemetry from Dawn indicating that the spacecraft had successfully entered Vesta's orbit. It was scheduled to orbit Vesta for one year, until July 2012. Dawn's arrival coincided with late summer in the southern hemisphere of Vesta, with the large crater at Vesta's south pole (Rheasilvia) in sunlight. Because a season on Vesta lasts eleven months, the northern hemisphere, including anticipated compression fractures opposite the crater, would become visible to Dawn's cameras before it left orbit. Dawn left orbit around Vesta on 4 September 2012 to travel to Ceres.
NASA/DLR released imagery and summary information from a survey orbit, two high-altitude orbits (60–70 m/pixel) and a low-altitude mapping orbit (20 m/pixel), including digital terrain models, videos and atlases. Scientists also used Dawn to calculate Vesta's precise mass and gravity field. The subsequent determination of the J2 component yielded a core diameter estimate of about 220 km, assuming a crustal density similar to that of the HED meteorites.
Dawn data can be accessed by the public at the UCLA website.
Observations from Earth orbit
Observations from Dawn
Vesta comes into view as the Dawn spacecraft approaches and enters orbit:
True-color images
Detailed images retrieved during the high-altitude (60–70 m/pixel) and low-altitude (~20 m/pixel) mapping orbits are available on the Dawn Mission website of JPL/NASA.
Visibility
Its size and unusually bright surface make Vesta the brightest asteroid, and it is occasionally visible to the naked eye from dark skies (without light pollution). In May and June 2007, Vesta reached a peak magnitude of +5.4, the brightest since 1989. At that time, opposition and perihelion were only a few weeks apart. It was brighter still at its 22 June 2018 opposition, reaching a magnitude of +5.3.
Less favorable oppositions, such as that of late autumn 2008 in the Northern Hemisphere, still had Vesta at a magnitude of +6.5 to +7.3. Even when in conjunction with the Sun, Vesta has a magnitude of around +8.5; thus, from a pollution-free sky, it can be observed with binoculars even at elongations much smaller than at opposition.
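To illustrate why Vesta hovers near the naked-eye limit, the minimal sketch below evaluates the distance term of the standard asteroid magnitude relation, m ≈ H + 5 log10(rΔ), at zero phase angle. The absolute magnitude H ≈ 3.2 and the perihelion distance of roughly 2.15 AU (from a ≈ 2.36 AU and e ≈ 0.09) are assumed inputs rather than figures taken from this article, and the phase-angle correction is omitted.

```python
import math

def apparent_magnitude(h_abs, r_sun_au, delta_earth_au):
    """Apparent magnitude from the absolute magnitude H, the Sun-asteroid
    distance r and the Earth-asteroid distance delta (both in AU),
    ignoring the phase-angle correction, which is a reasonable
    simplification near opposition where the phase angle is small."""
    return h_abs + 5 * math.log10(r_sun_au * delta_earth_au)

# Opposition near perihelion: r ~ 2.15 AU, so delta ~ r - 1 AU ~ 1.15 AU.
print(round(apparent_magnitude(3.2, 2.15, 1.15), 1))  # ~5.2, near the quoted peak of +5.3 to +5.4
```

With the much larger Sun and Earth distances near conjunction, the same relation pushes Vesta several magnitudes fainter, into the binocular range described above.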
2010–2011
In 2010, Vesta reached opposition in the constellation of Leo on the night of 17–18 February, at about magnitude 6.1, a brightness that makes it visible in binocular range but generally not for the naked eye. Under perfect dark sky conditions where all light pollution is absent it might be visible to an experienced observer without the use of a telescope or binoculars. Vesta came to opposition again on 5 August 2011, in the constellation of Capricornus at about magnitude 5.6.
2012–2013
Vesta was at opposition again on 9 December 2012. According to Sky & Telescope magazine, Vesta came within about 6 degrees of 1 Ceres during the winter of 2012 and the spring of 2013. Vesta orbits the Sun in 3.63 years and Ceres in 4.6 years, so every 17.4 years Vesta overtakes Ceres (the previous overtaking was in April 1996). On 1 December 2012, Vesta had a magnitude of 6.6, but it had decreased to 8.4 by 1 May 2013.
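The 17.4-year interval follows from the standard synodic-period relation; as a rough check using only the rounded periods quoted above:

$$\frac{1}{P_{\text{syn}}} = \frac{1}{P_{\text{Vesta}}} - \frac{1}{P_{\text{Ceres}}} = \frac{1}{3.63\ \text{yr}} - \frac{1}{4.6\ \text{yr}}, \qquad P_{\text{syn}} \approx 17\ \text{yr},$$

consistent with the quoted figure once more precise orbital periods are used.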
2014
Ceres and Vesta came within one degree of each other in the night sky in July 2014.
See also
3103 Eger
3551 Verenia
3908 Nyx
4055 Magellan
Asteroids in fiction
Diogenite
Eucrite
List of former planets
Howardite
Vesta family (vestoids)
List of tallest mountains in the Solar System
Notes
References
Bibliography
The Dawn Mission to Minor Planets 4 Vesta and 1 Ceres, Christopher T. Russell and Carol A. Raymond (editors), Springer (2011)
Keil, K., "Geological History of Asteroid 4 Vesta: The Smallest Terrestrial Planet", in Asteroids III, William Bottke, Alberto Cellino, Paolo Paolicchi, and Richard P. Binzel (editors), University of Arizona Press (2002)
External links
Interactive 3D gravity simulation of the Dawn spacecraft in orbit around Vesta
Vesta Trek – An integrated map browser of datasets and maps for 4 Vesta
JPL Ephemeris
Views of the Solar System: Vesta
HubbleSite: Hubble Maps the Asteroid Vesta
Encyclopædia Britannica, Vesta – full article
HubbleSite: short movie composed from Hubble Space Telescope images from November 1994.
Adaptive optics views of Vesta from Keck Observatory
4 Vesta images at ESA/Hubble
Dawn at Vesta (NASA press kit on Dawn's operations at Vesta)
NASA video
Vesta atlas
Former dwarf planets
Former dwarf planet candidates
Articles containing video clips
V-type asteroids (Tholen)
V-type asteroids (SMASS)
Vesta (mythology)
Solar System | 4 Vesta | [
"Astronomy"
] | 5,833 | [
"Outer space",
"Solar System"
] |
47,264 | https://en.wikipedia.org/wiki/Asteroid%20belt | The asteroid belt is a torus-shaped region in the Solar System, centered on the Sun and roughly spanning the space between the orbits of the planets Jupiter and Mars. It contains a great many solid, irregularly shaped bodies called asteroids or minor planets. The identified objects are of many sizes, but much smaller than planets, and, on average, are about one million kilometers (or six hundred thousand miles) apart. This asteroid belt is also called the main asteroid belt or main belt to distinguish it from other asteroid populations in the Solar System.
The asteroid belt is the smallest and innermost known circumstellar disc in the Solar System. Classes of small Solar System bodies in other regions are the near-Earth objects, the centaurs, the Kuiper belt objects, the scattered disc objects, the sednoids, and the Oort cloud objects. About 60% of the main belt mass is contained in the four largest asteroids: Ceres, Vesta, Pallas, and Hygiea. The total mass of the asteroid belt is estimated to be 3% that of the Moon.
Ceres, the only object in the asteroid belt large enough to be a dwarf planet, is about 950 km in diameter, whereas Vesta, Pallas, and Hygiea have mean diameters less than 600 km. The remaining mineralogically classified bodies range in size down to a few metres. The asteroid material is so thinly distributed that numerous uncrewed spacecraft have traversed it without incident. Nonetheless, collisions between large asteroids occur and can produce an asteroid family, whose members have similar orbital characteristics and compositions. Individual asteroids within the belt are categorized by their spectra, with most falling into three basic groups: carbonaceous (C-type), silicate (S-type), and metal-rich (M-type).
The asteroid belt formed from the primordial solar nebula as a group of planetesimals, the smaller precursors of the protoplanets. However, between Mars and Jupiter gravitational perturbations from Jupiter disrupted their accretion into a planet, imparting excess kinetic energy which shattered colliding planetesimals and most of the incipient protoplanets. As a result, 99.9% of the asteroid belt's original mass was lost in the first 100 million years of the Solar System's history. Some fragments eventually found their way into the inner Solar System, leading to meteorite impacts with the inner planets. Asteroid orbits continue to be appreciably perturbed whenever their period of revolution about the Sun forms an orbital resonance with Jupiter. At these orbital distances, a Kirkwood gap occurs as they are swept into other orbits.
History of observation
In 1596, Johannes Kepler wrote, "Between Mars and Jupiter, I place a planet," in his Mysterium Cosmographicum, stating his prediction that a planet would be found there. While analyzing Tycho Brahe's data, Kepler thought that too large a gap existed between the orbits of Mars and Jupiter to fit his own model of where planetary orbits should be found.
In an anonymous footnote to his 1766 translation of Charles Bonnet's Contemplation de la Nature, the astronomer Johann Daniel Titius of Wittenberg noted an apparent pattern in the layout of the planets, now known as the Titius-Bode Law. If one began a numerical sequence at 0, then included 3, 6, 12, 24, 48, etc., doubling each time, and added four to each number and divided by 10, this produced a remarkably close approximation to the radii of the orbits of the known planets as measured in astronomical units, provided one allowed for a "missing planet" (equivalent to 24 in the sequence) between the orbits of Mars (12) and Jupiter (48). In his footnote, Titius declared, "But should the Lord Architect have left that space empty? Not at all." When William Herschel discovered Uranus in 1781, the planet's orbit closely matched the law, leading some astronomers to conclude that a planet had to be between the orbits of Mars and Jupiter.
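As an illustration of the rule described above, here is a short Python sketch; the planet labels are added only for readability and are not part of the original sequence:

```python
# Titius-Bode progression: start with 0, 3, then double (6, 12, 24, ...),
# add 4 to each term and divide by 10 to get approximate orbital radii in AU.
def titius_bode(n_terms):
    seq = [0, 3]
    while len(seq) < n_terms:
        seq.append(seq[-1] * 2)
    return [(s + 4) / 10 for s in seq]

labels = ["Mercury", "Venus", "Earth", "Mars", "(gap/Ceres)",
          "Jupiter", "Saturn", "Uranus"]
for name, au in zip(labels, titius_bode(len(labels))):
    print(f"{name:12s} ~{au:5.1f} AU")
# The fifth slot (2.8 AU) is the "missing planet" position where
# Ceres and the asteroid belt were later found.
```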
On January 1, 1801, Giuseppe Piazzi, chairman of astronomy at the University of Palermo, Sicily, found a tiny moving object in an orbit with exactly the radius predicted by this pattern. He dubbed it "Ceres", after the Roman goddess of the harvest and patron of Sicily. Piazzi initially believed it to be a comet, but its lack of a coma suggested it was a planet.
Thus, the aforementioned pattern predicted the semimajor axes of all eight planets of the time (Mercury, Venus, Earth, Mars, Ceres, Jupiter, Saturn, and Uranus). Concurrent with the discovery of Ceres, an informal group of 24 astronomers dubbed the "celestial police" was formed under the invitation of Franz Xaver von Zach with the express purpose of finding additional planets; they focused their search for them in the region between Mars and Jupiter where the Titius–Bode law predicted there should be a planet.
About 15 months later, Heinrich Olbers, a member of the celestial police, discovered a second object in the same region, Pallas. Unlike the other known planets, Ceres and Pallas remained points of light even under the highest telescope magnifications instead of resolving into discs. Apart from their rapid movement, they appeared indistinguishable from stars.
Accordingly, in 1802, William Herschel suggested they be placed into a separate category, named "asteroids", after the Greek asteroeides, meaning "star-like". Upon completing a series of observations of Ceres and Pallas, he concluded,
Neither the appellation of planets nor that of comets can with any propriety of language be given to these two stars ... They resemble small stars so much as hardly to be distinguished from them. From this, their asteroidal appearance, if I take my name, and call them Asteroids; reserving for myself, however, the liberty of changing that name, if another, more expressive of their nature, should occur.
By 1807, further investigation revealed two new objects in the region: Juno and Vesta. The burning of Lilienthal in the Napoleonic wars, where the main body of work had been done, brought this first period of discovery to a close.
Despite Herschel's coinage, for several decades it remained common practice to refer to these objects as planets and to prefix their names with numbers representing their sequence of discovery: 1 Ceres, 2 Pallas, 3 Juno, 4 Vesta. In 1845, though, the astronomer Karl Ludwig Hencke detected a fifth object (5 Astraea) and, shortly thereafter, new objects were found at an accelerating rate. Counting them among the planets became increasingly cumbersome. Eventually, they were dropped from the planet list (as first suggested by Alexander von Humboldt in the early 1850s) and Herschel's coinage, "asteroids", gradually came into common use.
The discovery of Neptune in 1846 led to the discrediting of the Titius–Bode law in the eyes of scientists because its orbit was nowhere near the predicted position. To date, no scientific explanation for the law has been given, and astronomers' consensus regards it as a coincidence.
Image caption: 951 Gaspra, the first asteroid imaged by a spacecraft, as viewed during Galileo's 1991 flyby; colors are exaggerated.
The expression "asteroid belt" came into use in the early 1850s, although pinpointing who coined the term is difficult. The first English use seems to be in the 1850 translation (by Elise Otté) of Alexander von Humboldt's Cosmos: "[...] and the regular appearance, about the 13th of November and the 11th of August, of shooting stars, which probably form part of a belt of asteroids intersecting the Earth's orbit and moving with planetary velocity". Another early appearance occurred in Robert James Mann's A Guide to the Knowledge of the Heavens: "The orbits of the asteroids are placed in a wide belt of space, extending between the extremes of [...]". The American astronomer Benjamin Peirce seems to have adopted that terminology and to have been one of its promoters.
Over 100 asteroids had been located by mid-1868, and in 1891, the introduction of astrophotography by Max Wolf accelerated the rate of discovery. A total of 1,000 asteroids had been found by 1921, 10,000 by 1981, and 100,000 by 2000. Modern asteroid survey systems now use automated means to locate new minor planets in ever-increasing numbers.
On 22 January 2014, European Space Agency (ESA) scientists reported the detection, for the first definitive time, of water vapor on Ceres, the largest object in the asteroid belt. The detection was made by using the far-infrared abilities of the Herschel Space Observatory. The finding was unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes". According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids".
Origin
Formation
In 1802, shortly after discovering Pallas, Olbers suggested to Herschel and Carl Gauss that Ceres and Pallas were fragments of a much larger planet that once occupied the Mars–Jupiter region, with this planet having suffered an internal explosion or a cometary impact many million years before, while Odesan astronomer K. N. Savchenko suggested that Ceres, Pallas, Juno, and Vesta were escaped moons rather than fragments of the exploded planet. The large amount of energy required to destroy a planet, combined with the belt's low combined mass, which is only about 4% of the mass of Earth's Moon, does not support these hypotheses. Further, the significant chemical differences between the asteroids become difficult to explain if they come from the same planet.
A modern hypothesis for the asteroid belt's creation relates to how, in general for the Solar System, planetary formation is thought to have occurred via a process comparable to the long-standing nebular hypothesis; a cloud of interstellar dust and gas collapsed under the influence of gravity to form a rotating disc of material that then conglomerated to form the Sun and planets. During the first few million years of the Solar System's history, an accretion process of sticky collisions caused the clumping of small particles, which gradually increased in size. Once the clumps reached sufficient mass, they could draw in other bodies through gravitational attraction and become planetesimals. This gravitational accretion led to the formation of the planets.
Planetesimals within the region that would become the asteroid belt were strongly perturbed by Jupiter's gravity. Orbital resonances occurred where the orbital period of an object in the belt formed an integer fraction of the orbital period of Jupiter, perturbing the object into a different orbit; the region lying between the orbits of Mars and Jupiter contains many such orbital resonances. As Jupiter migrated inward following its formation, these resonances would have swept across the asteroid belt, dynamically exciting the region's population and increasing their velocities relative to each other. In regions where the average velocity of the collisions was too high, the shattering of planetesimals tended to dominate over accretion, preventing the formation of a planet. Instead, they continued to orbit the Sun as before, occasionally colliding.
During the early history of the Solar System, the asteroids melted to some degree, allowing elements within them to be differentiated by mass. Some of the progenitor bodies may even have undergone periods of explosive volcanism and formed magma oceans. Because of the relatively small size of the bodies, though, the period of melting was necessarily brief compared to the much larger planets, and had generally ended about 4.5 billion years ago, in the first tens of millions of years of formation. In August 2007, a study of zircon crystals in an Antarctic meteorite believed to have originated from Vesta suggested that it, and by extension the rest of the asteroid belt, had formed rather quickly, within 10 million years of the Solar System's origin.
Evolution
The asteroids are not pristine samples of the primordial Solar System. They have undergone considerable evolution since their formation, including internal heating (in the first few tens of millions of years), surface melting from impacts, space weathering from radiation, and bombardment by micrometeorites. Although some scientists refer to the asteroids as residual planetesimals, other scientists consider them distinct.
The current asteroid belt is believed to contain only a small fraction of the mass of the primordial belt. Computer simulations suggest that the original asteroid belt may have contained mass equivalent to the Earth's. Primarily because of gravitational perturbations, most of the material was ejected from the belt within about 1 million years of formation, leaving behind less than 0.1% of the original mass. Since its formation, the size distribution of the asteroid belt has remained relatively stable; no significant increase or decrease in the typical dimensions of the main-belt asteroids has occurred.
The 4:1 orbital resonance with Jupiter, at a radius of 2.06 astronomical units (AU), can be considered the inner boundary of the asteroid belt. Perturbations by Jupiter send bodies straying there into unstable orbits. Most bodies formed within the radius of this gap were swept up by Mars (which has an aphelion at 1.67 AU) or ejected by its gravitational perturbations in the early history of the Solar System. The Hungaria asteroids lie closer to the Sun than the 4:1 resonance, but are protected from disruption by their high inclination.
When the asteroid belt was first formed, the temperatures at a distance of 2.7 AU from the Sun formed a "snow line" below the freezing point of water. Planetesimals formed beyond this radius were able to accumulate ice.
In 2006, a population of comets was discovered within the asteroid belt beyond the snow line, which may have provided a source of water for Earth's oceans. According to some models, outgassing of water during Earth's formative period was insufficient to form the oceans, requiring an external source such as a cometary bombardment.
The outer asteroid belt appears to include a few objects that may have arrived there during the last few hundred years; these include the object also known as 362P.
Characteristics
Contrary to popular imagery, the asteroid belt is mostly empty. The asteroids are spread over such a large volume that reaching an asteroid without aiming carefully would be improbable. Nonetheless, hundreds of thousands of asteroids are currently known, and the total number ranges in the millions or more, depending on the lower size cutoff. Over 200 asteroids are known to be larger than 100 km, and a survey in the infrared wavelengths has shown that the asteroid belt has between 700,000 and 1.7 million asteroids with a diameter of 1 km or more.
The number of asteroids in the main belt steadily increases with decreasing size. Although the size distribution generally follows a power law, there are 'bumps' in the curve at certain diameters where more asteroids than expected from such a curve are found. Most of the larger asteroids are primordial, having survived from the accretion epoch, whereas most smaller asteroids are products of fragmentation of primordial asteroids. The primordial population of the main belt was probably 200 times what it is today.
The absolute magnitudes of most of the known asteroids are between 11 and 19, with the median at about 16. On average, the distance between the asteroids is about one million kilometers, although this varies among asteroid families, and smaller undetected asteroids might be even closer. The total mass of the asteroid belt is estimated to be about 3% of the mass of the Moon. The four largest objects, Ceres, Vesta, Pallas, and Hygiea, contain an estimated 62% of the belt's total mass, with 39% accounted for by Ceres alone.
Composition
The present day belt consists primarily of three categories of asteroids: C-type carbonaceous asteroids, S-type silicate asteroids, and a hybrid group of X-type asteroids. The hybrid group have featureless spectra, but they can be divided into three groups based on reflectivity, yielding the M-type metallic, P-type primitive, and E-type enstatite asteroids. Additional types have been found that do not fit within these primary classes. There is a compositional trend of asteroid types by increasing distance from the Sun, in the order of S, C, P, and the spectrally-featureless D-types.
Carbonaceous asteroids, as their name suggests, are carbon-rich. They dominate the asteroid belt's outer regions, and are rare in the inner belt. Together they comprise over 75% of the visible asteroids. They are redder in hue than the other asteroids and have a low albedo. Their surface compositions are similar to carbonaceous chondrite meteorites. Chemically, their spectra match the primordial composition of the early Solar System, with hydrogen, helium, and volatiles removed.
S-type (silicate-rich) asteroids are more common toward the inner region of the belt, within 2.5 AU of the Sun. The spectra of their surfaces reveal the presence of silicates and some metal, but no significant carbonaceous compounds. This indicates that their materials have been significantly modified from their primordial composition, probably through melting and reformation. They have a relatively high albedo and form about 17% of the total asteroid population.
M-type (metal-rich) asteroids are typically found in the middle of the main belt, and they make up much of the remainder of the total population. Their spectra resemble that of iron-nickel. Some are believed to have formed from the metallic cores of differentiated progenitor bodies that were disrupted through collision. However, some silicate compounds also can produce a similar appearance. For example, the large M-type asteroid 22 Kalliope does not appear to be primarily composed of metal. Within the asteroid belt, the number distribution of M-type asteroids peaks at a semimajor axis of about 2.7 AU. Whether all M-types are compositionally similar, or whether it is a label for several varieties which do not fit neatly into the main C and S classes is not yet clear.
One mystery is the relative rarity of V-type (Vestoid) or basaltic asteroids in the asteroid belt. Theories of asteroid formation predict that objects the size of Vesta or larger should form crusts and mantles, which would be composed mainly of basaltic rock, resulting in more than half of all asteroids being composed either of basalt or of olivine. However, observations suggest that 99% of the predicted basaltic material is missing. Until 2001, most basaltic bodies discovered in the asteroid belt were believed to originate from the asteroid Vesta (hence their name V-type), but the discovery of the asteroid 1459 Magnya revealed a slightly different chemical composition from the other basaltic asteroids discovered until then, suggesting a different origin. This hypothesis was reinforced by the further discovery in 2007 of two asteroids in the outer belt, 7472 Kumakiri and , with a differing basaltic composition that could not have originated from Vesta. These two are the only V-type asteroids discovered in the outer belt to date.
The temperature of the asteroid belt varies with the distance from the Sun. For dust particles within the belt, typical temperatures range from 200 K (−73 °C) at 2.2 AU down to 165 K (−108 °C) at 3.2 AU. However, due to rotation, the surface temperature of an asteroid can vary considerably as the sides are alternately exposed to solar radiation then to the stellar background.
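The two quoted dust temperatures are consistent with the simple scaling expected for material in radiative equilibrium, where temperature falls off as the inverse square root of distance from the Sun. A minimal sketch under that assumption (the scaling law is an idealization, not a figure from the source):

```python
from math import sqrt

# Idealized equilibrium scaling: T is proportional to 1/sqrt(distance).
T_inner, d_inner = 200.0, 2.2   # K at 2.2 AU, quoted above
d_outer = 3.2                   # AU

T_outer = T_inner * sqrt(d_inner / d_outer)
print(f"Predicted dust temperature at {d_outer} AU: ~{T_outer:.0f} K")
# ~166 K, close to the ~165 K value quoted for 3.2 AU.
```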
Main-belt comets
Several otherwise unremarkable bodies in the outer belt show cometary activity. Because their orbits cannot be explained through the capture of classical comets, many of the outer asteroids are thought to be icy, with the ice occasionally exposed to sublimation through small impacts. Main-belt comets may have been a major source of the Earth's oceans because the deuterium-hydrogen ratio is too low for classical comets to have been the principal source.
Orbits
Most asteroids within the asteroid belt have orbital eccentricities of less than 0.4, and an inclination of less than 30°. The orbital distribution of the asteroids reaches a maximum at an eccentricity around 0.07 and an inclination below 4°. Thus, although a typical asteroid has a relatively circular orbit and lies near the plane of the ecliptic, some asteroid orbits can be highly eccentric or travel well outside the ecliptic plane.
Sometimes, the term "main belt" is used to refer only to the more compact "core" region where the greatest concentration of bodies is found. This lies between the strong 4:1 and 2:1 Kirkwood gaps at 2.06 and 3.27 AU, and at orbital eccentricities less than roughly 0.33, along with orbital inclinations below about 20°. This "core" region contains about 93% of all discovered and numbered minor planets within the Solar System. The JPL Small-Body Database lists over 1 million known main-belt asteroids.
Kirkwood gaps
The semimajor axis of an asteroid is used to describe the dimensions of its orbit around the Sun, and its value determines the minor planet's orbital period. In 1866, Daniel Kirkwood announced the discovery of gaps in the distances of these bodies' orbits from the Sun. They were located in positions where their period of revolution about the Sun was an integer fraction of Jupiter's orbital period. Kirkwood proposed that the gravitational perturbations of the planet led to the removal of asteroids from these orbits.
When the mean orbital period of an asteroid is an integer fraction of the orbital period of Jupiter, a mean-motion resonance with the gas giant is created that is sufficient to perturb an asteroid to new orbital elements. Primordial asteroids entered these gaps because of the migration of Jupiter's orbit. Subsequently, asteroids primarily migrate into these gap orbits due to the Yarkovsky effect, but may also enter because of perturbations or collisions. After entering, an asteroid is gradually nudged into a different, random orbit with a larger or smaller semimajor axis.
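The positions of these gaps follow from Kepler's third law: an asteroid whose orbital period is a fraction q/p of Jupiter's has a semimajor axis of a_Jupiter × (q/p)^(2/3). A short Python sketch, assuming a semimajor axis of 5.204 AU for Jupiter (a standard value, not stated in the text):

```python
A_JUPITER = 5.204  # assumed semimajor axis of Jupiter in AU

def resonance_location(p, q, a_planet=A_JUPITER):
    """Semimajor axis (AU) where an asteroid completes p orbits per q Jovian orbits."""
    return a_planet * (q / p) ** (2.0 / 3.0)

for p, q in [(4, 1), (3, 1), (5, 2), (7, 3), (2, 1)]:
    print(f"{p}:{q} resonance near {resonance_location(p, q):.2f} AU")
# The 4:1 and 2:1 resonances fall near 2.06 and 3.27 AU, matching the
# Kirkwood gaps given elsewhere in this article as the belt's "core" boundaries.
```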
Collisions
The high population of the asteroid belt makes for an active environment, where collisions between asteroids occur frequently (on deep time scales). Impact events between main-belt bodies with a mean radius of 10 km are expected to occur about once every 10 million years. A collision may fragment an asteroid into numerous smaller pieces (leading to the formation of a new asteroid family). Conversely, collisions that occur at low relative speeds may also join two asteroids. After more than 4 billion years of such processes, the members of the asteroid belt now bear little resemblance to the original population.
Evidence suggests that most main belt asteroids between 200 m and 10 km in diameter are rubble piles formed by collisions. These bodies consist of a multitude of irregular objects that are mostly bound together by self-gravity, resulting in significant amounts of internal porosity. Along with the asteroid bodies, the asteroid belt also contains bands of dust with particle radii of up to a few hundred micrometres. This fine material is produced, at least in part, from collisions between asteroids, and by the impact of micrometeorites upon the asteroids. Due to the Poynting–Robertson effect, the pressure of solar radiation causes this dust to slowly spiral inward toward the Sun.
The combination of this fine asteroid dust, as well as ejected cometary material, produces the zodiacal light. This faint auroral glow can be viewed at night extending from the direction of the Sun along the plane of the ecliptic. Asteroid particles that produce visible zodiacal light average about 40 μm in radius. The typical lifetimes of main-belt zodiacal cloud particles are about 700,000 years. Thus, to maintain the bands of dust, new particles must be steadily produced within the asteroid belt. It was once thought that collisions of asteroids form a major component of the zodiacal light. However, computer simulations by Nesvorný and colleagues attributed 85 percent of the zodiacal-light dust to fragmentations of Jupiter-family comets, rather than to comets and collisions between asteroids in the asteroid belt. At most 10 percent of the dust is attributed to the asteroid belt.
Meteorites
Some of the debris from collisions can form meteoroids that enter the Earth's atmosphere. Of the 50,000 meteorites found on Earth to date, 99.8 percent are believed to have originated in the asteroid belt.
Families and groups
In 1918, the Japanese astronomer Kiyotsugu Hirayama noticed that the orbits of some of the asteroids had similar parameters, forming families or groups.
Approximately one-third of the asteroids in the asteroid belt are members of an asteroid family. These share similar orbital elements, such as semi-major axis, eccentricity, and orbital inclination as well as similar spectral features, which indicate a common origin in the breakup of a larger body. Graphical displays of these element pairs, for members of the asteroid belt, show concentrations indicating the presence of an asteroid family. There are about 20 to 30 associations that are likely asteroid families. Additional groupings have been found that are less certain. Asteroid families can be confirmed when the members display similar spectral features. Smaller associations of asteroids are called groups or clusters.
Some of the most prominent families in the asteroid belt (in order of increasing semi-major axes) are the Flora, Eunomia, Koronis, Eos, and Themis families. The Flora family, one of the largest with more than 800 known members, may have formed from a collision less than 1 billion years ago.
The largest asteroid to be a true member of a family is 4 Vesta. (This is in contrast to an interloper, in the case of Ceres with the Gefion family.) The Vesta family is believed to have formed as the result of a crater-forming impact on Vesta. Likewise, the HED meteorites may also have originated from Vesta as a result of this collision.
Three prominent bands of dust have been found within the asteroid belt. These have similar orbital inclinations as the Eos, Koronis, and Themis asteroid families, and so are possibly associated with those groupings.
The evolution of the main belt after the Late Heavy Bombardment was likely affected by the passages of large Centaurs and trans-Neptunian objects (TNOs). Centaurs and TNOs that reach the inner Solar System can modify the orbits of main belt asteroids, though only if they are sufficiently massive; multiple close encounters require roughly an order of magnitude less mass than a single encounter. However, Centaurs and TNOs are unlikely to have significantly dispersed young asteroid families in the main belt, although they may have perturbed some old asteroid families. Current main belt asteroids that originated as Centaurs or trans-Neptunian objects may lie in the outer belt, with short lifetimes of less than 4 million years, most likely orbiting between 2.8 and 3.2 AU at larger eccentricities than is typical of main belt asteroids.
Periphery
Skirting the inner edge of the belt (ranging between 1.78 and 2.0 AU, with a mean semi-major axis of 1.9 AU) is the Hungaria family of minor planets. They are named after the main member, 434 Hungaria; the group contains at least 52 named asteroids. The Hungaria group is separated from the main body by the 4:1 Kirkwood gap and their orbits have a high inclination. Some members belong to the Mars-crossing category of asteroids, and gravitational perturbations by Mars are likely a factor in reducing the total population of this group.
Another high-inclination group in the inner part of the asteroid belt is the Phocaea family. These are composed primarily of S-type asteroids, whereas the neighboring Hungaria family includes some E-types. The Phocaea family orbits between 2.25 and 2.5 AU from the Sun.
Skirting the outer edge of the asteroid belt is the Cybele group, orbiting between 3.3 and 3.5 AU. These have a 7:4 orbital resonance with Jupiter. The Hilda family orbits between 3.5 and 4.2 AU, with relatively circular orbits and a stable 3:2 orbital resonance with Jupiter. There are few asteroids beyond 4.2 AU, until Jupiter's orbit. At the latter, the two families of Trojan asteroids can be found, which, at least for objects larger than 1 km, are approximately as numerous as the asteroids of the asteroid belt.
New families
Some asteroid families have formed recently, in astronomical terms. The Karin family apparently formed about 5.7 million years ago from a collision with a progenitor asteroid 33 km in radius. The Veritas family formed about 8.3 million years ago; evidence includes interplanetary dust recovered from ocean sediment.
More recently, the Datura cluster appears to have formed about 530,000 years ago from a collision with a main-belt asteroid. The age estimate is based on the probability of the members having their current orbits, rather than on any physical evidence. However, this cluster may have been a source for some zodiacal dust material. Other recent cluster formations, such as the Iannini cluster, may have provided additional sources of this asteroid dust.
Exploration
The first spacecraft to traverse the asteroid belt was Pioneer 10, which entered the region on 16 July 1972. At the time there was some concern that the debris in the belt would pose a hazard to the spacecraft, but it has since been safely traversed by multiple spacecraft without incident. Pioneer 11, Voyagers 1 and 2 and Ulysses passed through the belt without imaging any asteroids. Cassini measured plasma and fine dust grains while traversing the belt in 2000. On its way to Jupiter, Juno traversed the asteroid belt without collecting science data. Due to the low density of materials within the belt, the odds of a probe running into an asteroid are estimated at less than 1 in 1 billion.
Most main belt asteroids imaged to date have come from brief flyby opportunities by probes headed for other targets. Only the Dawn mission has studied main belt asteroids for a protracted period in orbit. The Galileo spacecraft imaged 951 Gaspra in 1991 and 243 Ida in 1993, then NEAR imaged 253 Mathilde in 1997 and landed on near–Earth asteroid 433 Eros in February 2001. Cassini imaged 2685 Masursky in 2000, Stardust imaged 5535 Annefrank in 2002, New Horizons imaged 132524 APL in 2006, and Rosetta imaged 2867 Šteins in September 2008 and 21 Lutetia in July 2010. Dawn orbited Vesta between July 2011 and September 2012 and has orbited Ceres since March 2015.
The Lucy space probe made a flyby of 152830 Dinkinesh in 2023, on its way to the Jupiter Trojans. ESA's JUICE mission will pass through the asteroid belt twice, with a proposed flyby of the asteroid 223 Rosa in 2029. The Psyche spacecraft is a NASA mission to the large M-type asteroid 16 Psyche.
See also
References
External links
Asteroids Page at NASA's Solar System Exploration
Plots of eccentricity vs. semi-major axis and inclination vs. semi-major axis at Asteroid Dynamic Site
Asteroid groups and families
Solar System | Asteroid belt | [
"Astronomy"
] | 6,510 | [
"Outer space",
"Solar System"
] |
47,266 | https://en.wikipedia.org/wiki/UNICOS | UNICOS is a range of Unix and later Linux operating system (OS) variants developed by Cray for its supercomputers. UNICOS is the successor of the Cray Operating System (COS). It provides network clustering and source code compatibility layers for some other Unixes. UNICOS was originally introduced in 1985 with the Cray-2 system and later ported to other Cray models. The original UNICOS was based on UNIX System V Release 2, and had many Berkeley Software Distribution (BSD) features (e.g., computer networking and file system enhancements) added to it.
Development
CX-OS was the original name given to what is now UNICOS. This was a prototype system which ran on a Cray X-MP in 1984 before the Cray-2 port. It was used to demonstrate the feasibility of using Unix on a supercomputer system, before Cray-2 hardware was available.
The operating system revamp was part of a larger movement inside Cray Research to modernize their corporate software: including rewriting their most important Fortran compiler (cft to cft77) in a higher-level language (Pascal) with more modern optimizations and vectorizations.
As a migration path for existing COS customers wishing to transition to UNICOS, a Guest Operating System (GOS) capability was introduced into COS. The only guest OS that was ever supported was UNICOS. A COS batch job would be submitted to start up UNICOS, which would then run as a subsystem under COS, using a subset of the system's CPUs, memory, and peripheral devices. The UNICOS that ran under GOS was exactly the same as when it ran stand-alone: the difference was that the kernel would make certain low-level hardware requests through the COS GOS hook, rather than directly to the hardware.
One of the sites that ran very early versions of UNICOS was Bell Labs, where Unix pioneers including Dennis Ritchie ported parts of their Eighth Edition Unix (including STREAMS input/output (I/O)) to UNICOS. They also experimented with a guest facility within UNICOS, allowing the stand-alone version of the OS to host itself.
Releases
Cray released several different OSs under the name UNICOS, including:
UNICOS: the original Cray Unix, based on System V. Used on the Cray-1, Cray-2, X-MP, Y-MP, C90, etc.
UNICOS MAX: a Mach-based microkernel used on the T3D's processing elements, together with UNICOS on the host Y-MP or C90 system.
UNICOS/mk: a serverized version of UNICOS using the Chorus microkernel to make a distributed operating system. Used on the T3E. This was the last Cray OS really based on UNICOS sources, as the following products were based on different sources and simply used the "UNICOS" name.
UNICOS/mp: not derived from UNICOS, but based on IRIX 6.5. Used on the X1.
UNICOS/lc: not derived from UNICOS, but based on SUSE Linux. Used on the XT3, XT4 and XT5. UNICOS/lc 1.x comprises a combination of
the compute elements run the Catamount microkernel (which itself is based on Cougaar)
the service elements run SUSE Linux
Cray Linux Environment (CLE): from release 2.1 onward, UNICOS/lc is now called Cray Linux Environment
the compute elements run Compute Node Linux (CNL) (which is a customized Linux kernel)
the service elements run SUSE Linux Enterprise Server
See also
Scientific Linux, a Linux distribution by Fermilab and CERN
Rocks Cluster Distribution, a Linux distribution for supercomputers
References
1984 software
Cray software
Linux distributions
Microkernel-based operating systems
Microkernels
Supercomputer operating systems
Unix distributions
UNIX System V | UNICOS | [
"Technology"
] | 849 | [
"Supercomputer operating systems",
"Supercomputing"
] |
47,271 | https://en.wikipedia.org/wiki/Sponge | Sponges or sea sponges are primarily marine invertebrates of the metazoan phylum Porifera ( ; meaning 'pore bearer'), a basal animal clade and a sister taxon of the diploblasts. They are sessile filter feeders that are bound to the seabed, and are one of the most ancient members of macrobenthos, with many historical species being important reef-building organisms.
Sponges are multicellular organisms consisting of jelly-like mesohyl sandwiched between two thin layers of cells, and usually have tube-like bodies full of pores and channels that allow water to circulate through them. They have unspecialized cells that can transform into other types and that often migrate between the main cell layers and the mesohyl in the process. They do not have complex nervous, digestive or circulatory systems. Instead, most rely on maintaining a constant water flow through their bodies to obtain food and oxygen and to remove wastes, usually via flagella movements of the so-called "collar cells".
Believed to be some of the most basal animals alive today, sponges were possibly the first outgroup to branch off the evolutionary tree from the last common ancestor of all animals, with fossil evidence of primitive sponges such as Otavia from as early as the Tonian period (around 800 Mya). The branch of zoology that studies sponges is known as spongiology.
Etymology
The term sponge derives from the Ancient Greek spóngos. The scientific name Porifera is a neuter plural of the Modern Latin term porifer, which comes from the roots porus, meaning "pore, opening", and -fer, meaning "bearing or carrying".
Overview
Sponges are similar to other animals in that they are multicellular, heterotrophic, lack cell walls and produce sperm cells. Unlike other animals, they lack true tissues and organs. Some of them are radially symmetrical, but most are asymmetrical. The shapes of their bodies are adapted for maximal efficiency of water flow through the central cavity, where the water deposits nutrients and then leaves through a hole called the osculum. The single-celled choanoflagellates resemble the choanocyte cells of sponges, which are used to drive their water-flow systems and capture most of their food. This, along with phylogenetic studies of ribosomal molecules, has been used as evidence to suggest that sponges are the sister group to the rest of the animals. The great majority are marine (salt-water) species, ranging in habitat from tidal zones to the deep sea, though there are freshwater species. All adult sponges are sessile, meaning that they attach to an underwater surface and remain fixed in place (i.e., do not travel). While in their larval stage of life, they are motile.
Many sponges have internal skeletons of spicules (skeleton-like fragments of calcium carbonate or silicon dioxide) and/or spongin (a modified type of collagen protein). An internal gelatinous matrix called mesohyl functions as an endoskeleton, and it is the only skeleton in soft sponges that encrust such hard surfaces as rocks. More commonly, the mesohyl is stiffened by mineral spicules, by spongin fibers, or both. Demosponges, which use spongin, make up 90% of all known sponge species and occupy the widest range of habitats, including all freshwater ones; many species have silica spicules, whereas some species have calcium carbonate exoskeletons. Calcareous sponges have calcium carbonate spicules and, in some species, calcium carbonate exoskeletons; they are restricted to relatively shallow marine waters where production of calcium carbonate is easiest. The fragile glass sponges, with "scaffolding" of silica spicules, are restricted to polar regions and the ocean depths where predators are rare. Fossils of all of these types have been found in rocks dating back more than 500 million years. In addition, archaeocyathids, whose fossils are common in Cambrian rocks, are now regarded as a type of sponge.
Although most of the approximately 5,000–10,000 known species of sponges feed on bacteria and other microscopic food in the water, some host photosynthesizing microorganisms as endosymbionts, and these alliances often produce more food and oxygen than they consume. A few species of sponges that live in food-poor environments have evolved as carnivores that prey mainly on small crustaceans.
Most sponges reproduce sexually, but they can also reproduce asexually. Sexually reproducing species release sperm cells into the water to fertilize ova released or retained by its mate or "mother"; the fertilized eggs develop into larvae which swim off in search of places to settle. Sponges are known for regenerating from fragments that are broken off, although this only works if the fragments include the right types of cells. Some species reproduce by budding. When environmental conditions become less hospitable to the sponges, for example as temperatures drop, many freshwater species and a few marine ones produce gemmules, "survival pods" of unspecialized cells that remain dormant until conditions improve; they then either form completely new sponges or recolonize the skeletons of their parents.
The few species of demosponge that have entirely soft fibrous skeletons with no hard elements have been used by humans over thousands of years for several purposes, including as padding and as cleaning tools. By the 1950s, though, these had been overfished so heavily that the industry almost collapsed, and most sponge-like materials are now synthetic. Sponges and their microscopic endosymbionts are now being researched as possible sources of medicines for treating a wide range of diseases. Dolphins have been observed using sponges as tools while foraging.
Distinguishing features
Sponges constitute the phylum Porifera, and have been defined as sessile metazoans (multicelled immobile animals) that have water intake and outlet openings connected by chambers lined with choanocytes, cells with whip-like flagella. However, a few carnivorous sponges have lost these water flow systems and the choanocytes. All known living sponges can remold their bodies, as most types of their cells can move within their bodies and a few can change from one type to another.
Although a few sponges are able to produce mucus, which acts as a microbial barrier in all other animals, no sponge with the ability to secrete a functional mucus layer has been recorded. Without such a mucus layer, their living tissue is covered by a layer of microbial symbionts, which can contribute up to 40–50% of the sponge's wet mass. This inability to prevent microbes from penetrating their porous tissue could be a major reason why they have never evolved a more complex anatomy.
Like cnidarians (jellyfish, etc.) and ctenophores (comb jellies), and unlike all other known metazoans, sponges' bodies consist of a non-living jelly-like mass (mesohyl) sandwiched between two main layers of cells. Cnidarians and ctenophores have simple nervous systems, and their cell layers are bound by internal connections and by being mounted on a basement membrane (thin fibrous mat, also known as "basal lamina"). Sponges do not have a nervous system similar to that of vertebrates but may have one that is quite different. Their middle jelly-like layers have large and varied populations of cells, and some types of cells in their outer layers may move into the middle layer and change their functions.
Basic structure
Cell types
A sponge's body is hollow and is held in shape by the mesohyl, a jelly-like substance made mainly of collagen and reinforced by a dense network of fibers also made of collagen. 18 distinct cell types have been identified. The inner surface is covered with choanocytes, cells with cylindrical or conical collars surrounding one flagellum per choanocyte. The wave-like motion of the whip-like flagella drives water through the sponge's body. All sponges have ostia, channels leading to the interior through the mesohyl, and in most sponges these are controlled by tube-like porocytes that form closable inlet valves. Pinacocytes, plate-like cells, form a single-layered external skin over all other parts of the mesohyl that are not covered by choanocytes, and the pinacocytes also digest food particles that are too large to enter the ostia, while those at the base of the animal are responsible for anchoring it.
Other types of cells live and move within the mesohyl:
Lophocytes are amoeba-like cells that move slowly through the mesohyl and secrete collagen fibres.
Collencytes are another type of collagen-producing cell.
Rhabdiferous cells secrete polysaccharides that also form part of the mesohyl.
Oocytes and spermatocytes are reproductive cells.
Sclerocytes secrete the mineralized spicules ("little spines") that form the skeletons of many sponges and in some species provide some defense against predators.
In addition to or instead of sclerocytes, demosponges have spongocytes that secrete a form of collagen that polymerizes into spongin, a thick fibrous material that stiffens the mesohyl.
Myocytes ("muscle cells") conduct signals and cause parts of the animal to contract.
"Grey cells" act as sponges' equivalent of an immune system.
Archaeocytes (or amoebocytes) are amoeba-like cells that are totipotent, in other words, each is capable of transformation into any other type of cell. They also have important roles in feeding and in clearing debris that block the ostia.
Many larval sponges possess neuron-less eyes that are based on cryptochromes; these mediate phototactic behavior.
Glass sponges present a distinctive variation on this basic plan. Their spicules, which are made of silica, form a scaffolding-like framework between whose rods the living tissue is suspended like a cobweb that contains most of the cell types. This tissue is a syncytium that in some ways behaves like many cells that share a single external membrane, and in others like a single cell with multiple nuclei.
Water flow and body structures
Most sponges work rather like chimneys: they take in water at the bottom and eject it from the osculum at the top. Since ambient currents are faster at the top, the suction effect that they produce by Bernoulli's principle does some of the work for free. Sponges can control the water flow by various combinations of wholly or partially closing the osculum and ostia (the intake pores) and varying the beat of the flagella, and may shut it down if there is a lot of sand or silt in the water.
Although the layers of pinacocytes and choanocytes resemble the epithelia of more complex animals, they are not bound tightly by cell-to-cell connections or a basal lamina (thin fibrous sheet underneath). The flexibility of these layers and re-modeling of the mesohyl by lophocytes allow the animals to adjust their shapes throughout their lives to take maximum advantage of local water currents.
The simplest body structure in sponges is a tube or vase shape known as "asconoid", but this severely limits the size of the animal. The body structure is characterized by a stalk-like spongocoel surrounded by a single layer of choanocytes. If such a body is simply scaled up, the ratio of its volume to its surface area increases, because surface area increases as the square of length or width while volume increases in proportion to the cube. The amount of tissue that needs food and oxygen is determined by the volume, but the pumping capacity that supplies food and oxygen depends on the area covered by choanocytes. Asconoid sponges therefore remain tiny, seldom more than a few millimetres across.
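A minimal numeric illustration of this square-cube argument; the relative sizes below are arbitrary and chosen only for illustration:

```python
# Pumping capacity scales with the choanocyte-lined surface (~ L**2),
# while metabolic demand scales with tissue volume (~ L**3).
for L in [1, 2, 4, 8]:          # arbitrary relative body sizes
    surface = L ** 2
    volume = L ** 3
    print(f"relative size {L}: volume/surface = {volume / surface:.1f}")
# The ratio grows linearly with size, so a simple asconoid tube quickly
# outgrows its own pumping capacity; folding the body wall (syconoid,
# leuconoid) is how larger sponges keep the two in balance.
```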
Some sponges overcome this limitation by adopting the "syconoid" structure, in which the body wall is pleated. The inner pockets of the pleats are lined with choanocytes, which connect to the outer pockets of the pleats by ostia. This increase in the number of choanocytes and hence in pumping capacity enables syconoid sponges to grow up to a few centimeters in diameter.
The "leuconoid" pattern boosts pumping capacity further by filling the interior almost completely with mesohyl that contains a network of chambers lined with choanocytes and connected to each other and to the water intakes and outlet by tubes. Leuconid sponges grow to over in diameter, and the fact that growth in any direction increases the number of choanocyte chambers enables them to take a wider range of forms, for example, "encrusting" sponges whose shapes follow those of the surfaces to which they attach. All freshwater and most shallow-water marine sponges have leuconid bodies. The networks of water passages in glass sponges are similar to the leuconid structure.
In all three types of structure, the cross-sectional area of the choanocyte-lined regions is much greater than that of the intake and outlet channels. This makes the flow slower near the choanocytes and thus makes it easier for them to trap food particles. For example, in Leuconia, a small leuconoid sponge, water enters each of more than 80,000 intake canals at 6 cm per minute. However, because Leuconia has more than 2 million flagellated chambers whose combined diameter is much greater than that of the canals, water flow through the chambers slows to 3.6 cm per hour, making it easy for choanocytes to capture food. All the water is expelled through a single osculum at about 8.5 cm per second, fast enough to carry waste products some distance away.
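These flow speeds illustrate the continuity principle for steady flow: the product of combined cross-sectional area and speed stays roughly constant, so speed is inversely proportional to the total area of the passages. A rough sketch using the figures quoted above; the area ratios are inferred here, not stated in the source:

```python
# Steady-flow continuity: area x speed is (roughly) constant along the path,
# so relative areas can be inferred from the speeds quoted above.
canal_speed   = 6.0 * 60      # cm per hour through the intake canals (6 cm/min)
chamber_speed = 3.6           # cm per hour through the flagellated chambers
osculum_speed = 8.5 * 3600    # cm per hour out of the single osculum (8.5 cm/s)

print(f"Chambers / canals combined area: ~{canal_speed / chamber_speed:.0f}x")   # ~100x
print(f"Canals / osculum combined area:  ~{osculum_speed / canal_speed:.0f}x")   # ~85x
# Wide chambers slow the water so choanocytes can trap food; the narrow
# osculum speeds it up again so waste is carried well away from the sponge.
```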
Skeleton
In zoology, a skeleton is any fairly rigid structure of an animal, irrespective of whether it has joints and irrespective of whether it is biomineralized. The mesohyl functions as an endoskeleton in most sponges, and is the only skeleton in soft sponges that encrust hard surfaces such as rocks. More commonly the mesohyl is stiffened by mineral spicules, by spongin fibers or both. Spicules, which are present in most but not all species, may be made of silica or calcium carbonate, and vary in shape from simple rods to three-dimensional "stars" with up to six rays. Spicules are produced by sclerocyte cells, and may be separate, connected by joints, or fused.
Some sponges also secrete exoskeletons that lie completely outside their organic components. For example, sclerosponges ("hard sponges") have massive calcium carbonate exoskeletons over which the organic matter forms a thin layer with choanocyte chambers in pits in the mineral. These exoskeletons are secreted by the pinacocytes that form the animals' skins.
Vital functions
Movement
Although adult sponges are fundamentally sessile animals, some marine and freshwater species can move across the sea bed at speeds of a few millimetres per day, as a result of amoeba-like movements of pinacocytes and other cells. A few species can contract their whole bodies, and many can close their oscula and ostia. Juveniles drift or swim freely, while adults are stationary.
Respiration, feeding and excretion
Sponges do not have distinct circulatory, respiratory, digestive, and excretory systems – instead, the water flow system supports all these functions. They filter food particles out of the water flowing through them. Particles larger than 50 micrometers cannot enter the ostia and pinacocytes consume them by phagocytosis (engulfing and intracellular digestion). Particles from 0.5 μm to 50 μm are trapped in the ostia, which taper from the outer to inner ends. These particles are consumed by pinacocytes or by archaeocytes which partially extrude themselves through the walls of the ostia. Bacteria-sized particles, below 0.5 micrometers, pass through the ostia and are caught and consumed by choanocytes. Since the smallest particles are by far the most common, choanocytes typically capture 80% of a sponge's food supply. Archaeocytes transport food packaged in vesicles from cells that directly digest food to those that do not. At least one species of sponge has internal fibers that function as tracks for use by nutrient-carrying archaeocytes, and these tracks also move inert objects.
It used to be claimed that glass sponges could live on nutrients dissolved in sea water and were very averse to silt. However, a study in 2007 found no evidence of this and concluded that they extract bacteria and other micro-organisms from water very efficiently (about 79%) and process suspended sediment grains to extract such prey. Collar bodies digest food and distribute it wrapped in vesicles that are transported by dynein "motor" molecules along bundles of microtubules that run throughout the syncytium.
Sponges' cells absorb oxygen by diffusion from the water as it flows through the body, and carbon dioxide and other soluble waste products such as ammonia diffuse out into the same water. Archeocytes remove mineral particles that threaten to block the ostia, transport them through the mesohyl and generally dump them into the outgoing water current, although some species incorporate them into their skeletons.
Carnivorous sponges
In waters where the supply of food particles is very poor, some species prey on crustaceans and other small animals. So far only 137 such species have been discovered. Most belong to the family Cladorhizidae, but a few members of the Guitarridae and Esperiopsidae are also carnivores. In most cases, little is known about how they actually capture prey, although some species are thought to use either sticky threads or hooked spicules. Most carnivorous sponges live in deep waters, and the development of deep-ocean exploration techniques is expected to lead to the discovery of several more. However, one species has been found in shallow Mediterranean caves, alongside the more usual filter-feeding sponges. The cave-dwelling predators capture small crustaceans by entangling them with fine threads, digest them by enveloping them with further threads over the course of a few days, and then return to their normal shape; there is no evidence that they use venom.
Most known carnivorous sponges have completely lost the water flow system and choanocytes. However, the genus Chondrocladia uses a highly modified water flow system to inflate balloon-like structures that are used for capturing prey.
Endosymbionts
Freshwater sponges often host green algae as endosymbionts within archaeocytes and other cells and benefit from nutrients produced by the algae. Many marine species host other photosynthesizing organisms, most commonly cyanobacteria but in some cases dinoflagellates. Symbiotic cyanobacteria may form a third of the total mass of living tissue in some sponges, and some sponges gain 48% to 80% of their energy supply from these micro-organisms. In 2008, a University of Stuttgart team reported that spicules made of silica conduct light into the mesohyl, where the photosynthesizing endosymbionts live. Sponges that host photosynthesizing organisms are most common in waters with relatively poor supplies of food particles and often have leafy shapes that maximize the amount of sunlight they collect.
A recently discovered carnivorous sponge that lives near hydrothermal vents hosts methane-eating bacteria and digests some of them.
"Immune" system
Sponges do not have the complex immune systems of most other animals. However, they reject grafts from other species but accept them from other members of their own species. In a few marine species, gray cells play the leading role in rejection of foreign material. When invaded, they produce a chemical that stops movement of other cells in the affected area, thus preventing the intruder from using the sponge's internal transport systems. If the intrusion persists, the grey cells concentrate in the area and release toxins that kill all cells in the area. The "immune" system can stay in this activated state for up to three weeks.
Reproduction
Asexual
Sponges have three asexual methods of reproduction: after fragmentation, by budding, and by producing gemmules. Fragments of sponges may be detached by currents or waves. They use the mobility of their pinacocytes and choanocytes and reshaping of the mesohyl to re-attach themselves to a suitable surface and then rebuild themselves as small but functional sponges over the course of several days. The same capabilities enable sponges that have been squeezed through a fine cloth to regenerate. A sponge fragment can only regenerate if it contains both collencytes to produce mesohyl and archeocytes to produce all the other cell types. A very few species reproduce by budding.
Gemmules are "survival pods" which a few marine sponges and many freshwater species produce by the thousands when dying and which some, mainly freshwater species, regularly produce in autumn. Spongocytes make gemmules by wrapping shells of spongin, often reinforced with spicules, round clusters of archeocytes that are full of nutrients. Freshwater gemmules may also include photosynthesizing symbionts. The gemmules then become dormant, and in this state can survive cold, drying out, lack of oxygen and extreme variations in salinity. Freshwater gemmules often do not revive until the temperature drops, stays cold for a few months and then reaches a near-"normal" level. When a gemmule germinates, the archeocytes round the outside of the cluster transform into pinacocytes, a membrane over a pore in the shell bursts, the cluster of cells slowly emerges, and most of the remaining archeocytes transform into other cell types needed to make a functioning sponge. Gemmules from the same species but different individuals can join forces to form one sponge. Some gemmules are retained within the parent sponge, and in spring it can be difficult to tell whether an old sponge has revived or been "recolonized" by its own gemmules.
Sexual
Most sponges are hermaphrodites (function as both sexes simultaneously), although sponges have no gonads (reproductive organs). Sperm are produced by choanocytes or entire choanocyte chambers that sink into the mesohyl and form spermatic cysts while eggs are formed by transformation of archeocytes, or of choanocytes in some species. Each egg generally acquires a yolk by consuming "nurse cells". During spawning, sperm burst out of their cysts and are expelled via the osculum. If they contact another sponge of the same species, the water flow carries them to choanocytes that engulf them but, instead of digesting them, metamorphose to an ameboid form and carry the sperm through the mesohyl to eggs, which in most cases engulf the carrier and its cargo.
A few species release fertilized eggs into the water, but most retain the eggs until they hatch. By retaining the eggs, the parents can transfer symbiotic microorganisms directly to their offspring through vertical transmission, while species that release their eggs into the water have to acquire their symbionts horizontally (a combination of both is probably most common, where larvae with vertically transmitted symbionts also acquire others horizontally). There are four types of larvae, but all are lecithotrophic (non-feeding) balls of cells with an outer layer of cells whose flagella or cilia enable the larvae to move. After swimming for a few days the larvae sink and crawl until they find a place to settle. Most of the cells transform into archeocytes and then into the types appropriate for their locations in a miniature adult sponge.
Glass sponge embryos start by dividing into separate cells, but once 32 cells have formed they rapidly transform into larvae that externally are ovoid with a band of cilia round the middle that they use for movement, but internally have the typical glass sponge structure of spicules with a cobweb-like main syncytium draped around and between them and choanosyncytia with multiple collar bodies in the center. The larvae then leave their parents' bodies.
Meiosis
The cytological progression of porifera oogenesis and spermatogenesis (gametogenesis) is very similar to that of other metazoa. Most of the genes from the classic set of meiotic genes, including genes for DNA recombination and double-strand break repair, that are conserved in eukaryotes are expressed in the sponges (e.g. Geodia hentscheli and Geodia phlegraei). Since porifera are considered to be the earliest divergent animals, these findings indicate that the basic toolkit of meiosis, including capabilities for recombination and DNA repair, was present early in eukaryote evolution.
Life cycle
Sponges in temperate regions live for at most a few years, but some tropical species and perhaps some deep-ocean ones may live for 200 years or more. Some calcified demosponges grow by only per year and, if that rate is constant, specimens wide must be about 5,000 years old. Some sponges start sexual reproduction when only a few weeks old, while others wait until they are several years old.
Coordination of activities
Adult sponges lack neurons or any other kind of nervous tissue. However, most species have the ability to perform movements that are coordinated all over their bodies, mainly contractions of the pinacocytes, squeezing the water channels and thus expelling excess sediment and other substances that may cause blockages. Some species can contract the osculum independently of the rest of the body. Sponges may also contract in order to reduce the area that is vulnerable to attack by predators. In cases where two sponges are fused, for example if there is a large but still unseparated bud, these contraction waves slowly become coordinated in both of the "Siamese twins". The coordinating mechanism is unknown, but may involve chemicals similar to neurotransmitters. However, glass sponges rapidly transmit electrical impulses through all parts of the syncytium, and use this to halt the motion of their flagella if the incoming water contains toxins or excessive sediment. Myocytes are thought to be responsible for closing the osculum and for transmitting signals between different parts of the body.
Sponges contain genes very similar to those that contain the "recipe" for the post-synaptic density, an important signal-receiving structure in the neurons of all other animals. However, in sponges these genes are only activated in "flask cells" that appear only in larvae and may provide some sensory capability while the larvae are swimming. This raises questions about whether flask cells represent the predecessors of true neurons or are evidence that sponges' ancestors had true neurons but lost them as they adapted to a sessile lifestyle.
Ecology
Habitats
Sponges are worldwide in their distribution, living in a wide range of ocean habitats, from the polar regions to the tropics. Most live in quiet, clear waters, because sediment stirred up by waves or currents would block their pores, making it difficult for them to feed and breathe. The greatest numbers of sponges are usually found on firm surfaces such as rocks, but some sponges can attach themselves to soft sediment by means of a root-like base.
Sponges are more abundant but less diverse in temperate waters than in tropical waters, possibly because organisms that prey on sponges are more abundant in tropical waters. Glass sponges are the most common in polar waters and in the depths of temperate and tropical seas, as their very porous construction enables them to extract food from these resource-poor waters with the minimum of effort. Demosponges and calcareous sponges are abundant and diverse in shallower non-polar waters.
The different classes of sponge live in different ranges of habitat:
{|class="wikitable"
|-
! Class !! Water type !! Depth !! Type of surface
|-
! Calcarea
|Marine ||less than ||Hard
|-
! Glass sponges
|Marine ||Deep ||Soft or firm sediment
|-
! Demosponges
|Marine, brackish; and about 150 freshwater species ||Inter-tidal to abyssal; a carnivorous demosponge has been found at ||Any
|}
As primary producers
Sponges with photosynthesizing endosymbionts produce up to three times more oxygen than they consume, as well as more organic matter than they consume. Such contributions to their habitats' resources are significant along Australia's Great Barrier Reef but relatively minor in the Caribbean.
Defenses
Many sponges shed spicules, forming a dense carpet several meters deep that keeps away echinoderms which would otherwise prey on the sponges. They also produce toxins that prevent other sessile organisms such as bryozoans or sea squirts from growing on or near them, making sponges very effective competitors for living space. One of many examples includes ageliferin.
A few species, such as the Caribbean fire sponge Tedania ignis, cause a severe rash in humans who handle them. Turtles and some fish feed mainly on sponges. It is often said that sponges produce chemical defenses against such predators. However, experiments have been unable to establish a relationship between the toxicity of chemicals produced by sponges and how they taste to fish, which would diminish the usefulness of chemical defenses as deterrents. Predation by fish may even help to spread sponges by detaching fragments. However, some studies have shown that fish prefer sponges that are not chemically defended, and another study found that high levels of coral predation did predict the presence of chemically defended species.
Glass sponges produce no toxic chemicals, and live in very deep water where predators are rare.
Predation
Spongeflies, also known as spongillaflies (Neuroptera, Sisyridae), are specialist predators of freshwater sponges. The female lays her eggs on vegetation overhanging water. The larvae hatch and drop into the water where they seek out sponges to feed on. They use their elongated mouthparts to pierce the sponge and suck the fluids within. The larvae of some species cling to the surface of the sponge while others take refuge in the sponge's internal cavities. The fully grown larvae leave the water and spin a cocoon in which to pupate.
Bioerosion
The Caribbean chicken-liver sponge Chondrilla nucula secretes toxins that kill coral polyps, allowing the sponges to grow over the coral skeletons. Others, especially in the family Clionaidae, use corrosive substances secreted by their archeocytes to tunnel into rocks, corals and the shells of dead mollusks. Sponges may remove up to per year from reefs, creating visible notches just below low-tide level.
Diseases
Caribbean sponges of the genus Aplysina suffer from Aplysina red band syndrome. This causes Aplysina to develop one or more rust-colored bands, sometimes with adjacent bands of necrotic tissue. These lesions may completely encircle branches of the sponge. The disease appears to be contagious and impacts approximately ten percent of A. cauliformis on Bahamian reefs. The rust-colored bands are caused by a cyanobacterium, but it is unknown whether this organism actually causes the disease.
Collaboration with other organisms
In addition to hosting photosynthesizing endosymbionts, sponges are noted for their wide range of collaborations with other organisms. The relatively large encrusting sponge Lissodendoryx colombiensis is most common on rocky surfaces, but has extended its range into seagrass meadows by letting itself be surrounded or overgrown by seagrass sponges, which are distasteful to the local starfish and therefore protect Lissodendoryx against them; in return, the seagrass sponges get higher positions away from the sea-floor sediment.
Shrimps of the genus Synalpheus form colonies in sponges, and each shrimp species inhabits a different sponge species, making Synalpheus one of the most diverse crustacean genera. Specifically, Synalpheus regalis utilizes the sponge not only as a food source, but also as a defense against other shrimp and predators. As many as 16,000 individuals inhabit a single loggerhead sponge, feeding off the larger particles that collect on the sponge as it filters the ocean to feed itself. Other crustaceans such as hermit crabs commonly have a specific species of sponge, Pseudospongosorites, grow on them: both the sponge and crab occupy gastropod shells until they outgrow the shell, at which point the crab uses the sponge's body as protection instead of the shell until it finds a suitable replacement shell.
Sponge loop
Most sponges are detritivores which filter organic debris particles and microscopic life forms from ocean water. In particular, sponges occupy an important role as detritivores in coral reef food webs by recycling detritus to higher trophic levels.
The hypothesis has been made that coral reef sponges facilitate the transfer of coral-derived organic matter to their associated detritivores via the production of sponge detritus. Several sponge species are able to convert coral-derived dissolved organic matter (DOM) into sponge detritus, and transfer organic matter produced by corals further up the reef food web. Corals release organic matter as both dissolved and particulate mucus, as well as cellular material such as expelled Symbiodinium.
Organic matter could be transferred from corals to sponges by all these pathways, but DOM likely makes up the largest fraction, as the majority (56 to 80%) of coral mucus dissolves in the water column, and coral loss of fixed carbon due to expulsion of Symbiodinium is typically negligible (0.01%) compared with mucus release (up to ~40%). Coral-derived organic matter could also be indirectly transferred to sponges via bacteria, which can also consume coral mucus.
Sponge holobiont
Besides a one-to-one symbiotic relationship, it is possible for a host to become symbiotic with a microbial consortium, resulting in a diverse sponge microbiome. Sponges are able to host a wide range of microbial communities that can also be very specific. The microbial communities that form a symbiotic relationship with the sponge can amount to as much as 35% of the biomass of its host. This specific symbiotic relationship, in which a microbial consortium pairs with a host, is called a holobiotic relationship. The sponge, as well as the microbial community associated with it, produces a large range of secondary metabolites that help protect it against predators through mechanisms such as chemical defense.
Some of these relationships include endosymbionts within bacteriocyte cells, and cyanobacteria or microalgae found below the pinacoderm cell layer where they are able to receive the highest amount of light, used for phototrophy. They can host over 50 different microbial phyla and candidate phyla, including Alphaproteobacteria, Actinomycetota, Chloroflexota, Nitrospirota, "Cyanobacteria", Gammaproteobacteria, the candidate phylum Poribacteria, and Thaumarchaea.
Systematics
Taxonomy
Carl Linnaeus, who classified most kinds of sessile animals as belonging to the order Zoophyta in the class Vermes, mistakenly identified the genus Spongia as plants in the order Algae. For a long time thereafter, sponges were assigned to subkingdom Parazoa ("beside the animals") separated from the Eumetazoa which formed the rest of the kingdom Animalia.
The phylum Porifera is further divided into classes mainly according to the composition of their skeletons:
Hexactinellida (glass sponges) have silicate spicules, the largest of which have six rays and may be individual or fused. The main components of their bodies are syncytia in which large numbers of cells share a single external membrane.
Calcarea have skeletons made of calcite, a form of calcium carbonate, which may form separate spicules or large masses. All the cells have a single nucleus and membrane.
Most Demospongiae have silicate spicules or spongin fibers or both within their soft tissues. However, a few also have massive external skeletons made of aragonite, another form of calcium carbonate. All the cells have a single nucleus and membrane.
Archaeocyatha are known only as fossils from the Cambrian period.
In the 1970s, sponges with massive calcium carbonate skeletons were assigned to a separate class, Sclerospongiae, otherwise known as "coralline sponges".
However, in the 1980s, it was found that these were all members of either the Calcarea or the Demospongiae.
So far scientific publications have identified about 9,000 poriferan species, of which: about 400 are glass sponges; about 500 are calcareous species; and the rest are demosponges. However, some types of habitat, vertical rock and cave walls and galleries in rock and coral boulders, have been investigated very little, even in shallow seas.
Classes
Sponges were traditionally distributed in three classes: calcareous sponges (Calcarea), glass sponges (Hexactinellida) and demosponges (Demospongiae). However, studies have now shown that the Homoscleromorpha, a group thought to belong to the Demospongiae, has a genetic relationship well separated from other sponge classes. Therefore, they have recently been recognized as the fourth class of sponges.
Sponges are divided into classes mainly according to the composition of their skeletons. These are arranged in the table below in evolutionary order, ascending from top to bottom:
{|class="wikitable"
! Class !! Type of cells !! Spicules !! Spongin fibers !! Massive exoskeleton !! Body form
|-
! Hexactinellida
|Mostly syncytia in all species||Silica. May be individual or fused ||Never ||Never ||Leuconoid
|-
! Demospongiae
|Single nucleus, single external membrane ||Silica ||In many species ||In some species. Made of aragonite if present.||Leuconoid
|-
! Calcarea
|Single nucleus, single external membrane||Calcite. May be individual or large masses ||Never ||Common. Made of calcite if present.||Asconoid, syconoid, leuconoid or solenoid
|-
! Homoscleromorpha
|Single nucleus, single external membrane||Silica ||In many species ||Never ||Sylleibid or leuconoid
|}
Phylogeny
The phylogeny of sponges has been debated heavily since the advent of phylogenetics. Sponges were originally thought to be the most basal animal phylum, but there is now considerable evidence that Ctenophora may hold that title instead. Additionally, the monophyly of the phylum is now under question. Several studies have concluded that all other animals emerged from within the sponges, and usually recover that the calcareous sponges and Homoscleromorpha are closer to other animals than to demosponges. The internal relationships of Porifera have proven to be comparatively settled. A close relationship of Homoscleromorpha and Calcarea has been recovered in nearly all studies, whether or not they support sponge or eumetazoan monophyly. The position of glass sponges is also fairly certain, with a majority of studies recovering them as the sister of the demosponges. Thus, the principal remaining uncertainty lies at the base of the animal family tree.
Evolutionary history
Fossil record
Although molecular clocks and biomarkers suggest sponges existed well before the Cambrian explosion of life, silica spicules like those of demosponges are absent from the fossil record until the Cambrian. An unsubstantiated 2002 report exists of spicules in rocks dated around . Well-preserved fossil sponges from about in the Ediacaran period have been found in the Doushantuo Formation. These fossils, which include: spicules; pinacocytes; porocytes; archeocytes; sclerocytes; and the internal cavity, have been classified as demosponges. The Ediacaran record of sponges also contains two other genera: the stem-hexactinellid Helicolocellus from the Dengying Formation and the possible stem-archaeocyathan Arimasia from the Nama Group. These genera are both from the “Nama assemblage” of Ediacaran biota, although whether this is due to a genuine lack beforehand or preservational bias is uncertain. Fossils of glass sponges have been found from around in rocks in Australia, China, and Mongolia. Early Cambrian sponges from Mexico belonging to the genus Kiwetinokia show evidence of fusion of several smaller spicules to form a single large spicule. Calcium carbonate spicules of calcareous sponges have been found in Early Cambrian rocks from about in Australia. Other probable demosponges have been found in the Early Cambrian Chengjiang fauna, from . Fossils found in the Canadian Northwest Territories dating to may be sponges; if this finding is confirmed, it suggests the first animals appeared before the Neoproterozoic oxygenation event.
Freshwater sponges appear to be much younger, as the earliest known fossils date from the Mid-Eocene period about . Although about 90% of modern sponges are demosponges, fossilized remains of this type are less common than those of other types because their skeletons are composed of relatively soft spongin that does not fossilize well.
The earliest sponge symbionts are known from the early Silurian.
A chemical tracer is 24-isopropyl cholestane, which is a stable derivative of 24-isopropyl cholesterol, which is said to be produced by demosponges but not by eumetazoans ("true animals", i.e. cnidarians and bilaterians). Since choanoflagellates are thought to be animals' closest single-celled relatives, a team of scientists examined the biochemistry and genes of one choanoflagellate species. They concluded that this species could not produce 24-isopropyl cholesterol but that investigation of a wider range of choanoflagellates would be necessary in order to prove that the fossil 24-isopropyl cholestane could only have been produced by demosponges.
Although a previous publication reported traces of the chemical 24-isopropyl cholestane in ancient rocks dating to , recent research using a much more accurately dated rock series has revealed that these biomarkers only appear before the end of the Marinoan glaciation approximately , and that "Biomarker analysis has yet to reveal any convincing evidence for ancient sponges pre-dating the first globally extensive Neoproterozoic glacial episode (the Sturtian, ~ in Oman)". While it has been argued that this 'sponge biomarker' could have originated from marine algae, recent research suggests that the algae's ability to produce this biomarker evolved only in the Carboniferous; as such, the biomarker remains strongly supportive of the presence of demosponges in the Cryogenian.
Archaeocyathids, which some classify as a type of coralline sponge, are very common fossils in rocks from the Early Cambrian about , but apparently died out by the end of the Cambrian .
It has been suggested that they were produced by: sponges; cnidarians; algae; foraminiferans; a completely separate phylum of animals, Archaeocyatha; or even a completely separate kingdom of life, labeled Archaeata or Inferibionta. Since the 1990s, archaeocyathids have been regarded as a distinctive group of sponges.
It is difficult to fit chancelloriids into classifications of sponges or more complex animals. An analysis in 1996 concluded that they were closely related to sponges on the grounds that the detailed structure of chancellorid sclerites ("armor plates") is similar to that of fibers of spongin, a collagen protein, in modern keratose (horny) demosponges such as Darwinella. However, another analysis in 2002 concluded that chancelloriids are not sponges and may be intermediate between sponges and more complex animals, among other reasons because their skins were thicker and more tightly connected than those of sponges. In 2008, a detailed analysis of chancelloriids' sclerites concluded that they were very similar to those of halkieriids, mobile bilaterian animals that looked like slugs in chain mail and whose fossils are found in rocks from the very Early Cambrian to the Mid Cambrian. If this is correct, it would create a dilemma, as it is extremely unlikely that totally unrelated organisms could have developed such similar sclerites independently, but the huge difference in the structures of their bodies makes it hard to see how they could be closely related.
Relationships to other animal groups
In the 1990s, sponges were widely regarded as a monophyletic group, all of them having descended from a common ancestor that was itself a sponge, and as the "sister-group" to all other metazoans (multi-celled animals), which themselves form a monophyletic group. On the other hand, some 1990s analyses also revived the idea that animals' nearest evolutionary relatives are choanoflagellates, single-celled organisms very similar to sponges' choanocytes – which would imply that most Metazoa evolved from very sponge-like ancestors and therefore that sponges may not be monophyletic, as the same sponge-like ancestors may have given rise both to modern sponges and to non-sponge members of Metazoa.
Analyses since 2001 have concluded that Eumetazoa (more complex than sponges) are more closely related to particular groups of sponges than to other sponge groups. Such conclusions imply that sponges are not monophyletic, because the last common ancestor of all sponges would also be a direct ancestor of the Eumetazoa, which are not sponges. A study in 2001 based on comparisons of ribosome DNA concluded that the most fundamental division within sponges was between glass sponges and the rest, and that Eumetazoa are more closely related to calcareous sponges (those with calcium carbonate spicules) than to other types of sponge. In 2007, one analysis based on comparisons of RNA and another based mainly on comparison of spicules concluded that demosponges and glass sponges are more closely related to each other than either is to the calcareous sponges, which in turn are more closely related to Eumetazoa.
Other anatomical and biochemical evidence links the Eumetazoa with Homoscleromorpha, a sub-group of demosponges. A comparison in 2007 of nuclear DNA, excluding glass sponges and comb jellies, concluded that:
Homoscleromorpha are most closely related to Eumetazoa;
calcareous sponges are the next closest;
the other demosponges are evolutionary "aunts" of these groups; and
the chancelloriids, bag-like animals whose fossils are found in Cambrian rocks, may be sponges.
The sperm of Homoscleromorpha share features with the sperm of Eumetazoa that sperm of other sponges lack. In both Homoscleromorpha and Eumetazoa, layers of cells are bound together by attachment to a carpet-like basal membrane composed mainly of "type IV" collagen, a form of collagen not found in other sponges – although the spongin fibers that reinforce the mesohyl of all demosponges are similar to "type IV" collagen.
The analyses described above concluded that sponges are closest to the ancestors of all Metazoa, that is, of all multi-celled animals, including both sponges and more complex groups. However, another comparison in 2008 of 150 genes in each of 21 genera, ranging from fungi to humans but including only two species of sponge, suggested that comb jellies (Ctenophora) are the most basal lineage of the Metazoa included in the sample. If this is correct, either modern comb jellies developed their complex structures independently of other Metazoa, or sponges' ancestors were more complex and all known sponges are drastically simplified forms. The study recommended further analyses using a wider range of sponges and other simple Metazoa such as Placozoa.
However, reanalysis of the data showed that the computer algorithms used for analysis were misled by the presence of specific ctenophore genes that were markedly different from those of other species, leaving sponges as either the sister group to all other animals, or an ancestral paraphyletic grade. 'Family trees' constructed using a combination of all available data – morphological, developmental and molecular – concluded that the sponges are in fact a monophyletic group, and with the cnidarians form the sister group to the bilaterians.
A very large and internally consistent alignment of 1,719 proteins at the metazoan scale, published in 2017, showed that (i) sponges – represented by Homoscleromorpha, Calcarea, Hexactinellida, and Demospongiae – are monophyletic, (ii) sponges are sister-group to all other multicellular animals, (iii) ctenophores emerge as the second-earliest branching animal lineage, and (iv) placozoans emerge as the third animal lineage, followed by cnidarians sister-group to bilaterians.
In March 2021, scientists from Dublin found additional evidence that sponges are the sister group to all other animals, while in May 2023, Schultz et al. found patterns of irreversible change in genome synteny that provide strong evidence that ctenophores are the sister group to all other animals instead.
Notable spongiologists
Céline Allewaert
Patricia Bergquist
James Scott Bowerbank
Maurice Burton
Henry John Carter
Max Walker de Laubenfels
Arthur Dendy
Édouard Placide Duchassaing de Fontbressin
Randolph Kirkpatrick
Robert J. Lendlmayer von Lendenfeld
Edward Alfred Minchin
Giovanni Domenico Nardo
Eduard Oscar Schmidt
Émile Topsent
Use
By dolphins
A report in 1997 described use of sponges as a tool by bottlenose dolphins in Shark Bay in Western Australia. A dolphin will attach a marine sponge to its rostrum, which is presumably then used to protect it when searching for food in the sandy sea bottom. The behavior, known as sponging, has only been observed in this bay and is almost exclusively shown by females. A study in 2005 concluded that mothers teach the behavior to their daughters and that all the sponge users are closely related, suggesting that it is a fairly recent innovation.
By humans
Skeleton
The calcium carbonate or silica spicules of most sponge genera make them too rough for most uses, but two genera, Hippospongia and Spongia, have soft, entirely fibrous skeletons. Early Europeans used soft sponges for many purposes, including padding for helmets, portable drinking utensils and municipal water filters. Until the invention of synthetic sponges, they were used as cleaning tools, applicators for paints and ceramic glazes and discreet contraceptives. However, by the mid-20th century, overfishing brought both the animals and the industry close to extinction.
Many objects with sponge-like textures are now made of substances not derived from poriferans. Synthetic sponges include personal and household cleaning tools, breast implants, and contraceptive sponges. Typical materials used are cellulose foam, polyurethane foam, and less frequently, silicone foam.
The luffa "sponge", also spelled loofah, which is commonly sold for use in the kitchen or the shower, is not derived from an animal but mainly from the fibrous "skeleton" of the sponge gourd (Luffa aegyptiaca, Cucurbitaceae).
Antibiotic compounds
Sponges have medicinal potential due to the presence in sponges themselves or their microbial symbionts of chemicals that may be used to control viruses, bacteria, tumors and fungi.
Other biologically active compounds
Lacking any protective shell or means of escape, sponges have evolved to synthesize a variety of unusual compounds. One such class is the oxidized fatty acid derivatives called oxylipins. Members of this family have been found to have anti-cancer, anti-bacterial and anti-fungal properties. One example isolated from the Okinawan Plakortis sponges, plakoridine A, has shown potential as a cytotoxin to murine lymphoma cells.
See also
Lists of sponges
Sponge Reef Project
3-Alkylpyridinium, compounds found in marine Haplosclerida sponges
References
Further reading
External links
Water flow and feeding in the phylum Porifera (sponges) – Flash animations of sponge body structures, water flow and feeding
Carsten's Spongepage, Information on the ecology and the biotechnological potential of sponges and their associated bacteria.
History of Tarpon Springs sponge industry, Tarpon Springs, Florida
Nature's 'fibre optics' experts
The Sponge Reef Project
Queensland Museum information about sponges
Queensland Museum Sessile marine invertebrates collections
Queensland Museum Sessile marine invertebrates research
Sponge Guide for Britain and Ireland , Bernard Picton, Christine Morrow & Rob van Soest
World Porifera database, the world list of extant sponges, includes a searchable database.
Sponges: World production and markets // Food and Agriculture Organisation
Sponges
Aquatic animals
Ediacaran first appearances
Parazoa | Sponge | [
"Biology"
] | 11,395 | [
"Parazoa",
"Sponges",
"Animals"
] |
47,279 | https://en.wikipedia.org/wiki/Closed%20set | In geometry, topology, and related branches of mathematics, a closed set is a set whose complement is an open set. In a topological space, a closed set can be defined as a set which contains all its limit points. In a complete metric space, a closed set is a set which is closed under the limit operation. This should not be confused with closed manifold.
Sets that are both open and closed are called clopen sets.
Definition
Given a topological space (X, τ), the following statements are equivalent:
a set A is closed in (X, τ)
its complement X ∖ A is an open subset of (X, τ); that is, X ∖ A ∈ τ
A is equal to its closure in X
A contains all of its limit points.
A contains all of its boundary points.
An alternative characterization of closed sets is available via sequences and nets. A subset A of a topological space X is closed in X if and only if every limit of every net of elements of A also belongs to A. In a first-countable space (such as a metric space), it is enough to consider only convergent sequences, instead of all nets. One value of this characterization is that it may be used as a definition in the context of convergence spaces, which are more general than topological spaces. Notice that this characterization also depends on the surrounding space X, because whether or not a sequence or net converges in X depends on what points are present in X.
A point x in X is said to be close to a subset A if x belongs to the closure of A in X (or equivalently, if x belongs to the closure of A in the topological subspace A ∪ {x}, meaning x ∈ cl_{A ∪ {x}} A, where A ∪ {x} is endowed with the subspace topology induced on it by X).
Because the closure of A in X is thus the set of all points in X that are close to A, this terminology allows for a plain English description of closed subsets:
a subset is closed if and only if it contains every point that is close to it.
In terms of net convergence, a point x is close to a subset A if and only if there exists some net (valued in A) that converges to x.
If X is a topological subspace of some other topological space Y, in which case Y is called a topological super-space of X, then there might exist some point in Y ∖ X that is close to a given subset A ⊆ X (although not an element of A), which is how it is possible for a subset A to be closed in X but not closed in the "larger" surrounding super-space Y.
If A ⊆ X and if Y is a topological super-space of X, then A is always a (potentially proper) subset of cl_Y A, which denotes the closure of A in Y; indeed, even if A is a closed subset of X (which happens if and only if A = cl_X A), it is nevertheless still possible for A to be a proper subset of cl_Y A. However, A is a closed subset of X if and only if A = X ∩ cl_Y A for some (or equivalently, for every) topological super-space Y of X.
Closed sets can also be used to characterize continuous functions: a map f : X → Y is continuous if and only if f(cl_X A) ⊆ cl_Y(f(A)) for every subset A ⊆ X; this can be reworded in plain English as: f is continuous if and only if, for every subset A, f maps points that are close to A to points that are close to f(A). Similarly, f is continuous at a fixed given point x if and only if whenever x is close to a subset A, then f(x) is close to f(A).
More about closed sets
The notion of closed set is defined above in terms of open sets, a concept that makes sense for topological spaces, as well as for other spaces that carry topological structures, such as metric spaces, differentiable manifolds, uniform spaces, and gauge spaces.
Whether a set is closed depends on the space in which it is embedded. However, the compact Hausdorff spaces are "absolutely closed", in the sense that, if you embed a compact Hausdorff space K in an arbitrary Hausdorff space X, then K will always be a closed subset of X; the "surrounding space" does not matter here. Stone–Čech compactification, a process that turns a completely regular Hausdorff space into a compact Hausdorff space, may be described as adjoining limits of certain nonconvergent nets to the space.
Furthermore, every closed subset of a compact space is compact, and every compact subspace of a Hausdorff space is closed.
Closed sets also give a useful characterization of compactness: a topological space X is compact if and only if every collection of nonempty closed subsets of X with empty intersection admits a finite subcollection with empty intersection.
A topological space X is disconnected if there exist disjoint, nonempty, open subsets A and B of X whose union is X. Furthermore, X is totally disconnected if it has an open basis consisting of closed sets.
Properties
A closed set contains its own boundary. In other words, if you are "outside" a closed set, you may move a small amount in any direction and still stay outside the set. This is also true if the boundary is the empty set, e.g. in the metric space of rational numbers, for the set of numbers of which the square is less than 2.
Any intersection of any family of closed sets is closed (this includes intersections of infinitely many closed sets)
The union of finitely many closed sets is closed.
The empty set is closed.
The whole set is closed.
In fact, if given a set X and a collection F of subsets of X such that the elements of F have the properties listed above, then there exists a unique topology τ on X such that the closed subsets of (X, τ) are exactly those sets that belong to F.
The intersection property also allows one to define the closure of a set A in a space X, which is defined as the smallest closed subset of X that is a superset of A.
Specifically, the closure of A can be constructed as the intersection of all of these closed supersets.
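A short derivation of the intersection property invoked here (a standard De Morgan argument; the symbols X, C_i and I are chosen for illustration and are not from the original text):

```latex
% If every C_i (i in I) is closed in X, then by De Morgan's laws
\[
X \setminus \bigcap_{i \in I} C_i \;=\; \bigcup_{i \in I} \bigl( X \setminus C_i \bigr).
\]
% Each X \setminus C_i is open, and an arbitrary union of open sets is open,
% so the complement of the intersection is open and the intersection is therefore closed.
```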
Sets that can be constructed as the union of countably many closed sets are denoted Fσ sets. These sets need not be closed.
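A standard illustration (not from the original text): in the real line with its usual topology, the set of rational numbers is an Fσ set that is not closed:

```latex
\[
\mathbb{Q} \;=\; \bigcup_{q \in \mathbb{Q}} \{q\},
\]
% a countable union of singletons, each of which is closed in R,
% yet Q itself is not closed because its closure is all of R.
```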
Examples
The closed interval [a, b] of real numbers is closed. (See Interval (mathematics) for an explanation of the bracket and parenthesis set notation.)
The unit interval [0, 1] is closed in the metric space of real numbers, and the set [0, 1] ∩ ℚ of rational numbers between 0 and 1 (inclusive) is closed in the space of rational numbers, but [0, 1] ∩ ℚ is not closed in the real numbers.
Some sets are neither open nor closed, for instance the half-open interval [0, 1) in the real numbers.
The ray [1, +∞) is closed.
The Cantor set is an unusual closed set in the sense that it consists entirely of boundary points and is nowhere dense.
Singleton points (and thus finite sets) are closed in T1 spaces and Hausdorff spaces.
The set of integers is an infinite and unbounded closed set in the real numbers.
If f : X → Y is a function between topological spaces, then f is continuous if and only if preimages of closed sets in Y are closed in X.
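A concrete instance of this characterization (a standard example; the function f, the constant c and the spaces are chosen here for illustration): for a continuous real-valued function f on a topological space X, every sublevel set is closed because it is the preimage of a closed ray:

```latex
\[
\{ x \in X : f(x) \le c \} \;=\; f^{-1}\bigl( (-\infty, c] \bigr).
\]
% (-infinity, c] is closed in R, so its preimage under the continuous map f is closed in X.
```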
See also
Notes
Citations
References
General topology | Closed set | [
"Mathematics"
] | 1,300 | [
"General topology",
"Topology"
] |
47,280 | https://en.wikipedia.org/wiki/Infrastructure%20bias | In economics and social policy, infrastructure bias is the influence of the location and availability of pre-existing infrastructure, such as roads and telecommunications facilities, on social and economic development.
In science, infrastructure bias is the influence of existing social or scientific infrastructure on scientific observations.
In astronomy and particle physics, where the availability of particular kinds of telescopes or particle accelerators acts as a constraint on the types of experiments that can be done, the data that can be retrieved is biased towards that which can be obtained by the equipment.
Procedural bias, related to infrastructure bias, is illustrated by a case of irregular genetic sampling of Bolivian wild potatoes. A 2000 report on previous studies' sampling found that 60% of samples had been taken near towns or roads, whereas about 22% would have been expected had the samples been taken at random (or from equidistant points, or at specifically varying distances from towns, representative of the average terrain density).
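A minimal simulation sketch of this kind of comparison, in Python. The geometry, distance threshold and sampling distributions below are invented for illustration and are not the Bolivian data; the point is only to show how a convenience ("near the road") design inflates the fraction of road-proximate samples relative to a random design.

```python
import random

# Hypothetical habitat: a 100 km x 100 km square crossed by one road at x = 50.
# "Near the road" means within 5 km of it (all numbers are illustrative).
def near_road(x, threshold_km=5.0):
    return abs(x - 50.0) <= threshold_km

def fraction_near_road(samples):
    return sum(near_road(x) for x, _ in samples) / len(samples)

random.seed(0)
n = 10_000

# Random (unbiased) design: points drawn uniformly over the whole square.
uniform = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(n)]

# Convenience (infrastructure-biased) design: points clustered around the road.
biased = [(min(max(random.gauss(50, 8), 0), 100), random.uniform(0, 100))
          for _ in range(n)]

print(f"random design:      {fraction_near_road(uniform):.0%} of samples near the road")
print(f"convenience design: {fraction_near_road(biased):.0%} of samples near the road")
```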
References
Bias
Sampling (statistics)
Sampling techniques
Research
Infrastructure | Infrastructure bias | [
"Engineering"
] | 200 | [
"Construction",
"Infrastructure"
] |
47,306 | https://en.wikipedia.org/wiki/Regular%20open%20set | A subset S of a topological space X is called a regular open set if it is equal to the interior of its closure; expressed symbolically, if Int(cl S) = S, where Int S, cl S and ∂S denote, respectively, the interior, closure and boundary of S.
A subset S of X is called a regular closed set if it is equal to the closure of its interior; expressed symbolically, if cl(Int S) = S.
Examples
If ℝ has its usual Euclidean topology then the open set U = (0, 1) ∪ (1, 2) is not a regular open set, since Int(cl U) = (0, 2) ≠ U. Every open interval in ℝ is a regular open set and every non-degenerate closed interval (that is, a closed interval containing at least two distinct points) is a regular closed set. A singleton {x} is a closed subset of ℝ but not a regular closed set because its interior is the empty set, so that cl(Int {x}) = cl ∅ = ∅ ≠ {x}.
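A short worked check of the first example above (a standard computation on the real line):

```latex
\[
U = (0,1) \cup (1,2), \qquad
\operatorname{cl} U = [0,2], \qquad
\operatorname{Int}(\operatorname{cl} U) = (0,2) \neq U.
\]
% Taking the closure fills in the deleted point 1 (and the endpoints 0 and 2);
% taking the interior afterwards removes only 0 and 2, so U is not recovered.
```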
Properties
A subset S of X is a regular open set if and only if its complement X ∖ S in X is a regular closed set. Every regular open set is an open set and every regular closed set is a closed set.
Each clopen subset of X (which includes ∅ and X itself) is simultaneously a regular open subset and regular closed subset.
The interior of a closed subset of X is a regular open subset of X and likewise, the closure of an open subset of X is a regular closed subset of X. The intersection (but not necessarily the union) of two regular open sets is a regular open set. Similarly, the union (but not necessarily the intersection) of two regular closed sets is a regular closed set.
The collection of all regular open sets in X forms a complete Boolean algebra; the join operation is given by U ∨ V = Int(cl(U ∪ V)), the meet is U ∧ V = U ∩ V and the complement is ¬U = Int(X ∖ U).
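A small worked example of these operations on the real line (standard, using the join and complement just defined):

```latex
\[
U = (0,1), \qquad
\lnot U = \operatorname{Int}\bigl(\mathbb{R} \setminus U\bigr) = (-\infty,0) \cup (1,\infty), \qquad
U \vee \lnot U = \operatorname{Int}\bigl(\operatorname{cl}(U \cup \lnot U)\bigr) = \mathbb{R}.
\]
% U and its Boolean complement omit the boundary points 0 and 1, yet the join recovers
% the top element R; this is why the regular open sets form a Boolean algebra even
% though the complement here is not the set-theoretic complement.
```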
See also
Notes
References
Lynn Arthur Steen and J. Arthur Seebach, Jr., Counterexamples in Topology. Springer-Verlag, New York, 1978. Reprinted by Dover Publications, New York, 1995. (Dover edition).
General topology | Regular open set | [
"Mathematics"
] | 371 | [
"General topology",
"Topology"
] |
11,782,911 | https://en.wikipedia.org/wiki/GABA%20receptor%20antagonist | GABA receptor antagonists are drugs that inhibit the action of GABA. In general these drugs produce stimulant and convulsant effects, and are mainly used for counteracting overdoses of sedative drugs.
Examples include bicuculline, securinine and metrazol, and the benzodiazepine GABAA receptor antagonist flumazenil.
Other agents which may have GABAA receptor antagonism include the antibiotic ciprofloxacin, tranexamic acid, thujone, ginkgo biloba, and kudzu.
See also
GABAA receptor negative allosteric modulators
External links
References
Biochemistry | GABA receptor antagonist | [
"Chemistry",
"Biology"
] | 144 | [
"Biochemistry",
"nan"
] |
11,783,365 | https://en.wikipedia.org/wiki/Sequence%20of%20events%20recorder | A sequence of events recorder (SER) is an intelligent standalone microprocessor based system, which monitors external inputs and records the time and sequence of the changes. They usually have an external time source such as a GPS or radio clock. When wired inputs change state, the time and state of each change is recorded.
SERs enable rapid root cause analysis after multiple events have occurred due to the secure recording of the sequence of events in the order of occurrence. SERs are therefore utilized as a diagnostic tool to minimize plant downtime. SERs are often interfaced with a SCADA system, distributed control system or programmable logic controller (PLC).
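A minimal Python sketch of the recording logic described above (illustrative only, with invented channel names and a software clock fallback; a real SER time-tags changes in hardware against a GPS or radio clock):

```python
from dataclasses import dataclass
from time import time

@dataclass
class Event:
    timestamp_ms: int   # time of the state change, in milliseconds
    channel: str        # name of the monitored input
    state: bool         # new state of the input

class SequenceOfEventsRecorder:
    """Records the time and sequence of state changes on monitored inputs."""

    def __init__(self):
        self._last_state = {}   # channel -> last known state
        self._events = []       # recorded events, in order of occurrence

    def scan(self, channel, state, timestamp_ms=None):
        # Record an event only when the monitored input actually changes state.
        if self._last_state.get(channel) != state:
            if timestamp_ms is None:
                timestamp_ms = int(time() * 1000)
            self._events.append(Event(timestamp_ms, channel, state))
            self._last_state[channel] = state

    def report(self):
        # Events sorted by time: the listing used for root-cause analysis.
        return sorted(self._events, key=lambda e: e.timestamp_ms)

# Hypothetical example: a breaker opening followed 12 ms later by a pump stopping.
ser = SequenceOfEventsRecorder()
ser.scan("breaker_52A_closed", False, timestamp_ms=1_000)
ser.scan("pump_P1_running", False, timestamp_ms=1_012)
for event in ser.report():
    print(event)
```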
SER reports are used by electrical engineers to analyze large and small electrical system blackouts. After the Northeast blackout of 2003, the North American Electric Reliability Corporation specified that electrical system data should be time-tagged to the nearest millisecond.
In 1984, the Tetragenics Company, a subsidiary of the Montana Power Company, introduced the first remote terminal unit (RTU) that time-tagged events to the nearest millisecond, and now there are also other RTUs with this capability. Digital protective relays and some PLCs now also include time-tagging to the nearest millisecond; SCADA systems that incorporate these devices provide SER functions without a dedicated SER device.
See also
Data logger
References
Recording devices | Sequence of events recorder | [
"Technology"
] | 281 | [
"Computing stubs",
"Recording devices",
"Computer hardware stubs"
] |
11,783,757 | https://en.wikipedia.org/wiki/Bateson%27s%20cube | Bateson's cube is a model of the cost–benefit analysis for animal research developed by Professor Patrick Bateson, president of the Zoological Society of London.
Background
Bateson's cube evaluates proposed research through three criteria:
the degree of animal suffering,
the quality of the research,
the potential medical benefit.
Bateson suggested that research that does not meet these requirements should not be approved or performed, in accordance with the Animals (Scientific Procedures) Act 1986. It is not intended as a formal model for optimal trade-offs, but rather a tool for making judicial decisions, since the three axes are not in a common currency. The third criterion also does not necessarily have to be medical benefit, but could be a wider form of utility.
Bateson's cube has three axes measuring suffering, certainty of benefit, and quality of research. If the research is high-quality, certain to be beneficial, and not going to inflict suffering, then it falls into the hollow section, meaning the research should proceed. Painful, low-quality research with a lower likelihood of success falls in the solid area, and should not proceed. Most research will not be clear-cut, but the guiding principle is that 'hollow' should continue and 'solid' should not.
References
Animal testing | Bateson's cube | [
"Chemistry"
] | 257 | [
"Animal testing"
] |
11,783,828 | https://en.wikipedia.org/wiki/MK-886 | MK-886, or L-663536, is a leukotriene antagonist. It may perform this by blocking the 5-lipoxygenase activating protein (FLAP), thus inhibiting 5-lipoxygenase (5-LOX), and may help in treating atherosclerosis.
References
Indoles
Thioethers
4-Chlorophenyl compounds
Isopropyl compounds
Carboxylic acids
Tert-butyl compounds | MK-886 | [
"Chemistry",
"Biology"
] | 101 | [
"Biotechnology stubs",
"Carboxylic acids",
"Functional groups",
"Biochemistry stubs",
"Biochemistry"
] |
11,784,110 | https://en.wikipedia.org/wiki/Linphone | Linphone (contraction of Linux phone) is a free voice over IP softphone, SIP client and service. It may be used for audio and video direct calls and calls through any VoIP softswitch or IP-PBX. Linphone also provides the possibility to exchange instant messages. It has a simple multilanguage interface based on Qt for GUI and can also be run as a console-mode application on Linux.
Both SIP service and software could be used together, but also independently: it's possible to connect Linphone service with any SIP client (software or hardware), and to use Linphone software with any SIP service.
The softphone is currently developed by Belledonne Communications in France. Linphone was initially developed for Linux but now supports many additional platforms including Microsoft Windows, macOS, and mobile phones running Windows Phone, iOS or Android. It supports ZRTP for end-to-end encrypted voice and video communication.
Linphone is licensed under the GNU GPL-3.0-or-later and supports IPv6. Linphone can also be used behind network address translator (NAT), meaning it can run behind home routers. It is compatible with telephony by using an Internet telephony service provider (ITSP).
Features
Linphone hosts a free SIP service on its website.
The Linphone client provides access to following functionalities:
Multi-account work
Registration on any SIP-service and line status management
Contact list with status of other users
Conference call initiation
Combination of message history and call details
Sending of DTMF signals (SIP INFO / RFC 2833)
File sharing
Additional plugins
Open standards support
Protocols
SIP according to RFC 3261 (UDP, TCP and TLS)
SIP SIMPLE
NAT traversal by TURN and ICE
RTP/RTCP
Media-security: SRTP and ZRTP
Audio codecs
Audio codec support: Speex (narrow band and wideband), G.711 (μ-law, A-law), GSM, Opus, and iLBC (through an optional plugin)
Video codecs
Video codec support: MPEG-4, Theora, VP8 and H.264 (with a plugin based on x264), with resolutions from QCIF (176×144) to SVGA (800×600) provided that network bandwidth and CPU power are sufficient.
See also
Comparison of VoIP software
List of SIP software
Opportunistic encryption
References
External links
Cross-platform software
Android (operating system) software
Free and open-source Android software
Communication software
Free VoIP software
Instant messaging clients
Instant messaging clients for Linux
IOS software
MacOS instant messaging clients
Videotelephony
VoIP software
Windows instant messaging clients
BlackBerry software | Linphone | [
"Technology"
] | 565 | [
"Instant messaging",
"Instant messaging clients"
] |
11,785,111 | https://en.wikipedia.org/wiki/Dung%20midden | Dung middens, also known as dung hills, are piles of dung that mammals periodically return to and build up. They are used as a form of territorial marker. A range of animals are known to use them including steenbok, hyrax, and rhinoceros. Other animals are attracted to middens for a variety of purposes, including finding food and locating mates. Some species, such as the dung beetle genus Dicranocara of the Richtersveld in South western Africa spend their whole lifecycle in close association with dung middens. Dung middens are also used in the field of paleobotany, which relies on the fact that each ecosystem is characterized by certain plants, which in turn act as a proxy for climate. Dung middens are useful as they often contain pollen which means fossilized dung middens can be used in paleobotany to learn about past climates.
Examples of dung midden production in wild
Hippopotamus
The common hippopotamus has been known to use dung middens as a social tool. The middens are created and maintained by bulls to mark territorial boundaries. To mark their scent upon a midden, the bull will approach the midden in reverse and simultaneously defecate and urinate on the mound, using its tail to disperse, or paddle, the excrement. This action is called dung showering and thought to assert dominance. The middens, usually several feet across, are constantly maintained during the bulls' travels in the night and day.
Rhinoceros
Dung-midden production is also observed in the white and black rhinoceroses. The middens are shown to provide cues as to the age, sex, and reproductive health of the producer. Some of the middens can be 65 feet across. Dung beetles are frequently found in these middens and lay their eggs within the mounds. Their presence and activity in the middens also aid in pest and parasite control. Unlike the hippopotamus, rhino dung middens are shared between individuals that are not necessarily related.
White rhino middens are distinguished by a black color and a primarily grass composition whereas black rhino middens tend to be brown and contain more twigs and branches, a product of the distinct diets.
Black garden ants
Midden formation in insects was first observed in black garden ants, Lasius niger. The middens created by the ants are called "kitchen middens" and are composed of food scraps, ant corpses, and other detritus. A reason for the behavior has yet to be determined though it is thought to serve as a feeding ground for larvae.
Lemurs
The dry bush weasel lemur and southern gentle lemur are known to construct middens. It is thought that these act primarily as communal latrines and communication tools, signaling dominance and other social cues, for families spread over large tracts of land.
Hyraxes
Hyraxes, or Procavia, are small herbivorous mammals found across the African continent; they normally live in rock shelters and do not typically wander more than 500 meters from their shelter for fear of predation. These organisms use fixed dung middens for urination and defecation, often under overhanging rocks in protected areas. Layers of dung are quickly hardened and sealed by hyraceum, creating mainly horizontal middens.
Antelopes
Middens created by antelopes, as well as other herbivores, play an important role by providing nutrients to certain areas of land. It has been described that duiker and steenbok antelopes defecate in exposed sites, generally on sandy soil, thus enriching the nutrient-deficient areas, as well as depositing plant seed there.
Mountain gazelles
Many gazelle species use middens (see also Animal latrine) for activities related to territory maintenance, advertisement and olfactory communication. Due to the investment required to maintain a midden, it is likely that middens would not be randomly placed throughout the environment, but rather would be distributed on different landmarks. Placing middens on conspicuous sites could attract the attention of hunters and provide the hunters with information about the location and activity of their prey. A group of researchers examined midden selection and use by mountain gazelles (Gazella gazelle) in central Saudi Arabia and hypothesized that if middens are used for territorial or communication purposes, then they would tend to be placed at the largest trees in the immediate area. Additionally, if mountain gazelle midden selection and use was predictable, then this would corroborate poachers' claims that gazelles are easy to hunt because of their predictable behavior. Ultimately it was found that midden size and the freshness of newly deposited feces could inform poachers about the gazelles' rates of midden use and potentially which middens are used more often. It was also found that middens are important communication centers for the mountain gazelles, and they are used by both sexes and by gazelles of various ages.
Ecological implications
The widespread presence of dung midden use throughout the animal kingdom is coupled with a distinct variation in how dung middens are used from species to species. Dung midden use has been implicated in the context of both intraspecific markers of territory, sexual availability, and a part of anti-parasite behavior, but also as an essential part of the ecosystem, with interspecies interactions between the creators and users of dung midden piles. In some cases, it has been found that midden piles are the focal points of grazing lawns, not the other way around, as demonstrated by high frequency of grazing when old middens are present.
Intraspecific markers of territory
Territory or home-range maintenance is found in many species of animals as a way to divide resources, including food and mates. Often markers are employed to define such territories, and dung middens are one form of the markers employed. An example of dung midden use for territorial marking is found in the mountain gazelle, in which latrines/dung middens are found in the home-range cores and serve as a concentrated area to repel intruders while facilitating communication amongst the members of the female group. This method of dung midden use is distinct from that of other species such as Thomson's gazelle and Günther's dik-dik, both of which use dung middens as peripheral territory markers instead.
Sexual availability
Olfactory communication through dung middens can also indicate sexual availability to conspecifics. In white rhino dung, a mixture of volatile organic compounds present signal the defecator's sex and age class, and depending on whether they are a male or female, also indicate the male territorial status or female oestrous state. Furthermore, dung middens act as a communication center for white rhino groups since the species practices communal defecation, allowing for these signals to easily reach potential mates.
Anti-parasite behavior
Dung with high parasite loads are a significant source of fecal-oral transmitted parasites, which impose a high cost on individual fitness in wild ungulates. Quantifying studies of parasite loads in dung midden piles of free ranging dik-dik found that nematode concentrations were elevated in the vicinity of middens in comparison to single fecal-pellet groups or dung-free areas. Further feeding experiments found that the dik-diks tend to avoid the areas around dung middens when feeding, implying selective defecation and selective foraging where fecal avoidance could play a part in anti-parasite behavior in this species.
Mammalian-termite interactions
Termites are usually viewed as both herbivores and decomposers when present within an ecological community. In some cases, they are the link between mammalian consumers and the microbial decomposers that perform the final breaking down of organic matter within the local cycle of nutrients. A case of this relationship between termites and mammalian dung middens is observed in South Africa, between the endemic blesbok and harvester termites. The blesbok have been observed to deliberately place dung middens when they are in the vicinity of the harvester termite mounds. It has been suggested that this could be due to the fact that termite mounds are built on ground where the surrounding is cleared. This allows the blesboks greater ability to detect predators if foraging in the area, and termite presence in the vicinity could be an indicator of richer resources available from recycling of nutrients. Since decomposers such as termites increase the quality of the surrounding vegetation for foraging, this suggests that there is a positive evolutionary feedback within this interaction, with both participants in this interaction providing resources for the other.
Use in paleobiology
Climate information
Pollen that becomes fossilized in dung midden can provide information about the climate and environment during the time period when it was fossilized. This provides researchers with a better understanding of what historical environmental changes may have occurred leading up to the biodiversity and present day environment of various places.
Fossilized hyrax dung (hyraxes are small herbivorous mammals resembling rodents but more closely related to elephants and manatees) found in a rock shelter on the Brandberg Mountain in Namibia has been found to contain fossilized pollen. Radiocarbon dating places it between 30,000 years ago and modern times, making it the first evidence of pollen from the Late Pleistocene in south-western Africa. The pollen is preserved by layers of dung that are piled upon each other and sealed by urine. The pollen found in the dung from this time is that of the family Asteraceae, a family not known to occur in Namibia or in deserts. This suggests that the climate in this area may have been tropical during this time, but it is also hypothesized that the spores were spread either aerially or aquatically from another location.
In an earlier Brandberg Mountain sample from 17,000 years ago, Stoebe pollen was found in the dung. Fern spores are also present, indicating a moist climate at that time. This moisture would most likely have come from melting and evaporating glaciers rather than heavy rain.
Middens as old as 6,000 years can also be used to infer climate through the presence of certain pollen and the rainfall required for those plants to grow and flower. However, the changing presence of some plants can also be due to disturbances such as grazing and interference by nomadic people, although this is not thought to explain all of the aridity and variation of the area at certain times. The presence during the mid-Holocene of certain flowering plants that require more moisture points to increased summer rainfall. This also accounts for the seasonal variability, as many of the plants found in the dung do not rely upon winter rain.
Example of dung midden use in paleobiology: Namib Desert
Much is unknown about the origins of the unique biodiversity of the Namib Desert. It has an arid climate and a granitic substrate, which do not favor the preservation of the organic material that would typically provide insight into the history of that biodiversity. The records commonly used to study past environmental conditions, such as lake or swamp deposits, caves, river systems, or dune-fields, do not exist there, so it has been difficult to understand the history of the Namib Desert. Through the use of dung middens found in various parts of the desert, researchers are able to reconstruct the paleoenvironmental conditions. Specifically, fossilized hyrax dung in shallow cave shelters contains fossilized pollen and dust that carry information on the vegetation consumed by the hyrax. Pollen data can provide information on the vegetation during different time periods, and from these data the changes in moisture levels in desert areas, such as the desert of northwestern Namibia, can be determined.
While the pollen and dust in the dung provide information on the types of vegetation that previously existed, radiocarbon dating is also needed to establish the era the dung is from. In a town in South Africa, researchers found conflicting data about the time period of the dung midden they were studying; the initial researchers had failed to consider the impact of local radiocarbon concentrations that were higher than usual due to the testing of nuclear arms. Through pollen analysis, radiocarbon dating, and consideration of the history of radiocarbon levels in the atmosphere, dung middens are able to provide useful information about the historical environment of dry and arid places such as the Namib Desert.
References
Ethology
Animal communication | Dung midden | [
"Biology"
] | 2,578 | [
"Behavioural sciences",
"Ethology",
"Behavior"
] |
11,785,522 | https://en.wikipedia.org/wiki/Weyl%20integral | In mathematics, the Weyl integral (named after Hermann Weyl) is an operator defined, as an example of fractional calculus, on functions f on the unit circle having integral 0 and a Fourier series. In other words there is a Fourier series for f of the form
f(θ) ~ Σ_{n ≠ 0} a_n e^{inθ}
with a0 = 0.
Then the Weyl integral operator of order s is defined on Fourier series by
Σ_{n ≠ 0} (in)^s a_n e^{inθ}
where this is defined. Here s can take any real value, and for integer values k of s the series expansion is the expected k-th derivative, if k > 0, or (−k)th indefinite integral normalized by integration from θ = 0.
The condition a0 = 0 here plays the obvious role of excluding the need to consider division by zero. The definition is due to Hermann Weyl (1917).
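As an illustration of the definition above, here is a minimal numerical sketch (not from the article) that applies the operator to samples of a zero-mean function on the circle by multiplying each Fourier coefficient a_n by (in)^s, using the FFT in place of the Fourier series; the function name and parameters are illustrative only.
```python
import numpy as np

def weyl_operator(samples, s):
    """Apply the order-s Weyl operator to equally spaced samples on [0, 2*pi).
    Convention as in the text: integer s = k > 0 reproduces the k-th derivative."""
    n = len(samples)
    coeffs = np.fft.fft(samples)              # unnormalized Fourier coefficients
    k = np.fft.fftfreq(n, d=1.0 / n)          # integer frequencies ..., -1, 0, 1, ...
    out = np.zeros_like(coeffs)
    nonzero = k != 0                          # a_0 is required to be zero
    out[nonzero] = coeffs[nonzero] * (1j * k[nonzero]) ** s   # multiply a_k by (ik)^s
    return np.fft.ifft(out)

theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
f = np.cos(theta)                             # zero-mean test function (a_0 = 0)
# s = 1 should reproduce the first derivative, -sin(theta)
print(np.allclose(weyl_operator(f, 1).real, -np.sin(theta), atol=1e-8))
```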
See also
Sobolev space
References
Fourier series
Fractional calculus | Weyl integral | [
"Mathematics"
] | 176 | [
"Fractional calculus",
"Calculus"
] |
11,785,669 | https://en.wikipedia.org/wiki/Trench%20shoring | Trench shoring is the process of bracing the walls of a trench to prevent collapse and cave-ins. The phrase can also be used as a noun to refer to the materials used in the process.
Several methods can be used to shore up a trench. Hydraulic shoring is the use of hydraulic pistons that can be pumped outward until they press up against the trench walls. This is typically combined with steel plate or a special heavy plywood called Finform. Another method is called beam and plate, in which steel I-beams are driven into the ground and steel plates are slid in amongst them. A similar method that uses wood planks is called soldier boarding. Hydraulics tend to be faster and easier; the other methods tend to be used for longer term applications or larger excavations.
Shoring should not be confused with shielding by means of trench shields. Shoring is designed to prevent collapse, whilst shielding is only designed to protect workers should collapse occur. Most professionals agree that shoring is the safer approach of the two.
See also
Retaining wall
References
Geotechnical shoring structures
Cuts (earthmoving) | Trench shoring | [
"Technology"
] | 223 | [
"Structural system",
"Geotechnical shoring structures"
] |
11,787,239 | https://en.wikipedia.org/wiki/Antimony%28III%29%20acetate | Antimony(III) acetate is the compound of antimony with the chemical formula of Sb(CH3CO2)3. It is a white powder, is moderately water-soluble, and is used as a catalyst in the production of polyesters.
Preparation
It can be prepared by the reaction of antimony(III) oxide with acetic anhydride:
Sb2O3 + 3 C4H6O3 → 2 Sb(CH3CO2)3
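As a worked illustration of the stoichiometry of this preparation (not from the article), the sketch below estimates the theoretical yield from a given mass of antimony(III) oxide; the molar masses are approximate and the function name is illustrative.
```python
# Approximate molar masses in g/mol for the species in the balanced equation
MOLAR_MASS = {
    "Sb2O3": 291.52,          # antimony(III) oxide
    "C4H6O3": 102.09,         # acetic anhydride
    "Sb(CH3CO2)3": 298.89,    # antimony(III) acetate
}

def product_mass(oxide_grams):
    """Theoretical yield of antimony(III) acetate from a given mass of Sb2O3,
    assuming excess acetic anhydride and 100% conversion (illustrative only)."""
    moles_oxide = oxide_grams / MOLAR_MASS["Sb2O3"]
    moles_product = 2 * moles_oxide            # 1 mol oxide gives 2 mol acetate
    return moles_product * MOLAR_MASS["Sb(CH3CO2)3"]

print(round(product_mass(10.0), 2))            # ~20.51 g theoretical yield from 10 g oxide
```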
Structure
The crystal structure of antimony(III) acetate has been determined by X-ray crystallography. It consists of discrete Sb(OAc)3 monomers with monodentate acetate ligands. The monomers are linked together into chains by weaker C=O···Sb intermolecular interactions.
References
Antimony(III) compounds
Acetates | Antimony(III) acetate | [
"Chemistry"
] | 177 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
11,787,655 | https://en.wikipedia.org/wiki/Alkali%20soil | Alkali, or alkaline, soils are clay soils with high pH (greater than 8.5), a poor soil structure and a low infiltration capacity. Often they have a hard calcareous layer at 0.5 to 1 metre depth. Alkali soils owe their unfavorable physico-chemical properties mainly to the dominating presence of sodium carbonate, which causes the soil to swell and makes it difficult to clarify/settle. They derive their name from the alkali metal group of elements, to which sodium belongs, and which can induce basicity. Sometimes these soils are also referred to as alkaline sodic soils. Alkaline soils are basic, but not all basic soils are alkaline.
Causes
The causes of soil alkalinity can be natural or man-made:
The natural cause is the presence of soil minerals producing sodium carbonate (Na2CO3) and sodium bicarbonate (NaHCO3) upon weathering.
Coal-fired boilers / power plants, when using coal or lignite rich in limestone, produce ash containing calcium oxide. CaO readily dissolves in water to form slaked lime, Ca(OH)2, which is carried by rain water to rivers / irrigation water. The lime softening process precipitates Ca2+ and Mg2+ ions (i.e. removes hardness) from the water and also converts sodium bicarbonate in river water into sodium carbonate. Sodium carbonate (washing soda) reacts further with the remaining Ca2+ and Mg2+ in the water to remove / precipitate the total hardness. Water-soluble sodium salts present in the ash also enhance the sodium content of the water. Global coal consumption was 7.7 billion tons in 2011. Thus coal-fired boilers leave river water depleted of Ca2+ and Mg2+ ions and enriched in Na+.
Many sodium salts, such as sodium carbonate, sodium bicarbonate (baking soda), sodium sulphate, sodium hydroxide (caustic soda), and sodium hypochlorite (bleaching powder), are used in huge quantities in industrial and domestic applications. These salts are mainly produced from sodium chloride (common salt). All the sodium in these salts enters the river / ground water during their production or consumption, enhancing water sodicity. Total global consumption of sodium chloride was 270 million tons in 2010, nearly equal to the salt load of the mighty Amazon River. Man-made sodium salts contribute nearly 7% of the total salt load of all rivers. The sodium salt load problem is aggravated downstream of intensively cultivated river basins in China, India, Egypt, Pakistan, west Asia, Australia, the western US, etc., due to the accumulation of salts in the water that remains after transpiration and evaporation losses.
Another man-made source of sodium salts added to agricultural fields / land is the vicinity of wet cooling towers that use sea water to dissipate waste heat generated by industries located near the sea coast. Huge-capacity cooling towers are installed in oil refineries, petrochemical complexes, fertilizer plants, chemical plants, nuclear and thermal power stations, centralized HVAC systems, etc. The drift / fine droplets emitted from these cooling towers contain nearly 6% sodium chloride, which is deposited on the surrounding areas. The problem is aggravated where national pollution control norms are not imposed or not implemented to minimize drift emissions to the best industrial norm for sea-water-based wet cooling towers.
A further man-made cause is irrigation with softened water (surface or ground water) containing a relatively high proportion of sodium bicarbonate and little calcium and magnesium.
Agricultural problems
Alkaline soils are difficult to take into agricultural production. Due to the low infiltration capacity, rain water stagnates on the soil easily and, in dry periods, cultivation is hardly possible without copious irrigated water and good drainage. Agriculture is limited to crops tolerant to surface waterlogging (e.g. rice, grass) and the productivity is lower.
Chemistry
Soil alkalinity is associated with the presence of sodium carbonate (Na2CO3) or sodium bicarbonate (NaHCO3) in the soil, either as a result of natural weathering of the soil particles or brought in by irrigation and/or flood water.
This salt is extremely soluble; when it dissolves in water, it dissociates into sodium and carbonate ions:
Na2CO3 → 2 Na+ + CO32−
The carbonate anion CO32− is a weak base; accepting a proton, it hydrolyses in water to give the bicarbonate ion and a hydroxyl ion:
CO32− + H2O → HCO3− + OH−
which in turn gives carbonic acid and hydroxyl:
HCO3− + H2O → H2CO3 + OH−
See carbonate for the equilibrium of carbonate-bicarbonate-carbon dioxide.
The above reactions are similar to the dissolution of calcium carbonate, the solubility of the two salts being the only difference. Na2CO3 is several orders of magnitude more soluble than CaCO3, so it can bring far larger amounts of carbonate into solution, thus raising the pH to values higher than 8.5, above the maximum pH attainable when calcium carbonate and dissolved carbon dioxide are in equilibrium in the soil solution.
Notes:
Water (H2O) is partly dissociated into H3O+ (hydronium) and OH– (hydroxyl) ions. The ion H3O+ has a positive electric charge (+) and its concentration is usually written as [H+]. The hydroxyl ion OH– has a negative charge (−) and its concentration is written as [OH−].
In pure water, at 25 °C, the dissociation constant of water (Kw) is 10−14. Since Kw = [H+] × [OH–], both the concentration of H3O+ and of OH– ions equal 10−7 M (a very small concentration).
In neutral water the pH, being the negative decimal logarithm of the H3O+ concentration, is 7. Similarly, the pOH is also 7. Each unit decrease in pH indicates a tenfold increase of the H3O+ concentration, and each unit increase in pH a tenfold increase of the OH– concentration.
In water with dissolved salts, the concentrations of the H3O+ and OH– ions may change, but their product remains constant, namely Kw = 10−14, so that pH + pOH = 14 (see the sketch after these notes). A pH of 7 therefore corresponds to a pOH of 7, and a pH of 9 to a pOH of 5.
Formally it is preferred to express the ion concentrations in terms of chemical activity, but this hardly affects the value of the pH.
Water with excess H3O+ ions (pH < 7) is called acid, and water with excess OH– ions (pH > 7) is called alkaline, or rather basic. Soil moisture with a considerably lower pH is called very acid and with a considerably higher pH very alkaline (basic).
H2CO3 (carbonic acid) is unstable and produces H2O (water) and CO2 (carbon dioxide gas, escaping into the atmosphere). This explains the remaining alkalinity (or rather basicity) in the form of soluble sodium hydroxide and the high pH or low pOH.
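The notes above amount to a simple calculation; the following sketch (an illustration added here, not part of the article) shows how the pOH follows from the pH via the ion product of water at 25 °C.
```python
import math

def poh_from_ph(ph, kw=1e-14):
    """Return the pOH corresponding to a given pH, assuming Kw = [H3O+][OH-]."""
    h = 10 ** (-ph)            # [H3O+] in mol/L
    oh = kw / h                # [OH-] follows from the ion product of water
    return -math.log10(oh)

for ph in (7, 9):
    print(ph, round(poh_from_ph(ph), 2))   # pH 7 -> pOH 7.0, pH 9 -> pOH 5.0
```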
Not all the dissolved sodium carbonate undergoes the above chemical reaction. The remaining sodium carbonate, and hence the presence of CO32− ions, causes CaCO3 (which is only slightly soluble) to precipitate as solid calcium carbonate (limestone), because the product of the CO32− concentration and the Ca2+ concentration exceeds the solubility limit. Hence, the calcium ions Ca2+ are immobilized.
The presence of abundant Na+ ions in the soil solution and the precipitation of Ca2+ ions as a solid mineral causes the clay particles, which have negative electric charges along their surfaces, to adsorb more Na+ in the diffuse adsorption zone (DAZ, also more commonly called diffuse double layer (DDL), or electrical double layer (EDL), see the corresponding figure) and, in exchange, release previously adsorbed Ca2+, by which their exchangeable sodium percentage (ESP) is increased as illustrated in the same figure.
Na+ is more mobile and has a smaller electric charge than Ca2+ so that the thickness of the DDL increases as more sodium ions occupy it. The DDL thickness is also influenced by the total concentration of ions in the soil moisture in the sense that higher concentrations cause the DDL zone to shrink.
Clay particles with considerable ESP (> 16), in contact with non-saline soil moisture have an expanded DDL zone and the soil swells (dispersion).
The phenomenon results in deterioration of the soil structure, and especially crust formation and compaction of the top layer.
Hence the infiltration capacity of the soil and the water availability in the soil is reduced, whereas the surface-water-logging or surface runoff is increased. Seedling emergence and crop production are badly affected.
Note:
Under saline conditions, the many ions in the soil solution counteract the swelling of the soil, so that saline soils usually do not have unfavorable physical properties. Alkaline soils, in principle, are not saline since the alkalinity problem is worse as the salinity is less.
Alkalinity problems are more pronounced in clay soils than in loamy, silty or sandy soils. The clay soils containing montmorillonite or smectite (swelling clays) are more subject to alkalinity problems than illite or kaolinite clay soils. The reason is that the former types of clay have larger specific surface areas (i.e. the surface area of the soil particles divided by their volume) and higher cation exchange capacity (CEC).
Note:
Certain clay minerals with almost 100% ESP (i.e. almost fully sodium saturated) are called bentonite, which is used in civil engineering to place impermeable curtains in the soil, e.g. below dams, to prevent seepage of water.
The quality of the irrigation water in relation to the alkalinity hazard is expressed by the following two indexes:
The sodium adsorption ratio (SAR)
The formula for calculating sodium adsorption ratio is:
SAR = [Na+] / √( ([Ca2+] + [Mg2+]) / 2 )
where [ ] stands for concentration in milliequivalents/liter (briefly meq/L); concentrations reported in mg/L (written { }) must first be converted to meq/L using the equivalent weights of the ions.
It is seen that Mg (magnesium) is thought to play a similar role as Ca (calcium).
The SAR should not be much higher than 20 and preferably less than 10;
When the soil has been exposed to water with a certain SAR value for some time, the ESP value tends to become about equal to the SAR value.
The residual sodium carbonate (RSC, meq/L):
The formula for calculating the residual sodium carbonate is:
RSC = ([CO32−] + [HCO3−]) − ([Ca2+] + [Mg2+])
which must not be much higher than 1 and preferably less than 0.5.
The above expression recognizes the presence of bicarbonates (HCO3−), the form in which most carbonates are dissolved.
When calculating the SAR and RSC, the water quality at the root zone of the crop should be considered, which takes the leaching factor in the field into account (as illustrated in the sketch below). The partial pressure of dissolved CO2 at the plant's root zone also determines how much calcium is present in dissolved form in the field water. The USDA uses the adjusted SAR for calculating water sodicity.
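As an illustration of the two indexes (not from the article), the sketch below computes SAR and RSC from ion concentrations already expressed in meq/L; the example figures are assumed values, not measured data.
```python
from math import sqrt

def sar(na, ca, mg):
    """Sodium adsorption ratio; all inputs in meq/L."""
    return na / sqrt((ca + mg) / 2)

def rsc(co3, hco3, ca, mg):
    """Residual sodium carbonate in meq/L; all inputs in meq/L."""
    return (co3 + hco3) - (ca + mg)

# Example water sample (assumed values, meq/L)
na, ca, mg, co3, hco3 = 12.0, 2.0, 1.0, 0.5, 3.0
print(round(sar(na, ca, mg), 1))         # 9.8 -> below the preferred SAR limit of 10
print(round(rsc(co3, hco3, ca, mg), 1))  # 0.5 -> at the preferred RSC limit
```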
Soil improvement
Alkaline soils with solid CaCO3 can be reclaimed with grass cultures, organic compost, waste hair / feathers, organic garbage, waste paper, rejected lemons/oranges, etc., ensuring the incorporation of much acidifying material (inorganic or organic) into the soil and enhancing dissolved Ca in the field water through the release of CO2 gas. Deep ploughing and incorporating the calcareous subsoil into the top soil also helps.
Often, salt migration to the top soil takes place from underground water sources rather than surface sources. Where the underground water table is high and the land is subjected to high solar radiation, ground water rises to the land surface by capillary action and evaporates, leaving the dissolved salts in the top layer of the soil. Where the underground water contains a lot of salt, this leads to an acute salinity problem. The problem can be reduced by applying mulch to the land. Using poly-houses or shade netting during summer for cultivating vegetables/crops is also advised to mitigate soil salinity and conserve water / soil moisture. Poly-houses filter the intense summer solar radiation in tropical countries to save the plants from water stress and leaf burns.
Where the ground water is not alkaline / saline and the water table is high, salt build-up in the soil can be averted by using the land throughout the year to grow plantation trees / permanent crops with the help of lift irrigation. When the ground water is used at the required leaching factor, salts do not build up in the soil.
Plowing the field soon after cutting the crop is also advised to prevent salt migration to the top soil and conserve the soil moisture during the intense summer months. This is done to break the capillary pores in the soil to prevent water reaching the surface of the soil.
Clay soils in areas of high annual rainfall (more than 100 cm) do not generally suffer from high alkalinity, as the rain water runoff is able to reduce/leach the soil salts to comfortable levels if proper rainwater harvesting methods are followed. In some agricultural areas, subsurface "tile lines" are used to facilitate drainage and leach salts. Continuous drip irrigation would lead to the formation of alkali soils in the absence of leaching / drainage water from the field.
It is also possible to reclaim alkaline soils by adding acidifying minerals like pyrite or cheaper alum or aluminium sulfate.
Alternatively, gypsum (calcium sulfate, CaSO4 · 2 H2O) can also be applied as a source of Ca2+ ions to replace the sodium at the exchange complex. Gypsum also reacts with sodium carbonate to convert it into sodium sulphate, which is a neutral salt and does not contribute to high pH. There must be enough natural drainage to the underground, or else an artificial subsurface drainage system must be present, to permit leaching of the excess sodium by percolation of rain and/or irrigation water through the soil profile.
Calcium chloride is also used to reclaim alkali soils. CaCl2 converts Na2CO3 into NaCl precipitating CaCO3. NaCl is drained off by leaching water. Calcium nitrate has a similar effect, with NaNO3 in the leachate. Spent acid (HCl, H2SO4, etc.) can also be used to reduce the excess Na2CO3 in the soil/water.
Where urea is made available cheaply to farmers, it is also used primarily to reduce soil alkalinity / salinity. The ammonium (NH4+) cation produced by urea hydrolysis, being a strongly sorbing cation, exchanges with the weakly sorbing Na+ cation in the soil structure, and Na+ is released into the water. Thus alkali soils adsorb / consume more urea than other soils.
To reclaim the soils completely one needs prohibitively high doses of amendments. Most efforts are therefore directed to improving the top layer only (say the first 10 cm of the soil), as the top layer is most sensitive to deterioration of the soil structure. The treatments, however, need to be repeated after a few (say 5) years. Trees and plants follow gravitropism, and it is difficult for trees with deep rooting systems, which can reach more than 60 meters in good non-alkali soils, to survive in alkali soils.
It will be important to refrain from irrigation (ground water or surface water) with poor quality water. In viticulture, adding naturally occurring chelating agents such as tartaric acid to irrigation water has been suggested, to solubilize calcium and magnesium carbonates in sodic soils.
One way of reducing sodium carbonate is to cultivate glasswort, saltwort or barilla plants. These plants sequester the sodium carbonate they absorb from alkali soil into their tissues. The ash of these plants contains a good quantity of sodium carbonate, which can be extracted commercially and used in place of sodium carbonate derived from common salt, a highly energy-intensive process. The deterioration of alkali lands can thus be checked by cultivating barilla plants, which can serve as a food source, biomass fuel and raw material for soda ash, potash, etc.
Leaching saline sodic soils
Saline soils are mostly also sodic (the predominant salt is sodium chloride), but they do not have a very high pH nor a poor infiltration rate. Upon leaching they are usually not converted into a (sodic) alkali soil as the Na+ ions are easily removed. Therefore, saline (sodic) soils mostly do not need gypsum applications for their reclamation.
Remediation and utilization via aquaculture
Since 1990s, research and experimentation have been conducted in China and elsewhere for remediation and utilization of alkali land via combined agriculture and aquaculture practices, with considerable success and experiences. Aquaculture technology of utilizing inland saline-alkali water for seafood production is becoming mature, covering wide-range of seafood species including shrimps, crabs, shellfish and fish such as sea bass and grouper.
In recent years, aquaculture (or salt-alkali land aquaculture) has been recommended by the Ministry of Agriculture and Rural Affairs of China as a successful model for the transformation and utilization of saline-alkali land. FAO noted in a recent newsletter that alkaline land is one area that there are innovative ways and opportunities for aquaculture to expand.
See also
Ammonia volatilization from urea
Agreti green vegetable
Barilla
Biosalinity
Cation-exchange capacity
Drip irrigation
Environmental impact of irrigation
Fertilizer
Halotolerance
Index of soil-related articles
Phosphate rich organic manure
Phosphogypsum
Red mud
Residual Sodium Carbonate Index
Sajji Khar
Soda lake
Soil fertility
Soil pH
Soil salinity
Soil salinity control
References
Alkaline soils
Land reclamation | Alkali soil | [
"Chemistry"
] | 3,730 | [
"Soil chemistry",
"Alkaline soils"
] |
11,787,686 | https://en.wikipedia.org/wiki/Daly%20detector | A Daly detector is a gas-phase ion detector that consists of a metal "doorknob", a scintillator (phosphor screen) and a photomultiplier. It was named after its inventor Norman Richard Daly. Daly detectors are typically used in mass spectrometers.
Principle of operation
Ions that hit the doorknob release secondary electrons. A high voltage (about ) between the doorknob and the scintillator accelerates the electrons onto the phosphor screen, where they are converted to photons. These photons are detected by the photomultiplier.
The advantage of the Daly detector is that the photomultiplier can be separated by a window, which lets the photons through from the high vacuum of the mass spectrometer, thus preventing an otherwise possible contamination and extending life span of the detector. The Daly detector also allows a higher acceleration after the field-free region of a time-of-flight mass spectrometer flight tube, which can improve the sensitivity for heavy ions.
Norman Richard Daly
Norman Daly was awarded 6 patents in the years 1962–1973 relating to ion detection and mass spectrometers, from his work at the United Kingdom Atomic Energy Authority.
References
Mass spectrometry
Measuring instruments
Photochemistry
Particle detectors
Phosphors and scintillators | Daly detector | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 277 | [
"Luminescence",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Measuring instruments",
"Particle detectors",
"Phosphors and scintillators",
"Mass spectrometry",
"nan",
"Matter"
] |
11,788,703 | https://en.wikipedia.org/wiki/Ansamycin | Ansamycins are a family of bacterial secondary metabolites that show antimicrobial activity against many Gram-positive and some Gram-negative bacteria; the family includes various compounds, such as streptovaricins and rifamycins. In addition, these compounds demonstrate antiviral activity towards bacteriophages and poxviruses.
Structure
They are named ansamycins (from the Latin ansa, handle) because of their unique structure, which consists of an aromatic moiety bridged by an aliphatic chain. The main difference between various derivatives of ansamycins is the aromatic moiety, which can be a naphthalene ring or a naphthoquinone ring as in rifamycin and the naphthomycins. Another variation consists of benzene or a benzoquinone ring system as in geldanamycin or ansamitocin. Ansamycins were first discovered in 1959 by Sensi et al. from Amycolatopsis mediterranei, an actinomycete bacterium.
Examples
Rifamycins are a subclass of ansamycins with high potency against mycobacteria. This resulted in their widespread use in the treatment of tuberculosis, leprosy, and AIDS-related mycobacterial infections. Since then various analogues have been isolated from other prokaryotes.
References
Polyketides
Carbamates
Lactams
1,4-Benzoquinones
Ethers
Ansamycins
Polyketide antibiotics | Ansamycin | [
"Chemistry"
] | 326 | [
"Biomolecules by chemical classification",
"Natural products",
"Functional groups",
"Organic compounds",
"Polyketides",
"Ethers"
] |
11,788,913 | https://en.wikipedia.org/wiki/Bioaerosol | Bioaerosols (short for biological aerosols) are a subcategory of particles released from terrestrial and marine ecosystems into the atmosphere. They consist of both living and non-living components, such as fungi, pollen, bacteria and viruses. Common sources of bioaerosols include soil, water, and sewage.
Bioaerosols are typically introduced into the air via wind turbulence over a surface. Once in the atmosphere, they can be transported locally or globally: common wind patterns/strengths are responsible for local dispersal, while tropical storms and dust plumes can move bioaerosols between continents. Over ocean surfaces, bioaerosols are generated via sea spray and bubbles.
Bioaerosols can transmit microbial pathogens, endotoxins, and allergens to which humans are sensitive. A well-known case was the meningococcal meningitis outbreak in sub-Saharan Africa, which was linked to dust storms during dry seasons. Other outbreaks linked to dust events include Mycoplasma pneumonia and tuberculosis.
Another instance was an increase in human respiratory problems in the Caribbean that may have been caused by traces of heavy metals, microorganism bioaerosols, and pesticides transported via dust clouds passing over the Atlantic Ocean.
Background
Charles Darwin was the first to observe the transport of dust particles but Louis Pasteur was the first to research microbes and their activity within the air. Prior to Pasteur’s work, laboratory cultures were used to grow and isolate different bioaerosols.
Since not all microbes can be cultured, many were undetected before the development of DNA-based tools. Pasteur also developed experimental procedures for sampling bioaerosols and showed that more microbial activity occurred at lower altitudes and decreased at higher altitudes.
Types of bioaerosols
Bioaerosols include fungi, bacteria, viruses, and pollen. Their concentrations are greatest in the planetary boundary layer (PBL) and decrease with altitude. Survival rate of bioaerosols depends on a number of biotic and abiotic factors which include climatic conditions, ultraviolet (UV) light, temperature and humidity, as well as resources present within dust or clouds.
Bioaerosols found over marine environments primarily consist of bacteria, while those found over terrestrial environments are rich in bacteria, fungi and pollen. The dominance of particular bacteria and their nutrient sources are subject to change according to time and location.
Bioaerosols can range in size from 10 nanometer virus particles to 100 micrometers pollen grains. Pollen grains are the largest bioaerosols and are less likely to remain suspended in the air over a long period of time due to their weight.
Consequently, pollen particle concentration decreases more rapidly with height than smaller bioaerosols such as bacteria, fungi and possibly viruses, which may be able to survive in the upper troposphere. At present, there is little research on the specific altitude tolerance of different bioaerosols. However, scientists believe that atmospheric turbulence impacts where different bioaerosols may be found.
Fungi
Fungal cells usually die when they travel through the atmosphere due to the desiccating effects of higher altitudes. However, some particularly resilient fungal bioaerosols have been shown to survive in atmospheric transport despite exposure to severe UV light conditions. Although bioaerosol levels of fungal spores increase in higher humidity conditions, they can also be active in low humidity conditions and in most temperature ranges. Certain fungal bioaerosols even increase at relatively low levels of humidity.
Bacteria
Unlike other bioaerosols, bacteria are able to complete full reproductive cycles within the days or weeks that they survive in the atmosphere, making them a major component of the air biota ecosystem. These reproductive cycles support a currently unproven theory that bacteria bioaerosols form communities in an atmospheric ecosystem. The survival of bacteria depends on water droplets from fog and clouds that provide bacteria with nutrients and protection from UV light. The four known bacterial groupings that are abundant in aeromicrobial environments around the world include Bacillota, Actinomycetota, Pseudomonadota, and Bacteroidota.
Viruses
The air transports viruses and other pathogens. Since viruses are smaller than other bioaerosols, they have the potential to travel further distances. In one simulation, a virus and a fungal spore were simultaneously released from the top of a building; the spore traveled only 150 meters while the virus traveled almost 200,000 horizontal kilometers.
In one study, aerosols (<5 μm) containing SARS-CoV-1 and SARS-CoV-2 were generated by an atomizer and fed into a Goldberg drum to create an aerosolized environment. The inoculum yielded cycle thresholds between 20 and 22, similar to those observed in human upper and lower respiratory tract samples. SARS-CoV-2 remained viable in aerosols for 3 hours, with a decrease in infection titre similar to SARS-CoV-1. The half-life of both viruses in aerosols was 1.1 to 1.2 hours on average. The results suggest that the transmission of both viruses by aerosols is plausible, as they can remain viable and infectious in suspended aerosols for hours and on surfaces for up to days.
Pollen
Despite being larger and heavier than other bioaerosols, some studies show that pollen can be transported thousands of kilometers. They are a major source of wind-dispersed allergens, coming particularly from seasonal releases from grasses and trees. Tracking distance, transport, resources, and deposition of pollen to terrestrial and marine environments are useful for interpreting pollen records.
Collection
The main tools used to collect bioaerosols are collection plates, electrostatic collectors, mass spectrometers, and impactors, other methods are used but are more experimental in nature. Polycarbonate (PC) filters have had the most accurate bacterial sampling success when compared to other PC filter options.
Single-stage impactors
To collect bioaerosols falling within a specific size range, impactors can be stacked to capture the variation of particulate matter (PM). For example, a PM10 filter lets smaller sizes pass through. This is similar to the size of a human hair. Particulates are deposited onto the slides, agar plates, or tape at the base of the impactor. The Hirst spore trap samples at 10 liters/minute (LPM) and has a wind vane to always sample in the direction of wind flow. Collected particles are impacted onto a vertical glass slide greased with petroleum.
Variations such as the 7-day recording volumetric spore trap have been designed for continuous sampling using a slowly rotating drum that deposits impacted material onto a coated plastic tape. The airborne bacteria sampler can sample at rates up to 700 LPM, allowing for large samples to be collected in a short sampling time. Biological material is impacted and deposited onto an agar lined Petri dish, allowing cultures to develop.
Cascade impactors
Similar to single-stage impactors in collection methods, cascade impactors have multiple size cuts (PM10, PM2.5), allowing for bioaerosols to separate according to size. Separating biological material by aerodynamic diameter is useful due to size ranges being dominated by specific types of organisms (bacteria exist range from 1–20 micrometers and pollen from 10–100 micrometers). The Andersen line of cascade impactors are most widely used to test air particles.
Cyclones
A cyclone sampler consists of a circular chamber with the aerosol stream entering through one or more tangential nozzles. Like an impactor, a cyclone sampler depends upon the inertia of the particle to cause it to deposit on the sampler wall as the air stream curves around inside the chamber. Also like an impactor, the collection efficiency depends upon the flow rate. Cyclones are less prone to particle bounce than impactors and can collect larger quantities of material. They also may provide a more gentle collection than impactors, which can improve the recovery of viable microorganisms. However, cyclones tend to have collection efficiency curves that are less sharp than impactors, and it is simpler to design a compact cascade impactor compared to a cascade of cyclone samplers.
Impingers
Instead of collecting onto a greased substrate or agar plate, impingers have been developed to impact bioaerosols into liquids, such as deionized water or phosphate buffer solution. Collection efficiencies of impingers are shown by Ehrlich et al. (1966) to be generally higher than similar single stage impactor designs. Commercially available impingers include the AGI-30 (Ace Glass Inc.) and Biosampler (SKC, Inc).
Electrostatic precipitators
Electrostatic precipitators, ESPs, have recently gained renewed interest for bioaerosol sampling due to their highly efficient particle removal and gentler sampling method as compared with impinging. ESPs charge and remove incoming aerosol particles from an air stream by employing a non-uniform electrostatic field between two electrodes and a high field strength. This creates a region of high-density ions, a corona discharge, which charges incoming aerosol droplets, and the electric field deposits the charged particles onto a collection surface.
Since biological particles are typically analysed using liquid-based assays (PCR, immunoassays, viability assays), it is preferable to sample directly into a liquid volume for downstream analysis. For example, Pardon et al. show sampling of aerosols down to a microfluidic air-liquid interface, and Ladhani et al. show sampling of airborne influenza down to a small liquid droplet. The use of low-volume liquids is ideal for minimising sample dilution, and has the potential to be coupled to lab-on-chip technologies for rapid point-of-care analysis.
Filters
Filters are often used to collect bioaerosols because of their simplicity and low cost. Filter collection is especially useful for personal bioaerosol sampling since they are light and unobtrusive. Filters can be preceded by a size-selective inlet, such as a cyclone or impactor, to remove larger particles and provide size-classification of the bioaerosol particles. Aerosol filters are often described using the term "pore size" or "equivalent pore diameter". Note that the filter pore size does NOT indicate the minimum particle size that will be collected by the filter; in fact, aerosol filters generally will collect particles much smaller than the nominal pore size.
Transport mechanisms
Ejection of bioaerosols into the atmosphere
Bioaerosols are typically introduced into the air via wind turbulence over a surface. Once airborne they typically remain in the planetary boundary layer (PBL), but in some cases reach the upper troposphere and stratosphere. Once in the atmosphere, they can be transported locally or globally: common wind patterns/strengths are responsible for local dispersal, while tropical storms and dust plumes can move bioaerosols between continents. Over ocean surfaces, bioaerosols are generated via sea spray and bubbles.
Small scale transport via clouds
Knowledge of bioaerosols has shaped our understanding of microorganisms and the differentiation between microbes, including airborne pathogens. In the 1970s, a breakthrough occurred in atmospheric physics and microbiology when ice nucleating bacteria were identified.
The highest concentration of bioaerosols is near the Earth’s surface in the PBL. Here wind turbulence causes vertical mixing, bringing particles from the ground into the atmosphere. Bioaerosols introduced to the atmosphere can form clouds, which are then blown to other geographic locations and precipitate out as rain, hail, or snow. Increased levels of bioaerosols have been observed in rain forests during and after rain events. Bacteria and phytoplankton from marine environments have been linked to cloud formation.
However, for this same reason, bioaerosols cannot be transported long distances in the PBL since the clouds will eventually precipitate them out. Furthermore, it would take additional turbulence or convection at the upper limits of the PBL to inject bioaerosols into the troposphere, where they may be transported larger distances as part of tropospheric flow. This limits the concentration of bioaerosols at these altitudes.
Cloud droplets, ice crystals, and precipitation use bioaerosols as a nucleus where water or crystals can form or hold onto their surface. These interactions show that air particles can change the hydrological cycle, weather conditions, and weathering around the world. Those changes can lead to effects such as desertification which is magnified by climate shifts. Bioaerosols also intermix when pristine air and smog meet, changing visibility and/or air quality.
Large scale transport via dust plumes
Satellite images show that storms over Australian, African, and Asian deserts create dust plumes which can carry dust to altitudes of over 5 kilometers above the Earth's surface. This mechanism transports the material thousands of kilometers away, even moving it between continents. Multiple studies have supported the theory that bioaerosols can be carried along with dust. One study concluded that a type of airborne bacteria present in a particular desert dust was found at a site 1,000 kilometers downwind.
Possible global scale highways for bioaerosols in dust include:
Storms over Northern Africa picking up dust, which can then be blown across the Atlantic to the Americas, or north to Europe. For transatlantic transport, there is a seasonal shift in the destination of the dust: North America during the summer, and South America during the winter.
Dust from the Gobi and Taklamakan deserts is transported to North America, mainly during the Northern Hemisphere spring.
Dust from Australia is carried out into the Pacific Ocean, with the possibility of being deposited in New Zealand.
Community dispersal
Bioaerosol transport and distribution is not consistent around the globe. While bioaerosols may travel thousands of kilometers before deposition, their ultimate distance and direction of travel depend on meteorological, physical, and chemical factors. The branch of biology that studies the dispersal of these particles is called aerobiology. One study generated an airborne bacteria/fungi map of the United States from observational measurements; the resulting community profiles of these bioaerosols were connected to soil pH, mean annual precipitation, net primary productivity, and mean annual temperature, among other factors.
Biogeochemical impacts
Bioaerosols impact a variety of biogeochemical systems on earth including, but not limited to atmospheric, terrestrial, and marine ecosystems. As long-standing as these relationships are, the topic of bioaerosols is not very well-known. Bioaerosols can affect organisms in a multitude of ways including influencing the health of living organisms through allergies, disorders, and disease. Additionally, the distribution of pollen and spore bioaerosols contribute to the genetic diversity of organisms across multiple habitats.
Cloud formation
A variety of bioaerosols may contribute to cloud condensation nuclei or cloud ice nuclei, possible bioaerosol components are living or dead cells, cell fragments, hyphae, pollen, or spores. Cloud formation and precipitation are key features of many hydrologic cycles to which ecosystems are tied. In addition, global cloud cover is a significant factor in the overall radiation budget and therefore, temperature of the Earth.
Bioaerosols make up a small fraction of the total cloud condensation nuclei in the atmosphere (between 0.001% and 0.01%) so their global impact (i.e. radiation budget) is questionable. However, there are specific cases where bioaerosols may form a significant fraction of the clouds in an area. These include:
Areas where there is cloud formation at temperatures over -15 °C since some bacteria have developed proteins which allow them to nucleate ice at higher temperatures.
Areas over vegetated regions or under remote conditions where the air is less impacted by anthropogenic activity.
Near surface air in remote marine regions like the Southern Ocean where sea spray may be more prevalent than dust transported from continents.
The collection of bioaerosol particles on a surface is called deposition. The removal of these particles from the atmosphere affects human health in regard to air quality and respiratory systems.
Alpine lakes in Spain
Alpine lakes located in the Central Pyrenees region of northeast Spain are unaffected by anthropogenic factors making these oligotrophic lakes ideal indicators for sediment input and environmental change. Dissolved organic matter and nutrients from dust transport can aid bacteria with growth and production in low nutrient waters. Within the collected samples of one study, a high diversity of airborne microorganisms were detected and had strong similarities to Mauritian soils despite Saharan dust storms occurring at the time of detection.
Affected ocean species
The types and sizes of bioaerosols vary in marine environments and occur largely because of wet-discharges caused by changes in osmotic pressure or surface tension. Some types of marine originated bioaerosols excrete dry-discharges of fungal spores that are transported by the wind.
One instance of impact on marine species was the 1983 die-off of Caribbean sea fans and sea urchins that correlated with dust storms originating in Africa. This correlation was determined by the work of microbiologists and a Total Ozone Mapping Spectrometer, which identified bacterial, viral, and fungal bioaerosols in the dust clouds that were tracked over the Atlantic Ocean. Another instance of this occurred in 1997, when El Niño possibly impacted seasonal trade wind patterns from Africa to Barbados, resulting in similar die-offs. Modeling instances like these can contribute to more accurate predictions of future events.
Spread of diseases
The aerosolization of bacteria in dust contributes heavily to the transport of bacterial pathogens. A well-known case of disease outbreak by bioaerosol was the meningococcal meningitis outbreak in sub-Saharan Africa, which was linked to dust storms during dry seasons.
Other outbreaks have been reportedly linked to dust events including Mycoplasma pneumonia and tuberculosis. Another instance of bioaerosol-spread health issues was an increase in human respiratory problems for Caribbean-region residents that may have been caused by traces of heavy metals, microorganism bioaerosols, and pesticides transported via dust clouds passing over the Atlantic Ocean.
Common sources of bioaerosols include soil, water, and sewage. Bioaerosols can transmit microbial pathogens, endotoxins, and allergens, and the microorganisms they carry can excrete both endotoxins and exotoxins. Exotoxins can be particularly dangerous when transported through the air and distribute pathogens to which humans are sensitive. Cyanobacteria are particularly prolific in their pathogen distribution and are abundant in both terrestrial and aquatic environments.
Future research
The potential role of bioaerosols in climate change offers an abundance of research opportunities. Specific areas of study include monitoring bioaerosol impacts on different ecosystems and using meteorological data to forecast ecosystem changes. Determining global interactions is possible through methods like collecting air samples, DNA extraction from bioaerosols, and PCR amplification.
Developing more efficient modelling systems will reduce the spread of human disease and benefit economic and ecological factors. An atmospheric modeling tool called the Atmospheric Dispersion Modelling System (ADMS 3) is currently in use for this purpose. ADMS 3 uses computational fluid dynamics (CFD) to locate potential problem areas and minimize the spread of harmful bioaerosol pathogens, including tracking occurrences.
Agroecosystems have an array of potential future research avenues within bioaerosols. Identification of deteriorated soils may identify sources of plant or animal pathogens.
See also
Mycotoxin
Indoor air quality
Indoor bioaerosol
Mold growth, assessment, and remediation
Mold health issues
Sick building syndrome
References
External links
Aeromicrobiology, MicrobeWiki
Bioaerosols and OSH, OSHWIKI
Rutgers University Project
Sampling and characterization of bioaerosols, NIOSH Manual of Analytical Methods
Physical chemistry
Aerosols
Aerosol measurement | Bioaerosol | [
"Physics",
"Chemistry"
] | 4,120 | [
"Applied and interdisciplinary physics",
"Colloids",
"Aerosols",
"nan",
"Physical chemistry"
] |
11,789,067 | https://en.wikipedia.org/wiki/Compatible%20ink | Compatible ink (or compatible toner) is manufactured by third-party manufacturers and is designed to work in designated printers without infringing on patents of printer manufacturers. Compatible inks and toners may come in a variety of packaging including sealed plastic wraps or taped plastic wraps. Regardless of packaging, compatible products are generally priced lower than original equipment manufacturer (OEM) brand inks and toners.
While there has been considerable debate and litigation involving the ink and toner patents of printer manufacturers, third-party manufacturers continue to thrive. Manufacturers of compatible ink and toner products currently control about 25% of the ink and toner market, worth well over $8 billion annually.
Types
Compatible ink is manufactured for several types of machines including fax machines, laser printers, inkjet printers, multifunction printers, and copiers. Aside from compatible products, three other sources of consumables are also available to supply these machines: OEM brand ink and toner, remanufactured toner and ink cartridges, and refilled ink and toner cartridges. Compatible ink manufacturers differentiate their product by using all new parts, whereas other ink replacements recycle used OEM parts. Compatible ink and toner products tend to offer greater value than original, genuine OEM ink and toner cartridges. Ink and toner manufactured by a third party is classified as compatible when it consists of new parts made for another manufacturer's printer, reducing cost for the end user.
Comparison of performance, quality and reliability
The performance of a printer cartridge needs to be measured by parameters like:
mechanism of printing (toner and ink-jet) which impacts the resolution and print-rate,
print quality, the percentage of useful pages (standard required e.g. business use) printed by the cartridge.
page yield (number of pages printed per cartridge)
printer compatibility etc.
A comparison between OEM and compatible cartridges for a specific printer needs to take the above parameters into account. For example, a remanufactured cartridge may be cheaper to purchase but may not print as many useful pages. Reliability and consistency associated with an OEM cartridge may be more important than price, for example when printing output for important business.
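As a rough illustration of such a comparison (not from the article), the sketch below combines price, page yield and the fraction of acceptable pages into a cost per useful page; all figures are assumed for the example.
```python
def cost_per_useful_page(price, page_yield, useful_fraction=1.0):
    """Price per usable page, given the cartridge's rated yield and the fraction
    of printed pages that meet the required quality standard."""
    return price / (page_yield * useful_fraction)

# Assumed example figures, not measured data
oem = cost_per_useful_page(price=45.0, page_yield=600, useful_fraction=0.99)
compatible = cost_per_useful_page(price=18.0, page_yield=550, useful_fraction=0.95)
print(round(oem, 3), round(compatible, 3))   # e.g. 0.076 vs 0.034 per page
```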
One independent test in 2004 on using a compatible ink for one type of printer showed little or no difference in quality between the compatible and OEM products.
Compatible ink cartridges differ and vary from supplier to supplier. This is due to the type of ink in the printer, the chip (or lack of one) on the cartridge, and the manufacture of the cartridge itself.
Comparisons of suppliers, prices and quality against original OEM cartridges also vary by manufacturer and printer; some compatible cartridges will work perfectly in some printers.
See also
Vendor lock-in
Life-cycle assessment
References
Inks
Printing materials
Competition (economics) | Compatible ink | [
"Physics"
] | 580 | [
"Printing materials",
"Materials",
"Matter"
] |
11,789,562 | https://en.wikipedia.org/wiki/Michael%20Earl%20%28academic%29 | Michael John Earl (born 1 November 1944) is a British academic, formerly Dean of Templeton College, Oxford, Pro-Vice-Chancellor of Oxford University, and Professor of Information Management in the University of Oxford, known for his work on strategic information systems planning.
Biography
Earl was educated at the University of Newcastle upon Tyne (BA) and the University of Warwick (MSc). He is also a Master of Arts of the University of Oxford.
From 1974 until 1976 he was Lecturer in Management Control at Manchester Business School. From 1976 until 1990 he was a Fellow of Templeton College, Oxford, and founding Director of the Oxford Institute of Information Management. He then spent eleven years at London Business School, where he held positions including Professor of Information Management, Director of the Centre for Network Economy, Deputy Dean, and Acting Dean. During this time he remained an Associate Fellow of Templeton College.
In 2002 he became Dean of Templeton and Professor of Information Management in the University of Oxford. Until 2004 the President of the College was both Chairman of the Governing Body and Head of House. From 2004 Earl was Head of House. On 9 February 2005 his position was approved by The Queen-in-Council. He was also the University's Chairman of Executive Education.
As Dean of Templeton, he led a major restructuring of business and management studies in Oxford and then led the merger between Templeton College and Green College. In 2008 he became Pro-Vice-Chancellor (Development and External Affairs) of the university and oversaw most of the active phase of Oxford's £1.3bn fundraising campaign. He retired in 2010 and is now Emeritus Professor of Information Management at Oxford.
He is one of the founders of the annual Emerging Markets Symposium at Green Templeton College. His current research is on IT in mergers and acquisitions and on information strategy. He is a trustee of the Oxford Philharmonic (symphony orchestra), serves on the Finance Board of the Diocese of Gloucester, and is involved with other voluntary and charitable organisations.
Selected publications
Books
Earl, Michael J. Management strategies for information technology. Prentice-Hall, Inc., 1989.
Earl, Michael J. "Perspectives on Management", Oxford University Press, 1983 (and others)
Articles, a selection
Earl, Michael J., Sampler, J. L., and J. E. Short "Strategies for Business Process Reengineering ", Journal of Management Information Systems: Evidence from Field Studies, Journal of Management Information Systems, Vol 12 No 1, Summer 1995
Earl, M. J. and Bensaou, M. "The Right Mindset for Managing Information Technology", Harvard Business Review, Sept- Oct 1998
Cash, J. I., Earl, M. J. and R. Morison "Teaming Up to Crack Innovation and Enterprise Integration", Harvard Business Review, Nov 2008
Earl, Michael J. "Knowledge Management Strategies: Toward a Taxonomy", Journal of Management Information Systems, Summer 2001
Earl, Michael J. and D. Feeny, "is your CIO Adding Value?" Sloan Management Review, Vol 35, No 3, Spring 1994
Earl, Michael J. "The new and the old of business process redesign." The Journal of Strategic Information Systems 3.1 (1994): 5-22.
Earl, Michael J. "Experiences in strategic information systems planning." MIS Quarterly 17.1 (1993): 1-24.
Rockart, John F., Michael J. Earl, and Jeanne W. Ross. "Eight imperatives for the new IT organization." Sloan management review 38.1 (1996): 43-55.
Earl, Michael J., and Ian A. Scott. "What is a chief knowledge officer." Sloan management review 40.2 (1999): 29-38.
Earl, Michael J. "The risks of outsourcing IT." Sloan management review 37.3 (2012).
References
External links
Debrett's People of Today
Michael Earl at Formicio
1944 births
Living people
Information systems researchers
Alumni of Newcastle University
Alumni of the University of Warwick
Presidents of Templeton College, Oxford
Fellows of Green Templeton College, Oxford
Academics of the Victoria University of Manchester
Academics of London Business School
Alumni of the Manchester Business School | Michael Earl (academic) | [
"Technology"
] | 855 | [
"Information systems",
"Information systems researchers"
] |
11,789,595 | https://en.wikipedia.org/wiki/MALDI%20imaging | MALDI mass spectrometry imaging (MALDI-MSI) is the use of matrix-assisted laser desorption ionization as a mass spectrometry imaging technique in which the sample, often a thin tissue section, is moved in two dimensions while the mass spectrum is recorded. Advantages, such as measuring the distribution of a large number of analytes at one time without destroying the sample, make it a useful method in tissue-based studies.
Sample preparation
Sample preparation is a critical step in mass spectrometry imaging. Scientists take thin tissue slices mounted on conductive microscope slides and apply a suitable MALDI matrix to the tissue, either manually or automatically. Next, the microscope slide is inserted into a MALDI mass spectrometer. The mass spectrometer records the spatial distribution of molecular species such as peptides, proteins or small molecules. Suitable image processing software can be used to import data from the mass spectrometer to allow visualization and comparison with the optical image of the sample. Recent work has also demonstrated the capacity to create three-dimensional molecular images using MALDI imaging technology and comparison of these image volumes to other imaging modalities such as magnetic resonance imaging (MRI).
Tissue preparation
The tissue samples must be preserved quickly in order to reduce molecular degradation. The first step is to freeze the sample by wrapping the sample then submerging it in a cryogenic solution. Once frozen, the samples can be stored below -80 °C for up to a year.
When ready to be analyzed, the tissue is embedded in a gelatin media which supports the tissue while it is being cut, while reducing contamination that is seen in optimal cutting temperature compound (OCT) techniques. The mounted tissue section thickness varies depending on the tissue.
Tissue sections can then be thaw-mounted by placing the sample on the surface of a conductive slide that is of the same temperature, and then slowly warmed from below. The section can also be adhered to the surface of a warm slide by slowly lowering the slide over the cold sample until the sample sticks to the surface.
The sample can then be stained in order to easily target areas of interest, and pretreated with washing in order to remove species that suppress molecules of interest. Washing with varying grades of ethanol removes lipids in tissues that have a high lipid concentration with little delocalization and maintains the integrity of the peptide spatial arrangement within the sample.
Matrix application
The matrix must absorb at the laser wavelength and ionize the analyte. Matrix selection and the solvent system rely heavily upon the class of analyte to be imaged. The analyte must be soluble in the solvent in order to mix with and recrystallize the matrix. The matrix coating must be homogeneous in order to increase sensitivity, intensity, and shot-to-shot reproducibility. Minimal solvent is used when applying the matrix in order to avoid delocalization.
One technique is spraying. The matrix is sprayed, as very small droplets, onto the surface of the sample, allowed to dry, and re-coated until there is enough matrix to analyze the sample. The size of the crystals depends on the solvent system used.
Sublimation can also be used to make uniform matrix coatings with very small crystals. The matrix is placed in a sublimation chamber with the mounted tissue sample inverted above it. Heat is applied to the matrix, causing it to sublime and condense onto the surface of the sample. Controlling the heating time controls the thickness of the matrix on the sample and the size of the crystals formed.
Automated spotters can also be used, depositing regularly spaced droplets of matrix across the tissue sample. The image resolution then depends on the spacing of the droplets.
Image production
Images are constructed by plotting the intensity of a selected ion against its position in the sample. Spatial resolution strongly affects how much molecular information can be gained from the analysis.
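As an illustration of this plotting step, the following minimal Python sketch (not any vendor's imaging software; the layout of the spectra data, the m/z window and the raster size are assumptions made for the example) builds an ion image by summing the intensity within a narrow m/z window at each raster position:

```python
# Minimal sketch: build an ion image from per-pixel mass spectra.
# `spectra` is assumed to map raster coordinates (x, y) to a pair of
# NumPy arrays (mz_values, intensities) recorded at that position.
import numpy as np
import matplotlib.pyplot as plt

def ion_image(spectra, grid_shape, mz_target, tol=0.25):
    """Sum intensity within mz_target ± tol at every raster position."""
    image = np.zeros(grid_shape)
    for (x, y), (mz, inten) in spectra.items():
        window = (mz > mz_target - tol) & (mz < mz_target + tol)
        image[y, x] = inten[window].sum()
    return image

# Example with synthetic data on a 50 x 40 raster.
rng = np.random.default_rng(0)
spectra = {(x, y): (np.linspace(100, 1000, 2000), rng.random(2000))
           for x in range(50) for y in range(40)}
img = ion_image(spectra, grid_shape=(40, 50), mz_target=760.5)

plt.imshow(img, cmap="viridis", origin="lower")
plt.colorbar(label="summed intensity (a.u.)")
plt.title("Ion image at m/z 760.5 ± 0.25")
plt.show()
```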
Applications
MALDI-MSI involves the visualization of the spatial distribution of proteins, peptides, lipids, and other small molecules within thin slices of tissue, such as animal or plant. The application of this technique to biological studies has increased significantly since its introduction. MALDI-MSI is providing major contributions to the understanding of diseases, improving diagnostics, and drug delivery. Significant studies are of the eye, cancer research, drug distribution, and neuroscience.
MALDI-MSI has been able to differentiate between drugs and metabolites and provide histological information in cancer research, which makes it a promising tool for finding new protein biomarkers. However, this can be challenging because of ion suppression, poor ionization, and low molecular weight matrix fragmentation effects. To combat this, chemical derivatization is used to improve detection.
Using chemical derivatization, MALDI-MSI is particularly effective in the field of neurodegenerative disease research. The technique enables comprehensive mapping of a wide range of metabolites, such as neurotransmitters and fatty acids. These metabolites are crucial for normal brain function and are often implicated in various brain diseases. This capability is invaluable for exploring the progression and pathogenesis of diseases such as Parkinson's and Alzheimer's. By identifying changes in metabolic pathways early, MALDI-MSI can contribute to the development of better diagnostic markers and therapeutic targets, aiding in earlier detection and more tailored treatments.
See also
Histology
Tissue microarray
Laser capture microdissection
References
Further reading
External links
MALDI MS-imaging interest group
Imaging MS interest group
DFG (German Research Foundation) National Core Facility for MALDI MS-imaging
Mass spectrometry | MALDI imaging | [
"Physics",
"Chemistry"
] | 1,126 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
11,789,724 | https://en.wikipedia.org/wiki/Absinthe | Absinthe (, ) is an anise-flavored spirit derived from several plants, including the flowers and leaves of Artemisia absinthium ("grand wormwood"), together with green anise, sweet fennel, and other medicinal and culinary herbs. Historically described as a highly alcoholic spirit, it is 45–74% ABV or 90–148 proof in the US. Absinthe traditionally has a natural green color but may also be colorless. It is commonly referred to in historical literature as . While sometimes casually referred to as a liqueur, absinthe is not traditionally bottled with sugar or sweeteners. Absinthe is traditionally bottled at a high level of alcohol by volume, but it is normally diluted with water before being consumed.
Absinthe was created in the canton of Neuchâtel in Switzerland in the late 18th century by the French physician Pierre Ordinaire. It rose to great popularity as an alcoholic drink in late 19th- and early 20th-century France, particularly among Parisian artists and writers. The consumption of absinthe was opposed by social conservatives and prohibitionists, partly due to its association with bohemian culture. From Europe and the Americas, notable absinthe drinkers included Ernest Hemingway, James Joyce, Lewis Carroll, Charles Baudelaire, Paul Verlaine, Arthur Rimbaud, and Henri de Toulouse-Lautrec.
Absinthe has often been portrayed as a dangerously addictive psychoactive drug and hallucinogen, which gave birth to the term absinthism. The chemical compound thujone, which is present in the spirit in trace amounts, was blamed for its alleged harmful effects. By 1915, absinthe had been banned in the United States and in much of Europe, including France, the Netherlands, Belgium, Switzerland, and Austria-Hungary, yet it has not been demonstrated to be any more dangerous than ordinary spirits. Recent studies have shown that absinthe's psychoactive properties (apart from those attributable to alcohol) have been exaggerated.
A revival of absinthe began in the 1990s, following the adoption of modern European Union food and beverage laws that removed long-standing barriers to its production and sale. By the early 21st century, nearly 200 brands of absinthe were being produced in a dozen countries, most notably in France, Switzerland, Austria, Germany, the Netherlands, Spain, and the Czech Republic.
Etymology
The French word absinthe can refer either to the alcoholic beverage or, less commonly, to the actual wormwood plant. Absinthe is derived from the Latin absinthium, which in turn comes from the Greek apsínthion. The use of Artemisia absinthium in a drink is attested in Lucretius' De Rerum Natura (936–950), where Lucretius indicates that a drink containing wormwood is given as medicine to children in a cup with honey on the brim to make it drinkable. Some argue that the word means "undrinkable" in Greek, but it may instead be linked to the Persian root spand or aspand, or the variant esfand, which meant Peganum harmala, also called Syrian rue, although it is not actually a variety of rue, another famously bitter herb. That Artemisia absinthium was commonly burned as a protective offering may suggest that its origins lie in a reconstructed Proto-Indo-European language root meaning "to perform a ritual" or "make an offering". Whether the word was a borrowing from Persian into Greek, or from a common ancestor of both, is unclear. Alternatively, the Greek word may originate in a pre-Greek substrate word, marked by the non-Indo-European consonant complex -nth-. Alternative spellings for absinthe include absinth, absynthe, and absenta. Absinth (without the final e) is a spelling variant most commonly applied to absinthes produced in central and eastern Europe, and is specifically associated with Bohemian-style absinthes.
History
The precise origin of absinthe is unclear. The medical use of wormwood dates back to ancient Egypt and is mentioned in the Ebers Papyrus from around 1550 BC. Wormwood extracts and wine-soaked wormwood leaves were used as remedies by the ancient Greeks. Moreover, some evidence exists of a wormwood-flavoured wine in ancient Greece called .
The first evidence of absinthe, in the sense of a distilled spirit containing green anise and fennel, dates to the 18th century. According to popular legend, it began as an all-purpose patent remedy created by Dr. Pierre Ordinaire, a French doctor living in Couvet, Switzerland, around 1792 (the exact date varies by account). Ordinaire's recipe was passed on to the Henriod sisters of Couvet, who sold it as a medicinal elixir. By other accounts, the Henriod sisters may have been making the elixir before Ordinaire's arrival. In either case, a certain Major Dubied acquired the formula from the sisters in 1797 and opened the first absinthe distillery named Dubied Père et Fils in Couvet with his son Marcellin and son-in-law Henry-Louis Pernod. In 1805, they built a second distillery in Pontarlier, France, under the company name Maison Pernod Fils. Pernod Fils remained one of the most popular brands of absinthe until the drink was banned in France in 1914.
Growth of consumption
Absinthe's popularity grew steadily through the 1840s, when it was given to French troops in Algeria as a malaria preventive, and the troops brought home their taste for it. Absinthe became so popular in bars, bistros, cafés, and cabarets by the 1860s that the hour of 5 pm was called l'heure verte ("the green hour"). It was favoured by all social classes, from the wealthy bourgeoisie to poor artists and ordinary working-class people. By the 1880s, mass production had caused the price to drop sharply, and by 1910 the French were drinking 36 million litres of absinthe per year, compared to their annual consumption of almost 5 billion litres of wine.
Absinthe was exported widely from France and Switzerland and attained some degree of popularity in other countries, including Spain, the United Kingdom, the United States, and the Czech Republic. It was never banned in Spain or Portugal, and its production and consumption have never ceased. It gained a temporary spike in popularity there during the early 20th century, corresponding with the Art Nouveau and Modernism aesthetic movements.
New Orleans has a cultural association with absinthe and is credited as the birthplace of the Sazerac, perhaps the earliest absinthe cocktail. The Old Absinthe House bar on Bourbon Street began selling absinthe in the first half of the 19th century. Its Catalan lease-holder, Cayetano Ferrer, named it the Absinthe Room in 1874 due to the popularity of the drink, which was served in the Parisian style. It was frequented by Mark Twain, Oscar Wilde, Franklin Delano Roosevelt, Aleister Crowley, and Frank Sinatra.
Bans
Absinthe became associated with violent crimes and social disorder, and one modern writer claims that this trend was spurred by fabricated claims and smear campaigns, which he claims were orchestrated by the temperance movement and the wine industry. One critic claimed:
Edgar Degas's 1876 painting L'Absinthe, which can be seen at the Musée d'Orsay, epitomises the popular view of absinthe addicts as sodden and benumbed, and Émile Zola described its effects in his novel L'Assommoir.
In 1905, Swiss farmer Jean Lanfray murdered his family and attempted to kill himself after drinking absinthe. Lanfray was an alcoholic who had drunk a lot of wine and brandy before the killings, but that was overlooked or ignored, and blame for the murders was placed solely on his consumption of two glasses of absinthe. The Lanfray murders were the tipping point in this hotly debated topic, and a subsequent petition collected more than 82,000 signatures to ban it in Switzerland. A referendum was held on 5 July 1908. It was approved by voters, and the prohibition of absinthe was written into the Swiss constitution.
In 1906, Belgium and Brazil banned the sale and distribution of absinthe although these were not the first countries to take such action. It had been banned as early as 1898 in the colony of the Congo Free State. The Netherlands banned it in 1909, Switzerland in 1910, the United States in 1912, and France in 1914.
The prohibition of absinthe in France eventually led to the popularity of pastis, and to a lesser extent, ouzo, and other anise-flavoured spirits that do not contain wormwood. Following the conclusion of the First World War, production of the Pernod Fils brand was resumed at the Banus distillery in Catalonia, Spain (where absinthe was still legal), but gradually declining sales saw the cessation of production in the 1960s. In Switzerland, the ban served only to drive the production of absinthe underground. Clandestine home distillers produced colourless absinthe (la Bleue), which was easier to conceal from the authorities. Many countries never banned absinthe, notably the United Kingdom, where it had never been as popular as in continental Europe.
Modern revival
British importer BBH Spirits began to import Hill's Absinth from the Czech Republic in the 1990s, as the UK had never formally banned it, and this sparked a modern resurgence in its popularity. It began to reappear during a revival in the 1990s in countries where it was never banned. Forms of absinthe available during that time consisted almost exclusively of Czech, Spanish, and Portuguese brands that were of recent origin, typically consisting of Bohemian-style products. Connoisseurs considered these of inferior quality and not representative of the 19th-century spirit. In 2000, La Fée Absinthe became the first commercial absinthe distilled and bottled in France since the 1914 ban, but it is now one of dozens of brands that are produced and sold within France.
In the Netherlands, the restrictions were challenged by Amsterdam wineseller Menno Boorsma in July 2004, thus confirming the legality of absinthe once again. Similarly, Belgium lifted its long-standing ban on 1 January 2005, citing a conflict with the adopted food and beverage regulations of the single European Market. In Switzerland, the constitutional ban was repealed in 2000 during an overhaul of the national constitution, although the prohibition was written into ordinary law instead. That law was later repealed, and absinthe was made legal on 1 March 2005.
The drink was never officially banned in Spain although it began to fall out of favour in the 1940s and almost vanished into obscurity. Catalonia has seen significant resurgence since 2007 when one producer established operations there. Absinthe has never been illegal to import or manufacture in Australia although importation requires a permit under the Customs (Prohibited Imports) Regulation 1956 due to a restriction on importing any product containing oil of wormwood. In 2000, an amendment made all wormwood species prohibited herbs for food purposes under Food Standard 1.4.4. Prohibited and Restricted Plants and Fungi. However, this amendment was found inconsistent with other parts of the pre-existing Food Code, and it was withdrawn in 2002 during the transition between the two codes, thereby continuing to allow absinthe manufacture and importation through the existing permit-based system. These events were erroneously reported by the media as it having been reclassified from a prohibited product to a restricted product.
In 2007, the French brand Lucid became the first genuine absinthe to receive a Certificate of Label Approval for import into the United States since 1912, following independent efforts by representatives from Lucid and Kübler to overturn the long-standing U.S. ban. In December 2007, St. George Absinthe Verte produced by St. George Spirits of Alameda, California became the first brand of American-made absinthe produced in the United States since the ban. Since that time, other micro-distilleries have started producing small batches in the United States.
The French Absinthe Ban of 1915 was repealed in May 2011 following petitions by the , which represents French distillers, and the French Senate voted to repeal the prohibition in April 2011.
In Switzerland, the village of Môtiers, Val-de-Travers, near Neuchâtel, became the focal point of production and promotion of the liquor after a ban of nearly 100 years was lifted. The national Maison de l'Absinthe (House of Absinthe), with its attached museum, is located in the former courthouse where absinthe distillers were formerly prosecuted.
The 21st century has seen new types of absinthe, including various frozen preparations, that have become increasingly popular.
Production
Most countries have no legal definition for absinthe, whereas the method of production and content of spirits such as whisky, brandy, and gin are globally defined and regulated. Therefore, producers are at liberty to label a product as "absinthe" or "absinth" without regard to any specific legal definition or quality standards.
Producers of legitimate absinthes employ one of two historically defined processes to create the finished spirit – distillation or cold mixing. In the sole country (Switzerland) that does possess a legal definition of absinthe, distillation is the only permitted method of production.
Distilled absinthe
Distilled absinthe employs a method of production similar to that of high-quality gin. Botanicals are initially macerated in distilled base alcohol before being redistilled to exclude bitter principles, and impart the desired complexity and texture to the spirit.
The distillation of absinthe first yields a colourless distillate that leaves the alembic at around 72% ABV. The distillate may be reduced and bottled clear, to produce a Blanche or la Bleue absinthe, or it may be coloured to create a verte using natural or artificial colouring.
Traditional absinthes obtain their green color strictly from the chlorophyll of whole herbs, which is extracted from the plants during the secondary maceration. This step involves steeping plants such as petite wormwood, hyssop, and melissa (among other herbs) in the distillate. Chlorophyll from these herbs is extracted in the process, giving the drink its famous green color.
This step also provides a herbal complexity that is typical of high-quality absinthe. The natural coloring process is considered critical for absinthe ageing, since the chlorophyll remains chemically active. The chlorophyll serves a similar role in absinthe that tannins do in wine or brown liquors.
After the coloring process, the resulting product is diluted with water to the desired percentage of alcohol. The flavor of absinthe is said to improve materially with storage, and many distilleries, before the ban, aged their absinthe in settling tanks before bottling.
Cold mixed absinthe
Many modern absinthes are produced using a cold-mix process. This inexpensive method of production does not involve distillation, and is regarded as inferior for the same reasons that give cause for cheaply compounded gin to be legally differentiated from distilled gin. The cold mixing process involves the simple blending of flavouring essences and artificial colouring in commercial alcohol, in similar fashion to most flavoured vodkas and inexpensive liqueurs and cordials. Some modern cold-mixed absinthes have been bottled at strengths approaching 90% ABV. Others are presented simply as a bottle of plain alcohol with a small amount of powdered herbs suspended within it.
The lack of a formal legal definition in most countries to regulate the production and quality of absinthe has enabled cheaply made products to be falsely presented as traditional in production and composition. In Switzerland, the only country with a formal legal definition of absinthe, any absinthe product not obtained by maceration and distillation or coloured artificially cannot be sold as absinthe.
Ingredients
Absinthe is traditionally prepared from a distillation of neutral alcohol, various herbs, spices, and water. Traditional absinthes were redistilled from a white grape spirit (or eau de vie), while lesser absinthes were more commonly made from alcohol from grains, beets, or potatoes. The principal botanicals are grande wormwood, green anise, and florence fennel, which are often called "the holy trinity". Many other herbs may be used as well, such as petite wormwood (Artemisia pontica or Roman wormwood), hyssop, melissa, star anise, angelica, peppermint, coriander, and veronica.
One early recipe was included in 1864's The English and Australian Cookery Book. It directed the maker to "Take of the tops of wormwood, four pounds; root of angelica, calamus aromaticus, aniseed, leaves of dittany, of each one ounce; alcohol, four gallons. Macerate these substances during eight days, add a little water, and distil by a gentle fire, until two gallons are obtained. This is reduced to a proof spirit, and a few drops of the oil of aniseed added."
Alternative colouring
Adding to absinthe's negative reputation in the late 19th and early 20th centuries, unscrupulous makers of the drink omitted the traditional colouring phase of production in favour of adding toxic copper salts to artificially induce a green tint. This practice may be responsible for some of the alleged toxicity historically associated with this beverage. Many modern-day producers resort to other shortcuts, including the use of artificial food coloring to create the green color. Additionally, at least some cheap absinthes produced before the ban were reportedly adulterated with poisonous antimony trichloride, reputed to enhance the louching effect.
Absinthe may also be naturally coloured pink or red using rose or hibiscus flowers. This was referred to as a rose (pink) or rouge (red) absinthe. Only one historical brand of rose absinthe has been documented.
Bottled strength
Absinthe was historically bottled at 45–74% ABV. Some modern Franco–Suisse absinthes are bottled at up to 83% ABV, while some modern, cold-mixed bohemian-style absinthes are bottled at up to 89.9% ABV.
Kits
The modern-day interest in absinthe has spawned a rash of absinthe kits from sellers claiming they produce homemade absinthe. Kits often call for soaking herbs in vodka or alcohol, or adding a liquid concentrate to vodka or alcohol to create an ersatz absinthe. Such practices usually yield a harsh substance that bears little resemblance to the genuine article, and are considered inauthentic by any practical standard. Some concoctions may even be dangerous, especially if they call for a potentially poisonous inclusion of herbs, oils, or extracts. In at least one documented case, a person suffered acute kidney injury after drinking 10 ml of pure wormwood oil.
Alternatives
In baking and in preparing the classic New Orleans-style Sazerac cocktail, anise-flavored liqueurs and pastis have often been used as a substitute if absinthe is unavailable.
Preparation
The traditional French preparation involves placing a sugar cube on top of a specially designed slotted spoon, and placing the spoon on a glass filled with a measure of absinthe. Iced water is poured or dripped over the sugar cube to mix the water into the absinthe. The final preparation contains 1 part absinthe and 3–5 parts water. As water dilutes the spirit, those components with poor water solubility (mainly those from anise, fennel, and star anise) come out of solution and cloud the drink. The resulting milky opalescence is called the louche (French for 'opaque' or 'shady'). The release of these dissolved essences coincides with a perfuming of herbal aromas and flavours that "blossom" or "bloom", and brings out subtleties that are otherwise muted within the neat spirit. This reflects what is perhaps the oldest and purest method of preparation, and is often referred to as the French method.
The Bohemian method is a recent invention that involves fire, and was not performed during absinthe's peak of popularity in the Belle Époque. Like the French method, a sugar cube is placed on a slotted spoon over a glass containing one shot of absinthe. The sugar is soaked in alcohol (usually more absinthe), then set ablaze. The flaming sugar cube is then dropped into the glass, thus igniting the absinthe. Finally, a shot glass of water is added to douse the flames. This method tends to produce a stronger drink than the French method. A variant of the Bohemian method involves allowing the fire to extinguish on its own. This variant is sometimes referred to as "cooking the absinthe" or "the flaming green fairy". The origin of this burning ritual may borrow from a coffee and brandy drink that was served at Café Brûlot, in which a sugar cube soaked in brandy was set aflame. Most experienced absintheurs do not recommend the Bohemian Method and consider it a modern gimmick, as it can destroy the absinthe flavour and present a fire hazard due to the unusually high alcohol content present in absinthe.
In 19th century Parisian cafés, upon receiving an order for an absinthe, a waiter would present the patron with a dose of absinthe in a suitable glass, sugar, absinthe spoon, and a carafe of iced water. It was up to the patron to prepare the drink, as the inclusion or omission of sugar was strictly an individual preference, as was the amount of water used. As the popularity of the drink increased, additional accoutrements of preparation appeared, including the absinthe fountain, which was effectively a large jar of iced water with spigots, mounted on a lamp base. This let drinkers prepare a number of drinks at once, and with a hands-free drip, patrons could socialise while louching a glass.
Although many bars served absinthe in standard glassware, a number of glasses were specifically designed for the French absinthe preparation ritual. Absinthe glasses were typically fashioned with a dose line, bulge, or bubble in the lower portion denoting how much absinthe should be poured. One "dose" of absinthe ranged anywhere around 2–2.5 fluid ounces (60–75 ml).
In addition to being prepared with sugar and water, absinthe emerged as a popular cocktail ingredient in both the United Kingdom and the United States. By 1930, dozens of fancy cocktails that called for absinthe had been published in numerous credible bartender guides. One of the most famous of these libations is Ernest Hemingway's "Death in the Afternoon" cocktail, a tongue-in-cheek concoction that contributed to a 1935 collection of celebrity recipes. The directions are: "Pour one jigger absinthe into a Champagne glass. Add iced Champagne until it attains the proper opalescent milkiness. Drink three to five of these slowly."
Styles
Most categorical alcoholic beverages have regulations governing their classification and labelling, while those governing absinthe have always been conspicuously lacking. According to popular treatises from the 19th century, absinthe could be loosely categorised into several grades (ordinaire, demi-fine, fine, and Suisse – the latter does not denote origin), in order of increasing alcoholic strength and quality. Many contemporary absinthe critics simply classify absinthe as distilled or mixed, according to its production method. And while the former is generally considered far superior in quality to the latter, an absinthe's simple claim of being 'distilled' makes no guarantee as to the quality of its base ingredients or the skill of its maker.
Blanche absinthe ("white" in French, also referred to as la Bleue in Switzerland) is bottled directly following distillation and reduction, and is uncoloured (clear). Blanches tend to have a clean, smooth flavour with strongly individuated tasting notes. The name la Bleue was originally a term used for Swiss bootleg absinthe, which was bottled colourless so as to be visually indistinct from other spirits during the era of absinthe prohibition, but has become a popular term for post-ban Swiss-style absinthe in general. Blanches are often lower in alcohol content than vertes, though this is not necessarily so; the only truly differentiating factor is that blanches are not put through a secondary maceration stage, and thus remain colourless like other distilled liquors.
Verte absinthe ("green" in French, sometimes called la fée verte) begins as a blanche, and is altered by a secondary maceration stage, in which a separate mixture of herbs is steeped into the clear distillate before bottling. This confers an intense, complex flavor as well as a peridot green hue. Vertes represent the prevailing type of absinthe that was found in the 19th century. Vertes are typically more alcoholic than blanches, as the high amounts of botanical oils conferred during the secondary maceration only remain miscible at lower concentrations of water, thus vertes are usually bottled at closer to still strength. Artificially colored green absinthes may also be claimed to be verte, though they lack the characteristic herbal flavors that result from maceration in whole herbs.
Absenta ("absinthe" in Spanish) is sometimes associated with a regional style that often differed slightly from its French cousin. Traditional absentas may taste slightly different due to their use of Alicante anise, and often exhibit a characteristic citrus flavour.
Hausgemacht (German for home-made, often abbreviated as HG) refers to clandestine absinthe (not to be confused with the Swiss La Clandestine brand) that is home-distilled by hobbyists. It should not be confused with absinthe kits. Hausgemacht absinthe is produced in tiny quantities for personal use and not for the commercial market. Clandestine production increased after absinthe was banned, when small producers went underground, most notably in Switzerland. Although the ban has been lifted in Switzerland, some clandestine distillers have not legitimised their production. Authorities believe that high taxes on alcohol and the mystique of being underground are likely reasons.
Bohemian-style absinth is also referred to as Czech-style absinthe, anise-free absinthe, or just "absinth" (without the "e"), and is best described as a wormwood bitters. It is produced mainly in the Czech Republic, from which it gets its designation as Bohemian or Czech, although not all absinthes from the Czech Republic are Bohemian-style. Bohemian-style absinth typically contains little or none of the anise, fennel, and other herbal flavours associated with traditional absinthe, and thus bears very little resemblance to the absinthes made popular in the 19th century. Typical Bohemian-style absinth has only two similarities with its authentic, traditional counterpart: it contains wormwood and has a high alcohol content. The Czechs are credited with inventing the fire ritual in the 1990s, possibly because Bohemian-style absinth does not louche, which renders the traditional French preparation method useless. As such, this type of absinthe and the fire ritual associated with it are entirely modern fabrications, and have little to no relationship with the historical absinthe tradition.
Storage
Absinthe that is artificially coloured or clear is aesthetically stable, and can be bottled in clear glass. If naturally colored absinthe is exposed to light or air for a prolonged period, the chlorophyll gradually becomes oxidized, which has the effect of gradually changing the color from green to yellow green, and eventually to brown. The colour of absinthe that has completed this transition was historically referred to as feuille morte ("dead leaf"). In the pre-ban era, this natural phenomenon was favourably viewed, for it confirmed the product in question was coloured naturally, and not artificially with potentially toxic chemicals. Predictably, vintage absinthes often emerge from sealed bottles as distinctly amber in tint due to decades of slow oxidation. Though this colour change presents no adverse impact to the flavour of absinthe, it is generally desired to preserve the original colour, which requires that naturally coloured absinthe be bottled in dark, light resistant bottles. Absinthe intended for decades of storage should be kept in a cool (room temperature), dry place, away from light and heat. Absinthe should not be stored in the refrigerator or freezer, as the anethole may polymerise inside the bottle, creating an irreversible precipitate, and adversely impacting the original flavour.
Health effects
Absinthe has been frequently described in modern times as being hallucinogenic, a claim refuted by modern science. The belief that absinthe induces hallucinogenic effects is rooted, at least partly, in the findings of 19th century French psychiatrist Valentin Magnan, who carried out ten years of experiments with wormwood oil. In the course of this research, he studied 250 cases of alcoholism and concluded that those who abused absinthe were worse off than those who abused other alcoholic drinks, experiencing rapid-onset hallucinations. Such accounts by opponents of absinthe (like Magnan) were cheerfully embraced by famous absinthe drinkers, many of whom were bohemian artists or writers.
Two famous artists who helped popularise the notion that absinthe had powerful psychoactive properties were Toulouse-Lautrec and Vincent van Gogh. In one of the best-known written accounts of absinthe drinking, an inebriated Oscar Wilde described a phantom sensation of having tulips brush against his legs after leaving a bar at closing time.
Notions of absinthe's alleged hallucinogenic properties were again fuelled in the 1970s, when a scientific paper suggested that thujone's structural similarity to tetrahydrocannabinol (THC), the active chemical in cannabis, presented the possibility of THC receptor affinity. Counterevidence to this was published in 1999.
The debate over whether absinthe produces effects on the human mind in addition to those of alcohol has not been resolved conclusively. The effects of absinthe have been described by some as mind opening. The most commonly reported experience is a "clear-headed" feeling of inebriation – a form of "lucid drunkenness". Chemist, historian and absinthe distiller Ted Breaux has claimed that the alleged secondary effects of absinthe may be because some of the herbal compounds in the drink act as stimulants, while others act as sedatives, creating an overall lucid effect of awakening. The long-term effects of moderate absinthe consumption in humans remain unknown, although herbs traditionally used to produce absinthe are reported to have both painkilling and antiparasitic properties.
Today it is known that absinthe does not cause hallucinations. It is widely accepted that reports of hallucinogenic effects resulting from absinthe consumption were attributable to the poisonous adulterants being added to cheaper versions of the drink in the 19th century, such as oil of wormwood, impure alcohol (contaminated possibly with methanol), and poisonous colouring matter – notably (among other green copper salts) cupric acetate and antimony trichloride (the last-named being used to fake the ouzo effect).
Controversy
It was once widely promoted that excessive absinthe drinking caused effects that were discernible from those associated with alcoholism, a belief that led to the coining of the term absinthism. One of the first vilifications of absinthe followed an 1864 experiment in which Magnan simultaneously exposed one guinea pig to large doses of pure wormwood vapour, and another to alcohol vapours. The guinea pig exposed to wormwood vapour experienced convulsive seizures, while the animal exposed to alcohol did not. Magnan would later blame the naturally occurring (in wormwood) chemical thujone for these effects.
Thujone, once widely believed to be an active chemical in absinthe, is a GABA antagonist, and while it can produce muscle spasms in large doses, there is no direct evidence to suggest it causes hallucinations. Past reports estimated thujone concentrations in absinthe as being up to 260 mg/kg. More recently, published scientific analyses of samples of various original absinthes have disproved previous estimates, and demonstrated that only a trace of the thujone present in wormwood actually makes it into a properly distilled absinthe when historical methods and materials are employed to create the spirit. As such, most traditionally crafted absinthes, both vintage and modern, fall within the current EU standards.
Tests conducted on mice to study toxicity showed an oral median lethal dose (LD50) of about 45 mg thujone per kg of body weight, which represents far more absinthe than could be realistically consumed. The high percentage of alcohol in absinthe would result in mortality long before thujone could become a factor. In documented cases of acute thujone poisoning as a result of oral ingestion, the source of thujone was not commercial absinthe, but rather non-absinthe-related sources, such as common essential oils (which may contain as much as 50% thujone).
One study published in the Journal of Studies on Alcohol concluded that high doses (0.28 mg/kg) of thujone in alcohol had negative effects on attention performance in a clinical setting. It delayed reaction time, and caused subjects to concentrate their attention into the central field of vision. Low doses (0.028 mg/kg) did not produce an effect noticeably different from the plain alcohol control. While the effects of the high dose samples were statistically significant in a double blind test, the test subjects themselves were unable to reliably identify which samples contained thujone. For the average man, the high dose samples in the study would equate to 18.2 mg of thujone. The EU limit of 35 mg/L of thujone in absinthe means that given the highest permitted thujone content, that individual would need to consume approximately 0.5 litres of high proof (e.g. 50%+ ABV) spirit before the thujone could be metabolized in order to display effects detectable in a clinical setting, which would result in a potentially lethal BAC of >0.4%.
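A back-of-the-envelope Python sketch reproduces this arithmetic; the 65 kg body mass, 0.68 Widmark factor, ethanol density and 50% ABV used below are illustrative assumptions, not figures taken from the study:

```python
# Rough check of the dose arithmetic described above (assumed values).
body_mass_kg = 65            # "average man" assumed for the example
high_dose_mg_per_kg = 0.28   # high thujone dose from the study
eu_limit_mg_per_L = 35       # maximum thujone permitted in absinthe (EU)
abv = 0.50                   # high-proof spirit

thujone_dose_mg = high_dose_mg_per_kg * body_mass_kg      # ≈ 18.2 mg
volume_needed_L = thujone_dose_mg / eu_limit_mg_per_L     # ≈ 0.52 L of spirit

ethanol_g = volume_needed_L * 1000 * abv * 0.789          # grams of ethanol
widmark_r = 0.68                                          # distribution ratio (men)
bac_percent = ethanol_g / (body_mass_kg * 1000 * widmark_r) * 100

print(f"{thujone_dose_mg:.1f} mg thujone -> {volume_needed_L:.2f} L of spirit, "
      f"BAC ≈ {bac_percent:.2f}%")   # ≈ 0.46%, i.e. greater than 0.4%
```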
Regulations
Most countries (except Switzerland) at present do not possess a legal definition of absinthe (unlike Scotch whisky or cognac). Accordingly, producers are free to label a product "absinthe" or "absinth", whether or not it bears any resemblance to the traditional spirit.
Australia
Absinthe is readily available in many bottle shops. Bitters may contain a maximum 35 mg/kg thujone, while other alcoholic beverages can contain a maximum 10 mg/kg. The domestic production and sale of absinthe is regulated by state licensing laws.
Until 13 July 2013, the import and sale of absinthe technically required a special permit, since "oil of wormwood, being an essential oil obtained from plants of the genus Artemisia, and preparations containing oil of wormwood" were listed as item 12A, Schedule 8, Regulation 5H of the Customs (Prohibited Imports) Regulations 1956 (Cth). These controls have now been repealed, and permission is no longer required.
Brazil
Absinthe was prohibited in Brazil until 1999, when it was brought in by entrepreneur Lalo Zanini and legalised the same year. Presently, absinthe sold in Brazil must abide by the national law that restricts all spirits to a maximum of 54% ABV. While this regulation is enforced throughout channels of legal distribution, it may be possible to find absinthe containing alcohol in excess of the legal limit in some restaurants or food fairs.
Canada
In Canada, liquor laws concerning the production, distribution, and sale of spirits are written and enforced by individual provincial government monopolies. Each product is subject to the approval of a respective individual provincial liquor board before it can be sold in that province. Importation is a federal matter, and is enforced by the Canada Border Services Agency. The importation of a nominal amount of liquor by individuals for personal use is permitted, provided that conditions for the individual's duration of stay outside the country are satisfied.
British Columbia, New Brunswick: no established limits on thujone content
Alberta, Ontario: 10 mg/kg
Manitoba: 6–8 mg
Quebec: 15 mg/kg
Newfoundland and Labrador: absinthe sold in provincial liquor store outlets
Nova Scotia: absinthe sold in provincial liquor store outlets
Prince Edward Island: absinthe is not sold in provincial liquor store outlets, but one brand (Deep Roots) produced on the island can be procured locally.
Saskatchewan: Only one brand listed in provincial liquor stores, although an individual is permitted to import one case (usually twelve 750 ml bottles or eight one-litre bottles) of any liquor.
Ontario: 3 brands of absinthe are listed for sale on the web site of the Liquor Control Board of Ontario
In 2007, Canada's first genuine absinthe (Taboo Absinthe) was created by Okanagan Spirits Craft Distillery in British Columbia.
European Union
The European Union permits a maximum thujone level of 35 mg/kg in alcoholic beverages where Artemisia species is a listed ingredient, and 10 mg/kg in other alcoholic beverages. Member countries regulate absinthe production within this framework. The sale of absinthe is permitted in all EU countries unless they further regulate it.
Finland
The sale and production of absinthe was prohibited in Finland from 1919 to 1932; no current prohibitions exist. The government-owned chain of liquor stores (Alko) is the only outlet that may sell alcoholic beverages containing over 8% ABV, although national law bans the sale of alcoholic beverages containing over 80% ABV.
France
Édouard Manet's first major painting The Absinthe Drinker was controversial, and was rejected by the Paris Salon in 1859. Despite adopting sweeping EU food and beverage regulations in 1988 that effectively re-legalised absinthe, a decree was passed that same year that preserved the prohibition on products explicitly labelled as "absinthe", while placing strict limits on fenchone (fennel) and pinocamphone (hyssop) in an obvious, but failed, attempt to thwart a possible return of absinthe-like products. French producers circumvented this regulatory obstacle by labelling absinthe as ('wormwood-based spirits'), with many either reducing or omitting fennel and hyssop altogether from their products. A legal challenge to the scientific basis of this decree resulted in its repeal (2009), which opened the door for the official French re-legalisation of absinthe for the first time since 1915. The French Senate voted to repeal the prohibition in mid-April 2011.
Georgia
It is legal to produce and sell absinthe in Georgia, which has claimed to possess several producers of absinthe.
Germany
A ban on absinthe was enacted in Germany on 27 March 1923. In addition to banning the production of and commercial trade in absinthe, the law went so far as to prohibit the distribution of printed matter that provided details of its production. The original ban was lifted in 1981, but the use of Artemisia absinthium as a flavouring agent remained prohibited. On 27 September 1991, Germany adopted the European Community's standards of 1988, which effectively re-legalised absinthe.
Italy
The Fascist regime in 1926 banned the production, import, transport and sale of any liquor named . The ban was reinforced in 1931 with harsher penalties for transgressors, and remained in force until 1992 when the Italian government amended its laws to comply with the EU directive 88/388/EEC.
New Zealand
Although absinthe is not prohibited at national level, some local authorities have banned it. The latest is Mataura in Southland. The ban came in August 2008 after several incidents of misuse drew public and police attention. One incident resulted in breathing difficulties and the hospitalisation of a 17-year-old for alcohol poisoning. The particular brand of absinthe involved was bottled at 89% ABV.
Sweden and Norway
The sale and production of absinthe has never been prohibited in Sweden or Norway. However, the only outlet that may sell alcoholic beverages containing more than 3.5% ABV in Sweden and 4.75% ABV in Norway is the government-owned chain of liquor stores known as Systembolaget in Sweden and Vinmonopolet in Norway. Systembolaget and Vinmonopolet did not import or sell absinthe for many years after the ban in France; however, today several absinthes are available for purchase in Systembolaget stores, including Swedish made distilled absinthe. In Norway, on the other hand, one is less likely to find many absinthes since Norwegian alcohol law prohibits the sale and importation of alcoholic beverages above 60% ABV, which eliminates most absinthes.
Switzerland
In Switzerland, the sale and production of absinthe was prohibited from 1910 to 1 March 2005. This was based on a vote in 1908. To be legally made or sold in Switzerland, absinthe must be distilled, must not contain certain additives, and must be either naturally coloured or left uncoloured.
In 2014, the Federal Administrative Court of Switzerland invalidated a governmental decision of 2010 which allowed only absinthe made in the Val-de-Travers region to be labelled as absinthe in Switzerland. The court found that absinthe was a label for a product and was not tied to a geographic origin.
United States
In 2007, the Alcohol and Tobacco Tax and Trade Bureau (TTB) effectively lifted the long-standing absinthe ban, and it has since approved many brands for sale in the US market. This was made possible partly through the TTB's clarification of the Food and Drug Administration's (FDA) thujone content regulations, which specify that finished food and beverages that contain Artemisia species must be thujone-free. In this context, the TTB considers a product thujone-free if the thujone content is less than 10 ppm (equal to 10 mg/kg). This is verified through the use of gas chromatography–mass spectrometry. The brands Kübler and Lucid and their lawyers did most of the work to get absinthe legalized in the U.S., over the 2004–2007 time period. In the U.S., 5 March sometimes is referred to as "National Absinthe Day", as it was the day the 95-year ban on absinthe was finally lifted.
The import, distribution, and sale of absinthe are permitted subject to the following restrictions:
The product must be thujone-free as per TTB guidelines,
The word "absinthe" can neither be the brand name nor stand alone on the label, and
The packaging cannot "project images of hallucinogenic, psychotropic, or mind-altering effects".
Absinthe imported in violation of these regulations is subject to seizure at the discretion of U.S. Customs and Border Protection.
Beginning in 2000, a product called Absente was sold legally in the United States under the marketing tagline "Absinthe Refined", but as the product contained sugar and, before 2009, was made with southernwood (Artemisia abrotanum) rather than grande wormwood (Artemisia absinthium), the TTB classified it as a liqueur.
Vanuatu
The Absinthe (Prohibition) Act 1915, passed in the New Hebrides, has never been repealed, is included in the 2006 Vanuatu consolidated legislation, and contains the following all-encompassing restriction: "The manufacture, importation, circulation and sale wholesale or by retail of absinthe or similar liquors in Vanuatu shall be prohibited."
Cultural influence
Numerous artists and writers living in France in the late 19th and early 20th centuries were noted absinthe drinkers and featured absinthe in their work. Some of these included Édouard Manet, Guy de Maupassant, Paul Verlaine, Amedeo Modigliani, Edgar Degas, Henri de Toulouse-Lautrec, Vincent van Gogh, Oscar Wilde, Arthur Rimbaud, and Émile Zola. Many other renowned artists and writers similarly drew from this cultural well, including Aleister Crowley, Ernest Hemingway, Pablo Picasso, August Strindberg, and Erik Satie.
The aura of illicitness and mystery surrounding absinthe has played into literature, movies, music, and television, where it is often portrayed as a mysterious, addictive, and mind-altering drink. Marie Corelli's Wormwood: A Drama of Paris (1890) was a popular novel about a Frenchman driven to murder and ruin after being introduced to absinthe. Intended as a morality tale on the dangers of the drink, it was speculated to have contributed to subsequent bans of absinthe in Europe and the United States.
Some of the earliest film references include The Hasher's Delirium (1910) by Émile Cohl, an early pioneer in the art of animation, as well as two different silent films, each entitled Absinthe, from 1913 and 1914 respectively.
See also
List of alcoholic drinks
References
Further reading
Adams, Jad (2004) Hideous absinthe: a history of the devil in a bottle, London: I.B. Tauris.
External links
"Absinthe's second coming" An April 2001 article in Cigar Aficionado about the first absinthe commercially produced in France since the 1915 ban.
"Swiss face sobering future after legalizing absinthe" A March 2005 Reuters article about the legalising of absinthe in Switzerland.
"The Mystery of the Green Menace"A November 2005 Wired magazine article about a New Orleans man who has researched the chemical content of absinthe and now distills it in France
"The Return of the Green Faerie"A wine and spirit journal article about the history, ritual, and artistic cult of absinthe
The Wormwood Society An independent organisation supporting changes to the US laws and regulations concerning absinthe. Provides articles, a forum and legal information.
"What Is Absinthe"Article discussing absinthe and its effect over mind and body.
Anise liqueurs and spirits
Culinary Heritage of Switzerland
Distilled drinks
French distilled drinks | Absinthe | [
"Chemistry"
] | 9,752 | [
"Distillation",
"Distilled drinks"
] |
11,790,206 | https://en.wikipedia.org/wiki/Automatic%20scorer | An automatic scorer is a computerized scoring system that keeps track of scoring in ten-pin bowling. It was introduced en masse in bowling alleys in the 1970s and was combined with mechanical pinsetters to detect which pins had been knocked down.
By eliminating the need for manual score-keeping, these systems have drawn new bowlers into the game who would otherwise not participate because they would have to keep score themselves; many do not understand the mathematical rules of bowling scoring. At first, people were skeptical about whether a computer could keep an accurate score. In the twenty-first century, automatic scorers are used in most bowling centers around the world. The three manufacturers of these specialty computers have been Brunswick Bowling, AMF Bowling (later QubicaAMF Worldwide), and RCA.
History
Automatic equipment is considered a cornerstone of the modern bowling center. The traditional bowling center of the early 20th century took a major step in automation when the pinsetter person ("pin boy"), who reset the knocked-down pins by hand, was replaced by a machine that automatically replaced the pins in their proper play positions. This machine came out in the 1950s. A detection system was developed from the pinsetter mechanism in the 1960s that could tell which pins had been knocked down, and that information could be transferred to a digital computer.
Automatic electronic scoring was first conceived by Robert Reynolds, who was described by a newspaper story at the time as "a West Coast electronics calculator expert." He worked with the technical staff of Brunswick Bowling to develop it. The goal was realized in the late 1960s when a specialized computer was designed for the purpose of automatic scorekeeping for bowling. The field test for the automatic scorer took place at Village Lanes bowling center, Chicago in 1967. The scoring machine received approval for official use by the American Bowling Congress in August of that year. Automatic scorers were first used in official national league play on October 10, 1967. In November, Brunswick announced that they were accepting orders for the new digital computer, which cost around $3,000 per bowling lane. Bowling centers that installed these new automatic scoring devices in the 1970s charged ten cents extra per line of scoring for the convenience.
Description
Each Automatic Scorer computer unit kept score for four lanes. It had two bowler identification panels, each serving two lanes. When a bowler's turn came up, he pushed the panel into his named position so the computer knew who was bowling and could score accordingly. After the bowler rolled the bowling ball down the lane and knocked down pins, the pinsetter detected which pins were down and relayed this information back to the computer for scoring. The result was then printed on a scoresheet and projected overhead onto a large screen for all to see.
The Automatic Scorer digital computer was mathematically accurate; however, the detection system at the pinsetter mechanism sometimes reported the wrong number of pins knocked down. The computer could be corrected manually for any errors in the system; similarly, human errors, such as neglecting to move the bowler identification mechanism, could be corrected by manual action.
The scorer could take into account bowlers' handicaps and could adjust for late-arriving bowlers. The automatic scorer is directly connected to the foul detection unit. As a result, foul line violations are automatically scored.
Brunswick had put ten years of research and development into the Automatic Scorer, and by 1972 there were over 500 of these computers installed in bowling centers around the world. AMF Bowling, competitor to Brunswick, entered into the automatic scorer computer field during the 1970s and their systems were installed into their brand of bowling centers. By 1974, RCA was also making these computers for automatic scoring.
Reception and further developments
The purposes of the computerized scoring were to avoid errors by human scorers and to prevent cheating. It had the side benefits of speeding up play and introducing new bowlers to the game. Score-keeping for bowling is based on a formula that many newcomers were not familiar with and thought difficult to learn, and some casual bowlers found the scores produced by the computers confusing. Some bowlers were not comfortable with automatic scorers when they were introduced in the 1970s, and so kept score using the traditional method on paper score sheets.
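Those scoring rules are compact but easy to misapply by hand. The following Python sketch of standard ten-pin scoring (an illustration only, not Brunswick's or AMF's actual scoring firmware) shows the logic an automatic scorer has to implement: a strike adds the next two rolls to the frame, a spare adds the next one.

```python
# Minimal sketch of standard ten-pin bowling scoring.
def score_game(rolls):
    """rolls: flat list of pins knocked down per ball, e.g. [10, 7, 3, 9, 0, ...]."""
    total, i = 0, 0
    for _frame in range(10):
        if rolls[i] == 10:                      # strike: 10 + next two rolls
            total += 10 + rolls[i + 1] + rolls[i + 2]
            i += 1
        elif rolls[i] + rolls[i + 1] == 10:     # spare: 10 + next roll
            total += 10 + rolls[i + 2]
            i += 2
        else:                                   # open frame
            total += rolls[i] + rolls[i + 1]
            i += 2
    return total

print(score_game([10] * 12))            # perfect game -> 300
print(score_game([9, 1] * 10 + [9]))    # all 9-and-spare frames -> 190
```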
The introduction of this device increased the popularity of the sport. Automatic scorers came to be considered a normal part of modern bowling installations worldwide, with owners and managers saying that bowlers expect such equipment to be present in bowling establishments and that business increased following their introduction. Brunswick introduced a color-television-style automatic scorer in 1983. Bowling center owners could use this style of automatic scorer for advertising, management, videos, and live television.
By the 2010s, these types of electronic visual displays could show bowler avatars and social media connections to publicize the bowlers' scores. Some are capable of serving as extended entertainment systems with games for children and adults. Some scoring systems support variations on traditional bowling, such as different kinds of bingo games where certain pins have to be knocked down at certain times, or practice regimes where certain spares have to be made.
By this point, QubicaAMF Worldwide, an outgrowth of AMF, was one of the leading providers of bowling scoring equipment.
Footnotes
Ten-pin bowling
Sports equipment
Automation
20th-century inventions
American inventions | Automatic scorer | [
"Engineering"
] | 1,062 | [
"Control engineering",
"Automation"
] |
11,790,568 | https://en.wikipedia.org/wiki/Percolation%20threshold | The percolation threshold is a mathematical concept in percolation theory that describes the formation of long-range connectivity in random systems. Below the threshold a giant connected component does not exist; while above it, there exists a giant component of the order of system size. In engineering and coffee making, percolation represents the flow of fluids through porous media, but in the mathematics and physics worlds it generally refers to simplified lattice models of random systems or networks (graphs), and the nature of the connectivity in them. The percolation threshold is the critical value of the occupation probability p, or more generally a critical surface for a group of parameters p1, p2, ..., such that infinite connectivity (percolation) first occurs.
Percolation models
The most common percolation model is to take a regular lattice, like a square lattice, and make it into a random network by randomly "occupying" sites (vertices) or bonds (edges) with a statistically independent probability p. At a critical threshold pc, large clusters and long-range connectivity first appear, and this is called the percolation threshold. Depending on the method for obtaining the random network, one distinguishes between the site percolation threshold and the bond percolation threshold. More general systems have several probabilities p1, p2, etc., and the transition is characterized by a critical surface or manifold. One can also consider continuum systems, such as overlapping disks and spheres placed randomly, or the negative space (Swiss-cheese models).
To understand the threshold, one can consider a quantity such as the probability that there is a continuous path from one boundary to another along occupied sites or bonds—that is, within a single cluster. For example, one can consider a square system, and ask for the probability P that there is a path from the top boundary to the bottom boundary. As a function of the occupation probability p, one finds a sigmoidal plot that goes from P=0 at p=0 to P=1 at p=1. The larger the square is compared to the lattice spacing, the sharper the transition will be. When the system size goes to infinity, P(p) will be a step function at the threshold value pc. For finite large systems, P(pc) is a constant whose value depends upon the shape of the system; for the square system discussed above, P(pc) = 1/2 exactly for any lattice by a simple symmetry argument.
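As a concrete illustration, a small Monte Carlo sketch (the lattice size, number of trials and 4-connectivity below are arbitrary choices, not taken from a particular reference) can estimate this top-to-bottom spanning probability for site percolation on a square lattice:

```python
# Monte Carlo sketch: spanning probability P(p) for site percolation on an
# L x L square lattice, using connected-component labelling.
import numpy as np
from scipy.ndimage import label

def spanning_probability(p, L=64, trials=200, rng=np.random.default_rng(1)):
    hits = 0
    for _ in range(trials):
        occupied = rng.random((L, L)) < p       # occupy each site with probability p
        labels, _ = label(occupied)             # 4-connected clusters
        # A spanning cluster has a label present in both the top and bottom rows.
        top = labels[0][labels[0] > 0]
        bottom = labels[-1][labels[-1] > 0]
        if np.intersect1d(top, bottom).size > 0:
            hits += 1
    return hits / trials

for p in (0.55, 0.59, 0.63):
    print(p, spanning_probability(p))   # rises steeply near pc ≈ 0.5927
```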
There are other signatures of the critical threshold. For example, the size distribution (number of clusters of size s) drops off as a power law for large s at the threshold, ns(pc) ~ s^−τ, where τ is a dimension-dependent percolation critical exponent. For an infinite system, the critical threshold corresponds to the first point (as p increases) at which the size of the largest cluster becomes infinite.
In the systems described so far, it has been assumed that the occupation of a site or bond is completely random—this is the so-called Bernoulli percolation. For a continuum system, random occupancy corresponds to the points being placed by a Poisson process. Further variations involve correlated percolation, such as percolation clusters related to Ising and Potts models of ferromagnets, in which the bonds are put down by the Fortuin–Kasteleyn method. In bootstrap or k-sat percolation, sites and/or bonds are first occupied and then successively culled from a system if a site does not have at least k neighbors. Another important model of percolation, in a different universality class altogether, is directed percolation, where connectivity along a bond depends upon the direction of the flow. Another variation of recent interest is Explosive Percolation, whose thresholds are listed on that page.
Over the last several decades, a tremendous amount of work has gone into finding exact and approximate values of the percolation thresholds for a variety of these systems. Exact thresholds are only known for certain two-dimensional lattices that can be broken up into a self-dual array, such that under a triangle-triangle transformation, the system remains the same. Studies using numerical methods have led to numerous improvements in algorithms and several theoretical discoveries.
Simple duality in two dimensions implies that all fully triangulated lattices (e.g., the triangular, union jack, cross dual, martini dual and asanoha or 3-12 dual, and the Delaunay triangulation) have site thresholds of 1/2, and self-dual lattices (square, martini-B) have bond thresholds of 1/2.
The notation such as (4, 8²) comes from Grünbaum and Shephard, and indicates that around a given vertex, going in the clockwise direction, one encounters first a square and then two octagons. Besides the eleven Archimedean lattices composed of regular polygons with every site equivalent, many other more complicated lattices with sites of different classes have been studied.
Error bars in the last digit or digits are shown by numbers in parentheses. Thus, 0.729724(3) signifies 0.729724 ± 0.000003, and 0.74042195(80) signifies 0.74042195 ± 0.00000080. The error bars variously represent one or two standard deviations in net error (including statistical and expected systematic error), or an empirical confidence interval, depending upon the source.
Percolation on networks
For a random tree-like network (i.e., a connected network with no cycles) without degree-degree correlation, it can be shown that such a network can have a giant component, and the percolation threshold (transmission probability) is given by
pc = 1/g1'(1) = ⟨k⟩/(⟨k²⟩ − ⟨k⟩).
Here g1(z) is the generating function corresponding to the excess degree distribution, ⟨k⟩ is the average degree of the network and ⟨k²⟩ is the second moment of the degree distribution. So, for example, for an ER network, since the degree distribution is a Poisson distribution for which ⟨k²⟩ = ⟨k⟩² + ⟨k⟩, the threshold is at pc = 1/⟨k⟩.
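As a hedged sketch of this criterion (the degree sequence below is synthetic and the function name is illustrative), the threshold of an uncorrelated network can be estimated directly from the first two moments of its degree sequence:

import numpy as np

def percolation_threshold(degrees):
    # p_c = <k> / (<k^2> - <k>) for a tree-like, uncorrelated random network
    degrees = np.asarray(degrees, dtype=float)
    k1 = degrees.mean()            # <k>
    k2 = (degrees ** 2).mean()     # <k^2>
    return k1 / (k2 - k1)

rng = np.random.default_rng(1)
print(percolation_threshold(rng.poisson(4, 100_000)))   # ER-like network, expect about 1/4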
In networks with low clustering, 0 < C ≪ 1, the critical point gets scaled by (1 − C)⁻¹ such that: pc = (1 − C)⁻¹ ⟨k⟩/(⟨k²⟩ − ⟨k⟩).
This indicates that for a given degree distribution, clustering leads to a larger percolation threshold, mainly because for a fixed number of links, the clustering structure reinforces the core of the network at the price of diluting the global connections. For networks with high clustering, strong clustering could induce the core–periphery structure, in which the core and periphery might percolate at different critical points, and the above approximate treatment is not applicable.
Percolation in 2D
Thresholds on Archimedean lattices
Note: sometimes "hexagonal" is used in place of honeycomb, although in some contexts a triangular lattice is also called a hexagonal lattice. z = bulk coordination number.
2D lattices with extended and complex neighborhoods
In this section, sq-1,2,3 corresponds to the square lattice with NN+2NN+3NN neighborhoods, etc. (equivalently written square-2N+3N+4N or sq(1,2,3)). tri = triangular, hc = honeycomb.
Here NN = nearest neighbor, 2NN = second nearest neighbor (or next nearest neighbor), 3NN = third nearest neighbor (or next-next nearest neighbor), etc. These are also called 2N, 3N, 4N respectively in some papers.
For overlapping or touching squares, φc (site) given here is the net fraction of sites occupied, similar to the φc in continuum percolation. The case of a 2×2 square is equivalent to percolation on a square lattice with the NN+2NN+3NN+4NN (sq-1,2,3,4) neighborhood and the corresponding threshold. The 3×3 square corresponds to sq-1,2,3,4,5,6,7,8 with z = 44. The value of z for a k × k square is (2k+1)² − 5. For larger overlapping squares, see.
2D distorted lattices
Here, one distorts a regular lattice of unit spacing by moving vertices uniformly within the box , and considers percolation when sites are within Euclidean distance of each other.
Overlapping shapes on 2D lattices
Site threshold is number of overlapping objects per lattice site. k is the length (net area). Overlapping squares are shown in the complex neighborhood section. Here z is the coordination number to k-mers of either orientation, with for sticks.
The coverage is calculated from by for sticks, because there are sites where a stick will cause an overlap with a given site.
For aligned sticks:
Approximate formulas for thresholds of Archimedean lattices
AB percolation and colored percolation in 2D
In AB percolation, a is the proportion of A sites among B sites, and bonds are drawn between sites of opposite species. It is also called antipercolation.
In colored percolation, occupied sites are assigned one of colors with equal probability, and connection is made along bonds between neighbors of different colors.
Site-bond percolation in 2D
Site-bond percolation. Here ps is the site occupation probability and pb is the bond occupation probability, and connectivity is made only if both the sites and bonds along a path are occupied. The criticality condition becomes a curve f(ps, pb) = 0, and some specific critical pairs (ps, pb) are listed below.
Square lattice:
Honeycomb (hexagonal) lattice:
Kagome lattice:
* For values on different lattices, see "An investigation of site-bond percolation on many lattices".
Approximate formula for site-bond percolation on a honeycomb lattice
Archimedean duals (Laves lattices)
Laves lattices are the duals to the Archimedean lattices. Drawings from. See also Uniform tilings.
2-uniform lattices
Top 3 lattices: #13 #12 #36
Bottom 3 lattices: #34 #37 #11
Top 2 lattices: #35 #30
Bottom 2 lattices: #41 #42
Top 4 lattices: #22 #23 #21 #20
Bottom 3 lattices: #16 #17 #15
Top 2 lattices: #31 #32
Bottom lattice: #33
Inhomogeneous 2-uniform lattice
This figure shows something similar to the 2-uniform lattice #37, except the polygons are not all regular—there is a rectangle in the place of the two squares—and the size of the polygons is changed. This lattice is in the isoradial representation, in which each polygon is inscribed in a circle of unit radius. The two squares in the 2-uniform lattice must now be represented as a single rectangle in order to satisfy the isoradial condition. The lattice is shown by black edges, and the dual lattice by red dashed lines. The green circles show the isoradial constraint on both the original and dual lattices. The yellow polygons highlight the three types of polygons on the lattice, and the pink polygons highlight the two types of polygons on the dual lattice. The lattice has vertex types (3³,4²) + (3,4,6,4), while the dual lattice has vertex types (4⁶) + (4²,5²) + (5³) + (5²,4). The critical point is where the longer bonds (on both the lattice and dual lattice) have occupation probability p = 2 sin(π/18) = 0.347296..., which is the bond percolation threshold on a triangular lattice, and the shorter bonds have occupation probability 1 − 2 sin(π/18) = 0.652703..., which is the bond percolation threshold on a hexagonal (honeycomb) lattice. These results follow from the isoradial condition but also follow from applying the star-triangle transformation to certain stars on the honeycomb lattice. Finally, it can be generalized to having three different probabilities in the three different directions, p1, p2 and p3 for the long bonds, and 1 − p1, 1 − p2, and 1 − p3 for the short bonds, where p1, p2 and p3 satisfy the critical surface for the inhomogeneous triangular lattice.
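A quick numerical check of the two bond probabilities quoted above (purely illustrative; it only verifies the arithmetic):

from math import sin, pi

p_long = 2 * sin(pi / 18)     # probability assigned to the longer bonds
p_short = 1 - p_long          # probability assigned to the shorter bonds
print(p_long, p_short)        # approximately 0.347296 and 0.652704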
Thresholds on 2D bow-tie and martini lattices
To the left, center, and right are: the martini lattice, the martini-A lattice, the martini-B lattice. Below: the martini covering/medial lattice, same as the 2×2, 1×1 subnet for kagome-type lattices (removed).
Some other examples of generalized bow-tie lattices (a-d) and the duals of the lattices (e-h):
Thresholds on 2D covering, medial, and matching lattices
Thresholds on 2D chimera non-planar lattices
Thresholds on subnet lattices
The 2 x 2, 3 x 3, and 4 x 4 subnet kagome lattices. The 2 × 2 subnet is also known as the "triangular kagome" lattice.
Thresholds of random sequentially adsorbed objects
(For more results and comparison to the jamming density, see Random sequential adsorption)
The threshold gives the fraction of sites occupied by the objects when site percolation first takes place (not at full jamming). For longer k-mers see Ref.
Thresholds of full dimer coverings of two dimensional lattices
Here, we are dealing with networks that are obtained by covering a lattice with dimers, and then consider bond percolation on the remaining bonds. In discrete mathematics, this problem is known as the 'perfect matching' or the 'dimer covering' problem.
Thresholds of polymers (random walks) on a square lattice
System is composed of ordinary (non-avoiding) random walks of length l on the square lattice.
Thresholds of self-avoiding walks of length k added by random sequential adsorption
Thresholds on 2D inhomogeneous lattices
Thresholds for 2D continuum models
For disks, equals the critical number of disks per unit area, measured in units of the diameter , where is the number of objects and is the system size
For disks, equals critical total disk area.
gives the number of disk centers within the circle of influence (radius 2 r).
is the critical disk radius.
for ellipses of semi-major and semi-minor axes of a and b, respectively. Aspect ratio with .
for rectangles of dimensions and . Aspect ratio with .
for power-law distributed disks with , .
equals critical area fraction.
For disks, Ref. use where is the density of disks of radius .
equals number of objects of maximum length per unit area.
For ellipses,
For void percolation, is the critical void fraction.
For more ellipse values, see
For more rectangle values, see
Both ellipses and rectangles belong to the superellipses, with . For more percolation values of superellipses, see.
For the monodisperse particle systems, the percolation thresholds of concave-shaped superdisks are obtained as seen in
For binary dispersions of disks, see
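Several of the entries above relate a critical number density to a critical covered fraction. A minimal conversion sketch, assuming overlapping disks placed by a Poisson process so that the covered fraction is 1 − exp(−η), where η is the reduced number density (the quoted value of η is an assumption taken from commonly reported disk results, not from this table):

import numpy as np

def coverage_from_reduced_density(eta):
    # Fraction of the plane covered by Poisson-placed overlapping objects
    return 1.0 - np.exp(-eta)

# A commonly quoted critical reduced density for overlapping disks, eta_c ~ 1.128,
# corresponds to a critical area fraction of roughly 0.676.
print(coverage_from_reduced_density(1.128))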
Thresholds on 2D random and quasi-lattices
*Theoretical estimate
Thresholds on 2D correlated systems
Assuming power-law correlations
Thresholds on slabs
h is the thickness of the slab, h × ∞ × ∞. Boundary conditions (b.c.) refer to the top and bottom planes of the slab.
Percolation in 3D
Filling factor = fraction of space filled by touching spheres at every lattice site (for systems with uniform bond length only). Also called Atomic Packing Factor.
Filling fraction (or Critical Filling Fraction) = filling factor * pc(site).
NN = nearest neighbor, 2NN = next-nearest neighbor, 3NN = next-next-nearest neighbor, etc.
k×k×k cubes are cubes of occupied sites on a lattice, and are equivalent to extended-range percolation of a cube of length (2k+1), with edges and corners removed, with z = (2k+1)³ − 12(2k−1) − 9 (center site not counted in z).
Question: the bond thresholds for the hcp and fcc lattices agree within the small statistical error. Are they identical, and if not, how far apart are they? Which threshold is expected to be bigger? Similarly for the ice and diamond lattices. See
3D distorted lattices
Here, one distorts a regular lattice of unit spacing by moving vertices uniformly within the cube , and considers percolation when sites are within Euclidean distance of each other.
Overlapping shapes on 3D lattices
Site threshold is the number of overlapping objects per lattice site. The coverage φc is the net fraction of sites covered, and v is the volume (number of cubes). Overlapping cubes are given in the section on thresholds of 3D lattices. Here z is the coordination number to k-mers of either orientation, with
The coverage is calculated from by for sticks, and for plaquettes.
Dimer percolation in 3D
Thresholds for 3D continuum models
All overlapping except for jammed spheres and polymer matrix.
is the total volume (for spheres), where N is the number of objects and L is the system size.
is the critical volume fraction, valid for overlapping randomly placed objects.
For disks and plates, these are effective volumes and volume fractions.
For void ("Swiss-Cheese" model), is the critical void fraction.
For more results on void percolation around ellipsoids and elliptical plates, see.
For more ellipsoid percolation values see.
For spherocylinders, H/D is the ratio of the height to the diameter of the cylinder, which is then capped by hemispheres. Additional values are given in.
For superballs, m is the deformation parameter, the percolation values are given in., In addition, the thresholds of concave-shaped superballs are also determined in
For cuboid-like particles (superellipsoids), m is the deformation parameter, more percolation values are given in.
Void percolation in 3D
Void percolation refers to percolation in the space around overlapping objects. Here refers to the fraction of the space occupied by the voids (not of the particles) at the critical point, and is related to by
. is defined as in the continuum percolation section above.
Thresholds on 3D random and quasi-lattices
Thresholds for other 3D models
In drilling percolation, the site threshold represents the fraction of columns in each direction that have not been removed, and . For the 1d drilling, we have (columns) (sites).
† In tube percolation, the bond threshold represents the value of the parameter such that the probability of putting a bond between neighboring vertical tube segments is , where is the overlap height of two adjacent tube segments.
Thresholds in different dimensional spaces
Continuum models in higher dimensions
In 4d, .
In 5d, .
In 6d, .
is the critical volume fraction, valid for overlapping objects.
For void models, is the critical void fraction, and is the total volume of the overlapping objects
Thresholds on hypercubic lattices
For thresholds on high dimensional hypercubic lattices, we have the asymptotic series expansions
where . For 13-dimensional bond percolation, for example, the error with respect to the measured value is less than 10⁻⁶, and these formulas can be useful for higher-dimensional systems.
Thresholds in other higher-dimensional lattices
Thresholds in one-dimensional long-range percolation
In a one-dimensional chain we establish bonds between distinct sites and with probability decaying as a power-law with an exponent . Percolation occurs at a critical value for . The numerically determined percolation thresholds are given by:
Thresholds on hyperbolic, hierarchical, and tree lattices
In these lattices there may be two percolation thresholds: the lower threshold is the probability above which infinite clusters appear, and the upper is the probability above which there is a unique infinite cluster.
Note: {m,n} is the Schläfli symbol, signifying a hyperbolic lattice in which n regular m-gons meet at every vertex
For bond percolation on {P,Q}, we have by duality . For site percolation, because of the self-matching of triangulated lattices.
Cayley tree (Bethe lattice) with coordination number z: pc = 1/(z − 1)
Thresholds for directed percolation
nn = nearest neighbors. For a (d + 1)-dimensional hypercubic system, the hypercube is in d dimensions and the time direction points to the 2D nearest neighbors.
Directed percolation with multiple neighbors
Site-Bond Directed Percolation
p_b = bond threshold
p_s = site threshold
Site-bond percolation is equivalent to having different probabilities of connections:
P_0 = probability that no sites are connected
P_2 = probability that exactly one descendant is connected to the upper vertex (two connected together)
P_3 = probability that both descendants are connected to the original vertex (all three connected together)
Formulas:
P_0 = (1-p_s) + p_s(1-p_b)^2
P_2 = p_s p_b (1-p_b)
P_3 = p_s p_b^2
P_0 + 2P_2 + P_3 = 1
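A small sketch (illustrative parameter values only) computing these connection probabilities from the formulas above and checking the normalisation:

def connection_probabilities(p_s, p_b):
    P0 = (1 - p_s) + p_s * (1 - p_b) ** 2   # no descendant connected
    P2 = p_s * p_b * (1 - p_b)              # exactly one descendant connected
    P3 = p_s * p_b ** 2                     # both descendants connected
    return P0, P2, P3

P0, P2, P3 = connection_probabilities(p_s=0.7, p_b=0.6)
print(P0, P2, P3, P0 + 2 * P2 + P3)         # the last value is always 1.0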
Exact critical manifolds of inhomogeneous systems
Inhomogeneous triangular lattice bond percolation
Inhomogeneous honeycomb lattice bond percolation = kagome lattice site percolation
Inhomogeneous (3,12^2) lattice, site percolation
or
Inhomogeneous union-jack lattice, site percolation with probabilities
Inhomogeneous martini lattice, bond percolation
Inhomogeneous martini lattice, site percolation. r = site in the star
Inhomogeneous martini-A (3–7) lattice, bond percolation. Left side (top of "A" to bottom): . Right side: . Cross bond: .
Inhomogeneous martini-B (3–5) lattice, bond percolation
Inhomogeneous martini lattice with outside enclosing triangle of bonds, probabilities from inside to outside, bond percolation
Inhomogeneous checkerboard lattice, bond percolation
Inhomogeneous bow-tie lattice, bond percolation
where are the four bonds around the square and is the diagonal bond connecting the vertex between bonds and .
See also
2D percolation cluster
Bootstrap percolation
Directed percolation
Effective medium approximations
Epidemic models on lattices
Graph theory
Network science
Percolation
Percolation critical exponents
Percolation theory
Continuum percolation theory
Random sequential adsorption
Uniform tilings
References
Percolation theory
Critical phenomena
Random graphs | Percolation threshold | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 4,766 | [
"Physical phenomena",
"Phase transitions",
"Critical phenomena",
"Percolation theory",
"Graph theory",
"Combinatorics",
"Mathematical relations",
"Condensed matter physics",
"Random graphs",
"Statistical mechanics",
"Dynamical systems"
] |
14,450,284 | https://en.wikipedia.org/wiki/Shanghai%20Synchrotron%20Radiation%20Facility | The Shanghai Synchrotron Radiation Facility (SSRF) is a synchrotron-radiation light source facility in Shanghai, People's Republic of China. It is located on an eighteen-hectare campus at the Shanghai National Synchrotron Radiation Centre, in the Zhangjiang Hi-Tech Park in the Pudong district.
SSRF is operated by the Shanghai Institute of Applied Physics (SINAP). The facility became operational in 2009, reaching full energy operation in Dec 2012.
When it opened, it was China's costliest single science facility.
The facility "has played a key role in revealing the inner mechanism of various cancers."
Construction
It has a circumference of 432 metres and is designed to operate at 3.5 GeV, the highest energy of any synchrotron other than the "Big Three" facilities: SPring-8 in Hyōgo Prefecture, Japan; the ESRF in Grenoble, France; and the APS at Argonne National Laboratory, United States. It initially had eight beamlines.
The particle accelerator cost 1.2 billion yuan (US$176 million). It is China's biggest light-source facility. It is housed in a building with a futuristic snail-shaped roof.
The synchrotron opened to universities, scientific institutes and companies for approved research in May 2009.
Dec. 2004 - Sept. 2006: Building construction
Jun. 2005 - Mar. 2008: Accelerator equipment and components manufacture and assembly
Dec. 2005 - Dec. 2008: Beamline construction and assembly
Apr. 2007 - Jul. 2007: Linac commissioning
Oct. 2007 - Mar. 2008: Booster commissioning
Apr. 2008 - Oct. 2008: Storage ring commissioning
Nov. 2008 - Mar. 2009: ID Beamline commissioning
Apr. 2009: The SSRF operation begins
References
External links
Official site
Synchrotron radiation facilities | Shanghai Synchrotron Radiation Facility | [
"Materials_science"
] | 373 | [
"Materials testing",
"Synchrotron radiation facilities"
] |
14,450,419 | https://en.wikipedia.org/wiki/Anthropogenic%20biome | Anthropogenic biomes, also known as anthromes, human biomes or intensive land-use biome, describe the terrestrial biosphere (biomes) in its contemporary, human-altered form using global ecosystem units defined by global patterns of sustained direct human interaction with ecosystems. Anthromes are generally composed of heterogeneous mosaics of different land uses and land covers, including significant areas of fallow or regenerating habitats.
Origin and evolution of the concept
Anthromes were first named and mapped by Erle Ellis and Navin Ramankutty in their 2008 paper, "Putting People in the Map: Anthropogenic Biomes of the World". Anthrome maps now appear in numerous textbooks. and in the National Geographic World Atlas. The most recent version of anthrome maps were published in 2021.
In a recent global ecosystem classification, anthropogenic biomes have been incorporated into several distinct functional biomes in the terrestrial and freshwater realms, and additional units have been described for the freshwater, marine, subterranean and transitional realms to create a more comprehensive description of all ecosystems created and maintained by human activities. The intensive land-use biome comprises five distinct terrestrial ecosystem functional groups: pastures, crops, plantations, urban and semi-natural ecosystem functional group. The artificial wetlands biome in the freshwater realm includes large reservoirs and other constructed wetlands, rice paddies, aquafarms and networks of canals and ditches. The anthropogenic marine biome in the marine realm includes submerged artificial structures and marine aquafarms. The anthropogenic subterranean voids biome includes industrial excavations or artificial cave-like systems. There are two additional biomes in transitions between realms: the anthropogenic shoreline biome includes artificial shorelines; the anthropogenic subterranean freshwaters biome includes water pipes, subterranean canals and flooded mines.
Anthropogenic transformation of the Biosphere
For more than a century, the biosphere has been described in terms of global ecosystem units called biomes, which are vegetation types like tropical rainforests and grasslands that are identified in relation to global climate patterns. Considering that human populations and their use of land have fundamentally altered global patterns of ecosystem form, process, and biodiversity, anthropogenic biomes provide a framework for integrating human systems with the biosphere in the Anthropocene.
Before 1700
Humans have been altering ecosystems since they evolved. Evidence suggests that our ancestors were burning land to clear it as early as one million years ago, and 600,000 years ago humans were using spears to kill horses and other large animals in Great Britain and China. Over the past tens of thousands of years, humans have greatly changed plant and animal life around the globe, affecting which species and which types of ecosystems dominate. Native Americans, for example, altered forests, burned land to clear it, settled in cities that disrupted forests and other ecosystems, and built monuments that required moving large amounts of earth, such as the Cahokia Mounds. The civilizations of the ancient world mined large amounts of material and built roads, and the Romans in particular released large amounts of mercury and lead into the air while mining lead. A recent study showed that nearly three quarters of Earth's land was already inhabited and reshaped by human societies as long as 12,000 years ago.
Agriculture (1700–present)
Humans have been altering ecosystems since before agriculture first developed, and as the human population has grown and become more technologically advanced over time, land use for agricultural purposes has increased significantly. In the 1700s, before the industrial revolution, the anthropogenic biome was made up mostly of wild, untouched land, largely undisturbed by human settlement. In this period, most of the Earth's ice-free land consisted of wildlands and natural anthromes, and it was not until after the industrial revolution in the 19th century that land use for agriculture and human settlements started to increase. With technology advancing and manufacturing processes becoming more efficient, the human population began to thrive and subsequently required and used more natural resources. By the year 2000, over half of the Earth's ice-free land had been transformed into rangelands, croplands, villages and dense settlements, leaving less than half of the Earth's land untouched. Anthropogenic changes between 1700 and 1800 were far smaller than those of the following centuries, and the rate of change has increased over time. As a result, the 20th century saw the fastest rate of anthropogenic ecosystem transformation of the past 300 years.
Land distribution
As the human population steadily increased throughout history, the use of natural resources and land began to increase, and the distribution of land used for various agricultural and settlement purposes began to change. Land around the world was transformed from its natural state into land used for agriculture, settlements and pastures to sustain the population and its growing needs. The distribution of land among anthromes shifted away from natural anthromes and wildlands towards the human-altered anthromes we are familiar with today. Now, the most populated anthromes (dense settlements and villages) account for only a small fraction of the global ice-free land. From 1700 to 2000, lands used for agriculture and urban settlements increased significantly; however, the area occupied by rangelands increased even more rapidly, making rangelands the dominant anthrome of the 20th century. The biggest global land-use change resulting from the industrial revolution was thus the expansion of pastures.
Human population
Following the industrial revolution, the human population experienced a rapid increase. Population density began to shift away from rural environments towards urban settlements, where density was much higher. These changes in population density shifted global patterns of anthrome emergence and also had widespread effects on various ecosystems. Half of the Earth's population now lives in cities, and most people reside in urban anthromes, with some populations dwelling in smaller cities and towns. Human populations are expected to grow until at least midcentury, and the transformation of the Earth's anthromes is expected to follow this growth.
Current state of the anthropogenic biosphere
The present state of the terrestrial biosphere is predominantly anthropogenic. More than half of the terrestrial biosphere remains unused directly for agriculture or urban settlements, and of the unused lands that remain, less than half are wildlands. Most of Earth's unused lands now lie within the agricultural and settled landscapes of semi-natural, rangeland, cropland and village anthromes.
Major anthromes
Anthromes include dense settlements (urban and mixed settlements), villages, croplands, rangelands and semi-natural lands and have been mapped globally using two different classification systems, viewable on Google Maps and Google Earth. There are currently 18 anthropogenic biomes, the most prominent of which are listed below.
Dense settlements
Dense settlements are the second most densely populated regions in the world. They are defined as areas with a high population density, though the density can be variable. The population density, however, never falls below 100 persons/km², even in the non-urban parts of dense settlements, and it has been suggested that these areas consist of both the edges of major cities in underdeveloped nations and the long-standing small towns throughout western Europe and Asia. Most often we think of dense settlements as cities, but dense settlements can also be suburbs, towns and rural settlements with high but fragmented populations.
Villages
Villages are densely populated agricultural landscapes, many of which have been inhabited and intensively used for centuries to millennia.
Croplands
Croplands are another major anthrome throughout the world. Croplands include most of the cultivated lands of the world, and also about a quarter of global tree cover. Croplands that are locally irrigated have the highest human population density, likely because irrigation provides crops with a constant supply of water, making harvest time and crop survival more predictable. Croplands that are sustained mainly by local rainfall are the most extensive of the populated anthromes, with annual precipitation near 1000 mm in certain areas of the globe. In these areas, the climate supplies sufficient water to support all aspects of life with hardly any irrigation. However, in drier areas, this method of agriculture would not be as productive.
Rangelands
Rangelands are a very broad anthropogenic biome group that has been described according to three levels of population density: residential, populated and remote. The residential rangeland anthrome has two key features: its population density never falls below 10 persons per square kilometre, and a substantial portion of its area is used for pasture. Pasture is the dominant land cover in rangelands. Bare earth is also significant in this anthrome, covering nearly one third of the land in every square kilometre. Rangeland anthromes are less altered than croplands, but their alteration tends to increase with population. Domesticated grazing livestock are typically adapted to grasslands and savannas, so the alteration of these biomes tends to be less noticeable.
Cultured lands
Cultured anthromes are landscapes shaped by low levels of intensive land use and substantial to very low density populations. The Cultured anthrome classification was introduced in 2021 to replace analogous classifications, "Seminatural" (2010 classification) and "Forested" (original 2008 classification). Cultured woodland anthromes are woodland biomes shaped by land use and human inhabitation, and their population densities are usually less than 3 persons/km2. Many cultured woodlands are secondary forests that act as carbon sinks as a result of ongoing regrowth of woody vegetation. Some cultured woodlands are partially cleared for agriculture, including domestic livestock, and to utilize timber. Cultured dryland anthromes are dryland biomes shaped by land use and human inhabitation.
Indoor
Very few biologists have studied the evolutionary processes at work in indoor environments. Estimates of the extent of residential and commercial buildings range between 1.3% and 6% of global ice-free land area. This area is as extensive as other small biomes such as flooded grasslands and tropical coniferous forests. The indoor biome is rapidly expanding. In terms of floor space, the indoor biome of Manhattan is almost three times as large as the geographical area of the island itself, because its buildings rise upward rather than spreading out. Thousands of species live in the indoor biome, many of them preferentially or even obligatorily. Humans mainly alter the evolution of the indoor biome through cleaning practices. The indoor biome will continue to change for as long as human culture changes.
Aquatic
Managed aquatic biomes or aquatic anthromes have rarely been studied as such. They range from fish ponds, marine shrimp and benthic farming sites to large tracts of land such as parts of the Guadalquivir Marshes in Andalusia, Spain.
Implications of an anthropogenic biosphere
Humans have fundamentally altered global patterns of biodiversity and ecosystem processes. It is no longer possible to explain or predict ecological patterns or processes across the Earth without considering the human role. Human societies began transforming terrestrial ecology more than 50 000 years ago, and evolutionary evidence has been presented demonstrating that the ultimate causes of human transformation of the biosphere are social and cultural, not biological, chemical, or physical. Anthropogenic biomes offer a new way forward by acknowledging human influence on global ecosystems and moving us toward models and investigations of the terrestrial biosphere that integrate human and ecological systems.
Challenges facing biodiversity in the anthropogenic biosphere
Extinctions
Over the past century, anthrome extent and land use intensity increased rapidly together with growing human populations, leaving wildlands without human population or land use in less than one quarter of the terrestrial biosphere. This massive transformation of Earth's ecosystems for human use has occurred with enhanced rates of species extinctions. Humans are directly causing species extinctions, especially of megafauna, by reducing, fragmenting and transforming native habitats and by overexploiting individual species. Current rates of extinctions vary greatly by taxa, with mammals, reptiles and amphibians especially threatened; however there is growing evidence that viable populations of many, if not most native taxa, especially plants, may be sustainable within anthromes. With the exception of especially vulnerable taxa, the majority of native species may be capable of maintaining viable populations in anthromes.
Conservation
Anthromes present an alternative view of the terrestrial biosphere by characterizing the diversity of global ecological land cover patterns created and sustained by human population densities and land use while also incorporating their relationships with biotic communities. Biomes and ecoregions are limited in that they reduce human influences, and an increasing number of conservation biologists have argued that biodiversity conservation must be extended to habitats directly shaped by humans. Within anthromes, including densely populated anthromes, humans rarely use all available land. As a result, anthromes are generally mosaics of heavily used lands and less intensively used lands. Protected areas and biodiversity hotspots are not distributed equally across anthromes. Less populated anthromes contain a greater proportion of protected areas. While 23.4% of remote woodland anthrome is protected, only 2.3% of irrigated village anthrome is protected. There is increasing evidence that suggests that biodiversity conservation can be effective in both densely and sparsely settled anthromes. A combination of land sharing and land sparing in working landscapes and multifunctional landscapes are increasingly popular as conservation strategies.
See also
Anthropocene
Landscape ecology
Landscape-scale conservation
Working landscape
Land use
Multifunctional landscape
Novel ecosystem
Agroecology
Technoecosystem
Technodiversity
References
External links
Putting the "Me" in Biome : Educational Resource at National Geographic
Anthropogenic Biomes at NASA
Anthropogenic Biomes project web site (with maps, educational materials, downloadable data)
Anthropocene
Biomes
Habitats
Urban planning
Human habitats
Human impact on the environment | Anthropogenic biome | [
"Engineering"
] | 2,931 | [
"Urban planning",
"Architecture"
] |
14,451,712 | https://en.wikipedia.org/wiki/Halstead%20complexity%20measures | Halstead complexity measures are software metrics introduced by Maurice Howard Halstead in 1977 as part of his treatise on establishing an empirical science of software development.
Halstead made the observation that metrics of the software should reflect the implementation or expression of algorithms in different languages, but be independent of their execution on a specific platform.
These metrics are therefore computed statically from the code.
Halstead's goal was to identify measurable properties of software, and the relations between them.
This is similar to the identification of measurable properties of matter (like the volume, mass, and pressure of a gas) and the relationships between them (analogous to the gas equation).
Thus his metrics are actually not just complexity metrics.
Calculation
For a given problem, let:
n1 = the number of distinct operators
n2 = the number of distinct operands
N1 = the total number of operators
N2 = the total number of operands
From these numbers, several measures can be calculated:
Program vocabulary: n = n1 + n2
Program length: N = N1 + N2
Calculated estimated program length: N̂ = n1 × log2(n1) + n2 × log2(n2)
Volume: V = N × log2(n)
Difficulty: D = (n1 / 2) × (N2 / n2)
Effort: E = D × V
The difficulty measure is related to how difficult the program is to write or understand, e.g. when doing code review.
The effort measure translates into actual coding time using the following relation,
Time required to program: T = E / 18 seconds
Halstead's delivered bugs (B) is an estimate for the number of errors in the implementation.
Number of delivered bugs: B = E^(2/3) / 3000 or, more recently, B = V / 3000 is accepted.
Example
Consider the following C program:
main()
{
int a, b, c, avg;
scanf("%d %d %d", &a, &b, &c);
avg = (a+b+c)/3;
printf("avg = %d", avg);
}
The distinct operators (n1) are:
main, (), {}, int, scanf,
&, =, +, /, printf, ,, ;
The distinct operands (n2) are:
a, b, c, avg, "%d %d %d", 3, "avg = %d"
n1 = 12, n2 = 7, giving a vocabulary of n = 19
N1 = 27, N2 = 15, giving a program length of N = 42
Calculated Estimated Program Length: N̂ = 12 × log2(12) + 7 × log2(7) ≈ 62.67
Volume: V = 42 × log2(19) ≈ 178.4
Difficulty: D = (12 / 2) × (15 / 7) ≈ 12.85
Effort: E = D × V ≈ 2292.4
Time required to program: T = E / 18 ≈ 127.4 seconds
Number of delivered bugs: B = V / 3000 ≈ 0.06
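The calculation above can be reproduced with a short script. This is a hedged sketch rather than part of Halstead's treatment; the function name is arbitrary and the counts (n1 = 12, n2 = 7, N1 = 27, N2 = 15) are those of the C example:

from math import log2

def halstead(n1, n2, N1, N2):
    n = n1 + n2                               # program vocabulary
    N = N1 + N2                               # program length
    N_hat = n1 * log2(n1) + n2 * log2(n2)     # calculated estimated length
    V = N * log2(n)                           # volume
    D = (n1 / 2) * (N2 / n2)                  # difficulty
    E = D * V                                 # effort
    T = E / 18                                # time required to program, in seconds
    B = V / 3000                              # delivered bugs (the more recent convention)
    return n, N, N_hat, V, D, E, T, B

print(halstead(12, 7, 27, 15))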
See also
Function point
Cyclomatic complexity
References
External links
The Halstead metrics - Extensive discussion on the calculation and use of Halstead Metrics in an object-oriented environment (with specific reference to Java).
Calculation of Halstead metrics - Measurement of Halstead Metrics.
Explanation with a Sample Program - Example (on Page 6 of the PDF)
Script computing Halstead Metrics and using them for commented code detection
IBM
Calculator for computing Halstead metrics
Software metrics | Halstead complexity measures | [
"Mathematics",
"Engineering"
] | 553 | [
"Software engineering",
"Quantity",
"Metrics",
"Software metrics"
] |
14,452,251 | https://en.wikipedia.org/wiki/Urban%20Traffic%20Management%20and%20Control | The Urban Traffic Management Control or UTMC programme is the main initiative in the United Kingdom for the development of a more open approach to Intelligent Transport Systems or ITS in urban areas. Originating as a Government research programme, the initiative is now managed by a community forum, the UTMC Development Group, which represents both local transport authorities and the systems industry.
UTMC systems are designed to allow the different applications used within modern traffic management systems to communicate and share information with each other. This allows previously disparate data from multiple sources such as Automatic Number Plate Recognition (ANPR) cameras, Variable Message Signs (VMS), car parks, traffic signals, air quality monitoring stations and meteorological data, to be amalgamated into a central console or database. The idea behind UTMC is to maximise road network potential to create a more robust and intelligent system that can be used to meet current and future management requirements.
Background and history
The UTMC was launched in 1997 by the UK Government's Department for Environment, Transport and the Regions (now the Department for Transport (DfT)). During the first three years, a number of research projects were undertaken to establish and validate an approach based on modular systems and open standards. These have contributed to the UTMC Technical Specifications, which define UTMC standards.
To assist local authorities in gaining the most from intelligent transportation systems and in achieving their transport objectives, the Department for Transport initiated the six-year, £6M UTMC programme in 1997. The first half of the UTMC programme (1997–2000) concentrated on specific applied research tasks, covering both technical and operational issues.
In January 2001, the programme embarked on a demonstrator phase to consolidate the results of the earlier research. Full scale demonstrator projects taking a pragmatic UTMC approach were run in Preston with key systems provided by Mott MacDonald, Reading and Stratford-upon-Avon using Siemens and York with systems from Tenet Technology (these systems are now owned and marketed by Dynniq).
Early in 2003, the UTMC Development Group (UDG) was set up, as a group of local authorities and suppliers, with the support of the DfT, to oversee the future development of UTMC. This has managed the initiative continuously since 2004.
UTMC has helped local authorities achieve their goals by adopting an appropriate, but not over constraining, set of standards to allow users, suppliers and integrators of UTMC systems to plan and supply systems cost-effectively in an open market. These standards are essential in breaking boundaries and local authority borders to allow network interoperability.
UTMC Activities
Specifications and Standards
The UTMC Specifications and Standards Group (S&SG) is responsible for ensuring that the UTMC technical framework continues to meet local authorities' needs, currently and in the future. The S&SG oversees the maintenance and upkeep of the UTMC Technical Specifications. Its members are drawn from both local authorities and the supplier community, but it is always led by local authorities.
The S&SG works closely with the full range of UTMC suppliers to ensure its requirements are technically achievable. It operates a transparent consultation regime on all technical changes. From time to time it may commission and fund technical research and standards development activities, though it operates principally through coordinating the input freely provided by suppliers and users.
The Specification provides standards for shared data (i.e. data communicated between applications of a UTMC system, or between a UTMC system and an external system) through:
holding definitions of current UTMC Objects, and making them available to users;
receiving submissions for potential new UTMC Objects, and coordinating consultation as necessary;
facilitating contact between Object developers;
advising on changes needed to potential new UTMC Objects;
registering new UTMC Objects.
As well as undertaking technical work to develop national specifications, there are a number of activities that help "market" the initiative to the traffic management community. There is a conference, usually held annually, papers and articles are published in key industry journals and regular workshops are held focusing on key (technical or operational) themes. In 2006, the UTMC community ran a number of special sessions at the ITS World Congress held in London, as well as running a village of suppliers demonstrating UTMC-compatible products.
The UTMC initiative formerly published a Products Catalogue, representing products submitted as compliant by suppliers. This was discontinued in December 2014.
UTMC specification documents
The following documents are maintained and published for open use on the UTMC website.
The UTMC Framework Technical Specification TS003 presents the core technical standards recommended for use by Traffic Managers in their systems.
The UTMC Objects Registry TS004 presents a standardised set of data structures associated with traffic management, in several forms including UML data model, XML schema, SNMP MIBs, some IDL scripts for CORBA based systems, and tabular representation (originally designed for database designers).
The current issue of the Technical Specification is available for free download on the UTMC resources website .
Examples of UTMC in action
Local authorities with UTMC have more control over their road network. Some examples of what they can do are:
Advise
By monitoring how long it takes a vehicle to pass between two ANPR cameras and then dividing the distance between the cameras by that time, an average speed can be measured and used to inform motorists via VMS how long it will take them to reach a destination, or to set diversions.
Example by Envitia: VMS in Aberdeen . Example by IDT: Journey time monitoring in Birmingham
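A hypothetical sketch of this journey-time calculation (the camera data, number plates, and link length are invented for illustration and do not come from any UTMC product):

from datetime import datetime

LINK_LENGTH_KM = 3.2   # assumed distance between camera A and camera B

def average_speed_kmh(times_a, times_b):
    # Match number plates seen at both cameras; average speed = distance / time.
    speeds = []
    for plate, t_a in times_a.items():
        t_b = times_b.get(plate)
        if t_b is not None and t_b > t_a:
            hours = (t_b - t_a).total_seconds() / 3600.0
            speeds.append(LINK_LENGTH_KM / hours)
    return sum(speeds) / len(speeds) if speeds else None

a = {"AB12CDE": datetime(2023, 5, 1, 8, 0, 0)}
b = {"AB12CDE": datetime(2023, 5, 1, 8, 4, 48)}
print(average_speed_kmh(a, b))   # 3.2 km in 4.8 minutes -> 40 km/h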
Warn
Wind detectors attached to a bridge give drivers of high-sided vehicles warnings before they cross. The warning messages are displayed on VMS signs that are activated when wind speed thresholds are exceeded.
Example by Siemens: Bridge VMSs offer wind warnings .
Guide
By linking parking guidance systems to a common database traffic control room operators can inform motorists via strategic VMS about the current state of car parks; especially useful for special events like carnivals when normal use is exceeded.
Example by Mott MacDonald: Car Park Guidance in Edinburgh
Previously these systems would have been impracticable due to the sheer volumes of data processing and the operator time needed to apply constant manual updates.
Joint Chairs’ Group (JCG)
The JCG was created in 2004 to bring together the UDG with other key ITS community organisations; it was later expanded to include representation from the Department for Transport and the Highways Agency. The JCG's aim was to ensure that the strategic direction of the various groups and bodies involved in UK ITS was kept aligned.
The JCG was suspended in September 2012, as the prevailing financial conditions had reduced the resource available to its participants.
UTMC links with international standards
UTMC builds on a base of mainstream internet protocols, and focusses on defining data structures suitable for exchange between ITS systems and devices. At the time of its origination there were few available international standards to build on, and the research was therefore used to generate many of its own standards. However, for exchange between central systems (for example, B2B data exchange between neighbouring roads authorities), UTMC refers to the specifications of the European project DATEX.
DATEX (as Datex II) is now being standardized through the European standards agency CEN and UTMC has been involved in a number of European standards-related projects, notably POSSE (Promotion of Open Specifications and Standards in Europe). There is a current workstream within UTMC aiming to align the UTMC Technical Specification more closely with Datex II.
See also
Department of Transport
Intelligent Transport Systems
Intermodal Journey Planner
External links
https://utmc.uk/ UTMC website
http://www.dft.gov.uk UK Department for Transport
Intelligent transportation systems
Real-time computing
Road traffic management
Urban society in the United Kingdom | Urban Traffic Management and Control | [
"Technology"
] | 1,581 | [
"Real-time computing",
"Transport systems",
"Information systems",
"Warning systems",
"Intelligent transportation systems"
] |
14,452,661 | https://en.wikipedia.org/wiki/Trk%20receptor | Trk receptors are a family of tyrosine kinases that regulates synaptic strength and plasticity in the mammalian nervous system. Trk receptors affect neuronal survival and differentiation through several signaling cascades. However, the activation of these receptors also has significant effects on functional properties of neurons.
The common ligands of trk receptors are neurotrophins, a family of growth factors critical to the functioning of the nervous system. The binding of these molecules is highly specific. Each type of neurotrophin has different binding affinity toward its corresponding Trk receptor. The activation of Trk receptors by neurotrophin binding may lead to activation of signal cascades resulting in promoting survival and other functional regulation of cells.
Origin of the name trk
The abbreviation trk (often pronounced 'track') stands for tropomyosin receptor kinase or tyrosine receptor kinase (and not "tyrosine kinase receptor" nor "tropomyosin-related kinase", as has been commonly mistaken).
The family of Trk receptors is named for the oncogene trk, whose identification led to the discovery of its first member, TrkA. Trk, initially identified in a colon carcinoma, is frequently (25%) activated in thyroid papillary carcinomas. The oncogene was generated by a mutation in chromosome 1 that resulted in the fusion of the first seven exons of tropomyosin to the transmembrane and cytoplasmic domains of the then-unknown TrkA receptor. Normal Trk receptors do not contain amino acid or DNA sequences related to tropomyosin.
Types and corresponding ligands
The three most common types of trk receptors are trkA, trkB, and trkC. Each of these receptor types has different binding affinity to certain types of neurotrophins. The differences in the signaling initiated by these distinct types of receptors are important for generating diverse biological responses.
Neurotrophin ligands of Trk receptors are processed ligands, meaning that they are synthesized in immature forms and then transformed by protease cleavage. Immature neurotrophins are specific only to one common p75NTR receptor. However, protease cleavage generates neurotrophins that have higher affinity to their corresponding Trk receptors. These processed neurotrophins can still bind to p75NTR, but at a much lower affinity.
TrkA
TrkA is a protein encoded by the NTRK1 gene and has the highest affinity for nerve growth factor (NGF). The binding of NGF to TrkA leads to ligand-induced dimerization, causing autophosphorylation of the tyrosine kinase segment, which in turn activates the Ras/MAPK pathway and the PI3K/Akt pathway. NGF is a neurotrophic factor, and the NGF/TrkA interaction is critical in both local and nuclear actions, regulating growth cones, motility, and the expression of genes encoding enzymes for neurotransmitter biosynthesis. Peptidergic nociceptive sensory neurons express mostly TrkA and not TrkB or TrkC.
The TrkA receptor is associated with several diseases such as Inflammatory arthritis, keratoconus, functional dyspepsia and, in some cases, over expression has been linked to cancer development. In other cases, such as neuroblastoma Trk A acts as a promising prognostic indicator as it has the potential to induce terminal differentiation of cancer cells in a context-dependent manner.
TrkB
TrkB has the highest affinity to the binding of brain-derived neurotrophic factor (BDNF) and NT-4. BDNF is a growth factor that has important roles in the survival and function of neurons in the central nervous system. The binding of BDNF to TrkB receptor causes many intracellular cascades to be activated, which regulate neuronal development and plasticity, long-term potentiation, and apoptosis.
Although both BDNF and NT-4 have high specificity to TrkB, they are not interchangeable. In a mouse model study where BDNF expression was replaced by NT-4, the mouse with NT4 expression appeared to be smaller and exhibited decreased fertility.
Recently, studies have also indicated that TrkB receptor is associated with Alzheimer's disease and post-intracerebral hemorrhage depression.
TrkC
TrkC is ordinarily activated by binding with NT-3 and has little activation by other ligands. (TrkA and TrkB also bind NT-3, but to a lesser extent.) TrkC is mostly expressed by proprioceptive sensory neurons. The axons of these proprioceptive sensory neurons are much thicker than those of nociceptive sensory neurons, which express trkA.
Regulation by p75NTR
p75NTR (p75 neurotrophin receptor) affects the binding affinity and specificity of Trk receptor activation by neurotrophins. The presence of p75NTR is especially important in increasing the binding affinity of NGF to TrkA. Although the dissociation constants of p75NTR and TrkA are remarkably similar, their kinetics are quite different. Reduction and mutation of the cytoplasmic and transmembrane domains of either TrkA or p75NTR prevent the formation of high-affinity binding sites on TrkA. However, the binding of ligands to p75NTR is not required to promote high-affinity binding. The data therefore suggest that the presence of p75NTR affects the conformation of TrkA, favoring the state with a high-affinity binding site for NGF. Surprisingly, although the presence of p75NTR is essential to promote high-affinity binding, NT-3 binding to the receptor is not required.
Apart from affecting the affinity and specificity for Trk receptors, the P75 neurotrophin receptor (P75NTR) can also reduce ligand-induced receptor ubiquitination, and delay receptor internalization and degradation.
Essential roles in differentiation and function
Precursor cell survival and proliferation
Numerous studies, both in vivo and in vitro, have shown that neurotrophins have proliferation and differentiation effects on CNS neuro-epithelial precursors, neural crest cells, or precursors of the enteric nervous system. NGF signaling through TrkA not only increases the survival of both the C and A-delta classes of nociceptive neurons, but also affects the functional properties of these neurons. As mentioned before, BDNF improves the survival and function of neurons in the CNS, particularly cholinergic neurons of the basal forebrain, as well as neurons in the hippocampus and cortex.
BDNF belongs to the neurotrophin family of growth factors and affects the survival and function of neurons in the central nervous system, particularly in brain regions susceptible to degeneration in Alzheimer's disease.
TrkC that expresses NT3 has been shown to promote proliferation and survival of cultured neural crest cells, oligodendrocyte precursors, and differentiation of hippocampal neuron precursors.
Control of target innervation
Each of the neurotrophins mentioned above promotes neurite outgrowth. NGF/TrkA signaling regulates the advance of sympathetic neuron growth cones; one experiment showed that, even when neurons received adequate trophic (sustaining and nourishing) support, they did not grow into neighboring compartments that lacked NGF. NGF increases the innervation of tissues that receive sympathetic or sensory innervation and induces aberrant innervation in tissues that are normally not innervated.
NGF/TrkA signaling upregulates BDNF, which is transported to both the peripheral and central terminals of nociceptive sensory neurons. In the periphery, TrkB/BDNF binding and TrkB/NT-4 binding acutely sensitize the nociceptive pathway, a process that requires the presence of mast cells.
Sensory neuron function
Trk receptors and their ligands (neurotrophins) also affect neurons' functional properties. Both NT-3 and BDNF are important in the regulation and development of synapses formed between afferent neurons and motor neurons. Increased NT-3/TrkC binding results in larger monosynaptic excitatory postsynaptic potentials (EPSPs) and reduced polysynaptic components. On the other hand, increased BDNF binding to TrkB has the opposite effect, reducing the size of monosynaptic EPSPs and increasing polysynaptic signaling.
Formation of ocular dominance column
In the development of the mammalian visual system, axons from each eye pass through the lateral geniculate nucleus (LGN) and terminate in separate layers of the striate cortex. However, each LGN axon is driven by only one eye, not both together. These axons terminating in layer IV of the striate cortex give rise to ocular dominance columns. A study showed that the density of innervating axons in layer IV from the LGN can be increased by exogenous BDNF and reduced by a scavenger of endogenous BDNF, raising the possibility that both of these agents are involved in a sorting mechanism that is not yet well understood. Previous studies with a cat model have shown that monocular deprivation occurs when input to one of the eyes is absent during the critical period (critical window). However, one study demonstrated that infusion of NT-4 (a ligand of TrkB) into the visual cortex during the critical period prevents many consequences of monocular deprivation. Surprisingly, even after responses are lost during the critical period, infusion of NT-4 has been shown to restore them.
Synaptic strength and plasticity
In the mammalian hippocampus, the axons of CA3 pyramidal cells project to CA1 cells through the Schaffer collaterals. Long-term potentiation (LTP) may be induced in either of these pathways, but it is specific to the one stimulated with tetanus; the potentiation does not spill over to the unstimulated pathway. TrkB receptors are expressed in most of these hippocampal neurons, including dentate granule cells, CA3 and CA1 pyramidal cells, and inhibitory interneurons. LTP can be greatly reduced by BDNF mutants. In a similar study on a mouse mutant with reduced expression of TrkB receptors, LTP of CA1 cells was significantly reduced. TrkB loss has also been shown to interfere with memory acquisition and consolidation in many learning paradigms.
Role of Trk oncogenes in cancer
Although a Trk fusion was originally identified as an oncogene in 1982, interest in the Trk family in human cancers has been renewed only recently, following the identification of NTRK1 (TrkA), NTRK2 (TrkB) and NTRK3 (TrkC) gene fusions and other oncogenic alterations in a number of tumor types. More specifically, differential expression of Trk receptors correlates closely with prognosis and outcome in a number of cancers, such as neuroblastoma. TrkA is seen as a good-prognosis marker, as it can induce terminal differentiation of cells, while TrkB is associated with a poor prognosis because of its correlation with MYCN amplification. As a result, Trk inhibitors have been explored as a potential treatment avenue in the field of precision medicine. As of 2015, Trk inhibitors are in clinical trials and have shown early promise in shrinking human tumors.
Trk inhibitors in development
Entrectinib (formerly RXDX-101, trade name Rozlytrek) is a drug developed by Ignyta, Inc., which has antitumor activity. It is a selective pan-Trk receptor tyrosine kinase inhibitor (TKI) targeting gene fusions in TrkA, TrkB, and TrkC (encoded by the NTRK1, NTRK2, and NTRK3 genes) that is currently in phase 2 clinical testing.
Larotrectinib (trade name Vitrakvi), developed by Array BioPharma and originally targeting soft tissue sarcomas, was approved in November 2018 as a tissue-agnostic inhibitor of TrkA, TrkB, and TrkC for solid tumors with NTRK fusion mutations.
Owing to the development of effective Trk inhibitors, the European Society for Medical Oncology (ESMO) recommends that testing for NTRK fusion mutations be performed in the workup of non-small cell lung cancer.
Activation pathway
Trk receptors dimerize in response to ligand, as do other tyrosine kinase receptors. These dimers phosphorylate each other and enhance the catalytic activity of the kinase. Trk receptors affect neuronal growth and differentiation through the activation of different signaling cascades. The three known pathways are the PLC, Ras/MAPK (mitogen-activated protein kinase) and PI3K (phosphatidylinositol 3-kinase) pathways. These pathways involve the interception of nuclear and mitochondrial cell-death programs. The signaling cascades eventually lead to the activation of the transcription factor CREB (cAMP response element-binding protein), which in turn activates target genes.
PKC pathways
The binding of neurotrophin leads to the phosphorylation of phospholipase C (PLC) by the Trk receptor. Phosphorylated PLC then catalyzes the breakdown of membrane lipids into diacylglycerol and inositol (1,4,5)-trisphosphate (IP3). Diacylglycerol may indirectly activate PI3 kinase or several protein kinase C (PKC) isoforms, whereas IP3 promotes the release of calcium from intracellular stores.
Ras/MAPK pathway
Signaling through the Ras/MAPK pathway is important for the neurotrophin-induced differentiation of neuronal and neuroblastoma cells. Phosphorylation of tyrosine residues in the Trk receptors leads to the activation of the Ras molecules H-Ras and K-Ras. H-Ras is found in lipid rafts embedded within the plasma membrane, while K-Ras is predominantly found in disordered regions of the membrane. Rap1, a vesicle-bound molecule that also takes part in the cascade, is localized to intracellular compartments.
The activation of these molecules results in two alternative MAP kinase pathways. Erk 1/2 can be stimulated through the activation cascade of K-Ras, Raf1, and MEK 1/2, whereas Erk5 is stimulated through the activation cascade of B-Raf and MEK5. However, whether PKC (protein kinase C) can activate MEK5 is not yet known.
PI3 pathway
PI3 pathway signaling is critical both for mediating neurotrophin-induced survival and for regulating vesicular trafficking. The Trk receptor stimulates PI3K heterodimers, which causes the activation of the kinases PDK-1 and Akt. Akt in turn phosphorylates FRK (a Forkhead family transcription factor), BAD, and GSK-3.
TrkA vs TrkC
Some studies have suggested that NGF/TrkA coupling causes preferential activation of the Ras/MAPK pathway, whereas NT3/TrkC coupling causes preferential activation of the PI3 pathway.
See also
TrkB receptor
References
Tyrosine kinase receptors | Trk receptor | [
"Chemistry"
] | 3,338 | [
"Tyrosine kinase receptors",
"Signal transduction"
] |
14,453,419 | https://en.wikipedia.org/wiki/VHDL-AMS | VHDL-AMS is a derivative of the hardware description language VHDL (IEEE 1076-2002). It includes analog and mixed-signal extensions (AMS) in order to define the behavior of analog and mixed-signal systems (IEEE 1076.1-2017).
The VHDL-AMS standard was created with the intent of enabling designers of analog and mixed signal systems and integrated circuits to create and use modules that encapsulate high-level behavioral descriptions as well as structural descriptions of systems and components.
VHDL-AMS is an industry standard modeling language for mixed signal circuits. It provides both continuous-time and event-driven modeling semantics, and so is suitable for analog, digital, and mixed analog/digital circuits. It is particularly well suited for verification of very complex analog, mixed-signal and radio frequency integrated circuits.
Code example
In VHDL-AMS, a design consists at a minimum of an entity which describes the interface and an architecture which contains the actual implementation. In addition, most designs import library modules. Some designs also contain multiple architectures and configurations.
A simple ideal diode in VHDL-AMS would look something like this:
library IEEE;
use IEEE.math_real.all;
use IEEE.electrical_systems.all;

-- this is the entity
entity DIODE is
  generic (iss : current := 1.0e-14);
  port (terminal anode, cathode : electrical);
end entity DIODE;

architecture IDEAL of DIODE is
  quantity v across i through anode to cathode;
  constant vt : voltage := 0.0258;
begin
  i == iss * (exp(v/vt) - 1.0);
end architecture IDEAL;
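The constant vt corresponds to the thermal voltage kT/q, roughly 25.8 mV near room temperature. As a rough, hedged sketch that is not part of the standard or of any particular vendor library, the diode could be exercised from a minimal VHDL-AMS test bench such as the one below; the entity name DIODE_TB, the node n1, and the constant 0.7 V drive are illustrative assumptions, and the sketch assumes the DIODE entity above has already been analyzed into the work library.

library IEEE;
use IEEE.electrical_systems.all;

entity DIODE_TB is
end entity DIODE_TB;

architecture TEST of DIODE_TB is
  -- n1 is the node between the driving source and the diode's anode
  terminal n1 : electrical;
  -- branch quantities of the driving source, from n1 to the global reference
  quantity vdrive across idrive through n1 to electrical_ref;
begin
  -- hold a constant 0.7 V forward bias across the diode branch
  vdrive == 0.7;
  -- instantiate the ideal diode between n1 and ground
  D1 : entity work.DIODE(IDEAL)
    port map (anode => n1, cathode => electrical_ref);
end architecture TEST;

In a simulator that supports direct instantiation, a DC or transient analysis of this test bench would solve the simultaneous statements together; by Kirchhoff's current law at n1, the source branch current idrive balances the diode current.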
VHDL-AMS Simulators
ANSYS Simplorer
Cadence Virtuoso AMS Designer
Dolphin Integration SMASH
Mentor Graphics Questa ADMS
Mentor Graphics SystemVision
Synopsys SaberRD
References
See also
Verilog-AMS, the Analog and Mixed Signal derivative of the Verilog hardware description language
VHDL
Electronic design automation
Very-large-scale integration
Modelica, a language for modeling physical systems
Hardware description languages | VHDL-AMS | [
"Engineering"
] | 454 | [
"Electronic engineering",
"Hardware description languages"
] |
14,453,424 | https://en.wikipedia.org/wiki/Pinning%20points | In a crystalline material, a dislocation is capable of traveling throughout the lattice when relatively small stresses are applied. This movement of dislocations results in the material plastically deforming. Pinning points in the material act to halt a dislocation's movement, requiring a greater amount of force to be applied to overcome the barrier. This results in an overall strengthening of materials.
Types of pinning points
Point defects
Point defects (as well as stationary dislocations, jogs, and kinks) present in a material create stress fields that prevent traveling dislocations from coming into direct contact with them. Much as two particles of the same electric charge repel one another when brought together, the dislocation is pushed away from the pre-existing stress field.
Alloying elements
The introduction of an alloying atom (atom 1) into a crystal of a host atom (atom 2) creates a pinning point for multiple reasons. An alloying atom is by nature a point defect, so it creates a stress field when placed into a foreign crystallographic position, which can block the passage of a dislocation. However, the alloying atom may be approximately the same size as the atom it replaces, in which case its presence does not stress the lattice (as occurs in cobalt-alloyed nickel). Even so, the different atom has a different elastic modulus, which creates a different terrain for the moving dislocation. A higher modulus looks like an energy barrier, and a lower one like an energy trough – both of which stop its movement.
Second phase precipitates
The precipitation of a second phase within the lattice of a material creates physical blockades through which a dislocation cannot pass. The result is that the dislocation must bend (which requires greater energy, or a greater stress to be applied) around the precipitates, which inevitably leaves residual dislocation loops encircling the second phase material and shortens the original dislocation.
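As a hedged illustration not drawn from the article itself, the extra stress needed for a dislocation to bow between two precipitates is commonly estimated, to order of magnitude, by the Orowan relation; the symbols below are standard but are assumptions here, since the article quotes no formula. In LaTeX notation:

\tau_{\mathrm{Orowan}} \approx \frac{G b}{L}

where G is the shear modulus of the matrix, b is the magnitude of the Burgers vector, and L is the spacing between precipitates. Closer-spaced particles therefore demand a higher applied stress before the dislocation can bypass them, which is the strengthening effect described above.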
Grain boundaries
Dislocations require proper lattice ordering to move through a material. At grain boundaries, there is a lattice mismatch, and every atom that lies on the boundary is uncoordinated. This stops dislocations that encounter the boundary from moving.
Crystals
Crystallography
Physical quantities
Materials science
Metallurgy | Pinning points | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 474 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Physical quantities",
"Metallurgy",
"Quantity",
"Materials science",
"Crystallography",
"Crystals",
"Condensed matter physics",
"nan",
"Physical properties"
] |
14,454,590 | https://en.wikipedia.org/wiki/Pivaloyloxymethyl | Pivaloyloxymethyl (POM, pivoxil, pivoxyl) is a protecting group used in organic synthesis. The POM radical has the formula (CH3)3C-CO-O-CH2.
The POM group is also sometimes used to produce prodrugs. For example, the POM group can be attached to the negatively charged organophosphate group(s) of a drug, neutralizing the negative charge; this can make the drug more lipid-soluble and thus better able to diffuse passively across cell membranes into cells. Upon entry into cells, the POM portion of the molecule can be removed by cellular processes, releasing the active drug.
Clinically used prodrugs containing pivaloyloxymethyl groups include adefovir dipivoxil, pivampicillin, cefditoren pivoxil, pivmecillinam, and valproate pivoxil. Tenofovir disoproxil contains a very similar prodrug group.
See also
Pivalic acid
References
Protecting groups
Carboxylate esters | Pivaloyloxymethyl | [
"Chemistry"
] | 244 | [
"Protecting groups",
"Functional groups",
"Reagents for organic chemistry"
] |
14,455,119 | https://en.wikipedia.org/wiki/GPR179 | Probable G-protein coupled receptor 179 is a protein that in humans is encoded by the GPR179 gene.
Clinical relevance
Mutations in this gene have been associated with cases of congenital stationary night blindness.
References
Further reading
G protein-coupled receptors | GPR179 | [
"Chemistry"
] | 50 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,455,317 | https://en.wikipedia.org/wiki/Cutinase | The enzyme cutinase (systematic name: cutin hydrolase, EC 3.1.1.74) is a member of the hydrolase family. It catalyzes the following reaction:
R1COOR2 + H2O -> R1COOH + R2OH
In biological systems, the reactant carboxylic ester is a constituent of the cutin polymer, and the hydrolysis of cutin results in the formation of alcohol and carboxylic acid monomer products.
Nomenclature
Cutinase has an assigned enzyme commission number of EC 3.1.1.74. Cutinase is in the third class of enzymes, meaning that its primary function is to hydrolyze its substrate (in this case, cutin). Within the third class, cutinase is further categorized into the first subclass, which indicates that it specifically hydrolyzes ester bonds. It is then placed in the first sub-subclass, meaning that it targets carboxylic esters, the type of bond that links cutin monomers together.
Function
Most plants have a layer composed of cutin, called the cuticle, on their aboveground surfaces such as stems, leaves, and fruits. This layer of cutin is formed by a matrix-like structure that contains waxy components embedded in the carbohydrate layers. The molecule, cutin, which composes most of the cuticle matrix (40-80%), is composed primarily of fatty acid chains that are polymerized via carboxylic ester bonds.
Research suggests that cutin plays a critical role in preventing pathogenic infections in plant systems. For instance, experiments conducted on tomato plants that had a substantial inability to synthesize cutin found that the tomatoes produced by those plants were significantly more susceptible to infection by both opportunistic pathogens and intentionally inoculated fungal spores.
Cutinase is produced by a variety of fungal plant pathogens, and its activity was first detected in the fungus Penicillium spinulosum. In studies of Nectria haematococca, a fungal pathogen that causes foot rot in pea plants, cutinase has been shown to play key roles in facilitating the early stages of plant infection. It has also been suggested that when fungal spores make initial contact with plant surfaces, a small amount of catalytic cutinase produces cutin monomers, which in turn up-regulate expression of the cutinase gene. This suggests that the expression pathway of cutinase in fungal spores is characterized by a positive feedback loop until the fungus successfully breaches the cutin layer; however, the specific mechanism of this pathway is unclear. Inhibition of cutinase has been shown to prevent fungal infection through intact cuticles. Conversely, supplementation of cutinase to fungi that cannot produce it naturally has been shown to enhance fungal infection success rates.
Cutinases have also been observed in a few plant pathogenic bacterial species, such as Streptomyces scabies, Thermobifida fusca, Pseudomonas mendocina, and Pseudomonas putida, but these have not been studied to the same extent as those found in fungi. The molecular structure of the Thermobifida fusca cutinase shows similarities to the Fusarium solani pisi fungal cutinase, with congruencies in their active sites and overall mechanisms.
Structure
Cutinase belongs to the α-β class of proteins, with a central β-sheet of 5 parallel strands covered by 5 alpha helices on either side of the sheet. Fungal cutinase is generally composed of around 197 amino acid residues, and its native form consists of a single domain. The protein also contains 4 invariant cysteine residues that form 2 disulfide bridges, whose cleavage results in a complete loss of enzymatic activity.
Crystal structures have shown that the active site of cutinases is found on one end of the ellipsoid shape of the enzyme. The active site is flanked by two hydrophobic loop structures and partly covered by two thin bridges formed by amino acid side chains. It does not possess a hydrophobic lid, which is a common constituent feature among other lipases. Instead, the catalytic serine in the active site is exposed to open solvent, and the cutinase enzyme does not show interfacial activation behaviors at an aqueous-nonpolar interface. Cutinase activation is believed to be derived from slight shifts in the conformation of hydrophobic residues, acting as a miniature lid. The oxyanion hole in the active site is a constituent feature of the binding site, which differs from most lipolytic enzymes whose oxyanion holes are induced upon substrate binding.
Mechanism
Cutinase is a serine esterase, and the active site contains a serine-histidine-aspartate triad and an oxyanion hole, which are signature elements of serine hydrolases. The binding site of the cutin lipid polymer consists of two hydrophobic loops characterized by nonpolar amino acids such as leucine, alanine, isoleucine, and proline. These hydrophobic residues show a higher degree of flexibility, suggesting an induced fit model to facilitate cutin bonding to the active site. In the cutinase active site, histidine deprotonates serine, allowing the serine to undergo a nucleophilic attack on the cutin carboxylic ester. This is followed by an elimination reaction whereby the charged oxygen (stabilized by the oxyanion hole) creates a double bond, removing an R group from the cutin polymer in the form of an alcohol. The process repeats with a nucleophilic attack on the new carboxylic ester by a deprotonated water molecule. Following this, the charged oxygen reforms its double bond, removing the serine attachment and releasing the carboxylic acid R monomer.
Applications
The stability of cutinases at higher temperatures (20–50 °C) and their compatibility with other hydrolytic enzymes give them potential applications in the detergent industry. In fact, cutinases have been shown to be more efficient at cleaving and eliminating non-calcium fats from clothing than other industrial lipases. Another advantage of cutinase in this industry is that it is catalytically active toward both water- and lipid-soluble ester compounds, making it a more versatile degradative agent. This versatility has also prompted experiments with cutinase in the biofuel industry, because of its ability to facilitate transesterification of biofuels in various solubility environments.
Rather unexpectedly, the ability to degrade the cutin layer of plants and their fruits holds the potential to be beneficial to the fruit industry. This is because the cuticle layer of fruits is a putative mechanism of water regulation, and the degradation of this layer subjects the fruits to water movement across its membrane. By using cutinase to degrade the cuticle of fruits, industry makers can enhance the drying of fruits and more easily deliver preservatives and additives to the flesh of the fruit.
See also
PETase
References
Further reading
EC 3.1.1
Enzymes of unknown structure
Protein domains | Cutinase | [
"Biology"
] | 1,497 | [
"Protein domains",
"Protein classification"
] |
14,456,747 | https://en.wikipedia.org/wiki/Miga%2C%20Quatchi%2C%20Sumi%20and%20Mukmuk | Miga and Quatchi are the official mascots of the 2010 Winter Olympics, Sumi is the official mascot of the 2010 Winter Paralympics, and Mukmuk is their designated "sidekick" for both games, held in Vancouver, British Columbia, Canada. The four mascots were introduced on November 27, 2007. They were designed by the Canadian and American design studio, Meomi Design. It was the first time (since Cobi and Petra) that the Olympic and Paralympic mascots were introduced at the same time.
Development
The emblem of the 2010 Winter Olympics, "Ilanaaq the Inukshuk", was picked through an open contest. However, it met criticism from some aboriginal groups over its design, so the mascot artist was instead selected through a competition.
Through a process in which 177 professionals from around the world submitted their ideas, five were made finalists. In December 2006, VANOC eventually selected concepts from Meomi Design. Formed in 2002, Meomi is a partnership of Vicki Wong, a Vancouver-born Canadian of Chinese descent who worked in graphic and web design, and Michael Murphy, born in Milford, Michigan, who worked in design and motion graphics. Writing for Sports Illustrated, experts Michael Erdmann and John Ryan, commenting on the mascots of Olympic Games held in Canada, pointed out that Meomi's character drawing styles "are more closely related to Urban Vinyl [...]".
After the selection, Meomi provided more than 20 different concepts to VANOC, and three concepts were selected. The conception of the mascots was based on local wildlife, as well as First Nations legends, mythologies and legendary creatures. During the design process, an early name for Quatchi was dismissed when the undisclosed word was found to have a rude connotation in another language. An animated video by Buck, a design studio based in New York and Los Angeles, with music provided by Kid Koala, was screened at the first public presentation of the mascots. Details about the mascots were kept secret until November 27, 2007, when they were unveiled to the public.
Mascots
The first public presentation of the mascots took place before 800 schoolchildren at the Bell Centre For Performing Arts in Surrey, British Columbia. This represents the first time (since 1992) that the Olympic and Paralympic mascots were introduced at the same time. Miga and Quatchi are mascots for the 2010 Winter Olympics, while Sumi is the mascot for the 2010 Winter Paralympics. Mukmuk is their designated "sidekick". They made a cameo appearance in Mario & Sonic at the Olympic Winter Games.
Reception
Popularity
Mukmuk, although a designated "sidekick", was a run-away success, "capturing the hearts of Games-goers everywhere", including an impromptu "protest" at the Vancouver Art Gallery to make him a full-fledged mascot, and making "Top 5" for the Olympic games in the Vancouver edition of 24 Hours.
Criticism and image confusion
When the mascots were unveiled, there were initial concerns over whether they were effective at representing British Columbia and Canada.
On July 3, 2009, Canadian artist Michael R. Barrick created two composite images – one based on the official art, and the other based on fan art created by Angela Melick – depicting the official mascots alongside Pedobear, an internet meme popularized by the imageboard 4chan. The images were created to make "a visual critique of how the style of the mascots resembles the style of Pedobear." As a result of the images receiving high rankings on Google Images, they were mistakenly used by other media. The Polish newspaper Gazeta Olsztyńska used one of the images for a front-page story about the then-upcoming Olympics, published on February 4, 2010. Similarly, the Dutch television guide Avrobode used one of the images.
After the games
In compliance with strict orders of the International Olympic Committee, which require that the mascot costumes not be animated or worn again so that the raw material cannot be reused, 48 of the 61 life-sized mascot costumes were destroyed. Three full sets of costumes are kept in Canada, one full set has gone to the IOC in Switzerland, and one Sumi costume has gone to the International Paralympic Committee in Germany.
See also
Chimera
Orca
Thunderbird (mythology)
American black bear
Bigfoot
Bigfoot in popular culture
Vancouver Island marmot
Olympic mascots
Paralympic mascots
References
External links
Official Mascots page
Official Mascots Merchandise
2010 Winter Olympics Press Release on the Mascots (November 26, 2007)
Meomi Design - Designers of mascots
zinc Roe Design microsite of the mascots
2010 Winter Olympics
2010 Winter Paralympics
Olympic mascots
Paralympic mascots
Fictional hybrids
Bear mascots
Anthropomorphic bears
Bigfoot in popular culture
Totem poles
Canadian mascots
Fictional characters with air or wind abilities
Fictional characters with water abilities
Fictional characters with superhuman strength
Fictional characters from British Columbia
Mascots introduced in 2007 | Miga, Quatchi, Sumi and Mukmuk | [
"Biology"
] | 1,010 | [
"Fictional hybrids",
"Hybrid organisms"
] |
14,456,810 | https://en.wikipedia.org/wiki/Pollakanth | A pollakanth is a plant that reproduces, flowers and sets seed recurrently during its life. The term was first used by Frans R. Kjellman.
Other terms with the same meaning are polycarpic and iteroparous.
Its antonym is hapaxanth.
Plant life-forms | Pollakanth | [
"Biology"
] | 70 | [
"Plant life-forms",
"Plants"
] |