| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
48,253 | https://en.wikipedia.org/wiki/Fractal%20art | Fractal art is a form of algorithmic art created by calculating fractal objects and representing the calculation results as still digital images, animations, and media. Fractal art developed from the mid-1980s onwards. It is a genre of computer art and digital art which are part of new media art. The mathematical beauty of fractals lies at the intersection of generative art and computer art. They combine to produce a type of abstract art.
Fractal art (especially in the western world) is rarely drawn or painted by hand. It is usually created indirectly with the assistance of fractal-generating software, iterating through three phases: setting parameters of appropriate fractal software; executing the possibly lengthy calculation; and evaluating the product. In some cases, other graphics programs are used to further modify the images produced. This is called post-processing. Non-fractal imagery may also be integrated into the artwork. Julia sets and the Mandelbrot set can be considered icons of fractal art.
It was assumed that fractal art could not have developed without computers because of the calculative capabilities they provide. Fractals are generated by applying iterative methods to the solution of non-linear or polynomial equations. Fractals are any of various extremely irregular curves or shapes for which any suitably chosen part is similar in shape to a given larger or smaller part when magnified or reduced to the same size.
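To make the iterative generation concrete, the following is a minimal escape-time sketch for the Mandelbrot iteration z → z² + c, the family behind the icons mentioned above; the grid extent, iteration cap, and escape radius are illustrative assumptions, not fixed conventions.

```python
# Escape-time sketch for the Mandelbrot iteration z -> z^2 + c.
def escape_time(c: complex, max_iter: int = 100, radius: float = 2.0) -> int:
    """Return the number of iterations before z = z*z + c escapes |z| > radius."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > radius:
            return n
        z = z * z + c
    return max_iter

# Sample a coarse grid of the complex plane; each count can be mapped to a color.
counts = [[escape_time(complex(-2.0 + 3.0 * x / 79, -1.2 + 2.4 * y / 39))
           for x in range(80)] for y in range(40)]
```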
Types
There are many different kinds of fractal images. They can be subdivided into several groups.
Fractals derived from standard geometry by using iterative transformations on an initial common figure like a straight line (the Cantor dust or the von Koch curve), a triangle (the Sierpinski triangle), or a cube (the Menger sponge). The first fractal figures, invented in the late 19th and early 20th centuries, belong to this group.
IFS (iterated function systems)
Strange attractors
Fractal flame
L-system fractals
Fractals created by the iteration of complex polynomials.
Newton fractals, including Nova fractals
Fractals generated over quaternions and other Cayley-Dickson algebras
Fractal terrains generated by random fractal processes
Mandelbulbs, a form of three-dimensional fractal
Fractal Expressionism is a term used to differentiate traditional visual art that incorporates fractal elements such as self-similarity. Perhaps the best example of fractal expressionism is found in Jackson Pollock's dripped patterns. They have been analysed and found to contain a fractal dimension which has been attributed to his technique.
Techniques
Fractals of all kinds have been used as the basis for digital art and animation. High resolution color graphics became increasingly available at scientific research labs in the mid-1980s. Scientific forms of art, including fractal art, have developed separately from mainstream culture. Starting with 2-dimensional details of fractals, such as the Mandelbrot Set, fractals have found artistic application in fields as varied as texture generation, plant growth simulation, and landscape generation.
Fractals are sometimes combined with evolutionary algorithms, either by iteratively choosing good-looking specimens in a set of random variations of a fractal artwork and producing new variations, to avoid dealing with cumbersome or unpredictable parameters, or collectively, as in the Electric Sheep project. There, people use fractal flames rendered with distributed computing as their screensaver and "rate" the flame they are viewing, influencing the server, which reduces the traits of the undesirable flames and increases those of the desirable ones to produce a computer-generated, community-created piece of art.
Many fractal images are admired because of their perceived harmony. This is typically achieved by the patterns which emerge from the balance of order and chaos. Similar qualities have been described in Chinese painting and miniature trees and rockeries.
Landscapes
The first fractal image that was intended to be a work of art was probably the famous one on the cover of Scientific American, August 1985. This image showed a landscape formed from the potential function on the domain outside the (usual) Mandelbrot set. However, as the potential function grows fast near the boundary of the Mandelbrot set, it was necessary for the creator to let the landscape grow downwards, so that it looked as if the Mandelbrot set was a plateau atop a mountain with steep sides. The same technique was used a year later in some images in The Beauty of Fractals by Heinz-Otto Peitgen and Michael M. Richter. They provide a formula to estimate the distance from a point outside the Mandelbrot set to the boundary of the Mandelbrot set (and a similar formula for the Julia sets). Landscapes can, for example, be formed from the distance function for a family of iterations of the form z² + c.
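The exact Peitgen–Richter formula is not reproduced in the text above; as a hedged sketch, the following uses the standard exterior distance estimate d ≈ |z|·ln|z| / |dz/dc| for the Mandelbrot set, with an assumed iteration cap and escape threshold. A landscape renderer can use such an estimate as a height field around the set.

```python
import math

def mandelbrot_distance(c: complex, max_iter: int = 200) -> float:
    """Estimate the distance from a point c outside the set to its boundary."""
    z, dz = 0j, 0j
    for _ in range(max_iter):
        dz = 2.0 * z * dz + 1.0   # derivative dz/dc, iterated alongside z
        z = z * z + c
        if abs(z) > 1e6:          # far enough out for the estimate to apply
            r = abs(z)
            return r * math.log(r) / abs(dz)
    return 0.0  # c appears to be inside (or extremely close to) the set
```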
Artists
Notable fractal artists include Desmond Paul Henry, Hamid Naderi Yeganeh, and musician Bruno Degazio. British artists include William Latham, who has used fractal geometry and other computer graphics techniques in his works, and Vienna Forrester, who creates flame fractal art using data extracted from her photographs. Greg Sams has used fractal designs in postcards, T-shirts, and textiles. American Vicky Brago-Mitchell has created fractal art which has appeared in exhibitions and on magazine covers. Scott Draves is credited with inventing flame fractals. Carlos Ginzburg has explored fractal art and developed a concept called "homo fractalus", which is based around the idea that the human is the ultimate fractal. Merrin Parkers from New Zealand specialises in fractal art.
Kerry Mitchell wrote a "Fractal Art Manifesto" setting out his view of the genre. In Italy, the artist Giorgio Orefice wrote the "Fractalism" manifesto, founding a Fractalism cultural movement in 1999.
According to Mitchell, fractal art is not computerized art, lacking in rules, unpredictable, nor something that any person with access to a computer can do well. Instead, fractal art is expressive, creative, and requires input, effort, and intelligence. Most importantly, "fractal art is simply that which is created by Fractal Artists: ART."
American artist Hal Tenny was hired to design environments in the 2017 film Guardians of the Galaxy Vol. 2. There has also been a surge in fractal art distributed via non-fungible tokens (NFTs), such as work listed by Fractal_Dimensions, spectral.haus, and NetMetropolis.
Exhibits
Fractal art has been exhibited at major international art galleries. One of the first exhibitions of fractal art was "Map Art", a travelling exhibition of works from researchers at the University of Bremen. Mathematicians Heinz-Otto Peitgen and Michael M. Richter discovered that the public not only found the images aesthetically pleasing but that they also wanted to understand the scientific background to the images.
In 1989, fractals were part of the subject matter for an art show called Strange Attractors: Signs of Chaos at the New Museum of Contemporary Art. The show consisted of photographs, installations and sculptures designed to provide greater scientific discourse to the field which had already captured the public's attention through colourful and intricate computer imagery.
In 2014, emerging British fractal artist Vienna Forrester created an exhibition held at the I-node of the Planetary Collegium, Kefalonia, entitled "IO. Fragmented Myths and Memories: A Fractal Exploration of Kefalonia", part of the 2013–14 international arts festival "Stone Kingdom Kefalonia" commemorating the devastating 1953 Ionian earthquake. Her works were created by using geographical coordinates and photographs from parts of the island which still bear the scars.
Artworks
"Global Forest" artwork is based on a study highlighting the aesthetic and physiological impacts of fractal patterns. Fractals, patterns found universally in nature, repeat self-similarly across scales, with the complexity and aesthetic perception determined by their recursion and dimension rate. Notably, these patterns are featured in art across various cultures, including Jackson Pollock's paintings, eliciting strong aesthetic reactions. Moreover, incorporating fractals in architectural designs can mitigate visual strain and discomfort caused by Euclidean spaces and even reduce stress, resonating with the biophilic idea of humans' innate connection to nature. The ScienceDesignLab collaborated with the Mohawk Group to integrate these findings, producing award-winning "Relaxing Floors" that use fractal patterns, hypothesizing their therapeutic effects stem from nature's soothing visuals.
See also
Batik
Fractal curve
Greeble
Mathematics and architecture
Persian carpet
Psychedelic art
Systems art
Infinite compositions of analytic functions
References
Further reading
External links
Art and the Mandelbrot set (on Wikimedia Commons)
Fractals (on Wikimedia Commons)
Fractals
Abstract art
Abstract animation
Computer graphic techniques
Algorithmic art
Psychedelic art
Digital art | Fractal art | [
"Mathematics"
] | 1,859 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical objects",
"Fractals",
"Mathematical relations"
] |
48,256 | https://en.wikipedia.org/wiki/Random%20sequence | The concept of a random sequence is essential in probability theory and statistics. The concept generally relies on the notion of a sequence of random variables and many statistical discussions begin with the words "let X1,...,Xn be independent random variables...". Yet as D. H. Lehmer stated in 1951: "A random sequence is a vague notion... in which each term is unpredictable to the uninitiated and whose digits pass a certain number of tests traditional with statisticians".
Axiomatic probability theory deliberately avoids a definition of a random sequence. Traditional probability theory does not state if a specific sequence is random, but generally proceeds to discuss the properties of random variables and stochastic sequences assuming some definition of randomness. The Bourbaki school considered the statement "let us consider a random sequence" an abuse of language.
Early history
Émile Borel was one of the first mathematicians to formally address randomness in 1909. In 1919 Richard von Mises gave the first definition of algorithmic randomness, which was inspired by the law of large numbers, although he used the term collective rather than random sequence. Using the concept of the impossibility of a gambling system, von Mises defined an infinite sequence of zeros and ones as random if it is not biased, i.e. it has the frequency stability property (the frequency of zeros goes to 1/2), and every sub-sequence we can select from it by a "proper" method of selection is also not biased.
The sub-sequence selection criterion imposed by von Mises is important, because although 0101010101... is not biased, by selecting the odd positions, we get 000000... which is not random. Von Mises never totally formalized his definition of a proper selection rule for sub-sequences, but in 1940 Alonzo Church defined it as any recursive function which, having read the first N elements of the sequence, decides if it wants to select element number N + 1. Church was a pioneer in the field of computable functions, and the definition he made relied on the Church–Turing thesis for computability. This definition is often called Mises–Church randomness.
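As a toy illustration of the point above (an assumed example, not taken from the cited works), the alternating sequence is frequency-stable, yet a trivial position-based selection rule extracts an all-zero sub-sequence:

```python
def alternating(n: int) -> list[int]:
    """First n bits of 0101010101..."""
    return [i % 2 for i in range(n)]

def select_odd_positions(bits: list[int]) -> list[int]:
    """A Church-style selection rule: keep the 1st, 3rd, 5th, ... elements,
    deciding from the position alone before reading the element itself."""
    return bits[::2]

bits = alternating(20)
print(sum(bits) / len(bits))       # 0.5 -- frequencies look unbiased
print(select_odd_positions(bits))  # all zeros -- the sub-sequence is biased
```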
Modern approaches
During the 20th century various technical approaches to defining random sequences were developed and now three distinct paradigms can be identified. In the mid 1960s, A. N. Kolmogorov and D. W. Loveland independently proposed a more permissive selection rule. In their view Church's recursive function definition was too restrictive in that it read the elements in order. Instead they proposed a rule based on a partially computable process which having read any N elements of the sequence, decides if it wants to select another element which has not been read yet. This definition is often called Kolmogorov–Loveland stochasticity. But this method was considered too weak by Alexander Shen who showed that there is a Kolmogorov–Loveland stochastic sequence which does not conform to the general notion of randomness.
In 1966 Per Martin-Löf introduced a new notion which is now generally considered the most satisfactory notion of algorithmic randomness. His original definition involved measure theory, but it was later shown that it can be expressed in terms of Kolmogorov complexity. Kolmogorov's definition of a random string was that it is random if it has no description shorter than itself via a universal Turing machine.
Three basic paradigms for dealing with random sequences have now emerged:
The frequency / measure-theoretic approach. This approach started with the work of Richard von Mises and Alonzo Church. In the 1960s Per Martin-Löf noticed that the sets coding such frequency-based stochastic properties are a special kind of measure zero sets, and that a more general and smooth definition can be obtained by considering all effectively measure zero sets.
The complexity / compressibility approach. This paradigm was championed by A. N. Kolmogorov along with contributions from Leonid Levin and Gregory Chaitin. For finite sequences, Kolmogorov defines randomness of a binary string of length n as the entropy (or Kolmogorov complexity) normalized by the length n. In other words, if the Kolmogorov complexity of the string is close to n, it is very random; if the complexity is far below n, it is not so random. The dual concept of randomness is compressibility: the more random a sequence is, the less compressible it is, and vice versa (a rough executable illustration follows this list).
The predictability approach. This paradigm is due to Claus P. Schnorr and uses a slightly different definition of constructive martingales than the martingales used in traditional probability theory. Schnorr showed how the existence of a selective betting strategy implied the existence of a selection rule for a biased sub-sequence. If one only requires a recursive martingale to succeed on a sequence, instead of constructively succeeding on a sequence, then one gets the concept of recursive randomness. Yongge Wang showed that the recursive randomness concept is different from Schnorr's randomness concept.
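As flagged in the compressibility item above, Kolmogorov complexity is uncomputable, so an executable illustration must substitute a real compressor as a crude upper bound on complexity; a minimal sketch using zlib (the specific inputs and the choice of zlib are assumptions for illustration):

```python
import os
import zlib

def compressed_ratio(data: bytes) -> float:
    """Compressed size over original size; a ratio near 1.0 suggests the
    data is close to incompressible, i.e. 'random' in this rough sense."""
    return len(zlib.compress(data, level=9)) / len(data)

print(compressed_ratio(b"01" * 5000))       # highly regular -> small ratio
print(compressed_ratio(os.urandom(10000)))  # random bytes -> ratio near 1.0
```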
In most cases, theorems relating the three paradigms (often equivalence) have been proven.
See also
Randomness
History of randomness
Random number generator
Seven states of randomness
Statistical randomness
References
Sergio B. Volchan, "What Is a Random Sequence?", The American Mathematical Monthly, Vol. 109, 2002, pp. 46–63.
Notes
External links
Video on frequency stability. Why humans can't "guess" randomly
Randomness tests by Terry Ritter
Sequences and series
Statistical randomness | Random sequence | [
"Mathematics"
] | 1,151 | [
"Sequences and series",
"Mathematical analysis",
"Mathematical structures",
"Mathematical objects"
] |
48,258 | https://en.wikipedia.org/wiki/Bounded%20set | In mathematical analysis and related areas of mathematics, a set is called bounded if all of its points are within a certain distance of each other. Conversely, a set which is not bounded is called unbounded. The word "bounded" makes no sense in a general topological space without a corresponding metric.
Boundary is a distinct concept; for example, a circle (not to be confused with a disk) in isolation is a boundaryless bounded set, while the half plane is unbounded yet has a boundary.
A bounded set is not necessarily a closed set and vice versa. For example, the subset S of the 2-dimensional real space R2 lying between the two parabolic curves y = x2 + 1 and y = x2 − 1, defined in a Cartesian coordinate system, is closed, because it contains the curves bounding it, but it is not bounded (so it is unbounded).
Definition in the real numbers
A set S of real numbers is called bounded from above if there exists some real number k (not necessarily in S) such that k ≥ s for all s in S. The number k is called an upper bound of S. The terms bounded from below and lower bound are similarly defined.
A set S is bounded if it has both upper and lower bounds. Therefore, a set of real numbers is bounded if it is contained in a finite interval.
Definition in a metric space
A subset S of a metric space (M, d) is bounded if there exists r > 0 such that for all s and t in S, we have d(s, t) < r. The metric space (M, d) is a bounded metric space (or d is a bounded metric) if M is bounded as a subset of itself.
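For a finite set of points the definition can be checked directly; a minimal sketch, assuming the Euclidean metric as d:

```python
import math
from itertools import combinations

def euclid(s: tuple[float, ...], t: tuple[float, ...]) -> float:
    """Euclidean distance, standing in for the metric d."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(s, t)))

def is_bounded(points: list[tuple[float, ...]], r: float) -> bool:
    """Check d(s, t) < r for all pairs s, t in the set."""
    return all(euclid(s, t) < r for s, t in combinations(points, 2))

print(is_bounded([(0.0, 0.0), (3.0, 4.0), (1.0, 1.0)], r=6.0))  # True: diameter is 5
```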
Total boundedness implies boundedness. For subsets of Rn the two are equivalent.
A metric space is compact if and only if it is complete and totally bounded.
A subset of Euclidean space Rn is compact if and only if it is closed and bounded. This is also called the Heine-Borel theorem.
Boundedness in topological vector spaces
In topological vector spaces, a different definition for bounded sets exists which is sometimes called von Neumann boundedness. If the topology of the topological vector space is induced by a metric which is homogeneous, as in the case of a metric induced by the norm of normed vector spaces, then the two definitions coincide.
Boundedness in order theory
A set of real numbers is bounded if and only if it has an upper and lower bound. This definition is extendable to subsets of any partially ordered set. Note that this more general concept of boundedness does not correspond to a notion of "size".
A subset S of a partially ordered set P is called bounded above if there is an element k in P such that k ≥ s for all s in S. The element k is called an upper bound of S. The concepts of bounded below and lower bound are defined similarly. (See also upper and lower bounds.)
A subset S of a partially ordered set P is called bounded if it has both an upper and a lower bound, or equivalently, if it is contained in an interval. Note that this is not just a property of the set S but also one of the set S as subset of P.
A bounded poset P (that is, by itself, not as subset) is one that has a least element and a greatest element. Note that this concept of boundedness has nothing to do with finite size, and that a subset S of a bounded poset P with as order the restriction of the order on P is not necessarily a bounded poset.
A subset S of Rn is bounded with respect to the Euclidean distance if and only if it is bounded as a subset of Rn with the product order. However, S may be bounded as a subset of Rn with the lexicographical order, but not with respect to the Euclidean distance.
A class of ordinal numbers is said to be unbounded, or cofinal, when given any ordinal, there is always some element of the class greater than it. Thus in this case "unbounded" does not mean unbounded by itself but unbounded as a subclass of the class of all ordinal numbers.
See also
Bounded domain
Bounded function
Local boundedness
Order theory
Totally bounded
References
Functional analysis
Mathematical analysis
Order theory | Bounded set | [
"Mathematics"
] | 873 | [
"Mathematical analysis",
"Functions and mappings",
"Functional analysis",
"Mathematical objects",
"Mathematical relations",
"Order theory"
] |
48,260 | https://en.wikipedia.org/wiki/Monotonic%20function | In mathematics, a monotonic function (or monotone function) is a function between ordered sets that preserves or reverses the given order. This concept first arose in calculus, and was later generalized to the more abstract setting of order theory.
In calculus and analysis
In calculus, a function defined on a subset of the real numbers with real values is called monotonic if it is either entirely non-decreasing, or entirely non-increasing. That is, as per Fig. 1, a function that increases monotonically does not exclusively have to increase; it simply must not decrease.
A function f is termed monotonically increasing (also increasing or non-decreasing) if for all x and y such that x ≤ y one has f(x) ≤ f(y), so f preserves the order (see Figure 1). Likewise, a function is called monotonically decreasing (also decreasing or non-increasing) if, whenever x ≤ y, then f(x) ≥ f(y), so it reverses the order (see Figure 2).
If the order ≤ in the definition of monotonicity is replaced by the strict order <, one obtains a stronger requirement. A function with this property is called strictly increasing (also increasing). Again, by inverting the order symbol, one finds a corresponding concept called strictly decreasing (also decreasing). A function with either property is called strictly monotone. Functions that are strictly monotone are one-to-one (because for x not equal to y, either x < y or x > y and so, by monotonicity, either f(x) < f(y) or f(x) > f(y), thus f(x) ≠ f(y)).
To avoid ambiguity, the terms weakly monotone, weakly increasing and weakly decreasing are often used to refer to non-strict monotonicity.
The terms "non-decreasing" and "non-increasing" should not be confused with the (much weaker) negative qualifications "not decreasing" and "not increasing". For example, the non-monotonic function shown in figure 3 first falls, then rises, then falls again. It is therefore not decreasing and not increasing, but it is neither non-decreasing nor non-increasing.
A function f is said to be absolutely monotonic over an interval if the derivatives of all orders of f are all nonnegative or all nonpositive at all points on the interval.
Inverse of function
All strictly monotonic functions are invertible because they are guaranteed to have a one-to-one mapping from their range to their domain.
However, functions that are only weakly monotone are not invertible because they are constant on some interval (and therefore are not one-to-one).
A function may be strictly monotonic over a limited range of values and thus have an inverse on that range even though it is not strictly monotonic everywhere. For example, if y = g(x) is strictly increasing on the range [a, b], then it has an inverse x = h(y) on the range [g(a), g(b)].
The term monotonic is sometimes used in place of strictly monotonic, so a source may state that all monotonic functions are invertible when they really mean that all strictly monotonic functions are invertible.
Monotonic transformation
The term monotonic transformation (or monotone transformation) may also cause confusion because it refers to a transformation by a strictly increasing function. This is the case in economics with respect to the ordinal properties of a utility function being preserved across a monotonic transform (see also monotone preferences). In this context, the term "monotonic transformation" refers to a positive monotonic transformation and is intended to distinguish it from a "negative monotonic transformation," which reverses the order of the numbers.
Some basic applications and results
The following properties are true for a monotonic function f : R → R:
f has limits from the right and from the left at every point of its domain;
f has a limit at positive or negative infinity (±∞) of either a real number, ∞, or −∞.
f can only have jump discontinuities;
f can only have countably many discontinuities in its domain. The discontinuities, however, do not necessarily consist of isolated points and may even be dense in an interval (a, b). For example, for any summable sequence (ai) of positive numbers and any enumeration (qi) of the rational numbers, the monotonically increasing function f(x) = Σ {ai : qi ≤ x} is continuous exactly at every irrational number (cf. picture); a truncated, executable version appears in the sketch after this list. It is the cumulative distribution function of the discrete measure on the rational numbers, where ai is the weight of qi.
If f is differentiable at a and f′(a) > 0, then there is a non-degenerate interval I containing a such that f is increasing on I. As a partial converse, if f is differentiable and increasing on an interval I, then its derivative is nonnegative at every point in I.
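A truncated, executable version of the rational-jump function from the list above (the finite enumeration of rationals and the weights ai = 2^−i are assumed choices):

```python
from fractions import Fraction

# Assumed toy enumeration of some rationals, with summable weights a_i = 2^-i.
rationals = sorted({Fraction(p, q) for q in range(1, 6) for p in range(-5, 6)})
weights = [2.0 ** -i for i in range(len(rationals))]

def f(x: float) -> float:
    """Finite truncation of f(x) = sum of a_i over those i with q_i <= x."""
    return sum(a for q, a in zip(rationals, weights) if q <= x)

print(f(0.25) <= f(0.5) <= f(0.75))  # True: f is monotonically increasing
```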
These properties are the reason why monotonic functions are useful in technical work in analysis. Other important properties of these functions include:
if f is a monotonic function defined on an interval [a, b], then f is differentiable almost everywhere on [a, b]; i.e. the set of numbers x in [a, b] such that f is not differentiable at x has Lebesgue measure zero. In addition, this result cannot be improved to countable: see Cantor function.
if this set is countable, then f is absolutely continuous
if f is a monotonic function defined on an interval [a, b], then f is Riemann integrable.
An important application of monotonic functions is in probability theory. If X is a random variable, its cumulative distribution function FX(x) = Prob(X ≤ x) is a monotonically increasing function.
A function is unimodal if it is monotonically increasing up to some point (the mode) and then monotonically decreasing.
When f is a strictly monotonic function, then f is injective on its domain, and if T is the range of f, then there is an inverse function on T for f. In contrast, each constant function is monotonic, but not injective, and hence cannot have an inverse.
The graphic shows six monotonic functions. Their simplest forms are shown in the plot area and the expressions used to create them are shown on the y-axis.
In topology
A map f : X → Y is said to be monotone if each of its fibers is connected; that is, for each element y in Y, the (possibly empty) set f⁻¹(y) is a connected subspace of X.
In functional analysis
In functional analysis on a topological vector space X, a (possibly non-linear) operator T : X → X∗ is said to be a monotone operator if (Tu − Tv, u − v) ≥ 0 for all u, v in X.
Kachurovskii's theorem shows that convex functions on Banach spaces have monotonic operators as their derivatives.
A subset G of X × X∗ is said to be a monotone set if for every pair [u1, w1] and [u2, w2] in G, (w1 − w2, u1 − u2) ≥ 0.
G is said to be maximal monotone if it is maximal among all monotone sets in the sense of set inclusion. The graph of a monotone operator is a monotone set. A monotone operator is said to be maximal monotone if its graph is a maximal monotone set.
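In finite dimensions the condition can be checked numerically. A sketch for a linear operator T(u) = Au on R², where the positive semidefinite matrix A is an assumed example (for such A the monotonicity inequality holds for every pair):

```python
def matvec(A: list[list[float]], u: list[float]) -> list[float]:
    return [sum(a * x for a, x in zip(row, u)) for row in A]

def dot(u: list[float], v: list[float]) -> float:
    return sum(a * b for a, b in zip(u, v))

A = [[2.0, 0.0], [0.0, 1.0]]  # positive semidefinite -> T(u) = Au is monotone

def monotone_on_pair(u: list[float], v: list[float]) -> bool:
    """Check (Tu - Tv, u - v) >= 0 for one pair u, v."""
    diff_T = [a - b for a, b in zip(matvec(A, u), matvec(A, v))]
    diff = [a - b for a, b in zip(u, v)]
    return dot(diff_T, diff) >= 0.0

print(monotone_on_pair([1.0, 2.0], [-3.0, 0.5]))  # True
```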
In order theory
Order theory deals with arbitrary partially ordered sets and preordered sets as a generalization of real numbers. The above definition of monotonicity is relevant in these cases as well. However, the terms "increasing" and "decreasing" are avoided, since their conventional pictorial representation does not apply to orders that are not total. Furthermore, the strict relations < and > are of little use in many non-total orders and hence no additional terminology is introduced for them.
Letting ≤ denote the partial order relation of any partially ordered set, a monotone function, also called isotone, or order-preserving, satisfies the property
x ≤ y implies f(x) ≤ f(y)
for all x and y in its domain. The composite of two monotone mappings is also monotone.
The dual notion is often called antitone, anti-monotone, or order-reversing. Hence, an antitone function f satisfies the property
x ≤ y implies f(y) ≤ f(x)
for all x and y in its domain.
A constant function is both monotone and antitone; conversely, if f is both monotone and antitone, and if the domain of f is a lattice, then f must be constant.
Monotone functions are central in order theory. They appear in most articles on the subject and examples from special applications are found in these places. Some notable special monotone functions are order embeddings (functions for which x ≤ y if and only if f(x) ≤ f(y)) and order isomorphisms (surjective order embeddings).
In the context of search algorithms
In the context of search algorithms monotonicity (also called consistency) is a condition applied to heuristic functions. A heuristic h(n) is monotonic if, for every node n and every successor n′ of n generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n′ plus the estimated cost of reaching the goal from n′:
h(n) ≤ c(n, a, n′) + h(n′).
This is a form of triangle inequality, with n, n′, and the goal Gn closest to n. Because every monotonic heuristic is also admissible, monotonicity is a stricter requirement than admissibility. Some heuristic algorithms such as A* can be proven optimal provided that the heuristic they use is monotonic.
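A minimal sketch checking this condition on an assumed toy graph, where h maps nodes to heuristic estimates and each edge triple carries its step cost:

```python
def is_consistent(h: dict[str, float],
                  edges: list[tuple[str, str, float]]) -> bool:
    """Check h(n) <= c(n, a, n') + h(n') over every (n, n', step_cost) edge."""
    return all(h[n] <= cost + h[np] for n, np, cost in edges)

h = {"A": 4.0, "B": 2.0, "G": 0.0}          # heuristic estimates to goal G
edges = [("A", "B", 3.0), ("B", "G", 2.0)]  # step costs for each action
print(is_consistent(h, edges))              # True: 4 <= 3 + 2 and 2 <= 2 + 0
```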
In Boolean functions
In Boolean algebra, a monotonic function is one such that for all ai and bi in {0, 1}, if a1 ≤ b1, a2 ≤ b2, ..., an ≤ bn (i.e. the Cartesian product {0, 1}n is ordered coordinatewise), then f(a1, ..., an) ≤ f(b1, ..., bn). In other words, a Boolean function is monotonic if, for every combination of inputs, switching one of the inputs from false to true can only cause the output to switch from false to true and not from true to false. Graphically, this means that an n-ary Boolean function is monotonic when its representation as an n-cube labelled with truth values has no upward edge from true to false. (This labelled Hasse diagram is the dual of the function's labelled Venn diagram, which is the more common representation for n ≤ 3.)
The monotonic Boolean functions are precisely those that can be defined by an expression combining the inputs (which may appear more than once) using only the operators and and or (in particular not is forbidden). For instance "at least two of a, b, c hold" is a monotonic function of a, b, c, since it can be written for instance as ((a and b) or (a and c) or (b and c)).
The number of such functions on n variables is known as the Dedekind number of n.
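A brute-force sketch (assumed helper names) confirming that the "at least two of a, b, c" example above is monotonic, by checking every single-input flip from false to true:

```python
from itertools import product

def at_least_two(a: bool, b: bool, c: bool) -> bool:
    return (a and b) or (a and c) or (b and c)

def is_monotone(f, n: int) -> bool:
    """Flipping any single input False -> True must never flip the output
    True -> False; by transitivity this covers all coordinatewise pairs."""
    for bits in product([False, True], repeat=n):
        for i in range(n):
            if not bits[i]:
                flipped = bits[:i] + (True,) + bits[i + 1:]
                if f(*bits) and not f(*flipped):
                    return False
    return True

print(is_monotone(at_least_two, 3))  # True
```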
SAT solving, generally an NP-hard task, can be achieved efficiently when all involved functions and predicates are monotonic and Boolean.
See also
Monotone cubic interpolation
Pseudo-monotone operator
Spearman's rank correlation coefficient - measure of monotonicity in a set of data
Total monotonicity
Cyclical monotonicity
Operator monotone function
Monotone set function
Absolutely and completely monotonic functions and sequences
Notes
Bibliography
External links
Convergence of a Monotonic Sequence by Anik Debnath and Thomas Roxlo (The Harker School), Wolfram Demonstrations Project.
Functional analysis
Order theory
Real analysis
Types of functions | Monotonic function | [
"Mathematics"
] | 2,165 | [
"Functions and mappings",
"Functional analysis",
"Mathematical objects",
"Mathematical relations",
"Order theory",
"Types of functions"
] |
48,294 | https://en.wikipedia.org/wiki/International%20auxiliary%20language | An international auxiliary language (sometimes acronymized as IAL or contracted as auxlang) is a language meant for communication between people from different nations, who do not share a common first language. An auxiliary language is primarily a foreign language and often a constructed language. The concept is related to but separate from the idea of a lingua franca (or dominant language) that people must use to communicate. The study of international auxiliary languages is interlinguistics.
The term "auxiliary" implies that it is intended to be an additional language for communication between the people of the world, rather than to replace their native languages. Often, the term is used specifically to refer to planned or constructed languages proposed to ease international communication, such as Esperanto, Ido and Interlingua. It usually takes words from widely spoken languages. However, it can also refer to the concept of such a language being determined by international consensus, including even a standardized natural language (e.g., International English), and has also been connected to the project of constructing a universal language.
Languages of dominant societies over the centuries have served as lingua francas that have sometimes approached the international level. Latin, Greek, Sanskrit, Persian, Tamil, and the Mediterranean Lingua Franca were used in the past. In recent times, Standard Arabic, Standard Chinese, English, French, German, Italian, Portuguese, Russian, and Spanish have been used as such in many parts of the world. However, as lingua francas are traditionally associated with the very dominance—cultural, political, and economic—that made them popular, they are often also met with resistance. For this and other reasons, some have turned to the idea of promoting a constructed language as a possible solution, by way of an "auxiliary" language, one example of which being Esperanto.
History
The use of an intermediary auxiliary language (also called a "working language", "bridge language", "vehicular language", or "unifying language") to make communication possible between people not sharing a first language, in particular when it is a third language, distinct from both mother tongues, may be almost as old as language itself. Certainly they have existed since antiquity. Latin and Greek (or Koine Greek) were the intermediary language of all areas of the Mediterranean; Akkadian, and then Aramaic, remained the common languages of a large part of Western Asia through several earlier empires. Such natural languages used for communication between people not sharing the same mother tongue are called lingua francas.
Lingua francas (natural international languages)
Lingua francas have arisen around the globe throughout human history, sometimes for commercial reasons (so-called "trade languages") but also for diplomatic and administrative convenience, and as a means of exchanging information between scientists and other scholars of different nationalities. The term originates with one such language, Mediterranean Lingua Franca, a pidgin language used as a trade language in the Mediterranean area from the 11th to the 19th century. Examples of lingua francas remain numerous, and exist on every continent. The most obvious example as of the early 21st century is English. Moreover, a special case of English is that of Basic English, a simplified version of English which shares the same grammar (though simplified) and a reduced vocabulary of only 1,000 words, with the intention that anyone with a basic knowledge of English should be able to understand even quite complex texts.
Constructed languages
Since all natural languages display a number of irregularities in grammar that make them more difficult to learn, and they are also associated with the national and cultural dominance of the nation that speaks it as its mother tongue, attention began to focus on the idea of creating an artificial or constructed language as a possible solution. The concept of simplifying an existing language to make it an auxiliary language was already in the Encyclopédie of the 18th century, where Joachim Faiguet de Villeneuve, in the article on Langue, wrote a short proposition of a "laconic" or regularized grammar of French.
Some of the philosophical languages of the 17th–18th centuries could be regarded as proto-auxlangs, as they were intended by their creators to serve as bridges among people of different languages as well as to disambiguate and clarify thought. However, most or all of these languages were, as far as can be told from the surviving publications about them, too incomplete and unfinished to serve as auxlangs (or for any other practical purpose). The first fully developed constructed languages we know of, as well as the first constructed languages devised primarily as auxlangs, originated in the 19th century; Solresol by François Sudre, a language based on musical notes, was the first to gain widespread attention although not, apparently, fluent speakers.
Volapük
During the 19th century, a bewildering variety of such constructed international auxiliary languages (IALs) were proposed, so Louis Couturat and Léopold Leau in Histoire de la langue universelle (1903) reviewed 38 projects.
Volapük, first described in an article in 1879 by Johann Martin Schleyer and in book form the following year, was the first to garner a widespread international speaker community. Three major Volapük conventions were held, in 1884, 1887, and 1889; the last of them used Volapük as its working language.
However, not long after, the Volapük speaker community broke up due to various factors including controversies between Schleyer and other prominent Volapük speakers, and the appearance of newer, easier-to-learn constructed languages, primarily Esperanto.
Idiom Neutral and Latino sine flexione
Answering the needs of the first successful artificial language community, the Volapükists established the regulatory body of their language, under the name International Volapük Academy (Kadem bevünetik volapüka) at the second Volapük congress in Munich in August 1887. The Academy was set up to conserve and perfect the auxiliary language Volapük, but soon conflicts arose between conservative Volapükists and those who wanted to reform Volapük to make it a more naturalistic language based on the grammar and vocabulary of major world languages. In 1890 Schleyer himself left the original Academy and created a new Volapük Academy with the same name, from people completely loyal to him, which continues to this day.
Under Waldemar Rosenberger, who became the director in 1892, the original Academy began to make considerable changes in the grammar and vocabulary of Volapük. The vocabulary and the grammatical forms unfamiliar to Western Europeans were completely discarded, so that the changes effectively resulted in the creation of a new language, which was named "Idiom Neutral". The name of the Academy was changed to Akademi Internasional de Lingu Universal in 1898 and the circulars of the Academy were written in the new language from that year.
In 1903, the mathematician Giuseppe Peano published his completely new approach to language construction. Inspired by the idea of philosopher Gottfried Wilhelm Leibniz, instead of inventing schematic structures and an a priori language, he chose to simplify an existing and once widely used international language, Latin. This simplified Latin, devoid of inflections and declensions, was named Interlingua by Peano but is usually referred to as "Latino sine flexione".
Impressed by Peano's Interlingua, the Akademi Internasional de Lingu Universal effectively chose to abandon Idiom Neutral in favor of Peano's Interlingua in 1908, and it elected Peano as its director. The name of the group was subsequently changed to Academia pro Interlingua (where Interlingua stands for Peano's language). The Academia pro Interlingua survived until about 1939. It was Peano's Interlingua that partly inspired the better-known Interlingua presented in 1951 by the International Auxiliary Language Association (IALA).
Esperanto
After the emergence of Volapük, a wide variety of other auxiliary languages were devised and proposed in the 1880s–1900s, but none except Esperanto gathered a significant speaker community. Esperanto was developed from about 1873–1887 (a first version was ready in 1878), and finally published in 1887, by L. L. Zamenhof, as a primarily schematic language; the word-stems are borrowed from Romance, West Germanic and Slavic languages. The key to the relative success of Esperanto was probably the highly productive and elastic system of derivational word formation which allowed speakers to derive hundreds of other words by learning one word root. Moreover, Esperanto is quicker to learn than other languages, usually in a third up to a fifth of the time. From early on, Esperantists created their own culture which helped to form the Esperanto language community.
Within a few years this language had thousands of fluent speakers, primarily in eastern Europe. In 1905 its first world convention was held in Boulogne-sur-Mer. Since then world congresses have been held in different countries every year, except during the two World Wars. Esperanto has become "the most outlandishly successful invented language ever" and the most widely spoken constructed international auxiliary language. Esperanto is probably among the fifty languages which are most used internationally.
In 1922 a proposal by Iran and several other countries in the League of Nations to have Esperanto taught in member nations' schools failed. Esperanto speakers were subject to persecution under Stalin's regime. In Germany under Hitler, in Spain under Franco for about a decade, in Portugal under Salazar, in Romania under Ceaușescu, and in half a dozen Eastern European countries during the late forties and part of the fifties, Esperanto activities and the formation of Esperanto associations were forbidden. In spite of these factors more people continued to learn Esperanto, and significant literary work (both poetry and novels) appeared in Esperanto in the period between the World Wars and after them. Esperanto is spoken today in a growing number of countries and it has multiple generations of native speakers, although it is primarily used as a second language. Of the various constructed language projects, it is Esperanto that has so far come closest to becoming an officially recognized international auxiliary language; China publishes daily news in Esperanto.
Ido and the Esperantidos
The Delegation for the Adoption of an International Auxiliary Language was founded in 1900 by Louis Couturat and others; it tried to get the International Association of Academies to take up the question of an international auxiliary language, study the existing ones and pick one or design a new one. However, when the meta-academy declined to do so, the Delegation decided to do the job itself. Among Esperanto speakers there was a general impression that the Delegation would of course choose Esperanto, as it was the only auxlang with a sizable speaker community at the time; it was felt as a betrayal by many Esperanto speakers when in 1907 the Delegation came up with its own reformed version of Esperanto, Ido. Ido drew a significant number of speakers away from Esperanto in the short term, but in the longer term most of these either returned to Esperanto or moved on to other new auxlangs. Besides Ido, a great number of simplified Esperantos, called Esperantidos, emerged as concurrent language projects; still, Ido remains today one of the more widely spoken auxlangs.
Interlingue (Occidental)
Edgar de Wahl's Occidental of 1922 was created in reaction to the perceived artificiality of some earlier auxlangs, particularly Esperanto. Inspired by Idiom Neutral and Latino sine flexione, de Wahl created a language whose words, including compound words, would have a high degree of recognizability for those who already know a Romance language. However, this design criterion was in conflict with the ease of coining new compound or derived words on the fly while speaking. Occidental was most active from the 1920s to the 1950s, and supported some 80 publications by the 1930s, but had almost entirely died out by the 1980s. Its name was officially changed to Interlingue in 1949. More recently Interlingue has been revived on the Internet.
Novial
In 1928 Ido's major intellectual supporter, the Danish linguist Otto Jespersen, abandoned Ido and published his own planned language, Novial. It was mostly inspired by Idiom Neutral and Occidental, yet it attempted the derivational formalism and schematism sought by Esperanto and Ido. The notability of its creator helped the growth of this auxiliary language, but a reform of the language was proposed by Jespersen in 1934; not long after, Europe entered World War II, and Jespersen died in 1943, before Europe was at peace again.
Interlingua
The International Auxiliary Language Association (IALA) was founded in 1924 by Alice Vanderbilt Morris; like the earlier Delegation for the Adoption of an International Auxiliary Language, its mission was to study language problems and the existing auxlangs and proposals for auxlangs, and to negotiate some consensus between the supporters of various auxlangs. However, like the Delegation, it finally decided to create its own auxlang. Interlingua, published in 1951, was primarily the work of Alexander Gode, though he built on preliminary work by earlier IALA linguists including André Martinet, and relied on elements from previous naturalistic auxlang projects, like Peano's Interlingua (Latino sine flexione), Jespersen's Novial, de Wahl's Interlingue, and the Academy's Idiom Neutral. Like Interlingue, Interlingua was designed to have words recognizable at sight by those who already know a Romance language or a language like English with much vocabulary borrowed from Romance languages; to attain this end the IALA accepted a degree of grammatical and orthographic complexity considerably greater than in Esperanto or Interlingue, though still less than in any natural language.
The theory underlying Interlingua posits an international vocabulary, a large number of words and affixes that are present in a wide range of languages. This already existing international vocabulary was shaped by social forces, science and technology, to "all corners of the world". The goal of the International Auxiliary Language Association was to accept into Interlingua every widely international word in whatever languages it occurred. They conducted studies to identify "the most generally international vocabulary possible", while still maintaining the unity of the language. This scientific approach of generating a language from selected source languages (called control languages) resulted in a vocabulary and grammar that can be called the highest common factor of each major European language.
Interlingua gained a significant speaker community, perhaps roughly the same size as that of Ido (considerably less than the size of Esperanto). Interlingua's success can be explained by the fact that it is the most widely understood international auxiliary language by virtue of its naturalistic (as opposed to schematic) grammar and vocabulary, allowing those familiar with a Romance language, and educated speakers of English, to read and understand it without prior study. Interlingua has some active speakers currently on all continents, and the language is propagated by the Union Mundial pro Interlingua (UMI), and Interlingua is presented on CDs, radio, and television.
After the creation of Interlingua, the enthusiasm for constructed languages gradually decreased in the years between 1960 and 1990.
Internet age
All of the auxlangs with a surviving speaker community seem to have benefited from the advent of the Internet, Esperanto more than most. The CONLANG mailing list was founded in 1991; in its early years discussion focused on international auxiliary languages. As people interested in artistic languages and engineered languages grew to be the majority of the list members, and flame-wars between proponents of particular auxlangs irritated these members, a separate AUXLANG mailing list was created in 1997, which has been the primary venue for discussion of auxlangs since then. Besides giving the existing auxlangs with speaker communities a chance to interact rapidly online as well as slowly through postal mail or more rarely in personal meetings, the Internet has also made it easier to publicize new auxlang projects, and a handful of these have gained a small speaker community, including Kotava (published in 1978), Lingua Franca Nova (1998), Slovio (1999), Interslavic (2006), Pandunia (2007), Sambahsa (2007), Lingwa de Planeta (2010), and Globasa (2019).
Zonal auxiliary languages
Not every international auxiliary language is necessarily intended to be used on a global scale. A special subgroup are languages created to facilitate communication between speakers of related languages. The oldest known example is a Pan-Slavic language written in 1665 by the Croatian priest Juraj Križanić. He named this language Ruski jezik ("Russian language"), although in reality it was a mixture of the Russian edition of Church Slavonic, his own Southern Chakavian dialect of Serbo-Croatian, and, to a lesser degree, Polish.
Most zonal auxiliary languages were created during the period of romantic nationalism at the end of the 19th century; some were created later. Particularly numerous are the Pan-Slavic language projects. However, similar efforts at creating umbrella languages have been made for other language families as well: Tutonish (1902), Folkspraak (1995) and other pan-Germanic languages for the Germanic languages; Romanid (1956) and several other pan-Romance languages for the Romance languages; and Afrihili (1973) for the African continent.
Notable among modern examples is Interslavic, a project first published in 2006 as Slovianski and then established in its current form in 2011 after the merger of several other projects. In 2012 it was reported to have several hundred users.
Scholarly study
In the early 1900s auxlangs were already becoming a subject of academic study. Louis Couturat et al. described the controversy in the preface to their book International Language and Science:
The question of a so-called world-language, or better expressed, an international auxiliary language, was during the now past Volapük period, and is still in the present Esperanto movement, so much in the hands of Utopians, fanatics and enthusiasts, that it is difficult to form an unbiased opinion concerning it, although a good idea lies at its basis. (1910, p. v).
Leopold Pfaundler wrote that an IAL was needed for more effective communication among scientists:
All who are occupied with the reading or writing of scientific literature have assuredly very often felt the want of a common scientific language, and regretted the great loss of time and trouble caused by the multiplicity of languages employed in scientific literature.
For Couturat et al., Volapükists and Esperantists confounded the linguistic aspect of the question with many side issues, and they considered this a main reason why discussion about the idea of an international auxiliary language appeared impractical.
Some contemporaries of Couturat, notably Edward Sapir, saw the challenge of an auxiliary language not as much as that of identifying a descriptive linguistic answer (of grammar and vocabulary) to global communicative concerns, but rather as one of promoting the notion of a linguistic platform for lasting international understanding. Though interest among scholars, and linguists in particular, waned greatly throughout the 20th century, such differences of approach persist today. Some scholars and interested laymen make concrete language proposals. By contrast, Mario Pei and others place the broader societal issue first. Yet others argue in favor of a particular language while seeking to establish its social integration.
Writing systems
Whilst most IALs use the Latin script, some of them also offer an alternative in the Cyrillic script.
Latin script
The vast majority of IALs use the Latin script. Several sounds, e.g. /n/, /m/, /t/, /f/, are written with the same letter as in IPA.
Some consonant sounds found in several Latin-script IAL alphabets are not represented by an ISO 646 letter in IPA. Three have a single letter in IPA, and one of them has a widespread alternative taken from ISO 646:
/ʃ/ (U+0283, IPA 134)
/ʒ/ (U+0292, IPA 135)
/ɡ/ (U+0261, IPA 110, single storey g) = g (U+0067, double storey g)
Four are affricates, each represented in IPA by two letters and a combining marker. They are often written decomposed:
/t͡s/ = /ts/
/t͡ʃ/ = /tʃ/; Note: Polish distinguishes between them
/d͡z/ = /dz/
/d͡ʒ/ = /dʒ/
That means that two sounds that are one character in IPA and are not in ISO 646 also have no common alternative in ISO 646: ʃ, ʒ.
Classification
The following classification of auxiliary languages was developed by Pierre Janton in 1993:
A priori languages are characterized by largely artificial morphemes (not borrowed from natural languages), schematic derivation, simple phonology, grammar, and morphology. Some a priori languages are called philosophical languages, referring to their basis in philosophical ideas about thought and language. These include some of the earliest efforts at auxiliary language in the 17th century. Some more specific subcategories:
Taxonomic languages form their words using a taxonomic hierarchy, with each phoneme of a word helping specify its position in a semantic hierarchy of some kind; for example, Solresol.
Pasigraphies are purely written languages without a spoken form, or with a spoken form left at the discretion of the reader; many of the 17th–18th century philosophical languages and auxlangs were pasigraphies. This set historically tends to overlap with taxonomic languages, though there is no inherent reason a pasigraphy needs to be taxonomic.
A posteriori languages are based on existing natural languages. Nearly all the auxiliary languages with fluent speakers are in this category. Most of the a posteriori auxiliary languages borrow their vocabulary primarily or solely from European languages, and base their grammar more or less on European models. (Sometimes these European-based languages are referred to as "euroclones", although this term has negative connotations and is not used in the academic literature.) Interlingua was drawn originally from international scientific vocabulary, in turn based primarily on Greek and Latin roots. Glosa did likewise, with a stronger dependence on Greek roots. Although a posteriori languages have been based on most of the families of European languages, the most successful of these (notably Esperanto, Ido, and Interlingua) have been based largely on Romance elements.
Schematic (or "mixed") languages have some a priori qualities. Some have ethnic morphemes but alter them significantly to fit a simplified phonotactic pattern (e.g., Volapük) or both artificial and natural morphemes (e.g., Perio). Partly schematic languages have partly schematic and partly naturalistic derivation (e.g. Esperanto and Ido). Natural morphemes of languages in this group are rarely altered greatly from their source-language form, but compound and derived words are generally not recognizable at sight by people familiar with the source languages.
Naturalistic languages resemble existing natural languages. For example, Interlingue, Interlingua, and Lingua Franca Nova were developed so that not only the root words but their compounds and derivations will often be immediately recognized by large numbers of people. Some naturalistic languages do have a limited number of artificial morphemes or invented grammatical devices (e.g. Novial).
Simplified, or controlled versions of natural languages reduce the full extent of the vocabulary and partially regularize the grammar of a natural language (e.g. Basic English and Special English).
Comparison of sample texts
Some examples of the best known international auxiliary languages are shown below for comparative purposes, using the Lord's Prayer (a core Christian prayer, the translated text of which is regularly used for linguistic comparisons).
As a reference for comparison, one can find the Latin, English, French, and Spanish versions here:
Natural languages
Schematic languages
Naturalistic languages
Other examples
Methods of propagation
As has been pointed out, the issue of an international language is not so much which, but how. Several approaches exist toward the eventual full expansion and consolidation of an international auxiliary language.
Laissez-faire. This approach is taken in the belief that one language will eventually and inevitably "win out" as a world auxiliary language (e.g. International English) without any need for specific action.
Institutional sponsorship and grass-roots promotion of language programs. This approach has taken various forms, depending on the language and language type, ranging from government promotion of a particular language to one-on-one encouragement to learn the language to instructional or marketing programs.
National legislation. This approach seeks to have individual countries (or even localities) progressively endorse a given language as an official language (or to promote the concept of international legislation).
International legislation. This approach involves promotion of the future holding of a binding international convention (perhaps to be under the auspices of such international organizations as the United Nations or Inter-Parliamentary Union) to formally agree upon an official international auxiliary language which would then be taught in all schools around the world, beginning at the primary level. This approach, an official principle of the Baháʼí Faith, seeks to put a combination of international opinion, linguistic expertise, and law behind a to-be-selected language and thus expand or consolidate it as a full official world language, to be used in addition to local languages. This approach could either give more credibility to a natural language already serving this purpose to a certain degree (e.g. if English were chosen) or to give a greatly enhanced chance for a constructed language to take root. For constructed languages particularly, this approach has been seen by various individuals in the IAL movement as holding the most promise of ensuring that promotion of studies in the language would not be met with skepticism at its practicality by its would-be learners.
Pictorial languages
In the Sinosphere, Literary Chinese was a written lingua franca for bureaucracy and communications. Literary Chinese writing facilitated communication between speakers of Japanese, Korean, Vietnamese, and different Chinese dialects.
There have been a number of proposals for using pictures, ideograms, diagrams, and other pictorial representations for international communications. Examples range from the original Characteristica Universalis proposed by the philosopher Leibniz in the 17th century, to suggestions for the adoption of Chinese writing, to more recent inventions such as Blissymbols, first published in 1949.
Within the scientific community, there is already considerable agreement in the form of the schematics used to represent electronic circuits, chemical symbols, mathematical symbols, and the Energy Systems Language of systems ecology. There are also international efforts to regularize the symbols used to regulate traffic, to indicate resources for tourists, and in maps. Some symbols have become nearly universal through their consistent use in computers and on the Internet.
Sign languages
An international auxiliary sign language has been developed by deaf people who meet regularly at international forums such as sporting events or political organisations. Previously referred to as Gestuno but now more commonly known simply as International Sign, the language has continued to develop since the first signs were standardised in 1973, and it is now in widespread use. International Sign is distinct in many ways from spoken IALs; many signs are iconic, and signers tend to insert these signs into the grammar of their own sign language, with an emphasis on visually intuitive gestures and mime. An earlier example of a simple sign lingua franca is Plains Indian Sign Language, used by indigenous peoples of the Americas.
Gestuno is not to be confused with the separate and unrelated sign language Signuno, which is essentially Signed Exact Esperanto. Signuno is not in any significant use, and is based on the Esperanto community rather than on the international Deaf community.
Criticism
There has been considerable criticism of international auxiliary languages, both in terms of individual proposals, types of proposals, and in more general terms.
Much criticism has been focused either on the artificiality of international auxiliary languages, or on the argumentativeness of proponents and their failure to agree on one language, or even on objective criteria by which to judge them. However, probably the most common criticism is that a constructed auxlang is unnecessary because natural languages such as English are already in wide use as auxlangs.
One criticism already prevalent in the late 19th century, and still sometimes heard today, is that an international language might hasten the extinction of minority languages.
Although referred to as international languages, most of these languages have historically been constructed on the basis of Western European languages. Esperanto and other languages such as Interlingua and Ido have been criticized for being too European and not global enough. The term "Euroclone" was coined to refer to such languages in contrast to "worldlangs" with global vocabulary sources.
See also
See the list of constructed languages for designed international auxiliary languages.
Interlinguistics
International Language Review
Language education
Language planning
Lingua franca
Living Latin
Pidgin
Baháʼí Faith and auxiliary language
Zonal constructed languages
Global language system
Universal language
External links
Proposed Guidelines for the Design of an Optimal International Auxiliary Language, an article written by Richard K. Harrison.
The Function of an International Auxiliary Language, an article written by linguist Edward Sapir discussing the need for and prospects of an international language.
Farewell to auxiliary languages, a criticism of the auxiliary language movement by Richard K. Harrison.
Thoughts on IAL Success, an essay by Paul O. Bartlett
OneTongue.com, a project for promoting a world auxiliary language.
Constructed languages
Interlinguistics
Human communication
Communalism
Multilingualism
Utopian movements | International auxiliary language | ["Biology"] | 6,836 | ["Human communication", "Behavior", "Human behavior"] |
48,329 | https://en.wikipedia.org/wiki/Celestial%20pole | The north and south celestial poles are the two points in the sky where Earth's axis of rotation, indefinitely extended, intersects the celestial sphere. The north and south celestial poles appear permanently directly overhead to observers at Earth's North Pole and South Pole, respectively. As Earth spins on its axis, the two celestial poles remain fixed in the sky, and all other celestial points appear to rotate around them, completing one circuit per day (strictly, per sidereal day).
The celestial poles are also the poles of the celestial equatorial coordinate system, meaning they have declinations of +90 degrees and −90 degrees (for the north and south celestial poles, respectively). Despite their apparently fixed positions, the celestial poles in the long term do not actually remain permanently fixed against the background of the stars. Because of a phenomenon known as the precession of the equinoxes, the poles trace out circles on the celestial sphere, with a period of about 25,700 years. The Earth's axis is also subject to other complex motions which cause the celestial poles to shift slightly over cycles of varying lengths (see nutation, polar motion and axial tilt). Finally, over very long periods the positions of the stars themselves change, because of the stars' proper motions. To take into account such movement, celestial pole definitions come with an epoch to specify the date of the rotation axis; J2000.0 is the current standard.
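The quoted period implies a slow but steady drift that is easy to check with one line of arithmetic. A minimal sketch in Python (only the ~25,700-year figure above is used, so the result is approximate by construction):

    # Average rate at which precession carries the celestial poles
    # around their circle, given the ~25,700-year period quoted above.
    PERIOD_YEARS = 25_700
    drift_arcsec_per_year = 360 * 3600 / PERIOD_YEARS  # full circle, in arcseconds
    print(f"{drift_arcsec_per_year:.1f} arcsec/year")  # ~50.4 arcsec/year

About 50 arcseconds per year is imperceptible from night to night, which is why the poles appear fixed on human timescales yet wander noticeably over millennia.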
An analogous concept applies to other planets: a planet's celestial poles are the points in the sky where the projection of the planet's axis of rotation intersects the celestial sphere. These points vary because different planets' axes are oriented differently (the apparent positions of the stars also change slightly because of parallax effects).
Finding the north celestial pole
The north celestial pole currently is within one degree of the bright star Polaris (named from the Latin stella polaris, meaning "pole star"). This makes Polaris, colloquially known as the "North Star", useful for navigation in the Northern Hemisphere: not only is it always above the north point of the horizon, but its altitude angle is always (nearly) equal to the observer's geographic latitude (though it can, of course, only be seen from locations in the Northern Hemisphere).
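Since the pole's altitude equals the observer's latitude, and Polaris currently sits within about a degree of the pole, its altitude can be bracketed with a short calculation. A minimal sketch (the declination used for Polaris is approximate, and atmospheric refraction is ignored):

    def polaris_altitude_range(latitude_deg, polaris_dec_deg=89.3):
        """Min/max altitude of Polaris for a northern observer: the star
        circles the pole at an angular radius of 90 - declination."""
        radius = 90.0 - polaris_dec_deg
        return latitude_deg - radius, latitude_deg + radius

    print(polaris_altitude_range(51.5))  # e.g. London: about (50.8, 52.2) degrees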
Polaris is near the north celestial pole for only a small fraction of the 25,700-year precession cycle. It will remain a good approximation for about 1,000 years, by which time the pole will have moved closer to Alrai (Gamma Cephei). In about 5,500 years, the pole will have moved near the position of the star Alderamin (Alpha Cephei), and in 12,000 years, Vega (Alpha Lyrae) will become the "North Star", though it will be about six degrees from the true north celestial pole.
To find Polaris, from a point in the Northern Hemisphere, face north and locate the Big Dipper (Plough) and Little Dipper asterisms. Looking at the "cup" part of the Big Dipper, imagine that the two stars at the outside edge of the cup form a line pointing upward out of the cup. This line points directly at the star at the tip of the Little Dipper's handle. That star is Polaris, the North Star.
Finding the south celestial pole
The south celestial pole is visible only from the Southern Hemisphere. It lies in the dim constellation Octans, the Octant. Sigma Octantis is identified as the south pole star; it lies a little more than one degree away from the pole, and with a magnitude of 5.5 it is barely visible on a clear night.
Method one: The Southern Cross
The south celestial pole can be located from the Southern Cross (Crux) and its two "pointer" stars α Centauri and β Centauri. Draw an imaginary line from γ Crucis to α Crucis—the two stars at the extreme ends of the long axis of the cross—and follow this line through the sky. Either go four-and-a-half times the distance of the long axis in the direction the narrow end of the cross points, or join the two pointer stars with a line, divide this line in half, then at right angles draw another imaginary line through the sky until it meets the line from the Southern Cross. This point is 5 or 6 degrees from the south celestial pole. Very few bright stars of importance lie between Crux and the pole itself, although the constellation Musca is fairly easily recognised immediately beneath Crux.
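"Extend the long axis" can be made concrete by treating the two stars as unit vectors and extrapolating along their great circle with spherical linear interpolation (slerp). The sketch below is illustrative only: the coordinates are approximate J2000 values, and the extrapolation factor of 5.5 (the Gacrux-to-Acrux separation counted a further 4.5 times beyond Acrux) is one reading of the rule above.

    import numpy as np

    def unit(ra_deg, dec_deg):
        """Unit vector for a sky position given in degrees."""
        ra, dec = np.radians(ra_deg), np.radians(dec_deg)
        return np.array([np.cos(dec) * np.cos(ra),
                         np.cos(dec) * np.sin(ra),
                         np.sin(dec)])

    def slerp(a, b, t):
        """Point at fraction t along the great circle from a (t=0) to b (t=1);
        values of t above 1 extrapolate past b."""
        theta = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
        return (np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)

    gacrux = unit(187.79, -57.11)  # gamma Crucis, approximate
    acrux = unit(186.65, -63.10)   # alpha Crucis, approximate
    p = slerp(gacrux, acrux, 5.5)  # 4.5 cross-lengths beyond Acrux
    print(np.degrees(np.arcsin(p[2])))  # declination about -87

With these rough inputs the extrapolated point lands about three degrees from the true pole, consistent with the accuracy claimed above.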
Method two: Canopus and Achernar
The second method uses Canopus (the second-brightest star in the sky) and Achernar. Make a large equilateral triangle using these stars for two of the corners. The third corner could lie on either side of the line connecting Achernar and Canopus, and only one side leads to the pole. To find the correct side, imagine that Achernar and Canopus are both points on the circumference of a circle; the third corner of the equilateral triangle will also be on this circle, placed clockwise from Achernar and anticlockwise from Canopus. That third corner is the south celestial pole. If the triangle is drawn on the other side, the point lands in the middle of Eridanus, far from the pole. If Canopus has not yet risen, the second-magnitude Alpha Pavonis can also be used to form the triangle with Achernar and the pole; in this case, go anticlockwise from Achernar instead of clockwise, and the third point of the triangle is the pole. Going the wrong way leads to Aquarius, which is very far from the celestial pole.
Method three: The Magellanic Clouds
The third method is best for moonless and clear nights, as it uses two faint "clouds" in the southern sky. These are marked in astronomy books as the Large and Small Magellanic Clouds (the LMC and the SMC); they are in fact dwarf galaxies near the Milky Way. As before, the SMC, the LMC, and the pole all lie on an equilateral triangle inscribed in an imaginary circle: the pole sits clockwise from the SMC and anticlockwise from the LMC. Going in the wrong direction lands in the constellation Horologium instead.
Method four: Sirius and Canopus
A line from Sirius, the brightest star in the sky, through Canopus, the second-brightest, continued for the same distance lands within a couple of degrees of the pole. In other words, Canopus is halfway between Sirius and the pole.
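Method four is the same extrapolation with a factor of two, so the helpers from the sketch under method one can be reused (coordinates again approximate J2000; with such rough star positions the estimate comes out somewhat coarser than the text suggests):

    sirius = unit(101.29, -16.72)
    canopus = unit(95.99, -52.70)
    q = slerp(sirius, canopus, 2.0)  # Canopus is the halfway point (t = 1)
    print(np.degrees(np.arcsin(q[2])))  # about -85: several degrees from the pole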
See also
Celestial sphere
Celestial equator
Circumpolar star
Orbital pole
Polaris
Pole star
Poles of astronomical bodies
External links
Visual representation of finding Polaris using the Big Dipper
Pole
Articles containing video clips
Ursa Minor
Octans | Celestial pole | ["Astronomy", "Mathematics"] | 1,461 | ["Constellations", "Astronomical coordinate systems", "Coordinate systems", "Ursa Minor", "Octans"] |
48,331 | https://en.wikipedia.org/wiki/Measure%20word | In linguistics, measure words are words (or morphemes) that are used in combination with a numeral to indicate an amount of something represented by some noun. Many languages use measure words, and East Asian languages such as Chinese, Japanese, and Korean use them very extensively in the form of number classifiers.
Description
Measure words denote a unit of measurement and are used with mass nouns (uncountable nouns), and in some cases also with count nouns. For instance, in English, mud is a mass noun and thus one cannot say "three muds", but one can say "three drops of mud", "three pails of mud", etc. In these examples, drops and pails function as measure words. One can also say "three pails of shells"; in this case the measure word pails accompanies a count noun (shells).
The term measure word is also sometimes used to refer to numeral classifiers, which are used with count nouns in some languages. For instance, in English no extra word is needed when saying "three people", but in many East Asian languages a numeral classifier is added, just as a measure word is added for uncountable nouns in English. For example, Mandarin renders "three people" as sān ge rén (三个人), literally "three [classifier] person".
There are numerous Chinese measure words, and nouns differ in which measure words they can take. While many linguists maintain a distinction between measure words and numeral classifiers, the terms are sometimes used interchangeably. For instance, materials for teaching Chinese as a second language generally refer to Chinese classifiers as "measure words". The corresponding Chinese term is liàngcí (量词), which can be directly translated as "quantity word".
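The pairing of nouns with their measure words behaves like a lookup table, which suggests a compact way to picture it. A toy sketch (the classifier pairings are standard Mandarin examples, but the function, romanized spellings, and default are invented here purely for illustration):

    # Hypothetical classifier lookup for a few Mandarin nouns.
    CLASSIFIERS = {
        "shu (book)": "ben (本)",        # classifier for bound volumes
        "mao (cat)": "zhi (只)",         # classifier for many animals
        "zhuozi (table)": "zhang (张)",  # classifier for flat objects
    }

    def with_classifier(numeral, noun):
        """Insert the measure word between numeral and noun, as Mandarin requires."""
        classifier = CLASSIFIERS.get(noun, "ge (个)")  # generic default classifier
        return f"{numeral} {classifier} {noun}"

    print(with_classifier("san (three)", "shu (book)"))  # i.e. "three books"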
Most measure words in English correspond to units of measurement or containers, and are themselves count nouns rather than grammatical particles:
one quart of water
three cups of coffee
four kernels of corn, three ears of corn, two bushels of corn
Though similar in construction, fractions are not measure words. For example, in "seven-eighths of an apple" the fraction acts as a noun. Compare that to "seven slices of apple" where "apple" is a mass noun and does not require the article "an". Combining the two, e.g. "seven-eighths of a slice of apple", makes it clear the fraction must be a noun referring to a part of another countable noun.
In many languages, including the East Asian languages referred to above, the analogous constructions do not include any equivalent of the English of. In German, for example, ein Glas Bier means "a glass [of] beer"; this is notable because German and English are both West Germanic languages and thus closely related. By contrast, an equivalent of the English of is usual in Romance languages, as in French un verre de bière or Spanish un vaso de cerveza ("a glass of beer").
Classifiers versus measure words
Classifiers play a similar role to measure words, except that measure words denote a particular quantity of something (a drop, a cupful, a pint, etc.), rather than the inherent countable units associated with a count noun. Classifiers are used with count nouns; measure words can be used with mass nouns (e.g. "two pints of mud"), and can also be used when a count noun's quantity is not described in terms of its inherent countable units (e.g. "two pints of acorns").
However, the terminological distinction between classifiers and measure words is often blurred – classifiers are commonly referred to as measure words in some contexts, such as Chinese language teaching, and measure words are sometimes called mass-classifiers or similar.
See also
Collective noun
Count noun
List of collective nouns
Parts of speech | Measure word | ["Technology"] | 762 | ["Parts of speech", "Components"] |
48,336 | https://en.wikipedia.org/wiki/Electrolyte | An electrolyte is a substance that conducts electricity through the movement of ions, but not through the movement of electrons. This includes most soluble salts, acids, and bases, dissolved in a polar solvent like water. Upon dissolving, the substance separates into cations and anions, which disperse uniformly throughout the solvent. Solid-state electrolytes also exist. In medicine and sometimes in chemistry, the term electrolyte refers to the substance that is dissolved.
Electrically, such a solution is neutral. If an electric potential is applied to such a solution, the cations of the solution are drawn to the electrode that has an abundance of electrons, while the anions are drawn to the electrode that has a deficit of electrons. The movement of anions and cations in opposite directions within the solution amounts to a current. Some gases, such as hydrogen chloride (HCl), under conditions of high temperature or low pressure can also function as electrolytes. Electrolyte solutions can also result from the dissolution of some biological (e.g., DNA, polypeptides) or synthetic polymers (e.g., polystyrene sulfonate), termed "polyelectrolytes", which contain charged functional groups. A substance that dissociates into ions in solution or in the melt acquires the capacity to conduct electricity. Sodium, potassium, chloride, calcium, magnesium, and phosphate in a liquid phase are examples of electrolytes.
In medicine, electrolyte replacement is needed when a person has prolonged vomiting or diarrhea, and as a response to sweating due to strenuous athletic activity. Commercial electrolyte solutions are available, particularly for sick children (such as oral rehydration solution, Suero Oral, or Pedialyte) and athletes (sports drinks). Electrolyte monitoring is important in the treatment of anorexia and bulimia.
In science, electrolytes are one of the main components of electrochemical cells.
In clinical medicine, mentions of electrolytes usually refer metonymically to the ions, and (especially) to their concentrations (in blood, serum, urine, or other fluids). Thus, mentions of electrolyte levels usually refer to the various ion concentrations, not to the fluid volumes.
Etymology
The word electrolyte derives from Ancient Greek ήλεκτρο- (ēlectro-), prefix originally meaning amber but in modern contexts related to electricity, and λυτός (lytos), meaning "able to be untied or loosened".
History
In his 1884 dissertation, Svante Arrhenius put forth his explanation of solid crystalline salts dissociating into paired charged particles when dissolved, for which he won the 1903 Nobel Prize in Chemistry. Arrhenius's explanation was that in forming a solution, the salt dissociates into charged particles, to which Michael Faraday (1791–1867) had given the name "ions" many years earlier. Faraday's belief had been that ions were produced in the process of electrolysis. Arrhenius proposed that, even in the absence of an electric current, solutions of salts contained ions. He thus proposed that chemical reactions in solution were reactions between ions.
Shortly after Arrhenius's hypothesis of ions, Franz Hofmeister and Siegmund Lewith found that different ion types displayed different effects on such things as the solubility of proteins. A consistent ordering of these different ions on the magnitude of their effect arises consistently in many other systems as well. This has since become known as the Hofmeister series.
While the origins of these effects are not abundantly clear and have been debated throughout the past century, it has been suggested that the charge density of these ions is important and might actually have explanations originating from the work of Charles-Augustin de Coulomb over 200 years ago.
Formation
Electrolyte solutions are normally formed when salt is placed into a solvent such as water and the individual components dissociate due to the thermodynamic interactions between solvent and solute molecules, in a process called "solvation". For example, when table salt (sodium chloride), NaCl, is placed in water, the salt (a solid) dissolves into its component ions, according to the dissociation reaction:
NaCl(s) → Na+(aq) + Cl−(aq)
It is also possible for substances to react with water, producing ions. For example, carbon dioxide gas dissolves in water to produce a solution that contains hydronium, carbonate, and hydrogen carbonate ions.
Molten salts can also be electrolytes as, for example, when sodium chloride is molten, the liquid conducts electricity. In particular, ionic liquids, which are molten salts with melting points below 100 °C, are a type of highly conductive non-aqueous electrolytes and thus have found more and more applications in fuel cells and batteries.
An electrolyte in a solution may be described as "concentrated" if it has a high concentration of ions, or "dilute" if it has a low concentration. If a high proportion of the solute dissociates to form free ions, the electrolyte is strong; if most of the solute does not dissociate, the electrolyte is weak. The properties of electrolytes may be exploited using electrolysis to extract constituent elements and compounds contained within the solution.
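The strong/weak distinction can be made quantitative through the degree of dissociation. For a weak electrolyte, Ostwald's dilution law relates the fraction dissociated to the dissociation constant and the concentration; the sketch below solves it exactly (the Ka shown is the usual textbook value for acetic acid):

    from math import sqrt

    def degree_of_dissociation(ka, conc_molar):
        """Fraction alpha of a weak electrolyte dissociated at concentration C,
        from Ka = alpha^2 * C / (1 - alpha), i.e. C*a^2 + Ka*a - Ka = 0."""
        return (-ka + sqrt(ka * ka + 4 * conc_molar * ka)) / (2 * conc_molar)

    print(degree_of_dissociation(1.8e-5, 0.1))  # acetic acid, 0.1 M: ~0.013 (1.3%)

A strong electrolyte such as sodium chloride, by contrast, dissociates essentially completely (alpha near 1) at ordinary concentrations.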
Alkaline earth metals form hydroxides that are strong electrolytes but have limited solubility in water, due to the strong attraction between their constituent ions. This limited solubility restricts their use in applications where high solubility is required.
In 2021, researchers found that electrolytes can "substantially facilitate electrochemical corrosion studies in less conductive media".
Physiological importance
In physiology, the primary ions of electrolytes are sodium (Na+), potassium (K+), calcium (Ca2+), magnesium (Mg2+), chloride (Cl−), hydrogen phosphate (HPO42−), and hydrogen carbonate (HCO3−). The electric charge symbols of plus (+) and minus (−) indicate that the substance is ionic in nature and has an imbalanced distribution of electrons, the result of chemical dissociation. Sodium is the main electrolyte found in extracellular fluid and potassium is the main intracellular electrolyte; both are involved in fluid balance and blood pressure control.
All known multicellular lifeforms require a subtle and complex electrolyte balance between the intracellular and extracellular environments. In particular, the maintenance of precise osmotic gradients of electrolytes is important. Such gradients affect and regulate the hydration of the body as well as blood pH, and are critical for nerve and muscle function. Various mechanisms exist in living species that keep the concentrations of different electrolytes under tight control.
Both muscle tissue and neurons are considered electric tissues of the body. Muscles and neurons are activated by electrolyte activity between the extracellular fluid or interstitial fluid, and intracellular fluid. Electrolytes may enter or leave the cell membrane through specialized protein structures embedded in the plasma membrane called "ion channels". For example, muscle contraction is dependent upon the presence of calcium (Ca2+), sodium (Na+), and potassium (K+). Without sufficient levels of these key electrolytes, muscle weakness or severe muscle contractions may occur.
Electrolyte balance is maintained by oral, or in emergencies, intravenous (IV) intake of electrolyte-containing substances, and is regulated by hormones, in general with the kidneys flushing out excess levels. In humans, electrolyte homeostasis is regulated by hormones such as antidiuretic hormones, aldosterone and parathyroid hormones. Serious electrolyte disturbances, such as dehydration and overhydration, may lead to cardiac and neurological complications and, unless they are rapidly resolved, will result in a medical emergency.
Measurement
Measurement of electrolytes is a commonly performed diagnostic procedure, performed via blood testing with ion-selective electrodes or urinalysis by medical technologists. The interpretation of these values is somewhat meaningless without analysis of the clinical history and is often impossible without parallel measurements of renal function. The electrolytes measured most often are sodium and potassium. Chloride levels are rarely measured except for arterial blood gas interpretations since they are inherently linked to sodium levels. One important test conducted on urine is the specific gravity test to determine the occurrence of an electrolyte imbalance.
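Ion-selective electrodes convert an ion activity into a measurable voltage via the Nernst equation, so the expected electrode response is simple to compute. A minimal sketch (idealized: activity is treated as interchangeable with concentration, and body temperature is assumed for T):

    from math import log

    R, F = 8.314, 96485.0  # gas constant, J/(mol K); Faraday constant, C/mol

    def nernst_slope_mV(temp_K=310.0, z=1):
        """Ideal electrode response per tenfold change in ion activity."""
        return 1000 * (R * temp_K / (z * F)) * log(10)

    print(f"{nernst_slope_mV():.1f} mV per decade")  # ~61.5 mV for Na+ or K+ at 37 C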
Rehydration
According to a study paid for by the Gatorade Sports Science Institute, electrolyte drinks containing sodium and potassium salts replenish the body's water and electrolyte concentrations after dehydration caused by exercise, excessive alcohol consumption, diaphoresis (heavy sweating), diarrhea, vomiting, intoxication or starvation; the study says that athletes exercising in extreme conditions (for three or more hours continuously, e.g. a marathon or triathlon) who do not consume electrolytes risk dehydration (or hyponatremia).
A home-made electrolyte drink can be made by using water, sugar and salt in precise proportions. It is important to include glucose (sugar) to utilise the co-transport mechanism of sodium and glucose. Commercial preparations are also available for both human and veterinary use.
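As an illustration of why the proportions matter, the sketch below uses figures matching the WHO reduced-osmolarity oral rehydration salts recipe (the gram values are stated here as an assumption, not taken from the sources above); the point is that sodium and glucose come out near the 1:1 molar ratio that the co-transport mechanism exploits:

    # Assumed WHO reduced-osmolarity ORS recipe, grams per litre of water.
    grams = {"NaCl": 2.6, "glucose": 13.5, "trisodium citrate": 2.9}
    molar_mass = {"NaCl": 58.44, "glucose": 180.16, "trisodium citrate": 294.1}

    sodium_mmol = 1000 * (grams["NaCl"] / molar_mass["NaCl"]
                          + 3 * grams["trisodium citrate"] / molar_mass["trisodium citrate"])
    glucose_mmol = 1000 * grams["glucose"] / molar_mass["glucose"]
    print(round(sodium_mmol), round(glucose_mmol))  # roughly 74 and 75 mmol/L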
Electrolytes are commonly found in fruit juices, sports drinks, milk, nuts, and many fruits and vegetables (whole or in juice form) (e.g., potatoes, avocados).
Electrochemistry
When electrodes are placed in an electrolyte and a voltage is applied, the electrolyte will conduct electricity. Lone electrons normally cannot pass through the electrolyte; instead, a chemical reaction occurs at the cathode, providing electrons to the electrolyte. Another reaction occurs at the anode, consuming electrons from the electrolyte. As a result, a negative charge cloud develops in the electrolyte around the cathode, and a positive charge develops around the anode. The ions in the electrolyte neutralize these charges, enabling the electrons to keep flowing and the reactions to continue.
For example, in a solution of ordinary table salt (sodium chloride, NaCl) in water, the cathode reaction will be
2 H2O + 2e− → 2 OH− + H2
and hydrogen gas will bubble up; the anode reaction is
2 Cl− → Cl2 + 2e−
and chlorine gas will be liberated into solution, where it reacts with the sodium and hydroxide ions to produce sodium hypochlorite, the active ingredient of household bleach. The positively charged sodium ions Na+ will migrate toward the cathode, neutralizing the negative charge of OH− there, and the negatively charged hydroxide ions OH− will migrate toward the anode, neutralizing the positive charge of Na+ there. Without the ions from the electrolyte, the charges around the electrodes would slow down continued electron flow; diffusion of H+ and OH− through water to the other electrode takes longer than movement of the much more prevalent salt ions.
Electrolytes dissociate in water because water molecules are dipoles and the dipoles orient in an energetically favorable manner to solvate the ions.
In other systems, the electrode reactions can involve the metals of the electrodes as well as the ions of the electrolyte.
Electrolytic conductors are used in electronic devices where the chemical reaction at a metal-electrolyte interface yields useful effects.
In batteries, two materials with different electron affinities are used as electrodes; electrons flow from one electrode to the other outside of the battery, while inside the battery the circuit is closed by the electrolyte's ions. Here, the electrode reactions convert chemical energy to electrical energy.
In some fuel cells, a solid electrolyte or proton conductor connects the plates electrically while keeping the hydrogen and oxygen fuel gases separated.
In electroplating tanks, the electrolyte simultaneously deposits metal onto the object to be plated, and electrically connects that object in the circuit (see the Faraday's-law sketch after this list).
In operation-hours gauges, two thin columns of mercury are separated by a small electrolyte-filled gap, and, as charge is passed through the device, the metal dissolves on one side and plates out on the other, causing the visible gap to slowly move along.
In electrolytic capacitors the chemical effect is used to produce an extremely thin dielectric or insulating coating, while the electrolyte layer behaves as one capacitor plate.
In some hygrometers the humidity of air is sensed by measuring the conductivity of a nearly dry electrolyte.
Hot, softened glass is an electrolytic conductor, and some glass manufacturers keep the glass molten by passing a large current through it.
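Several of the applications above, electroplating and the operation-hours gauge in particular, rest on Faraday's law of electrolysis: the mass converted at an electrode is proportional to the charge passed. A minimal sketch of the arithmetic (the plating scenario is invented for illustration):

    def mass_deposited_g(current_A, time_s, molar_mass_g, z):
        """Faraday's law: m = M * Q / (z * F), with F = 96485 C/mol."""
        F = 96485.0
        return molar_mass_g * current_A * time_s / (z * F)

    # Copper plating (Cu2+ + 2e- -> Cu, so z = 2) at 1.5 A for one hour:
    print(f"{mass_deposited_g(1.5, 3600, 63.55, 2):.2f} g")  # ~1.78 g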
Solid electrolytes
Solid electrolytes can be divided into the four groups described below.
Gel electrolytes
Gel electrolytes – closely resemble liquid electrolytes. In essence, they are liquids in a flexible lattice framework. Various additives are often applied to increase the conductivity of such systems.
Ceramic electrolytes
Solid ceramic electrolytes – ions migrate through the ceramic phase by means of vacancies or interstitials within the lattice. There are also glassy-ceramic electrolytes.
Polymer electrolytes
Dry polymer electrolytes differ from liquid and gel electrolytes in that salt is dissolved directly into the solid medium. Usually it is a relatively high-dielectric constant polymer (PEO, PMMA, PAN, polyphosphazenes, siloxanes, etc.) and a salt with low lattice energy. In order to increase the mechanical strength and conductivity of such electrolytes, very often composites are made, and inert ceramic phase is introduced. There are two major classes of such electrolytes: polymer-in-ceramic, and ceramic-in-polymer.
Organic plastic electrolytes
Organic ionic plastic crystals – a type of organic salt exhibiting mesophases (i.e., a state of matter intermediate between liquid and solid), in which mobile ions are orientationally or rotationally disordered while their centers are located at ordered sites in the crystal structure. They have various forms of disorder due to one or more solid–solid phase transitions below the melting point and therefore have plastic properties and good mechanical flexibility, as well as improved electrode-electrolyte interfacial contact. In particular, protic organic ionic plastic crystals (POIPCs), which are solid protic organic salts formed by proton transfer from a Brønsted acid to a Brønsted base and are in essence protic ionic liquids in the molten state, have been found to be promising solid-state proton conductors for fuel cells. Examples include 1,2,4-triazolium perfluorobutanesulfonate and imidazolium methanesulfonate.
See also
Electrochemical machining
Elektrolytdatenbank Regensburg
Ion transport number
ITIES (interface between two immiscible electrolyte solutions)
Salt bridge
Strong electrolyte
Supporting electrolyte (background electrolyte)
VTPR
Blood tests
Urine tests
Physical chemistry
Acid–base physiology | Electrolyte | ["Physics", "Chemistry"] | 3,165 | ["Blood tests", "Acid–base physiology", "Applied and interdisciplinary physics", "Electrolytes", "Electrochemistry", "nan", "Chemical pathology", "Physical chemistry"] |
48,340 | https://en.wikipedia.org/wiki/Pesticide | Pesticides are substances that are used to control pests. They include herbicides, insecticides, nematicides, fungicides, and many others (see table). The most common of these are herbicides, which account for approximately 50% of all pesticide use globally. Most pesticides are used as plant protection products (also known as crop protection products), which in general protect plants from weeds, fungi, or insects. In general, a pesticide is a chemical or biological agent (such as a virus, bacterium, or fungus) that deters, incapacitates, kills, or otherwise discourages pests. Target pests can include insects, plant pathogens, weeds, molluscs, birds, mammals, fish, nematodes (roundworms), and microbes that destroy property, cause nuisance, or spread disease, or are disease vectors. Along with these benefits, pesticides also have drawbacks, such as potential toxicity to humans and other species.
Definition
The word pesticide derives from the Latin pestis (plague) and caedere (kill).
The Food and Agriculture Organization (FAO) has defined pesticide as:
any substance or mixture of substances intended for preventing, destroying, or controlling any pest, including vectors of human or animal disease, unwanted species of plants or animals, causing harm during or otherwise interfering with the production, processing, storage, transport, or marketing of food, agricultural commodities, wood and wood products or animal feedstuffs, or substances that may be administered to animals for the control of insects, arachnids, or other pests in or on their bodies. The term includes substances intended for use as a plant growth regulator, defoliant, desiccant, or agent for thinning fruit or preventing the premature fall of fruit. Also used as substances applied to crops either before or after harvest to protect the commodity from deterioration during storage and transport.
Classifications
Pesticides can be classified by target organism (e.g., herbicides, insecticides, fungicides, rodenticides, and pediculicides – see table).
Biopesticides, according to the EPA, include microbial pesticides, biochemical pesticides, and plant-incorporated protectants.
Pesticides can be classified into structural classes, with many structural classes developed for each of the target organisms listed in the table. A structural class is usually associated with a single mode of action, whereas a mode of action may encompass more than one structural class.
The pesticidal chemical (active ingredient) is mixed (formulated) with other components to form the product that is sold, and which is applied in various ways. Pesticides in gas form are fumigants.
Pesticides can be classified based upon their mode of action, which indicates the exact biological mechanism which the pesticide disrupts. The modes of action are important for resistance management, and are categorized and administered by the insecticide, herbicide, and fungicide resistance action committees.
Pesticides may be systemic or non-systemic. A systemic pesticide moves (translocates) inside the plant. Translocation may be upward in the xylem, or downward in the phloem or both. Non-systemic pesticides (contact pesticides) remain on the surface and act through direct contact with the target organism. Pesticides are more effective if they are systemic. Systemicity is a prerequisite for the pesticide to be used as a seed-treatment.
Pesticides can be classified as persistent (non-biodegradable) or non-persistent (biodegradable). A pesticide must be persistent enough to kill or control its target but must degrade fast enough not to accumulate in the environment or the food chain in order to be approved by the authorities. Persistent pesticides, including DDT, were banned many years ago, an exception being spraying in houses to combat malaria vectors.
History
From biblical times until the 1950s the pesticides used were inorganic compounds and plant extracts. The inorganic compounds were derivatives of copper, arsenic, mercury, and sulfur, among others, and the plant extracts contained pyrethrum, nicotine, and rotenone, among others. The less toxic of these are still in use in organic farming. In the 1940s the insecticide DDT and the herbicide 2,4-D were introduced. These synthetic organic compounds were widely used and were very profitable. They were followed in the 1950s and 1960s by numerous other synthetic pesticides, which led to the growth of the pesticide industry. During this period, it became increasingly evident that DDT, which had been sprayed widely in the environment to combat insect disease vectors, had accumulated in the food chain. It had become a global pollutant, as summarized in the well-known book Silent Spring. Finally, DDT was banned in the 1970s in several countries, and subsequently all persistent pesticides were banned worldwide, an exception being spraying on interior walls for vector control.
Resistance to a pesticide was first seen in the 1920s with inorganic pesticides; it was later found that the development of resistance is to be expected and that measures to delay it are important. Integrated pest management (IPM) was introduced in the 1950s. By careful analysis, and by spraying only when an economic or biological threshold of crop damage is reached, pesticide application is reduced. By the 2020s this had become the official policy of international organisations, industry, and many governments. With the introduction of high-yielding varieties in the green revolution of the 1960s, more pesticides were used. From the 1980s, genetically modified crops were introduced, which resulted in lower amounts of insecticides used on them. Organic agriculture, which uses only non-synthetic pesticides, has grown and in 2020 represented about 1.5 per cent of the world's total agricultural land.
Pesticides have become more effective. Application rates fell from 1,000–2,500 grams of active ingredient per hectare (g/ha) in the 1950s to 40–100 g/ha in the 2000s. Despite this, total amounts used have increased: over the two decades between the 1990s and 2010s, use rose by about 20% in high-income countries and by 1,623% in low-income countries.
Development of new pesticides
The aim is to find new compounds or agents with improved properties such as a new mode of action or lower application rate. Another aim is to replace older pesticides which have been banned for reasons of toxicity or environmental harm or have become less effective due to development of resistance.
The process starts with testing (screening) against target organisms such as insects, fungi or plants. Inputs are typically random compounds, natural products, compounds designed to disrupt a biochemical target, compounds described in patents or literature, or biocontrol organisms.
Compounds that are active in the screening process, known as hits or leads, cannot be used as pesticides, except for biocontrol organisms and some potent natural products. These lead compounds need to be optimised by a series of cycles of synthesis and testing of analogs. For approval by regulatory authorities for use as pesticides, the optimized compounds must meet several requirements. In addition to being potent (low application rate), they must show low toxicity to non-target organisms, low environmental impact, and viable manufacturing cost. The cost of developing a pesticide in 2022 was estimated to be 350 million US dollars. It has become more difficult to find new pesticides. More than 100 new active ingredients were introduced in the 2000s and less than 40 in the 2010s. Biopesticides are cheaper to develop, since the authorities require less toxicological and environmental study. Since 2000 the rate of new biological product introduction has frequently exceeded that of conventional products.
More than 25% of existing chemical pesticides contain one or more chiral centres (stereogenic centres). Newer pesticides with lower application rates tend to have more complex structures, and thus more often contain chiral centres. In cases when most or all of the pesticidal activity in a new compound is found in one enantiomer (the eutomer), the registration and use of the compound as this single enantiomer is preferred. This reduces the total application rate and avoids the tedious environmental testing required when registering a racemate. However, if a viable enantioselective manufacturing route cannot be found, then the racemate is registered and used.
Uses
In addition to their main use in agriculture, pesticides have a number of other applications. Pesticides are used to control organisms that are considered to be harmful, or pernicious to their surroundings. For example, they are used to kill mosquitoes that can transmit potentially deadly diseases like West Nile virus, yellow fever, and malaria. They can also kill bees, wasps or ants that can cause allergic reactions. Insecticides can protect animals from illnesses that can be caused by parasites such as fleas. Pesticides can prevent sickness in humans that could be caused by moldy food or diseased produce. Herbicides can be used to clear roadside weeds, trees, and brush. They can also kill invasive weeds that may cause environmental damage. Herbicides are commonly applied in ponds and lakes to control algae and plants such as water grasses that can interfere with activities like swimming and fishing and cause the water to look or smell unpleasant. Uncontrolled pests such as termites and mold can damage structures such as houses. Pesticides are used in grocery stores and food storage facilities to manage rodents and insects that infest food such as grain. Pesticides are used on lawns and golf courses, partly for cosmetic reasons.
Integrated pest management, the use of multiple approaches to control pests, is becoming widespread and has been used with success in countries such as Indonesia, China, Bangladesh, the U.S., Australia, and Mexico. IPM attempts to recognize the more widespread impacts of an action on an ecosystem, so that natural balances are not upset.
Each use of a pesticide carries some associated risk. Proper pesticide use decreases these associated risks to a level deemed acceptable by pesticide regulatory agencies such as the United States Environmental Protection Agency (EPA) and the Pest Management Regulatory Agency (PMRA) of Canada.
DDT, sprayed on the walls of houses, is an organochlorine that has been used to fight malaria vectors (mosquitoes) since the 1940s. The World Health Organization recommends this approach. DDT and other organochlorine pesticides have been banned in most countries worldwide because of their persistence in the environment and human toxicity. DDT has become less effective, as resistance was identified in Africa as early as 1955, and by 1972 nineteen species of mosquito worldwide were resistant to DDT.
Amount used
Total pesticides use in agriculture in 2021 was 3.54 million tonnes of active ingredients (Mt), a 4 percent increase with respect to 2020, an 11 percent increase in a decade, and a doubling since 1990. Pesticides use per area of cropland in 2021 was 2.26 kg per hectare (kg/ha), an increase of 4 percent with respect to 2020; use per value of agricultural production was 0.86 kg per thousand international dollar (kg/1000 I$) (+2%); and use per person was 0.45 kg per capita (kg/cap) (+3%). Between 1990 and 2021, these indicators increased by 85 percent, 3 percent, and 33 percent, respectively. Brazil was the world's largest user of pesticides in 2021, with 720 kt of pesticides applications for agricultural use, while the USA (457 kt) was the second-largest user.
Applications per cropland area in 2021 varied widely, from 10.9 kg/hectare in Brazil to 0.8 kg/ha in the Russian Federation. The level in Brazil was about twice as high as in Argentina (5.6 kg/ha) and Indonesia (5.3 kg/ha). Insecticide use in the US has declined by more than half since 1980 (0.6%/yr), mostly due to the near phase-out of organophosphates. In corn fields, the decline was even steeper, due to the switchover to transgenic Bt corn.
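These headline figures are mutually consistent, which a one-line division confirms (a sanity check on the numbers above, not new data):

    total_use_kg = 3.54e9  # 3.54 Mt of active ingredients, 2021
    use_per_ha = 2.26      # kg per hectare of cropland, 2021
    print(f"{total_use_kg / use_per_ha:.2e} ha")  # ~1.57e9 ha of cropland implied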
Benefits
Pesticides increase agricultural yields and lower costs. One study found that not using pesticides reduced crop yields by about 10%. Another study, conducted in 1999, found that a ban on pesticides in the United States might result in higher food prices, job losses, and an increase in world hunger.
There are two levels of benefits for pesticide use, primary and secondary. Primary benefits are direct gains from the use of pesticides and secondary benefits are effects that are more long-term.
Biological
Controlling pests and plant disease vectors
Improved crop yields
Improved crop/livestock quality
Invasive species controlled
Controlling human/livestock disease vectors and nuisance organisms
Human lives saved and disease reduced. Diseases controlled include malaria, with millions of lives having been saved or enhanced with the use of DDT alone.
Animal lives saved and disease reduced
Controlling organisms that harm other human activities and structures
Drivers' views unobstructed
Tree/brush/leaf hazards prevented
Wooden structures protected
Economics
In 2018 world pesticide sales were estimated to be $65 billion, of which 88% was used for agriculture. Generic products accounted for 85% of sales in 2018. One study estimated that every dollar spent on pesticides for crops yields up to four dollars in crops that would otherwise be lost to insects, fungi and weeds. In general, farmers benefit from an increase in crop yield and from being able to grow a variety of crops throughout the year. Consumers of agricultural products also benefit from being able to afford the vast quantities of produce available year-round.
Disadvantages
On the cost side of pesticide use there can be costs to the environment and costs to human health. Pesticides safety education and pesticide applicator regulation are designed to protect the public from pesticide misuse, but do not eliminate all misuse. Reducing the use of pesticides and choosing less toxic pesticides may reduce risks placed on society and the environment from pesticide use.
Health effects
Pesticides may affect human health negatively, for example by mimicking hormones, causing reproductive problems, and causing cancer. A 2007 systematic review found that "most studies on non-Hodgkin lymphoma and leukemia showed positive associations with pesticide exposure" and thus concluded that cosmetic use of pesticides should be decreased. There is substantial evidence of associations between organophosphate insecticide exposures and neurobehavioral alterations. Limited evidence also exists for other negative outcomes from pesticide exposure, including neurological effects, birth defects, and fetal death.
The American Academy of Pediatrics recommends limiting children's exposure to pesticides and using safer alternatives.
Pesticides are also found in the majority of U.S. households, with 88 million out of 121.1 million households indicating that they used some form of pesticide in 2012. As of 2007, there were more than 1,055 active ingredients registered as pesticides, yielding over 20,000 pesticide products marketed in the United States.
Owing to inadequate regulation and safety precautions, 99% of pesticide-related deaths occur in developing countries that account for only 25% of pesticide usage.
One study found pesticide self-poisoning the method of choice in one third of suicides worldwide, and recommended, among other things, more restrictions on the types of pesticides that are most harmful to humans.
A 2014 epidemiological review found associations between autism and exposure to certain pesticides, but noted that the available evidence was insufficient to conclude that the relationship was causal.
Occupational exposure among agricultural workers
The World Health Organization and the UN Environment Programme estimate that 3 million agricultural workers in the developing world experience severe poisoning from pesticides each year, resulting in 18,000 deaths. According to one study, as many as 25 million workers in developing countries may suffer mild pesticide poisoning yearly. Other occupational exposures besides agricultural workers, including pet groomers, groundskeepers, and fumigators, may also put individuals at risk of health effects from pesticides.
Pesticide use is widespread in Latin America, as around US$3 billion is spent on pesticides each year in the region. Records indicate an increase in the frequency of pesticide poisonings over the past two decades. The most common incidents of pesticide poisoning are thought to result from exposure to organophosphate and carbamate insecticides. At-home pesticide use, the use of unregulated products, and the role of undocumented workers within the agricultural industry make characterizing true pesticide exposure a challenge. It is estimated that 50–80% of pesticide poisoning cases are unreported.
Underreporting of pesticide poisoning is especially common in areas where agricultural workers are less likely to seek care from a healthcare facility that may be monitoring or tracking the incidence of acute poisoning. The extent of unintentional pesticide poisoning may be much greater than available data suggest, particularly among developing countries. Globally, agriculture and food production remain among the largest industries. In East Africa, the agricultural industry represents one of the largest sectors of the economy, with nearly 80% of its population relying on agriculture for income. Farmers in these communities rely on pesticide products to maintain high crop yields.
Some East African governments are shifting to corporate farming, and opportunities for foreign conglomerates to operate commercial farms have led to more accessible research on pesticide use and exposure among workers. In other areas, where large proportions of the population rely on subsistence, small-scale farming, estimating pesticide use and exposure is more difficult.
Pesticide poisoning
Pesticides may exhibit toxic effects on humans and other non-target species, the severity of which depends on the frequency and magnitude of exposure. Toxicity also depends on the rate of absorption, distribution within the body, metabolism, and elimination of compounds from the body. Commonly used pesticides like organophosphates and carbamates act by inhibiting acetylcholinesterase activity, which prevents the breakdown of acetylcholine at the neural synapse. Excess acetylcholine can lead to symptoms like muscle cramps or tremors, confusion, dizziness and nausea. Studies show that farm workers in Ethiopia, Kenya, and Zimbabwe have decreased concentrations of plasma acetylcholinesterase, the enzyme responsible for breaking down acetylcholine acting on synapses throughout the nervous system. Other studies in Ethiopia have observed reduced respiratory function among farm workers who spray crops with pesticides. Numerous exposure pathways for farm workers increase the risk of pesticide poisoning, including dermal absorption while walking through fields and applying products, as well as inhalation exposure.
Measuring exposure to pesticides
There are multiple approaches to measuring a person's exposure to pesticides, each of which provides an estimate of an individual's internal dose. Two broad approaches include measuring biomarkers and markers of biological effect. The former involves taking direct measurement of the parent compound or its metabolites in various types of media: urine, blood, serum. Biomarkers may include a direct measurement of the compound in the body before it's been biotransformed during metabolism. Other suitable biomarkers may include the metabolites of the parent compound after they've been biotransformed during metabolism. Toxicokinetic data can provide more detailed information on how quickly the compound is metabolized and eliminated from the body, and provide insights into the timing of exposure.
Markers of biological effect provide an estimation of exposure based on cellular activities related to the mechanism of action. For example, many studies investigating exposure to pesticides often involve the quantification of the acetylcholinesterase enzyme at the neural synapse to determine the magnitude of the inhibitory effect of organophosphate and carbamate pesticides.
Another method of quantifying exposure involves measuring, at the molecular level, the amount of pesticide interacting with the site of action. These methods are more commonly used for occupational exposures where the mechanism of action is better understood, as described by WHO guidelines published in "Biological Monitoring of Chemical Exposure in the Workplace". Better understanding of how pesticides elicit their toxic effects is needed before this method of exposure assessment can be applied to occupational exposure of agricultural workers.
Alternative methods to assess exposure include questionnaires to discern from participants whether they are experiencing symptoms associated with pesticide poisoning. Self-reported symptoms may include headaches, dizziness, nausea, joint pain, or respiratory symptoms.
Challenges in assessing pesticide exposure
Multiple challenges exist in assessing exposure to pesticides in the general population, and many others that are specific to occupational exposures of agricultural workers. Beyond farm workers, estimating exposure to family members and children presents additional challenges, and may occur through "take-home" exposure from pesticide residues collected on clothing or equipment belonging to parent farm workers and inadvertently brought into the home. Children may also be exposed to pesticides prenatally from mothers who are exposed to pesticides during pregnancy. Characterizing children's exposure resulting from drift of airborne and spray application of pesticides is similarly challenging, yet well documented in developing countries. Because of critical development periods of the fetus and newborn children, these non-working populations are more vulnerable to the effects of pesticides, and may be at increased risk of developing neurocognitive effects and impaired development.
While measuring biomarkers or markers of biological effects may provide more accurate estimates of exposure, collecting these data in the field is often impractical and many methods are not sensitive enough to detect low-level concentrations. Rapid cholinesterase test kits exist to collect blood samples in the field. Conducting large scale assessments of agricultural workers in remote regions of developing countries makes the implementation of these kits a challenge. The cholinesterase assay is a useful clinical tool to assess individual exposure and acute toxicity. Considerable variability in baseline enzyme activity among individuals makes it difficult to compare field measurements of cholinesterase activity to a reference dose to determine health risk associated with exposure. Another challenge in deriving a reference dose is identifying health endpoints that are relevant to exposure. More epidemiological research is needed to identify critical health endpoints, particularly among populations who are occupationally exposed.
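Where a worker's own pre-exposure baseline is available, the usual comparison is percent inhibition of enzyme activity relative to that baseline. A sketch of the calculation (the 20% action level in the comment is a commonly used rule of thumb rather than a universal standard, and the activity values are invented):

    def cholinesterase_inhibition_pct(baseline_activity, measured_activity):
        """Percent drop in enzyme activity relative to the worker's own baseline."""
        return 100.0 * (baseline_activity - measured_activity) / baseline_activity

    drop = cholinesterase_inhibition_pct(baseline_activity=12.0, measured_activity=8.4)
    print(f"{drop:.0f}% inhibition")  # 30%, which would exceed a typical 20% action level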
Prevention
Minimizing harmful exposure to pesticides can be achieved by proper use of personal protective equipment, adequate reentry times into recently sprayed areas, and effective product labeling for hazardous substances as per FIFRA regulations. Training high-risk populations, including agricultural workers, on the proper use and storage of pesticides, can reduce the incidence of acute pesticide poisoning and potential chronic health effects associated with exposure. Continued research into the human toxic health effects of pesticides serves as a basis for relevant policies and enforceable standards that are health protective to all populations.
Environmental effects
Pesticide use raises a number of environmental concerns. Over 98% of sprayed insecticides and 95% of herbicides reach a destination other than their target species, including non-target species, air, water and soil. Pesticide drift occurs when pesticides suspended in the air as particles are carried by wind to other areas, potentially contaminating them. Pesticides are one of the causes of water pollution, and some pesticides were persistent organic pollutants (now banned), which contribute to soil and flower (pollen, nectar) contamination. Furthermore, pesticide use can adversely impact neighboring agricultural activity, as pests themselves drift to and harm nearby crops that have no pesticide used on them.
In addition, pesticide use reduces invertebrate biodiversity in streams, contributes to pollinator decline, destroys habitat (especially for birds), and threatens endangered species. Pests can develop a resistance to the pesticide (pesticide resistance), necessitating a new pesticide. Alternatively a greater dose of the pesticide can be used to counteract the resistance, although this will cause a worsening of the ambient pollution problem.
The Stockholm Convention on Persistent Organic Pollutants banned all persistent pesticides, in particular DDT and other organochlorine pesticides, which were stable and lipophilic, and thus able to bioaccumulate in the body and the food chain, and which spread throughout the planet. Persistent pesticides are no longer used for agriculture and will not be approved by the authorities. Because the half-life in soil is long (2–15 years for DDT), residues can still be detected in humans, at levels 5 to 10 times lower than those found in the 1970s.
Pesticides now have to be degradable in the environment. Such degradation of pesticides is due to both innate chemical properties of the compounds and environmental processes or conditions. For example, the presence of halogens within a chemical structure often slows down degradation in an aerobic environment. Adsorption to soil may retard pesticide movement, but also may reduce bioavailability to microbial degraders.
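As a rough illustration of what a 2–15 year half-life means in practice, here is a minimal Python sketch assuming idealized first-order (exponential) decay; real soil degradation only approximates this, and the function name is illustrative:

```python
def residue_fraction(years: float, half_life_years: float) -> float:
    """Fraction of the original deposit remaining after `years`,
    assuming idealized first-order (exponential) decay."""
    return 0.5 ** (years / half_life_years)

# With DDT's soil half-life of 2-15 years, after 30 years between
# 0.5**15 (about 0.003%) and 0.5**2 (25%) of a deposit would remain.
```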
Pesticide contamination in the environment can be monitored through bioindicators such as bee pollinators.
Economics
In one study, the human health and environmental costs due to pesticides in the United States were estimated at $9.6 billion, offset by about $40 billion in increased agricultural production.
Additional costs include the registration process and the cost of purchasing pesticides, which are typically borne by agrichemical companies and farmers, respectively. The registration process can take several years to complete (there are 70 types of field tests) and can cost $50–70 million for a single pesticide. At the beginning of the 21st century, the United States spent approximately $10 billion on pesticides annually.
Resistance
The use of pesticides inherently entails the risk of resistance developing. Various techniques and procedures of pesticide application can slow the development of resistance, as can some natural features of the target population and surrounding environment.
Alternatives
Alternatives to pesticides are available and include methods of cultivation, use of biological pest controls (such as pheromones and microbial pesticides), genetic engineering (mostly of crops), and methods of interfering with insect breeding. Application of composted yard waste has also been used as a way of controlling pests.
These methods are becoming increasingly popular and often are safer than traditional chemical pesticides. In addition, EPA is registering reduced-risk pesticides in increasing numbers.
Cultivation practices
Cultivation practices include polyculture (growing multiple types of plants), crop rotation, planting crops in areas where the pests that damage them do not live, timing planting according to when pests will be least problematic, and use of trap crops that attract pests away from the real crop. Trap crops have successfully controlled pests in some commercial agricultural systems while reducing pesticide usage. In other systems, trap crops can fail to reduce pest densities at a commercial scale, even when the trap crop works in controlled experiments.
Use of other organisms
Release of other organisms that fight the pest is another example of an alternative to pesticide use. These organisms can include natural predators or parasites of the pests. Biological pesticides based on entomopathogenic fungi, bacteria and viruses causing disease in the pest species can also be used.
Biological control engineering
Interfering with insects' reproduction can be accomplished by sterilizing males of the target species and releasing them, so that they mate with females but do not produce offspring. This technique was first used on the screwworm fly in 1958 and has since been used with the medfly, the tsetse fly, and the gypsy moth. This is a costly and slow approach that only works on some types of insects.
Other alternatives
Other alternatives include "laserweeding" – the use of novel agricultural robots for weed control using lasers.
Push pull strategy
The push-pull technique uses intercropping with a "push" crop that repels the pest, and a "pull" crop planted on the boundary that attracts and traps it.
Effectiveness
Some evidence shows that alternatives to pesticides can be equally effective as the use of chemicals. A study of maize fields in northern Florida found that the application of composted yard waste with a high carbon-to-nitrogen ratio to agricultural fields was highly effective at reducing the population of plant-parasitic nematodes and increasing crop yield, with yield increases ranging from 10% to 212%; the observed effects were long-term, often not appearing until the third season of the study. Additional silicon nutrition protects some horticultural crops against fungal diseases almost completely, while insufficient silicon sometimes leads to severe infection even when fungicides are used.
Pesticide resistance is increasing and that may make alternatives more attractive.
Types
Biopesticides
Biopesticides are certain types of pesticides derived from such natural materials as animals, plants, bacteria, and certain minerals. For example, canola oil and baking soda have pesticidal applications and are considered biopesticides. Biopesticides fall into three major classes:
Microbial pesticides, which consist of bacteria, entomopathogenic fungi or viruses (and sometimes include the metabolites that bacteria or fungi produce). Entomopathogenic nematodes are also often classed as microbial pesticides, even though they are multi-cellular.
Biochemical pesticides or herbal pesticides are naturally occurring substances that control (or monitor in the case of pheromones) pests and microbial diseases.
Plant-incorporated protectants (PIPs) have genetic material from other species incorporated into their genetic material (i.e. GM crops). Their use is controversial, especially in many European countries.
By pest type
Pesticides are often classified by the type of pest they control; common types include insecticides (insects), herbicides (weeds), fungicides (fungi), rodenticides (rodents), nematicides (nematodes), molluscicides (snails and slugs), and bactericides (bacteria).
Regulation
International
In many countries, pesticides must be approved for sale and use by a government agency.
Worldwide, 85% of countries have pesticide legislation for the proper storage of pesticides and 51% include provisions to ensure proper disposal of all obsolete pesticides.
Though pesticide regulations differ from country to country, pesticides, and the products on which they were used, are traded across international borders. To deal with inconsistencies in regulations among countries, delegates to a conference of the United Nations Food and Agriculture Organization adopted an International Code of Conduct on the Distribution and Use of Pesticides in 1985 to create voluntary standards of pesticide regulation for many countries. The Code was updated in 1998 and 2002. The FAO claims that the code has raised awareness about pesticide hazards and decreased the number of countries without restrictions on pesticide use.
Two other efforts to improve regulation of international pesticide trade are the United Nations London Guidelines for the Exchange of Information on Chemicals in International Trade and the United Nations Codex Alimentarius Commission. The former seeks to implement procedures for ensuring that prior informed consent exists between countries buying and selling pesticides, while the latter seeks to create uniform standards for maximum levels of pesticide residues among participating countries.
United States
In the United States, the Environmental Protection Agency (EPA) is responsible for regulating pesticides under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) and the Food Quality Protection Act (FQPA).
Studies must be conducted to establish the conditions in which the material is safe to use and the effectiveness against the intended pest(s). The EPA regulates pesticides to ensure that these products do not pose adverse effects to humans or the environment, with an emphasis on the health and safety of children. Pesticides produced before November 1984 continue to be reassessed in order to meet the current scientific and regulatory standards. All registered pesticides are reviewed every 15 years to ensure they meet the proper standards. During the registration process, a label is created. The label contains directions for proper use of the material in addition to safety restrictions. Based on acute toxicity, pesticides are assigned to a Toxicity Class. Pesticides are the most thoroughly tested chemicals after drugs in the United States; those used on food require more than 100 tests to determine a range of potential impacts.
Some pesticides are considered too hazardous for sale to the general public and are designated restricted use pesticides. Only certified applicators, who have passed an exam, may purchase or supervise the application of restricted use pesticides. Records of sales and use are required to be maintained and may be audited by government agencies charged with the enforcement of pesticide regulations. These records must be made available to employees and state or territorial environmental regulatory agencies.
In addition to the EPA, the United States Department of Agriculture (USDA) and the United States Food and Drug Administration (FDA) set standards for the level of pesticide residue that is allowed on or in crops. The EPA looks at what the potential human health and environmental effects might be associated with the use of the pesticide.
In addition, the U.S. EPA uses the National Research Council's four-step process for human health risk assessment: (1) Hazard Identification, (2) Dose-Response Assessment, (3) Exposure Assessment, and (4) Risk Characterization.
In 2013 Kaua'i County (Hawai'i) passed Bill No. 2491 to add an article to Chapter 22 of the county's code relating to pesticides and GMOs. The bill strengthens protections of local communities in Kaua'i where many large pesticide companies test their products.
The first legislation providing federal authority for regulating pesticides was enacted in 1910.
Canada
EU
EU legislation has been approved banning the use of highly toxic pesticides including those that are carcinogenic, mutagenic or toxic to reproduction, those that are endocrine-disrupting, and those that are persistent, bioaccumulative and toxic (PBT) or very persistent and very bioaccumulative (vPvB) and measures have been approved to improve the general safety of pesticides across all EU member states.
In 2023, the Environment Committee of the European Parliament approved a decision aiming to reduce pesticide use by 50% (the most hazardous by 65%) by 2030 and to ensure sustainable use of pesticides (for example, using them only as a last resort). The decision also includes measures for providing farmers with alternatives.
Residue
Pesticide residue refers to the pesticides that may remain on or in food after they are applied to food crops. The maximum residue limits (MRLs) of pesticides in food are set by regulatory authorities at levels judged to have no health impact. Regulations such as pre-harvest intervals also often prevent harvest of crop or livestock products that have recently been treated, allowing residue concentrations to decrease to safe levels over time. Exposure of the general population to these residues most commonly occurs through consumption of treated food sources, or through being in close contact with areas treated with pesticides, such as farms or lawns.
Residues are monitored by the authorities. In 2016, over 99% of samples of US produce had no pesticide residue or had residue levels well below the EPA tolerance levels for each pesticide.
See also
Index of pesticide articles
Environmental hazard
Pest control
Pesticide residue
Pesticide standard value
WHO Pesticide Evaluation Scheme
References
Sources
External links
Pesticides at the World Health Organization (WHO)
Pesticides at the United Nations Environment Programme (UNEP)
Pesticides at the European Commission
Pesticides at the United States Environmental Protection Agency
Chemical substances
Toxic effects of pesticides
Soil contamination
Biocides | Pesticide | [
"Physics",
"Chemistry",
"Biology",
"Environmental_science"
] | 7,327 | [
"Pesticides",
"Toxicology",
"Biocides",
"Environmental chemistry",
"Materials",
"Soil contamination",
"nan",
"Chemical substances",
"Matter"
] |
48,358 | https://en.wikipedia.org/wiki/Transposition%20cipher | In cryptography, a transposition cipher (also known as a permutation cipher) is a method of encryption which scrambles the positions of characters (transposition) without changing the characters themselves. Transposition ciphers reorder units of plaintext (typically characters or groups of characters) according to a regular system to produce a ciphertext which is a permutation of the plaintext. They differ from substitution ciphers, which do not change the position of units of plaintext but instead change the units themselves. Despite the difference between transposition and substitution operations, they are often combined, as in historical ciphers like the ADFGVX cipher or complex high-quality encryption methods like the modern Advanced Encryption Standard (AES).
General principle
Plaintexts can be rearranged into a ciphertext using a key, scrambling the order of characters like the shuffled pieces of a jigsaw puzzle. The resulting message is hard to decipher without the key because there are many ways the characters can be arranged.
For example, the plaintext "THIS IS WIKIPEDIA" could be encrypted to "TWDIP SIHII IKASE". To decipher the encrypted message without the key, an attacker could try to guess possible words and phrases like DIATHESIS, DISSIPATE, WIDTH, etc., but it would take them some time to reconstruct the plaintext because there are many combinations of letters and words. By contrast, someone with the key could reconstruct the message easily:
C I P H E R Key
1 4 5 3 2 6 Sequence (key letters in alphabetical order)
T H I S I S Plaintext
W I K I P E
D I A * * *
Ciphertext by column:
#1 TWD, #2 IP, #3 SI, #4 HII, #5 IKA, #6 SE
Ciphertext in groups of 5 for readability:
TWDIP SIHII IKASE
In practice, a message this short and with a predictable keyword would be broken almost immediately with cryptanalysis techniques. Transposition ciphers have several vulnerabilities (see the section on "Detection and cryptanalysis" below), and small mistakes in the encipherment process can render the entire ciphertext meaningless.
However, given the right conditions - long messages (e.g., over 100–200 letters), unpredictable contents, unique keys per message, strong transposition methods, and so on - guessing the right words could be computationally impossible without further information. In their book on codebreaking historical ciphers, Elonka Dunin and Klaus Schmeh describe double columnar transposition (see below) as "one of the best manual ciphers known".
Rail Fence cipher
The Rail Fence cipher is a form of transposition cipher that gets its name from the way in which it is encoded. In the rail fence cipher, the plaintext is written downward and diagonally on successive "rails" of an imaginary fence, then moves up when it gets to the bottom. The message is then read off in rows. For example, using three "rails" and a message of 'WE ARE DISCOVERED FLEE AT ONCE', the encrypter writes out:
W . . . E . . . C . . . R . . . L . . . T . . . E
. E . R . D . S . O . E . E . F . E . A . O . C .
. . A . . . I . . . V . . . D . . . E . . . N . .
Then reads off:
WECRL TEERD SOEEF EAOCA IVDEN
(The ciphertext has been broken up into blocks of five to help avoid errors; this is a common technique used to make the ciphertext easier to read and transmit. The spacing is not related to spaces in the plaintext and so does not carry any information about the plaintext.)
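A minimal Python sketch of this zigzag scheme (the function name is illustrative, spaces are stripped as in the example, and at least two rails are assumed):

```python
def rail_fence_encrypt(plaintext: str, rails: int) -> str:
    """Write the text diagonally down and up across `rails` rows,
    then read the rows off top to bottom. Assumes rails >= 2."""
    rows = [[] for _ in range(rails)]
    row, step = 0, 1
    for ch in plaintext.replace(" ", ""):
        rows[row].append(ch)
        if row == 0:
            step = 1            # bounce off the top rail
        elif row == rails - 1:
            step = -1           # bounce off the bottom rail
        row += step
    return "".join("".join(r) for r in rows)

# rail_fence_encrypt("WE ARE DISCOVERED FLEE AT ONCE", 3)
# -> 'WECRLTEERDSOEEFEAOCAIVDEN', the ciphertext above
```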
Scytale
The rail fence cipher follows a pattern similar to that of the scytale (pronounced "SKIT-uhl-ee"), a mechanical system of producing a transposition cipher used by the ancient Greeks. The system consisted of a cylinder and a ribbon that was wrapped around the cylinder. The message to be encrypted was written on the coiled ribbon. The letters of the original message would be rearranged when the ribbon was uncoiled from the cylinder. However, the message was easily decrypted when the ribbon was recoiled on a cylinder of the same diameter as the encrypting cylinder. Using the same example as before, if the cylinder has a radius such that only three letters can fit around its circumference, the cipherer writes out:
W . . E . . A . . R . . E . . D . . I . . S . . C
. O . . V . . E . . R . . E . . D . . F . . L . .
. . E . . E . . A . . T . . O . . N . . C . . E .
In this example, the cylinder is running horizontally and the ribbon is wrapped around vertically. Hence, the cipherer then reads off:
WOEEV EAEAR RTEEO DDNIF CSLEC
Route cipher
In a route cipher, the plaintext is first written out in a grid of given dimensions, then read off in a pattern given in the key. For example, using the same plaintext that we used for rail fence:
W R I O R F E O E
E E S V E L A N J
A D C E D E T C X
The key might specify "spiral inwards, clockwise, starting from the top right". That would give a cipher text of:
EJXCTEDEC DAEWRIORF EONALEVSE
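A sketch of this particular route in Python (the grid is written row by row and read in an inward clockwise spiral starting at the top-right corner; the function name is illustrative, and the caller is assumed to supply text, nulls included, that fills the grid exactly):

```python
def route_spiral_encrypt(padded_text: str, ncols: int) -> str:
    """Read a row-major grid in an inward clockwise spiral from the
    top-right corner, as in the key described above."""
    rows = [padded_text[i:i + ncols] for i in range(0, len(padded_text), ncols)]
    top, bottom, left, right = 0, len(rows) - 1, 0, ncols - 1
    out = []
    while top <= bottom and left <= right:
        out += [rows[r][right] for r in range(top, bottom + 1)]           # down the right edge
        out += [rows[bottom][c] for c in range(right - 1, left - 1, -1)]  # left along the bottom
        if left < right:
            out += [rows[r][left] for r in range(bottom - 1, top - 1, -1)]  # up the left edge
        if top < bottom:
            out += [rows[top][c] for c in range(left + 1, right)]         # right along the top
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1
    return "".join(out)

# route_spiral_encrypt("WRIORFEOEEESVELANJADCEDETCX", 9)
# -> 'EJXCTEDECDAEWRIORFEONALEVSE', the ciphertext above
```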
Route ciphers have many more keys than a rail fence. In fact, for messages of reasonable length, the number of possible keys is potentially too great to be enumerated even by modern machinery. However, not all keys are equally good. Badly chosen routes will leave excessive chunks of plaintext, or text simply reversed, and this will give cryptanalysts a clue as to the routes.
A variation of the route cipher was the Union Route Cipher, used by Union forces during the American Civil War. This worked much like an ordinary route cipher, but transposed whole words instead of individual letters. Because this would leave certain highly sensitive words exposed, such words would first be concealed by code. The cipher clerk may also add entire null words, which were often chosen to make the ciphertext humorous.
Columnar transposition
In the middle of the 17th century, Samuel Morland introduced an early form of columnar transposition. It was further developed much later, becoming very popular in the late 19th and 20th centuries, with the French military, Japanese diplomats and Soviet spies all using the principle.
In a columnar transposition, the message is written out in rows of a fixed length, and then read out again column by column, with the columns chosen in some scrambled order. Both the width of the rows and the permutation of the columns are usually defined by a keyword. For example, the keyword ZEBRAS is of length 6 (so the rows are of length 6), and the permutation is defined by the alphabetical order of the letters in the keyword. In this case, the order would be "6 3 2 4 1 5".
In a regular columnar transposition cipher, any spare spaces are filled with nulls; in an irregular columnar transposition cipher, the spaces are left blank. Finally, the message is read off in columns, in the order specified by the keyword. For example, suppose we use the keyword ZEBRAS and the message WE ARE DISCOVERED FLEE AT ONCE. In a regular columnar transposition, we write this into the grid as follows:
6 3 2 4 1 5
W E A R E D
I S C O V E
R E D F L E
E A T O N C
E Q K J E U
providing five nulls (QKJEU); these letters can be randomly selected as they just fill out the incomplete columns and are not part of the message. The ciphertext is then read off as:
EVLNE ACDTK ESEAQ ROFOJ DEECU WIREE
In the irregular case, the columns are not completed by nulls:
6 3 2 4 1 5
W E A R E D
I S C O V E
R E D F L E
E A T O N C
E
This results in the following ciphertext:
EVLNA CDTES EAROF ODEEC WIREE
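A Python sketch of the irregular variant (ties between repeated key letters are broken left to right; the function name is illustrative):

```python
def columnar_encrypt(plaintext: str, keyword: str) -> str:
    """Write the message row-wise under the keyword, then read the
    columns in the alphabetical order of the key letters."""
    msg = plaintext.replace(" ", "")
    n = len(keyword)
    # Column read-out order: sort key letters, ties broken left to right.
    order = sorted(range(n), key=lambda i: (keyword[i], i))
    # msg[col::n] picks out one column of the row-major grid.
    return "".join(msg[col::n] for col in order)

# columnar_encrypt("WE ARE DISCOVERED FLEE AT ONCE", "ZEBRAS")
# -> 'EVLNACDTESEAROFODEECWIREE' (EVLNA CDTES EAROF ODEEC WIREE)
```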
To decipher it, the recipient has to work out the shape of the enciphering grid by dividing the message length by the key length to find the number of rows in the grid. The length of the grid's last line is given by the remainder. The key is written above the grid, and the ciphertext is written down the columns of the grid in the order given by the letters of the key. The plaintext appears on the rows. A partial decipherment of the above ciphertext, after writing in the first column:
6 3 2 4 1 5
. . . . E .
. . . . V .
. . . . L .
. . . . N .
.
In a variation, the message is blocked into segments that are the key length long and to each segment the same permutation (given by the key) is applied. This is equivalent to a columnar transposition where the read-out is by rows instead of columns.
Columnar transposition continued to be used for serious purposes as a component of more complex ciphers at least into the 1950s.
Double transposition
A single columnar transposition could be attacked by guessing possible column lengths, writing the message out in its columns (but in the wrong order, as the key is not yet known), and then looking for possible anagrams. Thus to make it stronger, a double transposition was often used. This is simply a columnar transposition applied twice. The same key can be used for both transpositions, or two different keys can be used.
Visual demonstration of double transposition
In the following example, we use the keys JANEAUSTEN and AEROPLANES to encrypt the following plaintext: "Transposition ciphers scramble letters like puzzle pieces to create an indecipherable arrangement." The colors show how the letters are scrambled in each transposition step. While a single step only causes a minor rearrangement, the second step leads to a significant scrambling effect if the last row of the grid is incomplete.
Another example
As an example, we can take the result of the irregular columnar transposition in the previous section, and perform a second encryption with a different keyword, STRIPE, which gives the permutation "5 6 4 2 3 1":
5 6 4 2 3 1
E V L N A C
D T E S E A
R O F O D E
E C W I R E
E
As before, this is read off columnwise to give the ciphertext:
CAEEN SOIAE DRLEF WEDRE EVTOC
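Reusing the columnar_encrypt sketch from the columnar transposition section above, double transposition is simply two passes:

```python
def double_columnar_encrypt(plaintext: str, key1: str, key2: str) -> str:
    """Apply an irregular columnar transposition twice, once per key."""
    return columnar_encrypt(columnar_encrypt(plaintext, key1), key2)

# double_columnar_encrypt("WE ARE DISCOVERED FLEE AT ONCE", "ZEBRAS", "STRIPE")
# -> 'CAEENSOIAEDRLEFWEDREEVTOC' (CAEEN SOIAE DRLEF WEDRE EVTOC)
```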
If multiple messages of exactly the same length are encrypted using the same keys, they can be anagrammed simultaneously. This can lead to both recovery of the messages, and to recovery of the keys (so that every other message sent with those keys can be read).
During World War I, the German military used a double columnar transposition cipher, changing the keys infrequently. The system was regularly solved by the French, who named it Übchi and were typically able to quickly find the keys once they had intercepted a number of messages of the same length, which generally took only a few days. However, the French success became widely known and, after a publication in Le Matin, the Germans changed to a new system on 18 November 1914.
During World War II, the double transposition cipher was used by Dutch Resistance groups, the French Maquis and the British Special Operations Executive (SOE), which was in charge of managing underground activities in Europe. It was also used by agents of the American Office of Strategic Services and as an emergency cipher for the German Army and Navy.
Until the invention of the VIC cipher, double transposition was generally regarded as the most complicated cipher that an agent could operate reliably under difficult field conditions.
Cryptanalysis
The double transposition cipher can be treated as a single transposition with a key as long as the product of the lengths of the two keys.
In late 2013, a double transposition challenge, regarded by its author as undecipherable, was solved by George Lasry using a divide-and-conquer approach where each transposition was attacked individually.
Myszkowski transposition
A variant form of columnar transposition, proposed by Émile Victor Théodore Myszkowski in 1902, requires a keyword with recurrent letters. In usual practice, each subsequent occurrence of a keyword letter is treated as if it were the next letter in alphabetical order, e.g., the keyword TOMATO yields a numeric keystring of "532164."
In Myszkowski transposition, recurrent keyword letters are numbered identically, TOMATO yielding a keystring of "432143."
4 3 2 1 4 3
W E A R E D
I S C O V E
R E D F L E
E A T O N C
E
Plaintext columns with unique numbers are transcribed downward;
those with recurring numbers are transcribed left to right:
ROFOA CDTED SEEEA CWEIV RLENE
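A Python sketch of the Myszkowski read-out rule (it assumes, as in the example, that key numbering follows the alphabetical order of the distinct key letters; the function name is illustrative):

```python
def myszkowski_encrypt(plaintext: str, keyword: str) -> str:
    """Columns whose key letter is unique are read top to bottom;
    columns sharing a repeated key letter are read together,
    left to right within each row."""
    msg = plaintext.replace(" ", "")
    n = len(keyword)
    out = []
    for letter in sorted(set(keyword)):
        cols = [i for i, k in enumerate(keyword) if k == letter]
        if len(cols) == 1:
            out.append(msg[cols[0]::n])          # unique number: one column, downward
        else:
            out.extend(msg[r + c]                # recurring number: row by row
                       for r in range(0, len(msg), n)
                       for c in cols if r + c < len(msg))
    return "".join(out)

# myszkowski_encrypt("WE ARE DISCOVERED FLEE AT ONCE", "TOMATO")
# -> 'ROFOACDTEDSEEEACWEIVRLENE' (ROFOA CDTED SEEEA CWEIV RLENE)
```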
Disrupted transposition
A disrupted transposition cipher further complicates the transposition pattern with irregular filling of the rows of the matrix, i.e. with some spaces intentionally left blank (or blackened out like in the Rasterschlüssel 44), or filled later with either another part of the plaintext or random letters.
Comb approach
This method (attributed to Gen. Luigi Sacco) starts a new row once the plaintext reaches a column whose key number is equal to the current row number. This produces irregular row lengths. For example,
F O R E V E R J I G S A W < Key
4 8 9 2 12 3 10 7 6 5 11 1 13 Blanks after no.:
C O M P L I C A T E S T * 1
H E T R * * * * * * * * * 2
A N S P O S * * * * * * * 3
I * * * * * * * * * * * * 4
T I O N P A T T E R * * * 5
N L I K E A C O M * * * * 6
B _ _ _ _ _ _ _ * * * * * 7
The columns are then taken off as per regular columnar transposition: TPRPN, KISAA, CHAIT, NBERT, EMATO, etc.
Numerical sequence approach
Another simple option would be to use a password that places blanks according to its number sequence. E.g. "SECRET" would be converted to a sequence of "5,2,1,4,3,6"; one would cross out the 5th field of the matrix, then count again and cross out the second field, etc. The following example would be a matrix set up for columnar transposition with the columnar key "CRYPTO" and filled with crossed-out fields according to the disruption key "SECRET" (marked with an asterisk), whereafter the message "we are discovered, flee at once" is placed in the leftover spaces. The resulting ciphertext (the columns read according to the transposition key) is "WCEEO ERET RIVFC EODN SELE ADA".
C R Y P T O
1 4 6 3 5 2
W E A R * E
* * D I S *
C O * V E R
E D * F L E
E * A * * T
O N * C E *
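A sketch of the blank-placement rule just described (positions are 1-based and counted row by row across the matrix; repeated key letters are numbered left to right, and the function name is illustrative):

```python
from itertools import cycle

def disruption_positions(disruption_key: str, total_fields: int) -> set[int]:
    """Fields to cross out: repeatedly step through the grid by the
    key's numeric sequence, cycling the key until the grid is exhausted."""
    # Convert the key to its numeric sequence, e.g. "SECRET" -> [5,2,1,4,3,6].
    order = sorted(range(len(disruption_key)),
                   key=lambda i: (disruption_key[i], i))
    numbers = [0] * len(disruption_key)
    for rank, col in enumerate(order, start=1):
        numbers[col] = rank
    crossed, pos = set(), 0
    for step in cycle(numbers):
        pos += step
        if pos > total_fields:
            return crossed
        crossed.add(pos)

# disruption_positions("SECRET", 36)
# -> {5, 7, 8, 12, 15, 21, 26, 28, 29, 33, 36}, the asterisks above
```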
Grilles
Another form of transposition cipher uses grilles, or physical masks with cut-outs. This can produce a highly irregular transposition over the period specified by the size of the grille, but requires the correspondents to keep a physical key secret. Grilles were first proposed in 1550, and were still in military use for the first few months of World War One.
Detection and cryptanalysis
Since transposition does not affect the frequency of individual symbols, simple transposition can be easily detected by the cryptanalyst by doing a frequency count. If the ciphertext exhibits a frequency distribution very similar to plaintext, it is most likely a transposition.
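One such frequency test is the index of coincidence, which transposition leaves unchanged; a minimal sketch follows (the thresholds in the comment are the usual approximate values for English and for uniformly random letters):

```python
from collections import Counter

def index_of_coincidence(text: str) -> float:
    """IC = sum n_i(n_i - 1) / (N(N - 1)) over letter counts n_i.
    English text gives roughly 0.066; uniformly random letters about
    0.038. A ciphertext with an English-like IC is likely a
    transposition. Assumes at least two letters in the input."""
    letters = [c for c in text.upper() if c.isalpha()]
    n = len(letters)
    counts = Counter(letters)
    return sum(v * (v - 1) for v in counts.values()) / (n * (n - 1))
```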
In general, transposition methods are vulnerable to anagramming—sliding pieces of ciphertext around, then looking for sections that look like anagrams of words in English or whatever language the plaintext was written in, and solving the anagrams. Once such anagrams have been found, they reveal information about the transposition pattern, and can consequently be extended. Simpler transpositions often suffer from the property that keys very close to the correct key will reveal long sections of legible plaintext interspersed by gibberish. Consequently, such ciphers may be vulnerable to optimum seeking algorithms such as genetic algorithms and hill-climbing algorithms.
There are several specific methods for attacking messages encoded using a transposition cipher. These include:
Known-plaintext attack: Using known or guessed parts of the plaintext (e.g. names, places, dates, numbers, phrases) to assist in reverse-engineering the likely order of columns used to carry out the transposition and/or the likely topic of the plaintext.
Brute-force attack: If keys are derived from dictionary words or phrases from books or other publicly available sources, it may be possible to brute-force the solution by attempting billions of possible words, word combinations, and phrases as keys.
Depth attack: If two or more messages of the same length are encoded with the same keys, the messages can be aligned and anagrammed until the messages show meaningful text in the same places, without needing to know the transposition steps that have taken place.
Statistical attack: Statistics about the frequency of 2-letter, 3-letter, etc. combinations in a language can be used to inform a scoring function in an algorithm that gradually reverses possible transpositions based on which changes would produce the most likely combinations. For example, the 2-letter pair QU is more common than QT in English text, so a cryptanalyst will attempt transpositions that place QU together.
The third method was developed in 1878 by mathematician Edward S. Holden and New-York Tribune journalists John R. G. Hassard and William M. Grosvenor, who managed to decipher telegrams between the Democratic Party and their operatives in the Southern states during the 1876 presidential election, thus proving vote buying; the revelations influenced the 1878-1879 congressional elections.
A detailed description of the cryptanalysis of a German transposition cipher can be found in chapter 7 of Herbert Yardley's "The American Black Chamber."
A cipher used by the Zodiac Killer, called "Z-340", organized into triangular sections with substitution of 63 different symbols for the letters and diagonal "knight move" transposition, remained unsolved for over 51 years, until an international team of private citizens cracked it on December 5, 2020, using specialized software.
Combinations
Transposition is often combined with other techniques such as substitution. For example, a simple substitution cipher combined with a columnar transposition avoids the weakness of both. Replacing high frequency ciphertext symbols with high frequency plaintext letters does not reveal chunks of plaintext because of the transposition. Anagramming the transposition does not work because of the substitution. The technique is particularly powerful if combined with fractionation (see below). A disadvantage is that such ciphers are considerably more laborious and error prone than simpler ciphers.
Fractionation
Transposition is particularly effective when employed with fractionation – that is, a preliminary stage that divides each plaintext symbol into two or more ciphertext symbols. For example, the plaintext alphabet could be written out in a grid, and every letter in the message replaced by its co-ordinates (see Polybius square and Straddling checkerboard).
Another method of fractionation is to simply convert the message to Morse code, with a symbol for spaces as well as dots and dashes.
When such a fractionated message is transposed, the components of individual letters become widely separated in the message, thus achieving Claude E. Shannon's diffusion. Examples of ciphers that combine fractionation and transposition include the bifid cipher, the trifid cipher, the ADFGVX cipher and the VIC cipher.
Another choice would be to replace each letter with its binary representation, transpose that, and then convert the new binary string into the corresponding ASCII characters. Looping the scrambling process on the binary string multiple times before changing it into ASCII characters would likely make it harder to break. Many modern block ciphers use more complex forms of transposition related to this simple idea.
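A sketch of that binary round trip, reusing the columnar_encrypt function from the columnar transposition section (the 8-bit width and the single pass are illustrative choices, and the output may contain unprintable characters):

```python
def binary_fractionate(plaintext: str, key: str) -> str:
    """Expand each character to 8 bits, transpose the bit string,
    then regroup the scrambled bits into characters."""
    bits = "".join(f"{ord(c):08b}" for c in plaintext)
    scrambled = columnar_encrypt(bits, key)
    return "".join(chr(int(scrambled[i:i + 8], 2))
                   for i in range(0, len(scrambled), 8))
```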
See also
Substitution cipher
Ban (unit)
Topics in cryptography
Notes
References
Kahn, David. The Codebreakers: The Story of Secret Writing. Rev Sub. Scribner, 1996.
Yardley, Herbert. The American Black Chamber. Bobbs-Merrill, 1931.
Classical ciphers
Permutations | Transposition cipher | [
"Mathematics"
] | 4,414 | [
"Functions and mappings",
"Permutations",
"Mathematical objects",
"Combinatorics",
"Mathematical relations"
] |
48,361 | https://en.wikipedia.org/wiki/Geographic%20coordinate%20system | A geographic coordinate system (GCS) is a spherical or geodetic coordinate system for measuring and communicating positions directly on Earth as latitude and longitude. It is the simplest, oldest and most widely used type of the various spatial reference systems that are in use, and forms the basis for most others. Although latitude and longitude form a coordinate tuple like a Cartesian coordinate system, the geographic coordinate system is not Cartesian because the measurements are angles and are not on a planar surface.
A full GCS specification, such as those listed in the EPSG and ISO 19111 standards, also includes a choice of geodetic datum (including an Earth ellipsoid), as different datums will yield different latitude and longitude values for the same location.
History
The invention of a geographic coordinate system is generally credited to Eratosthenes of Cyrene, who composed his now-lost Geography at the Library of Alexandria in the 3rd century BC. A century later, Hipparchus of Nicaea improved on this system by determining latitude from stellar measurements rather than solar altitude and determining longitude by timings of lunar eclipses, rather than dead reckoning. In the 1st or 2nd century, Marinus of Tyre compiled an extensive gazetteer and mathematically plotted world map using coordinates measured east from a prime meridian at the westernmost known land, designated the Fortunate Isles, off the coast of western Africa around the Canary or Cape Verde Islands, and measured north or south of the island of Rhodes off Asia Minor. Ptolemy credited him with the full adoption of longitude and latitude, rather than measuring latitude in terms of the length of the midsummer day.
Ptolemy's 2nd-century Geography used the same prime meridian but measured latitude from the Equator instead. After their work was translated into Arabic in the 9th century, Al-Khwārizmī's Book of the Description of the Earth corrected Marinus' and Ptolemy's errors regarding the length of the Mediterranean Sea, causing medieval Arabic cartography to use a prime meridian around 10° east of Ptolemy's line. Mathematical cartography resumed in Europe following Maximus Planudes' recovery of Ptolemy's text a little before 1300; the text was translated into Latin at Florence by Jacopo d'Angelo around 1407.
In 1884, the United States hosted the International Meridian Conference, attended by representatives from twenty-five nations. Twenty-two of them agreed to adopt the longitude of the Royal Observatory in Greenwich, England as the zero-reference line. The Dominican Republic voted against the motion, while France and Brazil abstained. France adopted Greenwich Mean Time in place of local determinations by the Paris Observatory in 1911.
Latitude and longitude
The latitude of a point on Earth's surface is the angle between the equatorial plane and the straight line that passes through that point and through (or close to) the center of the Earth. Lines joining points of the same latitude trace circles on the surface of Earth called parallels, as they are parallel to the Equator and to each other. The North Pole is 90° N; the South Pole is 90° S. The 0° parallel of latitude is designated the Equator, the fundamental plane of all geographic coordinate systems. The Equator divides the globe into Northern and Southern Hemispheres.
The longitude of a point on Earth's surface is the angle east or west of a reference meridian to another meridian that passes through that point. All meridians are halves of great ellipses (often called great circles), which converge at the North and South Poles. The meridian of the British Royal Observatory in Greenwich, in southeast London, England, is the international prime meridian, although some organizations—such as the French national mapping agency—continue to use other meridians for internal purposes. The prime meridian determines the proper Eastern and Western Hemispheres, although maps often divide these hemispheres further west in order to keep the Old World on a single side. The antipodal meridian of Greenwich is both 180°W and 180°E. This is not to be conflated with the International Date Line, which diverges from it in several places for political and convenience reasons, including between far eastern Russia and the far western Aleutian Islands.
The combination of these two components specifies the position of any location on the surface of Earth, without consideration of altitude or depth. The visual grid on a map formed by lines of latitude and longitude is known as a graticule. The origin/zero point of this system is located in the Gulf of Guinea, south of Tema, Ghana, a location often facetiously called Null Island.
Geodetic datum
In order to use the theoretical definitions of latitude, longitude, and height to precisely measure actual locations on the physical earth, a geodetic datum must be used. A horizontal datum is used to precisely measure latitude and longitude, while a vertical datum is used to measure elevation or altitude. Both types of datum bind a mathematical model of the shape of the earth (usually a reference ellipsoid for a horizontal datum, and a more precise geoid for a vertical datum) to the earth. Traditionally, this binding was created by a network of control points, surveyed locations at which monuments are installed, and such datums were only accurate for a region of the surface of the Earth. Newer datums are based on a global network of satellite measurements (GNSS, VLBI, SLR and DORIS).
This combination of mathematical model and physical binding means that anyone using the same datum will obtain the same location measurement for the same physical location. However, two different datums will usually yield different location measurements for the same physical location, which may appear to differ by as much as several hundred meters; this is not because the location has moved, but because the reference system used to measure it has shifted. Because any spatial reference system or map projection is ultimately calculated from latitude and longitude, it is crucial that they clearly state the datum on which they are based. For example, a UTM coordinate based on a WGS84 realisation will be different from a UTM coordinate based on NAD27 for the same location. Converting coordinates from one datum to another requires a datum transformation such as a Helmert transformation, although in certain situations a simple translation may be sufficient.
Datums may be global, meaning that they represent the whole Earth, or they may be regional, meaning that they represent an ellipsoid best-fit to only a portion of the Earth. Examples of global datums include the several realizations of WGS 84 (with the 2D datum ensemble EPSG:4326 with 2 meter accuracy as identifier) used for the Global Positioning System, and the several realizations of the International Terrestrial Reference System and Frame (such as ITRF2020 with subcentimeter accuracy), which takes into account continental drift and crustal deformation.
Datums with a regional fit of the ellipsoid that are chosen by a national cartographical organization include the North American Datums, the European ED50, and the British OSGB36. Given a location, the datum provides the latitude and longitude. In the United Kingdom there are three common latitude, longitude, and height systems in use. At Greenwich, WGS84 differs from OSGB36, the datum used on published maps, by approximately 112 m. ED50 differs by about 120 m to 180 m.
Points on the Earth's surface move relative to each other due to continental plate motion, subsidence, and diurnal Earth tidal movement caused by the Moon and the Sun. This daily movement can be as much as a meter. Continental movement can reach several centimeters a year, or several meters in a century. A high-pressure weather system can cause a sinking of a few millimeters. Scandinavia is rising by about a centimeter a year as a result of the melting of the ice sheets of the last ice age, but neighboring Scotland is rising by only a fraction of that. These changes are insignificant if a regional datum is used, but are statistically significant if a global datum is used.
Length of a degree
On the GRS80 or WGS84 spheroid at sea level at the Equator, one latitudinal second measures 30.715 m, one latitudinal minute is 1843 m and one latitudinal degree is 110.6 km. The circles of longitude, meridians, meet at the geographical poles, with the west–east width of a second naturally decreasing as latitude increases. On the Equator at sea level, one longitudinal second measures 30.92 m, a longitudinal minute is 1855 m and a longitudinal degree is 111.3 km. At 30° a longitudinal second is 26.76 m, at Greenwich (51°28′38″N) 19.22 m, and at 60° it is 15.42 m.
On the WGS84 spheroid, the length in meters of a degree of latitude at latitude $\phi$ (that is, the number of meters you would have to travel along a north–south line to move 1 degree in latitude, when at latitude $\phi$), is about

$$111132.92 - 559.82\cos 2\phi + 1.175\cos 4\phi - 0.0023\cos 6\phi$$

The returned measure of meters per degree latitude varies continuously with latitude.
Similarly, the length in meters of a degree of longitude can be calculated as

$$111412.84\cos\phi - 93.5\cos 3\phi + 0.118\cos 5\phi$$

(Those coefficients can be improved, but as they stand the distance they give is correct within a centimeter.)
The formulae both return units of meters per degree.
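A small Python sketch of both series (coefficients as given above; the function name is illustrative):

```python
import math

def meters_per_degree(lat_deg: float) -> tuple[float, float]:
    """(meters per degree of latitude, meters per degree of longitude)
    at the given latitude, using the WGS84 series above."""
    p = math.radians(lat_deg)
    lat_m = (111132.92 - 559.82 * math.cos(2 * p)
             + 1.175 * math.cos(4 * p) - 0.0023 * math.cos(6 * p))
    lon_m = (111412.84 * math.cos(p) - 93.5 * math.cos(3 * p)
             + 0.118 * math.cos(5 * p))
    return lat_m, lon_m

# meters_per_degree(0) -> (~110574, ~111320) m, i.e. the 110.6 km and
# 111.3 km per degree at the Equator quoted above
```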
An alternative method to estimate the length of a longitudinal degree at latitude $\phi$ is to assume a spherical Earth (to get the width per minute and second, divide by 60 and 3600, respectively):

$$\frac{\pi}{180}M_r\cos\phi$$

where Earth's average meridional radius $M_r$ is approximately 6,367,449 m. Since the Earth is an oblate spheroid, not spherical, that result can be off by several tenths of a percent; a better approximation of a longitudinal degree at latitude $\phi$ is

$$\frac{\pi}{180}a\cos\beta$$

where Earth's equatorial radius $a$ equals 6,378,137 m and $\tan\beta = \frac{b}{a}\tan\phi$; for the GRS80 and WGS84 spheroids, $b/a = 0.99664719$. ($\beta$ is known as the reduced (or parametric) latitude). Aside from rounding, this is the exact distance along a parallel of latitude; getting the distance along the shortest route will be more work, but those two distances are always within 0.6 m of each other if the two points are one degree of longitude apart.
Alternate encodings
Like any series of multiple-digit numbers, latitude-longitude pairs can be challenging to communicate and remember. Therefore, alternative schemes have been developed for encoding GCS coordinates into alphanumeric strings or words:
the Maidenhead Locator System, popular with radio operators.
the World Geographic Reference System (GEOREF), developed for global military operations, replaced by the current Global Area Reference System (GARS).
Open Location Code or "Plus Codes", developed by Google and released into the public domain.
Geohash, a public domain system based on the Morton Z-order curve.
Mapcode, an open-source system originally developed at TomTom.
What3words, a proprietary system that encodes GCS coordinates as pseudorandom sets of words by dividing the coordinates into three numbers and looking up words in an indexed dictionary.
These are not distinct coordinate systems, only alternative methods for expressing latitude and longitude measurements.
See also
ISO 6709, standard representation of geographic point location by coordinates
Planetary coordinate system
Selenographic coordinate system
Jan Smits (2015). Mathematical data for bibliographic descriptions of cartographic materials and spatial data. Geographical co-ordinates. ICA Commission on Map Projections.
Notes
References
Sources
Portions of this article are from Jason Harris' "Astroinfo" which is distributed with KStars, a desktop planetarium for Linux/KDE. See The KDE Education Project – KStars
External links
Cartography
Geodesy
Navigation | Geographic coordinate system | [
"Mathematics"
] | 2,389 | [
"Point (geometry)",
"Geographic position",
"Applied mathematics",
"Position",
"Geographic coordinate systems",
"Coordinate systems",
"Geodesy"
] |
48,366 | https://en.wikipedia.org/wiki/Polyurethane | Polyurethane (often abbreviated PUR and PU) refers to a class of polymers composed of organic units joined by carbamate (urethane) links. In contrast to other common polymers such as polyethylene and polystyrene, the term polyurethane does not refer to a single type of polymer but to a group of polymers. Unlike polyethylene and polystyrene, polyurethanes can be produced from a wide range of starting materials, resulting in various polymers within the same group. This chemical variety produces polyurethanes with different chemical structures, leading to many different applications. These include rigid and flexible foams, coatings, adhesives, electrical potting compounds, and fibers such as spandex and polyurethane laminate (PUL). Foams are the largest application, accounting for 67% of all polyurethane produced in 2016.
A polyurethane is typically produced by reacting a polymeric isocyanate with a polyol. Since a polyurethane contains two types of monomers, which polymerize one after the other, they are classed as alternating copolymers. Both the isocyanates and polyols used to make a polyurethane contain two or more functional groups per molecule.
Global production in 2019 was 25 million metric tonnes, accounting for about 6% of all polymers produced in that year.
History
Otto Bayer and his coworkers at IG Farben in Leverkusen, Germany, first made polyurethanes in 1937. The new polymers had some advantages over existing plastics that were made by polymerizing olefins or by polycondensation, and were not covered by patents obtained by Wallace Carothers on polyesters. Early work focused on the production of fibers and flexible foams, and PUs were applied on a limited scale as aircraft coatings during World War II. Polyisocyanates became commercially available in 1952, and production of flexible polyurethane foam began in 1954 by combining toluene diisocyanate (TDI) and polyester polyols. These materials were also used to produce rigid foams, gum rubber, and elastomers. Linear fibers were produced from hexamethylene diisocyanate (HDI) and 1,4-butanediol (BDO).
DuPont introduced polyethers, specifically poly(tetramethylene ether) glycol, in 1956. BASF and Dow Chemical introduced polyalkylene glycols in 1957. Polyether polyols were cheaper, easier to handle and more water-resistant than polyester polyols. Union Carbide and Mobay, a U.S. Monsanto/Bayer joint venture, also began making polyurethane chemicals. In 1960 more than 45,000 metric tons of flexible polyurethane foams were produced. The availability of chlorofluoroalkane blowing agents, inexpensive polyether polyols, and methylene diphenyl diisocyanate (MDI) allowed polyurethane rigid foams to be used as high-performance insulation materials. In 1967, urethane-modified polyisocyanurate rigid foams were introduced, offering even better thermal stability and flammability resistance. During the 1960s, automotive interior safety components, such as instrument and door panels, were produced by back-filling thermoplastic skins with semi-rigid foam.
In 1969, Bayer exhibited an all-plastic car in Düsseldorf, Germany. Parts of this car, such as the fascia and body panels, were manufactured using a new process called reaction injection molding (RIM), in which the reactants were mixed and then injected into a mold. The addition of fillers, such as milled glass, mica, and processed mineral fibers, gave rise to reinforced RIM (RRIM), which provided improvements in flexural modulus (stiffness), reduction in coefficient of thermal expansion and better thermal stability. This technology was used to make the first plastic-body automobile in the United States, the Pontiac Fiero, in 1983. Further increases in stiffness were obtained by incorporating pre-placed glass mats into the RIM mold cavity, also known broadly as resin injection molding, or structural RIM.
Starting in the early 1980s, water-blown microcellular flexible foams were used to mold gaskets for automotive panels and air-filter seals, replacing PVC polymers. Polyurethane foams are used in many automotive applications including seating, head and arm rests, and headliners.
Polyurethane foam (including foam rubber) is sometimes made using small amounts of blowing agents to give less dense foam, better cushioning/energy absorption or thermal insulation. In the early 1990s, because of their impact on ozone depletion, the Montreal Protocol restricted the use of many chlorine-containing blowing agents, such as trichlorofluoromethane (CFC-11). By the late 1990s, blowing agents such as carbon dioxide, pentane, 1,1,1,2-tetrafluoroethane (HFC-134a) and 1,1,1,3,3-pentafluoropropane (HFC-245fa) were widely used in North America and the EU, although chlorinated blowing agents remained in use in many developing countries. Later, HFC-134a was also restricted because of its high global warming potential, and HCFC-141b was introduced in the early 2000s as an alternative blowing agent in developing nations.
Chemistry
Polyurethanes are produced by reacting diisocyanates with polyols, often in the presence of a catalyst, or upon exposure to ultraviolet radiation.
Common catalysts include tertiary amines, such as DABCO, DMDEE, or metallic soaps, such as dibutyltin dilaurate. The stoichiometry of the starting materials must be carefully controlled, as excess isocyanate can trimerize, leading to the formation of rigid polyisocyanurates. The polymer usually has a highly crosslinked molecular structure, resulting in a thermosetting material which does not melt on heating, although some thermoplastic polyurethanes are also produced.
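As a rough illustration of that stoichiometric bookkeeping, here is a sketch of the conventional isocyanate-index calculation used by formulators; the function name and parameter names are illustrative, and the constants are the standard equivalent-weight factors (56,100 for hydroxyl number in mg KOH/g, 4,200 for the 42 g/mol NCO group expressed as a percentage, and 9 for water, which consumes two NCO groups):

```python
def isocyanate_index(nco_pbw: float, nco_pct: float,
                     polyol_pbw: float, oh_number: float,
                     water_pbw: float = 0.0) -> float:
    """Isocyanate index = 100 x NCO equivalents / hydroxyl-reactive
    equivalents. An index of 100 is exact stoichiometry; above 100,
    excess isocyanate can trimerize toward polyisocyanurate."""
    nco_eq = nco_pbw * nco_pct / 4200       # parts by weight x %NCO
    oh_eq = polyol_pbw * oh_number / 56100  # parts by weight x OH number
    water_eq = water_pbw / 9                # each water reacts with 2 NCO
    return 100 * nco_eq / (oh_eq + water_eq)
```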
The most common application of polyurethane is as solid foams, which requires the presence of a gas, or blowing agent, during the polymerization step. This is commonly achieved by adding small amounts of water, which reacts with isocyanates to form CO2 gas and an amine, via an unstable carbamic acid group. The amine produced can also react with isocyanates to form urea groups, and as such the polymer will contain both these and urethane linkers. The urea is not very soluble in the reaction mixture and tends to form separate "hard segment" phases consisting mostly of polyurea. The concentration and organization of these polyurea phases can have a significant impact on the properties of the foam.
The type of foam produced can be controlled by regulating the amount of blowing agent and also by the addition of various surfactants which change the rheology of the polymerising mixture. Foams can be either "closed-cell", where most of the original bubbles or cells remain intact, or "open-cell", where the bubbles have broken but the edges of the bubbles are stiff enough to retain their shape, in extreme cases reticulated foams can be formed. Open-cell foams feel soft and allow air to flow through, so they are comfortable when used in seat cushions or mattresses. Closed-cell foams are used as rigid thermal insulation. High-density microcellular foams can be formed without the addition of blowing agents by mechanically frothing the polyol prior to use. These are tough elastomeric materials used in covering car steering wheels or shoe soles.
The properties of a polyurethane are greatly influenced by the types of isocyanates and polyols used to make it. Long, flexible segments, contributed by the polyol, give soft, elastic polymer. High amounts of crosslinking give tough or rigid polymers. Long chains and low crosslinking give a polymer that is very stretchy, short chains with many crosslinks produce a hard polymer while long chains and intermediate crosslinking give a polymer useful for making foam. The choices available for the isocyanates and polyols, in addition to other additives and processing conditions allow polyurethanes to have the very wide range of properties that make them such widely used polymers.
Raw materials
The main ingredients used to make a polyurethane are di- and tri-isocyanates and polyols. Other materials are added to aid in processing the polymer or to modify its properties. PU foam formulations sometimes have water added as well.
Isocyanates
Isocyanates used to make polyurethane have two or more isocyanate groups on each molecule. The most commonly used isocyanates are the aromatic diisocyanates, toluene diisocyanate (TDI) and methylene diphenyl diisocyanate (MDI). These aromatic isocyanates are more reactive than aliphatic isocyanates.
TDI and MDI are generally less expensive and more reactive than other isocyanates. Industrial grade TDI and MDI are mixtures of isomers and MDI often contains polymeric materials. They are used to make flexible foam (for example slabstock foam for mattresses or molded foams for car seats), rigid foam (for example insulating foam in refrigerators) elastomers (shoe soles, for example), and so on. The isocyanates may be modified by partially reacting them with polyols or introducing some other materials to reduce volatility (and hence toxicity) of the isocyanates, decrease their freezing points to make handling easier or to improve the properties of the final polymers.
Aliphatic and cycloaliphatic isocyanates are used in smaller quantities, most often in coatings and other applications where color and transparency are important since polyurethanes made with aromatic isocyanates tend to darken on exposure to light. The most important aliphatic and cycloaliphatic isocyanates are 1,6-hexamethylene diisocyanate (HDI), 1-isocyanato-3-isocyanatomethyl-3,5,5-trimethyl-cyclohexane (isophorone diisocyanate, IPDI), and 4,4′-diisocyanato dicyclohexylmethane (H12MDI or hydrogenated MDI). Other more specialized isocyanates include Tetramethylxylylene diisocyanate (TMXDI).
Polyols
Polyols are polymers in their own right and have on average two or more hydroxyl groups per molecule. Polyether polyols are made by co-polymerizing ethylene oxide and propylene oxide with a suitable polyol precursor. Polyester polyols are made by the polycondensation of multifunctional carboxylic acids and polyhydroxyl compounds. Polyols can be further classified according to their end use. Higher molecular weight polyols (molecular weights from 2,000 to 10,000) are used to make more flexible polyurethanes, while lower molecular weight polyols make more rigid products.
Polyols for flexible applications use low functionality initiators such as dipropylene glycol (f = 2), glycerine (f = 3), or a sorbitol/water solution (f = 2.75). Polyols for rigid applications use higher functionality initiators such as sucrose (f = 8), sorbitol (f = 6), toluenediamine (f = 4), and Mannich bases (f = 4). Propylene oxide and/or ethylene oxide is added to the initiators until the desired molecular weight is achieved. The order of addition and the amounts of each oxide affect many polyol properties, such as compatibility, water-solubility, and reactivity. Polyols made with only propylene oxide are terminated with secondary hydroxyl groups and are less reactive than polyols capped with ethylene oxide, which contain primary hydroxyl groups. Incorporating carbon dioxide into the polyol structure is being researched by multiple companies.
Graft polyols (also called filled polyols or polymer polyols) contain finely dispersed styrene–acrylonitrile, acrylonitrile, or polyurea (PHD) polymer solids chemically grafted to a high molecular weight polyether backbone. They are used to increase the load-bearing properties of low-density high-resiliency (HR) foam, as well as add toughness to microcellular foams and cast elastomers. Initiators such as ethylenediamine and triethanolamine are used to make low molecular weight rigid foam polyols that have built-in catalytic activity due to the presence of nitrogen atoms in the backbone. A special class of polyether polyols, poly(tetramethylene ether) glycols, which are made by polymerizing tetrahydrofuran, are used in high performance coating, wetting and elastomer applications.
Conventional polyester polyols are based on virgin raw materials and are manufactured by the direct polyesterification of high-purity diacids and glycols, such as adipic acid and 1,4-butanediol. Polyester polyols are usually more expensive and more viscous than polyether polyols, but they make polyurethanes with better solvent, abrasion, and cut resistance. Other polyester polyols are based on reclaimed raw materials. They are manufactured by transesterification (glycolysis) of recycled poly(ethylene terephthalate) (PET) or dimethyl terephthalate (DMT) distillation bottoms with glycols such as diethylene glycol. These low molecular weight, aromatic polyester polyols are used in rigid foam, and bring low cost and excellent flammability characteristics to polyisocyanurate (PIR) boardstock and polyurethane spray foam insulation.
Specialty polyols include polycarbonate polyols, polycaprolactone polyols, polybutadiene polyols, and polysulfide polyols. The materials are used in elastomer, sealant, and adhesive applications that require superior weatherability, and resistance to chemical and environmental attack. Natural oil polyols derived from castor oil and other vegetable oils are used to make elastomers, flexible bunstock, and flexible molded foam.
Co-polymerizing chlorotrifluoroethylene or tetrafluoroethylene with vinyl ethers containing hydroxyalkyl vinyl ether produces fluorinated (FEVE) polyols. Two-component fluorinated polyurethanes prepared by reacting FEVE fluorinated polyols with polyisocyanate have been used to make ambient cure paints and coatings. Since fluorinated polyurethanes contain a high percentage of fluorine–carbon bonds, which are the strongest bonds among all chemical bonds, fluorinated polyurethanes exhibit resistance to UV, acids, alkali, salts, chemicals, solvents, weathering, corrosion, fungi and microbial attack. These have been used for high performance coatings and paints.
Phosphorus-containing polyols are available that become chemically bonded to the polyurethane matrix for the use as flame retardants. This covalent linkage prevents migration and leaching of the organophosphorus compound.
Bio-derived materials
Interest in sustainable "green" products has driven the development of polyols derived from vegetable oils. Various oils used in the preparation of polyols for polyurethanes include soybean oil, cottonseed oil, neem seed oil, and castor oil. Vegetable oils are functionalized in various ways and modified to polyetheramides, polyethers, alkyds, etc. Renewable sources used to prepare polyols may be fatty acids or dimer fatty acids. Some biobased and isocyanate-free polyurethanes exploit the reaction between polyamines and cyclic carbonates to produce polyhydroxyurethanes.
Chain extenders and cross linkers
Chain extenders (f = 2) and cross linkers (f ≥ 3) are low molecular weight hydroxyl and amine terminated compounds that play an important role in the polymer morphology of polyurethane fibers, elastomers, adhesives, and certain integral skin and microcellular foams. The elastomeric properties of these materials are derived from the phase separation of the hard and soft copolymer segments of the polymer, such that the urethane hard segment domains serve as cross-links between the amorphous polyether (or polyester) soft segment domains. This phase separation occurs because the mainly nonpolar, low melting soft segments are incompatible with the polar, high melting hard segments. The soft segments, which are formed from high molecular weight polyols, are mobile and are normally present in coiled formation, while the hard segments, which are formed from the isocyanate and chain extenders, are stiff and immobile. As the hard segments are covalently coupled to the soft segments, they inhibit plastic flow of the polymer chains, thus creating elastomeric resiliency. Upon mechanical deformation, a portion of the soft segments are stressed by uncoiling, and the hard segments become aligned in the stress direction. This reorientation of the hard segments and consequent powerful hydrogen bonding contributes to high tensile strength, elongation, and tear resistance values.
The choice of chain extender also determines flexural, heat, and chemical resistance properties. The most important chain extenders are ethylene glycol, 1,4-butanediol (1,4-BDO or BDO), 1,6-hexanediol, cyclohexane dimethanol and hydroquinone bis(2-hydroxyethyl) ether (HQEE). All of these glycols form polyurethanes that phase separate well and form well defined hard segment domains, and are melt processable. They are all suitable for thermoplastic polyurethanes with the exception of ethylene glycol, since its derived bis-phenyl urethane undergoes unfavorable degradation at high hard segment levels. Diethanolamine and triethanolamine are used in flex molded foams to build firmness and add catalytic activity. Diethyltoluenediamine is used extensively in RIM, and in polyurethane and polyurea elastomer formulations.
Catalysts
Polyurethane catalysts can be classified into two broad categories: basic catalysts (tertiary amines) and acidic catalysts (organometallics). Tertiary amine catalysts function by enhancing the nucleophilicity of the diol component. Alkyl tin carboxylates, oxides, and mercaptides function as mild Lewis acids in accelerating the formation of polyurethane. As bases, traditional amine catalysts include triethylenediamine (TEDA, also called DABCO, 1,4-diazabicyclo[2.2.2]octane), dimethylcyclohexylamine (DMCHA), dimethylethanolamine (DMEA), dimethylaminoethoxyethanol, and bis-(2-dimethylaminoethyl)ether, a blowing catalyst also called A-99. A typical Lewis acidic catalyst is dibutyltin dilaurate. The process is highly sensitive to the nature of the catalyst and is also known to be autocatalytic.
Factors affecting catalyst selection include balancing three reactions: urethane (polyol + isocyanate, or gel) formation, urea (water + isocyanate, or "blow") formation, and the isocyanate trimerization reaction (e.g., using potassium acetate, to form isocyanurate rings). A variety of specialized catalysts have been developed.
Surfactants
Surfactants are used to modify the characteristics of both foam and non-foam polyurethane polymers. They take the form of polydimethylsiloxane-polyoxyalkylene block copolymers, silicone oils, nonylphenol ethoxylates, and other organic compounds. In foams, they are used to emulsify the liquid components, regulate cell size, and stabilize the cell structure to prevent collapse and sub-surface voids. In non-foam applications they are used as air release and antifoaming agents, as wetting agents, and are used to eliminate surface defects such as pin holes, orange peel, and sink marks.
Production
Polyurethanes are produced by mixing two or more liquid streams. The polyol stream contains catalysts, surfactants, blowing agents (when making polyurethane foam insulation) and so on. The two components are referred to as a polyurethane system, or simply a system. The isocyanate is commonly referred to in North America as the 'A-side' or just the 'iso'. The blend of polyols and other additives is commonly referred to as the 'B-side' or as the 'poly'. This mixture might also be called a 'resin' or 'resin blend'. In Europe the meanings for 'A-side' and 'B-side' are reversed. Resin blend additives may include chain extenders, cross linkers, surfactants, flame retardants, blowing agents, pigments, and fillers. Polyurethane can be made in a variety of densities and hardnesses by varying the isocyanate, polyol or additives.
Health and safety
Fully reacted polyurethane polymer is chemically inert. No exposure limits have been established in the U.S. by OSHA (Occupational Safety and Health Administration) or ACGIH (American Conference of Governmental Industrial Hygienists). It is not regulated by OSHA for carcinogenicity.
Polyurethanes are combustible. Decomposition from fire can produce significant amounts of carbon monoxide and hydrogen cyanide, in addition to nitrogen oxides, isocyanates, and other toxic products. Because the material is flammable, it has had to be treated with flame retardants (at least in the case of furniture), almost all of which are considered harmful. California later issued Technical Bulletin 117-2013, which allowed most polyurethane foam to pass flammability tests without the use of flame retardants. The Green Science Policy Institute states: "Although the new standard can be met without flame retardants, it does NOT ban their use. Consumers who wish to reduce household exposure to flame retardants can look for a TB117-2013 tag on furniture, and verify with retailers that products do not contain flame retardants."
Liquid resin blends and isocyanates may contain hazardous or regulated components. Isocyanates are known skin and respiratory sensitizers. Additionally, the amines, glycols, and phosphates present in spray polyurethane foams pose their own risks.
Chemicals that may be emitted during or after application of polyurethane spray foam (such as isocyanates) are harmful to human health, so special precautions are required during and after this process.
In the United States, additional health and safety information can be found through organizations such as the Polyurethane Manufacturers Association (PMA) and the Center for the Polyurethanes Industry (CPI), as well as from polyurethane system and raw material manufacturers. Regulatory information can be found in the Code of Federal Regulations Title 21 (Food and Drugs) and Title 40 (Protection of the Environment). In Europe, health and safety information is available from ISOPA, the European Diisocyanate and Polyol Producers Association.
Manufacturing
The methods of manufacturing polyurethane finished goods range from small, hand pour piece-part operations to large, high-volume bunstock and boardstock production lines. Regardless of the end-product, the manufacturing principle is the same: to meter the liquid isocyanate and resin blend at a specified stoichiometric ratio, mix them together until a homogeneous blend is obtained, dispense the reacting liquid into a mold or on to a surface, wait until it cures, then demold the finished part.
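As an illustration of the stoichiometric metering step, the sketch below estimates an A:B mix ratio from a polyol blend's hydroxyl number and an isocyanate's %NCO. All names and sample values are our own assumptions, and a real formulation would also budget NCO for water and other reactive additives in the resin blend.

```python
# Hedged sketch of A-side/B-side stoichiometry; illustrative values only.
KOH_EQ = 56_100  # mg KOH per hydroxyl equivalent (56.1 g/mol x 1000)
NCO_EQ = 4_200   # 42 g/mol NCO scaled against %NCO (42 x 100)

def mix_ratio(oh_number: float, pct_nco: float, index: float = 1.05) -> float:
    """Parts isocyanate (A-side) per 100 parts polyol blend (B-side)."""
    eq_wt_polyol = KOH_EQ / oh_number   # g of polyol per OH equivalent
    eq_wt_iso = NCO_EQ / pct_nco        # g of isocyanate per NCO equivalent
    equivalents_per_100g = 100 / eq_wt_polyol
    # Water in the B-side also consumes NCO (the "blow" reaction); omitted here.
    return index * equivalents_per_100g * eq_wt_iso

# e.g. a 400 mg KOH/g rigid-foam polyol against a 31 %NCO polymeric MDI:
print(round(mix_ratio(400, 31.0), 1), "parts A per 100 parts B")  # ~101.4
```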
Dispensing equipment
Although the capital outlay can be high, it is desirable to use a meter-mix or dispense unit for even low-volume production operations that require a steady output of finished parts. Dispense equipment consists of material holding (day) tanks, metering pumps, a mix head, and a control unit. Often, a conditioning or heater–chiller unit is added to control material temperature in order to improve mix efficiency and cure rate, and to reduce process variability. Choice of dispense equipment components depends on shot size, throughput, material characteristics such as viscosity and filler content, and process control. Material day tanks may range from a single gallon to hundreds of gallons in size and may be supplied directly from drums, IBCs (intermediate bulk containers, such as caged IBC totes), or bulk storage tanks. They may incorporate level sensors, conditioning jackets, and mixers. Pumps can be sized to meter from single grams per second up to hundreds of pounds per minute. They can be rotary, gear, or piston pumps, or can be specially hardened lance pumps to meter liquids containing highly abrasive fillers such as chopped or hammer-milled glass fiber and wollastonite.
The pumps can drive low-pressure (10 to 30 bar, 1 to 3 MPa) or high-pressure (125 to 250 bar, 12.5 to 25.0 MPa) dispense systems. Mix heads can be simple static mix tubes, rotary-element mixers, low-pressure dynamic mixers, or high-pressure hydraulically actuated direct impingement mixers. Control units may have basic on/off and dispense/stop switches, and analogue pressure and temperature gauges, or may be computer-controlled with flow meters to electronically calibrate mix ratio, digital temperature and level sensors, and a full suite of statistical process control software. Add-ons to dispense equipment include nucleation or gas injection units, and third or fourth stream capability for adding pigments or metering in supplemental additive packages.
Tooling
Distinct from pour-in-place, bun and boardstock, and coating applications, the production of piece parts requires tooling to contain and form the reacting liquid.
The choice of mold-making material is dependent on the expected number of uses to end-of-life (EOL), molding pressure, flexibility, and heat transfer characteristics.
RTV silicone is used for tooling that has an EOL in the thousands of parts. It is typically used for molding rigid foam parts, where the ability to stretch and peel the mold around undercuts is needed.
The heat transfer characteristic of RTV silicone tooling is poor. High-performance, flexible polyurethane elastomers are also used in this way.
Epoxy, metal-filled epoxy, and metal-coated epoxy are used for tooling that has an EOL in the tens of thousands of parts. Such tooling is typically used for molding flexible foam cushions and seating, integral skin and microcellular foam padding, and shallow-draft RIM bezels and fascia. The heat transfer characteristic of epoxy tooling is fair; the heat transfer characteristic of metal-filled and metal-coated epoxy is good. Copper tubing can be incorporated into the body of the tool, allowing hot water to circulate and heat the mold surface.
Aluminum is used for tooling that has an EOL in the hundreds of thousands of parts. It is typically used for molding microcellular foam gasketing and cast elastomer parts, and is milled or extruded into shape.
Mirror-finish stainless steel is used for tooling that imparts a glossy appearance to the finished part. The heat transfer characteristic of metal tooling is excellent.
Finally, molded or milled polypropylene is used to create low-volume tooling for molded gasket applications. Instead of many expensive metal molds, low-cost plastic tooling can be formed from a single metal master, which also allows greater design flexibility. The heat transfer characteristic of polypropylene tooling is poor, which must be taken into consideration during the formulation process.
Applications
In 2008, the global consumption of polyurethane raw materials was above 12 million metric tons, and the average annual growth rate was about 5%. Revenues generated with PUR on the global market were expected to rise to approximately US$75 billion by 2022. Because polyurethanes are such an important class of materials, they remain the subject of continual research and publication.
Degradation and environmental fate
Effects of visible light
Polyurethanes, especially those made using aromatic isocyanates, contain chromophores that interact with light. This is of particular interest in the area of polyurethane coatings, where light stability is a critical factor and is the main reason that aliphatic isocyanates are used in making polyurethane coatings. When PU foam, which is made using aromatic isocyanates, is exposed to visible light, it discolors, turning from off-white to yellow to reddish brown. It has been generally accepted that apart from yellowing, visible light has little effect on foam properties. This is especially the case if the yellowing happens on the outer portions of a large foam, as the deterioration of properties in the outer portion has little effect on the overall bulk properties of the foam itself.
It has been reported that exposure to visible light can affect the variability of some physical property test results.
Higher-energy UV radiation promotes chemical reactions in foam, some of which are detrimental to the foam structure.
Hydrolysis and biodegradation
Polyurethanes may degrade due to hydrolysis, a common problem with shoes left in a closet, where they react with moisture in the air.
Microbial degradation of polyurethane is believed to be due to the action of esterase, urethanase, hydrolase and protease enzymes. The process is slow as most microbes have difficulty moving beyond the surface of the polymer. Susceptibility to fungi is higher due to their release of extracellular enzymes, which are better able to permeate the polymer matrix. Two species of the Ecuadorian fungus Pestalotiopsis are capable of biodegrading polyurethane in aerobic and anaerobic conditions such as found at the bottom of landfills. Degradation of polyurethane items at museums has been reported. Polyester-type polyurethanes are more easily biodegraded by fungus than polyether-type.
See also
Botanol, a material with higher plant-based content
Passive fire protection
Penetrant (mechanical, electrical, or structural)
Polyaspartic
Polyurethane dispersion
Thermoplastic polyurethanes
Thermoset polymer matrix
References
External links
Center for the Polyurethanes Industry: information for EH&S issues related to polyurethanes developments
Polyurethane synthesis, Polymer Science Learning Center, University of Southern Mississippi
Polyurethane Foam Association: Industry information, educational materials and resources related to flexible polyurethane foam
PU Europe: European PU insulation industry association (formerly BING): European voice for the national trade associations representing the polyurethane insulation industry
ISOPA: European Diisocyanate & Polyol Producers Association: ISOPA represents the manufacturers in Europe of aromatic diisocyanates and polyols
1937 in Germany
1937 in science
Adhesives
Building insulation materials
Coatings
Elastomers
Plastics
Wood finishing materials
German inventions of the Nazi period | Polyurethane | [
"Physics",
"Chemistry"
] | 6,764 | [
"Synthetic materials",
"Coatings",
"Unsolved problems in physics",
"Elastomers",
"Amorphous solids",
"Plastics"
] |
48,381 | https://en.wikipedia.org/wiki/Astronomical%20coordinate%20systems | In astronomy, coordinate systems are used for specifying positions of celestial objects (satellites, planets, stars, galaxies, etc.) relative to a given reference frame, based on physical reference points available to a situated observer (e.g. the true horizon and north to an observer on Earth's surface). Coordinate systems in astronomy can specify an object's relative position in three-dimensional space or plot merely by its direction on a celestial sphere, if the object's distance is unknown or trivial.
Spherical coordinates, projected on the celestial sphere, are analogous to the geographic coordinate system used on the surface of Earth. These differ in their choice of fundamental plane, which divides the celestial sphere into two equal hemispheres along a great circle. Rectangular coordinates, in appropriate units, have the same fundamental (x, y) plane and primary (x-axis) direction, such as an axis of rotation. Each coordinate system is named after its choice of fundamental plane.
Coordinate systems
The coordinate systems in common use by the astronomical community are described below. The fundamental plane divides the celestial sphere into two equal hemispheres and defines the baseline for the latitudinal coordinates, similar to the equator in the geographic coordinate system. The poles are located at ±90° from the fundamental plane. The primary direction is the starting point of the longitudinal coordinates. The origin is the zero-distance point, the "center of the celestial sphere", although the definition of the celestial sphere leaves the location of its center point ambiguous.
Horizontal system
The horizontal, or altitude-azimuth, system is based on the position of the observer on Earth, which rotates on its own axis once per sidereal day (23 hours, 56 minutes and 4.091 seconds) in relation to the star background. The position of a celestial object in the horizontal system varies with time, but the system is useful for locating and tracking objects for observers on Earth. It is based on the position of stars relative to an observer's ideal horizon.
Equatorial system
The equatorial coordinate system is centered at Earth's center, but fixed relative to the celestial poles and the March equinox. The coordinates are based on the location of stars relative to Earth's equator if it were projected out to an infinite distance. The equatorial system describes the sky as seen from the Solar System, and modern star maps almost exclusively use equatorial coordinates.
The equatorial system is the normal coordinate system for most professional and many amateur astronomers having an equatorial mount that follows the movement of the sky during the night. Celestial objects are found by adjusting the telescope's or other instrument's scales so that they match the equatorial coordinates of the selected object to observe.
Popular choices of pole and equator are the older B1950 and the modern J2000 systems, but a pole and equator "of date" can also be used, meaning one appropriate to the date under consideration, such as when a measurement of the position of a planet or spacecraft is made. There are also subdivisions into "mean of date" coordinates, which average out or ignore nutation, and "true of date," which include nutation.
Ecliptic system
The fundamental plane is the plane of the Earth's orbit, called the ecliptic plane. There are two principal variants of the ecliptic coordinate system: geocentric ecliptic coordinates centered on the Earth and heliocentric ecliptic coordinates centered on the center of mass of the Solar System.
The geocentric ecliptic system was the principal coordinate system for ancient astronomy and is still useful for computing the apparent motions of the Sun, Moon, and planets. It was used to define the twelve astrological signs of the zodiac, for instance.
The heliocentric ecliptic system describes the planets' orbital movement around the Sun, and centers on the barycenter of the Solar System (i.e. very close to the center of the Sun). The system is primarily used for computing the positions of planets and other Solar System bodies, as well as defining their orbital elements.
Galactic system
The galactic coordinate system uses the approximate plane of the Milky Way Galaxy as its fundamental plane. The Solar System is still the center of the coordinate system, and the zero point is defined as the direction towards the Galactic Center. Galactic latitude resembles the elevation above the galactic plane and galactic longitude determines direction relative to the center of the galaxy.
Supergalactic system
The supergalactic coordinate system corresponds to a fundamental plane that contains a higher than average number of local galaxies in the sky as seen from Earth.
Converting coordinates
Conversions between the various coordinate systems are given. See the notes before using these equations.
Notation
Horizontal coordinates
A, azimuth
a, altitude
Equatorial coordinates
α, right ascension
δ, declination
h, hour angle
Ecliptic coordinates
λ, ecliptic longitude
β, ecliptic latitude
Galactic coordinates
l, galactic longitude
b, galactic latitude
Miscellaneous
λ_o, observer's longitude
φ_o, observer's latitude
ε, obliquity of the ecliptic (about 23.4°)
θ_L, local sidereal time
θ_G, Greenwich sidereal time
Hour angle ↔ right ascension
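The relations themselves are the standard ones, restated here with the notation above and the west-positive longitude convention noted in the conversion notes:

h = θ_L − α  or  h = θ_G − λ_o − α
α = θ_L − h  or  α = θ_G − λ_o − h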
Equatorial ↔ ecliptic
The classical equations, derived from spherical trigonometry, give the longitudinal coordinate as a pair of relations, one for its sine and one for its cosine; dividing the first equation by the second gives the convenient tangent equation shown below. The rotation matrix equivalent can also be used. This division is ambiguous because tan has a period of 180° (π) whereas cos and sin have periods of 360° (2π).
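Restating the classical spherical-trigonometry forms (standard results, e.g. in Meeus, Astronomical Algorithms, with ε as defined above):

Equatorial → ecliptic:
tan λ = (sin α cos ε + tan δ sin ε) / cos α
sin β = sin δ cos ε − cos δ sin ε sin α

Ecliptic → equatorial:
tan α = (sin λ cos ε − tan β sin ε) / cos λ
sin δ = sin β cos ε + cos β sin ε sin λ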
Equatorial ↔ horizontal
Azimuth (A) is measured from the south point, turning positive to the west.
Zenith distance, the angular distance along the great circle from the zenith to a celestial object, is simply the complementary angle of the altitude: z = 90° − a.

The altitude a and azimuth A are obtained from

sin a = sin φ_o sin δ + cos φ_o cos δ cos h
tan A = sin h / (cos h sin φ_o − tan δ cos φ_o)

In solving the tan A equation for A, in order to avoid the ambiguity of the arctangent, use of the two-argument arctangent, denoted atan2(y, x), is recommended. The two-argument arctangent computes the arctangent of y/x, and accounts for the quadrant in which it is being computed. Thus, consistent with the convention of azimuth being measured from the south and opening positive to the west,

A = atan2(y, x)

where

y = sin h and x = cos h sin φ_o − tan δ cos φ_o.

If the above formula produces a negative value for A, it can be rendered positive by simply adding 360°.
Again, in solving the tan h equation for h, use of the two-argument arctangent that accounts for the quadrant is recommended. Thus, again consistent with the convention of azimuth being measured from the south and opening positive to the west,

h = atan2(y, x)

where

y = sin A and x = cos A sin φ_o + tan a cos φ_o,

and the declination follows from

sin δ = sin φ_o sin a − cos φ_o cos a cos A.
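A minimal Python sketch of the equatorial → horizontal conversion above, using the two-argument arctangent; azimuth follows this article's convention (measured from the south, positive to the west), and all names are our own:

```python
from math import radians, degrees, sin, cos, tan, asin, atan2

def equatorial_to_horizontal(h_deg, dec_deg, lat_deg):
    """Hour angle h, declination, latitude (degrees) -> altitude, azimuth."""
    h, d, phi = map(radians, (h_deg, dec_deg, lat_deg))
    alt = asin(sin(phi) * sin(d) + cos(phi) * cos(d) * cos(h))
    # atan2 resolves the quadrant; % 360 renders negative azimuths positive.
    az = atan2(sin(h), cos(h) * sin(phi) - tan(d) * cos(phi))
    return degrees(alt), degrees(az) % 360.0

# An object on the meridian south of the observer (h = 0) has A = 0:
print(equatorial_to_horizontal(0.0, 20.0, 52.0))  # (58.0, 0.0)
```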
Equatorial ↔ galactic
These equations are for converting equatorial coordinates to galactic coordinates:

sin b = sin δ_G sin δ + cos δ_G cos δ cos(α − α_G)
cos b sin(l_NCP − l) = cos δ sin(α − α_G)
cos b cos(l_NCP − l) = cos δ_G sin δ − sin δ_G cos δ cos(α − α_G)

α_G and δ_G are the equatorial coordinates of the north galactic pole and l_NCP is the galactic longitude of the north celestial pole. Referred to J2000.0 the values of these quantities are:

α_G = 192.85948°, δ_G = 27.12825°, l_NCP = 122.93192°
If the equatorial coordinates are referred to another equinox, they must be precessed to their place at J2000.0 before applying these formulae.
These equations convert back to equatorial coordinates referred to J2000.0:

sin δ = sin δ_G sin b + cos δ_G cos b cos(l_NCP − l)
cos δ sin(α − α_G) = cos b sin(l_NCP − l)
cos δ cos(α − α_G) = cos δ_G sin b − sin δ_G cos b cos(l_NCP − l)
Notes on conversion
Angles in the degrees ( ° ), minutes ( ′ ), and seconds ( ″ ) of sexagesimal measure must be converted to decimal before calculations are performed. Whether they are converted to decimal degrees or radians depends upon the particular calculating machine or program. Negative angles must be carefully handled; −10° 20′ 30″ must be converted as −10° −20′ −30″.
Angles in the hours ( h ), minutes ( m ), and seconds ( s ) of time measure must be converted to decimal degrees or radians before calculations are performed. 1h = 15°; 1m = 15′; 1s = 15″
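For concreteness, a small Python sketch of these conversions (helper names are ours); note the sign handling for negative sexagesimal angles described above:

```python
def dms_to_deg(sign, d, m, s):
    """Degrees, arcminutes, arcseconds -> decimal degrees; sign is +1 or -1."""
    return sign * (d + m / 60.0 + s / 3600.0)

def hms_to_deg(h, m, s):
    """Hours, minutes, seconds of time -> decimal degrees (1h = 15 deg)."""
    return 15.0 * (h + m / 60.0 + s / 3600.0)

print(dms_to_deg(-1, 10, 20, 30))  # -10.341666..., i.e. -10deg -20' -30"
print(hms_to_deg(17, 45, 40.04))   # ~266.4168
```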
Angles greater than 360° (2π) or less than 0° may need to be reduced to the range 0°−360° (0–2π) depending upon the particular calculating machine or program.
The cosine of a latitude (declination, ecliptic and galactic latitude, and altitude) is never negative by definition, since the latitude varies between −90° and +90°.
Inverse trigonometric functions arcsine, arccosine and arctangent are quadrant-ambiguous, and results should be carefully evaluated. Use of the two-argument arctangent function (denoted in computing as atan2(y, x) or arctan2(y, x), which calculates the arctangent of y/x using the sign of both arguments to determine the right quadrant) is recommended when calculating longitude/right ascension/azimuth. An equation which finds the sine, followed by the arcsin function, is recommended when calculating latitude/declination/altitude.
Azimuth (A) is referred here to the south point of the horizon, the common astronomical reckoning. An object on the meridian to the south of the observer has A = h = 0° with this usage. However, in Astropy's AltAz, in the Large Binocular Telescope FITS file convention, in XEphem, in the IAU library Standards of Fundamental Astronomy and Section B of the Astronomical Almanac, for example, the azimuth is East of North. In navigation and some other disciplines, azimuth is figured from the north.
The equations for altitude (a) do not account for atmospheric refraction.
The equations for horizontal coordinates do not account for diurnal parallax, that is, the small offset in the position of a celestial object caused by the position of the observer on the Earth's surface. This effect is significant for the Moon, less so for the planets, minute for stars or more distant objects.
Observer's longitude (λ_o) here is measured positively westward from the prime meridian; this is contrary to current IAU standards.
See also
Apparent longitude
Notes
References
External links
NOVAS, the United States Naval Observatory's Vector Astrometry Software, an integrated package of subroutines and functions for computing various commonly needed quantities in positional astronomy.
SuperNOVAS a maintained fork of NOVAS C 3.1 with bug fixes, improvements, new features, and online documentation.
SOFA, the IAU's Standards of Fundamental Astronomy, an accessible and authoritative set of algorithms and procedures that implement standard models used in fundamental astronomy.
This article was originally based on Jason Harris' Astroinfo, which is accompanied by KStars, a KDE Desktop Planetarium for Linux/KDE.
Cartography
Concepts in astronomy
Navigation | Astronomical coordinate systems | [
"Physics",
"Astronomy",
"Mathematics"
] | 2,083 | [
"Concepts in astronomy",
"Astronomical coordinate systems",
"Coordinate systems"
] |
48,384 | https://en.wikipedia.org/wiki/Equatorial%20coordinate%20system | The equatorial coordinate system is a celestial coordinate system widely used to specify the positions of celestial objects. It may be implemented in spherical or rectangular coordinates, both defined by an origin at the centre of Earth, a fundamental plane consisting of the projection of Earth's equator onto the celestial sphere (forming the celestial equator), a primary direction towards the March equinox, and a right-handed convention.
The origin at the centre of Earth means the coordinates are geocentric, that is, as seen from the centre of Earth as if it were transparent. The fundamental plane and the primary direction mean that the coordinate system, while aligned with Earth's equator and pole, does not rotate with the Earth, but remains relatively fixed against the background stars. A right-handed convention means that coordinates increase northward from and eastward around the fundamental plane.
Primary direction
This description of the orientation of the reference frame is somewhat simplified; the orientation is not quite fixed. A slow motion of Earth's axis, precession, causes a slow, continuous turning of the coordinate system westward about the poles of the ecliptic, completing one circuit in about 26,000 years. Superimposed on this is a smaller motion of the ecliptic, and a small oscillation of the Earth's axis, nutation.
In order to fix the exact primary direction, these motions necessitate the specification of the equinox of a particular date, known as an epoch, when giving a position. The three most commonly used are:
Mean equinox of a standard epoch (usually J2000.0, but may include B1950.0, B1900.0, etc.) is a fixed standard direction, allowing positions established at various dates to be compared directly.
Mean equinox of date is the intersection of the ecliptic of "date" (that is, the ecliptic in its position at "date") with the mean equator (that is, the equator rotated by precession to its position at "date", but free from the small periodic oscillations of nutation). Commonly used in planetary orbit calculation.
True equinox of date is the intersection of the ecliptic of "date" with the true equator (that is, the mean equator plus nutation). This is the actual intersection of the two planes at any particular moment, with all motions accounted for.
A position in the equatorial coordinate system is thus typically specified true equinox and equator of date, mean equinox and equator of J2000.0, or similar. Note that there is no "mean ecliptic", as the ecliptic is not subject to small periodic oscillations.
Spherical coordinates
Use in astronomy
A star's spherical coordinates are often expressed as a pair, right ascension and declination, without a distance coordinate. The direction of sufficiently distant objects is the same for all observers, and it is convenient to specify this direction with the same coordinates for all. In contrast, in the horizontal coordinate system, a star's position differs from observer to observer based on their positions on the Earth's surface, and is continuously changing with the Earth's rotation.
Telescopes equipped with equatorial mounts and setting circles employ the equatorial coordinate system to find objects. Setting circles in conjunction with a star chart or ephemeris allow the telescope to be easily pointed at known objects on the celestial sphere.
Declination
The declination symbol δ (lower case "delta", abbreviated DEC) measures the angular distance of an object perpendicular to the celestial equator, positive to the north, negative to the south. For example, the north celestial pole has a declination of +90°. The origin for declination is the celestial equator, which is the projection of the Earth's equator onto the celestial sphere. Declination is analogous to terrestrial latitude.
Right ascension
The right ascension symbol α (lower case "alpha", abbreviated RA) measures the angular distance of an object eastward along the celestial equator from the March equinox to the hour circle passing through the object. The March equinox point is one of the two points where the ecliptic intersects the celestial equator. Right ascension is usually measured in sidereal hours, minutes and seconds instead of degrees, a result of the method of measuring right ascensions by timing the passage of objects across the meridian as the Earth rotates. There are 360°/24h = 15° in one hour of right ascension, and 24h of right ascension around the entire celestial equator.
When used together, right ascension and declination are usually abbreviated RA/Dec.
Hour angle
Alternatively to right ascension, hour angle (abbreviated HA or LHA, local hour angle), a left-handed system, measures the angular distance of an object westward along the celestial equator from the observer's meridian to the hour circle passing through the object. Unlike right ascension, hour angle is always increasing with the rotation of Earth. Hour angle may be considered a means of measuring the time since upper culmination, the moment when an object contacts the meridian overhead.
A culminating star on the observer's meridian is said to have a zero hour angle (0h). One sidereal hour (approximately 0.9973 solar hours) later, Earth's rotation will carry the star to the west of the meridian, and its hour angle will be 1h. When calculating topocentric phenomena, right ascension may be converted into hour angle as an intermediate step.
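As a sketch of that intermediate step (the function name is ours), with local sidereal time and right ascension both in sidereal hours:

```python
def hour_angle(lst_hours, ra_hours):
    # 0h at upper culmination; grows westward with Earth's rotation.
    return (lst_hours - ra_hours) % 24.0

print(hour_angle(18.0, 17.76))  # a star ~0.24 sidereal hours past the meridian
```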
Rectangular coordinates: geocentric equatorial coordinates
There are a number of rectangular variants of equatorial coordinates. All have:
The origin at the centre of the Earth.
The fundamental plane in the plane of the Earth's equator.
The primary direction (the x axis) toward the March equinox, that is, the place where the Sun crosses the celestial equator in a northward direction in its annual apparent circuit around the ecliptic.
A right-handed convention, specifying a y axis 90° to the east in the fundamental plane and a z axis along the north polar axis.
The reference frames do not rotate with the Earth (in contrast to Earth-centred, Earth-fixed frames), remaining always directed toward the equinox, and drifting over time with the motions of precession and nutation.
In astronomy:
The position of the Sun is often specified in the geocentric equatorial rectangular coordinates X, Y, Z and a fourth distance coordinate, R (equal to √(X² + Y² + Z²)), in units of the astronomical unit.
The positions of the planets and other Solar System bodies are often specified in the geocentric equatorial rectangular coordinates x, y, z and a fourth distance coordinate, r (equal to √(x² + y² + z²)), in units of the astronomical unit. These rectangular coordinates are related to the corresponding spherical coordinates by

x = r cos δ cos α
y = r cos δ sin α
z = r sin δ
In astrodynamics:
The positions of artificial Earth satellites are specified in geocentric equatorial coordinates, also known as geocentric equatorial inertial (GEI), Earth-centred inertial (ECI), and conventional inertial system (CIS), all of which are equivalent in definition to the astronomical geocentric equatorial rectangular frames, above. In the geocentric equatorial frame, the x, y and z axes are often designated I, J and K, respectively, or the frame's basis is specified by the unit vectors Î, Ĵ and K̂.
The Geocentric Celestial Reference Frame (GCRF) is the geocentric equivalent of the International Celestial Reference Frame (ICRF). Its primary direction is the equinox of J2000.0, and does not move with precession and nutation, but it is otherwise equivalent to the above systems.
Generalization: heliocentric equatorial coordinates
In astronomy, there is also a heliocentric rectangular variant of equatorial coordinates, designated x, y, z, which has:
The origin at the centre of the Sun.
The fundamental plane in the plane of the Earth's equator.
The primary direction (the x axis) toward the March equinox.
A right-handed convention, specifying a y axis 90° to the east in the fundamental plane and a z axis along Earth's north polar axis.
This frame is similar to the x, y, z frame above, except that the origin is removed to the centre of the Sun. It is commonly used in planetary orbit calculation. The three astronomical rectangular coordinate systems are related by a simple translation of the origin between the Earth and the Sun, as sketched below.
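A minimal restatement of that translation, with subscripts added here for clarity (the article's own symbols for the three frames are not reproduced above); (X, Y, Z) are the geocentric rectangular coordinates of the Sun:

x_geocentric = x_heliocentric + X
y_geocentric = y_heliocentric + Y
z_geocentric = z_heliocentric + Z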
See also
Celestial coordinate system
Planetary coordinate system
Galactic coordinate system
Polar distance
Spherical astronomy
Star position
References
External links
MEASURING THE SKY A Quick Guide to the Celestial Sphere James B. Kaler, University of Illinois
Celestial Equatorial Coordinate System University of Nebraska-Lincoln
Celestial Equatorial Coordinate Explorers University of Nebraska-Lincoln
Astronomical coordinate systems | Equatorial coordinate system | [
"Astronomy",
"Mathematics"
] | 1,736 | [
"Astronomical coordinate systems",
"Coordinate systems"
] |
48,386 | https://en.wikipedia.org/wiki/Horizontal%20coordinate%20system | The horizontal coordinate system is a celestial coordinate system that uses the observer's local horizon as the fundamental plane to define two angles of a spherical coordinate system: altitude and azimuth.
Therefore, the horizontal coordinate system is sometimes called the az/el system, the alt/az system, or the alt-azimuth system, among others. In an altazimuth mount of a telescope, the instrument's two axes follow altitude and azimuth.
Definition
This celestial coordinate system divides the sky into two hemispheres: The upper hemisphere, where objects are above the horizon and are visible, and the lower hemisphere, where objects are below the horizon and cannot be seen, since the Earth obstructs views of them. The great circle separating the hemispheres is called the celestial horizon, which is defined as the great circle on the celestial sphere whose plane is normal to the local gravity vector (the vertical direction). In practice, the horizon can be defined as the plane tangent to a quiet, liquid surface, such as a pool of mercury, or by using a bull's eye level. The pole of the upper hemisphere is called the zenith and the pole of the lower hemisphere is called the nadir.
The following are two independent horizontal angular coordinates:
Altitude (alt. or altitude angle), sometimes referred to as elevation (el. or elevation angle), is the angle between the object and the observer's local horizon. For visible objects, it is an angle between 0° and 90°.
Azimuth (az.) is the angle of the object around the horizon, usually measured from true north and increasing eastward. Exceptions are, for example, ESO's FITS convention where it is measured from the south and increasing westward, or the FITS convention of the Sloan Digital Sky Survey where it is measured from the south and increasing eastward.
A horizontal coordinate system should not be confused with a topocentric coordinate system. Horizontal coordinates define the observer's orientation, but not location of the origin, while topocentric coordinates define the origin location, on the Earth's surface, in contrast to a geocentric celestial system.
General features
The horizontal coordinate system is fixed to a location on Earth, not the stars. Therefore, the altitude and azimuth of an object in the sky changes with time, as the object appears to drift across the sky with Earth's rotation. In addition, since the horizontal system is defined by the observer's local horizon, the same object viewed from different locations on Earth at the same time will have different values of altitude and azimuth.
The cardinal points on the horizon have specific values of azimuth that are helpful references.
Horizontal coordinates are very useful for determining the rise and set times of an object in the sky. When an object's altitude is 0°, it is on the horizon. If at that moment its altitude is increasing, it is rising, but if its altitude is decreasing, it is setting. However, all objects on the celestial sphere are subject to diurnal motion, which always appears to be westward.
A northern observer can determine whether altitude is increasing or decreasing by instead considering the azimuth of the celestial object:
If the azimuth is between 0° and 180° (north–east–south), the object is rising.
If the azimuth is between 180° and 360° (south–west–north), the object is setting.
There are the following special cases:
All directions are south when viewed from the North Pole, and all directions are north when viewed from the South Pole, so the azimuth is undefined in both locations. When viewed from either pole, a star (or any object with fixed equatorial coordinates) has constant altitude and thus never rises or sets. The Sun, Moon, and planets can rise or set over the span of a year when viewed from the poles because their declinations are constantly changing.
When viewed from the equator, objects on the celestial poles stay at fixed points, perched on the horizon.
See also
Azimuth
Astronomical coordinate systems
Geocentric coordinates
Horizon
Vertical and horizontal
Meridian (astronomy)
Sextant
Solar declination
Spherical coordinate system
Vertical circle
Zenith
Footnotes
References
External links
Astronomical coordinate systems | Horizontal coordinate system | [
"Astronomy",
"Mathematics"
] | 864 | [
"Astronomical coordinate systems",
"Horizontal coordinate system",
"Coordinate systems"
] |
48,387 | https://en.wikipedia.org/wiki/Ecliptic%20coordinate%20system | In astronomy, the ecliptic coordinate system is a celestial coordinate system commonly used for representing the apparent positions, orbits, and pole orientations of Solar System objects. Because most planets (except Mercury) and many small Solar System bodies have orbits with only slight inclinations to the ecliptic, using it as the fundamental plane is convenient. The system's origin can be the center of either the Sun or Earth, its primary direction is towards the March equinox, and it has a right-hand convention. It may be implemented in spherical or rectangular coordinates.
Primary direction
The celestial equator and the ecliptic are slowly moving due to perturbing forces on the Earth, therefore the orientation of the primary direction, their intersection at the March equinox, is not quite fixed. A slow motion of Earth's axis, precession, causes a slow, continuous turning of the coordinate system westward about the poles of the ecliptic, completing one circuit in about 26,000 years. Superimposed on this is a smaller motion of the ecliptic, and a small oscillation of the Earth's axis, nutation.
In order to reference a coordinate system which can be considered as fixed in space, these motions require specification of the equinox of a particular date, known as an epoch, when giving a position in ecliptic coordinates. The three most commonly used are:
Mean equinox of a standard epoch (usually the J2000.0 epoch, but may include B1950.0, B1900.0, etc.) is a fixed standard direction, allowing positions established at various dates to be compared directly.
Mean equinox of date is the intersection of the ecliptic of "date" (that is, the ecliptic in its position at "date") with the mean equator (that is, the equator rotated by precession to its position at "date", but free from the small periodic oscillations of nutation). Commonly used in planetary orbit calculation.
True equinox of date is the intersection of the ecliptic of "date" with the true equator (that is, the mean equator plus nutation). This is the actual intersection of the two planes at any particular moment, with all motions accounted for.
A position in the ecliptic coordinate system is thus typically specified true equinox and ecliptic of date, mean equinox and ecliptic of J2000.0, or similar. Note that there is no "mean ecliptic", as the ecliptic is not subject to small periodic oscillations.
Spherical coordinates
Ecliptic longitude
Ecliptic longitude or celestial longitude (symbols: heliocentric l, geocentric λ) measures the angular distance of an object along the ecliptic from the primary direction. Like right ascension in the equatorial coordinate system, the primary direction (0° ecliptic longitude) points from the Earth towards the Sun at the March equinox. Because it is a right-handed system, ecliptic longitude is measured positive eastwards in the fundamental plane (the ecliptic) from 0° to 360°. Because of axial precession, the ecliptic longitude of most "fixed stars" (referred to the equinox of date) increases by about 50.3 arcseconds per year, or 83.8 arcminutes per century, the speed of general precession. However, for stars near the ecliptic poles, the rate of change of ecliptic longitude is dominated by the slight movement of the ecliptic (that is, of the plane of the Earth's orbit), so the rate of change may be anything from minus infinity to plus infinity depending on the exact position of the star.
Ecliptic latitude
Ecliptic latitude or celestial latitude (symbols: heliocentric b, geocentric β) measures the angular distance of an object from the ecliptic towards the north (positive) or south (negative) ecliptic pole. For example, the north ecliptic pole has a celestial latitude of +90°. Ecliptic latitude for "fixed stars" is not affected by precession.
Distance
Distance is also necessary for a complete spherical position (symbols: heliocentric r, geocentric Δ). Different distance units are used for different objects. Within the Solar System, astronomical units are used, and for objects near the Earth, Earth radii or kilometers are used.
Historical use
From antiquity through the 18th century, ecliptic longitude was commonly measured using twelve zodiacal signs, each of 30° longitude, a practice that continues in modern astrology. The signs approximately corresponded to the constellations crossed by the ecliptic. Longitudes were specified in signs, degrees, minutes, and seconds. For example, a longitude of Leo 19° 55′ 58″ is 19.933° east of the start of the sign Leo. Since Leo begins 120° from the March equinox, the longitude in modern form is 139° 55′ 58″.
In China, ecliptic longitude is measured using 24 Solar terms, each of 15° longitude, and are used by Chinese lunisolar calendars to stay synchronized with the seasons, which is crucial for agrarian societies.
Rectangular coordinates
A rectangular variant of ecliptic coordinates is often used in orbital calculations and simulations. It has its origin at the center of the Sun (or at the barycenter of the Solar System), its fundamental plane on the ecliptic plane, and the x-axis toward the March equinox. The coordinates have a right-handed convention, that is, if one extends their right thumb upward, it simulates the z-axis, their extended index finger the x-axis, and the curl of the other fingers points generally in the direction of the y-axis.
These rectangular coordinates are related to the corresponding spherical coordinates by

x = r cos β cos λ
y = r cos β sin λ
z = r sin β
Conversion between celestial coordinate systems
Converting Cartesian vectors
Conversion from ecliptic coordinates to equatorial coordinates

x_equatorial = x_ecliptic
y_equatorial = y_ecliptic cos ε − z_ecliptic sin ε
z_equatorial = y_ecliptic sin ε + z_ecliptic cos ε

Conversion from equatorial coordinates to ecliptic coordinates

x_ecliptic = x_equatorial
y_ecliptic = y_equatorial cos ε + z_equatorial sin ε
z_ecliptic = −y_equatorial sin ε + z_equatorial cos ε

where ε is the obliquity of the ecliptic.
See also
Celestial coordinate system
Ecliptic
Ecliptic pole, where the ecliptic latitude is ±90°
Equinox
Equinox (celestial coordinates)
March equinox
Notes and references
External links
The Ecliptic: the Sun's Annual Path on the Celestial Sphere Durham University Department of Physics
Equatorial ↔ Ecliptic coordinate converter
MEASURING THE SKY A Quick Guide to the Celestial Sphere James B. Kaler, University of Illinois
Astronomical coordinate systems | Ecliptic coordinate system | [
"Astronomy",
"Mathematics"
] | 1,331 | [
"Astronomical coordinate systems",
"Coordinate systems"
] |
48,389 | https://en.wikipedia.org/wiki/Galactic%20coordinate%20system | The galactic coordinate system is a celestial coordinate system in spherical coordinates, with the Sun as its center, the primary direction aligned with the approximate center of the Milky Way Galaxy, and the fundamental plane parallel to an approximation of the galactic plane but offset to its north. It uses the right-handed convention, meaning that coordinates are positive toward the north and toward the east in the fundamental plane.
Spherical coordinates
Galactic longitude
Longitude (symbol l) measures the angular distance of an object eastward along the galactic equator from the Galactic Center. Analogous to terrestrial longitude, galactic longitude is usually measured in degrees (°).
Galactic latitude
Latitude (symbol b) measures the angle of an object northward of the galactic equator (or midplane) as viewed from Earth. Analogous to terrestrial latitude, galactic latitude is usually measured in degrees (°).
Definition
The first galactic coordinate system was used by William Herschel in 1785. A number of different coordinate systems, each differing by a few degrees, were used until 1932, when Lund Observatory assembled a set of conversion tables that defined a standard galactic coordinate system based on a galactic north pole at RA 12h 40m, dec +28° (in the B1900.0 epoch convention) and a 0° longitude at the point where the galactic plane and equatorial plane intersected.
In 1958, the International Astronomical Union (IAU) defined the galactic coordinate system in reference to radio observations of galactic neutral hydrogen through the hydrogen line, changing the definition of the Galactic longitude by 32° and the latitude by 1.5°. In the equatorial coordinate system, for equinox and equator of 1950.0, the north galactic pole is defined at right ascension 12h 49m, declination +27.4°, in the constellation Coma Berenices, with a probable error of ±0.1°. Longitude 0° is the great semicircle that originates from this point along the line in position angle 123° with respect to the equatorial pole. The galactic longitude increases in the same direction as right ascension. Galactic latitude is positive towards the north galactic pole, with a plane passing through the Sun and parallel to the galactic equator being 0°, whilst the poles are ±90°. Based on this definition, the galactic poles and equator can be found from spherical trigonometry and can be precessed to other epochs; see the table.
The IAU recommended that during the transition period from the old, pre-1958 system to the new, the old longitude and latitude should be designated lᴵ and bᴵ while the new should be designated lᴵᴵ and bᴵᴵ. This convention is occasionally seen.
Radio source Sagittarius A*, which is the best physical marker of the true Galactic Center, is located at 17h 45m 40.04s, −29° 00′ 28.1″ (J2000). Rounded to the same number of digits as the table, 17h 45.7m, −29.01° (J2000), there is an offset of about 0.07° from the defined coordinate center, well within the 1958 error estimate of ±0.1°. Due to the Sun's position, which currently lies north of the midplane, and the heliocentric definition adopted by the IAU, the galactic coordinates of Sgr A* are latitude 0° 02′ 46″ south, longitude 359° 56′ 39″. Since as defined the galactic coordinate system does not rotate with time, Sgr A* is actually decreasing in longitude at the rate of galactic rotation at the Sun, approximately 5.7 milliarcseconds per year (see Oort constants).
Conversion between equatorial and galactic coordinates
An object's location expressed in the equatorial coordinate system can be transformed into the galactic coordinate system. In these equations, α is right ascension, δ is declination. NGP refers to the coordinate values of the north galactic pole and NCP to those of the north celestial pole:

sin b = sin δ_NGP sin δ + cos δ_NGP cos δ cos(α − α_NGP)
cos b sin(l_NCP − l) = cos δ sin(α − α_NGP)
cos b cos(l_NCP − l) = cos δ_NGP sin δ − sin δ_NGP cos δ cos(α − α_NGP)
The reverse (galactic to equatorial) can also be accomplished with the following conversion formulas:

sin δ = sin δ_NGP sin b + cos δ_NGP cos b cos(l_NCP − l)
cos δ sin(α − α_NGP) = cos b sin(l_NCP − l)
cos δ cos(α − α_NGP) = cos δ_NGP sin b − sin δ_NGP cos b cos(l_NCP − l)
Where:

α_NGP = 192.85948°, δ_NGP = 27.12825°, l_NCP = 122.93192°
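A Python sketch of the equatorial (J2000.0) → galactic conversion above, using the quoted J2000.0 constants (all names are ours):

```python
from math import radians, degrees, sin, cos, asin, atan2

A_NGP = radians(192.85948)  # right ascension of the north galactic pole
D_NGP = radians(27.12825)   # declination of the north galactic pole
L_NCP = 122.93192           # galactic longitude of the north celestial pole

def equatorial_to_galactic(ra_deg, dec_deg):
    a, d = radians(ra_deg), radians(dec_deg)
    b = asin(sin(D_NGP) * sin(d) + cos(D_NGP) * cos(d) * cos(a - A_NGP))
    x = cos(D_NGP) * sin(d) - sin(D_NGP) * cos(d) * cos(a - A_NGP)
    y = cos(d) * sin(a - A_NGP)
    l = (L_NCP - degrees(atan2(y, x))) % 360.0  # quadrant-safe longitude
    return l, degrees(b)

# Sagittarius A* (~266.417 deg, -29.008 deg) lands near l = 359.94, b = -0.05:
print(equatorial_to_galactic(266.417, -29.008))
```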
Rectangular coordinates
In some applications use is made of rectangular coordinates based on galactic longitude and latitude and distance. In some work regarding the distant past or future the galactic coordinate system is taken as rotating so that the x-axis always goes to the centre of the galaxy.
There are two major rectangular variations of galactic coordinates, commonly used for computing space velocities of galactic objects. In these systems the x, y, z axes are designated U, V, W, but the definitions vary by author. In one system, the U axis is directed toward the Galactic Center (l = 0°), and it is a right-handed system (positive towards the east and towards the north galactic pole); in the other, the U axis is directed toward the galactic anticenter (l = 180°), and it is a left-handed system (positive towards the east and towards the north galactic pole).
In the constellations
The galactic equator runs through the following constellations:
Sagittarius
Serpens
Scutum
Aquila
Sagitta
Vulpecula
Cygnus
Cepheus
Cassiopeia
Camelopardalis
Perseus
Auriga
Taurus
Gemini
Orion
Monoceros
Canis Major
Puppis
Vela
Carina
Crux
Centaurus
Circinus
Norma
Ara
Scorpius
Ophiuchus
See also
References
External links
Universal coordinate converter.
Galactic Coordinate System - Wolfram Demonstration
Galactic coordinates, The Internet Encyclopedia of Science
Fiona Vincent, Positional Astronomy: Galactic coordinates , University of St Andrews
An Atlas of the Universe
Astronomical coordinate systems
Milky Way
Orientation (geometry) | Galactic coordinate system | [
"Physics",
"Astronomy",
"Mathematics"
] | 1,082 | [
"Astronomical coordinate systems",
"Topology",
"Space",
"Geometry",
"Coordinate systems",
"Spacetime",
"Orientation (geometry)"
] |
48,392 | https://en.wikipedia.org/wiki/Phenytoin | Phenytoin (PHT), sold under the brand name Dilantin among others, is an anti-seizure medication. It is useful for the prevention of tonic-clonic seizures (also known as grand mal seizures) and focal seizures, but not absence seizures. The intravenous form, fosphenytoin, is used for status epilepticus that does not improve with benzodiazepines. It may also be used for certain heart arrhythmias or neuropathic pain. It can be taken intravenously or by mouth. The intravenous form generally begins working within 30 minutes and is effective for roughly 24 hours. Blood levels can be measured to determine the proper dose.
Common side effects include nausea, stomach pain, loss of appetite, poor coordination, increased hair growth, and enlargement of the gums. Potentially serious side effects include sleepiness, self harm, liver problems, bone marrow suppression, low blood pressure, toxic epidermal necrolysis, and atrophy of the cerebellum. There is evidence that use during pregnancy results in abnormalities in the baby. It appears to be safe to use when breastfeeding. Alcohol may interfere with the medication's effects.
Phenytoin was first made in 1908 by the German chemist Heinrich Biltz and found useful for seizures in 1936. It is on the World Health Organization's List of Essential Medicines. Phenytoin is available as a generic medication. In 2020, it was the 260th most commonly prescribed medication in the United States, with more than 1 million prescriptions.
Medical uses
Seizures
Tonic-clonic seizures: Mainly used in the prophylactic management of tonic-clonic seizures with complex symptomatology (psychomotor seizures). A period of 5–10 days of dosing may be required to achieve anticonvulsant effects.
Focal seizures: Mainly used to protect against the development of focal seizures with complex symptomatology (psychomotor and temporal lobe seizures). Also effective in controlling focal seizures with autonomic symptoms.
Absence seizures: Not used in treatment of pure absence seizures due to risk for increasing frequency of seizures. However, can be used in combination with other anticonvulsants during combined absence and tonic-clonic seizures.
Seizures during surgery: A 2018 meta-analysis found that early antiepileptic treatment with either phenytoin or phenobarbital reduced the risk of seizure in the first week after neurosurgery for brain tumors.
Status epilepticus: Considered after failed treatment using a benzodiazepine due to slow onset of action.
Though phenytoin has been used to treat seizures in infants, as of 2023, its effectiveness in this age group has been evaluated in only one study. Due to the lack of a comparison group, the evidence is inconclusive.
Other
Abnormal heart rhythms: may be used in the treatment of ventricular tachycardia and sudden episodes of atrial tachycardia after other antiarrhythmic medications or cardioversion has failed. It is a class Ib antiarrhythmic.
Digoxin toxicity: Intravenous phenytoin formulation is a medication of choice for arrhythmias caused by cardiac glycoside toxicity.
Trigeminal neuralgia: Second choice drug to carbamazepine.
Special considerations
Phenytoin has a narrow therapeutic index. Its therapeutic range for both anticonvulsant and antiarrhythmic effect is 10–20 μg/mL.
Avoid giving intramuscular formulation unless necessary due to skin cell death and local tissue destruction.
Elderly patients may show earlier signs of toxicity.
In the obese, ideal body weight should be used for dosing calculations (see the sketch after this list).
Pregnancy: Pregnancy category D due to risk of fetal hydantoin syndrome and fetal bleeding. However, optimal seizure control is very important during pregnancy so drug may be continued if benefits outweigh the risks. Due to decreased drug concentrations as a result of plasma volume expansion during pregnancy, dose of phenytoin may need to be increased if only option for seizure control.
Breastfeeding: The manufacturer does not recommend breastfeeding since low concentrations of phenytoin are excreted in breast milk.
Liver disease: Do not use oral loading dose. Consider using decreased maintenance dose.
Kidney disease: Do not use oral loading dose. Can begin with standard maintenance dose and adjust as needed.
Intravenous use is contraindicated in patients with sinus bradycardia, sinoatrial block, second- or third-degree atrioventricular block, Stokes-Adams syndrome, or hypersensitivity to phenytoin, other hydantoins or any ingredient in the respective formulation.
Side effects
Common side effects include nausea, stomach pain, loss of appetite, poor coordination, increased hair growth, and enlargement of the gums. Potentially serious side effects include sleepiness, self harm, liver problems, bone marrow suppression, low blood pressure, and toxic epidermal necrolysis. There is evidence that use during pregnancy results in abnormalities in the baby. Its use appears to be safe during breastfeeding. Alcohol may interfere with the medication's effects.
Heart and blood vessels
Severe low blood pressure and abnormal heart rhythms can be seen with rapid infusion of IV phenytoin. IV infusion should not exceed 50 mg/min in adults or 1–3 mg/kg/min (or 50 mg/min, whichever is slower) in children. Heart monitoring should occur during and after IV infusion. Due to these risks, oral phenytoin should be used if possible.
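As an illustration of the rate limits just quoted, the following minimal Python sketch computes a maximum infusion rate; the function name is hypothetical, the conservative 1 mg/kg/min end of the paediatric range is an assumption, and this is an illustration, not clinical guidance.

def max_iv_rate_mg_per_min(weight_kg, is_child):
    """Illustrative only: infusion-rate cap from the limits stated above."""
    ADULT_CAP = 50.0  # mg/min for adults
    if not is_child:
        return ADULT_CAP
    # Children: 1-3 mg/kg/min or 50 mg/min, whichever is slower;
    # the conservative 1 mg/kg/min end of the range is used here.
    return min(1.0 * weight_kg, ADULT_CAP)

print(max_iv_rate_mg_per_min(20, is_child=True))   # 20.0 mg/min
print(max_iv_rate_mg_per_min(70, is_child=False))  # 50.0 mg/min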
Neurological
At therapeutic doses, phenytoin may produce nystagmus on lateral gaze. At toxic doses, patients experience vertical nystagmus, double vision, sedation, slurred speech, cerebellar ataxia, and tremor. If phenytoin is stopped abruptly, this may result in increased seizure frequency, including status epilepticus.
Phenytoin may accumulate in the cerebral cortex over long periods of time, which can cause atrophy of the cerebellum. The degree of atrophy is related to the duration of phenytoin treatment, not to the dosage of the medication.
Phenytoin is known to be a causal factor in the development of peripheral neuropathy.
Blood
Folate is present in food in a polyglutamate form, which is then converted into monoglutamates by intestinal conjugase to be absorbed by the jejunum. Phenytoin acts by inhibiting this enzyme, thereby causing folate deficiency, and thus megaloblastic anemia.
Other side effects may include: agranulocytosis, aplastic anemia, decreased white blood cell count, and a low platelet count.
Pregnancy
Phenytoin is a known teratogen: children exposed to phenytoin are at a higher risk of birth defects than children born to women without epilepsy and to women with untreated epilepsy. The birth defects, which occur in approximately 6% of exposed children, include neural tube defects, heart defects and craniofacial abnormalities, including broad nasal bridge, cleft lip and palate, and smaller than normal head. The effect on IQ cannot be determined, as no study involves phenytoin as monotherapy; however, poorer language abilities and delayed motor development may be associated with maternal use of phenytoin during pregnancy. This syndrome resembles the well-described fetal alcohol syndrome and has been referred to as "fetal hydantoin syndrome". Some recommend avoiding polytherapy and maintaining the minimal dose possible during pregnancy, but acknowledge that current data fail to demonstrate a dose effect on the risk of birth defects. Data now being collected by the Epilepsy and Antiepileptic Drug Pregnancy Registry may one day answer this question definitively.
Cancer
There is no good evidence to suggest that phenytoin is a human carcinogen. However, lymph node abnormalities have been observed, including malignancies.
Mouth
Phenytoin has been associated with drug-induced gingival enlargement (overgrowth of the gums), probably due to the above-mentioned folate deficiency; indeed, evidence from a randomized controlled trial suggests that folic acid supplementation can prevent gingival enlargement in children who take phenytoin. Plasma concentrations needed to induce gingival lesions have not been clearly defined. Effects consist of the following: bleeding upon probing, increased gingival exudate, pronounced gingival inflammatory response to plaque levels, associated in some instances with bone loss but without tooth detachment.
Skin
Hypertrichosis (excessive hairiness), Stevens–Johnson syndrome, purple glove syndrome, rash, exfoliative dermatitis, itching, and coarsening of facial features can be seen in those taking phenytoin.
Phenytoin therapy has been linked to the life-threatening skin reactions Stevens–Johnson syndrome (SJS) and toxic epidermal necrolysis (TEN). These conditions are significantly more common in patients with a particular HLA-B allele, HLA-B*1502. This allele occurs almost exclusively in patients with ancestry across broad areas of Asia, including South Asian Indians.
Phenytoin is primarily metabolized to its inactive form by the enzyme CYP2C9. Variations within the CYP2C9 gene that result in decreased enzymatic activity have been associated with increased phenytoin concentrations, as well as reports of drug toxicities due to these increased concentrations. The U.S. Food and Drug Administration (FDA) notes on the phenytoin drug label that since strong evidence exists linking HLA-B*1502 with the risk of developing SJS or TEN in patients taking carbamazepine, consideration should be given to avoiding phenytoin as an alternative to carbamazepine in patients carrying this allele.
Immune system
Phenytoin has been known to cause drug-induced lupus.
Phenytoin is also associated with induction of reversible IgA deficiency.
Psychological
Phenytoin may increase the risk of suicidal thoughts or behavior. People on phenytoin should be monitored for any changes in mood, the development or worsening of depression, and/or any suicidal thoughts or behavior.
Bones
Chronic phenytoin use has been associated with decreased bone density and increased bone fractures. Phenytoin induces metabolizing enzymes in the liver. This leads to increased metabolism of vitamin D, thus decreased vitamin D levels. Vitamin D deficiency, as well as low calcium and phosphate in the blood cause decreased bone mineral density.
Interactions
Phenytoin is an inducer of the CYP3A4 and CYP2C9 families of the P450 enzyme responsible for the liver's degradation of various drugs.
A 1981 study by the National Institutes of Health showed that antacids administered concomitantly with phenytoin "altered not only the extent of absorption but also appeared to alter the rate of absorption. Antacids administered in a peptic ulcer regimen may decrease the AUC of a single dose of phenytoin. Patients should be cautioned against concomitant use of antacids and phenytoin."
Warfarin and trimethoprim increase serum phenytoin levels and prolong the serum half-life of phenytoin by inhibiting its metabolism. Consider using other options if possible.
In general, phenytoin can interact with the following drugs:
Antidepressant drugs
Antifungal drugs such as fluconazole, ketoconazole
Antibiotics such as metronidazole, chloramphenicol, clarithromycin, and azithromycin
Corticosteroids (such as betamethasone, dexamethasone, hydrocortisone, and prednisolone)
L-DOPA (phenytoin can cause the beneficial effect of levodopa to disappear.)
Pharmacology
Mechanism of action
Phenytoin is believed to protect against seizures by causing voltage-dependent block of voltage gated sodium channels. This blocks sustained high frequency repetitive firing of action potentials. This is accomplished by reducing the amplitude of sodium-dependent action potentials through enhancing steady-state inactivation. Sodium channels exist in three main conformations: the resting state, the open state, and the inactive state.
Phenytoin binds preferentially to the inactive form of the sodium channel. Because it takes time for the bound drug to dissociate from the inactive channel, there is a time-dependent block of the channel. Since the fraction of inactive channels is increased by membrane depolarization as well as by repetitive firing, the binding to the inactive state by phenytoin sodium can produce voltage-dependent, use-dependent and time-dependent block of sodium-dependent action potentials.
The primary site of action appears to be the motor cortex where spread of seizure activity is inhibited. Possibly by promoting sodium efflux from neurons, phenytoin tends to stabilize the threshold against hyperexcitability caused by excessive stimulation or environmental changes capable of reducing membrane sodium gradient. This includes the reduction of post-tetanic potentiation at synapses which prevents cortical seizure foci from detonating adjacent cortical areas. Phenytoin reduces the maximal activity of brain stem centers responsible for the tonic phase of generalized tonic-clonic seizures.
Pharmacokinetics
Phenytoin elimination kinetics show mixed-order, non-linear behaviour at therapeutic concentrations. At low concentrations phenytoin is cleared by first-order kinetics, and at high concentrations by zero-order kinetics, so a small increase in dose may lead to a large increase in drug concentration as elimination becomes saturated. The time to reach steady state is often longer than two weeks.
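A minimal Python sketch of this saturable (Michaelis–Menten) elimination, with made-up Vmax and Km values rather than clinical parameters, shows near first-order decay at low concentrations and near zero-order decay at high concentrations:

def simulate_elimination(c0, vmax=1.0, km=4.0, dt=0.01, t_end=30.0):
    """Euler integration of dC/dt = -Vmax * C / (Km + C); values illustrative."""
    c, t, series = c0, 0.0, []
    while t < t_end:
        series.append((t, c))
        c -= vmax * c / (km + c) * dt  # saturable elimination step
        t += dt
    return series

low = simulate_elimination(c0=1.0)     # C << Km: near-exponential (first-order) decay
high = simulate_elimination(c0=100.0)  # C >> Km: near-linear (zero-order) decay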
History
Phenytoin (diphenylhydantoin) was first synthesized by German chemist Heinrich Biltz in 1908.
Biltz sold his discovery to Parke-Davis, which did not find an immediate use for it. In 1938, other physicians, including H. Houston Merritt and Tracy Putnam, discovered phenytoin's usefulness for controlling seizures, without the sedative effects associated with phenobarbital.
According to Goodman and Gilman's Pharmacological Basis of Therapeutics:
In contrast to the earlier accidental discovery of the antiseizure properties of potassium bromide and phenobarbital, phenytoin was the product of a search among nonsedative structural relatives of phenobarbital for agents capable of suppressing electroshock convulsions in laboratory animals.
It was approved by the FDA in 1953 for use in seizures.
Jack Dreyfus, founder of the Dreyfus Fund, became a major proponent of phenytoin as a means to control nervousness and depression when he received a prescription for Dilantin in 1966. He claimed to have supplied large amounts of the drug to Richard Nixon throughout the late 1960s and early 1970s, although this is disputed by former White House aides and presidential historians.
Dreyfus' experience with phenytoin is outlined in his book, A Remarkable Medicine Has Been Overlooked. Despite more than $70 million in personal financing, his push to see phenytoin evaluated for alternative uses has had little lasting effect on the medical community. This was partially because Parke-Davis was reluctant to invest in a drug nearing the end of its patent life, and partially due to mixed results from various studies.
In 2008, the drug was put on the FDA's Potential Signals of Serious Risks List to be further evaluated for approval. The list identifies medications for which the FDA has identified potential safety issues, but has not yet established a causal relationship between the drug and the listed risk. To address this concern, the Warnings and Precautions section of the labeling for Dilantin injection was updated to include additional information about purple glove syndrome in November 2011.
Society and culture
Economics
Phenytoin is available as a generic medication.
Since September 2012, the marketing licence in the UK has been held by Flynn Pharma Ltd, of Dublin, Ireland, and the product, although identical, has been called Phenytoin Sodium xxmg Flynn Hard Capsules. (The xxmg in the name refers to the strength—for example "Phenytoin sodium 25 mg Flynn Hard Capsules"). The capsules are still made by Pfizer's Goedecke subsidiary's plant in Freiburg, Germany, and they still have Epanutin printed on them. After Pfizer's sale of the UK marketing licence to Flynn Pharma, the price of a 28-pack of 25 mg phenytoin sodium capsules marked Epanutin rose from 66p (about $0.88) to (about $25.06). Capsules of other strengths also went up in price by the same factor—2,384%, costing the UK's National Health Service an extra (about $68.44 million) a year. The companies were referred to the Competition and Markets Authority (CMA) who found that they had exploited their dominant position in the market to charge "excessive and unfair" prices.
The CMA imposed a record fine on the manufacturer Pfizer, and a fine on the distributor Flynn Pharma and ordered the companies to reduce their prices.
Brand names
Phenytoin is marketed under many brand names worldwide.
In the US, Dilantin is marketed by Viatris after Upjohn was spun off from Pfizer.
Research
Tentative evidence suggests that topical phenytoin is useful in wound healing in people with chronic skin wounds. A meta-analysis also supported the use of phenytoin in managing various ulcers. Phenytoin is incorporated into compounded medications to optimize wound treatment, often in combination with misoprostol.
Some clinical trials have explored whether phenytoin can be used as a neuroprotector in multiple sclerosis.
References
Further reading
External links
English translation of 1908 German article on phenytoin synthesis by Heinrich Biltz
CYP1A2 inducers
Antiarrhythmic agents
Anticonvulsants
Aromatase inhibitors
CYP3A4 inducers
Dermatoxins
GABAA receptor positive allosteric modulators
Hepatotoxins
Hydantoins
IARC Group 2B carcinogens
Nephrotoxins
Phenyl compounds
Selective estrogen receptor modulators
Sigma receptor ligands
Sodium channel blockers
Teratogens
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Phenytoin | [
"Chemistry"
] | 3,891 | [
"Teratogens"
] |
48,395 | https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes%20equations | The Navier–Stokes equations ( ) are partial differential equations which describe the motion of viscous fluid substances. They were named after French engineer and physicist Claude-Louis Navier and the Irish physicist and mathematician George Gabriel Stokes. They were developed over several decades of progressively building the theories, from 1822 (Navier) to 1842–1850 (Stokes).
The Navier–Stokes equations mathematically express momentum balance for Newtonian fluids and make use of conservation of mass. They are sometimes accompanied by an equation of state relating pressure, temperature and density. They arise from applying Isaac Newton's second law to fluid motion, together with the assumption that the stress in the fluid is the sum of a diffusing viscous term (proportional to the gradient of velocity) and a pressure term—hence describing viscous flow. The difference between them and the closely related Euler equations is that Navier–Stokes equations take viscosity into account while the Euler equations model only inviscid flow. As a result, the Navier–Stokes equations are parabolic and therefore have better analytic properties, at the expense of having less mathematical structure (e.g. they are never completely integrable).
The Navier–Stokes equations are useful because they describe the physics of many phenomena of scientific and engineering interest. They may be used to model the weather, ocean currents, water flow in a pipe and air flow around a wing. The Navier–Stokes equations, in their full and simplified forms, help with the design of aircraft and cars, the study of blood flow, the design of power stations, the analysis of pollution, and many other problems. Coupled with Maxwell's equations, they can be used to model and study magnetohydrodynamics.
The Navier–Stokes equations are also of great interest in a purely mathematical sense. Despite their wide range of practical uses, it has not yet been proven whether smooth solutions always exist in three dimensions—i.e., whether they are infinitely differentiable (or even just bounded) at all points in the domain. This is called the Navier–Stokes existence and smoothness problem. The Clay Mathematics Institute has called this one of the seven most important open problems in mathematics and has offered a US$1 million prize for a solution or a counterexample.
Flow velocity
The solution of the equations is a flow velocity. It is a vector field—to every point in a fluid, at any moment in a time interval, it gives a vector whose direction and magnitude are those of the velocity of the fluid at that point in space and at that moment in time. It is usually studied in three spatial dimensions and one time dimension, although two (spatial) dimensional and steady-state cases are often used as models, and higher-dimensional analogues are studied in both pure and applied mathematics. Once the velocity field is calculated, other quantities of interest such as pressure or temperature may be found using dynamical equations and relations. This is different from what one normally sees in classical mechanics, where solutions are typically trajectories of position of a particle or deflection of a continuum. Studying velocity instead of position makes more sense for a fluid, although for visualization purposes one can compute various trajectories. In particular, the streamlines of a vector field, interpreted as flow velocity, are the paths along which a massless fluid particle would travel. These paths are the integral curves whose derivative at each point is equal to the vector field, and they can represent visually the behavior of the vector field at a point in time.
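As a sketch of this idea, the following Python snippet integrates one such integral curve with SciPy for an assumed solid-body-rotation velocity field; any other field could be substituted:

import numpy as np
from scipy.integrate import solve_ivp

# A streamline is an integral curve of the velocity field: dx/ds = u(x).
def velocity(s, xy):
    x, y = xy
    return [-y, x]  # assumed solid-body-rotation field u = (-y, x)

sol = solve_ivp(velocity, (0.0, 2 * np.pi), [1.0, 0.0], max_step=0.01)
# sol.y traces a circle: the path a massless fluid particle would follow.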
General continuum equations
The Navier–Stokes momentum equation can be derived as a particular form of the Cauchy momentum equation, whose general convective form is:
By setting the Cauchy stress tensor to be the sum of a viscosity term (the deviatoric stress) and a pressure term (volumetric stress), we arrive at:
where
is the material derivative, defined as ,
is the (mass) density,
is the flow velocity,
is the divergence,
is the pressure,
is time,
is the deviatoric stress tensor, which has order 2,
represents body accelerations acting on the continuum, for example gravity, inertial accelerations, electrostatic accelerations, and so on.
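For reference, a standard LaTeX rendering of the convective form described above, under assumed notation u (flow velocity), p (pressure), tau (deviatoric stress tensor) and g (body accelerations):

% Convective form of the Cauchy momentum equation:
\[
  \rho \frac{\mathrm{D}\mathbf{u}}{\mathrm{D}t}
    = -\nabla p + \nabla\cdot\boldsymbol{\tau} + \rho\,\mathbf{g},
  \qquad
  \frac{\mathrm{D}}{\mathrm{D}t}
    \equiv \frac{\partial}{\partial t} + \mathbf{u}\cdot\nabla .
\]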
In this form, it is apparent that under the assumption of an inviscid fluid – no deviatoric stress – the Cauchy equations reduce to the Euler equations.
Assuming conservation of mass, and using the known properties of divergence and gradient, we can use the mass continuity equation, which represents the mass per unit volume of a homogeneous fluid with respect to space and time (i.e., the material derivative ) of any finite volume (V), to represent the change of velocity in fluid media:
where
is the material derivative of mass per unit volume (density, ),
is the mathematical operation for the integration throughout the volume (V),
is the partial derivative mathematical operator,
is the divergence of the flow velocity (), which is a scalar field, Note 1
is the gradient of density (), which is the vector derivative of a scalar field, Note 1
Note 1 - Refer to the mathematical operator del represented by the nabla () symbol.
to arrive at the conservation form of the equations of motion. This is often written:
where is the outer product of the flow velocity ():
The left side of the equation describes acceleration, and may be composed of time-dependent and convective components (also the effects of non-inertial coordinates if present). The right side of the equation is in effect a summation of hydrostatic effects, the divergence of deviatoric stress and body forces (such as gravity).
All non-relativistic balance equations, such as the Navier–Stokes equations, can be derived by beginning with the Cauchy equations and specifying the stress tensor through a constitutive relation. By expressing the deviatoric (shear) stress tensor in terms of viscosity and the fluid velocity gradient, and assuming constant viscosity, the above Cauchy equations will lead to the Navier–Stokes equations below.
Convective acceleration
A significant feature of the Cauchy equation and consequently all other continuum equations (including Euler and Navier–Stokes) is the presence of convective acceleration: the effect of acceleration of a flow with respect to space. While individual fluid particles indeed experience time-dependent acceleration, the convective acceleration of the flow field is a spatial effect, one example being fluid speeding up in a nozzle.
Compressible flow
Remark: here, the deviatoric stress tensor is denoted as it was in the general continuum equations and in the incompressible flow section.
The compressible momentum Navier–Stokes equation results from the following assumptions on the Cauchy stress tensor:
the stress is Galilean invariant: it does not depend directly on the flow velocity, but only on spatial derivatives of the flow velocity. So the stress variable is the tensor gradient , or more simply the rate-of-strain tensor:
the deviatoric stress is linear in this variable: , where is independent of the strain-rate tensor, is the fourth-order tensor representing the constant of proportionality, called the viscosity or elasticity tensor, and : is the double-dot product.
the fluid is assumed to be isotropic, as with gases and simple liquids, and consequently is an isotropic tensor; furthermore, since the deviatoric stress tensor is symmetric, by Helmholtz decomposition it can be expressed in terms of two scalar Lamé parameters, the second viscosity and the dynamic viscosity , as it is usual in linear elasticity:
where is the identity tensor, and is the trace of the rate-of-strain tensor. So this decomposition can be explicitly defined as:
Since the trace of the rate-of-strain tensor in three dimensions is the divergence (i.e. rate of expansion) of the flow:
Given this relation, and since the trace of the identity tensor in three dimensions is three:
the trace of the stress tensor in three dimensions becomes:
So by alternatively decomposing the stress tensor into isotropic and deviatoric parts, as usual in fluid dynamics:
Introducing the bulk viscosity ,
we arrive at the linear constitutive equation in the form usually employed in thermal hydraulics:
which can also be arranged in the other usual form:
Note that in the compressible case the pressure is no longer proportional to the isotropic stress term, since there is the additional bulk viscosity term:
and the deviatoric stress tensor is still coincident with the shear stress tensor (i.e. the deviatoric stress in a Newtonian fluid has no normal stress components), and it has a compressibility term in addition to the incompressible case, which is proportional to the shear viscosity:
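In symbols, the decomposition described above is usually written as follows (a standard rendering, with assumed notation lambda and mu for the two Lamé parameters and I for the identity tensor):

% Linear, isotropic constitutive relation and the bulk viscosity:
\[
  \boldsymbol{\tau}
    = \lambda\,(\nabla\cdot\mathbf{u})\,\mathbf{I}
    + \mu\left(\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathsf T}\right),
  \qquad
  \zeta \equiv \lambda + \tfrac{2}{3}\,\mu .
\]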
Both bulk viscosity and dynamic viscosity need not be constant – in general, they depend on two thermodynamic variables if the fluid contains a single chemical species, say for example, pressure and temperature. Any equation that expresses one of these transport coefficients in terms of the conservation variables is called an equation of state.
In their most general form, the Navier–Stokes equations become
in index notation, the equation can be written as
The corresponding equation in conservation form can be obtained by considering that, given the mass continuity equation, the left side is equivalent to:
To give finally:
Navier–Stokes momentum equation (conservative form):
Apart from its dependence on pressure and temperature, the second viscosity coefficient also depends on the process; that is to say, the second viscosity coefficient is not just a material property. Example: in the case of a sound wave with a definite frequency that alternately compresses and expands a fluid element, the second viscosity coefficient depends on the frequency of the wave. This dependence is called dispersion. In some cases, the second viscosity can be assumed to be constant, in which case the effect of the volume viscosity is that the mechanical pressure is not equivalent to the thermodynamic pressure, as demonstrated below.
However, this difference is usually neglected (that is, whenever we are not dealing with processes such as sound absorption and attenuation of shock waves, where the second viscosity coefficient becomes important) by explicitly assuming . The assumption of setting is called the Stokes hypothesis. The validity of the Stokes hypothesis can be demonstrated for a monatomic gas both experimentally and from the kinetic theory; for other gases and liquids, the Stokes hypothesis is generally incorrect. With the Stokes hypothesis, the Navier–Stokes equations become
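A standard rendering of the resulting equation, under the same assumed notation as above:

% Compressible momentum equation under the Stokes hypothesis (zero bulk viscosity):
\[
  \rho \frac{\mathrm{D}\mathbf{u}}{\mathrm{D}t}
    = -\nabla p
    + \nabla\cdot\!\left[\mu\left(\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathsf T}
      - \tfrac{2}{3}(\nabla\cdot\mathbf{u})\,\mathbf{I}\right)\right]
    + \rho\,\mathbf{g} .
\]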
If the dynamic and bulk viscosities are assumed to be uniform in space, the equations in convective form can be simplified further. By computing the divergence of the stress tensor, since the divergence of tensor is and the divergence of tensor is , one finally arrives at the compressible Navier–Stokes momentum equation:
where is the material derivative. is the shear kinematic viscosity and is the bulk kinematic viscosity. The left-hand side changes in the conservation form of the Navier–Stokes momentum equation.
By bringing the operator on the flow velocity on the left side, one also has:
The convective acceleration term can also be written as
where the vector is known as the Lamb vector.
For the special case of an incompressible flow, the pressure constrains the flow so that the volume of fluid elements is constant: isochoric flow resulting in a solenoidal velocity field with .
Incompressible flow
The incompressible momentum Navier–Stokes equation results from the following assumptions on the Cauchy stress tensor:
the stress is Galilean invariant: it does not depend directly on the flow velocity, but only on spatial derivatives of the flow velocity. So the stress variable is the tensor gradient .
the fluid is assumed to be isotropic, as with gases and simple liquids; consequently, is an isotropic tensor, and the deviatoric stress tensor can be expressed in terms of the dynamic viscosity :
where
is the rate-of-strain tensor. So this decomposition can be made explicit as:
This constitutive equation is also called the Newtonian law of viscosity.
Dynamic viscosity need not be constant – in incompressible flows it can depend on density and on pressure. Any equation that expresses one of these transport coefficients in terms of the conservative variables is called an equation of state.
The divergence of the deviatoric stress in case of uniform viscosity is given by:
because for an incompressible fluid.
Incompressibility rules out density and pressure waves like sound or shock waves, so this simplification is not useful if these phenomena are of interest. The incompressible flow assumption typically holds well for all fluids at low Mach numbers (say up to about Mach 0.3), such as for modelling air winds at normal temperatures. The incompressible Navier–Stokes equations are best visualized by dividing by the density:
where is called the kinematic viscosity.
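In symbols, the incompressible equations just described are commonly written as (standard rendering, assumed notation):

\[
  \frac{\partial\mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
    = -\nabla\!\left(\frac{p}{\rho}\right) + \nu\,\nabla^{2}\mathbf{u} + \mathbf{g},
  \qquad
  \nabla\cdot\mathbf{u} = 0, \qquad \nu = \frac{\mu}{\rho}.
\]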
By isolating the fluid velocity, one can also state:
If the density is constant throughout the fluid domain, or, in other words, if all fluid elements have the same density, , then we have
where is called the unit pressure head.
In incompressible flows, the pressure field satisfies the Poisson equation,
which is obtained by taking the divergence of the momentum equations.
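A standard rendering of this Poisson equation, assuming constant density and a conservative body force absorbed into the pressure:

% Divergence of the momentum equation with div(u) = 0 removes the time
% derivative and viscous terms, leaving:
\[
  \nabla^{2} p = -\rho\,\nabla\cdot\left[(\mathbf{u}\cdot\nabla)\,\mathbf{u}\right].
\]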
It is well worth observing the meaning of each term (compare to the Cauchy momentum equation):
The higher-order term, namely the shear stress divergence , has simply reduced to the vector Laplacian term . This Laplacian term can be interpreted as the difference between the velocity at a point and the mean velocity in a small surrounding volume. This implies that – for a Newtonian fluid – viscosity operates as a diffusion of momentum, in much the same way as heat conduction. In fact, neglecting the convection term, the incompressible Navier–Stokes equations lead to a vector diffusion equation (namely Stokes equations), but in general the convection term is present, so the incompressible Navier–Stokes equations belong to the class of convection–diffusion equations.
In the usual case of an external field being a conservative field:
by defining the hydraulic head:
one can finally condense the whole source into one term, arriving at the incompressible Navier–Stokes equation with a conservative external field:
The incompressible Navier–Stokes equations with uniform density and viscosity and a conservative external field constitute the fundamental equation of hydraulics. The domain for these equations is commonly a Euclidean space of dimension three or lower, for which an orthogonal coordinate reference frame is usually set to make explicit the system of scalar partial differential equations to be solved. In three dimensions, the orthogonal coordinate systems are three: Cartesian, cylindrical, and spherical. Expressing the Navier–Stokes vector equation in Cartesian coordinates is quite straightforward and not much influenced by the number of dimensions of the Euclidean space employed, and this is the case for the first-order terms (like the variation and convection ones) also in non-Cartesian orthogonal coordinate systems. But for the higher-order terms (the two coming from the divergence of the deviatoric stress that distinguish the Navier–Stokes equations from the Euler equations) some tensor calculus is required for deducing an expression in non-Cartesian orthogonal coordinate systems.
A special case of the fundamental equation of hydraulics is Bernoulli's equation.
The incompressible Navier–Stokes equation is composite, the sum of two orthogonal equations,
where and are solenoidal and irrotational projection operators satisfying , and and are the non-conservative and conservative parts of the body force. This result follows from the Helmholtz theorem (also known as the fundamental theorem of vector calculus). The first equation is a pressureless governing equation for the velocity, while the second equation for the pressure is a functional of the velocity and is related to the pressure Poisson equation.
The explicit functional form of the projection operator in 3D is found from the Helmholtz Theorem:
with a similar structure in 2D. Thus the governing equation is an integro-differential equation similar to the Coulomb and Biot–Savart laws, and is not convenient for numerical computation.
An equivalent weak or variational form of the equation, proved to produce the same velocity solution as the Navier–Stokes equation, is given by,
for divergence-free test functions satisfying appropriate boundary conditions. Here, the projections are accomplished by the orthogonality of the solenoidal and irrotational function spaces. The discrete form of this is eminently suited to finite element computation of divergence-free flow, as we shall see in the next section. There one will be able to address the question "How does one specify pressure-driven (Poiseuille) problems with a pressureless governing equation?".
The absence of pressure forces from the governing velocity equation demonstrates that the equation is not a dynamic one, but rather a kinematic equation where the divergence-free condition serves the role of a conservation equation. This all would seem to refute the frequent statements that the incompressible pressure enforces the divergence-free condition.
Weak form of the incompressible Navier–Stokes equations
Strong form
Consider the incompressible Navier–Stokes equations for a Newtonian fluid of constant density in a domain
with boundary
being and portions of the boundary where respectively a Dirichlet and a Neumann boundary condition is applied ():
is the fluid velocity, the fluid pressure, a given forcing term, the outward directed unit normal vector to , and the viscous stress tensor defined as:
Let be the dynamic viscosity of the fluid, the second-order identity tensor and the strain-rate tensor defined as:
The functions and are given Dirichlet and Neumann boundary data, while is the initial condition. The first equation is the momentum balance equation, while the second represents the mass conservation, namely the continuity equation.
Assuming constant dynamic viscosity, using the vectorial identity
and exploiting mass conservation, the divergence of the total stress tensor in the momentum equation can also be expressed as:
Moreover, note that the Neumann boundary conditions can be rearranged as:
Weak form
In order to find the weak form of the Navier–Stokes equations, firstly, consider the momentum equation
multiply it by a test function , defined in a suitable space , and integrate both members over the domain :
Integrating by parts the diffusive and pressure terms and using Gauss' theorem:
Using these relations, one gets:
In the same fashion, the continuity equation is multiplied by a test function belonging to a space and integrated over the domain :
The function spaces are chosen as follows:
Considering that the test function vanishes on the Dirichlet boundary and considering the Neumann condition, the integral on the boundary can be rearranged as:
Having this in mind, the weak formulation of the Navier–Stokes equations is expressed as:
Discrete velocity
With partitioning of the problem domain and defining basis functions on the partitioned domain, the discrete form of the governing equation is
It is desirable to choose basis functions that reflect the essential feature of incompressible flow – the elements must be divergence-free. While the velocity is the variable of interest, the existence of the stream function or vector potential is necessary by the Helmholtz theorem. Further, to determine fluid flow in the absence of a pressure gradient, one can specify the difference of stream function values across a 2D channel, or the line integral of the tangential component of the vector potential around the channel in 3D, the flow being given by Stokes' theorem. Discussion will be restricted to 2D in the following.
We further restrict discussion to continuous Hermite finite elements which have at least first-derivative degrees-of-freedom. With this, one can draw a large number of candidate triangular and rectangular elements from the plate-bending literature. These elements have derivatives as components of the gradient. In 2D, the gradient and curl of a scalar are clearly orthogonal, given by the expressions,
Adopting continuous plate-bending elements, interchanging the derivative degrees-of-freedom and changing the sign of the appropriate one gives many families of stream function elements.
Taking the curl of the scalar stream function elements gives divergence-free velocity elements. The requirement that the stream function elements be continuous assures that the normal component of the velocity is continuous across element interfaces, all that is necessary for vanishing divergence on these interfaces.
Boundary conditions are simple to apply. The stream function is constant on no-flow surfaces, with no-slip velocity conditions on surfaces.
Stream function differences across open channels determine the flow. No boundary conditions are necessary on open boundaries, though consistent values may be used with some problems. These are all Dirichlet conditions.
The algebraic equations to be solved are simple to set up, but of course are non-linear, requiring iteration of the linearized equations.
Similar considerations apply to three dimensions, but extension from 2D is not immediate because of the vector nature of the potential, and because there exists no simple relation between the gradient and the curl as was the case in 2D.
Pressure recovery
Recovering pressure from the velocity field is easy. The discrete weak equation for the pressure gradient is,
where the test/weight functions are irrotational. Any conforming scalar finite element may be used. However, the pressure gradient field may also be of interest. In this case, one can use scalar Hermite elements for the pressure. For the test/weight functions one would choose the irrotational vector elements obtained from the gradient of the pressure element.
Non-inertial frame of reference
The rotating frame of reference introduces some interesting pseudo-forces into the equations through the material derivative term. Consider a stationary inertial frame of reference , and a non-inertial frame of reference , which is translating with velocity and rotating with angular velocity with respect to the stationary frame. The Navier–Stokes equation observed from the non-inertial frame then becomes
Here and are measured in the non-inertial frame. The first term in the parenthesis represents Coriolis acceleration, the second term is due to centrifugal acceleration, the third is due to the linear acceleration of with respect to and the fourth term is due to the angular acceleration of with respect to .
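A standard rendering of this equation, with assumed notation Omega for the angular velocity, U for the frame's translation velocity and x for position, and with the four pseudo-force terms in the order listed above:

\[
  \frac{\mathrm{D}\mathbf{u}}{\mathrm{D}t}
    = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{g}
    - \left[\,2\boldsymbol{\Omega}\times\mathbf{u}
      + \boldsymbol{\Omega}\times(\boldsymbol{\Omega}\times\mathbf{x})
      + \frac{\mathrm{d}\mathbf{U}}{\mathrm{d}t}
      + \frac{\mathrm{d}\boldsymbol{\Omega}}{\mathrm{d}t}\times\mathbf{x}\right].
\]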
Other equations
The Navier–Stokes equations are strictly a statement of the balance of momentum. To fully describe fluid flow, more information is needed, how much depending on the assumptions made. This additional information may include boundary data (no-slip, capillary surface, etc.), conservation of mass, balance of energy, and/or an equation of state.
Continuity equation for incompressible fluid
Regardless of the flow assumptions, a statement of the conservation of mass is generally necessary. This is achieved through the mass continuity equation, as discussed above in the "General continuum equations" within this article, as follows:
A fluid medium for which the density () is constant is called incompressible. Therefore, the rate of change of density () with respect to time and the gradient of density are equal to zero . In this case the general equation of continuity, , reduces to: . Furthermore, assuming that density () is a non-zero constant means that the right-hand side of the equation is divisible by density (). Therefore, the continuity equation for an incompressible fluid reduces further to: This relationship, , identifies that the divergence of the flow velocity vector () is equal to zero , which means that for an incompressible fluid the flow velocity field is a solenoidal vector field or a divergence-free vector field. Note that this relationship can be expanded using the vector Laplace operator and the vorticity , which for an incompressible fluid is expressed as:
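The expansion referred to at the end of this paragraph is the standard vector-calculus identity, rendered here in LaTeX with assumed notation omega for the vorticity:

\[
  \nabla^{2}\mathbf{u}
    = \nabla(\nabla\cdot\mathbf{u}) - \nabla\times(\nabla\times\mathbf{u})
    = -\,\nabla\times\boldsymbol{\omega}
  \quad\text{when } \nabla\cdot\mathbf{u} = 0,\;
  \boldsymbol{\omega} = \nabla\times\mathbf{u}.
\]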
Stream function for incompressible 2D fluid
Taking the curl of the incompressible Navier–Stokes equation results in the elimination of pressure. This is especially easy to see if 2D Cartesian flow is assumed (like in the degenerate 3D case with and no dependence of anything on ), where the equations reduce to:
Differentiating the first with respect to , the second with respect to and subtracting the resulting equations will eliminate pressure and any conservative force.
For incompressible flow, defining the stream function through
results in mass continuity being unconditionally satisfied (given the stream function is continuous), and then incompressible Newtonian 2D momentum and mass conservation condense into one equation:
where is the 2D biharmonic operator and is the kinematic viscosity, . We can also express this compactly using the Jacobian determinant:
This single equation together with appropriate boundary conditions describes 2D fluid flow, taking only kinematic viscosity as a parameter. Note that the equation for creeping flow results when the left side is assumed zero.
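A standard rendering of this single equation in terms of the stream function, under the assumed sign convention u = ∂ψ/∂y, v = −∂ψ/∂x:

\[
  \frac{\partial}{\partial t}\!\left(\nabla^{2}\psi\right)
  + \frac{\partial\psi}{\partial y}\,\frac{\partial}{\partial x}\!\left(\nabla^{2}\psi\right)
  - \frac{\partial\psi}{\partial x}\,\frac{\partial}{\partial y}\!\left(\nabla^{2}\psi\right)
  = \nu\,\nabla^{4}\psi .
\]
% Setting the left side to zero gives the creeping-flow (biharmonic) case.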
In axisymmetric flow another stream function formulation, called the Stokes stream function, can be used to describe the velocity components of an incompressible flow with one scalar function.
The incompressible Navier–Stokes equation is a differential algebraic equation, having the inconvenient feature that there is no explicit mechanism for advancing the pressure in time. Consequently, much effort has been expended to eliminate the pressure from all or part of the computational process. The stream function formulation eliminates the pressure but only in two dimensions and at the expense of introducing higher derivatives and elimination of the velocity, which is the primary variable of interest.
Properties
Nonlinearity
The Navier–Stokes equations are nonlinear partial differential equations in the general case and so remain in almost every real situation. In some cases, such as one-dimensional flow and Stokes flow (or creeping flow), the equations can be simplified to linear equations. The nonlinearity makes most problems difficult or impossible to solve and is the main contributor to the turbulence that the equations model.
The nonlinearity is due to convective acceleration, which is an acceleration associated with the change in velocity over position. Hence, any convective flow, whether turbulent or not, will involve nonlinearity. An example of convective but laminar (nonturbulent) flow would be the passage of a viscous fluid (for example, oil) through a small converging nozzle. Such flows, whether exactly solvable or not, can often be thoroughly studied and understood.
Turbulence
Turbulence is the time-dependent chaotic behaviour seen in many fluid flows. It is generally believed that it is due to the inertia of the fluid as a whole: the culmination of time-dependent and convective acceleration; hence flows where inertial effects are small tend to be laminar (the Reynolds number quantifies how much the flow is affected by inertia). It is believed, though not known with certainty, that the Navier–Stokes equations describe turbulence properly.
The numerical solution of the Navier–Stokes equations for turbulent flow is extremely difficult, and due to the significantly different mixing-length scales that are involved in turbulent flow, the stable solution of this requires such a fine mesh resolution that the computational time becomes infeasible for calculation (direct numerical simulation). Attempts to solve turbulent flow using a laminar solver typically result in a time-unsteady solution, which fails to converge appropriately. To counter this, time-averaged equations such as the Reynolds-averaged Navier–Stokes equations (RANS), supplemented with turbulence models, are used in practical computational fluid dynamics (CFD) applications when modeling turbulent flows. Some models include the Spalart–Allmaras, k–ε, k–ω, and SST models, which add a variety of additional equations to bring closure to the RANS equations. Large eddy simulation (LES) can also be used to solve these equations numerically. This approach is computationally more expensive—in time and in computer memory—than RANS, but produces better results because it explicitly resolves the larger turbulent scales.
Applicability
Together with supplemental equations (for example, conservation of mass) and well-formulated boundary conditions, the Navier–Stokes equations seem to model fluid motion accurately; even turbulent flows seem (on average) to agree with real world observations.
The Navier–Stokes equations assume that the fluid being studied is a continuum (it is infinitely divisible and not composed of particles such as atoms or molecules), and is not moving at relativistic velocities. At very small scales or under extreme conditions, real fluids made out of discrete molecules will produce results different from the continuous fluids modeled by the Navier–Stokes equations. For example, capillarity of internal layers in fluids appears for flow with high gradients. For large Knudsen number of the problem, the Boltzmann equation may be a suitable replacement.
Failing that, one may have to resort to molecular dynamics or various hybrid methods.
Another limitation is simply the complicated nature of the equations. Time-tested formulations exist for common fluid families, but the application of the Navier–Stokes equations to less common families tends to result in very complicated formulations and often to open research problems. For this reason, these equations are usually written for Newtonian fluids where the viscosity model is linear; truly general models for the flow of other kinds of fluids (such as blood) do not exist.
Application to specific problems
The Navier–Stokes equations, even when written explicitly for specific fluids, are rather generic in nature and their proper application to specific problems can be very diverse. This is partly because there is an enormous variety of problems that may be modeled, ranging from as simple as the distribution of static pressure to as complicated as multiphase flow driven by surface tension.
Generally, application to specific problems begins with some flow assumptions and initial/boundary condition formulation; this may be followed by scale analysis to further simplify the problem.
Parallel flow
Assume steady, parallel, one-dimensional, non-convective pressure-driven flow between parallel plates; the resulting scaled (dimensionless) boundary value problem is:
The boundary condition is the no-slip condition. This problem is easily solved for the flow field:
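Under one common non-dimensionalization (an assumption, since the displayed equations are not shown here), the problem and its parabolic (plane Poiseuille) solution read:

\[
  u''(y) = -1, \quad u(0) = u(1) = 0
  \qquad\Longrightarrow\qquad
  u(y) = \tfrac{1}{2}\,y\,(1 - y).
\]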
From this point onward, more quantities of interest can be easily obtained, such as viscous drag force or net flow rate.
Radial flow
Difficulties may arise when the problem becomes slightly more complicated. A seemingly modest twist on the parallel flow above would be the radial flow between parallel plates; this involves convection and thus non-linearity. The velocity field may be represented by a function that must satisfy:
This ordinary differential equation is what is obtained when the Navier–Stokes equations are written and the flow assumptions applied (additionally, the pressure gradient is solved for). The nonlinear term makes this a very difficult problem to solve analytically (a lengthy implicit solution may be found which involves elliptic integrals and roots of cubic polynomials). Issues with the actual existence of solutions arise for (approximately; this is not ), the parameter being the Reynolds number with appropriately chosen scales. This is an example of flow assumptions losing their applicability, and an example of the difficulty in "high" Reynolds number flows.
Convection
A type of natural convection that can be described by the Navier–Stokes equation is the Rayleigh–Bénard convection. It is one of the most commonly studied convection phenomena because of its analytical and experimental accessibility.
Exact solutions of the Navier–Stokes equations
Some exact solutions to the Navier–Stokes equations exist. Examples of degenerate cases—with the non-linear terms in the Navier–Stokes equations equal to zero—are Poiseuille flow, Couette flow and the oscillatory Stokes boundary layer. But also, more interesting examples, solutions to the full non-linear equations, exist, such as Jeffery–Hamel flow, Von Kármán swirling flow, stagnation point flow, Landau–Squire jet, and the Taylor–Green vortex (Landau & Lifshitz (1987), pp. 75–88). Time-dependent self-similar solutions of the three-dimensional incompressible Navier–Stokes equations in Cartesian coordinates can be given with the help of Kummer functions with quadratic arguments. For the compressible Navier–Stokes equations the time-dependent self-similar solutions are instead Whittaker functions, again with quadratic arguments, when the polytropic equation of state is used as a closing condition. Note that the existence of these exact solutions does not imply they are stable: turbulence may develop at higher Reynolds numbers.
Under additional assumptions, the component parts can be separated.
A three-dimensional steady-state vortex solution
A steady-state example with no singularities comes from considering the flow along the lines of a Hopf fibration. Let be a constant radius of the inner coil. One set of solutions is given by:
for arbitrary constants and . This is a solution in a non-viscous gas (compressible fluid) whose density, velocities and pressure go to zero far from the origin. (Note this is not a solution to the Clay Millennium problem because that refers to incompressible fluids where is a constant, and neither does it deal with the uniqueness of the Navier–Stokes equations with respect to any turbulence properties.) It is also worth pointing out that the components of the velocity vector are exactly those from the Pythagorean quadruple parametrization. Other choices of density and pressure are possible with the same velocity field:
Viscous three-dimensional periodic solutions
Two examples of periodic, fully three-dimensional viscous solutions are described in the literature.
These solutions are defined on a three-dimensional torus and are characterized by positive and negative helicity respectively.
The solution with positive helicity is given by:
where is the wave number and the velocity components are normalized so that the average kinetic energy per unit of mass is at .
The pressure field is obtained from the velocity field as (where and are reference values for the pressure and density fields respectively).
Since both the solutions belong to the class of Beltrami flow, the vorticity field is parallel to the velocity and, for the case with positive helicity, is given by .
These solutions can be regarded as a generalization in three dimensions of the classic two-dimensional Taylor–Green vortex.
Wyld diagrams
Wyld diagrams are bookkeeping graphs that correspond to the Navier–Stokes equations via a perturbation expansion of the fundamental continuum mechanics. Similar to the Feynman diagrams in quantum field theory, these diagrams are an extension of Keldysh's technique for nonequilibrium processes in fluid dynamics. In other words, these diagrams assign graphs to the (often) turbulent phenomena in turbulent fluids by allowing correlated and interacting fluid particles to obey stochastic processes associated to pseudo-random functions in probability distributions.
Representations in 3D
Note that the formulas in this section make use of the single-line notation for partial derivatives, where, e.g. means the partial derivative of with respect to , and means the second-order partial derivative of with respect to .
A 2022 paper provides a less costly, dynamical and recurrent solution of the Navier-Stokes equation for 3D turbulent fluid flows. On suitably short time scales, the dynamics of turbulence is deterministic.
Cartesian coordinates
From the general form of the Navier–Stokes, with the velocity vector expanded as , sometimes respectively named , , , we may write the vector equation explicitly,
Note that gravity has been accounted for as a body force, and the values of , , will depend on the orientation of gravity with respect to the chosen set of coordinates.
The continuity equation reads:
When the flow is incompressible, does not change for any fluid particle, and its material derivative vanishes: . The continuity equation is reduced to:
Thus, for the incompressible version of the Navier–Stokes equation the second part of the viscous terms fall away (see Incompressible flow).
This system of four equations comprises the most commonly used and studied form. Though comparatively more compact than other representations, this is still a nonlinear system of partial differential equations for which solutions are difficult to obtain.
Cylindrical coordinates
A change of variables on the Cartesian equations will yield the following momentum equations for , , and
The gravity components will generally not be constants, however for most applications either the coordinates are chosen so that the gravity components are constant or else it is assumed that gravity is counteracted by a pressure field (for example, flow in horizontal pipe is treated normally without gravity and without a vertical pressure gradient). The continuity equation is:
This cylindrical representation of the incompressible Navier–Stokes equations is the second most commonly seen (the first being Cartesian above). Cylindrical coordinates are chosen to take advantage of symmetry, so that a velocity component can disappear. A very common case is axisymmetric flow with the assumption of no tangential velocity (), and the remaining quantities are independent of :
Spherical coordinates
In spherical coordinates, the , , and momentum equations are (note the convention used: is polar angle, or colatitude, ):
Mass continuity will read:
These equations could be (slightly) compacted by, for example, factoring from the viscous terms. However, doing so would undesirably alter the structure of the Laplacian and other quantities.
Navier–Stokes equations use in games
The Navier–Stokes equations are used extensively in video games in order to model a wide variety of natural phenomena. Simulations of small-scale gaseous fluids, such as fire and smoke, are often based on the seminal paper "Real-Time Fluid Dynamics for Games" by Jos Stam, which elaborates one of the methods proposed in Stam's earlier, more famous paper "Stable Fluids" from 1999. Stam proposes stable fluid simulation using a Navier–Stokes solution method from 1968, coupled with an unconditionally stable semi-Lagrangian advection scheme, as first proposed in 1992.
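A minimal Python sketch of the semi-Lagrangian idea on a periodic 1D grid (a simplification of Stam's method, which applies it to full velocity fields): each grid point is traced backward along the velocity and the advected quantity is interpolated at the departure point, which keeps the scheme stable for any time step.

import numpy as np

def advect(q, u, dt, dx):
    """Semi-Lagrangian advection of scalar field q with velocity u (periodic)."""
    n = q.size
    x = np.arange(n) * dx
    x_dep = (x - u * dt) % (n * dx)          # backtraced departure points
    i = np.floor(x_dep / dx).astype(int)     # left neighbour index
    frac = x_dep / dx - i                    # linear interpolation weight
    return (1 - frac) * q[i % n] + frac * q[(i + 1) % n]

q = np.exp(-((np.arange(64) - 16.0) ** 2) / 8.0)  # a bump to transport
q = advect(q, u=0.5, dt=5.0, dx=1.0)  # remains stable even for large dt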
More recent implementations based upon this work run on the game system's graphics processing unit (GPU) as opposed to the central processing unit (CPU) and achieve a much higher degree of performance.
Many improvements have been proposed to Stam's original work, which suffers inherently from high numerical dissipation in both velocity and mass.
An introduction to interactive fluid simulation can be found in the 2007 ACM SIGGRAPH course, Fluid Simulation for Computer Animation.
See also
Citations
General references
V. Girault and P. A. Raviart. Finite Element Methods for Navier–Stokes Equations: Theory and Algorithms. Springer Series in Computational Mathematics. Springer-Verlag, 1986.
Smits, Alexander J. (2014), A Physical Introduction to Fluid Mechanics, Wiley,
Temam, Roger (1984): Navier–Stokes Equations: Theory and Numerical Analysis, ACM Chelsea Publishing,
Milne-Thomson, L.M. C.B.E (1962), Theoretical Hydrodynamics, Macmillan & Co Ltd.
Tartar, L (2006), An Introduction to Navier Stokes Equation and Oceanography, Springer ISBN 3-540-35743-2
Birkhoff, Garrett (1960), Hydrodynamics, Princeton University Press
Campos, D. (Editor) (2017) Handbook on Navier-Stokes Equations: Theory and Applied Analysis, Nova Science Publishers ISBN 978-1-53610-292-5
Doering, C.R. and Gibbon, J.D. (1995) Applied Analysis of the Navier-Stokes Equations, Cambridge University Press, ISBN 0-521-44557-1
Basset, A.B. (1888) Hydrodynamics Volume I and II, Cambridge: Deighton, Bell and Co
Fox, R.W., McDonald, A.T. and Pritchard, P.J. (2004) Introduction to Fluid Mechanics, John Wiley and Sons, ISBN 0-471-2023-2
Foias, C., Manley, O., Rosa, R. and Temam, R. (2004) Navier–Stokes Equations and Turbulence, Cambridge University Press, ISBN 0-521-36032-3
Lions, P-L. (1998) Mathematical Topics in Fluid Mechanics Volume 1 and 2, Clarendon Press, ISBN 0-19-851488-3
Deville, M.O. and Gatski, T. B. (2012) Mathematical Modeling for Complex Fluids and Flows, Springer, ISBN 978-3-642-25294-5
Kochin, N.E. Kibel, I.A. and Roze, N.V. (1964) Theoretical Hydromechanics, John Wiley & Sons, Ltd.
Lamb, H. (1879) Hydrodynamics, Cambridge University Press
External links
Simplified derivation of the Navier–Stokes equations
Three-dimensional unsteady form of the Navier–Stokes equations Glenn Research Center, NASA
Aerodynamics
Computational fluid dynamics
Concepts in physics
Equations of fluid dynamics
Functions of space and time
Partial differential equations
Transport phenomena | Navier–Stokes equations | [
"Physics",
"Chemistry",
"Engineering"
] | 8,555 | [
"Transport phenomena",
"Physical phenomena",
"Equations of fluid dynamics",
"Equations of physics",
"Computational fluid dynamics",
"Functions of space and time",
"Chemical engineering",
"Computational physics",
"Aerodynamics",
"nan",
"Aerospace engineering",
"Spacetime",
"Fluid dynamics"
] |
48,396 | https://en.wikipedia.org/wiki/Mathematical%20analysis | Analysis is the branch of mathematics dealing with continuous functions, limits, and related theories, such as differentiation, integration, measure, infinite sequences, series, and analytic functions.
These theories are usually studied in the context of real and complex numbers and functions. Analysis evolved from calculus, which involves the elementary concepts and techniques of analysis.
Analysis may be distinguished from geometry; however, it can be applied to any space of mathematical objects that has a definition of nearness (a topological space) or specific distances between objects (a metric space).
History
Ancient
Mathematical analysis formally developed in the 17th century during the Scientific Revolution, but many of its ideas can be traced back to earlier mathematicians. Early results in analysis were implicitly present in the early days of ancient Greek mathematics. For instance, an infinite geometric sum is implicit in Zeno's paradox of the dichotomy. (Strictly speaking, the point of the paradox is to deny that the infinite sum exists.) Later, Greek mathematicians such as Eudoxus and Archimedes made more explicit, but informal, use of the concepts of limits and convergence when they used the method of exhaustion to compute the area and volume of regions and solids. The explicit use of infinitesimals appears in Archimedes' The Method of Mechanical Theorems, a work rediscovered in the 20th century. In Asia, the Chinese mathematician Liu Hui used the method of exhaustion in the 3rd century CE to find the area of a circle. From Jain literature, it appears that Hindus were in possession of the formulae for the sum of the arithmetic and geometric series as early as the 4th century BCE.
Ācārya Bhadrabāhu uses the sum of a geometric series in his Kalpasūtra.
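For illustration, the infinite subdivision in Zeno's dichotomy corresponds to a geometric sum that does converge:

$\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = \sum_{n=1}^{\infty} \frac{1}{2^{n}} = 1.$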
Medieval
Zu Chongzhi established a method that would later be called Cavalieri's principle to find the volume of a sphere in the 5th century. In the 12th century, the Indian mathematician Bhāskara II used infinitesimals and stated what is now known as Rolle's theorem.
In the 14th century, Madhava of Sangamagrama developed infinite series expansions, now called Taylor series, of functions such as sine, cosine, tangent and arctangent. Alongside his development of Taylor series of trigonometric functions, he also estimated the magnitude of the error terms resulting from truncating these series, and gave a rational approximation of some infinite series. His followers at the Kerala School of Astronomy and Mathematics further expanded his works, up to the 16th century.
Modern
Foundations
The modern foundations of mathematical analysis were established in 17th century Europe. This began when Fermat and Descartes developed analytic geometry, which is the precursor to modern calculus. Fermat's method of adequality allowed him to determine the maxima and minima of functions and the tangents of curves. Descartes's publication of La Géométrie in 1637, which introduced the Cartesian coordinate system, is considered to be the establishment of mathematical analysis. It would be a few decades later that Newton and Leibniz independently developed infinitesimal calculus, which grew, with the stimulus of applied work that continued through the 18th century, into analysis topics such as the calculus of variations, ordinary and partial differential equations, Fourier analysis, and generating functions. During this period, calculus techniques were applied to approximate discrete problems by continuous ones.
Modernization
In the 18th century, Euler introduced the notion of a mathematical function. Real analysis began to emerge as an independent subject when Bernard Bolzano introduced the modern definition of continuity in 1816, but Bolzano's work did not become widely known until the 1870s. In 1821, Cauchy began to put calculus on a firm logical foundation by rejecting the principle of the generality of algebra widely used in earlier work, particularly by Euler. Instead, Cauchy formulated calculus in terms of geometric ideas and infinitesimals. Thus, his definition of continuity required an infinitesimal change in x to correspond to an infinitesimal change in y. He also introduced the concept of the Cauchy sequence, and started the formal theory of complex analysis. Poisson, Liouville, Fourier and others studied partial differential equations and harmonic analysis. The contributions of these mathematicians and others, such as Weierstrass, developed the (ε, δ)-definition of limit approach, thus founding the modern field of mathematical analysis. Around the same time, Riemann introduced his theory of integration, and made significant advances in complex analysis.
Towards the end of the 19th century, mathematicians started worrying that they were assuming the existence of a continuum of real numbers without proof. Dedekind then constructed the real numbers by Dedekind cuts, in which irrational numbers are formally defined, which serve to fill the "gaps" between rational numbers, thereby creating a complete set: the continuum of real numbers, which had already been developed by Simon Stevin in terms of decimal expansions. Around that time, the attempts to refine the theorems of Riemann integration led to the study of the "size" of the set of discontinuities of real functions.
Also, various pathological objects (such as nowhere continuous functions, continuous but nowhere differentiable functions, and space-filling curves), commonly known as "monsters", began to be investigated. In this context, Jordan developed his theory of measure, Cantor developed what is now called naive set theory, and Baire proved the Baire category theorem. In the early 20th century, calculus was formalized using an axiomatic set theory. Lebesgue greatly improved measure theory, and introduced his own theory of integration, now known as Lebesgue integration, which proved to be a big improvement over Riemann's. Hilbert introduced Hilbert spaces to solve integral equations. The idea of normed vector space was in the air, and in the 1920s Banach created functional analysis.
Important concepts
Metric spaces
In mathematics, a metric space is a set where a notion of distance (called a metric) between elements of the set is defined.
Much of analysis happens in some metric space; the most commonly used are the real line, the complex plane, Euclidean space, other vector spaces, and the integers. Examples of analysis without a metric include measure theory (which describes size rather than distance) and functional analysis (which studies topological vector spaces that need not have any sense of distance).
Formally, a metric space is an ordered pair $(M, d)$ where $M$ is a set and $d$ is a metric on $M$, i.e., a function
$d \colon M \times M \to \mathbb{R}$
such that for any $x, y, z \in M$, the following holds:
$d(x, y) \ge 0$, with equality if and only if $x = y$ (identity of indiscernibles),
$d(x, y) = d(y, x)$ (symmetry), and
$d(x, z) \le d(x, y) + d(y, z)$ (triangle inequality).
By taking the third property and letting $z = x$, it can be shown that $d(x, y) \ge 0$ (non-negativity).
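For example, the real line with $d(x, y) = |x - y|$ is a metric space: identity of indiscernibles and symmetry are immediate properties of the absolute value, and the triangle inequality follows from $|a + b| \le |a| + |b|$ with $a = x - y$ and $b = y - z$. The derivation of non-negativity mentioned above is the one-line computation

$0 = d(x, x) \le d(x, y) + d(y, x) = 2\,d(x, y).$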
Sequences and limits
A sequence is an ordered list. Like a set, it contains members (also called elements, or terms). Unlike a set, order matters, and exactly the same elements can appear multiple times at different positions in the sequence. Most precisely, a sequence can be defined as a function whose domain is a countable totally ordered set, such as the natural numbers.
One of the most important properties of a sequence is convergence. Informally, a sequence converges if it has a limit. Continuing informally, a (singly-infinite) sequence has a limit if it approaches some point x, called the limit, as n becomes very large. That is, for an abstract sequence $(a_n)$ (with $n$ running from 1 to infinity understood) the distance between $a_n$ and $x$ approaches 0 as $n \to \infty$, denoted
$\lim_{n \to \infty} a_n = x.$
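In a metric space this informal description becomes the usual $\varepsilon$–$N$ definition:

$\lim_{n \to \infty} a_n = x$ means that for every $\varepsilon > 0$ there exists $N$ such that $d(a_n, x) < \varepsilon$ for all $n \ge N$.

For instance, the sequence $a_n = 1/n$ converges to $0$ in $\mathbb{R}$: given $\varepsilon > 0$, any $N > 1/\varepsilon$ works.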
Main branches
Calculus
Real analysis
Real analysis (traditionally, the "theory of functions of a real variable") is a branch of mathematical analysis dealing with the real numbers and real-valued functions of a real variable. In particular, it deals with the analytic properties of real functions and sequences, including convergence and limits of sequences of real numbers, the calculus of the real numbers, and continuity, smoothness and related properties of real-valued functions.
Complex analysis
Complex analysis (traditionally known as the "theory of functions of a complex variable") is the branch of mathematical analysis that investigates functions of complex numbers. It is useful in many branches of mathematics, including algebraic geometry, number theory, applied mathematics; as well as in physics, including hydrodynamics, thermodynamics, mechanical engineering, electrical engineering, and particularly, quantum field theory.
Complex analysis is particularly concerned with the analytic functions of complex variables (or, more generally, meromorphic functions). Because the separate real and imaginary parts of any analytic function must satisfy Laplace's equation, complex analysis is widely applicable to two-dimensional problems in physics.
Functional analysis
Functional analysis is a branch of mathematical analysis, the core of which is formed by the study of vector spaces endowed with some kind of limit-related structure (e.g. inner product, norm, topology, etc.) and the linear operators acting upon these spaces and respecting these structures in a suitable sense. The historical roots of functional analysis lie in the study of spaces of functions and the formulation of properties of transformations of functions such as the Fourier transform as transformations defining continuous, unitary etc. operators between function spaces. This point of view turned out to be particularly useful for the study of differential and integral equations.
Harmonic analysis
Harmonic analysis is a branch of mathematical analysis concerned with the representation of functions and signals as the superposition of basic waves. This includes the study of the notions of Fourier series and Fourier transforms (Fourier analysis), and of their generalizations. Harmonic analysis has applications in areas as diverse as music theory, number theory, representation theory, signal processing, quantum mechanics, tidal analysis, and neuroscience.
Differential equations
A differential equation is a mathematical equation for an unknown function of one or several variables that relates the values of the function itself and its derivatives of various orders. Differential equations play a prominent role in engineering, physics, economics, biology, and other disciplines.
Differential equations arise in many areas of science and technology, specifically whenever a deterministic relation involving some continuously varying quantities (modeled by functions) and their rates of change in space or time (expressed as derivatives) is known or postulated. This is illustrated in classical mechanics, where the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow one (given the position, velocity, acceleration and various forces acting on the body) to express these variables dynamically as a differential equation for the unknown position of the body as a function of time. In some cases, this differential equation (called an equation of motion) may be solved explicitly.
Measure theory
A measure on a set is a systematic way to assign a number to each suitable subset of that set, intuitively interpreted as its size. In this sense, a measure is a generalization of the concepts of length, area, and volume. A particularly important example is the Lebesgue measure on a Euclidean space, which assigns the conventional length, area, and volume of Euclidean geometry to suitable subsets of the $n$-dimensional Euclidean space $\mathbb{R}^n$. For instance, the Lebesgue measure of the interval $[0, 1]$ in the real numbers is its length in the everyday sense of the word – specifically, 1.
Technically, a measure is a function that assigns a non-negative real number or +∞ to (certain) subsets of a set $X$. It must assign 0 to the empty set and be (countably) additive: the measure of a 'large' subset that can be decomposed into a finite (or countable) number of 'smaller' disjoint subsets, is the sum of the measures of the "smaller" subsets. In general, if one wants to associate a consistent size to each subset of a given set while satisfying the other axioms of a measure, one only finds trivial examples like the counting measure. This problem was resolved by defining measure only on a sub-collection of all subsets; the so-called measurable subsets, which are required to form a $\sigma$-algebra. This means that the empty set, countable unions, countable intersections and complements of measurable subsets are measurable. Non-measurable sets in a Euclidean space, on which the Lebesgue measure cannot be defined consistently, are necessarily complicated in the sense of being badly mixed up with their complement. Indeed, their existence is a non-trivial consequence of the axiom of choice.
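Written out, countable additivity requires that for any sequence $E_1, E_2, \ldots$ of pairwise disjoint measurable sets,

$\mu\left(\bigcup_{i=1}^{\infty} E_i\right) = \sum_{i=1}^{\infty} \mu(E_i), \qquad \mu(\varnothing) = 0.$

For the Lebesgue measure $\lambda$ on $\mathbb{R}$ this reduces, on intervals, to $\lambda([a, b]) = b - a$.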
Numerical analysis
Numerical analysis is the study of algorithms that use numerical approximation (as opposed to general symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics).
Modern numerical analysis does not seek exact answers, because exact answers are often impossible to obtain in practice. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors.
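For a concrete sketch of this trade-off (illustrative code, not from the article; the function, interval, and tolerance are arbitrary choices), the bisection method halves a bracketing interval at every step, so the error after the loop is provably below the requested tolerance:

    def bisect(f, a, b, tol=1e-12):
        # Assumes f is continuous and f(a), f(b) have opposite signs,
        # so the interval [a, b] brackets a root.
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        while (b - a) / 2 > tol:
            m = (a + b) / 2
            fm = f(m)
            if fa * fm <= 0:
                b, fb = m, fm  # the root lies in [a, m]
            else:
                a, fa = m, fm  # the root lies in [m, b]
        return (a + b) / 2  # within tol of a true root

    # Example: approximate sqrt(2) as the root of x^2 - 2 on [1, 2].
    print(bisect(lambda x: x * x - 2, 1.0, 2.0))  # 1.4142135623...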
Numerical analysis naturally finds applications in all fields of engineering and the physical sciences, but in the 21st century, the life sciences and even the arts have adopted elements of scientific computations. Ordinary differential equations appear in celestial mechanics (planets, stars and galaxies); numerical linear algebra is important for data analysis; stochastic differential equations and Markov chains are essential in simulating living cells for medicine and biology.
Vector analysis
Vector analysis, also called vector calculus, is a branch of mathematical analysis dealing with vector-valued functions.
Scalar analysis
Scalar analysis is a branch of mathematical analysis dealing with values related to scale as opposed to direction. Values such as temperature are scalar because they describe the magnitude of a value without regard to direction, force, or displacement that value may or may not have.
Tensor analysis
Other topics
Calculus of variations deals with extremizing functionals, as opposed to ordinary calculus which deals with functions.
Harmonic analysis deals with the representation of functions or signals as the superposition of basic waves.
Geometric analysis involves the use of geometrical methods in the study of partial differential equations and the application of the theory of partial differential equations to geometry.
Clifford analysis, the study of Clifford valued functions that are annihilated by Dirac or Dirac-like operators, termed in general as monogenic or Clifford analytic functions.
p-adic analysis, the study of analysis within the context of p-adic numbers, which differs in some interesting and surprising ways from its real and complex counterparts.
Non-standard analysis, which investigates the hyperreal numbers and their functions and gives a rigorous treatment of infinitesimals and infinitely large numbers.
Computable analysis, the study of which parts of analysis can be carried out in a computable manner.
Stochastic calculus – analytical notions developed for stochastic processes.
Set-valued analysis – applies ideas from analysis and topology to set-valued functions.
Convex analysis, the study of convex sets and functions.
Idempotent analysis – analysis in the context of an idempotent semiring, where the lack of an additive inverse is compensated somewhat by the idempotent rule A + A = A.
Tropical analysis – analysis of the idempotent semiring called the tropical semiring (or max-plus algebra/min-plus algebra).
Constructive analysis, which is built upon a foundation of constructive, rather than classical, logic and set theory.
Intuitionistic analysis, which is developed from constructive logic like constructive analysis but also incorporates choice sequences.
Paraconsistent analysis, which is built upon a foundation of paraconsistent, rather than classical, logic and set theory.
Smooth infinitesimal analysis, which is developed in a smooth topos.
Applications
Techniques from analysis are also found in other areas such as:
Physical sciences
The vast majority of classical mechanics, relativity, and quantum mechanics is based on applied analysis, and differential equations in particular. Examples of important differential equations include Newton's second law, the Schrödinger equation, and the Einstein field equations.
Functional analysis is also a major factor in quantum mechanics.
Signal processing
When processing signals, such as audio, radio waves, light waves, seismic waves, and even images, Fourier analysis can isolate individual components of a compound waveform, concentrating them for easier detection or removal. A large family of signal processing techniques consist of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation.
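A minimal sketch of that pipeline (assuming NumPy; the test signal and the 50 Hz cutoff are invented for illustration):

    import numpy as np

    fs = 1000                                    # sampling rate in Hz
    t = np.arange(0, 1, 1 / fs)                  # one second of samples
    x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

    spectrum = np.fft.rfft(x)                    # Fourier-transform the signal
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    spectrum[freqs > 50] = 0                     # manipulate: zero bins above 50 Hz
    filtered = np.fft.irfft(spectrum, n=len(x))  # reverse the transformation

    # 'filtered' now retains essentially only the 5 Hz component.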
Other areas of mathematics
Techniques from analysis are used in many areas of mathematics, including:
Analytic number theory
Analytic combinatorics
Continuous probability
Differential entropy in information theory
Differential games
Differential geometry, the application of calculus to specific mathematical spaces known as manifolds that possess a complicated internal structure but behave in a simple manner locally.
Differentiable manifolds
Differential topology
Partial differential equations
Famous textbooks
Foundation of Analysis: The Arithmetic of Whole Rational, Irrational and Complex Numbers, by Edmund Landau
Introductory Real Analysis, by Andrey Kolmogorov, Sergei Fomin
Differential and Integral Calculus (3 volumes), by Grigorii Fichtenholz
The Fundamentals of Mathematical Analysis (2 volumes), by Grigorii Fichtenholz
A Course Of Mathematical Analysis (2 volumes), by Sergey Nikolsky
Mathematical Analysis (2 volumes), by Vladimir Zorich
A Course of Higher Mathematics (5 volumes, 6 parts), by Vladimir Smirnov
Differential And Integral Calculus, by Nikolai Piskunov
A Course of Mathematical Analysis, by Aleksandr Khinchin
Mathematical Analysis: A Special Course, by Georgiy Shilov
Theory of Functions of a Real Variable (2 volumes), by Isidor Natanson
Problems in Mathematical Analysis, by Boris Demidovich
Problems and Theorems in Analysis (2 volumes), by George Pólya, Gábor Szegő
Mathematical Analysis: A Modern Approach to Advanced Calculus, by Tom Apostol
Principles of Mathematical Analysis, by Walter Rudin
Real Analysis: Measure Theory, Integration, and Hilbert Spaces, by Elias Stein
Complex Analysis: An Introduction to the Theory of Analytic Functions of One Complex Variable, by Lars Ahlfors
Complex Analysis, by Elias Stein
Functional Analysis: Introduction to Further Topics in Analysis, by Elias Stein
Analysis (2 volumes), by Terence Tao
Analysis (3 volumes), by Herbert Amann, Joachim Escher
Real and Functional Analysis, by Vladimir Bogachev, Oleg Smolyanov
Real and Functional Analysis, by Serge Lang
See also
Constructive analysis
History of calculus
Hypercomplex analysis
Multiple rule-based problems
Multivariable calculus
Paraconsistent logic
Smooth infinitesimal analysis
Timeline of calculus and mathematical analysis
References
Further reading
External links
Earliest Known Uses of Some of the Words of Mathematics: Calculus & Analysis
Basic Analysis: Introduction to Real Analysis by Jiri Lebl (Creative Commons BY-NC-SA)
Mathematical Analysis – Encyclopædia Britannica
Calculus and Analysis | Mathematical analysis | [
"Mathematics"
] | 3,829 | [
"Mathematical analysis"
] |
48,397 | https://en.wikipedia.org/wiki/Isocyanate | In organic chemistry, isocyanate is the functional group with the formula R−N=C=O. Organic compounds that contain an isocyanate group are referred to as isocyanates. An organic compound with two isocyanate groups is known as a diisocyanate. Diisocyanates are manufactured for the production of polyurethanes, a class of polymers.
Isocyanates should not be confused with cyanate esters and isocyanides, very different families of compounds. The cyanate (cyanate ester) functional group (R−O−C≡N) is arranged differently from the isocyanate group (R−N=C=O). Isocyanides have the connectivity R−N≡C, lacking the oxygen of the cyanate groups.
Structure and bonding
In terms of bonding, isocyanates are closely related to carbon dioxide (CO2) and carbodiimides (C(NR)2). The C−N=C=O unit that defines isocyanates is planar, and the N=C=O linkage is nearly linear. In phenyl isocyanate, the C=N and C=O distances are respectively 1.195 and 1.173 Å. The C−N=C angle is 134.9° and the N=C=O angle is 173.1°.
Production
Isocyanates are usually produced from amines by phosgenation, i.e. treating with phosgene:
R−NH2 + COCl2 → R−NCO + 2 HCl
These reactions proceed via the intermediacy of a carbamoyl chloride (R−NH−C(O)Cl). Owing to the hazardous nature of phosgene, the production of isocyanates requires special precautions. A laboratory-safe variation masks the phosgene as oxalyl chloride. Also, oxalyl chloride can be used to form acyl isocyanates from primary amides, which phosgene typically dehydrates to nitriles instead.
Another route to isocyanates entails addition of isocyanic acid to alkenes. Complementarily, alkyl isocyanates form by displacement reactions involving alkyl halides and alkali metal cyanates.
Aryl isocyanates can be synthesized from carbonylation of nitro- and nitrosoarenes; a palladium catalyst is necessary to avoid side-reactions of the nitrene intermediate.
Three rearrangement reactions involving nitrenes give isocyanates:
Schmidt reaction, a reaction where a carboxylic acid is treated with ammonia and hydrazoic acid yielding an isocyanate.
Curtius rearrangement degradation of an acyl azide to an isocyanate and nitrogen gas.
Lossen rearrangement, the conversion of a hydroxamic acid to an isocyanate via the formation of an O-acyl, sulfonyl, or phosphoryl intermediate.
An isocyanate is also the immediate product of the Hofmann rearrangement, but typically hydrolyzes under reaction conditions.
Reactivity
With nucleophiles
Isocyanates are electrophiles, and as such they are reactive toward a variety of nucleophiles including alcohols, amines, and even water, with a higher reactivity than the structurally analogous isothiocyanates.
Upon treatment with an alcohol, an isocyanate forms a urethane linkage:
R−NCO + R′OH → R−NH−C(O)−O−R′
where R and R′ are alkyl or aryl groups.
If a diisocyanate is treated with a compound containing two or more hydroxyl groups, such as a diol or a polyol, polymer chains are formed, which are known as polyurethanes.
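Schematically (a generic textbook representation, not specific to any product), the diisocyanate–diol polymerization produces the carbamate repeat unit:

    n OCN−R−NCO + n HO−R′−OH → −[C(O)−NH−R−NH−C(O)−O−R′−O−]n−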
Isocyanates react with water to form carbon dioxide:
R−NCO + H2O → R−NH2 + CO2
This reaction is exploited in tandem with the production of polyurethane to give polyurethane foams. The carbon dioxide functions as a blowing agent.
Isocyanates also react with amines to give ureas:
R−NCO + R′−NH2 → R−NH−C(O)−NH−R′
The addition of an isocyanate to a urea gives a biuret:
R″−NCO + R−NH−C(O)−NH−R′ → R−N[C(O)−NH−R″]−C(O)−NH−R′
Reaction between a di-isocyanate and a compound containing two or more amine groups produces long polymer chains known as polyureas.
Carbodiimides are produced by the decarboxylation of alkyl and aryl isocyanates using phosphine oxides as a catalyst:
2 R−NCO → R−N=C=N−R + CO2
Cyclization
Isocyanates also can react with themselves. Aliphatic diisocyanates can trimerise to form substituted isocyanuric acid groups. This can be seen in the formation of polyisocyanurate resins (PIR), which are commonly used as rigid thermal insulation. Isocyanates participate in Diels–Alder reactions, functioning as dienophiles.
Rearrangement reactions
Isocyanates are common intermediates in the synthesis of primary amines via hydrolysis:
R−NCO + H2O → R−NH2 + CO2
Hofmann rearrangement, a reaction in which a primary amide is treated with a strong oxidizer such as sodium hypobromite or lead tetraacetate to form an isocyanate intermediate.
Common isocyanates
The global market for diisocyanates in the year 2000 was 4.4 million tonnes, of which 61.3% was methylene diphenyl diisocyanate (MDI), 34.1% was toluene diisocyanate (TDI), 3.4% was the total for hexamethylene diisocyanate (HDI) and isophorone diisocyanate (IPDI), and 1.2% was the total for various others. A monofunctional isocyanate of industrial significance is methyl isocyanate (MIC), which is used in the manufacture of pesticides.
Common applications
MDI is commonly used in the manufacture of rigid foams and surface coatings. Polyurethane foam boards are used in construction for insulation. TDI is commonly used in applications where flexible foams are needed, such as furniture and bedding. Both MDI and TDI are used in the making of adhesives and sealants due to their weather-resistant properties, and both are widely used in spray-applied insulation due to the speed and flexibility of application. Foams can be sprayed into structures and harden in place or retain some flexibility as required by the application. HDI is commonly utilized in high-performance surface-coating applications, including automotive paints.
Health and safety
The risks of isocyanates were brought to the world's attention with the 1984 Bhopal disaster, which caused the deaths of nearly 4000 people from the accidental release of methyl isocyanate. In 2008, the same chemical was involved in an explosion at a pesticide manufacturing plant in West Virginia.
LD50s for isocyanates are typically several hundred milligrams per kilogram. Despite this low acute toxicity, an extremely low short-term exposure limit (STEL) of 0.07 mg/m3 is the legal limit for all isocyanates (except methyl isocyanate: 0.02 mg/m3) in the United Kingdom. These limits are set to protect workers from chronic health effects such as occupational asthma, contact dermatitis, or irritation of the respiratory tract.
Since they are used in spraying applications, the properties of their aerosols have attracted attention. In the U.S., OSHA conducted a National Emphasis Program on isocyanates starting in 2013 to make employers and workers more aware of the health risks.
Polyurethanes have variable curing times, and the presence of free isocyanates in foams vary accordingly.
Both the US National Toxicology Program (NTP) and the International Agency for Research on Cancer (IARC) have evaluated TDI as a potential human carcinogen and Group 2B "possibly carcinogenic to humans". MDI appears to be relatively safer and is unlikely to be a human carcinogen. The IARC evaluates MDI as Group 3, "not classifiable as to its carcinogenicity in humans".
All major producers of MDI and TDI are members of the International Isocyanate Institute, which promotes the safe handling of MDI and TDI.
Hazards
Toxicity
Isocyanates can present respiratory hazards as particulates, vapors or aerosols. Autobody shop workers are a very commonly examined population for isocyanate exposure, as they are repeatedly exposed when spray painting automobiles and can be exposed when installing truck bed liners. Hypersensitivity pneumonitis has a slower onset and features chronic inflammation that can be seen on imaging of the lungs. Occupational asthma is a worrisome outcome of respiratory sensitization to isocyanates, as it can be acutely fatal. Diagnosis of occupational asthma is generally made using pulmonary function testing (PFT) performed by pulmonology or occupational medicine physicians. Like other asthma, occupational asthma causes episodic shortness of breath and wheezing. Both the dose and duration of exposure to isocyanates can lead to respiratory sensitization. Dermal exposure to isocyanates can also sensitize an exposed person to respiratory disease.
Dermal exposures can occur during mixing, spraying coatings, or applying and spreading coatings manually. Dermal exposure to isocyanates is known to lead to respiratory sensitization. Even when the right personal protective equipment (PPE) is used, exposures can occur on body areas not completely covered. Isocyanates can also permeate improper PPE, necessitating frequent changes of both disposable gloves and suits once they become saturated.
Flammability
Methyl isocyanate (MIC) is highly flammable. MDI and TDI are much less flammable. Flammability of materials is a consideration in furniture design. The specific flammability hazard is noted on the safety data sheet (SDS) for specific isocyanates.
Hazard minimization
Industrial science attempts to minimize the hazards of isocyanates through multiple techniques. The EPA has sponsored ongoing research on polyurethane production without isocyanates. Where isocyanates are unavoidable but interchangeable, substituting a less hazardous isocyanate may control hazards. Ventilation and automation can also minimize worker exposure to the isocyanates used.
If human workers must enter isocyanate-contaminated regions, personal protective equipment (PPE) can reduce their intake. In general, workers wear eye protection, gloves, and coveralls to reduce dermal exposure. For some autobody paint and clear-coat spraying applications, a full-face mask is required.
The US Occupational Safety and Health Administration (OSHA) requires frequent training to ensure isocyanate hazards are appropriately minimized. Moreover, OSHA requires standardized isocyanate concentration measurements to avoid violating occupational exposure limits. In the case of MDI, OSHA expects sampling with glass-fiber filters at standard air flow rates, and then liquid chromatography.
Combined industrial hygiene and medical surveillance can significantly reduce occupational asthma incidence. Biological tests exist to identify isocyanate exposure; the US Navy uses regular pulmonary function testing and screening questionnaires.
Emergency management is a complex process of preparation and should be considered in a setting where a release of bulk chemicals may threaten the well-being of the public. In the Bhopal disaster, an uncontrolled MIC release killed thousands, affected hundreds of thousands more, and spurred the development of modern disaster preparation.
Occupational exposure limits
Exposure limits can be expressed as ceiling limits (a maximal value), short-term exposure limits (STEL, a 15-minute exposure limit) or an 8-hour time-weighted average limit (TWA). Such published limits are a sampling, not exhaustive, as less common isocyanates also have specific limits within the United States, and in some regions there are limits on total isocyanate, which recognizes some of the uncertainty regarding the safety of mixtures of chemicals as compared to pure chemical exposures. For example, while there is no OSHA limit for HDI, NIOSH has a REL of 5 ppb for an 8-hour TWA and a ceiling limit of 20 ppb, consistent with the recommendations for MDI.
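As an arithmetic sketch (hypothetical helper and numbers, for illustration only), an 8-hour TWA weights each measured concentration by its duration:

    def twa_8h(segments):
        # segments: (hours, concentration) pairs covering the shift;
        # unlisted time counts as zero exposure.
        return sum(h * c for h, c in segments) / 8.0

    # Hypothetical shift: 2 h at 15 ppb, 1 h at 40 ppb, rest unexposed.
    print(twa_8h([(2, 15), (1, 40)]))  # 8.75 ppb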
Regulation
United States
The Occupational Safety and Health Administration (OSHA) is the regulatory body covering worker safety. OSHA puts forth a permissible exposure limit (PEL) of 20 ppb for MDI and detailed technical guidance on exposure assessment.
The National Institute for Occupational Safety and Health (NIOSH) is the agency responsible for providing the research and recommendations regarding workplace safety, while OSHA is more of an enforcement body. NIOSH is responsible for producing the science that can result in recommended exposure limits (RELs), which can be lower than the PEL. OSHA is tasked with enforcement and defending the enforceable limits (PELs). In 1992, when OSHA reduced the PEL for TDI to the NIOSH REL, the PEL reduction was challenged in court, and the reduction was reversed.
The Environmental Protection Agency (EPA) is also involved in the regulation of isocyanates with regard to the environment and also non-worker persons that might be exposed.
The American Conference of Governmental Industrial Hygienists (ACGIH) is a non-government organization that publishes exposure guidance known as threshold limit values (TLVs). The TLV is not an OSHA-enforceable value, unless the PEL is the same.
European Union
The European Chemicals Agency (ECHA) provides regulatory oversight of chemicals used within the European Union. ECHA has been implementing policy aimed at limiting worker exposure through elimination by lower allowable concentrations in products and mandatory worker training, an administrative control. Within the European Union, many nations set their own occupational exposure limits for isocyanates.
International groups
The United Nations, through the World Health Organization (WHO) together with the International Labour Organization (ILO) and United Nations Environment Programme (UNEP), collaborate on the International Programme on Chemical Safety (IPCS) to publish summary documents on chemicals. The IPCS published one such document in 2000 summarizing the status of scientific knowledge on MDI.
The IARC evaluates the hazard data on chemicals and assigns a rating on the risk of carcinogenesis. In the case of TDI, the final evaluation is possibly carcinogenic to humans (Group 2B). For MDI, the final evaluation is not classifiable as to its carcinogenicity to humans (Group 3).
The International Isocyanate Institute is an international industry consortium that seeks to promote the safe utilization of isocyanates by promulgating best practices.
See also
Isothiocyanate
Polymethylene polyphenylene isocyanate
References
External links
NIOSH Safety and Health Topic: Isocyanates, from the website of the National Institute for Occupational Safety and Health (NIOSH)
Health and Safety Executive, website of the UK Health and Safety Executive, useful search terms on this site — isocyanates, MVR, asthma
International Isocyanate Institute
Safe Working Procedure for Isocyanate-Containing Products, June 200.
Isocyanates – Measurement Methodology, Exposure and Effects, Swedish National Institute for Working Life Workshop (1999)
Health and Safety Executive, Guidance Note (EH16) Isocyanates: Toxic Hazards and Precautions (1984)
The Society of the Plastics Industry – Technical Bulletin AX119 MDI-Based Polyurethane
Foam Systems: Guidelines for Safe Handling and Disposal (1993)
An occupational hygiene assessment of the use and control of isocyanates in the UK by Hilary A Cowie et al. HSE Research Report RR311/2005. Prepared by the Institute of Occupational Medicine for the Health and Safety Executive
Functional groups
Commodity chemicals
Chemical hazards | Isocyanate | [
"Chemistry"
] | 3,291 | [
"Isocyanates",
"Products of chemical industry",
"Functional groups",
"Chemical hazards",
"Commodity chemicals"
] |
48,402 | https://en.wikipedia.org/wiki/Internet%20backbone | The Internet backbone is the principal data routes between large, strategically interconnected computer networks and core routers of the Internet. These data routes are hosted by commercial, government, academic and other high-capacity network centers as well as the Internet exchange points and network access points, which exchange Internet traffic internationally. Internet service providers (ISPs) participate in Internet backbone traffic through privately negotiated interconnection agreements, primarily governed by the principle of settlement-free peering.
The Internet, and consequently its backbone networks, do not rely on central control or coordinating facilities, nor do they implement any global network policies. The resilience of the Internet results from its principal architectural features, such as the idea of placing as few network state and control functions as possible in the network elements, instead relying on the endpoints of communication to handle most of the processing to ensure data integrity, reliability, and authentication. In addition, the high degree of redundancy of today's network links and sophisticated real-time routing protocols provide alternate paths of communications for load balancing and congestion avoidance.
The largest providers, known as Tier 1 networks, have such comprehensive networks that they do not purchase transit agreements from other providers.
Infrastructure
The Internet backbone consists of many networks owned by numerous companies.
Fiber-optic communication remains the medium of choice for Internet backbone providers for several reasons. Fiber-optics allow for fast data speeds and large bandwidth, suffer relatively little attenuation — allowing them to cover long distances with few repeaters — and are immune to crosstalk and other forms of electromagnetic interference.
The real-time routing protocols and redundancy built into the backbone are also able to reroute traffic in case of a failure. The data rates of backbone lines have increased over time. In 1998, all of the United States' backbone networks still utilized data rates as low as 45 Mbit/s. However, technological improvements allowed 41 percent of backbones to have data rates of 2,488 Mbit/s or faster by the mid-2000s.
History
The first packet-switched computer networks, the NPL network and the ARPANET were interconnected in 1973 via University College London. The ARPANET used a backbone of routers called Interface Message Processors. Other packet-switched computer networks proliferated starting in the 1970s, eventually adopting TCP/IP protocols or being replaced by newer networks.
The National Science Foundation created the National Science Foundation Network (NSFNET) in 1986 by funding six networking sites using interconnecting links, with peering to the ARPANET. In 1987, this new network was upgraded to T1 links for thirteen sites. These sites included regional networks that in turn connected over 170 other networks. IBM, MCI and Merit upgraded the backbone to 45 Mbit/s (T3) bandwidth in 1991. The combination of the ARPANET and NSFNET became known as the Internet. Within a few years, the dominance of the NSFNET backbone led to the decommissioning of the redundant ARPANET infrastructure in 1990.
In the early days of the Internet, backbone providers exchanged their traffic at government-sponsored network access points (NAPs), until the government privatized the Internet and transferred the NAPs to commercial providers.
Modern backbone
Because of the overlap and synergy between long-distance telephone networks and backbone networks, the largest long-distance voice carriers such as AT&T Inc., Verizon, Sprint, and Lumen also own some of the largest Internet backbone networks. These backbone providers sell their services to Internet service providers.
Each ISP has its own contingency network and is equipped with an outsourced backup. These networks are intertwined and crisscrossed to create a redundant network. Many companies operate their own backbones which are all interconnected at various Internet exchange points around the world. In order for data to navigate this web, it is necessary to have backbone routers—routers powerful enough to handle information—on the Internet backbone that are capable of directing data to other routers in order to send it to its final destination. Without them, information would be lost.
Economy of the backbone
Peering agreements
Backbone providers of roughly equivalent market share regularly create agreements called peering agreements, which allow the use of another's network to hand off traffic where it is ultimately delivered. Usually they do not charge each other for this, as the companies get revenue from their customers.
Regulation
Antitrust authorities have acted to ensure that no provider grows large enough to dominate the backbone market. In the United States, the Federal Communications Commission has decided not to monitor the competitive aspects of the Internet backbone interconnection relationships as long as the market continues to function well.
Transit agreements
Backbone providers of unequal market share usually create agreements called transit agreements, and usually contain some type of monetary agreement.
Regional backbone
Egypt
During the 2011 Egyptian revolution, the government of Egypt shut down the four major ISPs on January 27, 2011 at approximately 5:20 p.m. EST. The networks had not been physically interrupted, as the Internet transit traffic through Egypt was unaffected. Instead, the government shut down the Border Gateway Protocol (BGP) sessions announcing local routes. BGP is responsible for routing traffic between ISPs.
Only one of Egypt's ISPs was allowed to continue operations. The ISP Noor Group provided connectivity only to Egypt's stock exchange as well as some government ministries. Other ISPs started to offer free dial-up Internet access in other countries.
Europe
Europe is a major contributor to the growth of the international backbone as well as a contributor to the growth of Internet bandwidth. In 2003, Europe was credited with 82 percent of the world's international cross-border bandwidth. The company Level 3 Communications began to launch a line of dedicated Internet access and virtual private network services in 2011, giving large companies direct access to the Level 3 backbone. Connecting companies directly to the backbone provides enterprises faster Internet service, which meets a large market demand.
Caucasus
Certain countries around the Caucasus have very simple backbone networks. In 2011, a 70-year-old woman in Georgia pierced a fiber backbone line with a shovel and left the neighboring country of Armenia without Internet access for 12 hours. The country has since made major developments to the fiber backbone infrastructure, but progress is slow due to lack of government funding.
Japan
Japan's internet backbone requires a high degree of efficiency to support high demand for the Internet and technology in general. Japan had over 86 million Internet users in 2009, and was projected to climb to nearly 91 million Internet users by 2015. Since Japan has a demand for fiber to the home, Japan is looking into tapping a fiber-optic backbone line of Nippon Telegraph and Telephone (NTT), a domestic backbone carrier, in order to deliver this service at cheaper prices.
China
In some instances, the companies that own certain sections of the Internet backbone's physical infrastructure depend on competition in order to keep the Internet market profitable. This can be seen most prominently in China. Since China Telecom and China Unicom have acted as the sole Internet service providers to China for some time, smaller companies cannot compete with them in negotiating the interconnection settlement prices that keep the Internet market profitable in China. This imposition of discriminatory pricing by the large companies then results in market inefficiencies and stagnation, and ultimately affects the efficiency of the Internet backbone networks that service the nation.
See also
Default-free zone
Internet2
Mbone
Network service provider
Root name server
Packet switching
Trunking
Further reading
Greenstein, Shane. 2020. "The Basic Economics of Internet Infrastructure." Journal of Economic Perspectives, 34 (2): 192-214. DOI: 10.1257/jep.34.2.192
References
External links
About Level 3
Russ Haynal's ISP Page
US Internet backbone maps
Automatically generated backbone map of the Internet
IPv6 Backbone Network Topology
Backbone, Internet
IT infrastructure | Internet backbone | [
"Technology"
] | 1,602 | [
"Information technology",
"Internet architecture",
"IT infrastructure"
] |
48,404 | https://en.wikipedia.org/wiki/Ring%20%28mathematics%29 | In mathematics, rings are algebraic structures that generalize fields: multiplication need not be commutative and multiplicative inverses need not exist. Informally, a ring is a set equipped with two binary operations satisfying properties analogous to those of addition and multiplication of integers. Ring elements may be numbers such as integers or complex numbers, but they may also be non-numerical objects such as polynomials, square matrices, functions, and power series.
Formally, a ring is a set endowed with two binary operations called addition and multiplication such that the ring is an abelian group with respect to the addition operator, and the multiplication operator is associative, is distributive over the addition operation, and has a multiplicative identity element. (Some authors define rings without requiring a multiplicative identity and instead call the structure defined above a ring with identity; see Variations on the definition below.)
Whether a ring is commutative has profound implications on its behavior. Commutative algebra, the theory of commutative rings, is a major branch of ring theory. Its development has been greatly influenced by problems and ideas of algebraic number theory and algebraic geometry. The simplest commutative rings are those that admit division by non-zero elements; such rings are called fields.
Examples of commutative rings include the set of integers with their standard addition and multiplication, the set of polynomials with their addition and multiplication, the coordinate ring of an affine algebraic variety, and the ring of integers of a number field. Examples of noncommutative rings include the ring of $n \times n$ real square matrices with $n \ge 2$, group rings in representation theory, operator algebras in functional analysis, rings of differential operators, and cohomology rings in topology.
The conceptualization of rings spanned the 1870s to the 1920s, with key contributions by Dedekind, Hilbert, Fraenkel, and Noether. Rings were first formalized as a generalization of Dedekind domains that occur in number theory, and of polynomial rings and rings of invariants that occur in algebraic geometry and invariant theory. They later proved useful in other branches of mathematics such as geometry and analysis.
Definition
A ring is a set $R$ equipped with two binary operations + (addition) and ⋅ (multiplication) satisfying the following three sets of axioms, called the ring axioms:
$R$ is an abelian group under addition, meaning that:
$(a + b) + c = a + (b + c)$ for all $a, b, c$ in $R$ (that is, + is associative).
$a + b = b + a$ for all $a, b$ in $R$ (that is, + is commutative).
There is an element $0$ in $R$ such that $a + 0 = a$ for all $a$ in $R$ (that is, $0$ is the additive identity).
For each $a$ in $R$ there exists $-a$ in $R$ such that $a + (-a) = 0$ (that is, $-a$ is the additive inverse of $a$).
$R$ is a monoid under multiplication, meaning that:
$(a \cdot b) \cdot c = a \cdot (b \cdot c)$ for all $a, b, c$ in $R$ (that is, ⋅ is associative).
There is an element $1$ in $R$ such that $a \cdot 1 = a$ and $1 \cdot a = a$ for all $a$ in $R$ (that is, $1$ is the multiplicative identity).
Multiplication is distributive with respect to addition, meaning that:
$a \cdot (b + c) = (a \cdot b) + (a \cdot c)$ for all $a, b, c$ in $R$ (left distributivity).
$(b + c) \cdot a = (b \cdot a) + (c \cdot a)$ for all $a, b, c$ in $R$ (right distributivity).
In notation, the multiplication symbol ⋅ is often omitted, in which case $a \cdot b$ is written as $ab$.
Variations on the definition
In the terminology of this article, a ring is defined to have a multiplicative identity, while a structure with the same axiomatic definition but without the requirement for a multiplicative identity is instead called a "rng" (IPA: /rʊŋ/) with a missing "i". For example, the set of even integers with the usual + and ⋅ is a rng, but not a ring. As explained below, many authors apply the term "ring" without requiring a multiplicative identity.
Although ring addition is commutative, ring multiplication is not required to be commutative: $ab$ need not necessarily equal $ba$. Rings that also satisfy commutativity for multiplication (such as the ring of integers) are called commutative rings. Books on commutative algebra or algebraic geometry often adopt the convention that ring means commutative ring, to simplify terminology.
In a ring, multiplicative inverses are not required to exist. A nonzero commutative ring in which every nonzero element has a multiplicative inverse is called a field.
The additive group of a ring is the underlying set equipped with only the operation of addition. Although the definition requires that the additive group be abelian, this can be inferred from the other ring axioms. The proof makes use of the "1", and does not work in a rng. (For a rng, omitting the axiom of commutativity of addition leaves it inferable from the remaining rng assumptions only for elements that are products: $ab + cd = cd + ab$.)
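The inference runs as follows: expanding $(1 + 1)(a + b)$ with the two distributive laws in either order gives

$(1 + 1)(a + b) = (a + b) + (a + b) = a + b + a + b$ and $(1 + 1)(a + b) = (1 + 1)a + (1 + 1)b = a + a + b + b,$

so $a + b + a + b = a + a + b + b$, and cancelling the leading $a$ and trailing $b$ with additive inverses yields $b + a = a + b$.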
There are a few authors who use the term "ring" to refer to structures in which there is no requirement for multiplication to be associative. For these authors, every algebra is a "ring".
Illustration
The most familiar example of a ring is the set of all integers $\mathbf{Z}$, consisting of the numbers
$\ldots, -4, -3, -2, -1, 0, 1, 2, 3, 4, \ldots$
The axioms of a ring were elaborated as a generalization of familiar properties of addition and multiplication of integers.
Some properties
Some basic properties of a ring follow immediately from the axioms:
The additive identity is unique.
The additive inverse of each element is unique.
The multiplicative identity is unique.
For any element $x$ in a ring $R$, one has $x \cdot 0 = 0 = 0 \cdot x$ (zero is an absorbing element with respect to multiplication) and $(-1) \cdot x = -x$.
If $0 = 1$ in a ring $R$ (or more generally, $0$ is a unit element), then $R$ has only one element, and is called the zero ring.
If a ring $R$ contains the zero ring as a subring, then $R$ itself is the zero ring.
The binomial formula $(a + b)^n = \sum_{k=0}^{n} \binom{n}{k} a^k b^{n-k}$ holds for any $a$ and $b$ satisfying $ab = ba$.
Example: Integers modulo 4
Equip the set $\mathbf{Z}_4 = \{\overline{0}, \overline{1}, \overline{2}, \overline{3}\}$ with the following operations:
The sum $\overline{x} + \overline{y}$ in $\mathbf{Z}_4$ is the remainder when the integer $x + y$ is divided by $4$ (as $x + y$ is always smaller than $8$, this remainder is either $x + y$ or $x + y - 4$). For example, $\overline{2} + \overline{3} = \overline{1}$ and $\overline{3} + \overline{3} = \overline{2}$.
The product $\overline{x} \cdot \overline{y}$ in $\mathbf{Z}_4$ is the remainder when the integer $xy$ is divided by $4$. For example, $\overline{2} \cdot \overline{3} = \overline{2}$ and $\overline{3} \cdot \overline{3} = \overline{1}$.
Then $\mathbf{Z}_4$ is a ring: each axiom follows from the corresponding axiom for $\mathbf{Z}$. If $x$ is an integer, the remainder of $x$ when divided by $4$ may be considered as an element of $\mathbf{Z}_4$, and this element is often denoted by "$x \bmod 4$" or $\overline{x}$, which is consistent with the notation for $\overline{0}, \overline{1}, \overline{2}, \overline{3}$. The additive inverse of any $\overline{x}$ in $\mathbf{Z}_4$ is $-\overline{x} = \overline{-x}$. For example, $-\overline{3} = \overline{-3} = \overline{1}$.
$\mathbf{Z}_4$ has the subrng $\{\overline{0}, \overline{2}\}$, and if $p$ is prime, then $\mathbf{Z}_p$ has no nonzero proper subrngs.
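A quick computational check of these rules (an illustrative sketch; the class name is ours):

    class Z4:
        # Integers modulo 4 with the ring operations defined above.
        def __init__(self, n):
            self.n = n % 4
        def __add__(self, other):
            return Z4(self.n + other.n)
        def __mul__(self, other):
            return Z4(self.n * other.n)
        def __neg__(self):
            return Z4(-self.n)
        def __eq__(self, other):
            return self.n == other.n

    assert Z4(2) + Z4(3) == Z4(1)  # sum of 2 and 3 is 1
    assert Z4(3) * Z4(3) == Z4(1)  # product of 3 and 3 is 1
    assert -Z4(3) == Z4(1)         # additive inverse of 3 is 1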
Example: 2-by-2 matrices
The set of 2-by-2 square matrices with entries in a field $F$ is
$\operatorname{M}_2(F) = \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \,\middle|\, a, b, c, d \in F \right\}.$
With the operations of matrix addition and matrix multiplication, $\operatorname{M}_2(F)$ satisfies the above ring axioms. The element $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ is the multiplicative identity of the ring. If $A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ and $B = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$, then $AB = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$ while $BA = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$; this example shows that the ring is noncommutative.
More generally, for any ring , commutative or not, and any nonnegative integer , the square matrices of dimension with entries in form a ring; see Matrix ring.
History
Dedekind
The study of rings originated from the theory of polynomial rings and the theory of algebraic integers. In 1871, Richard Dedekind defined the concept of the ring of integers of a number field. In this context, he introduced the terms "ideal" (inspired by Ernst Kummer's notion of ideal number) and "module" and studied their properties. Dedekind did not use the term "ring" and did not define the concept of a ring in a general setting.
Hilbert
The term "Zahlring" (number ring) was coined by David Hilbert in 1892 and published in 1897. In 19th century German, the word "Ring" could mean "association", which is still used today in English in a limited sense (for example, spy ring), so if that were the etymology then it would be similar to the way "group" entered mathematics by being a non-technical word for "collection of related things". According to Harvey Cohn, Hilbert used the term for a ring that had the property of "circling directly back" to an element of itself (in the sense of an equivalence). Specifically, in a ring of algebraic integers, all high powers of an algebraic integer can be written as an integral combination of a fixed set of lower powers, and thus the powers "cycle back". For instance, if then:
and so on; in general, is going to be an integral linear combination of , , and .
Fraenkel and Noether
The first axiomatic definition of a ring was given by Adolf Fraenkel in 1915, but his axioms were stricter than those in the modern definition. For instance, he required every non-zero-divisor to have a multiplicative inverse. In 1921, Emmy Noether gave a modern axiomatic definition of commutative rings (with and without 1) and developed the foundations of commutative ring theory in her paper Idealtheorie in Ringbereichen.
Multiplicative identity and the term "ring"
Fraenkel's axioms for a "ring" included that of a multiplicative identity, whereas Noether's did not.
Most or all books on algebra up to around 1960 followed Noether's convention of not requiring a $1$ for a "ring". Starting in the 1960s, it became increasingly common to see books including the existence of $1$ in the definition of "ring", especially in advanced books by notable authors such as Artin, Bourbaki, Eisenbud, and Lang. There are also books published as late as 2022 that use the term without the requirement for a $1$. Likewise, the Encyclopedia of Mathematics does not require unit elements in rings. In a research article, the authors often specify which definition of ring they use in the beginning of that article.
Gardner and Wiegandt assert that, when dealing with several objects in the category of rings (as opposed to working with a fixed ring), if one requires all rings to have a $1$, then some consequences include the lack of existence of infinite direct sums of rings, and that proper direct summands of rings are not subrings. They conclude that "in many, maybe most, branches of ring theory the requirement of the existence of a unity element is not sensible, and therefore unacceptable." Poonen makes the counterargument that the natural notion for rings would be the direct product rather than the direct sum. However, his main argument is that rings without a multiplicative identity are not totally associative, in the sense that they do not contain the product of any finite sequence of ring elements, including the empty sequence.
Authors who follow either convention for the use of the term "ring" may use one of the following terms to refer to objects satisfying the other convention:
to include a requirement for a multiplicative identity: "unital ring", "unitary ring", "unit ring", "ring with unity", "ring with identity", "ring with a unit", or "ring with 1".
to omit a requirement for a multiplicative identity: "rng" or "pseudo-ring", although the latter may be confusing because it also has other meanings.
Basic examples
Commutative rings
The prototypical example is the ring of integers with the two operations of addition and multiplication.
The rational, real and complex numbers are commutative rings of a type called fields.
A unital associative algebra over a commutative ring $R$ is itself a ring as well as an $R$-module. Some examples:
The algebra $R[x]$ of polynomials with coefficients in $R$.
The algebra $R[[x]]$ of formal power series with coefficients in $R$.
The set of all continuous real-valued functions defined on the real line forms a commutative $\mathbf{R}$-algebra. The operations are pointwise addition and multiplication of functions.
Let $X$ be a set, and let $R$ be a ring. Then the set of all functions from $X$ to $R$ forms a ring, which is commutative if $R$ is commutative.
The ring of quadratic integers, the integral closure of $\mathbf{Z}$ in a quadratic extension of $\mathbf{Q}$. It is a subring of the ring of all algebraic integers.
The ring of profinite integers $\widehat{\mathbf{Z}}$, the (infinite) product of the rings of $p$-adic integers $\mathbf{Z}_p$ over all prime numbers $p$.
The Hecke ring, the ring generated by Hecke operators.
If $X$ is a set, then the power set of $X$ becomes a ring if we define addition to be the symmetric difference of sets and multiplication to be intersection. This is an example of a Boolean ring.
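This can be verified exhaustively on a small example (an illustrative sketch; the three-element universe is arbitrary):

    from itertools import chain, combinations, product

    universe = [0, 1, 2]
    subsets = [frozenset(c) for c in chain.from_iterable(
        combinations(universe, r) for r in range(len(universe) + 1))]

    def add(a, b):
        return a ^ b  # symmetric difference

    def mul(a, b):
        return a & b  # intersection

    # Distributivity holds, and every element is multiplicatively idempotent,
    # which is the defining property of a Boolean ring.
    for a, b, c in product(subsets, repeat=3):
        assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
    for a in subsets:
        assert mul(a, a) == a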
Noncommutative rings
For any ring $R$ and any natural number $n$, the set of all square $n$-by-$n$ matrices with entries from $R$ forms a ring with matrix addition and matrix multiplication as operations. For $n = 1$, this matrix ring is isomorphic to $R$ itself. For $n > 1$ (and $R$ not the zero ring), this matrix ring is noncommutative.
If $A$ is an abelian group, then the endomorphisms of $A$ form a ring, the endomorphism ring $\operatorname{End}(A)$ of $A$. The operations in this ring are addition and composition of endomorphisms. More generally, if $V$ is a left module over a ring $R$, then the set of all $R$-linear maps forms a ring, also called the endomorphism ring and denoted by $\operatorname{End}_R(V)$.
The endomorphism ring of an elliptic curve. It is a commutative ring if the elliptic curve is defined over a field of characteristic zero.
If $G$ is a group and $R$ is a ring, the group ring $R[G]$ of $G$ over $R$ is a free module over $R$ having $G$ as basis. Multiplication is defined by the rules that the elements of $G$ commute with the elements of $R$ and multiply together as they do in the group $G$.
The ring of differential operators (depending on the context). In fact, many rings that appear in analysis are noncommutative. For example, most Banach algebras are noncommutative.
Non-rings
The set of natural numbers $\mathbf{N}$ with the usual operations is not a ring, since $(\mathbf{N}, +)$ is not even a group (not all the elements are invertible with respect to addition – for instance, there is no natural number which can be added to $3$ to get $0$ as a result). There is a natural way to enlarge it to a ring, by including negative numbers to produce the ring of integers $\mathbf{Z}$. The natural numbers (including $0$) form an algebraic structure known as a semiring (which has all of the axioms of a ring excluding that of an additive inverse).
Let $R$ be the set of all continuous functions on the real line that vanish outside a bounded interval that depends on the function, with addition as usual but with multiplication defined as convolution:
$(f * g)(x) = \int_{-\infty}^{\infty} f(y) g(x - y) \, dy.$
Then $R$ is a rng, but not a ring: the Dirac delta function has the property of a multiplicative identity, but it is not a function and hence is not an element of $R$.
Basic concepts
Products and powers
For each nonnegative integer $n$, given a sequence $(a_1, \ldots, a_n)$ of elements of $R$, one can define the product $P_n = \prod_{i=1}^{n} a_i$ recursively: let $P_0 = 1$ and let $P_m = P_{m-1} a_m$ for $1 \le m \le n$.
As a special case, one can define nonnegative integer powers of an element $a$ of a ring: $a^0 = 1$ and $a^n = a^{n-1} a$ for $n \ge 1$. Then $a^{m+n} = a^m a^n$ for all $m, n \ge 0$.
Elements in a ring
A left zero divisor of a ring $R$ is an element $a$ in the ring such that there exists a nonzero element $b$ of $R$ such that $ab = 0$. A right zero divisor is defined similarly.
A nilpotent element is an element $a$ such that $a^n = 0$ for some $n > 0$. One example of a nilpotent element is a nilpotent matrix. A nilpotent element in a nonzero ring is necessarily a zero divisor.
An idempotent is an element $e$ such that $e^2 = e$. One example of an idempotent element is a projection in linear algebra.
A unit is an element $a$ having a multiplicative inverse; in this case the inverse is unique, and is denoted by $a^{-1}$. The set of units of a ring is a group under ring multiplication; this group is denoted by $R^\times$ or $R^*$ or $U(R)$. For example, if $R$ is the ring of all square matrices of size $n$ over a field, then $R^\times$ consists of the set of all invertible matrices of size $n$, and is called the general linear group.
Subring
A subset $S$ of $R$ is called a subring if any one of the following equivalent conditions holds:
the addition and multiplication of $R$ restrict to give operations $S \times S \to S$ making $S$ a ring with the same multiplicative identity as $R$.
$1 \in S$; and for all $x, y$ in $S$, the elements $xy$, $x + y$, and $-x$ are in $S$.
$S$ can be equipped with operations making it a ring such that the inclusion map $S \to R$ is a ring homomorphism.
For example, the ring Z of integers is a subring of the field of real numbers and also a subring of the ring of polynomials Z[x] (in both cases, Z contains 1, which is the multiplicative identity of the larger rings). On the other hand, the subset of even integers 2Z does not contain the identity element 1 and thus does not qualify as a subring of Z; one could call 2Z a subrng, however.
An intersection of subrings is a subring. Given a subset E of R, the smallest subring of R containing E is the intersection of all subrings of R containing E, and it is called the subring generated by E.
For a ring R, the smallest subring of R is called the characteristic subring of R. It can be generated through addition of copies of 1 and −1. It is possible that n · 1 = 1 + 1 + … + 1 (n times) can be zero. If n is the smallest positive integer such that this occurs, then n is called the characteristic of R. In some rings, n · 1 is never zero for any positive integer n, and those rings are said to have characteristic zero.
Given a ring R, let Z(R) denote the set of all elements x in R such that x commutes with every element in R: xy = yx for any y in R. Then Z(R) is a subring of R, called the center of R. More generally, given a subset X of R, let S be the set of all elements in R that commute with every element in X. Then S is a subring of R, called the centralizer (or commutant) of X. The center is the centralizer of the entire ring R. Elements or subsets of the center are said to be central in R; they (each individually) generate a subring of the center.
Ideal
Let R be a ring. A left ideal of R is a nonempty subset I of R such that for any x, y in I and r in R, the elements x + y and rx are in I. If RI denotes the R-span of I, that is, the set of finite sums r_1 x_1 + ⋯ + r_n x_n with r_i ∈ R and x_i ∈ I,
then I is a left ideal if RI ⊆ I. Similarly, a right ideal is a subset I such that IR ⊆ I. A subset I is said to be a two-sided ideal or simply ideal if it is both a left ideal and right ideal. A one-sided or two-sided ideal is then an additive subgroup of R. If E is a subset of R, then RE is a left ideal, called the left ideal generated by E; it is the smallest left ideal containing E. Similarly, one can consider the right ideal or the two-sided ideal generated by a subset E of R.
If x is in R, then Rx and xR are left ideals and right ideals, respectively; they are called the principal left ideals and right ideals generated by x. The principal ideal RxR is written as (x). For example, the set of all positive and negative multiples of 2 along with 0 form an ideal of the integers, and this ideal is generated by the integer 2. In fact, every ideal of the ring of integers is principal.
Like a group, a ring is said to be simple if it is nonzero and it has no proper nonzero two-sided ideals. A commutative simple ring is precisely a field.
Rings are often studied with special conditions set upon their ideals. For example, a ring in which there is no strictly increasing infinite chain of left ideals is called a left Noetherian ring. A ring in which there is no strictly decreasing infinite chain of left ideals is called a left Artinian ring. It is a somewhat surprising fact that a left Artinian ring is left Noetherian (the Hopkins–Levitzki theorem). The integers, however, form a Noetherian ring which is not Artinian.
For commutative rings, the ideals generalize the classical notion of divisibility and decomposition of an integer into prime numbers in algebra. A proper ideal P of R is called a prime ideal if for any elements x, y ∈ R we have that xy ∈ P implies either x ∈ P or y ∈ P. Equivalently, P is prime if for any ideals I, J we have that IJ ⊆ P implies either I ⊆ P or J ⊆ P. This latter formulation illustrates the idea of ideals as generalizations of elements.
Homomorphism
A homomorphism from a ring R to a ring S is a function f from R to S that preserves the ring operations; namely, such that, for all a, b in R the following identities hold: f(a + b) = f(a) + f(b), f(ab) = f(a)f(b), and f(1_R) = 1_S.
If one is working with rngs, then the third condition is dropped.
A ring homomorphism f is said to be an isomorphism if there exists an inverse homomorphism to f (that is, a ring homomorphism that is an inverse function), or equivalently if it is bijective.
Examples:
The function that maps each integer x to its remainder modulo 4 (a number in {0, 1, 2, 3}) is a homomorphism from the ring Z to the quotient ring Z/4Z ("quotient ring" is defined below); a short computational check follows after these examples.
If u is a unit element in a ring R, then x ↦ uxu^{−1} is a ring homomorphism R → R, called an inner automorphism of R.
Let R be a commutative ring of prime characteristic p. Then x ↦ x^p is a ring endomorphism of R called the Frobenius homomorphism (also checked in the sketch after these examples).
The Galois group of a field extension L/K is the set of all automorphisms of L whose restrictions to K are the identity.
For any ring R, there are a unique ring homomorphism Z → R and a unique ring homomorphism R → 0.
An epimorphism (that is, right-cancelable morphism) of rings need not be surjective. For example, the unique map Z → Q is an epimorphism.
An algebra homomorphism from a k-algebra to the endomorphism algebra of a vector space over k is called a representation of the algebra.
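The first and third examples above lend themselves to a quick computational check; the following sketch (my own, assuming nothing beyond the definitions) verifies that reduction modulo 4 preserves both operations and that the Frobenius map respects addition in characteristic 5:

```python
# Reduction modulo 4: a ring homomorphism from Z onto Z/4Z.
f = lambda x: x % 4
for a in range(-20, 20):
    for b in range(-20, 20):
        assert f(a + b) == (f(a) + f(b)) % 4
        assert f(a * b) == (f(a) * f(b)) % 4

# Frobenius endomorphism x -> x^p of Z/pZ for p = 5: the "freshman's
# dream" (a + b)^p = a^p + b^p holds because p divides the binomial
# coefficients C(p, k) for 0 < k < p.
p = 5
frob = lambda x: pow(x, p, p)
for a in range(p):
    for b in range(p):
        assert frob((a + b) % p) == (frob(a) + frob(b)) % p
```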
Given a ring homomorphism f : R → S, the set of all elements mapped to 0 by f is called the kernel of f. The kernel is a two-sided ideal of R. The image of f, on the other hand, is not always an ideal, but it is always a subring of S.
To give a ring homomorphism from a commutative ring R to a ring A with image contained in the center of A is the same as to give a structure of an algebra over R to A (which in particular gives a structure of an R-module).
Quotient ring
The notion of quotient ring is analogous to the notion of a quotient group. Given a ring R and a two-sided ideal I of R, view I as a subgroup of (R, +); then the quotient ring R/I is the set of cosets of I together with the operations (a + I) + (b + I) = (a + b) + I and (a + I)(b + I) = ab + I
for all a, b in R. The ring R/I is also called a factor ring.
As with a quotient group, there is a canonical homomorphism p : R → R/I, given by x ↦ x + I. It is surjective and satisfies the following universal property:
If f : R → S is a ring homomorphism such that f(I) = 0, then there is a unique homomorphism f̄ : R/I → S such that f = f̄ ∘ p.
For any ring homomorphism f : R → S, invoking the universal property with I = ker f produces a homomorphism f̄ : R/ker f → S that gives an isomorphism from R/ker f to the image of f.
Module
The concept of a module over a ring generalizes the concept of a vector space (over a field) by generalizing from multiplication of vectors with elements of a field (scalar multiplication) to multiplication with elements of a ring. More precisely, given a ring R, an R-module M is an abelian group equipped with an operation R × M → M (associating an element of M to every pair of an element of R and an element of M) that satisfies certain axioms. This operation is commonly denoted by juxtaposition and called multiplication. The axioms of modules are the following: for all a, b in R and all x, y in M,
M is an abelian group under addition, and a(x + y) = ax + ay, (a + b)x = ax + bx, 1x = x, and (ab)x = a(bx).
When the ring is noncommutative these axioms define left modules; right modules are defined similarly by writing xa instead of ax. This is not only a change of notation, as the last axiom of right modules (that is, x(ab) = (xa)b) becomes (ab)x = b(ax), if left multiplication (by ring elements) is used for a right module.
Basic examples of modules are ideals, including the ring itself.
Although similarly defined, the theory of modules is much more complicated than that of vector spaces, mainly because, unlike vector spaces, modules are not characterized (up to an isomorphism) by a single invariant (the dimension of a vector space). In particular, not all modules have a basis.
The axioms of modules imply that (−1)x = −x, where the first minus denotes the additive inverse in the ring and the second minus the additive inverse in the module. Using this and denoting repeated addition by a multiplication by a positive integer allows identifying abelian groups with modules over the ring of integers.
Any ring homomorphism induces a structure of a module: if f : R → S is a ring homomorphism, then S is a left module over R by the multiplication rs = f(r)s. If R is commutative or if f(R) is contained in the center of S, the ring S is called an R-algebra. In particular, every ring is an algebra over the integers.
Constructions
Direct product
Let R and S be rings. Then the product R × S can be equipped with the following natural ring structure: (r_1, s_1) + (r_2, s_2) = (r_1 + r_2, s_1 + s_2) and (r_1, s_1)(r_2, s_2) = (r_1 r_2, s_1 s_2)
for all r_1, r_2 in R and s_1, s_2 in S. The ring R × S with the above operations of addition and multiplication and the multiplicative identity (1, 1) is called the direct product of R with S. The same construction also works for an arbitrary family of rings: if R_i are rings indexed by a set I, then ∏_{i ∈ I} R_i is a ring with componentwise addition and multiplication.
Let R be a commutative ring and I_1, …, I_n be ideals such that I_i + I_j = (1) whenever i ≠ j. Then the Chinese remainder theorem says there is a canonical ring isomorphism: R/(I_1 ∩ ⋯ ∩ I_n) ≅ R/I_1 × ⋯ × R/I_n, x ↦ (x mod I_1, …, x mod I_n).
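For instance, with R = Z and the coprime ideals (3) and (5) (my choice of example), the isomorphism Z/15Z ≅ Z/3Z × Z/5Z can be checked directly:

```python
# CRT for R = Z with ideals (3) and (5): x -> (x mod 3, x mod 5) is a
# ring isomorphism from Z/15Z onto Z/3Z x Z/5Z.
images = [(x % 3, x % 5) for x in range(15)]
assert len(set(images)) == 15    # injective, hence bijective onto the product

for x in range(15):
    for y in range(15):
        s, p = (x + y) % 15, (x * y) % 15
        assert (s % 3, s % 5) == ((x + y) % 3, (x + y) % 5)
        assert (p % 3, p % 5) == ((x % 3) * (y % 3) % 3, (x % 5) * (y % 5) % 5)
```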
A "finite" direct product may also be viewed as a direct sum of ideals. Namely, let be rings, the inclusions with the images (in particular are rings though not subrings). Then are ideals of and
as a direct sum of abelian groups (because for abelian groups finite products are the same as direct sums). Clearly the direct sum of such ideals also defines a product of rings that is isomorphic to . Equivalently, the above can be done through central idempotents. Assume that has the above decomposition. Then we can write
By the conditions on one has that are central idempotents and , (orthogonal). Again, one can reverse the construction. Namely, if one is given a partition of 1 in orthogonal central idempotents, then let which are two-sided ideals. If each is not a sum of orthogonal central idempotents, then their direct sum is isomorphic to .
An important application of an infinite direct product is the construction of a projective limit of rings (see below). Another application is a restricted product of a family of rings (cf. adele ring).
Polynomial ring
Given a symbol t (called a variable) and a commutative ring R, the set of polynomials R[t] = { a_n t^n + a_{n−1} t^{n−1} + ⋯ + a_1 t + a_0 : n ≥ 0, a_i ∈ R }
forms a commutative ring with the usual addition and multiplication, containing R as a subring. It is called the polynomial ring over R. More generally, the set R[t_1, …, t_n] of all polynomials in variables t_1, …, t_n forms a commutative ring, containing the R[t_i] as subrings.
If R is an integral domain, then R[t] is also an integral domain; its field of fractions is the field of rational functions. If R is a Noetherian ring, then R[t] is a Noetherian ring. If R is a unique factorization domain, then R[t] is a unique factorization domain. Finally, R is a field if and only if R[t] is a principal ideal domain.
Let R ⊆ S be commutative rings. Given an element x of S, one can consider the ring homomorphism R[t] → S, f ↦ f(x)
(that is, the substitution). If t = x and S = R[t], then f(t) = f. Because of this, the polynomial f is often also denoted by f(t). The image of the map f ↦ f(x) is denoted by R[x]; it is the same thing as the subring of S generated by R and x.
Example: k[t^2, t^3] denotes the image of the homomorphism k[x, y] → k[t], x ↦ t^2, y ↦ t^3.
In other words, it is the subalgebra of k[t] generated by t^2 and t^3.
Example: let f be a polynomial in one variable, that is, an element in a polynomial ring R[t]. Then f(x + h) is an element in R[x, h], and f(x + h) − f(x) is divisible by h in that ring. The result of substituting zero for h in (f(x + h) − f(x))/h is f′(x), the derivative of f at x.
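A sketch of this computation using the sympy library (my own illustration; the article itself prescribes no software), for the concrete choice f(t) = t^3:

```python
from sympy import symbols, div, expand

x, h = symbols('x h')
f = lambda t: t**3                         # a concrete choice of polynomial

diff_quotient, remainder = div(expand(f(x + h) - f(x)), h, h)
assert remainder == 0                      # f(x + h) - f(x) is divisible by h
print(diff_quotient.subs(h, 0))            # 3*x**2, the derivative f'(x)
```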
The substitution is a special case of the universal property of a polynomial ring. The property states: given a ring homomorphism φ : R → S and an element x in S, there exists a unique ring homomorphism φ̄ : R[t] → S such that φ̄(t) = x and φ̄ restricts to φ. For example, choosing a basis, a symmetric algebra satisfies the universal property and so is a polynomial ring.
To give an example, let S be the ring of all functions from R to itself; the addition and the multiplication are those of functions. Let x be the identity function. Each r in R defines a constant function, giving rise to the homomorphism R → S. The universal property says that this map extends uniquely to R[t] → S, f ↦ f̄
(t maps to x) where f̄ is the polynomial function defined by f. The resulting map is injective if and only if R is infinite.
Given a non-constant monic polynomial f in R[t], there exists a ring S containing R such that f is a product of linear factors in S[t].
Let k be an algebraically closed field. Hilbert's Nullstellensatz (theorem of zeros) states that there is a natural one-to-one correspondence between the set of all prime ideals in k[t_1, …, t_n] and the set of closed subvarieties of k^n. In particular, many local problems in algebraic geometry may be attacked through the study of the generators of an ideal in a polynomial ring (cf. Gröbner basis).
There are some other related constructions. A formal power series ring R[[t]] consists of formal power series ∑_{i≥0} a_i t^i with a_i ∈ R,
together with multiplication and addition that mimic those for convergent series. It contains R[t] as a subring. A formal power series ring does not have the universal property of a polynomial ring; a series may not converge after a substitution. The important advantage of a formal power series ring over a polynomial ring is that it is local (in fact, complete).
Matrix ring and endomorphism ring
Let R be a ring (not necessarily commutative). The set of all square matrices of size n with entries in R forms a ring with the entry-wise addition and the usual matrix multiplication. It is called the matrix ring and is denoted by M_n(R). Given a right R-module U, the set of all R-linear maps from U to itself forms a ring with addition that is of functions and multiplication that is of composition of functions; it is called the endomorphism ring of U and is denoted by End_R(U).
As in linear algebra, a matrix ring may be canonically interpreted as an endomorphism ring: End_R(R^n) ≅ M_n(R). This is a special case of the following fact: if f : U^n → U^n is an R-linear map, then f may be written as a matrix with entries f_{ij} in S = End_R(U), resulting in the ring isomorphism End_R(U^n) ≅ M_n(S).
Any ring homomorphism R → S induces M_n(R) → M_n(S).
Schur's lemma says that if U is a simple right R-module, then End_R(U) is a division ring. If U is a direct sum of m copies of a simple R-module V, then End_R(U) ≅ M_m(End_R(V)).
The Artin–Wedderburn theorem states any semisimple ring (cf. below) is of this form.
A ring R and the matrix ring M_n(R) over it are Morita equivalent: the category of right modules of R is equivalent to the category of right modules over M_n(R). In particular, two-sided ideals in R correspond in one-to-one to two-sided ideals in M_n(R).
Limits and colimits of rings
Let R_1 ⊆ R_2 ⊆ ⋯ be a sequence of rings such that R_i is a subring of R_{i+1} for all i. Then the union (or filtered colimit) of the R_i is the ring ∪_i R_i defined as follows: it is the disjoint union of all the R_i modulo the equivalence relation x ∼ y if and only if x = y in R_i for sufficiently large i.
Examples of colimits:
A polynomial ring in infinitely many variables: R[t_1, t_2, …] = ∪_n R[t_1, …, t_n].
The algebraic closure of finite fields of the same characteristic: F̄_p = ∪_n F_{p^n}.
The field of formal Laurent series over a field k: k((t)) = ∪_n t^{−n} k[[t]] (it is the field of fractions of the formal power series ring k[[t]]).
The function field of an algebraic variety over a field k is lim→ k[U], where the limit runs over all the coordinate rings k[U] of nonempty open subsets U (more succinctly, it is the stalk of the structure sheaf at the generic point).
Any commutative ring is the colimit of finitely generated subrings.
A projective limit (or a filtered limit) of rings is defined as follows. Suppose we are given a family of rings R_i, with i running over positive integers, say, and ring homomorphisms R_j → R_i for j ≥ i, such that R_i → R_i are all the identities and R_k → R_j → R_i is R_k → R_i whenever k ≥ j ≥ i. Then lim← R_i is the subring of ∏ R_i consisting of those (x_n) such that x_j maps to x_i under R_j → R_i for j ≥ i.
For an example of a projective limit, see the section on completion below.
Localization
The localization generalizes the construction of the field of fractions of an integral domain to an arbitrary ring and modules. Given a (not necessarily commutative) ring R and a subset S of R, there exists a ring R[S^{−1}] together with the ring homomorphism R → R[S^{−1}] that "inverts" S; that is, the homomorphism maps elements in S to unit elements in R[S^{−1}] and, moreover, any ring homomorphism from R that "inverts" S uniquely factors through R[S^{−1}]. The ring R[S^{−1}] is called the localization of R with respect to S. For example, if R is a commutative ring and f an element in R, then the localization R[f^{−1}] consists of elements of the form r/f^n with r ∈ R and n ≥ 0 (to be precise, R[f^{−1}] = R[t]/(tf − 1)).
The localization is frequently applied to a commutative ring R with respect to the complement of a prime ideal (or a union of prime ideals) in R. In the case S = R − p, one often writes R_p for R[S^{−1}]; R_p is then a local ring with the maximal ideal pR_p. This is the reason for the terminology "localization". The field of fractions of an integral domain R is the localization of R at the prime ideal zero. If p is a prime ideal of a commutative ring R, then the field of fractions of R/p is the same as the residue field of the local ring R_p and is denoted by k(p).
If M is a left R-module, then the localization of M with respect to S is given by a change of rings: M[S^{−1}] = R[S^{−1}] ⊗_R M.
The most important properties of localization are the following: when R is a commutative ring and S a multiplicatively closed subset,
p ↦ p[S^{−1}] is a bijection between the set of all prime ideals in R disjoint from S and the set of all prime ideals in R[S^{−1}];
R[S^{−1}] = lim→ R[f^{−1}], with f running over elements in S with partial ordering given by divisibility.
The localization is exact: 0 → M′[S^{−1}] → M[S^{−1}] → M″[S^{−1}] → 0 is exact over R[S^{−1}] whenever 0 → M′ → M → M″ → 0 is exact over R.
Conversely, if 0 → M′_m → M_m → M″_m → 0 is exact for any maximal ideal m, then 0 → M′ → M → M″ → 0 is exact.
A remark: localization is no help in proving a global existence. One instance of this is that if two modules are isomorphic at all prime ideals, it does not follow that they are isomorphic. (One way to explain this is that the localization allows one to view a module as a sheaf over prime ideals and a sheaf is inherently a local notion.)
In category theory, a localization of a category amounts to making some morphisms isomorphisms. An element f in a commutative ring R may be thought of as an endomorphism of any R-module. Thus, categorically, a localization of R with respect to a subset S of R is a functor from the category of R-modules to itself that sends elements of S viewed as endomorphisms to automorphisms and is universal with respect to this property. (Of course, R then maps to R[S^{−1}] and R-modules map to R[S^{−1}]-modules.)
Completion
Let R be a commutative ring, and let I be an ideal of R.
The completion of R at I is the projective limit R̂ = lim← R/I^n; it is a commutative ring. The canonical homomorphisms from R to the quotients R/I^n induce a homomorphism R → R̂. The latter homomorphism is injective if R is a Noetherian integral domain and I is a proper ideal, or if R is a Noetherian local ring with maximal ideal I, by Krull's intersection theorem. The construction is especially useful when I is a maximal ideal.
The basic example is the completion of Z at the principal ideal (p) generated by a prime number p; it is called the ring of p-adic integers and is denoted Z_p. The completion can in this case be constructed also from the p-adic absolute value on Q. The p-adic absolute value on Q is a map x ↦ |x|_p from Q to R given by |n|_p = p^{−v_p(n)}, where v_p(n) denotes the exponent of p in the prime factorization of a nonzero integer n into prime numbers (we also put |0|_p = 0 and |m/n|_p = |m|_p / |n|_p). It defines a distance function on Q, and the completion of Q as a metric space is denoted by Q_p. It is again a field since the field operations extend to the completion. The subring of Q_p consisting of elements x with |x|_p ≤ 1 is isomorphic to Z_p.
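A direct transcription of the p-adic valuation and absolute value for integers (an illustrative sketch; the function names are mine):

```python
def v_p(n, p):
    """Exponent of the prime p in the factorization of the nonzero integer n."""
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def abs_p(n, p):
    """The p-adic absolute value of an integer, with |0|_p = 0."""
    return 0 if n == 0 else p ** -v_p(n, p)

assert (v_p(360, 2), v_p(360, 3), v_p(360, 5)) == (3, 2, 1)   # 360 = 2^3 * 3^2 * 5
assert abs_p(360, 2) == 2 ** -3
# every ordinary integer x satisfies |x|_p <= 1, consistent with Z
# sitting inside the subring of Q_p described above
```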
Similarly, the formal power series ring R[[t]] is the completion of R[t] at (t) (see also Hensel's lemma).
A complete ring has a much simpler structure than a commutative ring. This is due to the Cohen structure theorem, which says, roughly, that a complete local ring tends to look like a formal power series ring or a quotient of it. On the other hand, the interaction between the integral closure and completion has been among the most important aspects that distinguish modern commutative ring theory from the classical one developed by the likes of Noether. Pathological examples found by Nagata led to the reexamination of the roles of Noetherian rings and motivated, among other things, the definition of excellent ring.
Rings with generators and relations
The most general way to construct a ring is by specifying generators and relations. Let F be a free ring (that is, free algebra over the integers) with the set X of symbols, that is, F consists of polynomials with integral coefficients in noncommuting variables that are elements of X. A free ring satisfies the universal property: any function from the set X to a ring R factors through F so that F → R is the unique ring homomorphism. Just as in the group case, every ring can be represented as a quotient of a free ring.
Now, we can impose relations among symbols in X by taking a quotient. Explicitly, if E is a subset of F, then the quotient ring of F by the ideal generated by E is called the ring with generators X and relations E. If we used a ring, say, A as a base ring instead of Z, then the resulting ring will be over A. For example, if E = { xy − yx : x, y ∈ X }, then the resulting ring will be the usual polynomial ring with coefficients in A in variables that are elements of X (it is also the same thing as the symmetric algebra over A with symbols X).
In category-theoretic terms, the formation of the free ring on a set is the left adjoint functor of the forgetful functor from the category of rings to Set (and it is often called the free ring functor).
Let A, B be algebras over a commutative ring R. Then the tensor product of R-modules A ⊗_R B is an R-algebra with multiplication characterized by (x ⊗ u)(y ⊗ v) = xy ⊗ uv.
Special kinds of rings
Domains
A nonzero ring with no nonzero zero-divisors is called a domain. A commutative domain is called an integral domain. The most important integral domains are principal ideal domains, PIDs for short, and fields. A principal ideal domain is an integral domain in which every ideal is principal. An important class of integral domains that contain a PID is a unique factorization domain (UFD), an integral domain in which every nonunit element is a product of prime elements (an element is prime if it generates a prime ideal.) The fundamental question in algebraic number theory is on the extent to which the ring of (generalized) integers in a number field, where an "ideal" admits prime factorization, fails to be a PID.
Among theorems concerning a PID, the most important one is the structure theorem for finitely generated modules over a principal ideal domain. The theorem may be illustrated by the following application to linear algebra. Let V be a finite-dimensional vector space over a field k and f : V → V a linear map with minimal polynomial q. Then, since k[t] is a unique factorization domain, q factors into powers of distinct irreducible polynomials (that is, prime elements): q = p_1^{e_1} ⋯ p_s^{e_s}.
Letting t ⋅ v = f(v), we make V a k[t]-module. The structure theorem then says V is a direct sum of cyclic modules, each of which is isomorphic to a module of the form k[t]/(p_i^{k_j}). Now, if p_i(t) = t − λ_i, then such a cyclic module (for p_i) has a basis in which the restriction of f is represented by a Jordan matrix. Thus, if, say, k is algebraically closed, then all the p_i are of the form t − λ_i and the above decomposition corresponds to the Jordan canonical form of f.
In algebraic geometry, UFDs arise because of smoothness. More precisely, a point in a variety (over a perfect field) is smooth if the local ring at the point is a regular local ring. A regular local ring is a UFD.
The following is a chain of class inclusions that describes the relationship between rings, domains and fields: rngs ⊃ rings ⊃ commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ Euclidean domains ⊃ fields ⊃ algebraically closed fields.
Division ring
A division ring is a ring such that every non-zero element is a unit. A commutative division ring is a field. A prominent example of a division ring that is not a field is the ring of quaternions. Any centralizer in a division ring is also a division ring. In particular, the center of a division ring is a field. It turned out that every finite domain (in particular every finite division ring) is a field, and in particular commutative (Wedderburn's little theorem).
Every module over a division ring is a free module (has a basis); consequently, much of linear algebra can be carried out over a division ring instead of a field.
The study of conjugacy classes figures prominently in the classical theory of division rings; see, for example, the Cartan–Brauer–Hua theorem.
A cyclic algebra, introduced by L. E. Dickson, is a generalization of a quaternion algebra.
Semisimple rings
A semisimple module is a direct sum of simple modules. A semisimple ring is a ring that is semisimple as a left module (or right module) over itself.
Examples
A division ring is semisimple (and simple).
For any division ring D and positive integer n, the matrix ring M_n(D) is semisimple (and simple).
For a field k and finite group G, the group ring kG is semisimple if and only if the characteristic of k does not divide the order of G (Maschke's theorem).
Clifford algebras are semisimple.
The Weyl algebra over a field is a simple ring, but it is not semisimple. The same holds for a ring of differential operators in many variables.
Properties
Any module over a semisimple ring is semisimple. (Proof: A free module over a semisimple ring is semisimple and any module is a quotient of a free module.)
For a ring R, the following are equivalent:
R is semisimple.
R is artinian and semiprimitive.
R is a finite direct product ∏ M_{n_i}(D_i), where each n_i is a positive integer and each D_i is a division ring (Artin–Wedderburn theorem).
Semisimplicity is closely related to separability. A unital associative algebra A over a field k is said to be separable if the base extension A ⊗_k F is semisimple for every field extension F/k. If A happens to be a field, then this is equivalent to the usual definition in field theory (cf. separable extension).
Central simple algebra and Brauer group
For a field k, a k-algebra is central if its center is k and is simple if it is a simple ring. Since the center of a simple k-algebra is a field, any simple k-algebra is a central simple algebra over its center. In this section, a central simple algebra is assumed to have finite dimension. Also, we mostly fix the base field; thus, an algebra refers to a k-algebra. The matrix ring of size n over a ring R will be denoted by R_n.
The Skolem–Noether theorem states any automorphism of a central simple algebra is inner.
Two central simple algebras A and B are said to be similar if there are integers n and m such that A ⊗_k k_n ≈ B ⊗_k k_m. Since k_n ⊗_k k_m ≅ k_{nm}, the similarity is an equivalence relation. The similarity classes [A] with the multiplication [A][B] = [A ⊗_k B] form an abelian group called the Brauer group of k, denoted by Br(k). By the Artin–Wedderburn theorem, a central simple algebra is the matrix ring of a division ring; thus, each similarity class is represented by a unique division ring.
For example, Br(k) is trivial if k is a finite field or an algebraically closed field (more generally a quasi-algebraically closed field; cf. Tsen's theorem). Br(R) has order 2 (a special case of the theorem of Frobenius). Finally, if k is a nonarchimedean local field (for example, Q_p), then Br(k) = Q/Z through the invariant map.
Now, if F is a field extension of k, then the base extension A ↦ A ⊗_k F induces Br(k) → Br(F). Its kernel is denoted by Br(F/k). It consists of [A] such that A ⊗_k F is a matrix ring over F (that is, A is split by F). If the extension is finite and Galois, then Br(F/k) is canonically isomorphic to H^2(Gal(F/k), F^*).
Azumaya algebras generalize the notion of central simple algebras to a commutative local ring.
Valuation ring
If K is a field, a valuation v is a group homomorphism from the multiplicative group K^* to a totally ordered abelian group G such that, for any f, g in K with f + g nonzero, v(f + g) ≥ min{v(f), v(g)}. The valuation ring of v is the subring of K consisting of zero and all nonzero f such that v(f) ≥ 0.
Examples:
The field of formal Laurent series k((t)) over a field k comes with the valuation v such that v(f) is the least degree of a nonzero term in f; the valuation ring of v is the formal power series ring k[[t]].
More generally, given a field k and a totally ordered abelian group G, let k((G)) be the set of all functions from G to k whose supports (the sets of points at which the functions are nonzero) are well ordered. It is a field with the multiplication given by convolution: (f * g)(t) = ∑_{s ∈ G} f(s) g(t − s). It also comes with the valuation v such that v(f) is the least element in the support of f. The subring consisting of elements with finite support is called the group ring of G (which makes sense even if G is not commutative). If G is the ring of integers, then we recover the previous example (by identifying f with the series whose nth coefficient is f(n)).
Rings with extra structure
A ring may be viewed as an abelian group (by using the addition operation), with extra structure: namely, ring multiplication. In the same way, there are other mathematical objects which may be considered as rings with extra structure. For example:
An associative algebra is a ring that is also a vector space over a field k such that the scalar multiplication is compatible with the ring multiplication. For instance, the set of n-by-n matrices over the real field R has dimension n^2 as a real vector space.
A ring R is a topological ring if its set of elements R is given a topology which makes the addition map (+ : R × R → R) and the multiplication map (⋅ : R × R → R) both continuous as maps between topological spaces (where R × R inherits the product topology or any other product in the category). For example, n-by-n matrices over the real numbers could be given either the Euclidean topology or the Zariski topology, and in either case one would obtain a topological ring.
A λ-ring is a commutative ring R together with operations λ^n : R → R that are like nth exterior powers: λ^n(x + y) = ∑_{i+j=n} λ^i(x) λ^j(y).
For example, Z is a λ-ring with λ^n(x) = C(x, n), the binomial coefficients. The notion plays a central role in the algebraic approach to the Riemann–Roch theorem.
A totally ordered ring is a ring with a total ordering that is compatible with ring operations.
Some examples of the ubiquity of rings
Many different kinds of mathematical objects can be fruitfully analyzed in terms of some associated ring.
Cohomology ring of a topological space
To any topological space X one can associate its integral cohomology ring H^*(X, Z) = ⊕_{i ≥ 0} H^i(X, Z),
a graded ring. There are also homology groups H_i(X, Z) of a space, and indeed these were defined first, as a useful tool for distinguishing between certain pairs of topological spaces, like the spheres and tori, for which the methods of point-set topology are not well-suited. Cohomology groups were later defined in terms of homology groups in a way which is roughly analogous to the dual of a vector space. To know each individual integral homology group is essentially the same as knowing each individual integral cohomology group, because of the universal coefficient theorem. However, the advantage of the cohomology groups is that there is a natural product, which is analogous to the observation that one can multiply pointwise a k-multilinear form and an l-multilinear form to get a (k + l)-multilinear form.
The ring structure in cohomology provides the foundation for characteristic classes of fiber bundles, intersection theory on manifolds and algebraic varieties, Schubert calculus and much more.
Burnside ring of a group
To any group is associated its Burnside ring which uses a ring to describe the various ways the group can act on a finite set. The Burnside ring's additive group is the free abelian group whose basis is the set of transitive actions of the group and whose addition is the disjoint union of the action. Expressing an action in terms of the basis is decomposing an action into its transitive constituents. The multiplication is easily expressed in terms of the representation ring: the multiplication in the Burnside ring is formed by writing the tensor product of two permutation modules as a permutation module. The ring structure allows a formal way of subtracting one action from another. Since the Burnside ring is contained as a finite index subring of the representation ring, one can pass easily from one to the other by extending the coefficients from integers to the rational numbers.
Representation ring of a group ring
To any group ring or Hopf algebra is associated its representation ring or "Green ring". The representation ring's additive group is the free abelian group whose basis are the indecomposable modules and whose addition corresponds to the direct sum. Expressing a module in terms of the basis is finding an indecomposable decomposition of the module. The multiplication is the tensor product. When the algebra is semisimple, the representation ring is just the character ring from character theory, which is more or less the Grothendieck group given a ring structure.
Function field of an irreducible algebraic variety
To any irreducible algebraic variety is associated its function field. The points of an algebraic variety correspond to valuation rings contained in the function field and containing the coordinate ring. The study of algebraic geometry makes heavy use of commutative algebra to study geometric concepts in terms of ring-theoretic properties. Birational geometry studies maps between the subrings of the function field.
Face ring of a simplicial complex
Every simplicial complex has an associated face ring, also called its Stanley–Reisner ring. This ring reflects many of the combinatorial properties of the simplicial complex, so it is of particular interest in algebraic combinatorics. In particular, the algebraic geometry of the Stanley–Reisner ring was used to characterize the numbers of faces in each dimension of simplicial polytopes.
Category-theoretic description
Every ring can be thought of as a monoid in Ab, the category of abelian groups (thought of as a monoidal category under the tensor product of Z-modules). The monoid action of a ring R on an abelian group is simply an R-module. Essentially, an R-module is a generalization of the notion of a vector space; where rather than a vector space over a field, one has a "vector space over a ring".
Let (A, +) be an abelian group and let End(A) be its endomorphism ring (see above). Note that, essentially, End(A) is the set of all morphisms of A, where if f is in End(A), and g is in End(A), the following rules may be used to compute f + g and f ⋅ g: (f + g)(x) = f(x) + g(x) and (f ⋅ g)(x) = f(g(x)),
where + as in f(x) + g(x) is addition in A, and function composition is denoted from right to left. Therefore, associated to any abelian group is a ring. Conversely, given any ring (R, +, ⋅), (R, +) is an abelian group. Furthermore, for every r in R, right (or left) multiplication by r gives rise to a morphism of (R, +), by right (or left) distributivity. Let A = (R, +). Consider those endomorphisms of A that "factor through" right (or left) multiplication of R. In other words, let End_R(A) be the set of all morphisms m of A having the property that m(r ⋅ x) = r ⋅ m(x). It was seen that every r in R gives rise to a morphism of A: right multiplication by r. It is in fact true that this association of any element of R to a morphism of A, as a function from R to End_R(A), is an isomorphism of rings. In this sense, therefore, any ring can be viewed as the endomorphism ring of some abelian X-group (by X-group, it is meant a group with X being its set of operators). In essence, the most general form of a ring is the endomorphism group of some abelian X-group.
Any ring can be seen as a preadditive category with a single object. It is therefore natural to consider arbitrary preadditive categories to be generalizations of rings. And indeed, many definitions and theorems originally given for rings can be translated to this more general context. Additive functors between preadditive categories generalize the concept of ring homomorphism, and ideals in additive categories can be defined as sets of morphisms closed under addition and under composition with arbitrary morphisms.
Generalization
Algebraists have defined structures more general than rings by weakening or dropping some of ring axioms.
Rng
A rng is the same as a ring, except that the existence of a multiplicative identity is not assumed.
Nonassociative ring
A nonassociative ring is an algebraic structure that satisfies all of the ring axioms except the associative property and the existence of a multiplicative identity. A notable example is a Lie algebra. There exists some structure theory for such algebras that generalizes the analogous results for Lie algebras and associative algebras.
Semiring
A semiring (sometimes rig) is obtained by weakening the assumption that (R, +) is an abelian group to the assumption that (R, +) is a commutative monoid, and adding the axiom that 0 ⋅ a = a ⋅ 0 = 0 for all a in R (since it no longer follows from the other axioms).
Examples:
the non-negative integers with ordinary addition and multiplication;
the tropical semiring.
Other ring-like objects
Ring object in a category
Let C be a category with finite products. Let pt denote a terminal object of C (an empty product). A ring object in C is an object R equipped with morphisms R × R → R (addition), R × R → R (multiplication), pt → R (additive identity), R → R (additive inverse), and pt → R (multiplicative identity) satisfying the usual ring axioms. Equivalently, a ring object is an object R equipped with a factorization of its functor of points h_R = Hom(−, R) through the category of rings: C^op → Rings → Sets.
Ring scheme
In algebraic geometry, a ring scheme over a base scheme S is a ring object in the category of S-schemes. One example is the ring scheme W_n over Spec Z, which for any commutative ring A returns the ring W_n(A) of p-isotypic Witt vectors of length n over A.
Ring spectrum
In algebraic topology, a ring spectrum is a spectrum X together with a multiplication μ : X ∧ X → X and a unit map S → X from the sphere spectrum S, such that the ring axiom diagrams commute up to homotopy. In practice, it is common to define a ring spectrum as a monoid object in a good category of spectra such as the category of symmetric spectra.
See also
Algebra over a commutative ring
Categorical ring
Category of rings
Glossary of ring theory
Non-associative algebra
Ring of sets
Semiring
Spectrum of a ring
Simplicial commutative ring
Special types of rings:
Boolean ring
Dedekind ring
Differential ring
Exponential ring
Finite ring
Lie ring
Local ring
Noetherian and artinian rings
Ordered ring
Poisson ring
Reduced ring
Regular ring
Ring of periods
SBI ring
Valuation ring and discrete valuation ring
Notes
Citations
References
General references
Special references
Primary sources
Historical references
Bronshtein, I. N., and Semendyayev, K. A. (2004) Handbook of Mathematics, 4th ed. New York: Springer-Verlag.
History of ring theory at the MacTutor Archive
Faith, Carl (1999) Rings and Things and a Fine Array of Twentieth Century Associative Algebra. Mathematical Surveys and Monographs, 65. American Mathematical Society.
Itô, K. (ed.) (1986) "Rings." §368 in Encyclopedic Dictionary of Mathematics, 2nd ed., Vol. 2. Cambridge, MA: MIT Press.
Algebraic structures
Ring theory | Ring (mathematics) | [
"Mathematics"
] | 11,457 | [
"Mathematical structures",
"Mathematical objects",
"Ring theory",
"Fields of abstract algebra",
"Algebraic structures"
] |
48,405 | https://en.wikipedia.org/wiki/Caesar%20cipher | In cryptography, a Caesar cipher, also known as Caesar's cipher, the shift cipher, Caesar's code, or Caesar shift, is one of the simplest and most widely known encryption techniques. It is a type of substitution cipher in which each letter in the plaintext is replaced by a letter some fixed number of positions down the alphabet. For example, with a left shift of 3, would be replaced by , would become , and so on. The method is named after Julius Caesar, who used it in his private correspondence.
The encryption step performed by a Caesar cipher is often incorporated as part of more complex schemes, such as the Vigenère cipher, and still has modern application in the ROT13 system. As with all single-alphabet substitution ciphers, the Caesar cipher is easily broken and in modern practice offers essentially no communications security.
Example
The transformation can be represented by aligning two alphabets; the cipher alphabet is the plain alphabet rotated left or right by some number of positions. For instance, here is a Caesar cipher using a left rotation of three places, equivalent to a right shift of 23 (the shift parameter is used as the key):
When encrypting, a person looks up each letter of the message in the "plain" line and writes down the corresponding letter in the "cipher" line.
Plaintext: THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG
Ciphertext: QEB NRFZH YOLTK CLU GRJMP LSBO QEB IXWV ALD
Deciphering is done in reverse, with a right shift of 3.
The encryption can also be represented using modular arithmetic by first transforming the letters into numbers, according to the scheme A → 0, B → 1, ..., Z → 25. Encryption of a letter x by a shift n can be described mathematically as E_n(x) = (x + n) mod 26.
Decryption is performed similarly: D_n(x) = (x − n) mod 26.
(Here, "mod" refers to the modulo operation. The value x is in the range 0 to 25, but if x + n or x − n are not in this range then 26 should be added or subtracted.)
The replacement remains the same throughout the message, so the cipher is classed as a type of monoalphabetic substitution, as opposed to polyalphabetic substitution.
History and usage
The Caesar cipher is named after Julius Caesar, who, according to Suetonius, used it with a shift of three (A becoming D when encrypting, and D becoming A when decrypting) to protect messages of military significance. While Caesar's was the first recorded use of this scheme, other substitution ciphers are known to have been used earlier.
His nephew, Augustus, also used the cipher, but with a right shift of one, and it did not wrap around to the beginning of the alphabet; according to Suetonius, he wrote B for A, C for B, and so on.
Evidence exists that Julius Caesar also used more complicated systems, and one writer, Aulus Gellius, refers to a (now lost) treatise on his ciphers.
It is unknown how effective the Caesar cipher was at the time; there is no record at that time of any techniques for the solution of simple substitution ciphers. The earliest surviving records date to the 9th-century works of Al-Kindi in the Arab world with the discovery of frequency analysis.
A piece of text encrypted in a Hebrew version of the Caesar cipher is sometimes found on the back of Jewish mezuzah scrolls. When each letter is replaced with the letter before it in the Hebrew alphabet the text translates as "YHWH, our God, YHWH", a quotation from the main part of the scroll.
In the 19th century, the personal advertisements section in newspapers would sometimes be used to exchange messages encrypted using simple cipher schemes. David Kahn (1967) describes instances of lovers engaging in secret communications enciphered using the Caesar cipher in The Times. Even as late as 1915, the Caesar cipher was in use: the Russian army employed it as a replacement for more complicated ciphers which had proved to be too difficult for their troops to master; German and Austrian cryptanalysts had little difficulty in decrypting their messages.
Caesar ciphers can be found today in children's toys such as secret decoder rings. A Caesar shift of thirteen is also performed in the ROT13 algorithm, a simple method of obfuscating text widely found on Usenet and used to obscure text (such as joke punchlines and story spoilers), but not seriously used as a method of encryption.
The Vigenère cipher uses a Caesar cipher with a different shift at each position in the text; the value of the shift is defined using a repeating keyword. If the keyword is as long as the message, is chosen at random, never becomes known to anyone else, and is never reused, this is the one-time pad cipher, proven unbreakable. However the problems involved in using a random key as long as the message make the one-time pad difficult to use in practice. Keywords shorter than the message (e.g., "Complete Victory" used by the Confederacy during the American Civil War), introduce a cyclic pattern that might be detected with a statistically advanced version of frequency analysis.
In April 2006, fugitive Mafia boss Bernardo Provenzano was captured in Sicily partly because some of his messages, clumsily written in a variation of the Caesar cipher, were broken. Provenzano's cipher used numbers, so that "A" would be written as "4", "B" as "5", and so on.
In 2011, Rajib Karim was convicted in the United Kingdom of "terrorism offences" after using the Caesar cipher to communicate with Bangladeshi Islamic activists discussing plots to blow up British Airways planes or disrupt their IT networks. Although the parties had access to far better encryption techniques (Karim himself used PGP for data storage on computer disks), they chose to use their own scheme (implemented in Microsoft Excel), rejecting a more sophisticated code program called Mujahideen Secrets "because 'kaffirs', or non-believers, know about it, so it must be less secure".
Breaking the cipher
The Caesar cipher can be easily broken even in a ciphertext-only scenario. Since there are only a limited number of possible shifts (25 in English), an attacker can mount a brute force attack by deciphering the message, or part of it, using each possible shift. The correct decryption will be the one which makes sense as English text. For example, for the ciphertext "exxegoexsrgi", the candidate plaintext for shift four, "attackatonce", is the only one which makes sense as English text. Another type of brute force attack is to write out the alphabet beneath each letter of the ciphertext, starting at that letter. Again the correct decryption is the one which makes sense as English text. This technique is sometimes known as "completing the plain component".
Another approach is to match up the frequency distribution of the letters. By graphing the frequencies of letters in the ciphertext, and by knowing the expected distribution of those letters in the original language of the plaintext, a human can easily spot the value of the shift by looking at the displacement of particular features of the graph. This is known as frequency analysis. For example, in the English language the plaintext frequencies of the letters E, T (usually most frequent) and Q, Z (typically least frequent) are particularly distinctive. Computers can automate this process by assessing the similarity between the observed frequency distribution and the expected distribution. This can be achieved, for instance, through the utilization of the chi-squared statistic or by minimizing the sum of squared errors between the observed and known language distributions; a sketch of the chi-squared approach follows below.
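An illustrative chi-squared attack (the letter-frequency table is a rounded approximation, and the function and variable names are my own; the ciphertext reuses the earlier example):

```python
ENGLISH = [8.2, 1.5, 2.8, 4.3, 12.7, 2.2, 2.0, 6.1, 7.0, 0.15, 0.77, 4.0,
           2.4, 6.7, 7.5, 1.9, 0.095, 6.0, 6.3, 9.1, 2.8, 0.98, 2.4, 0.15,
           2.0, 0.074]  # approximate expected percentage for A..Z

def shift(text, n):
    return "".join(chr((ord(c) - 65 + n) % 26 + 65) if c.isalpha() else c
                   for c in text)

def chi_squared(text):
    letters = [c for c in text if c.isalpha()]
    n = len(letters)
    return sum((letters.count(chr(65 + i)) - n * e / 100) ** 2 / (n * e / 100)
               for i, e in enumerate(ENGLISH))

def crack(ciphertext):
    # decrypt with every candidate shift; the shift whose output looks
    # most like English (smallest chi-squared) is almost always correct
    return min(range(26), key=lambda n: chi_squared(shift(ciphertext, n)))

print(crack("QEB NRFZH YOLTK CLU GRJMP LSBO QEB IXWV ALD"))  # 3
```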
The unicity distance for the Caesar cipher is about 2, meaning that on average at least two characters of ciphertext are required to determine the key. In rare cases more text may be needed. For example, the words "river" and "arena" can be converted to each other with a Caesar shift, which means they can produce the same ciphertext with different shifts. However, in practice the key can almost certainly be found with at least 6 characters of ciphertext.
With the Caesar cipher, encrypting a text multiple times provides no additional security. This is because two encryptions of, say, shift A and shift B, will be equivalent to a single encryption with shift A + B. In mathematical terms, the set of encryption operations under each possible key forms a group under composition; see the quick check below.
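A quick illustrative check (my own) that composing two shifts equals a single shift by the sum modulo 26:

```python
# E_a followed by E_b is E_{(a+b) mod 26}: the shifts form a group.
shift = lambda t, n: "".join(chr((ord(c) - 65 + n) % 26 + 65) for c in t)
msg = "ATTACKATDAWN"
for a in range(26):
    for b in range(26):
        assert shift(shift(msg, a), b) == shift(msg, (a + b) % 26)
```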
See also
Scytale
Notes
Bibliography
Chris Savarese and Brian Hart, The Caesar Cipher, Trinity College, 1999
Further reading
External links
Classical ciphers
Group theory
Julius Caesar | Caesar cipher | [
"Mathematics"
] | 1,766 | [
"Group theory",
"Fields of abstract algebra"
] |
48,416 | https://en.wikipedia.org/wiki/Gottlob%20Frege | Friedrich Ludwig Gottlob Frege (; ; 8 November 1848 – 26 July 1925) was a German philosopher, logician, and mathematician. He was a mathematics professor at the University of Jena, and is understood by many to be the father of analytic philosophy, concentrating on the philosophy of language, logic, and mathematics. Though he was largely ignored during his lifetime, Giuseppe Peano (1858–1932), Bertrand Russell (1872–1970), and, to some extent, Ludwig Wittgenstein (1889–1951) introduced his work to later generations of philosophers. Frege is widely considered to be the greatest logician since Aristotle, and one of the most profound philosophers of mathematics ever.
His contributions include the development of modern logic in the Begriffsschrift and work in the foundations of mathematics. His book the Foundations of Arithmetic is the seminal text of the logicist project, and is cited by Michael Dummett as where to pinpoint the linguistic turn. His philosophical papers "On Sense and Reference" and "The Thought" are also widely cited. The former argues for two different types of meaning and descriptivism. In Foundations and "The Thought", Frege argues for Platonism against psychologism or formalism, concerning numbers and propositions respectively.
Life
Childhood (1848–69)
Frege was born in 1848 in Wismar, Mecklenburg-Schwerin (today part of Mecklenburg-Vorpommern). His father Carl (Karl) Alexander Frege (1809–1866) was the co-founder and headmaster of a girls' high school until his death. After Carl's death, the school was led by Frege's mother Auguste Wilhelmine Sophie Frege (née Bialloblotzky, 12 January 1815 – 14 October 1898); her mother was Auguste Amalia Maria Ballhorn, a descendant of Philipp Melanchthon and her father was Johann Heinrich Siegfried Bialloblotzky, a descendant of a Polish noble family who left Poland in the 17th century. Frege was a Lutheran.
In childhood, Frege encountered philosophies that would guide his future scientific career. For example, his father wrote a textbook on the German language for children aged 9–13, entitled Hülfsbuch zum Unterrichte in der deutschen Sprache für Kinder von 9 bis 13 Jahren (2nd ed., Wismar 1850; 3rd ed., Wismar and Ludwigslust: Hinstorff, 1862) (Help book for teaching German to children from 9 to 13 years old), the first section of which dealt with the structure and logic of language.
Frege studied at Große Stadtschule Wismar and graduated in 1869. Teacher of mathematics and natural science Gustav Adolf Leo Sachse (1843–1909), who was also a poet, played an important role in determining Frege's future scientific career, encouraging him to continue his studies at his own alma mater, the University of Jena.
Studies at University (1869–74)
Frege matriculated at the University of Jena in the spring of 1869 as a citizen of the North German Confederation. In the four semesters of his studies he attended approximately twenty courses of lectures, most of them on mathematics and physics. His most important teacher was Ernst Karl Abbe (1840–1905; physicist, mathematician, and inventor). Abbe gave lectures on theory of gravity, galvanism and electrodynamics, complex analysis theory of functions of a complex variable, applications of physics, selected divisions of mechanics, and mechanics of solids. Abbe was more than a teacher to Frege: he was a trusted friend, and, as director of the optical manufacturer Carl Zeiss AG, he was in a position to advance Frege's career. After Frege's graduation, they came into closer correspondence.
His other notable university teachers were Christian Philipp Karl Snell (1806–86; subjects: use of infinitesimal analysis in geometry, analytic geometry of planes, analytical mechanics, optics, physical foundations of mechanics); Hermann Karl Julius Traugott Schaeffer (1824–1900; analytic geometry, applied physics, algebraic analysis, on the telegraph and other electronic machines); and the philosopher Kuno Fischer (1824–1907; Kantian and critical philosophy).
Starting in 1871, Frege continued his studies in Göttingen, the leading university in mathematics in German-speaking territories, where he attended the lectures of Rudolf Friedrich Alfred Clebsch (1833–72; analytic geometry), Ernst Christian Julius Schering (1824–97; function theory), Wilhelm Eduard Weber (1804–91; physical studies, applied physics), Eduard Riecke (1845–1915; theory of electricity), and Hermann Lotze (1817–81; philosophy of religion). Many of the philosophical doctrines of the mature Frege have parallels in Lotze; it has been the subject of scholarly debate whether or not there was a direct influence on Frege's views arising from his attending Lotze's lectures.
In 1873, Frege attained his doctorate under Ernst Christian Julius Schering, with a dissertation under the title of "Ueber eine geometrische Darstellung der imaginären Gebilde in der Ebene" ("On a Geometrical Representation of Imaginary Forms in a Plane"), in which he aimed to solve such fundamental problems in geometry as the mathematical interpretation of projective geometry's infinitely distant (imaginary) points.
Frege married Margarete Katharina Sophia Anna Lieseberg (15 February 1856 – 25 June 1904) on 14 March 1887. The couple had at least two children, who died young. Years later they adopted a son, Alfred. Little else is known about Frege's family life, however.
Work as a logician
Though his education and early mathematical work focused primarily on geometry, Frege's work soon turned to logic. His 1879 Begriffsschrift marked a turning point in the history of logic. The Begriffsschrift broke new ground, including a rigorous treatment of the ideas of functions and variables. Frege's goal was to show that mathematics grows out of logic, and in so doing, he devised techniques that separated him from the Aristotelian syllogistic but took him rather close to Stoic propositional logic.
In effect, Frege invented axiomatic predicate logic, in large part thanks to his invention of quantified variables, which eventually became ubiquitous in mathematics and logic, and which solved the problem of multiple generality. Previous logic had dealt with the logical constants and, or, if... then..., not, and some and all, but iterations of these operations, especially "some" and "all", were little understood: even the distinction between a sentence like "every boy loves some girl" and "some girl is loved by every boy" could be represented only very artificially, whereas Frege's formalism had no difficulty expressing the different readings of "every boy loves some girl who loves some boy who loves some girl" and similar sentences, in complete parallel with his treatment of, say, "every boy is foolish".
A frequently noted example is that Aristotle's logic is unable to represent mathematical statements like Euclid's theorem, a fundamental statement of number theory that there are an infinite number of prime numbers. Frege's "conceptual notation", however, can represent such inferences. The analysis of logical concepts and the machinery of formalization that is essential to Principia Mathematica (3 vols., 1910–13, by Bertrand Russell, 1872–1970, and Alfred North Whitehead, 1861–1947), to Russell's theory of descriptions, to Kurt Gödel's (1906–78) incompleteness theorems, and to Alfred Tarski's (1901–83) theory of truth, is ultimately due to Frege.
One of Frege's stated purposes was to isolate genuinely logical principles of inference, so that in the proper representation of mathematical proof, one would at no point appeal to "intuition". If there was an intuitive element, it was to be isolated and represented separately as an axiom: from there on, the proof was to be purely logical and without gaps. Having exhibited this possibility, Frege's larger purpose was to defend the view that arithmetic is a branch of logic, a view known as logicism: unlike geometry, arithmetic was to be shown to have no basis in "intuition", and no need for non-logical axioms. Already in the 1879 Begriffsschrift important preliminary theorems, for example, a generalized form of law of trichotomy, were derived within what Frege understood to be pure logic.
This idea was formulated in non-symbolic terms in his The Foundations of Arithmetic (Die Grundlagen der Arithmetik, 1884). Later, in his Basic Laws of Arithmetic (Grundgesetze der Arithmetik, vol. 1, 1893; vol. 2, 1903; vol. 2 was published at his own expense), Frege attempted to derive, by use of his symbolism, all of the laws of arithmetic from axioms he asserted as logical. Most of these axioms were carried over from his Begriffsschrift, though not without some significant changes. The one truly new principle was one he called the Basic Law V: the "value-range" of the function f(x) is the same as the "value-range" of the function g(x) if and only if ∀x[f(x) = g(x)].
The crucial case of the law may be formulated in modern notation as follows. Let {x|Fx} denote the extension of the predicate Fx, that is, the set of all Fs, and similarly for Gx. Then Basic Law V says that the predicates Fx and Gx have the same extension if and only if ∀x[Fx ↔ Gx]. The set of Fs is the same as the set of Gs just in case every F is a G and every G is an F. (The case is special because what is here being called the extension of a predicate, or a set, is only one type of "value-range" of a function.)
In a famous episode, Bertrand Russell wrote to Frege, just as Vol. 2 of the Grundgesetze was about to go to press in 1903, showing that Russell's paradox could be derived from Frege's Basic Law V. It is easy to define the relation of membership of a set or extension in Frege's system; Russell then drew attention to "the set of things x that are such that x is not a member of x". The system of the Grundgesetze entails that the set thus characterised both is and is not a member of itself, and is thus inconsistent. Frege wrote a hasty, last-minute Appendix to Vol. 2, deriving the contradiction and proposing to eliminate it by modifying Basic Law V. Frege opened the Appendix with the exceptionally honest comment: "Hardly anything more unfortunate can befall a scientific writer than to have one of the foundations of his edifice shaken after the work is finished. This was the position I was placed in by a letter of Mr. Bertrand Russell, just when the printing of this volume was nearing its completion." (This letter and Frege's reply are translated in Jean van Heijenoort 1967.)
Frege's proposed remedy was subsequently shown to imply that there is but one object in the universe of discourse, and hence is worthless (indeed, this would make for a contradiction in Frege's system if he had axiomatized the idea, fundamental to his discussion, that the True and the False are distinct objects; see, for example, Dummett 1973), but recent work has shown that much of the program of the Grundgesetze might be salvaged in other ways:
Basic Law V can be weakened in other ways. The best-known way is due to philosopher and mathematical logician George Boolos (1940–1996), who was an expert on the work of Frege. A "concept" F is "small" if the objects falling under F cannot be put into one-to-one correspondence with the universe of discourse, that is, unless: ∃R[R is 1-to-1 & ∀x∃y(xRy & Fy)]. Now weaken V to V*: a "concept" F and a "concept" G have the same "extension" if and only if neither F nor G is small or ∀x(Fx ↔ Gx). V* is consistent if second-order arithmetic is, and suffices to prove the axioms of second-order arithmetic.
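In symbols (writing *F for the "extension" of F, an illustrative notation, and using the correspondence condition quoted above), the smallness definition and the weakened law can be displayed along these lines:

\[
\mathrm{Small}(F) \;\leftrightarrow\; \neg\exists R\,\bigl[ R \text{ is 1-to-1} \;\wedge\; \forall x\,\exists y\,(xRy \wedge Fy) \bigr]
\]

\[
{*}F = {*}G \;\leftrightarrow\; \bigl[ \bigl(\neg\mathrm{Small}(F) \wedge \neg\mathrm{Small}(G)\bigr) \vee \forall x\,(Fx \leftrightarrow Gx) \bigr] \tag{V*}
\]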
Basic Law V can simply be replaced with Hume's principle, which says that the number of Fs is the same as the number of Gs if and only if the Fs can be put into a one-to-one correspondence with the Gs. This principle, too, is consistent if second-order arithmetic is, and suffices to prove the axioms of second-order arithmetic. This result is termed Frege's theorem because it was noticed that in developing arithmetic, Frege's use of Basic Law V is restricted to a proof of Hume's principle; it is from this, in turn, that arithmetical principles are derived. On Hume's principle and Frege's theorem, see "Frege's Logic, Theorem, and Foundations for Arithmetic".
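Writing #F for "the number of Fs" and F ≈ G for "the Fs can be put into one-to-one correspondence with the Gs" (both notations illustrative), Hume's principle is standardly displayed as:

\[
\#F = \#G \;\leftrightarrow\; F \approx G,
\]

where the correspondence F ≈ G can be spelled out in second-order terms as

\[
\exists R\,\bigl[ \forall x\,\bigl(Fx \rightarrow \exists! y\,(Gy \wedge xRy)\bigr) \;\wedge\; \forall y\,\bigl(Gy \rightarrow \exists! x\,(Fx \wedge xRy)\bigr) \bigr].
\]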
Frege's logic, now known as second-order logic, can be weakened to so-called predicative second-order logic. Predicative second-order logic plus Basic Law V is provably consistent by finitistic or constructive methods, but it can interpret only very weak fragments of arithmetic.
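The restriction at issue concerns the comprehension schema. In a standard formulation (given here as an illustration, not as Frege's own axiom), predicative second-order logic admits

\[
\exists X\,\forall x\,\bigl( Xx \leftrightarrow \varphi(x) \bigr)
\]

only for formulas φ(x) containing no bound second-order variables (and with X not free in φ), so that a concept may never be defined by quantifying over the totality of concepts.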
Frege's work in logic received little international attention until 1903, when Russell wrote an appendix to The Principles of Mathematics stating his differences with Frege. The diagrammatic notation that Frege used had no antecedents (and has had no imitators since). Moreover, until Russell and Whitehead's Principia Mathematica (3 vols.) appeared in 1910–13, the dominant approach to mathematical logic was still that of George Boole (1815–64) and his intellectual descendants, especially Ernst Schröder (1841–1902). Frege's logical ideas nevertheless spread through the writings of his student Rudolf Carnap (1891–1970) and other admirers, particularly Bertrand Russell and Ludwig Wittgenstein (1889–1951).
Philosopher
Frege is one of the founders of analytic philosophy, whose work on logic and language gave rise to the linguistic turn in philosophy. His contributions to the philosophy of language include:
Function and argument analysis of the proposition;
Distinction between concept and object (Begriff und Gegenstand);
Principle of compositionality;
Context principle; and
Distinction between the sense and reference (Sinn und Bedeutung) of names and other expressions, sometimes said to involve a mediated reference theory.
As a philosopher of mathematics, Frege attacked the psychologistic appeal to mental explanations of the content of judgments and the meaning of sentences. His original purpose was very far from answering general questions about meaning; instead, he devised his logic to explore the foundations of arithmetic, undertaking to answer questions such as "What is a number?" or "What objects do number-words ('one', 'two', etc.) refer to?" But in pursuing these matters, he eventually found himself analysing and explaining what meaning is, and thus came to several conclusions that proved highly consequential for the subsequent course of analytic philosophy and the philosophy of language.
Sense and reference
Frege's 1892 paper, "On Sense and Reference" ("Über Sinn und Bedeutung"), introduced his influential distinction between sense ("Sinn") and reference ("Bedeutung", which has also been translated as "meaning", or "denotation"). While conventional accounts of meaning took expressions to have just one feature (reference), Frege introduced the view that expressions have two different aspects of significance: their sense and their reference.
Reference (or "Bedeutung") applied to proper names, where a given expression (say the expression "Tom") simply refers to the entity bearing the name (the person named Tom). Frege also held that propositions had a referential relationship with their truth-value (in other words, a statement "refers" to the truth-value it takes). By contrast, the sense (or "Sinn") associated with a complete sentence is the thought it expresses. The sense of an expression is said to be the "mode of presentation" of the item referred to, and there can be multiple modes of presentation for the same referent.
The distinction can be illustrated thus: In their ordinary uses, the name "Charles Philip Arthur George Mountbatten-Windsor", which for logical purposes is an unanalysable whole, and the functional expression "the King of the United Kingdom", which contains the significant parts "the King of ξ" and "United Kingdom", have the same referent, namely, the person best known as King Charles III. But while the sense of the word "United Kingdom" is a part of the sense of the latter expression, it is no part of the sense of the "full name" of King Charles.
These distinctions were disputed by Bertrand Russell, especially in his paper "On Denoting"; the controversy has continued into the present, fueled especially by Saul Kripke's famous lectures "Naming and Necessity".
1924 diary
Frege's published philosophical writings were of a very technical nature and divorced from practical issues, so much so that Frege scholar Dummett expressed his "shock to discover, while reading Frege's diary, that his hero was an anti-Semite." After the German Revolution of 1918–19 his political opinions became more radical. In the last year of his life, at the age of 76, his diary contained political opinions opposing the parliamentary system, democrats, liberals, Catholics, the French and Jews, who he thought ought to be deprived of political rights and, preferably, expelled from Germany. Frege confided "that he had once thought of himself as a liberal and was an admirer of Bismarck", but then sympathized with General Ludendorff. In an entry dated 5 May 1924 Frege expressed agreement with an article published in Houston Stewart Chamberlain's Deutschlands Erneuerung which praised Adolf Hitler. Frege recorded the belief that it would be best if the Jews of Germany would "get lost, or better would like to disappear from Germany." Various interpretations of these entries have since been published. The diary also contains a critique of universal suffrage and socialism. In real life, however, Frege had friendly relations with Jews: among his students was Gershom Scholem, who greatly valued his teaching, and it was Frege who encouraged Ludwig Wittgenstein to leave for England in order to study with Bertrand Russell. The 1924 diary was published posthumously in 1994.
Personality
Frege was described by his students as a highly introverted person, seldom entering into dialogues with others and mostly facing the blackboard while lecturing. He was, however, known to occasionally show wit and even bitter sarcasm during his classes.
Important dates
Born 8 November 1848 in Wismar, Mecklenburg-Schwerin.
1869 — attends the University of Jena.
1871 — attends the University of Göttingen.
1873 — PhD in mathematics (geometry), awarded at Göttingen.
1874 — Habilitation at Jena; private lecturer (Privatdozent).
1879 — Ausserordentlicher Professor at Jena.
1896 — Ordentlicher Honorarprofessor at Jena.
1918 — retires.
Died 26 July 1925 in Bad Kleinen (now part of Mecklenburg-Vorpommern).
Important works
Logic, foundation of arithmetic
Begriffsschrift: eine der arithmetischen nachgebildete Formelsprache des reinen Denkens (1879), Halle an der Saale: Verlag von Louis Nebert (online version).
In English: Begriffsschrift, a Formula Language, Modeled Upon That of Arithmetic, for Pure Thought, in: J. van Heijenoort (ed.), From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931, Cambridge, MA: Harvard University Press, 1967, pp. 5–82.
In English (selected sections revised in modern formal notation): R. L. Mendelsohn, The Philosophy of Gottlob Frege, Cambridge: Cambridge University Press, 2005: "Appendix A. Begriffsschrift in Modern Notation: (1) to (51)" and "Appendix B. Begriffsschrift in Modern Notation: (52) to (68)."
Die Grundlagen der Arithmetik: Eine logisch-mathematische Untersuchung über den Begriff der Zahl (1884), Breslau: Verlag von Wilhelm Koebner (online version).
In English: The Foundations of Arithmetic: A Logico-Mathematical Enquiry into the Concept of Number, translated by J. L. Austin, Oxford: Basil Blackwell, 1950.
Grundgesetze der Arithmetik, Band I (1893); Band II (1903), Jena: Verlag Hermann Pohle (online version).
In English (translation of selected sections), "Translation of Part of Frege's Grundgesetze der Arithmetik," translated and edited Peter Geach and Max Black in Translations from the Philosophical Writings of Gottlob Frege, New York, NY: Philosophical Library, 1952, pp. 137–158.
In German (revised in modern formal notation): Grundgesetze der Arithmetik, Korpora (portal of the University of Duisburg-Essen), 2006: Band I and Band II.
In German (revised in modern formal notation): Grundgesetze der Arithmetik – Begriffsschriftlich abgeleitet. Band I und II: In moderne Formelnotation transkribiert und mit einem ausführlichen Sachregister versehen, edited by T. Müller, B. Schröder, and R. Stuhlmann-Laeisz, Paderborn: mentis, 2009.
In English: Basic Laws of Arithmetic, translated and edited with an introduction by Philip A. Ebert and Marcus Rossberg. Oxford: Oxford University Press, 2013.
Philosophical studies
"Function and Concept" (1891)
Original: "Funktion und Begriff", an address to the Jenaische Gesellschaft für Medizin und Naturwissenschaft, Jena, 9 January 1891.
In English: "Function and Concept".
"On Sense and Reference" (1892)
Original: "Über Sinn und Bedeutung", in Zeitschrift für Philosophie und philosophische Kritik C (1892): 25–50.
In English: "On Sense and Reference", alternatively translated (in later edition) as "On Sense and Meaning".
"Concept and Object" (1892)
Original: "Ueber Begriff und Gegenstand", in Vierteljahresschrift für wissenschaftliche Philosophie XVI (1892): 192–205.
In English: "Concept and Object".
"What is a Function?" (1904)
Original: "Was ist eine Funktion?", in Festschrift Ludwig Boltzmann gewidmet zum sechzigsten Geburtstage, 20 February 1904, S. Meyer (ed.), Leipzig, 1904, pp. 656–666.
In English: "What is a Function?".
Logical Investigations (1918–1923). Frege intended that the following three papers be published together in a book titled Logische Untersuchungen (Logical Investigations). Though the German book never appeared, the papers were published together in Logische Untersuchungen, ed. G. Patzig, Vandenhoeck & Ruprecht, 1966, and English translations appeared together in Logical Investigations, ed. Peter Geach, Blackwell, 1975.
1918–19. "Der Gedanke: Eine logische Untersuchung" ("The Thought: A Logical Inquiry"), in Beiträge zur Philosophie des Deutschen Idealismus I: 58–77.
1918–19. "Die Verneinung" ("Negation") in Beiträge zur Philosophie des Deutschen Idealismus I: 143–157.
1923. "Gedankengefüge" ("Compound Thought"), in Beiträge zur Philosophie des Deutschen Idealismus III: 36–51.
Articles on geometry
1903: "Über die Grundlagen der Geometrie". II. Jahresbericht der deutschen Mathematiker-Vereinigung XII (1903), 368–375.
In English: "On the Foundations of Geometry".
1967: Kleine Schriften. (I. Angelelli, ed.). Darmstadt: Wissenschaftliche Buchgesellschaft, 1967 and Hildesheim, G. Olms, 1967. "Small Writings," a collection of most of his writings (e.g., the previous), posthumously published.
See also
Frege system
List of pioneers in computer science
Neo-Fregeanism
Notes
References
Sources
Primary
Online bibliography of Frege's works and their English translations (compiled by Edward N. Zalta, Stanford Encyclopedia of Philosophy).
1879. Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens. Halle a. S.: Louis Nebert. Translation: Concept Script, a formal language of pure thought modelled upon that of arithmetic, by S. Bauer-Mengelberg in Jean Van Heijenoort, ed., 1967. From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931. Harvard University Press.
1884. Die Grundlagen der Arithmetik: Eine logisch-mathematische Untersuchung über den Begriff der Zahl. Breslau: W. Koebner. Translation: J. L. Austin, 1974. The Foundations of Arithmetic: A Logico-Mathematical Enquiry into the Concept of Number, 2nd ed. Blackwell.
1891. "Funktion und Begriff." Translation: "Function and Concept" in Geach and Black (1980).
1892a. "Über Sinn und Bedeutung" in Zeitschrift für Philosophie und philosophische Kritik 100:25–50. Translation: "On Sense and Reference" in Geach and Black (1980).
1892b. "Ueber Begriff und Gegenstand" in Vierteljahresschrift für wissenschaftliche Philosophie 16:192–205. Translation: "Concept and Object" in Geach and Black (1980).
1893. Grundgesetze der Arithmetik, Band I. Jena: Verlag Hermann Pohle. Band II, 1903. Band I+II online. Partial translation of volume 1: Montgomery Furth, 1964. The Basic Laws of Arithmetic. Univ. of California Press. Translation of selected sections from volume 2 in Geach and Black (1980). Complete translation of both volumes: Philip A. Ebert and Marcus Rossberg, 2013, Basic Laws of Arithmetic. Oxford University Press.
1904. "Was ist eine Funktion?" in Meyer, S., ed., 1904. Festschrift Ludwig Boltzmann gewidmet zum sechzigsten Geburtstage, 20. Februar 1904. Leipzig: Barth: 656–666. Translation: "What is a Function?" in Geach and Black (1980).
1918–1923. Peter Geach (editor): Logical Investigations, Blackwell, 1975.
1924. Gottfried Gabriel, Wolfgang Kienzler (editors): Gottlob Freges politisches Tagebuch. In: Deutsche Zeitschrift für Philosophie, vol. 42, 1994, pp. 1057–98. Introduction by the editors on pp. 1057–66. This article has been translated into English, in: Inquiry, vol. 39, 1996, pp. 303–342.
Peter Geach and Max Black, eds., and trans., 1980. Translations from the Philosophical Writings of Gottlob Frege, 3rd ed. Blackwell (1st ed. 1952).
Secondary
Philosophy
Badiou, Alain. "On a Contemporary Usage of Frege", trans. Justin Clemens and Sam Gillespie. UMBR(a), no. 1, 2000, pp. 99–115.
Baker, Gordon, and P.M.S. Hacker, 1984. Frege: Logical Excavations. Oxford University Press. — Vigorous, if controversial, criticism of both Frege's philosophy and influential contemporary interpretations such as Dummett's.
Currie, Gregory, 1982. Frege: An Introduction to His Philosophy. Harvester Press.
Dummett, Michael, 1973. Frege: Philosophy of Language. Harvard University Press.
------, 1981. The Interpretation of Frege's Philosophy. Harvard University Press.
Hill, Claire Ortiz, 1991. Word and Object in Husserl, Frege and Russell: The Roots of Twentieth-Century Philosophy. Athens OH: Ohio University Press.
------, and Rosado Haddock, G. E., 2000. Husserl or Frege: Meaning, Objectivity, and Mathematics. Open Court. — On the Frege-Husserl-Cantor triangle.
Kenny, Anthony, 1995. Frege – An introduction to the founder of modern analytic philosophy. Penguin Books. — Excellent non-technical introduction and overview of Frege's philosophy.
Klemke, E.D., ed., 1968. Essays on Frege. University of Illinois Press. — 31 essays by philosophers, grouped under three headings: 1. Ontology; 2. Semantics; and 3. Logic and Philosophy of Mathematics.
Rosado Haddock, Guillermo E., 2006. A Critical Introduction to the Philosophy of Gottlob Frege. Ashgate Publishing.
Sisti, Nicola, 2005. Il Programma Logicista di Frege e il Tema delle Definizioni. Franco Angeli. — On Frege's theory of definitions.
Sluga, Hans, 1980. Gottlob Frege. Routledge.
Vassallo, Nicla, and Pieranna Garavaso, 2014. Frege on Thinking and Its Epistemic Significance. Lanham, MD: Lexington Books–Rowman & Littlefield.
Weiner, Joan, 1990. Frege in Perspective, Cornell University Press.
Logic and mathematics
Anderson, D. J., and Edward Zalta, 2004, "Frege, Boolos, and Logical Objects," Journal of Philosophical Logic 33: 1–26.
Blanchette, Patricia, 2012. Frege's Conception of Logic. Oxford: Oxford University Press.
Burgess, John, 2005. Fixing Frege. Princeton Univ. Press. — A critical survey of the ongoing rehabilitation of Frege's logicism.
Boolos, George, 1998. Logic, Logic, and Logic. MIT Press. — 12 papers on Frege's theorem and the logicist approach to the foundation of arithmetic.
Dummett, Michael, 1991. Frege: Philosophy of Mathematics. Harvard University Press.
Demopoulos, William, ed., 1995. Frege's Philosophy of Mathematics. Harvard Univ. Press. — Papers exploring Frege's theorem and Frege's mathematical and intellectual background.
Ferreira, F. and Wehmeier, K., 2002, "On the consistency of the Delta-1-1-CA fragment of Frege's Grundgesetze," Journal of Philosophical Logic 31: 301–11.
Grattan-Guinness, Ivor, 2000. The Search for Mathematical Roots 1870–1940. Princeton University Press. — Fair to the mathematician, less so to the philosopher.
Gillies, Donald A., 1982. Frege, Dedekind, and Peano on the foundations of arithmetic. Methodology and Science Foundation, 2. Van Gorcum & Co., Assen, 1982.
Gillies, Donald: The Fregean revolution in logic. Revolutions in mathematics, 265–305, Oxford Sci. Publ., Oxford Univ. Press, New York, 1992.
Irvine, Andrew David, 2010, "Frege on Number Properties," Studia Logica, 96(2): 239–60.
Charles Parsons, 1965, "Frege's Theory of Number." Reprinted with Postscript in Demopoulos (1995): 182–210. The starting point of the ongoing sympathetic reexamination of Frege's logicism.
Heck, Richard Kimberly: Frege's Theorem. Oxford: Oxford University Press, 2011
Heck, Richard Kimberly: Reading Frege's Grundgesetze. Oxford: Oxford University Press, 2013
Wright, Crispin, 1983. Frege's Conception of Numbers as Objects. Aberdeen University Press. — A systematic exposition and a scope-restricted defense of Frege's Grundlagen conception of numbers.
Historical context
External links
Frege at Genealogy Project
A comprehensive guide to Fregean material available on the web by Brian Carver.
Stanford Encyclopedia of Philosophy:
"Gottlob Frege" — by Edward Zalta.
"Frege's Logic, Theorem, and Foundations for Arithmetic" — by Edward Zalta.
Internet Encyclopedia of Philosophy:
Gottlob Frege — by Kevin C. Klement.
Frege and Language — by Dorothea Lotter.
Metaphysics Research Lab: Gottlob Frege.
Frege on Being, Existence and Truth.
Begriff, a LaTeX package for typesetting Frege's logic notation, earlier version.
grundgesetze, a LaTeX package for typesetting Frege's logic notation, mature version.
Frege's Basic Laws of Arithmetic, website, incl. corrigenda and LaTeX typesetting tool — by P. A. Ebert and M. Rossberg.
Gottlob Frege
1848 births
1925 deaths
19th-century German male writers
19th-century German mathematicians
19th-century German philosophers
19th-century German writers
20th-century German male writers
20th-century German mathematicians
20th-century German philosophers
Analytic philosophers
German epistemologists
German logicians
German male non-fiction writers
Linguistic turn
Ontologists
People from the Grand Duchy of Mecklenburg-Schwerin
People from Wismar
German philosophers of education
German philosophers of language
Philosophers of logic
Philosophers of mathematics
German philosophers of mind
German philosophers of science
Platonists
German set theorists
University of Jena alumni
University of Göttingen alumni
Academic staff of the University of Jena
Mathematicians from the German Empire | Gottlob Frege | ["Mathematics"] | 7,345 | ["Philosophers of mathematics"] |
48,489 | https://en.wikipedia.org/wiki/Magic%20%28supernatural%29 | Magic, sometimes spelled magick, is the application of beliefs, rituals or actions employed in the belief that they can manipulate natural or supernatural beings and forces. It is a category into which have been placed various beliefs and practices sometimes considered separate from both religion and science.
Connotations have varied from positive to negative at times throughout history. Within Western culture, magic has been linked to ideas of the Other, foreignness, and primitivism; indicating that it is "a powerful marker of cultural difference" and likewise, a non-modern phenomenon. During the late nineteenth and early twentieth centuries, Western intellectuals perceived the practice of magic to be a sign of a primitive mentality and also commonly attributed it to marginalised groups of people.
Aleister Crowley (1875–1947), a British occultist, defined "magick" as "the Science and Art of causing Change to occur in conformity with Will", adding a 'k' to distinguish ceremonial or ritual magic from stage magic. In modern occultism and neopagan religions, many self-described magicians and witches regularly practice ritual magic. This view has been incorporated into chaos magic and the new religious movements of Thelema and Wicca.
Etymology
The English words magic, mage and magician come from the Latin term magus, through the Greek μάγος, which is from the Old Persian maguš (𐎶𐎦𐎢𐏁, magician). The Old Persian magu- is derived from the Proto-Indo-European *magh (be able). The Persian term may have led to the Old Sinitic *Mγag (mage or shaman). The Old Persian form seems to have permeated ancient Semitic languages as the Talmudic Hebrew magosh, the Aramaic amgusha (magician), and the Chaldean maghdim (wisdom and philosophy); from the first century BCE onwards, Syrian magusai gained notoriety as magicians and soothsayers.
During the late sixth and early fifth centuries BCE, the term goetia found its way into ancient Greek, where it was used with negative connotations to apply to rites that were regarded as fraudulent, unconventional, and dangerous; in particular, rites dedicated to the evocation and invocation of daimons (lesser divinities or spirits) in order to control them and acquire powers. This concept remained pervasive throughout the Hellenistic period, when Hellenistic authors categorised a diverse range of practices—such as enchantment, witchcraft, incantations, divination, necromancy, and astrology—under the label "magic".
The Latin language adopted this meaning of the term in the first century BCE. Via Latin, the concept became incorporated into Christian theology during the first century CE. Early Christians associated magic with demons, and thus regarded it as contrary to Christian religion. In early modern Europe, Protestants often claimed that Roman Catholicism was magic rather than religion, and as Christian Europeans began colonizing other parts of the world in the sixteenth century, they labelled the non-Christian beliefs they encountered as magical. In that same period, Italian humanists reinterpreted the term in a positive sense to express the idea of natural magic. Both negative and positive understandings of the term recurred in Western culture over the following centuries.
Since the nineteenth century, academics in various disciplines have employed the term magic but have defined it in different ways and used it in reference to different things. One approach, associated with the anthropologists Edward Tylor (1832–1917) and James G. Frazer (1854–1941), uses the term to describe beliefs in hidden sympathies between objects that allow one to influence the other. Defined in this way, magic is portrayed as the opposite of science. An alternative approach, associated with the sociologist Marcel Mauss (1872–1950) and his uncle Émile Durkheim (1858–1917), employs the term to describe private rites and ceremonies and contrasts it with religion, which it defines as a communal and organised activity. By the 1990s many scholars were rejecting the term's utility for scholarship. They argued that the label drew arbitrary lines between similar beliefs and practices that were alternatively considered religious, and that it was ethnocentric to apply the connotations of magic—rooted in Western and Christian history—to other cultures.
Branches or types
High and low
Historians and anthropologists have distinguished between practitioners who engage in high magic and those who engage in low magic. High magic, also known as theurgy and ceremonial or ritual magic, is more complex, involving lengthy and detailed rituals as well as sophisticated, sometimes expensive, paraphernalia. Low magic and natural magic are associated with peasants and folklore, and with simpler rituals such as brief, spoken spells. Low magic is also closely associated with sorcery and witchcraft. Anthropologist Susan Greenwood writes that "Since the Renaissance, high magic has been concerned with drawing down forces and energies from heaven" and achieving unity with divinity. High magic is usually performed indoors while witchcraft is often performed outdoors.
White, gray and black
Historian Owen Davies says the term "white witch" was rarely used before the 20th century. White magic is understood as the use of magic for selfless or helpful purposes, while black magic is understood as the use of magic for selfish, harmful or evil purposes; black magic is thus the malicious counterpart of benevolent white magic. There is no consensus as to what constitutes white, gray or black magic; as Phil Hine says, "like many other aspects of occultism, what is termed to be 'black magic' depends very much on who is doing the defining." Gray magic, also called "neutral magic", is magic that is not performed for specifically benevolent reasons, but is also not focused towards completely hostile practices.
Witchcraft
The historian Ronald Hutton notes the presence of four distinct meanings of the term witchcraft in the English language. Historically, the term primarily referred to the practice of causing harm to others through supernatural or magical means. This remains, according to Hutton, "the most widespread and frequent" understanding of the term. Moreover, Hutton also notes three other definitions in current usage; to refer to anyone who conducts magical acts, for benevolent or malevolent intent; for practitioners of the modern Pagan religion of Wicca; or as a symbol of women resisting male authority and asserting an independent female authority. Belief in witchcraft is often present within societies and groups whose cultural framework includes a magical world view.
Those regarded as being magicians have often faced suspicion from other members of their society. This is particularly the case if these perceived magicians have been associated with social groups already considered morally suspect in a particular society, such as foreigners, women, or the lower classes. In contrast to these negative associations, many practitioners of activities that have been labelled magical have emphasised that their actions are benevolent and beneficial. This conflicted with the common Christian view that all activities categorised as being forms of magic were intrinsically bad regardless of the intent of the magician, because all magical actions relied on the aid of demons. There could be conflicting attitudes regarding the practices of a magician; in European history, authorities often believed that cunning folk and traditional healers were harmful because their practices were regarded as magical and thus stemming from contact with demons, whereas a local community might value and respect these individuals because their skills and services were deemed beneficial.
In Western societies, the practice of magic, especially when harmful, was usually associated with women. For instance, during the witch trials of the early modern period, around three quarters of those executed as witches were female, compared with only a quarter who were men. That women were more likely to be accused and convicted of witchcraft in this period might have been because their position was more legally vulnerable, with women having little or no legal standing that was independent of their male relatives. The conceptual link between women and magic in Western culture may be because many of the activities regarded as magical—from rites to encourage fertility to potions to induce abortions—were associated with the female sphere. It might also be connected to the fact that many cultures portrayed women as being inferior to men on an intellectual, moral, spiritual, and physical level.
History
Mesopotamia
Magic was invoked in many kinds of rituals and medical formulae, and to counteract evil omens. Defensive or legitimate magic in Mesopotamia (asiputu or masmassutu in the Akkadian language) consisted of incantations and ritual practices intended to alter specific realities. The ancient Mesopotamians believed that magic was the only viable defense against demons, ghosts, and evil sorcerers. To defend themselves against the spirits of those they had wronged, they would leave offerings known as kispu in the person's tomb in the hope of appeasing them. If that failed, they also sometimes took a figurine of the deceased and buried it in the ground, demanding that the gods eradicate the spirit or force it to leave the person alone.
The ancient Mesopotamians also used magic intending to protect themselves from evil sorcerers who might place curses on them. Black magic as a category did not exist in ancient Mesopotamia, and a person legitimately using magic to defend themselves against illegitimate magic would use exactly the same techniques. The only major difference was that curses were enacted in secret; whereas a defense against sorcery was conducted in the open, in front of an audience if possible. One ritual to punish a sorcerer was known as Maqlû, or "The Burning". The person viewed as being afflicted by witchcraft would create an effigy of the sorcerer and put it on trial at night. Then, once the nature of the sorcerer's crimes had been determined, the person would burn the effigy and thereby break the sorcerer's power over them.
The ancient Mesopotamians also performed magical rituals to purify themselves of sins committed unknowingly. One such ritual was known as the Šurpu, or "Burning", in which the caster of the spell would transfer the guilt for all their misdeeds onto various objects such as a strip of dates, an onion, and a tuft of wool. The person would then burn the objects and thereby purify themself of all sins that they might have unknowingly committed. A whole genre of love spells existed. Such spells were believed to cause a person to fall in love with another person, restore love which had faded, or cause a male sexual partner to be able to sustain an erection when he had previously been unable to do so. Other spells were used to reconcile a man with his patron deity or to reconcile a wife with a husband who had been neglecting her.
The ancient Mesopotamians made no distinction between rational science and magic. When a person became ill, doctors would prescribe both magical formulas to be recited and medicinal treatments. Most magical rituals were intended to be performed by an āšipu, an expert in the magical arts. The profession was generally passed down from generation to generation and was held in extremely high regard; āšipū often served as advisors to kings and great leaders. An āšipu probably served not only as a magician, but also as a physician, a priest, a scribe, and a scholar.
The Sumerian god Enki, who was later syncretized with the East Semitic god Ea, was closely associated with magic and incantations; he was the patron god of the bārû and the āšipū and was widely regarded as the ultimate source of all arcane knowledge. The ancient Mesopotamians also believed in omens, which could come when solicited or unsolicited. Regardless of how they came, omens were always taken with the utmost seriousness.
Incantation bowls
A common set of shared assumptions about the causes of evil and how to avert it are found in a form of early protective magic called incantation bowls or magic bowls. The bowls were produced in the Middle East, particularly in Upper Mesopotamia and Syria, in what is now Iraq and Iran, and were fairly popular during the sixth to eighth centuries. The bowls were buried face down and were meant to capture demons. They were commonly placed under the threshold, in courtyards, in the corner of the homes of the recently deceased, and in cemeteries. A subcategory of incantation bowls are those used in Jewish magical practice. Aramaic incantation bowls are an important source of knowledge about Jewish magical practices.
Egypt
In ancient Egypt (Kemet in the Egyptian language), magic (personified as the god Heka) was an integral part of religion and culture, and is known to us through a substantial corpus of texts produced by the Egyptian tradition.
While the category magic has been contentious for modern Egyptology, there is clear support for its applicability from ancient terminology. The Coptic term hik is the descendant of the pharaonic term heka, which, unlike its Coptic counterpart, had no connotation of impiety or illegality, and is attested from the Old Kingdom through to the Roman era. Heka was considered morally neutral and was applied to the practices and beliefs of both foreigners and Egyptians alike. The Instructions for Merikare informs us that heka was a beneficence gifted by the creator to humanity "in order to be weapons to ward off the blow of events".
Magic was practiced by both the literate priestly hierarchy and by illiterate farmers and herdsmen, and the principle of heka underlay all ritual activity, both in the temples and in private settings.
The main principle of heka is centered on the power of words to bring things into being. Karenga explains the pivotal power of words and their vital ontological role as the primary tool used by the creator to bring the manifest world into being. Because humans were understood to share a divine nature with the gods, as snnw ntr (images of the god), they were held to share the gods' power to use words creatively.
Book of the Dead
The interior walls of the pyramid of Unas, the final pharaoh of the Egyptian Fifth Dynasty, are covered in hundreds of magical spells and inscriptions, running from floor to ceiling in vertical columns. These inscriptions are known as the Pyramid Texts and they contain spells needed by the pharaoh in order to survive in the afterlife. The Pyramid Texts were strictly for royalty only; the spells were kept secret from commoners and were written only inside royal tombs. During the chaos and unrest of the First Intermediate Period, however, tomb robbers broke into the pyramids and saw the magical inscriptions. Commoners began learning the spells and, by the beginning of the Middle Kingdom, commoners began inscribing similar writings on the sides of their own coffins, hoping that doing so would ensure their own survival in the afterlife. These writings are known as the Coffin Texts.
After a person died, his or her corpse would be mummified and wrapped in linen bandages to ensure that the deceased's body would survive for as long as possible because the Egyptians believed that a person's soul could only survive in the afterlife for as long as his or her physical body survived here on earth. The last ceremony before a person's body was sealed away inside the tomb was known as the Opening of the Mouth. In this ritual, the priests would touch various magical instruments to various parts of the deceased's body, thereby giving the deceased the ability to see, hear, taste, and smell in the afterlife.
Amulets
The use of amulets (meket) was widespread among both living and dead ancient Egyptians. They were used for protection and as a means of "reaffirming the fundamental fairness of the universe". The oldest amulets found are from the predynastic Badarian Period, and they persisted through to Roman times.
Judea
In the Mosaic Law, practices such as witchcraft, being a soothsayer or a sorcerer, conjuring spells, or calling up the dead are specifically forbidden as abominations to the Lord.
Halakha (Jewish religious law) forbids divination and other forms of soothsaying, and the Talmud lists many persistent yet condemned divining practices. Practical Kabbalah in historical Judaism is a branch of the Jewish mystical tradition that concerns the use of magic. It was considered permitted white magic by its practitioners, reserved for the elite, who could separate its spiritual source from qlippothic realms of evil if performed under circumstances that were holy (Q-D-Š) and pure. The concern of overstepping Judaism's strong prohibitions of impure magic ensured it remained a minor tradition in Jewish history. Its teachings include the use of Divine and angelic names for amulets and incantations. These magical practices of Judaic folk religion which became part of practical Kabbalah date from Talmudic times. The Talmud mentions the use of charms for healing, and a wide range of magical cures were sanctioned by rabbis. It was ruled that any practice actually producing a cure was not to be regarded as superstitious, and there has been the widespread practice of medicinal amulets and folk remedies in Jewish societies across time and geography.
Although magic was forbidden by Levitical law in the Hebrew Bible, it was widely practised in the late Second Temple period, and particularly well documented in the period following the destruction of the temple into the 3rd, 4th, and 5th centuries CE.
Greco-Roman world
During the late sixth and early fifth centuries BCE, the Persian maguš was Graecicized and introduced into the ancient Greek language as μάγος and μαγεία. In doing so it transformed meaning, gaining negative connotations, with the magos being regarded as a charlatan whose ritual practices were fraudulent, strange, unconventional, and dangerous. As noted by Davies, for the ancient Greeks—and subsequently for the ancient Romans—"magic was not distinct from religion but rather an unwelcome, improper expression of it—the religion of the other". The historian Richard Gordon suggested that for the ancient Greeks, being accused of practicing magic was "a form of insult".
This change in meaning was influenced by the military conflicts that the Greek city-states were then engaged in against the Persian Empire. In this context, the term makes appearances in such surviving texts as Sophocles' Oedipus Rex, Hippocrates' De morbo sacro, and Gorgias' Encomium of Helen. In Sophocles' play, for example, the character Oedipus derogatorily refers to the seer Tiresias as a magos—in this context meaning something akin to quack or charlatan—reflecting how this epithet was no longer reserved only for Persians.
In the first century BCE, the Greek concept of the magos was adopted into Latin and used by a number of ancient Roman writers as magus and magia. The earliest known Latin use of the term was in Virgil's Eclogue, written around 40 BCE, which makes reference to magicis ... sacris (magic rites). The Romans already had other terms for the negative use of supernatural powers, such as veneficus and saga. The Roman use of the term was similar to that of the Greeks, but placed greater emphasis on the judicial application of it. Within the Roman Empire, laws would be introduced criminalising things regarded as magic.
In ancient Roman society, magic was associated with societies to the east of the empire; the first century CE writer Pliny the Elder for instance claimed that magic had been created by the Iranian philosopher Zoroaster, and that it had then been brought west into Greece by the magician Osthanes, who accompanied the military campaigns of the Persian King Xerxes.
Ancient Greek scholarship of the 20th century, almost certainly influenced by Christianising preconceptions of the meanings of magic and religion, and the wish to establish Greek culture as the foundation of Western rationality, developed a theory of ancient Greek magic as primitive and insignificant, and thereby essentially separate from Homeric, communal (polis) religion. Since the last decade of the century, however, recognising the ubiquity and respectability of acts such as katadesmoi (binding spells), described as magic by modern and ancient observers alike, scholars have been compelled to abandon this viewpoint. The Greek word mageuo (practice magic) itself derives from the word Magos, originally simply the Greek name for a Persian tribe known for practicing religion. Non-civic mystery cults have been similarly re-evaluated:
Katadesmoi, curses inscribed on wax or lead tablets and buried underground, were frequently executed by all strata of Greek society, sometimes to protect the entire polis. Communal curses carried out in public declined after the Greek classical period, but private curses remained common throughout antiquity. They were distinguished as magical by their individualistic, instrumental and sinister qualities. These qualities, and their perceived deviation from inherently mutable cultural constructs of normality, most clearly delineate ancient magic from the religious rituals of which they form a part.
A large number of magical papyri, in Greek, Coptic, and Demotic, have been recovered and translated. They contain early instances of:
the use of magic words said to have the power to command spirits;
the use of mysterious symbols or sigils which are thought to be useful when invoking or evoking spirits.
The practice of magic was banned in the late Roman world and condemned in imperial legislation collected in the Codex Theodosianus (438 AD).
Middle Ages
Magic practices such as divination, interpretation of omens, sorcery, and use of charms had been specifically forbidden in Mosaic Law and condemned in Biblical histories of the kings. Many of these practices were spoken against in the New Testament as well.
Some commentators say that in the first century CE, early Christian authors absorbed the Greco-Roman concept of magic and incorporated it into their developing Christian theology, and that these Christians retained the already implied Greco-Roman negative stereotypes of the term and extended them by incorporating conceptual patterns borrowed from Jewish thought, in particular the opposition of magic and miracle. Some early Christian authors followed the Greek-Roman thinking by ascribing the origin of magic to the human realm, mainly to Zoroaster and Osthanes. The Christian view was that magic was a product of the Babylonians, Persians, or Egyptians. The Christians shared with earlier classical culture the idea that magic was something distinct from proper religion, although drew their distinction between the two in different ways.
For early Christian writers like Augustine of Hippo, magic did not merely constitute fraudulent and unsanctioned ritual practices, but was the very opposite of religion because it relied upon cooperation from demons, the henchmen of Satan. In this, Christian ideas of magic were closely linked to the Christian category of paganism, and both magic and paganism were regarded as belonging under the broader category of superstitio (superstition), another term borrowed from pre-Christian Roman culture. This Christian emphasis on the inherent immorality and wrongness of magic as something conflicting with good religion was far starker than the approach in the other large monotheistic religions of the period, Judaism and Islam. For instance, while Christians regarded demons as inherently evil, the jinn—comparable entities in Islamic mythology—were perceived as more ambivalent figures by Muslims.
The model of the magician in Christian thought was provided by Simon Magus (Simon the Magician), a figure who opposed Saint Peter in both the Acts of the Apostles and the apocryphal yet influential Acts of Peter. The historian Michael D. Bailey stated that in medieval Europe, magic was a "relatively broad and encompassing category". Christian theologians believed that there were multiple different forms of magic, the majority of which were types of divination; for instance, Isidore of Seville produced a catalogue of things he regarded as magic in which he listed divination by the four elements, i.e. geomancy, hydromancy, aeromancy, and pyromancy, as well as by observation of natural phenomena, e.g. the flight of birds and astrology. He also mentioned enchantment and ligatures (the medical use of magical objects bound to the patient) as being magical. Medieval Europe also saw magic come to be associated with the Old Testament figure of Solomon; various grimoires, or books outlining magical practices, were written that claimed to have been written by Solomon, most notably the Key of Solomon.
In early medieval Europe, magia was a term of condemnation. In medieval Europe, Christians often suspected Muslims and Jews of engaging in magical practices; in certain cases, these perceived magical rites—including the alleged Jewish sacrifice of Christian children—resulted in Christians massacring these religious minorities. Christian groups often also accused other, rival Christian groups such as the Hussites—which they regarded as heretical—of engaging in magical activities. Medieval Europe also saw the term maleficium applied to forms of magic that were conducted with the intention of causing harm. The later Middle Ages saw words for these practitioners of harmful magical acts appear in various European languages: sorcière in French, Hexe in German, strega in Italian, and bruja in Spanish. The English term for malevolent practitioners of magic, witch, derived from the earlier Old English term wicce.
Ars Magica or magic was a major component of, and supporting contribution to, the belief in and practice of spiritual and, in many cases, physical healing throughout the Middle Ages. Many modern interpretations, however, carry misconceptions about magic, one of the largest revolving around wickedness or the existence of nefarious beings who practice it. These misinterpretations stem from numerous acts or rituals performed throughout antiquity; because such rituals seemed exotic from the commoner's perspective, they provoked uneasiness and an even stronger sense of dismissal.
In the Medieval Jewish view, the separation of the mystical and magical elements of Kabbalah, dividing it into speculative theological Kabbalah (Kabbalah Iyyunit) with its meditative traditions, and theurgic practical Kabbalah (Kabbalah Ma'asit), had occurred by the beginning of the 14th century.
The Christian Church, a societal force in the Middle Ages more powerful than any single commoner, rejected magic as a whole because it was viewed as a means of tampering with the natural world in a supernatural manner, an association drawn from the biblical verses of Deuteronomy 18:9–12. Despite the many negative connotations surrounding the term magic, many elements of it were seen in a divine or holy light.
In England, the divine right of kings was thought to confer a "sacred magic" power to heal thousands of their subjects of sickness.
Diversified instruments or rituals used in medieval magic include, but are not limited to: various amulets, talismans, potions, as well as specific chants, dances, and prayers. Along with these rituals are the adversely imbued notions of demonic participation which influence them. The idea that magic was devised, taught, and worked by demons would have seemed reasonable to anyone who read the Greek magical papyri or the Sefer-ha-Razim and found that healing magic appeared alongside rituals for killing people, gaining wealth, or personal advantage, and coercing women into sexual submission. Archaeology is contributing to a fuller understanding of ritual practices performed in the home, on the body and in monastic and church settings.
The Islamic reaction towards magic did not condemn magic in general, and distinguished between magic which can heal sickness and possession, and sorcery. The former is therefore a special gift from God, while the latter is achieved through the help of jinn and devils. Ibn al-Nadim held that exorcists gain their power by their obedience to God, while sorcerers please the devils by acts of disobedience and sacrifices, and the devils do them favors in return. According to Ibn Arabi, Al-Ḥajjāj ibn Yusuf al-Shubarbuli was able to walk on water due to his piety. According to Quran 2:102, magic was also taught to humans by devils and by the angels Harut and Marut.
The influence of Arab Islamic magic in medieval and Renaissance Europe was very notable. Some magic books such as Picatrix and Al Kindi's De Radiis were the basis for much of medieval magic in Europe and for subsequent developments in the Renaissance. Another Arab Muslim author fundamental to the developments of medieval and Renaissance European magic was Ahmad al-Buni, with his books such as the Shams al-Ma'arif which deal above all with the evocation and invocation of spirits or jinn to control them, obtain powers and make wishes come true. These books are still important to the Islamic world specifically in Simiyya, a doctrine found commonly within Sufi-occult traditions.
During the early modern period, the concept of magic underwent a more positive reassessment through the development of the concept of magia naturalis (natural magic). This was a term introduced and developed by two Italian humanists, Marsilio Ficino and Giovanni Pico della Mirandola. For them, magia was viewed as an elemental force pervading many natural processes, and thus was fundamentally distinct from the mainstream Christian idea of demonic magic. Their ideas influenced an array of later philosophers and writers, among them Paracelsus, Giordano Bruno, Johannes Reuchlin, and Johannes Trithemius. According to the historian Richard Kieckhefer, the concept of magia naturalis took "firm hold in European culture" during the fourteenth and fifteenth centuries, attracting the interest of natural philosophers of various theoretical orientations, including Aristotelians, Neoplatonists, and Hermeticists.
Adherents of this position argued that magia could appear in both good and bad forms; in 1625, the French librarian Gabriel Naudé wrote his Apology for all the Wise Men Falsely Suspected of Magic, in which he distinguished "Mosoaicall Magick"—which he claimed came from God and included prophecies, miracles, and speaking in tongues—from "geotick" magic caused by demons. While the proponents of magia naturalis insisted that this did not rely on the actions of demons, critics disagreed, arguing that the demons had simply deceived these magicians. By the seventeenth century the concept of magia naturalis had moved in increasingly 'naturalistic' directions, with the distinctions between it and science becoming blurred. The validity of magia naturalis as a concept for understanding the universe then came under increasing criticism during the Age of Enlightenment in the eighteenth century.
Despite the attempt to reclaim the term magia for use in a positive sense, it did not supplant traditional attitudes toward magic in the West, which remained largely negative. At the same time as magia naturalis was attracting interest and was largely tolerated, Europe saw an active persecution of accused witches believed to be guilty of maleficia. Reflecting the term's continued negative associations, Protestants often sought to denigrate Roman Catholic sacramental and devotional practices as being magical rather than religious. Many Roman Catholics were concerned by this allegation and for several centuries various Roman Catholic writers devoted attention to arguing that their practices were religious rather than magical. At the same time, Protestants often used the accusation of magic against other Protestant groups which they were in contest with. In this way, the concept of magic was used to prescribe what was appropriate as religious belief and practice.
Similar claims were also being made in the Islamic world during this period. The Arabian cleric Muhammad ibn Abd al-Wahhab—founder of Wahhabism—for instance condemned a range of customs and practices such as divination and the veneration of spirits as sihr, which he in turn claimed was a form of shirk, the sin of idolatry.
The Renaissance
Renaissance humanism saw a resurgence in hermeticism and Neo-Platonic varieties of ceremonial magic. The Renaissance, on the other hand, saw the rise of science, in such forms as the dethronement of the Ptolemaic theory of the universe, the distinction of astronomy from astrology, and of chemistry from alchemy.
There was great uncertainty in distinguishing practices of superstition, occultism, and perfectly sound scholarly knowledge or pious ritual. The intellectual and spiritual tensions erupted in the Early Modern witch craze, further reinforced by the turmoil of the Protestant Reformation, especially in Germany, England, and Scotland.
In Hasidism, the displacement of practical Kabbalah using directly magical means, by conceptual and meditative trends gained much further emphasis, while simultaneously instituting meditative theurgy for material blessings at the heart of its social mysticism. Hasidism internalised Kabbalah through the psychology of deveikut (cleaving to God), and cleaving to the Tzadik (Hasidic Rebbe). In Hasidic doctrine, the tzaddik channels Divine spiritual and physical bounty to his followers by altering the Will of God (uncovering a deeper concealed Will) through his own deveikut and self-nullification. Dov Ber of Mezeritch is concerned to distinguish this theory of the Tzadik's will altering and deciding the Divine Will, from directly magical process.
In the sixteenth century, European societies began to conquer and colonise other continents around the world, and as they did so they applied European concepts of magic and witchcraft to practices found among the peoples whom they encountered. Usually, these European colonialists regarded the natives as primitives and savages whose belief systems were diabolical and needed to be eradicated and replaced by Christianity. Because Europeans typically viewed these non-European peoples as being morally and intellectually inferior to themselves, it was expected that such societies would be more prone to practicing magic. Women who practiced traditional rites were labelled as witches by the Europeans.
In various cases, these imported European concepts and terms underwent new transformations as they merged with indigenous concepts. In West Africa, for instance, Portuguese travellers introduced their term and concept of the feitiçaria (often translated as sorcery) and the feitiço (spell) to the native population, where it was transformed into the concept of the fetish. When later Europeans encountered these West African societies, they wrongly believed that the fetiche was an indigenous African term rather than the result of earlier inter-continental encounters. Sometimes, colonised populations themselves adopted these European concepts for their own purposes. In the early nineteenth century, the newly independent Haitian government of Jean-Jacques Dessalines began to suppress the practice of Vodou, and in 1835 Haitian law-codes categorised all Vodou practices as sortilège (sorcery/witchcraft), suggesting that it was all conducted with harmful intent, whereas among Vodou practitioners the performance of harmful rites was already given a separate and distinct category, known as maji.
Baroque period
During the Baroque era, several intriguing figures engaged with occult and magical themes that went beyond conventional thinking. Michael Sendivogius (1566–1636), a Polish alchemist, emphasized empirical experimentation in alchemy and made notable contributions to early chemistry. Tommaso Campanella (1568–1639), an Italian philosopher, blended Christianity with mysticism in works like The City of the Sun, envisioning an ideal society governed by divine principles. Jakob Böhme (1575–1624), a German mystic, explored the relationship between the divine and human experience, influencing later mystical movements.
Jan Baptist van Helmont, a Flemish chemist, coined the term "gas" and conducted experiments on plant growth, expanding the understanding of chemistry. Sir Kenelm Digby, known for his diverse interests, created the "Sympathetic Powder", believed to have mystical healing properties. Isaac Newton, famous for his scientific achievements, also delved into alchemy and collected esoteric manuscripts, revealing his fascination with hidden knowledge. These individuals collectively embody the curiosity and exploration characteristic of the Baroque period.
Modernity
By the nineteenth century, European intellectuals no longer saw the practice of magic through the framework of sin and instead regarded magical practices and beliefs as "an aberrational mode of thought antithetical to the dominant cultural logic—a sign of psychological impairment and marker of racial or cultural inferiority".
As educated elites in Western societies increasingly rejected the efficacy of magical practices, legal systems ceased to threaten practitioners of magical activities with punishment for the crimes of diabolism and witchcraft, and instead threatened them with the accusation that they were defrauding people by promising to provide things which they could not.
This spread of European colonial power across the world influenced how academics would come to frame the concept of magic. In the nineteenth century, several scholars adopted the traditional, negative concept of magic. That they chose to do so was not inevitable, for they could have followed the example of prominent esotericists active at the time, such as Helena Blavatsky, who had chosen to use the term and concept of magic in a positive sense.
Various writers also used the concept of magic to criticise religion by arguing that the latter still displayed many of the negative traits of the former. An example of this was the American journalist H. L. Mencken in his polemical 1930 work Treatise on the Gods; he sought to critique religion by comparing it to magic, arguing that the division between the two was misplaced. The concept of magic was also adopted by theorists in the new field of psychology, where it was often used synonymously with superstition, although the latter term proved more common in early psychological texts.
In the late nineteenth and twentieth centuries, folklorists examined rural communities across Europe in search of magical practices, which at the time they typically understood as survivals of ancient belief systems. It was only in the 1960s that anthropologists like Jeanne Favret-Saada also began looking in depth at magic in European contexts, having previously focused on examining magic in non-Western contexts. In the twentieth century, magic also proved a topic of interest to the Surrealists, an artistic movement based largely in Europe; the Surrealist André Breton, for instance, published L'Art magique in 1957, discussing what he regarded as the links between magic and art.
The scholarly application of magic as a sui generis category that can be applied to any socio-cultural context was linked with the promotion of modernity to both Western and non-Western audiences.
The term magic has become pervasive in the popular imagination and idiom.
In contemporary contexts, the word magic is sometimes used to "describe a type of excitement, of wonder, or sudden delight", and in such a context can be "a term of high praise". Despite its historical contrast against science, scientists have also adopted the term in application to various concepts, such as magic acid, magic bullets, and magic angles.
Modern Western magic has challenged widely-held preconceptions about contemporary religion and spirituality.
The polemical discourses about magic influenced the self-understanding of modern magicians, several of whom—such as Aleister Crowley—were well versed in academic literature on the subject.
According to scholar of religion Henrik Bogdan, "arguably the best known emic definition" of the term magic was provided by Crowley. Crowley—who favoured the spelling 'magick' over magic to distinguish it from stage illusionism—was of the view that "Magick is the Science and Art of causing Change to occur in conformity with Will". Crowley's definition influenced that of subsequent magicians. Dion Fortune, the founder of the Fraternity of the Inner Light, for instance stated that "Magic is the art of changing consciousness according to Will". Gerald Gardner, the founder of Gardnerian Wicca, stated that magic was "attempting to cause the physically unusual", while Anton LaVey, the founder of LaVeyan Satanism, described magic as "the change in situations or events in accordance with one's will, which would, using normally acceptable methods, be unchangeable".
The chaos magic movement emerged during the late 20th century, as an attempt to strip away the symbolic, ritualistic, theological or otherwise ornamental aspects of other occult traditions and distill magic down to a set of basic techniques.
These modern Western concepts of magic rely on a belief in correspondences connected to an unknown occult force that permeates the universe. As noted by Hanegraaff, this operated according to "a new meaning of magic, which could not possibly have existed in earlier periods, precisely because it is elaborated in reaction to the 'disenchantment of the world'".
For many, and perhaps most, modern Western magicians, the goal of magic is deemed to be personal spiritual development. The perception of magic as a form of self-development is central to the way that magical practices have been adopted into forms of modern Paganism and the New Age phenomenon. One significant development within modern Western magical practices has been sex magic. This was a practice promoted in the writings of Paschal Beverly Randolph and subsequently exerted a strong influence on occultist magicians like Crowley and Theodor Reuss.
The adoption of the term magic by modern occultists can in some instances be a deliberate attempt to champion those areas of Western society which have traditionally been marginalised as a means of subverting dominant systems of power. The influential American Wiccan and author Starhawk for instance stated that "Magic is another word that makes people uneasy, so I use it deliberately, because the words we are comfortable with, the words that sound acceptable, rational, scientific, and intellectually correct, are comfortable precisely because they are the language of estrangement." In the present day, "among some countercultural subgroups the label is considered 'cool'".
Conceptual development
According to anthropologist Edward Evan Evans-Pritchard, magic formed a rational framework of beliefs and knowledge in some cultures, like the Azande people of Africa. The historian Owen Davies stated that the word magic was "beyond simple definition", and had "a range of meanings". Similarly, the historian Michael D. Bailey characterised magic as "a deeply contested category and a very fraught label"; as a category, he noted, it was "profoundly unstable" given that definitions of the term have "varied dramatically across time and between cultures". Scholars have engaged in extensive debates as to how to define magic, with such debates resulting in intense dispute. Throughout such debates, the scholarly community has failed to agree on a definition of magic, in a similar manner to how they have failed to agree on a definition of religion. According to the scholar of religion Michael Stausberg, the phenomenon of people applying the concept of magic to refer to themselves and their own practices and beliefs goes as far back as late antiquity. However, even among those throughout history who have described themselves as magicians, there has been no common ground of what magic is.
In Africa, the word magic may simply denote the management of forces, an activity that carries no moral weight and is therefore neutral at the outset of a magical practice; by the will of the magician, it is thought to acquire an outcome that represents either good or bad (evil). Traditional African cultures customarily distinguished magic from a group of related practices that were not magic, namely medicine, divination, witchcraft, and sorcery. Opinion differs on how religion and magic developed in relation to one another: some think they developed together from a shared origin, some think religion developed from magic, and some, magic from religion.
Anthropological and sociological theories of magic generally serve to sharply demarcate certain practices from other, otherwise similar practices in a given society. According to Bailey: "In many cultures and across various historical periods, categories of magic often define and maintain the limits of socially and culturally acceptable actions in respect to numinous or occult entities or forces. Even more basically, they serve to delineate arenas of appropriate belief." In this, he noted that "drawing these distinctions is an exercise in power". This tendency has had repercussions for the study of magic, with academics self-censoring their research because of the effects on their careers.
Randall Styers noted that attempting to define magic represents "an act of demarcation" by which it is juxtaposed against "other social practices and modes of knowledge" such as religion and science. The historian Karen Louise Jolly described magic as "a category of exclusion, used to define an unacceptable way of thinking as either the opposite of religion or of science".
Modern scholarship has produced various definitions and theories of magic. According to Bailey, "these have typically framed magic in relation to, or more frequently in distinction from, religion and science." Since the emergence of the study of religion and the social sciences, magic has been a "central theme in the theoretical literature" produced by scholars operating in these academic disciplines. Magic is one of the most heavily theorized concepts in the study of religion, and also played a key role in early theorising within anthropology. Styers believed that it held such a strong appeal for social theorists because it provides "such a rich site for articulating and contesting the nature and boundaries of modernity". Scholars have commonly used it as a foil for the concept of religion, regarding magic as the "illegitimate (and effeminized) sibling" of religion. Alternately, others have used it as a middle-ground category located between religion and science.
The context in which scholars framed their discussions of magic was informed by the spread of European colonial power across the world in the modern period.
These repeated attempts to define magic resonated with broader social concerns, and the pliability of the concept has allowed it to be "readily adaptable as a polemical and ideological tool". The links that intellectuals made between magic and those they characterized as primitives helped to legitimise European and Euro-American imperialism and colonialism, as these Western colonialists expressed the view that those who believed in and practiced magic were unfit to govern themselves and should be governed by those who, rather than believing in magic, believed in science and/or (Christian) religion. In Bailey's words, "the association of certain peoples [whether non-Europeans or poor, rural Europeans] with magic served to distance and differentiate them from those who ruled over them, and in large part to justify that rule."
Many different definitions of magic have been offered by scholars, although—according to Hanegraaff—these can be understood as variations of a small number of heavily influential theories.
Intellectualist approach
The intellectualist approach to defining magic is associated with two British anthropologists, Edward Tylor and James G. Frazer. This approach viewed magic as the theoretical opposite of science, and came to preoccupy much anthropological thought on the subject. It was situated within the evolutionary models which underpinned thinking in the social sciences during the nineteenth century. The first social scientist to present magic as something that predated religion in an evolutionary development was Herbert Spencer; in his A System of Synthetic Philosophy, he used the term magic in reference to sympathetic magic. Spencer regarded both magic and religion as being rooted in false speculation about the nature of objects and their relationship to other things.
Tylor's understanding of magic was linked to his concept of animism. In his 1871 book Primitive Culture, Tylor characterized magic as beliefs based on "the error of mistaking ideal analogy for real analogy". In Tylor's view, "primitive man, having come to associate in thought those things which he found by experience to be connected in fact, proceeded erroneously to invert this action, and to conclude that association in thought must involve similar connection in reality. He thus attempted to discover, to foretell, and to cause events by means of processes which we can now see to have only an ideal significance". Tylor was dismissive of magic, describing it as "one of the most pernicious delusions that ever vexed mankind". Tylor's views proved highly influential, and helped to establish magic as a major topic of anthropological research.
Tylor's ideas were adopted and simplified by James Frazer. He used the term magic to mean sympathetic magic, describing it as a practice relying on the magician's belief "that things act on each other at a distance through a secret sympathy", something which he described as "an invisible ether". He further divided this magic into two forms, the "homeopathic (imitative, mimetic)" and the "contagious". The former was the idea that "like produces like", or that the similarity between two objects could result in one influencing the other. The latter was based on the idea that contact between two objects allowed the two to continue to influence one another at a distance. Like Tylor, Frazer viewed magic negatively, describing it as "the bastard sister of science", arising from "one great disastrous fallacy".
Where Frazer differed from Tylor was in characterizing a belief in magic as a major stage in humanity's cultural development, describing it as part of a tripartite division in which magic came first, religion came second, and eventually science came third. For Frazer, all early societies started as believers in magic, with some of them moving away from this and into religion. He believed that both magic and religion involved a belief in spirits but that they differed in the way that they responded to these spirits. For Frazer, magic "constrains or coerces" these spirits while religion focuses on "conciliating or propitiating them". He acknowledged that their common ground resulted in a cross-over of magical and religious elements in various instances; for instance he claimed that the sacred marriage was a fertility ritual which combined elements from both world-views.
Some scholars retained the evolutionary framework used by Frazer but changed the order of its stages; the German ethnologist Wilhelm Schmidt argued that religion—by which he meant monotheism—was the first stage of human belief, which later degenerated into both magic and polytheism. Others rejected the evolutionary framework entirely. Frazer's notion that magic had given way to religion as part of an evolutionary framework was later deconstructed by the folklorist and anthropologist Andrew Lang in his essay "Magic and Religion"; Lang did so by highlighting how Frazer's framework relied upon misrepresenting ethnographic accounts of beliefs and practices among indigenous Australians to fit his concept of magic.
Functionalist approach
The functionalist approach to defining magic is associated with the French sociologists Marcel Mauss and Emile Durkheim.
In this approach, magic is understood as being the theoretical opposite of religion.
Mauss set forth his conception of magic in a 1902 essay, "A General Theory of Magic". Mauss used the term magic in reference to "any rite that is not part of an organized cult: a rite that is private, secret, mysterious, and ultimately tending towards one that is forbidden". Conversely, he associated religion with organised cult. By saying that magic was inherently non-social, Mauss had been influenced by the traditional Christian understandings of the concept. Mauss deliberately rejected the intellectualist approach promoted by Frazer, believing that it was inappropriate to restrict the term magic to sympathetic magic, as Frazer had done. He expressed the view that "there are not only magical rites which are not sympathetic, but neither is sympathy a prerogative of magic, since there are sympathetic practices in religion".
Mauss' ideas were adopted by Durkheim in his 1912 book The Elementary Forms of the Religious Life. Durkheim was of the view that both magic and religion pertained to "sacred things, that is to say, things set apart and forbidden". Where he saw them as being different was in their social organisation. Durkheim used the term magic to describe things that were inherently anti-social, existing in contrast to what he referred to as a Church, the religious beliefs shared by a social group; in his words, "There is no Church of magic." Durkheim expressed the view that "there is something inherently anti-religious about the maneuvers of the magician", and that a belief in magic "does not result in binding together those who adhere to it, nor in uniting them into a group leading a common life." Durkheim's definition encounters problems in situations—such as the rites performed by Wiccans—in which acts carried out communally have been regarded, either by practitioners or observers, as being magical.
Scholars have criticized the idea that magic and religion can be differentiated into two distinct, separate categories. The social anthropologist Alfred Radcliffe-Brown suggested that "a simple dichotomy between magic and religion" was unhelpful and thus both should be subsumed under the broader category of ritual. Many later anthropologists followed his example.
Nevertheless, this distinction is still often made by scholars discussing this topic.
Emotionalist approach
The emotionalist approach to magic is associated with the English anthropologist Robert Ranulph Marett, the Austrian Sigmund Freud, and the Polish anthropologist Bronisław Malinowski.
Marett viewed magic as a response to stress. In a 1904 article, he argued that magic was a cathartic or stimulating practice designed to relieve feelings of tension. As his thought developed, he increasingly rejected the idea of a division between magic and religion and began to use the term "magico-religious" to describe the early development of both. Malinowski understood magic similarly to Marett, tackling the issue in a 1925 article. He rejected Frazer's evolutionary hypothesis that magic was followed by religion and then science as a series of distinct stages in societal development, arguing that all three were present in each society. In his view, both magic and religion "arise and function in situations of emotional stress" although whereas religion is primarily expressive, magic is primarily practical. He therefore defined magic as "a practical art consisting of acts which are only means to a definite end expected to follow later on". For Malinowski, magical acts were to be carried out for a specific end, whereas religious ones were ends in themselves. He for instance believed that fertility rituals were magical because they were carried out with the intention of meeting a specific need. As part of his functionalist approach, Malinowski saw magic not as irrational but as something that served a useful function, being sensible within the given social and environmental context.
The term magic was used liberally by Freud. He also saw magic as emerging from human emotion but interpreted it very differently to Marett.
Freud explains that "the associated theory of magic merely explains the paths along which magic proceeds; it does not explain its true essence, namely the misunderstanding which leads it to replace the laws of nature by psychological ones". Freud emphasizes that what led primitive men to come up with magic is the power of wishes: "His wishes are accompanied by a motor impulse, the will, which is later destined to alter the whole face of the earth to satisfy his wishes. This motor impulse is at first employed to give a representation of the satisfying situation in such a way that it becomes possible to experience the satisfaction by means of what might be described as motor hallucinations. This kind of representation of a satisfied wish is quite comparable to children's play, which succeeds their earlier purely sensory technique of satisfaction. [...] As time goes on, the psychological accent shifts from the motives for the magical act on to the measures by which it is carried out—that is, on to the act itself. [...] It thus comes to appear as though it is the magical act itself which, owing to its similarity with the desired result, alone determines the occurrence of that result."
In the early 1960s, the anthropologists Murray and Rosalie Wax put forward the argument that scholars should look at the magical worldview of a given society on its own terms rather than trying to rationalize it in terms of Western ideas about scientific knowledge. Their ideas were heavily criticised by other anthropologists, who argued that they had set up a false dichotomy between non-magical Western worldviews and magical non-Western worldviews. The concept of the magical worldview nevertheless gained widespread use in history, folkloristics, philosophy, cultural theory, and psychology. The notion of magical thinking has also been utilised by various psychologists. In the 1920s, the psychologist Jean Piaget used the concept as part of his argument that children were unable to clearly differentiate between the mental and the physical. According to this perspective, children begin to abandon their magical thinking between the ages of six and nine.
According to Stanley Tambiah, magic, science, and religion all have their own "quality of rationality", and have been influenced by politics and ideology. In magic, as opposed to religion, Tambiah suggests, mankind has a much more personal control over events. Science, according to Tambiah, is "a system of behavior by which man acquires mastery of the environment."
Ethnocentrism
The magic-religion-science triangle developed in European society based on evolutionary ideas, i.e. that magic evolved into religion, which in turn evolved into science. However, using a Western analytical tool when discussing non-Western cultures, or pre-modern forms of Western society, raises problems, as it may impose alien Western categories on them. While magic remains an emic (insider) term in the history of Western societies, it remains an etic (outsider) term when applied to non-Western societies, and even to particular groups within Western societies. For this reason, academics like Michael D. Bailey suggest abandoning the term altogether as an academic category. During the twentieth century, many scholars focusing on Asian and African societies rejected the term magic, as well as related concepts like witchcraft, in favour of the more precise terms and concepts that existed within these specific societies, like Juju. A similar approach has been taken by many scholars studying pre-modern societies in Europe, such as Classical antiquity, who find the modern concept of magic inappropriate and favour more specific terms originating within the framework of the ancient cultures which they are studying. Alternately, the term implies that all categories of magic are ethnocentric and that such Western preconceptions are an unavoidable component of scholarly research. This century has seen a trend towards emic ethnographic studies by scholar-practitioners that explicitly explore the emic/etic divide.
Many scholars have argued that the use of the term as an analytical tool within academic scholarship should be rejected altogether. The scholar of religion Jonathan Z. Smith for example argued that it had no utility as an etic term that scholars should use. The historian of religion Wouter Hanegraaff agreed, on the grounds that its use is founded in conceptions of Western superiority and has "...served as a 'scientific' justification for converting non-European peoples from benighted superstitions..." stating that "the term magic is an important object of historical research, but not intended for doing research."
Bailey noted that, as of the early 21st century, few scholars sought grand definitions of magic but instead focused with "careful attention to particular contexts", examining what a term like magic meant to a given society; this approach, he noted, "call[ed] into question the legitimacy of magic as a universal category". The scholars of religion Berndt-Christian Otto and Michael Stausberg suggested that it would be perfectly possible for scholars to talk about amulets, curses, healing procedures, and other cultural practices often regarded as magical in Western culture without any recourse to the concept of magic itself. The idea that magic should be rejected as an analytic term developed in anthropology, before moving into Classical studies and Biblical studies in the 1980s. Since the 1990s, the term's usage among scholars of religion has declined.
Magicians
Many of the practices which have been labelled magic can be performed by anyone. For instance, some charms can be recited by individuals with no specialist knowledge nor any claim to having a specific power. Others require specialised training in order to perform them. Some of the individuals who performed magical acts on a more than occasional basis came to be identified as magicians, or with related concepts like sorcerers/sorceresses, witches, or cunning folk. Identities as a magician can stem from an individual's own claims about themselves, or it can be a label placed upon them by others. In the latter case, an individual could embrace such a label, or they could reject it, sometimes vehemently.
Economic incentives can encourage individuals to identify as magicians. In the cases of various forms of traditional healers, as well as the later stage magicians or illusionists, the label of magician could become a job description. Others claim such an identity out of a genuinely held belief that they have specific unusual powers or talents. Different societies have different social regulations regarding who can take on such a role; for instance, it may be a question of familial heredity, or there may be gender restrictions on who is allowed to engage in such practices. A variety of personal traits may be credited with giving magical power, and frequently they are associated with an unusual birth into the world. For instance, in Hungary it was believed that a táltos would be born with teeth or an additional finger. In various parts of Europe, it was believed that being born with a caul would associate the child with supernatural abilities. In some cases, a ritual initiation is required before taking on a role as a specialist in such practices, and in others it is expected that an individual will receive a mentorship from another specialist.
Davies noted that it was possible to "crudely divide magic specialists into religious and lay categories". He noted for instance that Roman Catholic priests, with their rites of exorcism, and access to holy water and blessed herbs, could be conceived as being magical practitioners. Traditionally, the most common method of identifying magical practitioners and distinguishing them from common people is initiation. By means of rites the magician's relationship to the supernatural and his entry into a closed professional class is established (often through rituals that simulate death and rebirth into a new life). However, Berger and Ezzy explain that since the rise of Neopaganism, "As there is no central bureaucracy or dogma to determine authenticity, an individual's self-determination as a Witch, Wiccan, Pagan or Neopagan is usually taken at face value". Ezzy argues that practitioners' worldviews have been neglected in many sociological and anthropological studies and that this is because of "a culturally narrow understanding of science that devalues magical beliefs".
Mauss argues that the powers of both specialist and common magicians are determined by culturally accepted standards of the sources and the breadth of magic: a magician cannot simply invent or claim new magic. In practice, the magician is only as powerful as his peers believe him to be.
Throughout recorded history, magicians have often faced skepticism regarding their purported powers and abilities. For instance, in sixteenth-century England, the writer Reginald Scot wrote The Discoverie of Witchcraft, in which he argued that many of those accused of witchcraft or otherwise claiming magical capabilities were fooling people using illusionism.
See also
Books about magic
Superstitions | Magic (supernatural) | ["Biology"] | 13,024 | ["Behavior", "Religious practices", "Human behavior"] |
48,510 | https://en.wikipedia.org/wiki/Terrestrial%20planet | A terrestrial planet, tellurian planet, telluric planet, or rocky planet, is a planet that is composed primarily of silicate rocks or metals. Within the Solar System, the terrestrial planets accepted by the IAU are the inner planets closest to the Sun: Mercury, Venus, Earth and Mars. Among astronomers who use the geophysical definition of a planet, two or three planetary-mass satellites – Earth's Moon, Io, and sometimes Europa – may also be considered terrestrial planets. The large rocky asteroids Pallas and Vesta are sometimes included as well, albeit rarely. The terms "terrestrial planet" and "telluric planet" are derived from Latin words for Earth (Terra and Tellus), as these planets are, in terms of structure, Earth-like. Terrestrial planets are generally studied by geologists, astronomers, and geophysicists.
Terrestrial planets have a solid planetary surface, making them substantially different from larger gaseous planets, which are composed mostly of some combination of hydrogen, helium, and water existing in various physical states.
Structure
All terrestrial planets in the Solar System have the same basic structure: a central metallic core (mostly iron) with a surrounding silicate mantle.
The large rocky asteroid 4 Vesta has a similar structure, as possibly does the smaller 21 Lutetia. Another rocky asteroid, 2 Pallas, is about the same size as Vesta, but is significantly less dense; it appears never to have differentiated into a core and a mantle. The Earth's Moon and Jupiter's moon Io have similar structures to terrestrial planets, but the Earth's Moon has a much smaller iron core. Another Jovian moon, Europa, has a similar density but has a significant ice layer on the surface; for this reason, it is sometimes considered an icy planet instead.
Terrestrial planets can have surface structures such as canyons, craters, mountains, volcanoes, and others, depending on the presence at any time of an erosive liquid or tectonic activity or both.
Terrestrial planets have secondary atmospheres, generated by volcanic out-gassing or from comet impact debris. This contrasts with the outer, giant planets, whose atmospheres are primary; primary atmospheres were captured directly from the original solar nebula.
Terrestrial planets within the Solar System
The Solar System has four terrestrial planets under the dynamical definition: Mercury, Venus, Earth and Mars. The Earth's Moon as well as Jupiter's moons Io and Europa would also count geophysically, as well as perhaps the large protoplanet-asteroids Pallas and Vesta (though those are borderline cases). Among these bodies, only the Earth has an active surface hydrosphere. Europa is believed to have an active hydrosphere under its ice layer.
During the formation of the Solar System, there were many terrestrial planetesimals and proto-planets, but most merged with or were ejected by the four terrestrial planets, leaving only Pallas and Vesta to survive more or less intact. These two were likely both dwarf planets in the past, but have been battered out of equilibrium shapes by impacts. Some other protoplanets began to accrete and differentiate but suffered catastrophic collisions that left only a metallic or rocky core, like 16 Psyche or 8 Flora respectively. Many S-type and M-type asteroids may be such fragments.
The other round bodies from the asteroid belt outward are geophysically icy planets. They are similar to terrestrial planets in that they have a solid surface, but are composed of ice and rock rather than of rock and metal. These include the dwarf planets, such as Ceres, Pluto and Eris, which are found today only in the regions beyond the formation snow line where water ice was stable under direct sunlight in the early Solar System. It also includes the other round moons, which are ice-rock (e.g. Ganymede, Callisto, Titan, and Triton) or even almost pure (at least 99%) ice (Tethys and Iapetus). Some of these bodies are known to have subsurface hydrospheres (Ganymede, Callisto, Enceladus, and Titan), like Europa, and it is also possible for some others (e.g. Ceres, Mimas, Dione, Miranda, Ariel, Triton, and Pluto). Titan even has surface bodies of liquid, albeit liquid methane rather than water. Jupiter's Ganymede, though icy, does have a metallic core like the Moon, Io, Europa, and the terrestrial planets.
The name Terran world has been suggested to define all solid worlds (bodies assuming a rounded shape), without regard to their composition. It would thus include both terrestrial and icy planets.
Density trends
The uncompressed density of a terrestrial planet is the average density its materials would have at zero pressure. A greater uncompressed density indicates a greater metal content. Uncompressed density differs from the true average density (also often called "bulk" density) because compression within planet cores increases their density; the average density depends on planet size, temperature distribution, and material stiffness as well as composition.
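For reference, the bulk density mentioned here follows directly from mass and volume. The minimal sketch below, using standard reference values for Earth (the function name is illustrative, not from any source), shows why Earth's bulk density exceeds its commonly quoted uncompressed value:

```python
import math

def bulk_density(mass_kg, radius_m):
    """Mean ("bulk") density: mass divided by the volume of a sphere."""
    volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3
    return mass_kg / volume_m3  # kg/m^3

# Standard reference values for Earth: M ~ 5.972e24 kg, R ~ 6.371e6 m.
rho = bulk_density(5.972e24, 6.371e6)
print(f"Earth bulk density: {rho / 1000:.2f} g/cm^3")  # ~5.51 g/cm^3
# Earth's uncompressed density is usually quoted at roughly 4.0-4.4 g/cm^3,
# lower than the bulk figure because the effect of core compression is removed.
```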
Calculations to estimate uncompressed density inherently require a model of the planet's structure. Where there have been landers or multiple orbiting spacecraft, these models are constrained by seismological data and also moment of inertia data derived from the spacecraft's orbits. Where such data is not available, uncertainties are inevitably higher.
The uncompressed densities of the rounded terrestrial bodies directly orbiting the Sun trend towards lower values as the distance from the Sun increases, consistent with the temperature gradient that would have existed within the primordial solar nebula. The Galilean satellites show a similar trend going outwards from Jupiter; however, no such trend is observable for the icy satellites of Saturn or Uranus. The icy worlds typically have densities less than 2 g/cm3. Eris is significantly denser, and may be mostly rocky with some surface ice, like Europa. It is unknown whether extrasolar terrestrial planets in general will follow such a trend.
The data in the tables below are mostly taken from a list of gravitationally rounded objects of the Solar System and of planetary-mass moons. All distances from the Sun are averages.
Extrasolar terrestrial planets
Most of the planets discovered outside the Solar System are giant planets, because they are more easily detectable. But since 2005, hundreds of potentially terrestrial extrasolar planets have also been found, with several being confirmed as terrestrial. Most of these are super-Earths, i.e. planets with masses between Earth's and Neptune's; super-Earths may be gas planets or terrestrial, depending on their mass and other parameters.
During the early 1990s, the first extrasolar planets were discovered orbiting the pulsar PSR B1257+12, with masses of 0.02, 4.3, and 3.9 times that of Earth, by pulsar timing.
When 51 Pegasi b, the first planet found around a star still undergoing fusion, was discovered, many astronomers assumed it to be a gigantic terrestrial, because it was assumed no gas giant could exist as close to its star (0.052 AU) as 51 Pegasi b did. It was later found to be a gas giant.
In 2005, the first planets orbiting a main-sequence star and which showed signs of being terrestrial planets were found: Gliese 876 d and OGLE-2005-BLG-390Lb. Gliese 876 d orbits the red dwarf Gliese 876, 15 light years from Earth, and has a mass seven to nine times that of Earth and an orbital period of just two Earth days. OGLE-2005-BLG-390Lb has about 5.5 times the mass of Earth and orbits a star about 21,000 light-years away in the constellation Scorpius.
From 2007 to 2010, three (possibly four) potential terrestrial planets were found orbiting within the Gliese 581 planetary system. The smallest, Gliese 581e, is only about 1.9 Earth masses, but orbits very close to the star. Two others, Gliese 581c and the disputed Gliese 581d, are more-massive super-Earths orbiting in or close to the habitable zone of the star, so they could potentially be habitable, with Earth-like temperatures.
Another possibly terrestrial planet, HD 85512 b, was discovered in 2011; it has at least 3.6 times the mass of Earth.
The radius and composition of all these planets are unknown.
The first confirmed terrestrial exoplanet, Kepler-10b, was found in 2011 by the Kepler space telescope, specifically designed to discover Earth-size planets around other stars using the transit method.
In the same year, the Kepler space telescope mission team released a list of 1235 extrasolar planet candidates, including six that are "Earth-size" or "super-Earth-size" (i.e. they have a radius less than twice that of the Earth) and in the habitable zone of their star.
Since then, Kepler has discovered hundreds of planets ranging from Moon-sized to super-Earths, with many more candidates in this size range.
In 2016, statistical modeling of the relationship between a planet's mass and radius using a broken power law appeared to suggest that the transition point between rocky, terrestrial worlds and mini-Neptunes without a defined surface was in fact very close to the masses of Earth and Venus, suggesting that rocky worlds much larger than our own are in fact quite rare. This resulted in some advocating for the retirement of the term "super-Earth" as being scientifically misleading. Since 2016 the catalog of known exoplanets has increased significantly, and there have been several published refinements of the mass-radius model. As of 2024, the expected transition point between rocky and intermediate-mass planets sits at roughly 4.4 Earth masses and roughly 1.6 Earth radii.
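To make the broken power law concrete, here is a minimal sketch of such a mass-radius relation, pivoting at the roughly 4.4-Earth-mass, 1.6-Earth-radius transition quoted above; the exponents and the function name are illustrative placeholders, not the published fits:

```python
def radius_from_mass(m_earth, m_break=4.4, r_break=1.6,
                     exp_rocky=0.27, exp_volatile=0.6):
    """Toy broken power law R(M) in Earth radii, continuous at the break.

    Planets below m_break follow a shallower "rocky" slope; those above
    it follow a steeper "volatile-rich" slope. Both exponents are
    placeholders chosen for illustration only.
    """
    exponent = exp_rocky if m_earth <= m_break else exp_volatile
    return r_break * (m_earth / m_break) ** exponent

# Sample the relation on both sides of the transition point.
for m in (1.0, 4.4, 10.0):
    print(f"{m:5.1f} Earth masses -> {radius_from_mass(m):.2f} Earth radii")
```

With these placeholder exponents the rocky branch returns about 1.07 Earth radii at one Earth mass, which is at least consistent with worlds near Earth's mass sitting on the rocky side of the break.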
In September 2020, astronomers using microlensing techniques reported the detection, for the first time, of an Earth-mass rogue planet (named OGLE-2016-BLG-1928) unbounded by any star, and free-floating in the Milky Way galaxy.
List of terrestrial exoplanets
The following exoplanets have a density of at least 5 g/cm3 and a mass below Neptune's and are thus very likely terrestrial:
Kepler-10b, Kepler-20b, Kepler-36b, Kepler-48d, Kepler-68c, Kepler-78b, Kepler-89b, Kepler-93b, Kepler-97b, Kepler-99b, Kepler-100b, Kepler-101c, Kepler-102b, Kepler-102d, Kepler-113b, Kepler-131b, Kepler-131c, Kepler-138c, Kepler-406b, Kepler-406c, Kepler-409b.
Frequency
In 2013, astronomers reported, based on Kepler space mission data, that there could be as many as 40 billion Earth- and super-Earth-sized planets orbiting in the habitable zones of Sun-like stars and red dwarfs within the Milky Way. Eleven billion of these estimated planets may be orbiting Sun-like stars. The nearest such planet may be 12 light-years away, according to the scientists. However, this does not give estimates for the number of extrasolar terrestrial planets, because there are planets as small as Earth that have been shown to be gas planets (see Kepler-138d).
Estimates show that about 80% of potentially habitable worlds are covered by land, and about 20% are ocean planets. Planets with ratios more like those of Earth, which is 30% land and 70% ocean, make up only about 1% of these worlds.
Types
Several possible classifications for solid planets have been proposed.
Silicate planet
A solid planet like Venus, Earth, or Mars, made primarily of a silicon-based rocky mantle with a metallic (iron) core.
Carbon planet (also called "diamond planet")
A theoretical class of planets, composed of a metal core surrounded by primarily carbon-based minerals. They may be considered a type of terrestrial planet if the metal content dominates. The Solar System contains no carbon planets but does have carbonaceous asteroids, such as Ceres and Hygiea. It is unknown if Ceres has a rocky or metallic core.
Iron planet
A theoretical type of solid planet that consists almost entirely of iron and therefore has a greater density and a smaller radius than other solid planets of comparable mass. Mercury in the Solar System has a metallic core equal to 60–70% of its planetary mass, and is sometimes called an iron planet, though its surface is made of silicates and is iron-poor. Iron planets are thought to form in the high-temperature regions close to a star, like Mercury, and if the protoplanetary disk is rich in iron.
Icy planet
A type of solid planet with an icy surface of volatiles. In the Solar System, most planetary-mass moons (such as Titan, Triton, and Enceladus) and many dwarf planets (such as Pluto and Eris) have such a composition. Europa is sometimes considered an icy planet due to its surface ice, but its higher density indicates that its interior is mostly rocky. Such planets can have internal saltwater oceans and cryovolcanoes erupting liquid water (i.e. an internal hydrosphere, like Europa or Enceladus); they can have an atmosphere and hydrosphere made from methane or nitrogen (like Titan). A metallic core is possible, as exists on Ganymede.
Coreless planet
A theoretical type of solid planet that consists of silicate rock but has no metallic core, i.e. the opposite of an iron planet. Although the Solar System contains no coreless planets, chondrite asteroids and meteorites are common in the Solar System. Ceres and Pallas have mineral compositions similar to carbonaceous chondrites, though Pallas is significantly less hydrated. Coreless planets are thought to form farther from the star where volatile oxidizing material is more common.
See also
Chthonian planet
Earth analog
List of potentially habitable exoplanets
Planetary habitability
Venus zone
List of gravitationally rounded objects of the Solar System
Types of planet
Solar System | Terrestrial planet | ["Astronomy"] | 2,952 | ["Outer space", "Solar System"] |
48,517 | https://en.wikipedia.org/wiki/Antelope | The term antelope refers to numerous extant or recently extinct species of the ruminant artiodactyl family Bovidae that are indigenous to most of Africa, India, the Middle East, Central Asia, and a small area of Eastern Europe. Antelopes do not form a monophyletic group, as some antelopes are more closely related to other bovid groups, like bovines, goats, and sheep, than to other antelopes.
A better definition, also known as the "true antelopes", includes only the genera Gazella, Nanger, Eudorcas, and Antilope. One North American mammal, the pronghorn or "pronghorn antelope", is colloquially referred to as the "American antelope", despite the fact that it belongs to a completely different family (Antilocapridae) than the true Old-World antelopes; pronghorn are the sole extant member of an extinct prehistoric lineage that once included many unique species.
Although antelope are sometimes referred to, and easily misidentified as, "deer" (cervids), true deer are only distant relatives of antelopes. While antelope are found in abundance in Africa, only one deer species is found on the continent—the Barbary red deer of Northern Africa. By comparison, numerous deer species are usually found in regions of the world with fewer or no antelope species present, such as throughout Southeast Asia, Europe and all of the Americas. This is likely due to competition over shared resources, as deer and antelope fill a virtually identical ecological niche in their respective habitats. Countries like India, however, have large populations of endemic deer and antelope, with the different species generally keeping to their own "niches" with minimal overlap.
Unlike deer, in which the males sport elaborate head antlers that are shed and regrown annually, antelope horns are bone and grow steadily, never falling off. If a horn is broken, it will either remain broken or take years to partially regenerate, depending on the species of the antelope.
Etymology
The English word "antelope" first appeared in 1417 and is derived from the Old French antelop, itself derived from Medieval Latin ant(h)alopus, which in turn comes from the Byzantine Greek word ἀνθόλοψ, anthólops, first attested in Eustathius of Antioch, according to whom it was a fabulous animal "haunting the banks of the Euphrates, very savage, hard to catch and having long, saw-like horns capable of cutting down trees". It perhaps derives from Greek ἀνθος, anthos (flower) and ώψ, ops (eye), perhaps meaning "beautiful eye" or alluding to the animals' long eyelashes. This, however, may be a folk etymology in Greek based on some earlier root. The words talopus and calopus, from Latin, came to be used in heraldry. In 1607, it was first used for living, cervine animals.
Species
There are 91 antelope species, most of them native to Africa, occurring in about 30 genera. The classification of tribes or subfamilies within Bovidae is still a matter of debate, with several alternative systems proposed.
Antelope are not a cladistic or taxonomically defined group. The term is used to describe all members of the family Bovidae that do not fall under the category of sheep, cattle, or goats. Usually, all species of the Antilopinae, Hippotraginae, Reduncinae, Cephalophinae, many Bovinae, the grey rhebok, and the impala are called antelope.
Distribution and habitat
More species of antelope are native to Africa than to any other continent, almost exclusively in savannahs, with 25-40 species co-occurring over much of East Africa. Because savannah habitat in Africa has expanded and contracted five times over the last three million years, and the fossil record indicates this is when most extant species evolved, it is believed that isolation in refugia during contractions was a major driver of this diversification. Other species occur in Asia: the Arabian Peninsula is home to the Arabian oryx and Dorcas gazelle. South Asia is home to the nilgai, chinkara, blackbuck, Tibetan antelope, and four-horned antelope, while Russia and Central Asia have the Tibetan antelope and saiga.
No antelope species is native to Australasia or Antarctica, nor do any extant species occur in the Americas, though the nominate saiga subspecies occurred in North America during the Pleistocene. North America is currently home to the native pronghorn, which taxonomists do not consider a member of the antelope group, but which is often locally referred to as such (e.g., "American antelope"). In Europe, several extinct species occur in the fossil record, and the saiga was found widely during the Pleistocene but did not persist into the later Holocene, except in Russian Kalmykia and Astrakhan Oblast.
Many species of antelope have been imported to other parts of the world, especially the United States, for exotic game hunting. With some species possessing spectacular leaping and evasive skills, individuals may escape. Texas in particular has many game ranches, as well as habitats and climates that are very hospitable to African and Asian plains antelope species. Accordingly, wild populations of blackbuck antelope, gemsbok, and nilgai may be found in Texas.
Antelope live in a wide range of habitats. Most live in the African savannahs. However, many species are more secluded, such as the forest antelope, as well as the extreme cold-living saiga, the desert-adapted Arabian oryx, the rocky koppie-living klipspringer, and semiaquatic sitatunga.
Species living in forests, woodland, or bush tend to be sedentary, but many of the plains species undertake long migrations. These enable grass-eating species to follow the rains and thereby their food supply. The gnus and gazelles of East Africa perform some of the most impressive mass migratory circuits of all mammals.
Morphology
Body and covering
Antelope vary greatly in size. A male common eland, for example, dwarfs the royal antelope, the smallest species, in both shoulder height and weight.
Not surprisingly for animals with long, slender yet powerful legs, many antelope have long strides and can run fast. Some (e.g. klipspringer) are also adapted to inhabiting rock koppies and crags. Both dibatags and gerenuks habitually stand on their two hind legs to reach acacia and other tree foliage. Different antelope have different body types, which can affect movement. Duikers are short, bush-dwelling antelope that can pick through dense foliage and dive into the shadows rapidly. Gazelle and springbok are known for their speed and leaping abilities. Even larger antelope, such as nilgai, elands, and kudus, are capable of impressive jumps, although their running speed is restricted by their greater mass.
Antelope have a wide variety of coverings, though most have a dense coat of short fur. In most species, the coat (pelage) is some variation of a brown colour (or several shades of brown), often with white or pale underbodies. Exceptions include the zebra-marked zebra duiker, the grey, black, and white Jentink's duiker, and the black lechwe. Most of the "spiral-horned" antelope have pale, vertical stripes on their backs. Many desert and semidesert species are particularly pale, some almost silvery or whitish (e.g. Arabian oryx); the beisa and southern oryxes have gray and black pelages with vivid black-and-white faces. Common features of various gazelles are white rumps, which flash a warning to others when they run from danger, and dark stripes midbody (the latter feature is also shared by the springbok and beira). The springbok also has a pouch of white, brushlike hairs running along its back, which opens up when the animal senses danger, causing the dorsal hairs to stand on end.
Many antelope are sexually dimorphic. In most species, both sexes have horns, but those of males tend to be larger. Males tend to be larger than the females, but exceptions in which the females tend to be heavier than the males include the bush duiker, dwarf antelope, Cape grysbok, and oribi, all rather small species. A number of species have hornless females (e.g., sitatunga, red lechwe, and suni). In some species, the males and females have differently coloured pelages (e.g. blackbuck and nyala).
Sensory and digestive systems
Antelope are ruminants, so they have well-developed molar teeth, which grind cud (food balls stored in the stomach) into a pulp for further digestion. They have no upper incisors, but rather a hard upper gum pad, against which their lower incisors bite to tear grass stems and leaves.
Like many other herbivores, antelope rely on keen senses to avoid predators. Their eyes are placed on the sides of their heads, giving them a broad radius of vision with minimal binocular vision. Their horizontally elongated pupils also help in this respect. Acute senses of smell and hearing give antelope the ability to perceive danger at night out in the open (when predators are often on the prowl). These same senses play an important role in contact between individuals of the same species; markings on their heads, ears, legs, and rumps are used in such communication. Many species "flash" such markings, as well as their tails; vocal communications include loud barks, whistles, "moos", and trumpeting; many species also use scent marking to define their territories or simply to maintain contact with their relatives and neighbors.
Antelope horns
The size and shape of antelope horns varies greatly. Those of the duikers and dwarf antelope tend to be simple "spikes", but differ in the angle to the head from backward curved and backward pointing (e.g. yellow-backed duiker) to straight and upright (e.g. steenbok). Other groups have twisted (e.g. common eland), spiral (e.g. greater kudu), "recurved" (e.g. the reedbucks), lyrate (e.g. impala), or long, curved (e.g. the oryxes) horns. Horns are not shed and their bony cores are covered with a thick, persistent sheath of horny material, both of which distinguish them from antlers.
Antelope horns are efficient weapons, and tend to be better developed in those species where males fight over females (large herd antelope) than in solitary or lekking species. With male-male competition for mates, horns are clashed in combat. Males more commonly use their horns against each other than against another species. The boss of the horns is typically arranged in such a way that two antelope striking at each other's horns cannot crack each other's skulls, making a fight via horn more ritualized than dangerous. Many species have ridges in their horns for at least two-thirds the length of their horns, but these ridges are not a direct indicator of age.
Behavior
Mating strategies
Antelope are often classified by their reproductive behavior.
Small antelope, such as dik-diks, tend to be monogamous. They live in a forest environment with patchy resources, and a male is unable to monopolize more than one female due to this sparse distribution. Larger forest species often form very small herds of two to four females and one male.
Some species, such as lechwes, pursue a lek breeding system, where the males gather on a lekking ground and compete for a small territory, while the females appraise males and choose one with which to mate.
Large grazing antelope, such as impala or wildebeest, form large herds made up of many females and a single breeding male, which excludes all other males, often by combat.
Defense
Antelope pursue a number of defense strategies, often dictated by their morphology.
Large antelope that gather in large herds, such as wildebeest, rely on numbers and running speed for protection. In some species, adults will encircle the offspring, protecting them from predators when threatened. Many forest antelope rely on cryptic coloring and good hearing to avoid predators. Forest antelope often have very large ears and dark or striped colorations. Small antelope, especially duikers, evade predation by jumping into dense bush where the predator cannot pursue. Springboks use a behavior known as stotting to confuse predators.
Open grassland species have nowhere to hide from predators, so they tend to be fast runners. They are agile and have good endurance—these are advantages when pursued by sprint-dependent predators such as cheetahs, which are the fastest of land animals but tire quickly. Reaction distances vary with predator species and behaviour. For example, gazelles may not flee from a lion until it is closer than 200 m (650 ft)—lions hunt as a pride or by surprise, usually by stalking, and one that can be seen clearly is unlikely to attack. However, sprint-dependent cheetahs will cause gazelles to flee at far greater distances.
If escape is not an option, antelope are capable of fighting back. Oryxes in particular have been known to stand sideways like many unrelated bovids to appear larger than they are, and may charge at a predator as a last resort.
Status
About 25 species are rated by the IUCN as endangered, such as the dama gazelle and mountain nyala. A number of subspecies are also endangered, including the giant sable antelope and the mhorr gazelle. The main causes for concern for these species are habitat loss, competition with cattle for grazing, and trophy hunting.
The chiru or Tibetan antelope is hunted for its pelt, which is used in making shahtoosh wool, used in shawls. Since the fur can only be removed from dead animals, and each animal yields very little of the downy fur, several antelope must be killed to make a single shawl. This unsustainable demand has led to enormous declines in the chiru population.
The saiga is hunted for its horns, which are considered an aphrodisiac by some cultures. Only the males have horns, and they have been so heavily hunted that some herds contain up to 800 females to one male. The species showed a steep decline and was formerly classified as critically endangered. However, saiga numbers have since recovered substantially, and the species is now classified as near threatened.
Lifespan
It is difficult to determine how long antelope live in the wild. With the preference of predators towards old and infirm individuals, which can no longer sustain peak speeds, few wild prey-animals live as long as their biological potential. In captivity, wildebeest have lived beyond 20 years old, and impalas have reached their late teens.
Relationship with humans
Culture
The antelope's horn is prized for supposed medicinal and magical powers in many places. The horn of the male saiga, in Eastern practice, is ground as an aphrodisiac, for which it has been hunted nearly to extinction. In the Congo, it is thought to confine spirits. The antelope's ability to run swiftly has also led to their association with the wind, such as in the Rig Veda, as the steeds of the Maruts and the wind god Vayu. There is, however, no scientific evidence that the horns of any antelope have any effect on a human's physiology or characteristics.
In Mali, antelope were believed to have brought the skills of agriculture to mankind.
Humans have also used the term "Antelope" to refer to a tradition usually found in the sport of track and field.
Domestication
Domestication of animals requires certain traits in the animal that antelope do not typically display. Most species are difficult to contain in any density, due to the territoriality of the males, or in the case of oryxes (which have a relatively hierarchical social structure), an aggressive disposition; they can easily kill a human. Because many have extremely good jumping abilities, providing adequate fencing is a challenge. Also, antelope will consistently display a fear response to perceived predators, such as humans, making them very difficult to herd or handle. Although antelope have diets and rapid growth rates highly suitable for domestication, this tendency to panic and their non-hierarchical social structure explain why farm-raised antelope are uncommon. Ancient Egyptians kept herds of gazelles and addax for meat, and occasionally as pets. It is unknown whether they were truly domesticated, but it seems unlikely, as no domesticated gazelles exist today.
However, humans have had success taming certain species, such as the elands. These antelope sometimes jump over each other's backs when alarmed, but this incongruous talent seems to be exploited only by wild members of the species; tame elands do not take advantage of it and can be enclosed within a very low fence. Their meat, milk, and hides are all of excellent quality, and experimental eland husbandry has been going on for some years in both Ukraine and Zimbabwe. In both locations, the animal has proved wholly amenable to domestication. Similarly, European visitors to Arabia reported "tame gazelles are very common in the Asiatic countries of which the species is a native; and the poetry of these countries abounds in allusions both to the beauty and the gentleness of the gazelle." Other antelope that have been tamed successfully include the gemsbok, the kudu, and the springbok.
Hybrid antelope
A wide variety of antelope hybrids have been recorded in zoos, game parks, and wildlife ranches, due to either a lack of more appropriate mates in enclosures shared with other species or a misidentification of species. The ease of hybridization shows how closely related some antelope species are. With few exceptions, most hybrid antelope occur only in captivity.
Most hybrids occur between species within the same genus. All reported examples occur within the same subfamily. As with most mammal hybrids, the less closely related the parents, the more likely the offspring will be sterile.
Heraldry
Antelope are a common symbol in heraldry, though they occur in a form highly distorted from nature. The heraldic antelope has the body of a stag and the tail of a lion, with serrated horns, and a small tusk at the end of its snout. This bizarre and inaccurate form was invented by European heralds in the Middle Ages, who knew little of foreign animals and made up the rest. The antelope was mistakenly imagined to be a monstrous beast of prey; the 16th century poet Edmund Spenser referred to it as being "as fierce and fell as a wolf."
Antelope can also occur in their natural form, in which case they are termed "natural antelope" to distinguish them from the more usual heraldic antelope. The arms previously used by the Republic of South Africa featured a natural antelope, along with an oryx.
See also
Megafauna
References
External links
Ultimate Ungulate
San Diego Zoo Antelope
Bovidae
Paraphyletic groups | Antelope | [
"Biology"
] | 4,161 | [
"Phylogenetics",
"Paraphyletic groups"
] |
48,566 | https://en.wikipedia.org/wiki/Ecovillage | An ecovillage is a traditional or intentional community that aims to become more socially, culturally, economically and/or environmentally sustainable. An ecovillage strives to have the least possible negative impact on the natural environment through the intentional physical design and behavioural choices of its inhabitants. It is consciously designed through locally owned, participatory processes to regenerate and restore its social and natural environments. Most range from a population of 50 to 250 individuals, although some are smaller, and traditional ecovillages are often much larger. Larger ecovillages often exist as networks of smaller sub-communities. Some ecovillages have grown through like-minded individuals, families, or other small groups—who are not members, at least at the outset—settling on the ecovillage's periphery and participating de facto in the community. There are currently more than 10,000 ecovillages around the world.
Ecovillagers are united by shared ecological, social-economic and cultural-spiritual values. Concretely, ecovillagers seek alternatives to ecologically destructive electrical, water, transportation, and waste-treatment systems, as well as the larger social systems that mirror and support them. Many see the breakdown of traditional forms of community, wasteful consumerist lifestyles, the destruction of natural habitat, urban sprawl, factory farming, and over-reliance on fossil fuels as trends that must be changed to avert ecological disaster and create richer and more fulfilling ways of life.
Ecovillages offer small-scale communities with minimal ecological impact or regenerative impacts as an alternative. However, such communities often cooperate with peer villages in networks of their own (see Global Ecovillage Network (GEN) for an example). This model of collective action is similar to that of Ten Thousand Villages, which supports the fair trade of goods worldwide.
The ecovillage concept has undergone significant development over time, as evidenced by the growth and evolution of these communities over the past few decades. Facets of the ecovillage include case studies of community models, discussions of sustainability alignment for diverse needs, examinations of environmental impact, explorations of governance structures, and considerations of the challenges faced on the path to a successful ecovillage.
Definition
Multiple sources define ecovillages as a subtype of intentional communities focusing on sustainability. More pronounced definitions are listed here:
In Joubert's view, ecovillages are seen as an ongoing process, rather than a particular outcome. They often start off with a focus on one of the four dimensions of sustainability, e.g. ecology, but evolve into holistic models for restoration. In this view, aiming for sustainability is not enough; it is vital to restore and regenerate the fabric of life across all four dimensions of sustainability: social, environmental, economic and cultural.
Ecovillages have developed in recent years as technology has improved, so they have more sophisticated structures, as noted by Baydoun (2013).
Generally, the ecovillage concept is not tied to specific sectarian (religious, political, corporate) organizations or belief systems not directly related to environmentalism, such as monasteries, cults, or communes.
History
The modern-day desire for community was notably characterized by the communal "back to the land" movement of the 1960s and 1970s through communities such as the earliest example that still survives, the Miccosukee Land Co-op co-founded in May 1973 by James Clement van Pelt in Tallahassee, Florida. In the same decades, the imperative for alternatives to radically inefficient energy-use patterns, in particular automobile-enabled suburban sprawl, was brought into focus by recurrent energy crises. The term "eco-village" was introduced by Georgia Tech Professor George Ramsey in a 1978 address, "Passive Energy Applications for the Built Environment", to the First World Energy Conference of the Association of Energy Engineers, to describe small-scale, car-free, close-in developments, including suburban infill, arguing that "the great energy waste in the United States is not in its technology; it is in its lifestyle and concept of living." Ramsey's article includes a sketch for a "self-sufficient pedestrian solar village" by one of his students that looks very similar to eco-villages today.
The movement became more focused and organized in the cohousing and related alternative-community movements of the mid-1980s. Then, in 1991, Robert Gilman and Diane Gilman co-authored a germinal study called "Ecovillages and Sustainable Communities" for Gaia Trust, in which the ecological and communitarian themes were brought together.
The first eco-village in North America began taking shape in 1990. Earthaven Eco-Village in Black Mountain, NC, was the first community called an eco-village and was designed using permaculture (holistic) principles. The first residents moved onto the vacant land in 1993. As of 2019, Earthaven Eco-Village has over 70 families living off the grid on 368 acres of land.
The ecovillage movement began to coalesce at the annual autumn conference of Findhorn, in Scotland, in 1995. The conference was called: "Ecovillages and Sustainable Communities", and conference organizers turned away hundreds of applicants. According to Ross Jackson, "somehow they had struck a chord that resonated far and wide. The word 'ecovillage'... thus became part of the language of the Cultural Creatives." After that conference, many intentional communities, including Findhorn, began calling themselves "ecovillages", giving birth to a new movement. The Global Ecovillage Network, formed by a group of about 25 people from various countries who had attended the Findhorn conference, crystallized the event by linking hundreds of small projects from around the world, that had similar goals but had formerly operated without knowledge of each other. Gaia Trust of Denmark agreed to fund the network for its first five years.
Since the 1995 conference, a number of the early members of the Global Ecovillage Network have tried other approaches to ecovillage building in an attempt to build settlements that would be attractive to mainstream culture in order to make sustainable development more generally accepted. One of these with some degree of success is Living Villages and The Wintles where eco-houses are arranged so that social connectivity is maximized and residents have shared food growing areas, woodlands, and animal husbandry for greater sustainability.
The most recent worldwide update comes from the 2022 Annual Report of GEN International, detailing the mapping of 1,043 ecovillage communities on GEN's interactive ecovillage map. GEN collaborated closely with a diverse array of researchers and ecovillage communities spanning the globe to develop the Ecovillage Impact Assessment. This tool serves as a means for communities, groups, and individuals to report, chart, evaluate, and present their efforts toward fostering participatory cultural, social, ecological, and economic regeneration. From February 2021 to April 2024, data from 140 surveys conducted within 75 ecovillages formed the basis of the results. Through this assessment, ecovillages are able to understand the impact and influence their communities have had.
Case studies
Sustainability alignment
Ecovillages are defined by their commitment to sustainability through a multitude of design, lifestyle, and community objectives. They prioritize environmental stewardship through various methods, including the utilization of renewable energy sources, the minimization of waste through recycling and composting, and the practice of organic agriculture and permaculture. In many cases, these communities strive for self-sufficiency in food production, with the aim of reducing the ecological footprint associated with food transportation. Ecovillage communities place a strong emphasis on the conservation of resources through the application of green building techniques, including passive solar design, natural insulation, and rainwater harvesting. Additionally, they promote alternative modes of transportation, such as cycling and walking, as a means of reducing reliance on fossil fuels. The objective of ecovillages is to cultivate robust social connections and a sense of belonging among residents through the promotion of collaboration, consensus-based decision-making, and shared responsibilities. This approach fosters a supportive environment that enhances both individual and collective resilience. Ecovillages represent an international phenomenon that encompasses cultural diversity, frequently integrating traditional wisdom alongside innovative practices. Many ecovillages espouse multiculturalism, indigenous knowledge, and participation as means of enhancing intergenerational learning. In essence, these communities endeavor to achieve sustainable living through a multitude of diverse efforts, offering valuable insight into the creation of a sustainable relationship between humanity and the natural world.
Environmental impact
The formation of ecovillages is frequently driven by a concern for environmental stewardship and a commitment to sustainable practices. Ecovillages frequently employ renewable power sources, such as solar and wind energy, and utilize natural materials, including mud, wood, and straw, in their construction. Technologies such as bioclimatic agriculture are also employed.
A study on an ecovillage in Ithaca, New York found that the average ecological footprint of a resident in the ecovillage was 70% less than that of most Americans. Ecovillage residents seek a sustainable lifestyle (for example, of voluntary simplicity) with a minimum of trade outside the local area, or ecoregion. Many seek independence from existing infrastructures, although others, particularly in more urban settings, pursue more integration with existing infrastructure. Rural ecovillages are usually based on organic farming, permaculture and other approaches which promote ecosystem function and biodiversity. Ecovillages, whether urban or rural, tend to integrate community and ecological values within a principle-based approach to sustainability, such as permaculture design. In 2019, a study assessed the impact of community sustainability through a life cycle assessment conducted on three ecovillages. The results revealed a substantial reduction in carbon emissions among residents of these ecovillages when compared to the average United States citizen: residents showed a 63% to 71% decrease in carbon emissions due to living in an ecovillage with sustainable practices and efforts to mitigate environmental impact.
Governance
Ecovillages, while united by their commitment to sustainability and communal living, often differ in their approaches to governance. Every ecovillage strives to reflect the diverse needs and values of its community. Ultimately, the choice of governance model within ecovillages aims to strike a balance between fostering community cohesion, promoting sustainability, and accommodating the varied needs and values of their members.
Establishing governance is a common method used by ecovillages to align individual actions with community objectives. Most ecovillages maintain a distinct set of policies to govern the aspects that keep their society functioning. Policies within ecovillages are meant to evolve, with new situations prompting revisions to existing guidelines. Ecovillages commonly incorporate elements of consensus decision-making into their governance processes. This approach aims to mitigate hierarchies, power imbalances, and inflexibility within their governments. The governmental framework designed in the Ecovillage Tamera, Portugal, promotes inclusivity and actively works to combat hierarchical structures. The Tamera community attributes its success to its Women's Council, which confronts patriarchal norms and empowers women within the governance system. Members of ecovillage communities select their peers to serve as government members based on established trust within the community; this serves as an active strategy to mitigate the emergence of hierarchies. Through the involvement of community members in reviewing and revising existing rules, ecovillages ensure flexibility and adaptability to evolving needs. Active participation in policy formulation fosters a sense of ownership among members regarding community expectations and boundaries. Ecovillage community members express their contentment knowing they had the opportunity to voice their concerns and contribute to the decision-making process.
Each ecovillage exhibits a unique approach to how it develops its governance. Ecovillages acknowledge that there is a delicate balance in maintaining a functioning community that appreciates and considers the perspectives of its members. Through active involvement in governance processes, ecovillages demonstrate a commitment to inclusivity, adaptability, and collective empowerment, embodying the principles of collaborative decision-making and community-driven change.
Challenges
While ecovillages aim to embody admirable dimensions of sustainability and community, they are not without their challenges. One significant challenge is the initial investment required to establish or transition to an ecovillage lifestyle. The costs of acquiring land, implementing sustainable infrastructure, and maintaining communal facilities can be prohibitive for some individuals or groups, making available funds a limiting factor. As in any community, conflicts can arise regarding community rules, resource allocation, or individual responsibilities, and cohesion can be difficult to maintain. One explorative study concluded that residents of eco-developments reported a higher perceived quality of life than residents of developments in conventional settings, while still noting various challenges they experienced. Another noteworthy challenge is limited access to resources, such as land adequate for agriculture, available water, or renewable energy potential, which can limit the viability of ecovillage initiatives.
See also
References
Kellogg, W. and Keating, W. (2011). "Cleveland's Ecovillage: green and affordable housing through a network alliance". Housing Policy Debate, 21 (1), pp. 69–91.
Cunningham, Paul A. and Wearing, Stephen L. (2013). "The Politics of Consensus: An Exploration of the Cloughjordan Ecovillage, Ireland" [electronic version]. Cosmopolitan Civil Societies, 5 (2), pp. 1–28.
Further reading
Books
Christian, D. (2003). Creating a Life Together: Practical Tools to Grow Ecovillages and Intentional Communities. New Society Publishers.
Dawson, Jonathan (2006). Ecovillages: New Frontiers for Sustainability. Green Books.
Hill, R. and Dunbar, R. (2002). "Social Network Size in Humans". Human Nature, Vol. 14, No. 1, pp. 53–72.
Jackson, H. and Svensson, K. (2002). Ecovillage Living: Restoring the Earth and Her People. Green Books.
Walker, Liz (2005). EcoVillage at Ithaca: Pioneering a Sustainable Culture. New Society Publishers.
Sunarti, Euis (ed.) (2009). Model of Ecovillage Development: Development of Rural Areas in Order To Improve Quality of Life for Rural Residents. Indonesia.
Joubert, Kosha and Dregger, Leila (2015). Ecovillage: 1001 Ways to Heal the Planet. Triarchy Press.
Christian, Diana L. (ed.). The Ecovillage Movement Today. Ecovillage Newsletter.
Gilman, Robert (ed.). Living Together: Sustainable Community Development. In Context.
Genovese, Paolo Vincenzo (2019). Being Light on the Earth: Eco-Village Policy and Practice for a Sustainable World. Libria, Melfi, Vol. I. ISBN 978-88-6764-187-1. Also in eBook.
Miller, Frederica (ed.) (2018). Ecovillages Around the World: 20 Regenerative Designs for Sustainable Communities. Rochester, Vermont: Findhorn Press.
Litfin, Karen T. (2013). Ecovillages: Lessons for Sustainable Community. Polity.
External links
Global Ecovillage Network
eurotopia - Living in Community: European Directory of Communities and Ecovillages
Fellowship for Intentional Community: Ecovillage Directory
Wanderer's End Tactical & Practical Ecovillage Network of the Americas
Vietnamese Eco Village in Saigon
Urban studies and planning terminology
Environmental design
Simple living | Ecovillage | [
"Engineering"
] | 3,229 | [
"Environmental design",
"Design"
] |
48,579 | https://en.wikipedia.org/wiki/Atomic%2C%20molecular%2C%20and%20optical%20physics | Atomic, molecular, and optical physics (AMO) is the study of matter–matter and light–matter interactions, at the scale of one or a few atoms and energy scales around several electron volts. The three areas are closely interrelated. AMO theory includes classical, semi-classical and quantum treatments. Typically, the theory and applications of emission, absorption, scattering of electromagnetic radiation (light) from excited atoms and molecules, analysis of spectroscopy, generation of lasers and masers, and the optical properties of matter in general, fall into these categories.
Atomic and molecular physics
Atomic physics is the subfield of AMO that studies atoms as an isolated system of electrons and an atomic nucleus, while molecular physics is the study of the physical properties of molecules. The term atomic physics is often associated with nuclear power and nuclear bombs, due to the synonymous use of atomic and nuclear in standard English. However, physicists distinguish between atomic physics — which deals with the atom as a system consisting of a nucleus and electrons — and nuclear physics, which considers atomic nuclei alone. The important experimental techniques are the various types of spectroscopy. Molecular physics, while closely related to atomic physics, also overlaps greatly with theoretical chemistry, physical chemistry and chemical physics.
Both subfields are primarily concerned with electronic structure and the dynamical processes by which these arrangements change. Generally this work involves using quantum mechanics. For molecular physics, this approach is known as quantum chemistry. One important aspect of molecular physics is that the essential atomic orbital theory in the field of atomic physics expands to the molecular orbital theory. Molecular physics is concerned with atomic processes in molecules, but it is additionally concerned with effects due to the molecular structure. In addition to the electronic excitation states known from atoms, molecules are able to rotate and to vibrate. These rotations and vibrations are quantized; there are discrete energy levels. The smallest energy differences exist between different rotational states; therefore pure rotational spectra are in the far infrared region (about 30–150 μm wavelength) of the electromagnetic spectrum. Vibrational spectra are in the near infrared (about 1–5 μm) and spectra resulting from electronic transitions are mostly in the visible and ultraviolet regions. From measured rotational and vibrational spectra, properties of molecules such as the distance between the nuclei can be calculated.
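To make the last point concrete, a brief worked example (not from the article) using the standard rigid-rotor approximation: the rotational term values are
\[ F(J) = B\,J(J+1), \qquad J = 0, 1, 2, \ldots \]
so a $J \to J+1$ absorption line appears at $\tilde{\nu} = F(J+1) - F(J) = 2B(J+1)$. Taking the textbook value $B \approx 10.6\ \mathrm{cm^{-1}}$ for HCl, the $J = 4 \to 5$ line lies near $\tilde{\nu} \approx 106\ \mathrm{cm^{-1}}$, i.e. a wavelength $\lambda = 1/\tilde{\nu} \approx 94\ \mu\mathrm{m}$, within the far-infrared range quoted above. Since $B = h/(8\pi^{2} c \mu r^{2})$ for reduced mass $\mu$ and internuclear distance $r$, a measured $B$ immediately yields the bond length.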
As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified.
Optical physics
Optical physics is the study of the generation of electromagnetic radiation, the properties of that radiation, and the interaction of that radiation with matter, especially its manipulation and control. It differs from general optics and optical engineering in that it is focused on the discovery and application of new phenomena. There is no strong distinction, however, between optical physics, applied optics, and optical engineering, since the devices of optical engineering and the applications of applied optics are necessary for basic research in optical physics, and that research leads to the development of new devices and applications. Often the same people are involved in both the basic research and the applied technology development, for example the experimental demonstration of electromagnetically induced transparency by S. E. Harris and of slow light by Harris and Lene Vestergaard Hau.
Researchers in optical physics use and develop light sources that span the electromagnetic spectrum from microwaves to X-rays. The field includes the generation and detection of light, linear and nonlinear optical processes, and spectroscopy. Lasers and laser spectroscopy have transformed optical science. Major study in optical physics is also devoted to quantum optics and coherence, and to femtosecond optics. In optical physics, support is also provided in areas such as the nonlinear response of isolated atoms to intense, ultra-short electromagnetic fields, the atom-cavity interaction at high fields, and quantum properties of the electromagnetic field.
Other important areas of research include the development of novel optical techniques for nano-optical measurements, diffractive optics, low-coherence interferometry, optical coherence tomography, and near-field microscopy. Research in optical physics places an emphasis on ultrafast optical science and technology. The applications of optical physics create advancements in communications, medicine, manufacturing, and even entertainment.
History
One of the earliest steps towards atomic physics was the recognition that matter was composed of atoms, in modern terms the basic unit of a chemical element. This theory was developed by John Dalton in the early 19th century. At this stage, it was not clear what atoms were, although they could be described and classified by their observable properties in bulk, summarized by the developing periodic table of John Newlands and Dmitri Mendeleyev around the mid to late 19th century.
Later, the connection between atomic physics and optical physics became apparent, by the discovery of spectral lines and attempts to describe the phenomenon - notably by Joseph von Fraunhofer, Fresnel, and others in the 19th century.
From that time to the 1920s, physicists were seeking to explain atomic spectra and blackbody radiation. One attempt to explain hydrogen spectral lines was the Bohr atom model.
Experiments involving electromagnetic radiation and matter - such as the photoelectric effect, the Compton effect, and the spectrum of sunlight due to the then-unknown element helium - together with the limitation of the Bohr model to hydrogen, and numerous other reasons, led to an entirely new mathematical model of matter and light: quantum mechanics.
Classical oscillator model of matter
Early models to explain the origin of the index of refraction treated an electron in an atomic system classically according to the model of Paul Drude and Hendrik Lorentz. The theory was developed to attempt to provide an origin for the wavelength-dependent refractive index n of a material. In this model, incident electromagnetic waves forced an electron bound to an atom to oscillate. The amplitude of the oscillation would then have a relationship to the frequency of the incident electromagnetic wave and the resonant frequencies of the oscillator. The superposition of these emitted waves from many oscillators would then lead to a wave which moved more slowly.
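A minimal numerical sketch of this classical single-resonance (Lorentz oscillator) model follows; it is not part of the original article, and the number density, resonant frequency, and damping rate are hypothetical values chosen only to make the resonance visible.

# Minimal sketch of the Drude-Lorentz single-oscillator model: a bound
# electron driven at angular frequency w gives a frequency-dependent,
# complex refractive index n(w) = sqrt(1 + chi(w)).
import numpy as np

e = 1.602e-19      # electron charge (C)
m = 9.109e-31      # electron mass (kg)
eps0 = 8.854e-12   # vacuum permittivity (F/m)
N = 1e26           # oscillator number density (1/m^3) -- assumed value
w0 = 3e15          # resonant angular frequency (rad/s) -- assumed value
gamma = 1e13       # damping rate (rad/s) -- assumed value

def refractive_index(w):
    """Complex refractive index of a dilute Lorentz-oscillator medium."""
    chi = (N * e**2 / (eps0 * m)) / (w0**2 - w**2 - 1j * gamma * w)
    return np.sqrt(1 + chi)

for wi in np.linspace(0.5 * w0, 1.5 * w0, 5):
    n = refractive_index(wi)
    print(f"w/w0 = {wi / w0:4.2f}  n = {n.real:6.3f} {n.imag:+6.3f}i")

Below the resonance the real part of $n$ exceeds one and grows with frequency (normal dispersion, the slower wave described above), while the imaginary part peaks near $\omega_0$, corresponding to absorption.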
Early quantum model of matter and light
In 1900, Max Planck derived a formula to describe the electromagnetic field inside a box in thermal equilibrium.
His model consisted of a superposition of standing waves. In one dimension, the box has length L, and only sinusoidal waves of wavenumber
\[ k = \frac{n\pi}{L} \]
can occur in the box, where n is a positive integer (mathematically denoted by $n \in \mathbb{N}$). The equation describing these standing waves is given by
\[ E = E_0 \sin(kx), \]
where $E_0$ is the magnitude of the electric field amplitude, and $E$ is the magnitude of the electric field at position $x$. From this basis, Planck's law was derived.
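For reference, the resulting Planck law, quoted here in its standard modern form (the article itself does not spell it out), gives the spectral energy density of the field at temperature $T$:
\[ u(\nu, T) = \frac{8\pi h \nu^{3}}{c^{3}} \, \frac{1}{e^{h\nu/k_{B}T} - 1}. \]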
In 1911, Ernest Rutherford concluded, based on alpha particle scattering, that an atom has a central pointlike proton. He also thought that an electron would still be attracted to the proton by Coulomb's law, which he had verified still held at small scales. As a result, he believed that electrons revolved around the proton. Niels Bohr, in 1913, combined the Rutherford model of the atom with the quantisation ideas of Planck. Only specific and well-defined orbits of the electron could exist, which also do not radiate light. In jumping between orbits the electron would emit or absorb light corresponding to the difference in energy of the orbits. His prediction of the energy levels was then consistent with observation.
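Concretely, the Bohr model's hydrogen energy levels and emission condition, quoted here in standard textbook form as an illustration, read
\[ E_n = -\frac{13.6\ \mathrm{eV}}{n^{2}}, \qquad h\nu = E_{n_i} - E_{n_f}, \]
which reproduces the Rydberg formula $1/\lambda = R_{\mathrm{H}}\left(1/n_f^{2} - 1/n_i^{2}\right)$ with $R_{\mathrm{H}} \approx 1.097 \times 10^{7}\ \mathrm{m^{-1}}$.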
These results, based on a discrete set of specific standing waves, were inconsistent with the continuous classical oscillator model.
Work by Albert Einstein in 1905 on the photoelectric effect led to the association of a light wave of frequency $\nu$ with a photon of energy $E = h\nu$. In 1917 Einstein created an extension to Bohr's model by the introduction of the three processes of stimulated emission, spontaneous emission and absorption (electromagnetic radiation).
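In the standard modern notation (added here for clarity, not taken from the text above), these three processes appear as rates in the population equation for the upper level of a two-level atom in radiation of spectral energy density $\rho(\nu)$:
\[ \frac{dN_2}{dt} = -A_{21} N_2 - B_{21}\,\rho(\nu)\,N_2 + B_{12}\,\rho(\nu)\,N_1, \]
where the three terms describe spontaneous emission, stimulated emission, and absorption respectively; requiring equilibrium with the Planck distribution fixes the relations between the Einstein coefficients $A_{21}$, $B_{21}$ and $B_{12}$.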
Modern treatments
The largest steps towards the modern treatment were the formulation of quantum mechanics with the matrix mechanics approach by Werner Heisenberg and the discovery of the Schrödinger equation by Erwin Schrödinger.
There are a variety of semi-classical treatments within AMO. Which aspects of the problem are treated quantum mechanically and which are treated classically is dependent on the specific problem at hand. The semi-classical approach is ubiquitous in computational work within AMO, largely due to the substantial decrease in computational cost and complexity associated with it.
For matter under the action of a laser, a fully quantum mechanical treatment of the atomic or molecular system is combined with the system being under the action of a classical electromagnetic field. Since the field is treated classically, it cannot deal with spontaneous emission. This semi-classical treatment is valid for most systems, particularly those under the action of high intensity laser fields. The distinction between optical physics and quantum optics is the use of semi-classical and fully quantum treatments respectively.
Within collision dynamics and using the semi-classical treatment, the internal degrees of freedom may be treated quantum mechanically, whilst the relative motion of the quantum systems under consideration is treated classically. When considering medium to high speed collisions, the nuclei can be treated classically while the electron is treated quantum mechanically. In low speed collisions the approximation fails.
Classical Monte-Carlo methods for the dynamics of electrons can be described as semi-classical in that the initial conditions are calculated using a fully quantum treatment, but all further treatment is classical.
Isolated atoms and molecules
Atomic, Molecular and Optical physics frequently considers atoms and molecules in isolation. Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons, whilst molecular models are typically concerned with molecular hydrogen and the molecular hydrogen ion. It is concerned with processes such as ionization, above threshold ionization and excitation by photons or collisions with atomic particles.
While modelling atoms in isolation may not seem realistic, if one considers molecules in a gas or plasma then the time-scales for molecule-molecule interactions are huge in comparison to the atomic and molecular processes that we are concerned with. This means that the individual molecules can be treated as if each were in isolation for the vast majority of the time. By this consideration atomic and molecular physics provides the underlying theory in plasma physics and atmospheric physics even though both deal with huge numbers of molecules.
Electronic configuration
Electrons form notional shells around the nucleus. These are naturally in a ground state but can be excited by the absorption of energy from light (photons), magnetic fields, or interaction with a colliding particle (typically other electrons).
Electrons that populate a shell are said to be in a bound state. The energy necessary to remove an electron from its shell (taking it to infinity) is called the binding energy. Any quantity of energy absorbed by the electron in excess of this amount is converted to kinetic energy according to the conservation of energy. The atom is said to have undergone the process of ionization.
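As a short worked example of this energy balance (the binding energy used is the textbook value for hydrogen, chosen only for illustration): an electron bound by $E_{\mathrm{B}} = 13.6\ \mathrm{eV}$ that absorbs a $20\ \mathrm{eV}$ photon is ejected with
\[ E_{\mathrm{kin}} = h\nu - E_{\mathrm{B}} = 20\ \mathrm{eV} - 13.6\ \mathrm{eV} = 6.4\ \mathrm{eV}. \]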
In the event that the electron absorbs a quantity of energy less than the binding energy, it may transition to an excited state or to a virtual state. After a statistically sufficient quantity of time, an electron in an excited state will undergo a transition to a lower state via spontaneous emission. The change in energy between the two energy levels must be accounted for (conservation of energy). In a neutral atom, the system will emit a photon of the difference in energy. However, if the lower state is in an inner shell, a phenomenon known as the Auger effect may take place, where the energy is transferred to another bound electron, causing it to go into the continuum. This allows one to multiply ionize an atom with a single photon.
There are strict selection rules as to the electronic configurations that can be reached by excitation by light—however there are no such rules for excitation by collision processes.
See also
Born–Oppenheimer approximation
Frequency doubling
Diffraction
Hyperfine structure
Interferometry
Isomeric shift
Metamaterial cloaking
Molecular energy state
Molecular modeling
Nanotechnology
Negative index metamaterials
Nonlinear optics
Optical engineering
Photon polarization
Quantum chemistry
Quantum optics
Rigid rotor
Spectroscopy
Superlens
Stationary state
Transition of state
Notes
References
Solid State Physics (2nd Edition), J.R. Hook, H.E. Hall, Manchester Physics Series, John Wiley & Sons, 2010.
Light and Matter: Electromagnetism, Optics, Spectroscopy and Lasers, Y.B. Band, John Wiley & Sons, 2010.
The Light Fantastic – Introduction to Classic and Quantum Optics, I.R. Kenyon, Oxford University Press, 2008.
Handbook of Atomic, Molecular, and Optical Physics, Editor: Gordon Drake, Springer, various authors, 1996.
External links
ScienceDirect - Advances In Atomic, Molecular, and Optical Physics
Journal of Physics B: Atomic, Molecular and Optical Physics
Institutions
American Physical Society - Division of Atomic, Molecular & Optical Physics
European Physical Society - Atomic, Molecular & Optical Physics Division
National Science Foundation - Atomic, Molecular and Optical Physics
MIT-Harvard Center for Ultracold Atoms
Stanford QFARM Initiative for Quantum Science & Engineering
JILA - Atomic and Molecular Physics
Joint Quantum Institute at University of Maryland and NIST
ORNL Physics Division
Queen's University Belfast - Center for Theoretical, Atomic, Molecular and Optical Physics,
University of California, Berkeley - Atomic, Molecular and Optical Physics | Atomic, molecular, and optical physics | [
"Physics",
"Chemistry"
] | 2,693 | [
"Atomic",
" molecular",
" and optical physics"
] |
48,597 | https://en.wikipedia.org/wiki/Furniture | Furniture refers to objects intended to support various human activities such as seating (e.g., stools, chairs, and sofas), eating (tables), storing items, working, and sleeping (e.g., beds and hammocks). Furniture is also used to hold objects at a convenient height for work (as horizontal surfaces above the ground, such as tables and desks), or to store things (e.g., cupboards, shelves, and drawers). Furniture can be a product of design and can be considered a form of decorative art. In addition to furniture's functional role, it can serve a symbolic or religious purpose. It can be made from a vast multitude of materials, including metal, plastic, and wood. Furniture can be made using a variety of woodworking joints which often reflects the local culture.
People have been using natural objects, such as tree stumps, rocks and moss, as furniture since the beginning of human civilization, and this continues today in some households and campsites. Archaeological research shows that from around 30,000 years ago, people started to construct and carve their own furniture, using wood, stone, and animal bones. Early furniture from this period is known from artwork such as a Venus figurine found in Russia, depicting the goddess on a throne. The earliest surviving furniture is in the homes of Skara Brae in Scotland, and includes cupboards, dressers and beds all constructed from stone. Complex construction techniques such as joinery began in the early dynastic period of ancient Egypt. This era saw constructed wooden pieces, including stools and tables, sometimes decorated with valuable metals or ivory. The evolution of furniture design continued in ancient Greece and ancient Rome, with thrones being commonplace as well as the klinai, multipurpose couches used for relaxing, eating, and sleeping. The furniture of the Middle Ages was usually heavy, oak, and ornamented. Furniture design expanded during the Italian Renaissance of the fourteenth and fifteenth century. The seventeenth century, in both Southern and Northern Europe, was characterized by opulent, often gilded Baroque designs. The nineteenth century is usually defined by revival styles. The first three-quarters of the twentieth century are often seen as the march towards Modernism. One unique outgrowth of post-modern furniture design is a return to natural shapes and textures.
Etymology
The English word furniture is derived from the French word fourniture, the noun form of fournir, which means to supply or provide. Thus fourniture in French means supplies or provisions. The English usage, referring specifically to household objects, is specific to that language; French and other Romance languages as well as German use variants of the word meubles, which derives from Latin mobilia, meaning "moveable goods".
History
Prehistory
The practice of using natural objects as rudimentary pieces of furniture likely dates to the beginning of human civilization. Early humans are likely to have used tree stumps as seats, rocks as rudimentary tables, and mossy areas for sleeping. During the late Paleolithic or early Neolithic period, from around 30,000 years ago, people began constructing and carving their own furniture, using wood, stone and animal bones. The earliest evidence for the existence of constructed furniture is a Venus figurine found at the Gagarino site in Russia, which depicts the goddess in a sitting position, on a throne. A similar statue of a seated woman was found in Çatalhöyük in Turkey, dating to between 6000 and 5500 BCE. The inclusion of such a seat in the figurines implies that these were already common artefacts of that age.
A range of unique stone furniture has been excavated in Skara Brae, a Neolithic village in Orkney, Scotland. The site dates from 3100 to 2500 BCE and, due to a shortage of wood in Orkney, the people of Skara Brae were forced to build with stone, a readily available material that could be worked easily and turned into items for use within the household. Each house shows a high degree of sophistication and was equipped with an extensive assortment of stone furniture, ranging from cupboards, dressers, and beds to shelves, stone seats, and limpet tanks. The stone dresser was regarded as the most important as it symbolically faces the entrance in each house and is therefore the first item seen when entering, perhaps displaying symbolic objects, including decorative artwork such as several Neolithic carved stone balls also found at the site.
Antiquity
Ancient furniture has been excavated from the 8th-century BCE Phrygian tumulus, the Midas Mound, in Gordion, Turkey. Pieces found here include tables and inlaid serving stands. There are also surviving works from the 9th–8th-century BCE Assyrian palace of Nimrud. The earliest surviving carpet, the Pazyryk Carpet was discovered in a frozen tomb in Siberia and has been dated between the 6th and 3rd century BCE.
Ancient Egypt
Civilization in ancient Egypt began with the clearance and irrigation of land along the banks of the River Nile in about 6000 BCE. By that time, society in the Nile Valley was already engaged in organized agriculture and the construction of large buildings. In this period, Egyptians in the southwestern corner of Egypt were herding cattle and also constructing large buildings. Mortar was in use by around 4000 BCE. The inhabitants of the Nile Valley and delta were self-sufficient and were raising barley and emmer (an early variety of wheat) and stored it in pits lined with reed mats. They raised cattle, goats and pigs and they wove linens and baskets. Evidence of furniture from the predynastic period is scarce, but samples from First Dynasty tombs indicate an already advanced use of furnishings in the houses of the age.
During the Dynastic Period, which began in around 3200 BCE, Egyptian art developed significantly, and this included furniture design. Egyptian furniture was primarily constructed using wood, but other materials were sometimes used, such as leather, and pieces were often adorned with gold, silver, ivory and ebony, for decoration. Wood found in Egypt was not suitable for furniture construction, so it had to be imported into the country from other places, particularly Phoenicia. The scarcity of wood necessitated innovation in construction techniques. The use of scarf joints to join two shorter pieces together and form a longer beam was one example of this, as well as construction of veneers in which low quality cheap wood was used as the main building material, with a thin layer of expensive wood on the surface.
The earliest used seating furniture in the dynastic period was the stool, which was used throughout Egyptian society, from the royal family down to ordinary citizens. Various different designs were used, including stools with four vertical legs, and others with crossed splayed legs; almost all had rectangular seats, however. Examples include the workman's stool, a simple three legged structure with a concave seat, designed for comfort during labour, and the much more ornate folding stool, with crossed folding legs, which were decorated with carved duck heads and ivory, and had hinges made of bronze. Full chairs were much rarer in early Egypt, being limited to only wealthy and high ranking people, and seen as a status symbol; they did not reach ordinary households until the 18th dynasty. Early examples were formed by adding a straight back to a stool, while later chairs had an inclined back. Other furniture types in ancient Egypt include tables, which are heavily represented in art, but almost nonexistent as preserved items – perhaps because they were placed outside tombs rather than within, as well as beds and storage chests.
Ancient Greece
Historical knowledge of Greek furniture is derived from various sources, including literature, terracotta, sculptures, statuettes, and painted vases. Some pieces survive to this day, primarily those constructed from metals, including bronze, or marble. Wood was an important and common material in Greek furniture, both domestic and imported. A common technique was to construct the main sections of the furniture with cheap solid wood, then apply a veneer using an expensive wood, such as maple or ebony. Greek furniture construction also made use of dowels and tenons for joining the wooden parts of a piece together. Wood was shaped by carving, steam treatment, and the lathe, and furniture is known to have been decorated with ivory, tortoise shell, glass, gold or other precious materials.
The modern word "throne" is derived from the ancient Greek thronos (Greek singular: θρόνος), which was a seat designated for deities or individuals of high status/hierarchy or honor. The colossal chryselephantine statue of Zeus at Olympia, constructed by Phidias and lost in antiquity, featured the god Zeus seated on an elaborate throne, which was decorated with gold, precious stones, ebony and ivory, according to Pausanias. Other Greek seats included the klismos, an elegant Greek chair with a curved backrest and legs whose form was copied by the Romans and is now part of the vocabulary of furniture design, the backless stool (diphros), which existed in most Greek homes, and folding stool. The kline, used from the late seventh century BCE, was a multipurpose piece used as a bed, but also as a sofa and for reclining during meals. It was rectangular and supported on four legs, two of which could be longer than the other, providing support for an armrest or headboard. Mattresses, rugs, and blankets may have been used, but there is no evidence for sheets.
In general, Greek tables were low and often appear in depictions alongside klinai. The most common type of Greek table had a rectangular top supported on three legs, although numerous configurations exist, including trapezoid and circular. Tables in ancient Greece were used mostly for dining purposes – in depictions of banquets, it appears as though each participant would have used a single table, rather than a collective use of a larger piece. Tables also figured prominently in religious contexts, as indicated in vase paintings, for example, the wine vessel associated with Dionysus, dating to around 450 BCE and now housed at the Art Institute of Chicago. Chests were used for storage of clothes and personal items and were usually rectangular with hinged lids. Chests depicted in terracotta show elaborate patterns and design, including the Greek fret.
Ancient Rome
Roman furniture was based heavily on Greek furniture, in style and construction. Rome gradually superseded Greece as the foremost culture of Europe, leading eventually to Greece becoming a province of Rome in 146 BCE. Rome thus took over production and distribution of Greek furniture, and the boundary between the two is blurred. The Romans did have some limited innovation outside of Greek influence, and styles distinctly their own.
Roman furniture was constructed principally using wood, metal and stone, with marble and limestone used for outside furniture. Very little wooden furniture survives intact, but there is evidence that a variety of woods were used, including maple, citron, beech, oak, and holly. Some imported wood such as satinwood was used for decoration. The most commonly used metal was bronze, of which numerous examples have survived, for example, headrests for couches and metal stools. Similar to the Greeks, Romans used tenons, dowels, nails, and glue to join wooden pieces together, and also practised veneering.
The 1738 and 1748 excavations of Herculaneum and Pompeii revealed Roman furniture, preserved in the ashes of the AD 79 eruption of Vesuvius.
Middle Ages
In contrast to the ancient civilizations of Egypt, Greece, and Rome, there is comparatively little evidence of furniture from the 5th to the 15th century. Very few extant pieces survive, and evidence in literature is also scarce. It is likely that the style of furniture prevalent in late antiquity persisted throughout the Middle Ages. For example, a throne similar to that of Zeus is depicted in a sixth-century diptych, while the Bayeux tapestry shows Edward the Confessor and Harold seated on seats similar to the Roman sella curulis. The furniture of the Middle Ages was usually heavy, oak, and ornamented with carved designs.
The Hellenistic influence upon Byzantine furniture can be seen through the use of acanthus leaves, palmettes, bay and olive leaves as ornaments. Oriental influences manifest through rosettes, arabesques and the geometric stylisation of certain vegetal motifs. Christianity brought symbols into Byzantine ornamentation: the pigeon, fishes, the lamb and vines. The furniture of Byzantine houses and palaces was usually luxurious, highly decorated and finely ornamented. Stone, marble, metal, wood and ivory were used. Surfaces and ornaments were gilded, painted in polychrome, plated with sheets of gold, enamelled in bright colors, and covered in precious stones. The variety of Byzantine furniture was considerable: tables with square, rectangular or round tops, sumptuously decorated, made of wood sometimes inlaid, with bronze, ivory or silver ornaments; chairs with high backs and with wool blankets or animal furs, with coloured pillows, as well as benches and stools; wardrobes were used only for storing books; clothes and valuable objects were kept in chests with iron locks; the form of beds imitated Roman ones, but with different leg designs.
The main ornament of Gothic furniture and all the applied arts is the ogive. The geometric rosette often accompanies the ogive, taking a wide variety of forms. Architectural elements were used in furniture, at first for purely decorative reasons, but later as structural elements. Besides the ogive, the main ornaments are: acanthus leaves, ivy, oak leaves, haulms, clovers, fleurs-de-lis, knights with shields, heads with crowns and characters from the Bible. Chests were the main type of Gothic furniture used by the majority of the population. Usually, the locks and escutcheons of chests also served an ornamental purpose, being finely made.
Renaissance
Along with the other arts, the Italian Renaissance of the fourteenth and fifteenth century marked a rebirth in design, often inspired by the Greco-Roman tradition. A similar explosion of design, and renaissance of culture in general occurred in Northern Europe, starting in the fifteenth century.
17th and 18th centuries
The 17th century, in both Southern and Northern Europe, was characterized by opulent, often gilded Baroque designs that frequently incorporated a profusion of vegetal and scrolling ornament. Starting in the eighteenth century, furniture designs began to develop more rapidly. Although there were some styles that belonged primarily to one nation, such as Palladianism in Great Britain or Louis Quinze in French furniture, others, such as the Rococo and Neoclassicism were perpetuated throughout Western Europe.
During the 18th century, the fashion in England was set by French art. At the beginning of the century Boulle cabinets were at the peak of their popularity and Louis XIV was reigning in France. In this era, most furniture had metal and enamelled decorations, and some was covered in inlays of marble, lapis lazuli, porphyry and other stones. By mid-century this Baroque style was displaced by the graceful curves, shining ormolu, and intricate marquetry of the Rococo style, which in turn gave way around 1770 to the more severe lines of Neoclassicism, modeled after the architecture of ancient Greece and Rome. Creating a mass market for furniture, the distinguished London cabinet maker Thomas Chippendale's The Gentleman and Cabinet Maker's Director (1754) is regarded as the "first comprehensive trade catalogue of its kind".
There is something distinct in the development of taste in French furniture, marked out by the three styles to which three monarchs have given their names: "Louis Quatorze", "Louis Quinze", and "Louis Seize". This is evident to anyone who visits first the Palace of Versailles, then the Grand Trianon, and afterwards the Petit Trianon.
19th century
The nineteenth century is usually defined by concurrent revival styles, including Gothic, Neoclassicism, and Rococo. The design reforms of the late century introduced the Aesthetic movement and the Arts and Crafts movement. Art Nouveau was influenced by both of these movements. Shaker-style furniture became popular during this time in North America as well.
Early North American
This design was in many ways rooted in necessity and emphasizes both form and materials. Early British Colonial American chairs and tables are often constructed with turned spindles and chair backs often constructed with steaming to bend the wood. Wood choices tend to be deciduous hardwoods with a particular emphasis on the wood of edible or fruit bearing trees such as cherry or walnut.
Mid-Century Modern
The first three-quarters of the 20th century is seen as the march towards Modernism. The furniture designers of Art Deco, De Stijl, Bauhaus, Jugendstil, Wiener Werkstätte, and Vienna Secession all worked to some degree within the Modernist motto.
Born from the Bauhaus and Streamline Moderne came the post-World War II style "Mid-Century Modern". Mid-Century Modern materials developed during the war including laminated plywood, plastics, and fiberglass. Prime examples include furniture designed by George Nelson Associates, Charles and Ray Eames, Paul McCobb, Florence Knoll, Harry Bertoia, Eero Saarinen, Harvey Probber, Vladimir Kagan and Danish modern designers including Finn Juhl and Arne Jacobsen.
Contemporary
Industrialisation, Post-Modernism, and the Internet have allowed furniture design to become more accessible to a wider range of people than ever before. There are many modern styles of furniture design, each with roots in Classical, Modernist, and Post-Modern design and art movements. The growth of Maker Culture across the Western sphere of influence has encouraged higher participation and development of new, more accessible furniture design techniques. One unique outgrowth of this post-modern furniture design trajectory is Live Edge, which incorporates the natural surface of a tree as part of a furniture object, heralding a resurgence of these natural shapes and textures within the home. Additionally, the use of Epoxy Resin has become more prevalent in DIY furniture styles.
Ecodesign
Great efforts from individuals, governments, and companies have led to the manufacturing of products with higher sustainability, known as Ecodesign. This new line of furniture is based on environmentally friendly design. Its use and popularity are increasing each year.
Postmodernism
Postmodern design, intersecting the Pop art movement, gained steam in the 1960s and 70s, promoted in the 80s by groups such as the Italy-based Memphis movement. Transitional furniture is intended to fill a place between Traditional and Modern tastes.
Asian history
Asian furniture has a quite distinct history. The traditions out of India, China, Korea, Pakistan, Indonesia (Bali and Java) and Japan are some of the best known, but places such as Mongolia, and the countries of South East Asia have unique facets of their own.
Far Eastern
The use of uncarved wood and bamboo and the use of heavy lacquers are well known Chinese styles. It is worth noting that Chinese furniture varies dramatically from one dynasty to the next. Chinese ornamentation is highly inspired by paintings, with floral and plant life motifs including bamboo trees, chrysanthemums, waterlilies, irises, magnolias, flowers and branches of cherry, apple, apricot and plum, or elongated bamboo leaves; animal ornaments include lions, bulls, ducks, peacocks, parrots, pheasants, roosters, ibises and butterflies. The dragon is the symbol of earth fertility, and of the power and wisdom of the emperor. Lacquers are mostly populated with princesses, various Chinese people, soldiers, children, ritually and daily scenes. Architectural features tend toward geometric ornaments, like meanders and labyrinths. The interior of a Chinese house was simple and sober. All Chinese furniture is made of wood, usually ebony, teak, or rosewood for heavier furniture (chairs, tables and benches) and bamboo, pine and larch for lighter furniture (stools and small chairs).
Traditional Japanese furniture is well known for its minimalist style, extensive use of wood, high-quality craftsmanship and reliance on wood grain instead of painting or thick lacquer. Japanese chests are known as Tansu, known for elaborate decorative iron work, and are some of the most sought-after of Japanese antiques. The antiques available generally date back to the Tokugawa and Meiji periods. Both the technique of lacquering and the specific lacquer (resin of Rhus vernicifera) originated in China, but the lacquer tree also grows well in Japan. The recipes of preparation are original to Japan: resin is mixed with wheat flour, clay or pottery powder, turpentine, iron powder or wood coal. In ornamentation, the chrysanthemum, known as kiku, the national flower, is a very popular ornament, including the 16-petal chrysanthemum symbolizing the Emperor. Cherry and apple flowers are used for decorating screens, vases and shōji. Common animal ornaments include dragons, carp, cranes, geese, tigers, horses and monkeys; representations of architecture such as houses, pavilions, towers, torii gates, bridges and temples are also common. The furniture of a Japanese house consists of tables, shelves, wardrobes, small holders for flowers, bonsais or for bonkei, boxes, lanterns with wooden frames and translucent paper, neck and elbow holders, and jardinieres.
Types
For sitting
Seating is amongst the oldest known furniture types, and authors including Encyclopædia Britannica regard it as the most important. In addition to the functional design, seating has had an important decorative element from ancient times to the present day. This includes carved and sculpted pieces intended as works of art, as well as the styling of seats to indicate social importance, with senior figures or leaders granted the use of specially designed seats.
The simplest form of seat is the chair, a piece of furniture designed to allow a single person to sit down, with a back, legs, and a platform for sitting. Chairs often feature cushions made from various fabrics.
Types of wood used
Different types of wood bear unique signature marks that can aid in identifying the type. Hardwood and softwood are the two main categories of wood. Both hardwoods and softwoods are used in furniture manufacturing, and each has its own specific uses. Deciduous trees, which have broad leaves that change color periodically throughout the year, are the source of hardwood. Coniferous trees, also known as cone-bearing trees, have small leaves or needles that stay on the tree throughout the year. Common softwoods used include pine, redwood and yew. Higher-quality furniture tends to be made of hardwood, including oak, maple, mahogany, teak, walnut, cherry and birch. The highest-quality wood will have been air-dried to rid it of its moisture.
Cherry
A popular furniture hardwood is American black cherry, which grows mostly in the eastern United States. Cherry wood is a light reddish brown to brown that deepens into a rich color as it ages. Cherry has a tighter grain than birch and is softer. Much cherry lumber is narrow, yet it has been used to make many fine classic furniture pieces.
Birch
Birch is a sturdy, durable, even-textured hardwood that is common in the United States and Canada. The wood appears white or creamy yellow to light brown with a crimson tinge in its natural state. Birch is frequently stained to complement other types of wood in furniture. Birch is used to make a lot of transparent, cabinet-grade plywood because it absorbs stain well and finishes beautifully. Birch is frequently used to construct interior doors and cupboards in addition to furniture.
Restoration of furniture
Restoring a piece of furniture may imply attempting to repair and revive the original finish in some way. More often than not, this entails removing the existing treatment and preparing the raw wood for a new finish. Methods for repair depend on what kind of wood it is: solid or veneered, hardwood or softwood, open grained or closed grained. These variables can sometimes decide if a piece of furniture is worth repairing, as well as the type of repairs and finish it will require if it is restored. The three main methods of restoring furniture are rejuvenation, repair, and refinishing.
Rejuvenate The piece can be restored simply by cleaning and waxing the surface while preserving the current finish. This works on wooden furniture that is still in good shape and is the simplest restoration method.
Repair This process fixes dents and cracks by touching up worn areas without stripping the surface; with this technique, the finish can be maintained while the object is repaired with specialized products.
Refinish Remove whatever finish remains (for example, old paint) with a finish-stripper product, or lightly sand the area down, and then apply a wood finish such as oil or wax to protect and seal the wood.
Cleaning Remove dirt, dust, and grime from the furniture using a mild soap or specialized furniture cleaner.
Standards for design, functionality and safety
EN 527 Office furniture – Work tables and desks: This European standard specifies requirements and test methods for office work tables and desks, ensuring their functionality and safety.
EN 1335 Office furniture – Office work chair: This European standard sets requirements for office chairs, focusing on ergonomics and comfort to promote user well-being and productivity.
ANSI/BIFMA X 5.1 Office Seating: This American National Standard, published by the Business and Institutional Furniture Manufacturers Association (BIFMA), provides requirements for the performance and durability of office seating.
DIN 4551 Office furniture; revolving office chair: This German standard covers revolving office chairs with adjustable backrests, armrests, and height, ensuring their quality and safety.
EN 581 Outdoor furniture – Seating and tables for camping, domestic and contract use: This European standard specifies the requirements for outdoor seating and tables used in various settings, including camping and domestic use.
EN 1728:2014 Furniture – Seating – Test methods for the determination of strength and durability: This European standard outlines test methods to assess the strength and durability of seating furniture, last updated in 2014.
EN 1730:2012 Furniture – Test methods for the determination of stability, strength, and durability: This European standard provides test methods to evaluate the stability, strength, and durability of various types of furniture.
BS 4875 Furniture. Strength and stability of furniture: This British Standard focuses on determining the stability of non-domestic storage furniture, helping ensure its safety and reliability.
EN 747 Furniture – Bunk beds and high beds – Test methods for the determination of stability, strength, and durability: This European standard sets test methods to assess the stability, strength, and durability of bunk beds and high beds.
EN 13150 Workbenches for laboratories – Safety requirements and test methods: This European standard specifies safety requirements and test methods for laboratory workbenches to ensure safe working conditions.
EN 1729 Educational furniture, chairs, and tables for educational institutions: This European standard outlines requirements for educational furniture, including chairs and tables, to support comfort and ergonomics in educational settings.
RAL-GZ 430 Furniture standard from Germany: RAL is a German standardization organization, and RAL-GZ 430 provides guidelines and standards for various types of furniture in Germany.
NEN 1812 Furniture standard from the Netherlands: NEN is the Dutch Institute for Standardization, and NEN 1812 sets standards for furniture in the Netherlands.
GB 28007-2011 Children's furniture – General technical requirements for children's furniture: This Chinese standard specifies technical requirements for children's furniture designed and manufactured for children aged 3 to 14.
BS 5852: 2006 Methods of test for assessment of the ignitability of upholstered seating: This British Standard outlines test methods to assess the ignitability of upholstered seating, both by smoldering and flaming ignition sources.
BS 7176: This British Standard specifies requirements for the resistance to ignition of upholstered furniture used in non-domestic settings through composite testing. These standards help ensure the quality, safety, and performance of various types of furniture in different regions and applications. Manufacturers and consumers often use these standards as guidelines to meet specific requirements and ensure product reliability.
See also
Casters which make some furniture moveable
Furniture designer
Furniture museum
Furniture repair
Metal furniture
Multifunctional furniture
Notes
References
External links
Images of online furniture design available from the Visual Arts Data Service (VADS) – including images from the Design Council Slide Collection.
History of Furniture Timeline From Maltwood Art Museum and Gallery, University of Victoria
Illustrated History Of Furniture
Home Economics Archive: Tradition, Research, History (HEARTH) An e-book collection of over 1,000 books on home economics spanning 1850 to 1950, created by Cornell University's Mann Library. Includes several hundred works on the furniture and interior design in this period, itemized in a specific bibliography.
American Furniture in The Metropolitan Museum of Art, a fully digitized 2 volume exhibition catalog
Decorative arts
Home
Industrial design
Domestic implements
Wood-related terminology | Furniture | [
"Engineering"
] | 5,980 | [
"Industrial design",
"Design engineering",
"Design"
] |
48,628 | https://en.wikipedia.org/wiki/Gaussian%20integer | In number theory, a Gaussian integer is a complex number whose real and imaginary parts are both integers. The Gaussian integers, with ordinary addition and multiplication of complex numbers, form an integral domain, usually written as $\mathbf{Z}[i]$.
Gaussian integers share many properties with integers: they form a Euclidean domain, and thus have a Euclidean division and a Euclidean algorithm; this implies unique factorization and many related properties. However, Gaussian integers do not have a total ordering that respects arithmetic.
Gaussian integers are algebraic integers and form the simplest ring of quadratic integers.
Gaussian integers are named after the German mathematician Carl Friedrich Gauss.
Basic definitions
The Gaussian integers are the set
$\mathbf{Z}[i] = \{a + bi \mid a, b \in \mathbf{Z}\}, \qquad \text{where } i^2 = -1.$
In other words, a Gaussian integer is a complex number such that its real and imaginary parts are both integers.
Since the Gaussian integers are closed under addition and multiplication, they form a commutative ring, which is a subring of the field of complex numbers. It is thus an integral domain.
When considered within the complex plane, the Gaussian integers constitute the 2-dimensional integer lattice.
The conjugate of a Gaussian integer $a + bi$ is the Gaussian integer $a - bi$.
The norm of a Gaussian integer is its product with its conjugate:
$N(a + bi) = (a + bi)(a - bi) = a^2 + b^2.$
The norm of a Gaussian integer is thus the square of its absolute value as a complex number. The norm of a Gaussian integer is a nonnegative integer, which is a sum of two squares. Thus a norm cannot be of the form $4k + 3$, with $k$ integer.
The norm is multiplicative, that is, one has
$N(zw) = N(z)\,N(w)$
for every pair of Gaussian integers $z, w$. This can be shown directly, or by using the multiplicative property of the modulus of complex numbers.
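As a quick numerical illustration (a worked example, not from the article): $(1+2i)(3+i) = 1 + 7i$, and indeed $N(1+7i) = 50 = 5 \cdot 10 = N(1+2i)\,N(3+i)$.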
The units of the ring of Gaussian integers (that is the Gaussian integers whose multiplicative inverse is also a Gaussian integer) are precisely the Gaussian integers with norm 1, that is, $1$, $-1$, $i$ and $-i$.
Euclidean division
Gaussian integers have a Euclidean division (division with remainder) similar to that of integers and polynomials. This makes the Gaussian integers a Euclidean domain, and implies that Gaussian integers share with integers and polynomials many important properties such as the existence of a Euclidean algorithm for computing greatest common divisors, Bézout's identity, the principal ideal property, Euclid's lemma, the unique factorization theorem, and the Chinese remainder theorem, all of which can be proved using only Euclidean division.
A Euclidean division algorithm takes, in the ring of Gaussian integers, a dividend $a$ and divisor $b \ne 0$, and produces a quotient $q$ and remainder $r$ such that
$a = bq + r \qquad \text{and} \qquad N(r) < N(b).$
In fact, one may make the remainder smaller:
$a = bq + r \qquad \text{and} \qquad N(r) \le \frac{N(b)}{2}.$
Even with this better inequality, the quotient and the remainder are not necessarily unique, but one may refine the choice to ensure uniqueness.
To prove this, one may consider the complex number quotient $x + iy = \frac{a}{b}$. There are unique integers $m$ and $n$ such that $-\frac12 < x - m \le \frac12$ and $-\frac12 < y - n \le \frac12$, and thus $N(x - m + i(y - n)) \le \frac12$. Taking $q = m + ni$, one has
$a = bq + r,$
with
$r = b\bigl(x - m + i(y - n)\bigr)$
and
$N(r) \le \tfrac12 N(b) < N(b).$
The choice of $x - m$ and $y - n$ in a semi-open interval is required for uniqueness.
This definition of Euclidean division may be interpreted geometrically in the complex plane (see the figure), by remarking that the distance from a complex number to the closest Gaussian integer is at most $\frac{\sqrt 2}{2}$.
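This rounding recipe is easy to make concrete. Below is a minimal Python sketch (the names gi_divmod and norm are illustrative choices, not from the article); it represents Gaussian integers as Python complex numbers with integer parts and finds the quotient by rounding the exact complex quotient to the nearest lattice point. Note that Python's round breaks ties toward even numbers rather than by the semi-open-interval convention above, which can affect uniqueness at ties but not the bound $N(r) \le N(b)/2$:

```python
def gi_divmod(a: complex, b: complex):
    """Euclidean division a = b*q + r in the Gaussian integers.

    a and b are complex numbers with integer real and imaginary
    parts, b != 0.  The quotient is the nearest Gaussian integer to
    the exact complex quotient a/b, which guarantees N(r) <= N(b)/2.
    """
    z = a / b                                   # exact quotient in C
    q = complex(round(z.real), round(z.imag))   # nearest lattice point
    r = a - b * q                               # remainder
    return q, r

def norm(z: complex) -> int:
    """Norm N(a+bi) = a^2 + b^2 of a Gaussian integer."""
    return round(z.real) ** 2 + round(z.imag) ** 2

# Example: dividing 5+3i by 2-8i gives quotient i and remainder -3+i.
q, r = gi_divmod(5 + 3j, 2 - 8j)
assert (2 - 8j) * q + r == 5 + 3j
assert 2 * norm(r) <= norm(2 - 8j)
```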
Principal ideals
Since the ring of Gaussian integers is a Euclidean domain, it is a principal ideal domain, which means that every ideal of $\mathbf{Z}[i]$ is principal. Explicitly, an ideal $I$ is a subset of a ring $R$ such that every sum of elements of $I$ and every product of an element of $I$ by an element of $R$ belong to $I$. An ideal is principal if it consists of all multiples of a single element $g$, that is, it has the form
$gR = \{gx \mid x \in R\}.$
In this case, one says that the ideal is generated by $g$, or that $g$ is a generator of the ideal.
Every ideal $I$ in the ring of the Gaussian integers is principal, because, if one chooses in $I$ a nonzero element $g$ of minimal norm, then for every element $x$ of $I$, the remainder of Euclidean division of $x$ by $g$ belongs also to $I$ and has a norm that is smaller than that of $g$; because of the choice of $g$, this norm is zero, and thus the remainder is also zero. That is, one has $x = qg$, where $q$ is the quotient.
For any $g$, the ideal generated by $g$ is also generated by any associate of $g$, that is, $g$, $gi$, $-g$, $-gi$; no other element generates the same ideal. As all the generators of an ideal have the same norm, the norm of an ideal is the norm of any of its generators.
In some circumstances, it is useful to choose, once for all, a generator for each ideal. There are two classical ways for doing that, both considering first the ideals of odd norm. If $g = a + bi$ has an odd norm $a^2 + b^2$, then one of $a$ and $b$ is odd, and the other is even. Thus $g$ has exactly one associate with a real part that is odd and positive. In his original paper, Gauss made another choice, by choosing the unique associate such that the remainder of its division by $2 + 2i$ is one. In fact, as $N(2 + 2i) = 8$, the norm of the remainder is not greater than 4. As this norm is odd, and 3 is not the norm of a Gaussian integer, the norm of the remainder is one, that is, the remainder is a unit. Multiplying $g$ by the inverse of this unit, one finds an associate that has one as a remainder, when divided by $2 + 2i$.
If the norm of $g$ is even, then either $g = 2^k h$ or $g = 2^k h(1 + i)$, where $k$ is a positive integer, and $N(h)$ is odd. Thus, one chooses the associate of $g$ for getting an $h$ which fits the choice of the associates for elements of odd norm.
Gaussian primes
As the Gaussian integers form a principal ideal domain, they also form a unique factorization domain. This implies that a Gaussian integer is irreducible (that is, it is not the product of two non-units) if and only if it is prime (that is, it generates a prime ideal).
The prime elements of are also known as Gaussian primes. An associate of a Gaussian prime is also a Gaussian prime. The conjugate of a Gaussian prime is also a Gaussian prime (this implies that Gaussian primes are symmetric about the real and imaginary axes).
A positive integer is a Gaussian prime if and only if it is a prime number that is congruent to 3 modulo 4 (that is, it may be written $4n + 3$, with $n$ a nonnegative integer). The other prime numbers are not Gaussian primes, but each is the product of two conjugate Gaussian primes.
A Gaussian integer $a + bi$ is a Gaussian prime if and only if either:
one of $a$, $b$ is zero and the absolute value of the other is a prime number of the form $4n + 3$ (with $n$ a nonnegative integer), or
both are nonzero and $a^2 + b^2$ is a prime number (which will not be of the form $4n + 3$).
In other words, a Gaussian integer is a Gaussian prime if and only if either its norm is a prime number, or it is the product of a unit ($\pm 1$, $\pm i$) and a prime number of the form $4n + 3$.
It follows that there are three cases for the factorization of a prime natural number $p$ in the Gaussian integers:
If $p$ is congruent to 3 modulo 4, then it is a Gaussian prime; in the language of algebraic number theory, $p$ is said to be inert in the Gaussian integers.
If $p$ is congruent to 1 modulo 4, then it is the product of a Gaussian prime by its conjugate, both of which are non-associated Gaussian primes (neither is the product of the other by a unit); $p$ is said to be a decomposed prime in the Gaussian integers. For example, $5 = (2+i)(2-i)$ and $13 = (3+2i)(3-2i)$.
If $p = 2$, we have $2 = (1+i)(1-i) = i(1-i)^2$; that is, 2 is the product of the square of a Gaussian prime by a unit; it is the unique ramified prime in the Gaussian integers.
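The characterization above translates directly into a short test. A hedged Python sketch (function names are mine, and the trial-division primality test is only suitable for small inputs):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test for ordinary integers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_gaussian_prime(a: int, b: int) -> bool:
    """Test whether a + bi is a Gaussian prime: if one coordinate is
    zero, the other must be (up to sign) a prime congruent to 3 mod 4;
    otherwise the norm a^2 + b^2 must be a prime number."""
    if a == 0:
        return is_prime(abs(b)) and abs(b) % 4 == 3
    if b == 0:
        return is_prime(abs(a)) and abs(a) % 4 == 3
    return is_prime(a * a + b * b)

# 1+i, 3 and 2+i are Gaussian primes; 2 = i(1-i)^2 and 5 = (2+i)(2-i) are not.
assert is_gaussian_prime(1, 1) and is_gaussian_prime(3, 0) and is_gaussian_prime(2, 1)
assert not is_gaussian_prime(2, 0) and not is_gaussian_prime(5, 0)
```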
Unique factorization
As for every unique factorization domain, every Gaussian integer may be factored as a product of a unit and Gaussian primes, and this factorization is unique up to the order of the factors, and the replacement of any prime by any of its associates (together with a corresponding change of the unit factor).
If one chooses, once for all, a fixed Gaussian prime for each equivalence class of associated primes, and if one takes only these selected primes in the factorization, then one obtains a prime factorization which is unique up to the order of the factors. With the choices described above, the resulting unique factorization has the form
$u(1+i)^{e_0} p_1^{e_1} \cdots p_k^{e_k},$
where $u$ is a unit (that is, $u \in \{1, -1, i, -i\}$), $e_0$ and $k$ are nonnegative integers, $e_1, \ldots, e_k$ are positive integers, and $p_1, \ldots, p_k$ are distinct Gaussian primes such that, depending on the choice of selected associates,
either $p_j = a_j + i b_j$ with $a_j$ odd and positive, and $b_j$ even,
or the remainder of the Euclidean division of $p_j$ by $2 + 2i$ equals 1 (this is Gauss's original choice).
An advantage of the second choice is that the selected associates behave well under products for Gaussian integers of odd norm. On the other hand, the selected associates for the real Gaussian primes are negative integers. For example, the factorization of 231 in the integers, and with the first choice of associates, is $3 \cdot 7 \cdot 11$, while it is $(-1) \cdot (-3) \cdot (-7) \cdot (-11)$ with the second choice.
Gaussian rationals
The field of Gaussian rationals is the field of fractions of the ring of Gaussian integers. It consists of the complex numbers whose real and imaginary part are both rational.
The ring of Gaussian integers is the integral closure of the integers in the Gaussian rationals.
This implies that Gaussian integers are quadratic integers and that a Gaussian rational is a Gaussian integer if and only if it is a solution of an equation
$x^2 + cx + d = 0,$
with $c$ and $d$ integers. In fact $a + bi$ is a solution of the equation
$x^2 - 2ax + a^2 + b^2 = 0,$
and this equation has integer coefficients if and only if $2a$ and $a^2 + b^2$ are both integers.
Greatest common divisor
As for any unique factorization domain, a greatest common divisor (gcd) of two Gaussian integers $a, b$ is a Gaussian integer $d$ that is a common divisor of $a$ and $b$, which has all common divisors of $a$ and $b$ as divisor. That is (where $\mid$ denotes the divisibility relation),
$d \mid a$ and $d \mid b$, and
$c \mid a$ and $c \mid b$ implies $c \mid d$.
Thus, greatest is meant relative to the divisibility relation, and not for an ordering of the ring (for integers, both meanings of greatest coincide).
More technically, a greatest common divisor of $a$ and $b$ is a generator of the ideal generated by $a$ and $b$ (this characterization is valid for principal ideal domains, but not, in general, for unique factorization domains).
The greatest common divisor of two Gaussian integers is not unique, but is defined up to the multiplication by a unit. That is, given a greatest common divisor $d$ of $a$ and $b$, the greatest common divisors of $a$ and $b$ are $d$, $-d$, $id$, and $-id$.
There are several ways for computing a greatest common divisor of two Gaussian integers $a$ and $b$. When one knows the prime factorizations of $a$ and $b$,
$a = i^k \prod_m p_m^{\nu_m}, \qquad b = i^n \prod_m p_m^{\mu_m},$
where the primes $p_m$ are pairwise non-associated and the exponents $\nu_m, \mu_m$ nonnegative, a greatest common divisor is
$\prod_m p_m^{\lambda_m},$
with
$\lambda_m = \min(\nu_m, \mu_m).$
Unfortunately, except in simple cases, the prime factorization is difficult to compute, and the Euclidean algorithm leads to a much easier (and faster) computation. This algorithm consists of replacing the input $(a, b)$ by $(b, r)$, where $r$ is the remainder of the Euclidean division of $a$ by $b$, and repeating this operation until getting a zero remainder, that is a pair $(d, 0)$. This process terminates, because, at each step, the norm of the second Gaussian integer decreases. The resulting $d$ is a greatest common divisor, because (at each step) $b$ and $r = a - bq$ have the same divisors as $a$ and $b$, and thus the same greatest common divisor.
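A minimal Python sketch of this Euclidean algorithm (again with illustrative names; the nearest-lattice-point division from the earlier sketch is inlined so the snippet stands alone):

```python
def gi_gcd(a: complex, b: complex) -> complex:
    """Gcd of two Gaussian integers via the Euclidean algorithm:
    repeatedly replace (a, b) by (b, r) until the remainder is 0.
    The result is defined only up to multiplication by 1, -1, i, -i."""
    while b != 0:
        z = a / b
        q = complex(round(z.real), round(z.imag))  # nearest Gaussian integer
        a, b = b, a - b * q                        # norm of b strictly decreases
    return a

# The example discussed below: gcd(5+3i, 2-8i) is an associate of 1+i.
d = gi_gcd(5 + 3j, 2 - 8j)
assert round(d.real) ** 2 + round(d.imag) ** 2 == 2   # norm 2
```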
This method of computation works always, but is not as simple as for integers because Euclidean division is more complicated. Therefore, a third method is often preferred for hand-written computations. It consists in remarking that the norm $N(d)$ of the greatest common divisor of $a$ and $b$ is a common divisor of $N(a)$, $N(b)$, and $N(a+b)$. When the greatest common divisor $D$ of these three integers has few factors, then it is easy to test, for common divisor, all Gaussian integers with a norm dividing $D$.
For example, if $a = 5 + 3i$ and $b = 2 - 8i$, one has $N(a) = 34$, $N(b) = 68$, and $N(a+b) = 74$. As the greatest common divisor of the three norms is 2, the greatest common divisor of $a$ and $b$ has 1 or 2 as a norm. As a Gaussian integer of norm 2 is necessarily associated to $1 + i$, and as $1 + i$ divides $a$ and $b$, then the greatest common divisor is $1 + i$.
If $b$ is replaced by its conjugate $\bar b = 2 + 8i$, then the greatest common divisor of the three norms is 34, the norm of $a$, thus one may guess that the greatest common divisor is $a$, that is, that $a \mid \bar b$. In fact, one has $2 + 8i = (5 + 3i)(1 + i)$.
Congruences and residue classes
Given a Gaussian integer $z_0$, called a modulus, two Gaussian integers $z_1, z_2$ are congruent modulo $z_0$, if their difference is a multiple of $z_0$, that is if there exists a Gaussian integer $q$ such that $z_1 - z_2 = q z_0$. In other words, two Gaussian integers are congruent modulo $z_0$, if their difference belongs to the ideal generated by $z_0$. This is denoted as $z_1 \equiv z_2 \pmod{z_0}$.
The congruence modulo $z_0$ is an equivalence relation (also called a congruence relation), which defines a partition of the Gaussian integers into equivalence classes, called here congruence classes or residue classes. The set of the residue classes is usually denoted $\mathbf{Z}[i]/z_0\mathbf{Z}[i]$, or $\mathbf{Z}[i]/\langle z_0 \rangle$, or simply $\mathbf{Z}[i]/z_0$.
The residue class of a Gaussian integer $z$ is the set
$\bar z = \left\{ z' \in \mathbf{Z}[i] \mid z' \equiv z \pmod{z_0} \right\}$
of all Gaussian integers that are congruent to $z$. It follows that $\bar z_1 = \bar z_2$ if and only if $z_1 \equiv z_2 \pmod{z_0}$.
Addition and multiplication are compatible with congruences. This means that $z_1 \equiv z_2 \pmod{z_0}$ and $w_1 \equiv w_2 \pmod{z_0}$ imply $z_1 + w_1 \equiv z_2 + w_2 \pmod{z_0}$ and $z_1 w_1 \equiv z_2 w_2 \pmod{z_0}$.
This defines well-defined operations (that is, independent of the choice of representatives) on the residue classes:
$\bar z_1 + \bar z_2 = \overline{z_1 + z_2} \qquad\text{and}\qquad \bar z_1 \cdot \bar z_2 = \overline{z_1 z_2}.$
With these operations, the residue classes form a commutative ring, the quotient ring of the Gaussian integers by the ideal generated by $z_0$, which is also traditionally called the residue class ring modulo $z_0$ (for more details, see Quotient ring).
Examples
There are exactly two residue classes for the modulus $1 + i$, namely $\bar 0$ (all multiples of $1 + i$) and $\bar 1$, which form a checkerboard pattern in the complex plane. These two classes form thus a ring with two elements, which is, in fact, a field, the unique (up to an isomorphism) field with two elements, and may thus be identified with the integers modulo 2. These two classes may be considered as a generalization of the partition of integers into even and odd integers. Thus one may speak of even and odd Gaussian integers (Gauss divided further even Gaussian integers into even, that is divisible by 2, and half-even).
For the modulus 2 there are four residue classes, namely $\bar 0, \bar 1, \bar i, \overline{1+i}$. These form a ring with four elements, in which $x + x = 0$ for every $x$. Thus this ring is not isomorphic with the ring of integers modulo 4, another ring with four elements. One has $\overline{1+i}^{\,2} = \bar 0$, since $(1+i)^2 = 2i$ is a multiple of 2; thus this ring is not the finite field with four elements, nor the direct product of two copies of the ring of integers modulo 2.
For the modulus $2 + 2i$ there are eight residue classes, namely $\bar 0, \pm\bar 1, \pm\bar i, \overline{1 \pm i}, \bar 2$, whereof four contain only even Gaussian integers and four contain only odd Gaussian integers.
Describing residue classes
Given a modulus $z_0$, all elements of a residue class have the same remainder for the Euclidean division by $z_0$, provided one uses the division with unique quotient and remainder, which is described above. Thus enumerating the residue classes is equivalent with enumerating the possible remainders. This can be done geometrically in the following way.
In the complex plane, one may consider a square grid, whose squares are delimited by the two families of lines
$\operatorname{Re}\frac{z}{z_0} = s + \tfrac12 \qquad\text{and}\qquad \operatorname{Im}\frac{z}{z_0} = t + \tfrac12,$
with $s$ and $t$ integers (blue lines in the figure). These divide the plane in semi-open squares (where $m$ and $n$ are integers)
$Q_{mn} = \left\{ z_0(x + iy) \;:\; x \in \left(m - \tfrac12,\, m + \tfrac12\right],\ y \in \left(n - \tfrac12,\, n + \tfrac12\right] \right\}.$
The semi-open intervals that occur in the definition of $Q_{mn}$ have been chosen in order that every complex number belong to exactly one square; that is, the squares $Q_{mn}$ form a partition of the complex plane. One has
$Q_{mn} = (m + in)z_0 + Q_{00}.$
This implies that every Gaussian integer is congruent modulo $z_0$ to a unique Gaussian integer in $Q_{00}$ (the green square in the figure), which is its remainder for the division by $z_0$. In other words, every residue class contains exactly one element in $Q_{00}$.
The Gaussian integers in $Q_{00}$ (or in its boundary) are sometimes called minimal residues because their norms are not greater than the norms of any other Gaussian integers in the same residue class (Gauss called them absolutely smallest residues).
From this one can deduce by geometrical considerations, that the number of residue classes modulo a Gaussian integer $z_0$ equals its norm $N(z_0)$ (see below for a proof; similarly, for integers, the number of residue classes modulo $n$ is its absolute value $|n|$).
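This count is easy to check empirically. The sketch below (illustrative, in exact integer arithmetic; its gi_mod rounds quotient components half-up, a tie-breaking convention different from the semi-open squares above but equally valid, since any fixed convention picks one representative per class) reduces every Gaussian integer in a box to a canonical remainder and counts the distinct results:

```python
def gi_mod(a: int, b: int, c: int, d: int):
    """Canonical remainder of a+bi modulo c+di: the quotient
    components are the integers nearest to the components of
    (a+bi)/(c+di), with ties rounded up."""
    n = c * c + d * d                      # N(c+di)
    p, q = a * c + b * d, b * c - a * d    # (a+bi)(c-di) = p + qi
    u = (2 * p + n) // (2 * n)             # round(p/n), ties up
    v = (2 * q + n) // (2 * n)             # round(q/n), ties up
    return a - (c * u - d * v), b - (c * v + d * u)

def residue_count(c: int, d: int, box: int = 12) -> int:
    """Number of distinct remainders modulo c+di among the
    Gaussian integers a+bi with |a|, |b| <= box."""
    return len({gi_mod(a, b, c, d)
                for a in range(-box, box + 1)
                for b in range(-box, box + 1)})

# The count matches the norm: N(1+i)=2, N(2)=4, N(2+2i)=8, N(3+i)=10.
for (c, d) in [(1, 1), (2, 0), (2, 2), (3, 1)]:
    assert residue_count(c, d) == c * c + d * d
```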
Residue class fields
The residue class ring modulo a Gaussian integer $z_0$ is a field if and only if $z_0$ is a Gaussian prime.
If $z_0$ is a decomposed prime or the ramified prime $1 + i$ (that is, if its norm $N(z_0)$ is a prime number $p$, which is either 2 or a prime congruent to 1 modulo 4), then the residue class field has a prime number of elements (that is, $p$). It is thus isomorphic to the field of the integers modulo $p$.
If, on the other hand, $z_0$ is an inert prime (that is, $N(z_0) = p^2$ is the square of a prime number $p$, which is congruent to 3 modulo 4), then the residue class field has $p^2$ elements, and it is an extension of degree 2 (unique, up to an isomorphism) of the prime field with $p$ elements (the integers modulo $p$).
Primitive residue class group and Euler's totient function
Many theorems (and their proofs) for moduli of integers can be directly transferred to moduli of Gaussian integers, if one replaces the absolute value of the modulus by the norm. This holds especially for the primitive residue class group (also called multiplicative group of integers modulo $n$) and Euler's totient function. The primitive residue class group of a modulus $z$ is defined as the subset of its residue classes which contains all residue classes $\bar a$ that are coprime to $z$, i.e. $\gcd(a, z) = 1$. Obviously, this system builds a multiplicative group. The number of its elements shall be denoted by $\phi(z)$ (analogously to Euler's totient function $\phi(n)$ for integers $n$).
For Gaussian primes $p$ it immediately follows that $\phi(p) = N(p) - 1$, and for arbitrary composite Gaussian integers
$z = i^k \prod_m p_m^{\nu_m}$
Euler's product formula can be derived as
$\phi(z) = N(z) \prod_{p_m \mid z} \left( 1 - \frac{1}{N(p_m)} \right),$
where the product is to be built over all prime divisors $p_m$ of $z$ (with $\nu_m > 0$). Also the important theorem of Euler can be directly transferred:
For all $a$ with $\gcd(a, z) = 1$, it holds that $a^{\phi(z)} \equiv 1 \pmod z$.
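As a quick sanity check of Euler's theorem in $\mathbf{Z}[i]$ (a worked example, not from the article): for the inert prime $z = 3$ one has $\phi(3) = N(3) - 1 = 8$, and indeed, for $a = 1 + i$ (coprime to 3),
$(1+i)^8 = \bigl((1+i)^2\bigr)^4 = (2i)^4 = 16 \equiv 1 \pmod 3.$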
Historical background
The ring of Gaussian integers was introduced by Carl Friedrich Gauss in his second monograph on quartic reciprocity (1832). The theorem of quadratic reciprocity (which he had first succeeded in proving in 1796) relates the solvability of the congruence $x^2 \equiv q \pmod p$ to that of $x^2 \equiv p \pmod q$. Similarly, cubic reciprocity relates the solvability of $x^3 \equiv q \pmod p$ to that of $x^3 \equiv p \pmod q$, and biquadratic (or quartic) reciprocity is a relation between $x^4 \equiv q \pmod p$ and $x^4 \equiv p \pmod q$. Gauss discovered that the law of biquadratic reciprocity and its supplements were more easily stated and proved as statements about "whole complex numbers" (i.e. the Gaussian integers) than they are as statements about ordinary whole numbers (i.e. the integers).
In a footnote he notes that the Eisenstein integers are the natural domain for stating and proving results on cubic reciprocity and indicates that similar extensions of the integers are the appropriate domains for studying higher reciprocity laws.
This paper not only introduced the Gaussian integers and proved they are a unique factorization domain, it also introduced the terms norm, unit, primary, and associate, which are now standard in algebraic number theory.
Unsolved problems
Most of the unsolved problems are related to distribution of Gaussian primes in the plane.
Gauss's circle problem does not deal with the Gaussian integers per se, but instead asks for the number of lattice points inside a circle of a given radius centered at the origin. This is equivalent to determining the number of Gaussian integers with norm less than a given value.
There are also conjectures and unsolved problems about the Gaussian primes. Two of them are:
The real and imaginary axes have the infinite set of Gaussian primes 3, 7, 11, 19, ... and their associates. Are there any other lines that have infinitely many Gaussian primes on them? In particular, are there infinitely many Gaussian primes of the form $1 + ki$?
Is it possible to walk to infinity using the Gaussian primes as stepping stones and taking steps of a uniformly bounded length? This is known as the Gaussian moat problem; it was posed in 1962 by Basil Gordon and remains unsolved.
See also
Algebraic integer
Cyclotomic field
Eisenstein integer
Eisenstein prime
Hurwitz quaternion
Proofs of Fermat's theorem on sums of two squares
Proofs of quadratic reciprocity
Quadratic integer
Splitting of prime ideals in Galois extensions describes the structure of prime ideals in the Gaussian integers
Table of Gaussian integer factorizations
Notes
References
Carl Friedrich Gauss, Theoria residuorum biquadraticorum. Commentatio secunda. (1832); reprinted in Werke, Georg Olms Verlag, Hildesheim, 1973, pp. 93–148. A German translation of this paper is available online in "H. Maser (ed.): Carl Friedrich Gauss' Arithmetische Untersuchungen über höhere Arithmetik. Springer, Berlin 1889, pp. 534".
External links
IMO Compendium text on quadratic extensions and Gaussian Integers in problem solving
Keith Conrad, The Gaussian Integers.
Algebraic numbers
Cyclotomic fields
Lattice points
Quadratic irrational numbers
Integers
Complex numbers | Gaussian integer | [
"Mathematics"
] | 4,563 | [
"Lattice points",
"Mathematical objects",
"Elementary mathematics",
"Algebraic numbers",
"Complex numbers",
"Integers",
"Numbers",
"Number theory"
] |
48,629 | https://en.wikipedia.org/wiki/Normal%20space | In topology and related branches of mathematics, a normal space is a topological space X that satisfies Axiom T4: every two disjoint closed sets of X have disjoint open neighborhoods. A normal Hausdorff space is also called a T4 space. These conditions are examples of separation axioms and their further strengthenings define completely normal Hausdorff spaces, or T5 spaces, and perfectly normal Hausdorff spaces, or T6 spaces.
Definitions
A topological space X is a normal space if, given any disjoint closed sets E and F, there are neighbourhoods U of E and V of F that are also disjoint. More intuitively, this condition says that E and F can be separated by neighbourhoods.
A T4 space is a T1 space X that is normal; this is equivalent to X being normal and Hausdorff.
A completely normal space, or hereditarily normal space, is a topological space X such that every subspace of X is a normal space. It turns out that X is completely normal if and only if every two separated sets can be separated by neighbourhoods. Also, X is completely normal if and only if every open subset of X is normal with the subspace topology.
A T5 space, or completely T4 space, is a completely normal T1 space X, which implies that X is Hausdorff; equivalently, every subspace of X must be a T4 space.
A perfectly normal space is a topological space in which every two disjoint closed sets $A$ and $B$ can be precisely separated by a function, in the sense that there is a continuous function $f$ from $X$ to the interval $[0, 1]$ such that $f^{-1}(0) = A$ and $f^{-1}(1) = B$. This is a stronger separation property than normality, as by Urysohn's lemma disjoint closed sets in a normal space can be separated by a function, in the sense of $A \subseteq f^{-1}(0)$ and $B \subseteq f^{-1}(1)$, but not precisely separated in general. It turns out that X is perfectly normal if and only if X is normal and every closed set is a Gδ set. Equivalently, X is perfectly normal if and only if every closed set is the zero set of a continuous function. The equivalence between these three characterizations is called Vedenissoff's theorem. Every perfectly normal space is completely normal, because perfect normality is a hereditary property.
A T6 space, or perfectly T4 space, is a perfectly normal Hausdorff space.
Note that the terms "normal space" and "T4" and derived concepts occasionally have a different meaning. (Nonetheless, "T5" always means the same as "completely T4", whatever the meaning of T4 may be.) The definitions given here are the ones usually used today. For more on this issue, see History of the separation axioms.
Terms like "normal regular space" and "normal Hausdorff space" also turn up in the literature—they simply mean that the space both is normal and satisfies the other condition mentioned. In particular, a normal Hausdorff space is the same thing as a T4 space. Given the historical confusion of the meaning of the terms, verbal descriptions when applicable are helpful, that is, "normal Hausdorff" instead of "T4", or "completely normal Hausdorff" instead of "T5".
Fully normal spaces and fully T4 spaces are discussed elsewhere; they are related to paracompactness.
A locally normal space is a topological space where every point has an open neighbourhood that is normal. Every normal space is locally normal, but the converse is not true. A classical example of a completely regular locally normal space that is not normal is the Nemytskii plane.
Examples of normal spaces
Most spaces encountered in mathematical analysis are normal Hausdorff spaces, or at least normal regular spaces:
All metric spaces (and hence all metrizable spaces) are perfectly normal Hausdorff;
All pseudometric spaces (and hence all pseudometrisable spaces) are perfectly normal regular, although not in general Hausdorff;
All compact Hausdorff spaces are normal;
In particular, the Stone–Čech compactification of a Tychonoff space is normal Hausdorff;
Generalizing the above examples, all paracompact Hausdorff spaces are normal, and all paracompact regular spaces are normal;
All paracompact topological manifolds are perfectly normal Hausdorff. However, there exist non-paracompact manifolds that are not even normal.
All order topologies on totally ordered sets are hereditarily normal and Hausdorff.
Every regular second-countable space is completely normal, and every regular Lindelöf space is normal.
Also, all fully normal spaces are normal (even if not regular). Sierpiński space is an example of a normal space that is not regular.
Examples of non-normal spaces
An important example of a non-normal topology is given by the Zariski topology on an algebraic variety or on the spectrum of a ring, which is used in algebraic geometry.
A non-normal space of some relevance to analysis is the topological vector space of all functions from the real line R to itself, with the topology of pointwise convergence.
More generally, a theorem of Arthur Harold Stone states that the product of uncountably many non-compact metric spaces is never normal.
Properties
Every closed subset of a normal space is normal. The continuous and closed image of a normal space is normal.
The main significance of normal spaces lies in the fact that they admit "enough" continuous real-valued functions, as expressed by the following theorems valid for any normal space X.
Urysohn's lemma:
If A and B are two disjoint closed subsets of X, then there exists a continuous function f from X to the real line R such that f(x) = 0 for all x in A and f(x) = 1 for all x in B.
In fact, we can take the values of f to be entirely within the unit interval [0,1]. In fancier terms, disjoint closed sets are not only separated by neighbourhoods, but also separated by a function.
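For metric spaces, which are normal, such a function can be written down explicitly; a standard construction (not specific to this article), with $d(x, A) = \inf_{a \in A} d(x, a)$ denoting the distance from a point to a set, is
$f(x) = \frac{d(x, A)}{d(x, A) + d(x, B)}.$
The denominator never vanishes, since $A$ and $B$ are disjoint closed sets and $d(x, A) = 0$ exactly when $x \in A$; the same observation shows that this $f$ even separates $A$ and $B$ precisely, reflecting the fact that metric spaces are perfectly normal.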
More generally, the Tietze extension theorem:
If A is a closed subset of X and f is a continuous function from A to R, then there exists a continuous function F: X → R that extends f in the sense that F(x) = f(x) for all x in A.
The map has the lifting property with respect to a map from a certain finite topological space with five points (two open and three closed) to the space with one open and two closed points.
If U is a locally finite open cover of a normal space X, then there is a partition of unity precisely subordinate to U. This shows the relationship of normal spaces to paracompactness.
In fact, any space that satisfies any one of these three conditions must be normal.
A product of normal spaces is not necessarily normal. This fact was first proved by Robert Sorgenfrey. An example of this phenomenon is the Sorgenfrey plane. In fact, since there exist spaces which are Dowker, a product of a normal space and [0, 1] need not be normal. Also, a subset of a normal space need not be normal (i.e. not every normal Hausdorff space is a completely normal Hausdorff space), since every Tychonoff space is a subset of its Stone–Čech compactification (which is normal Hausdorff). A more explicit example is the Tychonoff plank. The only large class of product spaces of normal spaces known to be normal are the products of compact Hausdorff spaces, since both compactness (Tychonoff's theorem) and the T2 axiom are preserved under arbitrary products.
Relationships to other separation axioms
If a normal space is R0, then it is in fact completely regular.
Thus, anything from "normal R0" to "normal completely regular" is the same as what we usually call normal regular.
Taking Kolmogorov quotients, we see that all normal T1 spaces are Tychonoff.
These are what we usually call normal Hausdorff spaces.
A topological space is said to be pseudonormal if given two disjoint closed sets in it, one of which is countable, there are disjoint open sets containing them. Every normal space is pseudonormal, but not vice versa.
Counterexamples to some variations on these statements can be found in the lists above.
Specifically, Sierpiński space is normal but not regular, while the space of functions from R to itself is Tychonoff but not normal.
See also
Citations
References
Engelking, Ryszard, General Topology, Heldermann Verlag Berlin, 1989.
Properties of topological spaces
Separation axioms | Normal space | [
"Mathematics"
] | 1,813 | [
"Properties of topological spaces",
"Topological spaces",
"Topology",
"Space (mathematics)"
] |
48,631 | https://en.wikipedia.org/wiki/Paracompact%20space | In mathematics, a paracompact space is a topological space in which every open cover has an open refinement that is locally finite. These spaces were introduced by . Every compact space is paracompact. Every paracompact Hausdorff space is normal, and a Hausdorff space is paracompact if and only if it admits partitions of unity subordinate to any open cover. Sometimes paracompact spaces are defined so as to always be Hausdorff.
Every closed subspace of a paracompact space is paracompact. While compact subsets of Hausdorff spaces are always closed, this is not true for paracompact subsets. A space such that every subspace of it is a paracompact space is called hereditarily paracompact. This is equivalent to requiring that every open subspace be paracompact.
The notion of paracompact space is also studied in pointless topology, where it is more well-behaved. For example, the product of any number of paracompact locales is a paracompact locale, but the product of two paracompact spaces may not be paracompact. Compare this to Tychonoff's theorem, which states that the product of any collection of compact topological spaces is compact. However, the product of a paracompact space and a compact space is always paracompact.
Every metric space is paracompact. A topological space is metrizable if and only if it is a paracompact and locally metrizable Hausdorff space.
Definition
A cover of a set $X$ is a collection of subsets of $X$ whose union contains $X$. In symbols, if $\mathcal{U} = \{U_\alpha : \alpha \in A\}$ is an indexed family of subsets of $X$, then $\mathcal{U}$ is a cover of $X$ if
$X \subseteq \bigcup_{\alpha \in A} U_\alpha.$
A cover of a topological space is open if all its members are open sets. A refinement of a cover of a space is a new cover of the same space such that every set in the new cover is a subset of some set in the old cover. In symbols, the cover $\mathcal{V} = \{V_\beta : \beta \in B\}$ is a refinement of the cover $\mathcal{U} = \{U_\alpha : \alpha \in A\}$ if and only if, for every $V_\beta$ in $\mathcal{V}$, there exists some $U_\alpha$ in $\mathcal{U}$ such that $V_\beta \subseteq U_\alpha$.
An open cover of a space is locally finite if every point of the space has a neighborhood that intersects only finitely many sets in the cover. In symbols, $\mathcal{U} = \{U_\alpha : \alpha \in A\}$ is locally finite if and only if, for any $x$ in $X$, there exists some neighbourhood $V(x)$ of $x$ such that the set
$\{\alpha \in A : U_\alpha \cap V(x) \neq \emptyset\}$
is finite. A topological space is now said to be paracompact if every open cover has a locally finite open refinement.
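For instance (a standard illustration, not from this article): the open cover $\{(n-1,\, n+1) : n \in \mathbf{Z}\}$ of the real line is locally finite, since the neighbourhood $\bigl(x - \tfrac12,\ x + \tfrac12\bigr)$ of any point $x$ meets at most three of its members, whereas the open cover $\{(-n,\, n) : n \ge 1\}$ is not locally finite, since every neighbourhood of $0$ meets all of its members.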
This definition extends verbatim to locales, with the exception of local finiteness: an open cover $\mathcal U$ of a locale $X$ is locally finite iff the set of opens that intersect only finitely many opens in $\mathcal U$ also forms a cover of $X$. Note that an open cover on a topological space is locally finite iff it is a locally finite cover of the underlying locale.
Examples
Every compact space is paracompact.
Every regular Lindelöf space is paracompact, by Michael's theorem in the Hausdorff case. In particular, every locally compact Hausdorff second-countable space is paracompact.
The Sorgenfrey line is paracompact, even though it is neither compact, locally compact, second countable, nor metrizable.
Every CW complex is paracompact.
(Theorem of A. H. Stone) Every metric space is paracompact. Early proofs were somewhat involved, but an elementary one was found by M. E. Rudin. Existing proofs of this require the axiom of choice for the non-separable case. It has been shown that ZF theory is not sufficient to prove it, even after the weaker axiom of dependent choice is added.
A Hausdorff space admitting an exhaustion by compact sets is paracompact.
Some examples of spaces that are not paracompact include:
The most famous counterexample is the long line, which is a nonparacompact topological manifold. (The long line is locally compact, but not second countable.)
Another counterexample is a product of uncountably many copies of an infinite discrete space. Any infinite set carrying the particular point topology is not paracompact; in fact it is not even metacompact.
The Prüfer manifold P is a non-paracompact surface. (It is easy to find an uncountable open cover of P with no refinement of any kind.)
The bagpipe theorem shows that there are $2^{\aleph_1}$ topological equivalence classes of non-paracompact surfaces.
The Sorgenfrey plane is not paracompact despite being a product of two paracompact spaces.
Properties
Paracompactness is weakly hereditary, i.e. every closed subspace of a paracompact space is paracompact. This can be extended to F-sigma subspaces as well.
(Michael's theorem) A regular space is paracompact if every open cover admits a locally finite refinement, not necessarily open. In particular, every regular Lindelöf space is paracompact.
(Smirnov metrization theorem) A topological space is metrizable if and only if it is paracompact, Hausdorff, and locally metrizable.
The Michael selection theorem states that lower semicontinuous multifunctions from X into nonempty closed convex subsets of Banach spaces admit a continuous selection iff X is paracompact.
Although a product of paracompact spaces need not be paracompact, the following are true:
The product of a paracompact space and a compact space is paracompact.
The product of a metacompact space and a compact space is metacompact.
Both these results can be proved by the tube lemma which is used in the proof that a product of finitely many compact spaces is compact.
Paracompact Hausdorff spaces
Paracompact spaces are sometimes required to also be Hausdorff to extend their properties.
(Theorem of Jean Dieudonné) Every paracompact Hausdorff space is normal.
Every paracompact Hausdorff space is a shrinking space, that is, every open cover of a paracompact Hausdorff space has a shrinking: another open cover indexed by the same set such that the closure of every set in the new cover lies inside the corresponding set in the old cover.
On paracompact Hausdorff spaces, sheaf cohomology and Čech cohomology are equal.
Partitions of unity
The most important feature of paracompact Hausdorff spaces is that they admit partitions of unity subordinate to any open cover. This means the following: if X is a paracompact Hausdorff space with a given open cover, then there exists a collection of continuous functions on X with values in the unit interval [0, 1] such that:
for every function f: X → R from the collection, there is an open set U from the cover such that the support of f is contained in U;
for every point x in X, there is a neighborhood V of x such that all but finitely many of the functions in the collection are identically 0 in V and the sum of the nonzero functions is identically 1 in V.
In fact, a T1 space is Hausdorff and paracompact if and only if it admits partitions of unity subordinate to any open cover (see below). This property is sometimes used to define paracompact spaces (at least in the Hausdorff case).
Partitions of unity are useful because they often allow one to extend local constructions to the whole space. For instance, the integral of differential forms on paracompact manifolds is first defined locally (where the manifold looks like Euclidean space and the integral is well known), and this definition is then extended to the whole space via a partition of unity.
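A concrete one-dimensional illustration (a standard example, not taken from this article): for the open cover $U_n = (n-2,\, n+2)$, $n \in \mathbf{Z}$, of the real line, the tent functions
$f_n(x) = \max\bigl(0,\ 1 - |x - n|\bigr)$
form a subordinate partition of unity: the support of $f_n$ is $[n-1, n+1] \subset U_n$, at most two of the $f_n$ are nonzero at any point, and for $x \in [n, n+1]$ one has $f_n(x) + f_{n+1}(x) = \bigl(1 - (x - n)\bigr) + (x - n) = 1$.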
Relationship with compactness
There is a similarity between the definitions of compactness and paracompactness: a space is compact if every open cover has a finite subcover, while it is paracompact if every open cover has a locally finite open refinement.
For paracompactness, "subcover" is replaced by "open refinement" and "finite" is replaced by "locally finite". Both of these changes are significant: if we take the definition of paracompact and change "open refinement" back to "subcover", or "locally finite" back to "finite", we end up with the compact spaces in both cases.
Paracompactness has little to do with the notion of compactness, but rather more to do with breaking up topological space entities into manageable pieces.
Comparison of properties with compactness
Paracompactness is similar to compactness in the following respects:
Every closed subset of a paracompact space is paracompact.
Every paracompact Hausdorff space is normal.
It is different in these respects:
A paracompact subset of a Hausdorff space need not be closed. In fact, for metric spaces, all subsets are paracompact.
A product of paracompact spaces need not be paracompact. The square of the real line R in the lower limit topology is a classical example for this.
Variations
There are several variations of the notion of paracompactness. To define them, we first need to extend the list of terms above:
A topological space is:
metacompact if every open cover has an open point-finite refinement.
orthocompact if every open cover has an open refinement such that the intersection of all the open sets about any point in this refinement is open.
fully normal if every open cover has an open star refinement, and fully T4 if it is fully normal and T1 (see separation axioms).
The adverb "countably" can be added to any of the adjectives "paracompact", "metacompact", and "fully normal" to make the requirement apply only to countable open covers.
Every paracompact space is metacompact, and every metacompact space is orthocompact.
Definition of relevant terms for the variations
Given a cover and a point, the star of the point in the cover is the union of all the sets in the cover that contain the point. In symbols, the star of x in U = {Uα : α in A} is
$U^{*}(x) = \bigcup_{\{\alpha \in A \,:\, x \in U_\alpha\}} U_\alpha.$
The notation for the star is not standardised in the literature, and this is just one possibility.
A star refinement of a cover of a space X is a cover of the same space such that, given any point in the space, the star of the point in the new cover is a subset of some set in the old cover. In symbols, V is a star refinement of U = {Uα : α in A} if for any x in X, there exists a Uα in U such that V*(x) is contained in Uα.
A cover of a space X is point-finite (or point finite) if every point of the space belongs to only finitely many sets in the cover. In symbols, U is point finite if for any x in X, the set $\{\alpha \in A : x \in U_\alpha\}$ is finite.
As the names imply, a fully normal space is normal and a fully T4 space is T4. Every fully T4 space is paracompact. In fact, for Hausdorff spaces, paracompactness and full normality are equivalent. Thus, a fully T4 space is the same thing as a paracompact Hausdorff space.
Without the Hausdorff property, paracompact spaces are not necessarily fully normal. Any compact space that is not regular provides an example.
A historical note: fully normal spaces were defined before paracompact spaces, in 1940, by John W. Tukey.
The proof that all metrizable spaces are fully normal is easy. When it was proved by A.H. Stone that for Hausdorff spaces full normality and paracompactness are equivalent, he implicitly proved that all metrizable spaces are paracompact. Later Ernest Michael
gave a direct proof of the latter fact and
M.E. Rudin gave another, elementary, proof.
See also
a-paracompact space
Paranormal space
Notes
References
Lynn Arthur Steen and J. Arthur Seebach, Jr., Counterexamples in Topology (2 ed), Springer Verlag, 1978, . P.23.
External links
Separation axioms
Compactness (mathematics)
Properties of topological spaces | Paracompact space | [
"Mathematics"
] | 2,627 | [
"Properties of topological spaces",
"Topological spaces",
"Topology",
"Space (mathematics)"
] |
48,632 | https://en.wikipedia.org/wiki/Locally%20compact%20space | In topology and related branches of mathematics, a topological space is called locally compact if, roughly speaking, each small portion of the space looks like a small portion of a compact space. More precisely, it is a topological space in which every point has a compact neighborhood.
When locally compact spaces are Hausdorff they are called locally compact Hausdorff, which are of particular interest in mathematical analysis.
Formal definition
Let X be a topological space. Most commonly X is called locally compact if every point x of X has a compact neighbourhood, i.e., there exists an open set U and a compact set K, such that $x \in U \subseteq K$.
There are other common definitions: They are all equivalent if X is a Hausdorff space (or preregular). But they are not equivalent in general:
1. every point of X has a compact neighbourhood.
2. every point of X has a closed compact neighbourhood.
2′. every point of X has a relatively compact neighbourhood.
2″. every point of X has a local base of relatively compact neighbourhoods.
3. every point of X has a local base of compact neighbourhoods.
4. every point of X has a local base of closed compact neighbourhoods.
5. X is Hausdorff and satisfies any (or equivalently, all) of the previous conditions.
Logical relations among the conditions:
Each condition implies (1).
Conditions (2), (2′), (2″) are equivalent.
Neither of conditions (2), (3) implies the other.
Condition (4) implies (2) and (3).
Compactness implies conditions (1) and (2), but not (3) or (4).
Condition (1) is probably the most commonly used definition, since it is the least restrictive and the others are equivalent to it when X is Hausdorff. This equivalence is a consequence of the facts that compact subsets of Hausdorff spaces are closed, and closed subsets of compact spaces are compact. Spaces satisfying (1) are also called weakly locally compact, as they satisfy the weakest of the conditions here.
As they are defined in terms of relatively compact sets, spaces satisfying (2), (2'), (2") can more specifically be called locally relatively compact. Steen & Seebach call (2), (2'), (2") strongly locally compact to contrast with property (1), which they call locally compact.
Spaces satisfying condition (4) are exactly the locally compact regular spaces. Indeed, such a space is regular, as every point has a local base of closed neighbourhoods. Conversely, in a regular locally compact space suppose a point $x$ has a compact neighbourhood $K$. By regularity, given an arbitrary neighbourhood $U$ of $x$, there is a closed neighbourhood $V$ of $x$ contained in $K \cap U$, and $V$ is compact as a closed set in a compact set.
Condition (5) is used, for example, in Bourbaki. Any space that is locally compact (in the sense of condition (1)) and also Hausdorff automatically satisfies all the conditions above. Since in most applications locally compact spaces are also Hausdorff, these locally compact Hausdorff spaces will thus be the spaces that this article is primarily concerned with.
Examples and counterexamples
Compact Hausdorff spaces
Every compact Hausdorff space is also locally compact, and many examples of compact spaces may be found in the article compact space.
Here we mention only:
the unit interval [0,1];
the Cantor set;
the Hilbert cube.
Locally compact Hausdorff spaces that are not compact
The Euclidean spaces Rn (and in particular the real line R) are locally compact as a consequence of the Heine–Borel theorem.
Topological manifolds share the local properties of Euclidean spaces and are therefore also all locally compact. This even includes nonparacompact manifolds such as the long line.
All discrete spaces are locally compact and Hausdorff (they are just the zero-dimensional manifolds). These are compact only if they are finite.
All open or closed subsets of a locally compact Hausdorff space are locally compact in the subspace topology. This provides several examples of locally compact subsets of Euclidean spaces, such as the unit disc (either the open or closed version).
The space Qp of p-adic numbers is locally compact, because it is homeomorphic to the Cantor set minus one point. Thus locally compact spaces are as useful in p-adic analysis as in classical analysis.
Hausdorff spaces that are not locally compact
As mentioned in the following section, if a Hausdorff space is locally compact, then it is also a Tychonoff space. For this reason, examples of Hausdorff spaces that fail to be locally compact because they are not Tychonoff spaces can be found in the article dedicated to Tychonoff spaces.
But there are also examples of Tychonoff spaces that fail to be locally compact, such as:
the space Q of rational numbers (endowed with the topology from R), since any neighborhood contains a Cauchy sequence corresponding to an irrational number, which has no convergent subsequence in Q;
the subspace $\{(0,0)\} \cup \bigl((0, \infty) \times \mathbf{R}\bigr)$ of $\mathbf{R}^2$, since the origin does not have a compact neighborhood;
the lower limit topology or upper limit topology on the set R of real numbers (useful in the study of one-sided limits);
any T0, hence Hausdorff, topological vector space that is infinite-dimensional, such as an infinite-dimensional Hilbert space.
The first two examples show that a subset of a locally compact space need not be locally compact, which contrasts with the open and closed subsets in the previous section.
The last example contrasts with the Euclidean spaces in the previous section; to be more specific, a Hausdorff topological vector space is locally compact if and only if it is finite-dimensional (in which case it is a Euclidean space).
This example also contrasts with the Hilbert cube as an example of a compact space; there is no contradiction because the cube cannot be a neighbourhood of any point in Hilbert space.
Non-Hausdorff examples
The one-point compactification of the rational numbers Q is compact and therefore locally compact in senses (1) and (2) but it is not locally compact in senses (3) or (4).
The particular point topology on any infinite set is locally compact in senses (1) and (3) but not in senses (2) or (4), because the closure of any neighborhood is the entire space, which is non-compact.
The disjoint union of the above two examples is locally compact in sense (1) but not in senses (2), (3) or (4).
The right order topology on the real line is locally compact in senses (1) and (3) but not in senses (2) or (4), because the closure of any neighborhood is the entire non-compact space.
The Sierpiński space is locally compact in senses (1), (2) and (3), and compact as well, but it is not Hausdorff or regular (or even preregular) so it is not locally compact in senses (4) or (5). The disjoint union of countably many copies of Sierpiński space is a non-compact space which is still locally compact in senses (1), (2) and (3), but not (4) or (5).
More generally, the excluded point topology is locally compact in senses (1), (2) and (3), and compact, but not locally compact in senses (4) or (5).
The cofinite topology on an infinite set is locally compact in senses (1), (2), and (3), and compact as well, but it is not Hausdorff or regular so it is not locally compact in senses (4) or (5).
The indiscrete topology on a set with at least two elements is locally compact in senses (1), (2), (3), and (4), and compact as well, but it is not Hausdorff so it is not locally compact in sense (5).
General classes of examples
Every space with an Alexandrov topology is locally compact in senses (1) and (3).
Properties
Every locally compact preregular space is, in fact, completely regular. It follows that every locally compact Hausdorff space is a Tychonoff space. Since straight regularity is a more familiar condition than either preregularity (which is usually weaker) or complete regularity (which is usually stronger), locally compact preregular spaces are normally referred to in the mathematical literature as locally compact regular spaces. Similarly locally compact Tychonoff spaces are usually just referred to as locally compact Hausdorff spaces.
Every locally compact regular space, in particular every locally compact Hausdorff space, is a Baire space.
That is, the conclusion of the Baire category theorem holds: the interior of every countable union of nowhere dense subsets is empty.
A subspace X of a locally compact Hausdorff space Y is locally compact if and only if X is locally closed in Y (that is, X can be written as the set-theoretic difference of two closed subsets of Y). In particular, every closed set and every open set in a locally compact Hausdorff space is locally compact. Also, as a corollary, a dense subspace X of a locally compact Hausdorff space Y is locally compact if and only if X is open in Y. Furthermore, if a subspace X of any Hausdorff space Y is locally compact, then X still must be locally closed in Y, although the converse does not hold in general.
Without the Hausdorff hypothesis, some of these results break down with weaker notions of locally compact. Every closed set in a weakly locally compact space (= condition (1) in the definitions above) is weakly locally compact. But not every open set in a weakly locally compact space is weakly locally compact. For example, the one-point compactification of the rational numbers Q is compact, and hence weakly locally compact. But it contains Q as an open set, and Q is not weakly locally compact.
Quotient spaces of locally compact Hausdorff spaces are compactly generated.
Conversely, every compactly generated Hausdorff space is a quotient of some locally compact Hausdorff space.
For functions defined on a locally compact space, local uniform convergence is the same as compact convergence.
The point at infinity
This section explores compactifications of locally compact spaces. Every compact space is its own compactification. So to avoid trivialities it is assumed below that the space X is not compact.
Since every locally compact Hausdorff space X is Tychonoff, it can be embedded in a compact Hausdorff space using the Stone–Čech compactification.
But in fact, there is a simpler method available in the locally compact case; the one-point compactification will embed X in a compact Hausdorff space with just one extra point.
(The one-point compactification can be applied to other spaces, but will be Hausdorff if and only if X is locally compact and Hausdorff.)
The locally compact Hausdorff spaces can thus be characterised as the open subsets of compact Hausdorff spaces.
Intuitively, the extra point in the one-point compactification can be thought of as a point at infinity.
The point at infinity should be thought of as lying outside every compact subset of X.
Many intuitive notions about tendency towards infinity can be formulated in locally compact Hausdorff spaces using this idea.
For example, a continuous real or complex valued function f with domain X is said to vanish at infinity if, given any positive number e, there is a compact subset K of X such that |f(x)| < e whenever the point x lies outside of K. This definition makes sense for any topological space X. If X is locally compact and Hausdorff, such functions are precisely those extendable to a continuous function g on its one-point compactification where g(∞) = 0.
Gelfand representation
For a locally compact Hausdorff space X, the set C0(X) of all continuous complex-valued functions on X that vanish at infinity is a commutative C*-algebra. In fact, every commutative C*-algebra is isomorphic to C0(X) for some unique (up to homeomorphism) locally compact Hausdorff space X. This is shown using the Gelfand representation.
Locally compact groups
The notion of local compactness is important in the study of topological groups mainly because every Hausdorff locally compact group G carries natural measures called the Haar measures which allow one to integrate measurable functions defined on G.
The Lebesgue measure on the real line is a special case of this.
The Pontryagin dual of a topological abelian group A is locally compact if and only if A is locally compact.
More precisely, Pontryagin duality defines a self-duality of the category of locally compact abelian groups.
The study of locally compact abelian groups is the foundation of harmonic analysis, a field that has since spread to non-abelian locally compact groups.
See also
Core-compact space
Citations
References
Compactness (mathematics)
Properties of topological spaces | Locally compact space | [
"Mathematics"
] | 2,717 | [
"Properties of topological spaces",
"Topological spaces",
"Topology",
"Space (mathematics)"
] |
48,634 | https://en.wikipedia.org/wiki/Nowhere%20dense%20set | In mathematics, a subset of a topological space is called nowhere dense or rare if its closure has empty interior. In a very loose sense, it is a set whose elements are not tightly clustered (as defined by the topology on the space) anywhere. For example, the integers are nowhere dense among the reals, whereas the interval (0, 1) is not nowhere dense.
A countable union of nowhere dense sets is called a meagre set. Meagre sets play an important role in the formulation of the Baire category theorem, which is used in the proof of several fundamental results of functional analysis.
Definition
Nowhere denseness can be characterized in different (but equivalent) ways. The simplest definition is the one from density:
A subset S of a topological space X is said to be dense in another set U if the intersection S ∩ U is a dense subset of U; S is nowhere dense or rare in X if S is not dense in any nonempty open subset of X.
Expanding out the negation of density, it is equivalent to require that each nonempty open set contains a nonempty open subset disjoint from S. It suffices to check either condition on a base for the topology on X. In particular, nowhere denseness in ℝ is often described as being dense in no open interval.
Definition by closure
The second definition above is equivalent to requiring that the closure, cl S, cannot contain any nonempty open set. This is the same as saying that the interior of the closure of S is empty; that is, int(cl S) = ∅. Alternatively, the complement of the closure, X ∖ cl S, must be a dense subset of X; in other words, the exterior of S is dense in X.
Properties
The notion of nowhere dense set is always relative to a given surrounding space. Suppose A ⊆ Y ⊆ X, where Y has the subspace topology induced from X. The set A may be nowhere dense in X but not nowhere dense in Y. Notably, a set is always dense in its own subspace topology. So if A is nonempty, it will not be nowhere dense as a subset of itself. However the following results hold:
If A is nowhere dense in Y, then A is nowhere dense in X.
If Y is open in X, then A is nowhere dense in Y if and only if A is nowhere dense in X.
If Y is dense in X, then A is nowhere dense in Y if and only if A is nowhere dense in X.
A set is nowhere dense if and only if its closure is.
Every subset of a nowhere dense set is nowhere dense, and a finite union of nowhere dense sets is nowhere dense. Thus the nowhere dense sets form an ideal of sets, a suitable notion of negligible set. In general they do not form a 𝜎-ideal, as meager sets, which are the countable unions of nowhere dense sets, need not be nowhere dense. For example, the set ℚ is not nowhere dense in ℝ.
The boundary of every open set and of every closed set is closed and nowhere dense. A closed set is nowhere dense if and only if it is equal to its boundary, if and only if it is equal to the boundary of some open set (for example the open set can be taken as the complement of the set). An arbitrary set is nowhere dense if and only if it is a subset of the boundary of some open set (for example the open set can be taken as the exterior of the set).
Examples
The set S = {1/n : n = 1, 2, 3, ...} and its closure S ∪ {0} are nowhere dense in ℝ, since the closure has empty interior.
The Cantor set is an uncountable nowhere dense set in ℝ.
ℝ, viewed as the horizontal axis in the Euclidean plane, is nowhere dense in ℝ².
ℤ is nowhere dense in ℝ, but the rationals ℚ are not (they are dense everywhere).
ℤ ∪ ( (0, 1) ∩ ℚ ) is not nowhere dense in ℝ: it is dense in the open interval (0, 1), and in particular the interior of its closure is (0, 1).
The empty set is nowhere dense. In a discrete space, the empty set is the only nowhere dense set.
In a T1 space, any singleton set that is not an isolated point is nowhere dense.
A vector subspace of a topological vector space is either dense or nowhere dense.
Nowhere dense sets with positive measure
A nowhere dense set is not necessarily negligible in every sense. For example, if X is the unit interval [0, 1], not only is it possible to have a dense set of Lebesgue measure zero (such as the set of rationals), but it is also possible to have a nowhere dense set with positive measure. One such example is the Smith–Volterra–Cantor set.
For another example (a variant of the Cantor set), remove from [0, 1] all dyadic fractions, i.e. fractions of the form a/2^n in lowest terms for positive integers a and n, and the intervals around them: ( a/2^n − 1/2^(2n+1), a/2^n + 1/2^(2n+1) ).
Since for each n this removes intervals adding up to at most 1/2^(n+1), the nowhere dense set remaining after all such intervals have been removed has measure of at least 1/2 (in fact just over 0.535 because of overlaps) and so in a sense represents the majority of the ambient space [0, 1].
This set is nowhere dense, as it is closed and has an empty interior: any interval is not contained in the set, since the dyadic fractions in that interval have been removed.
Generalizing this method, one can construct in the unit interval nowhere dense sets of any measure less than 1, although the measure cannot be exactly 1 (because otherwise the complement of its closure would be a nonempty open set with measure zero, which is impossible).
For another simpler example, if U is any dense open subset of ℝ having finite Lebesgue measure, then ℝ ∖ U is necessarily a closed subset of ℝ having infinite Lebesgue measure that is also nowhere dense in ℝ (because its topological interior is empty). Such a dense open subset of finite Lebesgue measure is commonly constructed when proving that the Lebesgue measure of the rational numbers ℚ is 0. This may be done by choosing any bijection f : ℕ → ℚ (it actually suffices for f to merely be a surjection) and, for every r > 0, letting
U_r := ∪_{n ∈ ℕ} ( f(n) − r/2^n, f(n) + r/2^n ) = ∪_{n ∈ ℕ} f(n) + ( −r/2^n, r/2^n )
(here, the Minkowski sum notation f(n) + ( −r/2^n, r/2^n ) was used to simplify the description of the intervals).
The open subset U_r is dense in ℝ because this is true of its subset ℚ, and its Lebesgue measure is no greater than ∑_{n ∈ ℕ} 2r/2^n = 4r.
Taking the union of closed, rather than open, intervals produces the Fσ-subset
S_r := ∪_{n ∈ ℕ} f(n) + [ −r/2^n, r/2^n ]
that satisfies U_r ⊆ S_r ⊆ U_{2r}. Because ℝ ∖ S_r is a subset of the nowhere dense set ℝ ∖ U_r, it is also nowhere dense in ℝ.
Because ℝ is a Baire space, the set
D := ∩_{m=1}^{∞} U_{1/m}
is a dense subset of ℝ (which means that, like its subset ℚ, D cannot possibly be nowhere dense in ℝ) with Lebesgue measure 0 that is also a nonmeager subset of ℝ (that is, D is of the second category in ℝ), which makes D a comeager subset of ℝ whose interior in ℝ is also empty; however, D is nowhere dense in ℝ if and only if its closure in ℝ has empty interior.
The subset ℚ in this example can be replaced by any countable dense subset of ℝ and, furthermore, even the set ℝ can be replaced by ℝ^n for any integer n > 0.
See also
References
Bibliography
External links
Some nowhere dense sets with positive measure
General topology
| Nowhere dense set | [
"Mathematics"
] | 1,363 | [
"General topology",
"Topology"
] |
48,636 | https://en.wikipedia.org/wiki/Partition%20of%20unity | In mathematics, a partition of unity of a topological space X is a set R of continuous functions from X to the unit interval [0,1] such that for every point x in X:
there is a neighbourhood of x where all but a finite number of the functions of R are 0, and
the sum of all the function values at x is 1, i.e., ∑_{ρ ∈ R} ρ(x) = 1.
Partitions of unity are useful because they often allow one to extend local constructions to the whole space. They are also important in the interpolation of data, in signal processing, and the theory of spline functions.
Existence
The existence of partitions of unity assumes two distinct forms:
Given any open cover {U_i}_{i ∈ I} of a space, there exists a partition {ρ_i}_{i ∈ I} indexed over the same set I such that supp ρ_i ⊆ U_i. Such a partition is said to be subordinate to the open cover {U_i}.
If the space is locally compact, given any open cover {U_i}_{i ∈ I} of a space, there exists a partition {ρ_j}_{j ∈ J} indexed over a possibly distinct index set J such that each ρ_j has compact support and, for each j ∈ J, supp ρ_j ⊆ U_i for some i ∈ I.
Thus one chooses either to have the supports indexed by the open cover, or compact supports. If the space is compact, then there exist partitions satisfying both requirements.
A finite open cover always has a continuous partition of unity subordinated to it, provided the space is locally compact and Hausdorff.
Paracompactness of the space is a necessary condition to guarantee the existence of a partition of unity subordinate to any open cover. Depending on the category to which the space belongs, it may also be a sufficient condition. The construction uses mollifiers (bump functions), which exist in continuous and smooth manifolds, but not in analytic manifolds. Thus for an open cover of an analytic manifold, an analytic partition of unity subordinate to that open cover generally does not exist. See analytic continuation.
If {ρ_i} and {τ_j} are partitions of unity for spaces X and Y, respectively, then the set of all pairs {ρ_i ⊗ τ_j} is a partition of unity for the Cartesian product space X × Y. The tensor product of functions acts as (ρ ⊗ τ)(x, y) = ρ(x) τ(y).
Example
We can construct a partition of unity on the circle S¹ by looking at a chart on the complement of a point p ∈ S¹ sending S¹ ∖ {p} to ℝ with center q ∈ S¹. Now, let Φ be a bump function on ℝ defined by Φ(x) = exp(1/(x² − 1)) for |x| < 1 and Φ(x) = 0 otherwise. Then both this function and 1 − Φ can be extended uniquely onto S¹ by setting Φ(p) = 0. Then the set {Φ, 1 − Φ} forms a partition of unity over S¹.
Variant definitions
Sometimes a less restrictive definition is used: the sum of all the function values at a particular point is only required to be positive, rather than 1, for each point in the space. However, given such a set of functions {ψ_i}, one can obtain a partition of unity in the strict sense by dividing by the sum; the partition becomes {σ_i} where σ_i := ψ_i / ∑_j ψ_j, which is well defined since at each point only a finite number of terms are nonzero. Even further, some authors drop the requirement that the supports be locally finite, requiring only that ∑_i ψ_i(x) < ∞ for all x.
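To make the normalization step concrete, the following is a minimal Python sketch (an illustration only; the triangular bump shapes and the interval [0, 3] are assumptions chosen for the example). It builds overlapping bumps whose pointwise sum is positive but not 1, then divides each by the sum to obtain a strict partition of unity:

import numpy as np

def hat(center, width):
    # Triangular bump: 1 at `center`, falling linearly to 0 at center +/- width.
    return lambda x: np.maximum(0.0, 1.0 - np.abs(x - center) / width)

# Overlapping bumps on [0, 3]; their pointwise sum is positive but not 1.
bumps = [hat(c, 1.5) for c in (0.0, 1.0, 2.0, 3.0)]

def normalize(fns):
    # Divide each function by the pointwise sum, giving a strict partition of unity.
    def sigma(f):
        return lambda x: f(x) / sum(g(x) for g in fns)
    return [sigma(f) for f in fns]

partition = normalize(bumps)
x = np.linspace(0.0, 3.0, 7)
print(sum(p(x) for p in partition))  # [1. 1. 1. 1. 1. 1. 1.]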
In the field of operator algebras, a partition of unity is composed of projections p_i = p_i* = p_i². In the case of C*-algebras, it can be shown that the entries are pairwise-orthogonal: p_i p_j = 0 whenever i ≠ j.
Note it is not the case that in a general *-algebra the entries of a partition of unity are pairwise-orthogonal.
If a is a normal element of a unital C*-algebra A, and a has finite spectrum σ(a) = {λ_1, ..., λ_n}, then the projections p_i in the spectral decomposition:
a = ∑_{i=1}^{n} λ_i p_i
form a partition of unity.
In the field of compact quantum groups, the rows and columns of the fundamental representation of a quantum permutation group form partitions of unity.
Applications
A partition of unity can be used to define the integral (with respect to a volume form) of a function defined over a manifold: one first defines the integral of a function whose support is contained in a single coordinate patch of the manifold; then one uses a partition of unity to define the integral of an arbitrary function; finally one shows that the definition is independent of the chosen partition of unity.
A partition of unity can be used to show the existence of a Riemannian metric on an arbitrary manifold.
Method of steepest descent employs a partition of unity to construct asymptotics of integrals.
Linkwitz–Riley filter is an example of practical implementation of partition of unity to separate input signal into two output signals containing only high- or low-frequency components.
The Bernstein polynomials of a fixed degree m are a family of m+1 linearly independent single-variable polynomials that are a partition of unity for the unit interval [0, 1].
The weak Hilbert Nullstellensatz asserts that if f_1, ..., f_r are polynomials with no common vanishing points in ℂ^n, then there are polynomials g_1, ..., g_r with g_1 f_1 + ... + g_r f_r = 1. That is, the products g_i f_i form a polynomial partition of unity subordinate to the Zariski-open cover by the sets on which the f_i do not vanish.
Partitions of unity are used to establish global smooth approximations for Sobolev functions in bounded domains.
See also
Gluing axiom
Fine sheaf
References
External links
General information on partition of unity at MathWorld
Differential topology
Topology | Partition of unity | [
"Physics",
"Mathematics"
] | 981 | [
"Topology",
"Space",
"Differential topology",
"Geometry",
"Spacetime"
] |
48,660 | https://en.wikipedia.org/wiki/Fuzzy%20control%20system | A fuzzy control system is a control system based on fuzzy logic, a mathematical system that analyzes analog input values in terms of logical variables that take on continuous values between 0 and 1, in contrast to classical or digital logic, which operates on discrete values of either 1 or 0 (true or false, respectively).
Fuzzy logic is widely used in machine control. The term "fuzzy" refers to the fact that the logic involved can deal with concepts that cannot be expressed as "true" or "false" but rather as "partially true". Although alternative approaches such as genetic algorithms and neural networks can perform just as well as fuzzy logic in many cases, fuzzy logic has the advantage that the solution to the problem can be cast in terms that human operators can understand, so that their experience can be used in the design of the controller. This makes it easier to mechanize tasks that are already successfully performed by humans.
History and applications
Fuzzy logic was proposed by Lotfi A. Zadeh of the University of California at Berkeley in a 1965 paper. He elaborated on his ideas in a 1973 paper that introduced the concept of "linguistic variables", which in this article equates to a variable defined as a fuzzy set. Other research followed, with the first industrial application, a cement kiln built in Denmark, coming on line in 1976.
Fuzzy systems were initially implemented in Japan.
Interest in fuzzy systems was sparked by Seiji Yasunobu and Soji Miyamoto of Hitachi, who in 1985 provided simulations that demonstrated the feasibility of fuzzy control systems for the Sendai Subway. Their ideas were adopted, and fuzzy systems were used to control accelerating, braking, and stopping when the Namboku Line opened in 1987.
In 1987, Takeshi Yamakawa demonstrated the use of fuzzy control, through a set of simple dedicated fuzzy logic chips, in an "inverted pendulum" experiment. This is a classic control problem, in which a vehicle tries to keep a pole, mounted on its top by a hinge, upright by moving back and forth. Yamakawa subsequently made the demonstration more sophisticated by mounting a wine glass containing water and even a live mouse to the top of the pendulum: the system maintained stability in both cases. Yamakawa eventually went on to organize his own fuzzy-systems research lab to help exploit his patents in the field.
Japanese engineers subsequently developed a wide range of fuzzy systems for both industrial and consumer applications. In 1988 Japan established the Laboratory for International Fuzzy Engineering (LIFE), a cooperative arrangement between 48 companies to pursue fuzzy research. The automotive company Volkswagen was the only foreign corporate member of LIFE, dispatching a researcher for a duration of three years.
Japanese consumer goods often incorporate fuzzy systems. Matsushita vacuum cleaners use microcontrollers running fuzzy algorithms to interrogate dust sensors and adjust suction power accordingly. Hitachi washing machines use fuzzy controllers that read load-weight, fabric-mix, and dirt sensors and automatically set the wash cycle for the best use of power, water, and detergent.
Canon developed an autofocusing camera that uses a charge-coupled device (CCD) to measure the clarity of the image in six regions of its field of view and use the information provided to determine if the image is in focus. It also tracks the rate of change of lens movement during focusing, and controls its speed to prevent overshoot. The camera's fuzzy control system uses 12 inputs: 6 to obtain the current clarity data provided by the CCD and 6 to measure the rate of change of lens movement. The output is the position of the lens. The fuzzy control system uses 13 rules and requires 1.1 kilobytes of memory.
An industrial air conditioner designed by Mitsubishi uses 25 heating rules and 25 cooling rules. A temperature sensor provides input, with control outputs fed to an inverter, a compressor valve, and a fan motor. Compared to the previous design, the fuzzy controller heats and cools five times faster, reduces power consumption by 24%, increases temperature stability by a factor of two, and uses fewer sensors.
Other applications investigated or implemented include: character and handwriting recognition; optical fuzzy systems; robots, including one for making Japanese flower arrangements; voice-controlled robot helicopters (hovering is a "balancing act" rather similar to the inverted pendulum problem); rehabilitation robotics to provide patient-specific solutions (e.g. to control heart rate and blood pressure); control of flow of powders in film manufacture; elevator systems; and so on.
Work on fuzzy systems is also proceeding in North America and Europe, although on a less extensive scale than in Japan.
The US Environmental Protection Agency has investigated fuzzy control for energy-efficient motors, and NASA has studied fuzzy control for automated space docking: simulations show that a fuzzy control system can greatly reduce fuel consumption.
Firms such as Boeing, General Motors, Allen-Bradley, Chrysler, Eaton, and Whirlpool have worked on fuzzy logic for use in low-power refrigerators, improved automotive transmissions, and energy-efficient electric motors.
In 1995 Maytag introduced an "intelligent" dishwasher based on a fuzzy controller and a "one-stop sensing module" that combines a thermistor, for temperature measurement; a conductivity sensor, to measure detergent level from the ions present in the wash; a turbidity sensor that measures scattered and transmitted light to measure the soiling of the wash; and a magnetostrictive sensor to read spin rate. The system determines the optimum wash cycle for any load to obtain the best results with the least amount of energy, detergent, and water. It even adjusts for dried-on foods by tracking the last time the door was opened, and estimates the number of dishes by the number of times the door was opened.
Xiera Technologies Inc. has developed the first auto-tuner for the fuzzy logic controller's knowledge base known as edeX. This technology was tested by Mohawk College and was able to solve non-linear 2x2 and 3x3 multi-input multi-output problems.
Research and development is also continuing on fuzzy applications in software, as opposed to firmware, design, including fuzzy expert systems and integration of fuzzy logic with neural-network and so-called adaptive "genetic" software systems, with the ultimate goal of building "self-learning" fuzzy-control systems. These systems can be employed to control complex, nonlinear dynamic plants, for example, the human body.
Fuzzy sets
The input variables in a fuzzy control system are in general mapped by sets of membership functions known as "fuzzy sets". The process of converting a crisp input value to a fuzzy value is called "fuzzification". For example, one fuzzy-logic-based design for vehicle control uses two fuzzy systems, one for heading-angle error and the other for velocity control.
A control system may also have various types of switch, or "ON-OFF", inputs along with its analog inputs, and such switch inputs of course will always have a truth value equal to either 1 or 0, but the scheme can deal with them as simplified fuzzy functions that happen to be either one value or another.
Given "mappings" of input variables into membership functions and truth values, the microcontroller then makes decisions for what action to take, based on a set of "rules", each of the form:
IF brake temperature IS warm AND speed IS not very fast
THEN brake pressure IS slightly decreased.
In this example, the two input variables are "brake temperature" and "speed" that have values defined as fuzzy sets. The output variable, "brake pressure" is also defined by a fuzzy set that can have values like "static" or "slightly increased" or "slightly decreased" etc.
Fuzzy control in detail
Fuzzy controllers are very simple conceptually. They consist of an input stage, a processing stage, and an output stage. The input stage maps sensor or other inputs, such as switches, thumbwheels, and so on, to the appropriate membership functions and truth values. The processing stage invokes each appropriate rule and generates a result for each, then combines the results of the rules. Finally, the output stage converts the combined result back into a specific control output value.
The most common shape of membership functions is triangular, although trapezoidal and bell curves are also used, but the shape is generally less important than the number of curves and their placement. From three to seven curves are generally appropriate to cover the required range of an input value, or the "universe of discourse" in fuzzy jargon.
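As an illustration, a triangular membership function takes only a few lines of code. The following Python sketch is hypothetical (the five curves and their breakpoints are invented for the example, not taken from any deployed controller):

def tri(a, b, c):
    # Triangular membership function: rises from 0 at a to 1 at b, falls back to 0 at c.
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Five curves covering a hypothetical brake-temperature universe of discourse (deg C).
temperature = {
    "cold":     tri(-40, 0, 40),
    "cool":     tri(0, 40, 80),
    "moderate": tri(40, 80, 120),
    "warm":     tri(80, 120, 160),
    "hot":      tri(120, 160, 200),
}

print({name: round(mu(100), 2) for name, mu in temperature.items()})
# {'cold': 0.0, 'cool': 0.0, 'moderate': 0.5, 'warm': 0.5, 'hot': 0.0}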
As discussed earlier, the processing stage is based on a collection of logic rules in the form of IF-THEN statements, where the IF part is called the "antecedent" and the THEN part is called the "consequent". Typical fuzzy control systems have dozens of rules.
Consider a rule for a thermostat:
IF (temperature is "cold") THEN turn (heater is "high")
This rule uses the truth value of the "temperature" input, which is some truth value of "cold", to generate a result in the fuzzy set for the "heater" output, which is some value of "high". This result is used with the results of other rules to finally generate the crisp composite output. Obviously, the greater the truth value of "cold", the higher the truth value of "high", though this does not necessarily mean that the output itself will be set to "high" since this is only one rule among many.
In some cases, the membership functions can be modified by "hedges" that are equivalent to adverbs. Common hedges include "about", "near", "close to", "approximately", "very", "slightly", "too", "extremely", and "somewhat". These operations may have precise definitions, though the definitions can vary considerably between different implementations. "Very", for one example, squares membership functions; since the membership values are always less than 1, this narrows the membership function. "Extremely" cubes the values to give greater narrowing, while "somewhat" broadens the function by taking the square root.
In practice, the fuzzy rule sets usually have several antecedents that are combined using fuzzy operators, such as AND, OR, and NOT, though again the definitions tend to vary: AND, in one popular definition, simply uses the minimum weight of all the antecedents, while OR uses the maximum value. There is also a NOT operator that subtracts a membership function from 1 to give the "complementary" function.
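These operators and hedges fit in a few lines. The following Python sketch uses the popular minimum/maximum definitions described above (as noted, other implementations define them differently):

import math

# Common operators on truth values in [0, 1].
def f_and(a, b): return min(a, b)  # AND: minimum weight of the antecedents
def f_or(a, b):  return max(a, b)  # OR: maximum value
def f_not(mu):   return 1.0 - mu   # NOT: complementary function

# Hedges reshape a membership value.
def very(mu):      return mu ** 2        # squaring narrows the membership function
def extremely(mu): return mu ** 3        # cubing narrows it further
def somewhat(mu):  return math.sqrt(mu)  # the square root broadens it

cold, fast = 0.7, 0.4
print(f_and(cold, f_not(fast)))    # IF cold AND NOT fast -> min(0.7, 0.6) = 0.6
print(very(cold), somewhat(cold))  # 0.49 and about 0.837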
There are several ways to define the result of a rule, but one of the most common and simplest is the "max-min" inference method, in which the output membership function is given the truth value generated by the premise.
Rules can be solved in parallel in hardware, or sequentially in software. The results of all the rules that have fired are "defuzzified" to a crisp value by one of several methods. There are dozens, in theory, each with various advantages or drawbacks.
The "centroid" method is very popular, in which the "center of mass" of the result provides the crisp value. Another approach is the "height" method, which takes the value of the biggest contributor. The centroid method favors the rule with the output of greatest area, while the height method obviously favors the rule with the greatest output value.
The diagram below demonstrates max-min inferencing and centroid defuzzification for a system with input variables "x", "y", and "z" and an output variable "n". Note that "mu" is standard fuzzy-logic nomenclature for "truth value":
Notice how each rule provides a result as a truth value of a particular membership function for the output variable. In centroid defuzzification the values are OR'd, that is, the maximum value is used and values are not added, and the results are then combined using a centroid calculation.
Fuzzy control system design is based on empirical methods, basically a methodical approach to trial-and-error. The general process is as follows:
Document the system's operational specifications and inputs and outputs.
Document the fuzzy sets for the inputs.
Document the rule set.
Determine the defuzzification method.
Run through test suite to validate system, adjust details as required.
Complete document and release to production.
As a general example, consider the design of a fuzzy controller for a steam turbine. The block diagram of this control system appears as follows:
The input and output variables map into the following fuzzy set:
—where:
N3: Large negative.
N2: Medium negative.
N1: Small negative.
Z: Zero.
P1: Small positive.
P2: Medium positive.
P3: Large positive.
The rule set includes such rules as:
rule 1: IF temperature IS cool AND pressure IS weak,
THEN throttle is P3.
rule 2: IF temperature IS cool AND pressure IS low,
THEN throttle is P2.
rule 3: IF temperature IS cool AND pressure IS ok,
THEN throttle is Z.
rule 4: IF temperature IS cool AND pressure IS strong,
THEN throttle is N2.
In practice, the controller accepts the inputs and maps them into their membership functions and truth values. These mappings are then fed into the rules. If the rule specifies an AND relationship between the mappings of the two input variables, as the examples above do, the minimum of the two is used as the combined truth value; if an OR is specified, the maximum is used. The appropriate output state is selected and assigned a membership value at the truth level of the premise. The truth values are then defuzzified.
For example, assume the temperature is in the "cool" state, and the pressure is in the "low" and "ok" states. The pressure values ensure that only rules 2 and 3 fire:
The two outputs are then defuzzified through centroid defuzzification:
__
| Z P2
1 -+ * *
| * * * *
| * * * *
| * * * *
| * 222222222
| * 22222222222
| 333333332222222222222
+---33333333222222222222222-->
^
+150
__
The output value will adjust the throttle and then the control cycle will begin again to generate the next value.
Building a fuzzy controller
Consider implementing with a microcontroller chip a simple feedback controller:
A fuzzy set is defined for the input error variable "e", and the derived change in error, "delta", as well as the "output", as follows:
LP: large positive
SP: small positive
ZE: zero
SN: small negative
LN: large negative
If the error ranges from -1 to +1, with the analog-to-digital converter used having a resolution of 0.25, then the input variable's fuzzy set (which, in this case, also applies to the output variable) can be described very simply as a table, with the error / delta / output values in the top row and the truth values for each membership function arranged in rows beneath:
___
-1 -0.75 -0.5 -0.25 0 0.25 0.5 0.75 1
___
mu(LP) 0 0 0 0 0 0 0.3 0.7 1
mu(SP) 0 0 0 0 0.3 0.7 1 0.7 0.3
mu(ZE) 0 0 0.3 0.7 1 0.7 0.3 0 0
mu(SN) 0.3 0.7 1 0.7 0.3 0 0 0 0
mu(LN) 1 0.7 0.3 0 0 0 0 0 0
___
—or, in graphical form (where each "X" has a value of 0.1):
LN SN ZE SP LP
+------------------------------------------------------------------+
| |
-1.0 | XXXXXXXXXX XXX : : : |
-0.75 | XXXXXXX XXXXXXX : : : |
-0.5 | XXX XXXXXXXXXX XXX : : |
-0.25 | : XXXXXXX XXXXXXX : : |
0.0 | : XXX XXXXXXXXXX XXX : |
0.25 | : : XXXXXXX XXXXXXX : |
0.5 | : : XXX XXXXXXXXXX XXX |
0.75 | : : : XXXXXXX XXXXXXX |
1.0 | : : : XXX XXXXXXXXXX |
| |
+------------------------------------------------------------------+
Suppose this fuzzy system has the following rule base:
rule 1: IF e = ZE AND delta = ZE THEN output = ZE
rule 2: IF e = ZE AND delta = SP THEN output = SN
rule 3: IF e = SN AND delta = SN THEN output = LP
rule 4: IF e = LP OR delta = LP THEN output = LN
These rules are typical for control applications in that the antecedents consist of the logical combination of the error and error-delta signals, while the consequent is a control command output.
The rule outputs can be defuzzified using a discrete centroid computation:
SUM( I = 1 TO 4 OF ( mu(I) * output(I) ) ) / SUM( I = 1 TO 4 OF mu(I) )
Now, suppose that at a given time:
e = 0.25
delta = 0.5
Then this gives:
e delta
mu(LP) 0 0.3
mu(SP) 0.7 1
mu(ZE) 0.7 0.3
mu(SN) 0 0
mu(LN) 0 0
Plugging this into rule 1 gives:
rule 1: IF e = ZE AND delta = ZE THEN output = ZE
mu(1) = MIN( 0.7, 0.3 ) = 0.3
output(1) = 0
-- where:
mu(1): Truth value of the result membership function for rule 1. In terms of a centroid calculation, this is the "mass" of this result for this discrete case.
output(1): Value (for rule 1) where the result membership function (ZE) is maximum over the output variable fuzzy set range. That is, in terms of a centroid calculation, the location of the "center of mass" for this individual result. This value is independent of the value of "mu". It simply identifies the location of ZE along the output range.
The other rules give:
rule 2: IF e = ZE AND delta = SP THEN output = SN
mu(2) = MIN( 0.7, 1 ) = 0.7
output(2) = -0.5
rule 3: IF e = SN AND delta = SN THEN output = LP
mu(3) = MIN( 0.0, 0.0 ) = 0
output(3) = 1
rule 4: IF e = LP OR delta = LP THEN output = LN
mu(4) = MAX( 0.0, 0.3 ) = 0.3
output(4) = -1
The centroid computation yields:
( (0.3 × 0) + (0.7 × −0.5) + (0 × 1) + (0.3 × −1) ) / (0.3 + 0.7 + 0 + 0.3) = −0.65 / 1.3 = −0.5
for the final control output. Simple. Of course the hard part is figuring out what rules actually work correctly in practice.
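The worked example can be checked mechanically. The following Python sketch (an illustration written for this walkthrough, not code from the original literature) encodes the four rules, the membership values read off the table for e = 0.25 and delta = 0.5, and the discrete centroid:

# Membership values from the table for e = 0.25 and delta = 0.5.
mu_e     = {"LP": 0.0, "SP": 0.7, "ZE": 0.7, "SN": 0.0, "LN": 0.0}
mu_delta = {"LP": 0.3, "SP": 1.0, "ZE": 0.3, "SN": 0.0, "LN": 0.0}

# Output locations: where each output membership function is maximum.
peak = {"LP": 1.0, "SP": 0.5, "ZE": 0.0, "SN": -0.5, "LN": -1.0}

# (combiner, e term, delta term, output term) for rules 1-4.
rules = [
    (min, "ZE", "ZE", "ZE"),  # rule 1
    (min, "ZE", "SP", "SN"),  # rule 2
    (min, "SN", "SN", "LP"),  # rule 3
    (max, "LP", "LP", "LN"),  # rule 4
]

weights = [op(mu_e[e], mu_delta[d]) for op, e, d, _ in rules]
outputs = [peak[out] for *_, out in rules]

# Discrete centroid defuzzification: SUM(mu * output) / SUM(mu).
control = sum(w * o for w, o in zip(weights, outputs)) / sum(weights)
print(weights, control)  # [0.3, 0.7, 0.0, 0.3] -0.5 (up to float rounding)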
If you have problems figuring out the centroid equation, remember that a centroid is defined by summing all the moments (location times mass) around the center of gravity and equating the sum to zero. So if X̄ is the center of gravity, X_i is the location of each mass, and M_i is each mass, this gives:
∑_i M_i (X_i − X̄) = 0, or equivalently X̄ = ( ∑_i M_i X_i ) / ( ∑_i M_i ).
In our example, the values of mu correspond to the masses, and the values of X to the locations of the masses.
(mu, however, only 'corresponds to the masses' if the initial 'mass' of the output functions are all the same/equivalent. If they are not the same, i.e. some are narrow triangles, while others may be wide trapezoids or shouldered triangles, then the mass or area of the output function must be known or calculated. It is this mass that is then scaled by mu and multiplied by its location X_i.)
This system can be implemented on a standard microprocessor, but dedicated fuzzy chips are now available. For example, Adaptive Logic INC of San Jose, California, sells a "fuzzy chip", the AL220, that can accept four analog inputs and generate four analog outputs. A block diagram of the chip is shown below:
+---------+ +-------+
analog --4-->| analog | | mux / +--4--> analog
in | mux | | SH | out
+----+----+ +-------+
| ^
V |
+-------------+ +--+--+
| ADC / latch | | DAC |
+------+------+ +-----+
| ^
| |
8 +-----------------------------+
| | |
| V |
| +-----------+ +-------------+ |
+-->| fuzzifier | | defuzzifier +--+
+-----+-----+ +-------------+
| ^
| +-------------+ |
| | rule | |
+->| processor +--+
| (50 rules) |
+------+------+
|
+------+------+
| parameter |
| memory |
| 256 x 8 |
+-------------+
ADC: analog-to-digital converter
DAC: digital-to-analog converter
SH: sample/hold
Antilock brakes
As an example, consider an anti-lock braking system, directed by a microcontroller chip. The microcontroller has to make decisions based on brake temperature, speed, and other variables in the system.
The variable "temperature" in this system can be subdivided into a range of "states": "cold", "cool", "moderate", "warm", "hot", "very hot". The transition from one state to the next is hard to define.
An arbitrary static threshold might be set to divide "warm" from "hot". For example, at exactly 90 degrees, warm ends and hot begins. But this would result in a discontinuous change when the input value passed over that threshold. The transition wouldn't be smooth, as would be required in braking situations.
The way around this is to make the states fuzzy. That is, allow them to change gradually from one state to the next. In order to do this, there must be a dynamic relationship established between different factors.
Start by defining the input temperature states using "membership functions":
With this scheme, the input variable's state no longer jumps abruptly from one state to the next. Instead, as the temperature changes, it loses value in one membership function while gaining value in the next. In other words, its ranking in the category of cold decreases as it becomes more highly ranked in the warmer category.
At any sampled timeframe, the "truth value" of the brake temperature will almost always be in some degree part of two membership functions: i.e.: '0.6 nominal and 0.4 warm', or '0.7 nominal and 0.3 cool', and so on.
The above example demonstrates a simple application using the abstraction of values from multiple membership functions. This only represents one kind of data, however; in this case, temperature.
Adding further sophistication to this braking system could be done with additional factors such as traction, speed, and inertia, set up in dynamic functions according to the designed fuzzy system.
Logical interpretation of fuzzy control
In spite of appearances, there are several difficulties in giving a rigorous logical interpretation of the IF-THEN rules. As an example, interpret a rule as IF (temperature is "cold") THEN (heater is "high") by the first-order formula Cold(x) → High(y), and assume that r is an input such that Cold(r) is false. Then the formula Cold(r) → High(t) is true for any t, and therefore any t gives a correct control given r. A rigorous logical justification of fuzzy control is given in Hájek's book (see Chapter 7), where fuzzy control is represented as a theory of Hájek's basic logic.
In Gerla 2005 another logical approach to fuzzy control is proposed, based on fuzzy logic programming: Denote by f the fuzzy function arising from an IF-THEN system of rules. Then this system can be translated into a fuzzy program P containing a series of rules whose head is "Good(x,y)". The interpretation of this predicate in the least fuzzy Herbrand model of P coincides with f. This gives further useful tools to fuzzy control.
Fuzzy qualitative simulation
Before an artificial intelligence system is able to plan an action sequence, some kind of model is needed. For video games, the model is equal to the game rules. From the programming perspective, the game rules are implemented as a physics engine which accepts an action from a player and calculates whether the action is valid. After the action is executed, the game is in a follow-up state. If the aim isn't only to play mathematical games but to determine actions for real-world applications, the most obvious bottleneck is that no game rules are available. The first step is to model the domain. System identification can be realized with precise mathematical equations or with fuzzy rules.
Using fuzzy logic and ANFIS (adaptive network based fuzzy inference system) systems for creating the forward model of a domain has many disadvantages. A qualitative simulation isn't able to determine the correct follow-up state; the system will only guess what will happen if the action is taken. Fuzzy qualitative simulation can't predict exact numerical values, but uses imprecise natural language to speculate about the future. It takes the current situation plus the actions from the past and generates the expected follow-up state of the game.
The output of the ANFIS system doesn't provide exact information, but only a fuzzy set notation, for example [0, 0.2, 0.4, 0]. After converting the set notation back into numerical values, the accuracy gets worse. This makes fuzzy qualitative simulation a poor choice for practical applications.
Applications
Fuzzy control systems are suitable when the process complexity is high, including uncertainty and nonlinear behavior, and no precise mathematical models are available. Successful applications of fuzzy control systems have been reported worldwide, mainly in Japan, with pioneering solutions since the 1980s.
Some applications reported in the literature are:
Air conditioners
Automatic focus systems in cameras
Domestic appliances (refrigerators, washing machines...)
Control and optimization of industrial processes and systems
Writing systems
Fuel efficiency in engines
Environment
Expert systems
Decision trees
Robotics
Autonomous vehicles
See also
Dynamic logic
Bayesian inference
Function approximation
Fuzzy concept
Fuzzy markup language
Hysteresis
Neuro-fuzzy
Fuzzy control language
Type-2 fuzzy sets and systems
References
Further reading
Kevin M. Passino and Stephen Yurkovich, Fuzzy Control, Addison Wesley Longman, Menlo Park, CA, 1998 (522 pages)
Cox, E. (Oct. 1992). Fuzzy fundamentals. IEEE Spectrum, 29:10. pp. 58–61.
Cox, E. (Feb. 1993) Adaptive fuzzy systems. IEEE Spectrum, 30:2. pp. 7–31.
Jan Jantzen, "Tuning Of Fuzzy PID Controllers", Technical University of Denmark, report 98-H 871, September 30, 1998.
Jan Jantzen, Foundations of Fuzzy Control. Wiley, 2007 (209 pages) (Table of contents)
Computational Intelligence: A Methodological Introduction by Kruse, Borgelt, Klawonn, Moewes, Steinbrecher, Held, 2013, Springer,
External links
Introduction to Fuzzy Control
Fuzzy Logic in Embedded Microcomputers and Control Systems
IEC 1131-7 CD1 PDF
Online interactive demonstration of a system with 3 fuzzy rules
Data driven fuzzy systems
Fuzzy logic
Control engineering | Fuzzy control system | [
"Engineering"
] | 6,077 | [
"Control engineering"
] |
48,662 | https://en.wikipedia.org/wiki/Computer%20number%20format | A computer number format is the internal representation of numeric values in digital device hardware and software, such as in programmable computers and calculators. Numerical values are stored as groupings of bits, such as bytes and words. The encoding between numerical values and bit patterns is chosen for convenience of the operation of the computer; the encoding used by the computer's instruction set generally requires conversion for external use, such as for printing and display. Different types of processors may have different internal representations of numerical values and different conventions are used for integer and real numbers. Most calculations are carried out with number formats that fit into a processor register, but some software systems allow representation of arbitrarily large numbers using multiple words of memory.
Binary number representation
Computers represent data in sets of binary digits. The representation is composed of bits, which in turn are grouped into larger sets such as bytes.
A bit is a binary digit that represents one of two states. The concept of a bit can be understood as a value of either 1 or 0, on or off, yes or no, true or false, or encoded by a switch or toggle of some kind.
While a single bit, on its own, is able to represent only two values, a string of bits may be used to represent larger values. For example, a string of three bits can represent up to eight distinct values as illustrated in Table 1.
As the number of bits composing a string increases, the number of possible 0 and 1 combinations increases exponentially. A single bit allows only two value-combinations, two bits combined can make four separate values, three bits for eight, and so on, increasing with the formula 2ⁿ. The number of possible combinations doubles with each binary digit added, as illustrated in Table 2.
Groupings with a specific number of bits are used to represent varying things and have specific names.
A byte is a bit string containing the number of bits needed to represent a character. On most modern computers, this is an eight bit string. Because the definition of a byte is related to the number of bits composing a character, some older computers have used a different bit length for their byte. In many computer architectures, the byte is the smallest addressable unit, the atom of addressability, say. For example, even though 64-bit processors may address memory sixty-four bits at a time, they may still split that memory into eight-bit pieces. This is called byte-addressable memory. Historically, many CPUs read data in some multiple of eight bits. Because the byte size of eight bits is so common, but the definition is not standardized, the term octet is sometimes used to explicitly describe an eight bit sequence.
A nibble (sometimes nybble) is a number composed of four bits. Being a half-byte, the nibble was named as a play on words. A person may need several nibbles for one bite from something; similarly, a nybble is a part of a byte. Because four bits allow for sixteen values, a nibble is sometimes known as a hexadecimal digit.
Octal and hexadecimal number display
Octal and hexadecimal encoding are convenient ways to represent binary numbers, as used by computers. Computer engineers often need to write out binary quantities, but in practice writing out a binary number such as 1001001101010001 is tedious and prone to errors. Therefore, binary quantities are written in a base-8, or "octal", or, much more commonly, a base-16, "hexadecimal" (hex), number format. In the decimal system, there are 10 digits, 0 through 9, which combine to form numbers. In an octal system, there are only 8 digits, 0 through 7. That is, the value of an octal "10" is the same as a decimal "8", an octal "20" is a decimal "16", and so on. In a hexadecimal system, there are 16 digits, 0 through 9 followed, by convention, with A through F. That is, a hexadecimal "10" is the same as a decimal "16" and a hexadecimal "20" is the same as a decimal "32". An example and comparison of numbers in different bases is described in the chart below.
When typing numbers, formatting characters are used to describe the number system, for example 0000_0000B or 0b0000_0000 for binary and 0F8H or 0xF8 for hexadecimal numbers.
Converting between bases
Each of these number systems is a positional system, but while decimal weights are powers of 10, the octal weights are powers of 8 and the hexadecimal weights are powers of 16. To convert from hexadecimal or octal to decimal, for each digit one multiplies the value of the digit by the value of its position and then adds the results. For example: octal 756 = 7 × 8² + 5 × 8¹ + 6 × 8⁰ = 448 + 40 + 6 = decimal 494, and hexadecimal 3B2 = 3 × 16² + 11 × 16¹ + 2 × 16⁰ = 768 + 176 + 2 = decimal 946.
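The positional rule is easy to check in any language with base-prefixed literals; this Python sketch (illustrative only) mirrors the worked conversions above:

# Literal prefixes mirror the formatting characters described above.
print(0b1001001101010001)  # binary literal -> 37713
print(0o756, 0x3B2)        # octal 756 -> 494, hexadecimal 3B2 -> 946

# The positional rule written out for hexadecimal 3B2 (digit B has value 11).
print(3 * 16**2 + 11 * 16**1 + 2 * 16**0)  # 946
print(int("3B2", 16))                      # 946, the same conversion

# Converting the other way.
print(oct(494), hex(946), bin(37713))  # 0o756 0x3b2 0b1001001101010001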
Representing fractions in binary
Fixed-point numbers
Fixed-point formatting can be useful to represent fractions in binary.
The number of bits needed for the precision and range desired must be chosen to store the fractional and integer parts of a number. For instance, using a 32-bit format, 16 bits may be used for the integer and 16 for the fraction.
The eight's bit is followed by the four's bit, then the two's bit, then the one's bit. The fractional bits continue the pattern set by the integer bits. The next bit is the half's bit, then the quarter's bit, then the ⅛'s bit, and so on. For example, binary 0101.1010 = 4 + 1 + 1/2 + 1/8 = 5.625 in decimal.
This form of encoding cannot represent some values exactly in binary. For example, for the fraction 1/5 (0.2 in decimal), the closest approximations would be as follows: 0.0011 in binary is 0.1875, 0.00110011 is 0.19921875, and so on; the pattern 0011 repeats forever without ever reaching 0.2 exactly.
Even if more digits are used, an exact representation is impossible. The number 1/3, written in decimal as 0.333333333..., continues indefinitely. If prematurely terminated, the value would not represent 1/3 precisely.
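The rounding that fixed-point storage forces can be demonstrated directly. The following Python sketch quantizes 0.2 and 1/3 into a hypothetical "16.16" format (16 integer bits and 16 fractional bits, chosen only for the example):

FRAC_BITS = 16  # 16.16 fixed point: 16 integer bits, 16 fractional bits

def to_fixed(x):
    # Scale by 2**16 and round to the nearest representable value.
    return round(x * (1 << FRAC_BITS))

def from_fixed(n):
    return n / (1 << FRAC_BITS)

for x in (0.2, 1 / 3):
    q = to_fixed(x)
    print(f"{x:.10f} -> raw 0x{q:04X} -> {from_fixed(q):.10f}")
# 0.2000000000 -> raw 0x3333 -> 0.1999969482  (13107/65536, not exactly 0.2)
# 0.3333333333 -> raw 0x5555 -> 0.3333282471  (21845/65536, not exactly 1/3)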
Floating-point numbers
While both unsigned and signed integers are used in digital systems, even a 32-bit integer is not enough to handle all the range of numbers a calculator can handle, and that's not even including fractions. To approximate the greater range and precision of real numbers, we have to abandon signed integers and fixed-point numbers and go to a "floating-point" format.
In the decimal system, we are familiar with floating-point numbers of the form (scientific notation):
1.1030402 × 10⁵ = 1.1030402 × 100000 = 110304.02
or, more compactly:
1.1030402E5
which means "1.1030402 times 1 followed by 5 zeroes". We have a certain numeric value (1.1030402) known as a "significand", multiplied by a power of 10 (E5, meaning 105 or 100,000), known as an "exponent". If we have a negative exponent, that means the number is multiplied by a 1 that many places to the right of the decimal point. For example:
2.3434E−6 = 2.3434 × 10⁻⁶ = 2.3434 × 0.000001 = 0.0000023434
The advantage of this scheme is that by using the exponent we can get a much wider range of numbers, even if the number of digits in the significand, or the "numeric precision", is much smaller than the range. Similar binary floating-point formats can be defined for computers. There are a number of such schemes; the most popular has been defined by the Institute of Electrical and Electronics Engineers (IEEE). The IEEE 754-2008 standard specification defines a 64 bit floating-point format with:
an 11-bit binary exponent, using "excess-1023" format. Excess-1023 means the exponent appears as an unsigned binary integer from 0 to 2047; subtracting 1023 gives the actual signed value
a 52-bit significand, also an unsigned binary number, defining a fractional value with a leading implied "1"
a sign bit, giving the sign of the number.
With the bits stored in 8 bytes of memory, the layout from most significant to least significant is S xxx...xxx mmm...mmm (one sign bit, 11 "x" bits, 52 "m" bits),
where "S" denotes the sign bit, "x" denotes an exponent bit, and "m" denotes a significand bit. Once the bits here have been extracted, they are converted with the computation:
<sign> × (1 + <fractional significand>) × 2^(<exponent> − 1023)
This scheme provides numbers valid out to about 15 decimal digits, with the following range of numbers: magnitudes from about 2.2 × 10⁻³⁰⁸ (the smallest normal value) up to about 1.8 × 10³⁰⁸.
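The field extraction can be seen concretely with Python's struct module; this sketch assumes a normal, finite value (it ignores zeros, subnormals, infinities, and NaNs):

import struct

def decode_double(x):
    # Reinterpret the 8 bytes of a 64-bit float as a 64-bit unsigned integer.
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63                     # 1 sign bit
    exponent = (bits >> 52) & 0x7FF       # 11 bits, excess-1023 format
    significand = bits & ((1 << 52) - 1)  # 52 bits, with an implied leading 1
    # <sign> * (1 + <fractional significand>) * 2^(<exponent> - 1023)
    value = (-1) ** sign * (1 + significand / 2**52) * 2.0 ** (exponent - 1023)
    return sign, exponent, value

print(decode_double(110304.02))
# (0, 1039, 110304.02): the biased exponent 1039 means 2**(1039 - 1023) = 2**16,
# and indeed 2**16 <= 110304.02 < 2**17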
The specification also defines several special values that are not defined numbers, and are known as NaNs, for "Not A Number". These are used by programs to designate invalid operations and the like.
Some programs also use 32-bit floating-point numbers. The most common scheme uses a 23-bit significand with a sign bit, plus an 8-bit exponent in "excess-127" format, giving seven valid decimal digits.
The bits are converted to a numeric value with the computation:
<sign> × (1 + <fractional significand>) × 2^(<exponent> − 127)
leading to the following range of numbers: magnitudes from about 1.2 × 10⁻³⁸ (the smallest normal value) up to about 3.4 × 10³⁸.
Such floating-point numbers are known as "reals" or "floats" in general, but with a number of variations:
A 32-bit float value is sometimes called a "real32" or a "single", meaning "single-precision floating-point value".
A 64-bit float is sometimes called a "real64" or a "double", meaning "double-precision floating-point value".
The relation between numbers and bit patterns is chosen for convenience in computer manipulation; eight bytes stored in computer memory may represent a 64-bit real, two 32-bit reals, or four signed or unsigned integers, or some other kind of data that fits into eight bytes. The only difference is how the computer interprets them. If the computer stored four unsigned integers and then read them back from memory as a 64-bit real, it almost always would be a perfectly valid real number, though it would be junk data.
Only a finite range of real numbers can be represented with a given number of bits. Arithmetic operations can overflow or underflow, producing a value too large or too small to be represented.
The representation has a limited precision. For example, only 15 decimal digits can be represented with a 64-bit real. If a very small floating-point number is added to a large one, the result is just the large one. The small number was too small to even show up in 15 or 16 digits of resolution, and the computer effectively discards it. Analyzing the effect of limited precision is a well-studied problem. Estimates of the magnitude of round-off errors and methods to limit their effect on large calculations are part of any large computation project. The precision limit is different from the range limit, as it affects the significand, not the exponent.
The significand is a binary fraction that doesn't necessarily perfectly match a decimal fraction. In many cases a sum of reciprocal powers of 2 does not match a specific decimal fraction, and the results of computations will be slightly off. For example, the decimal fraction "0.1" is equivalent to an infinitely repeating binary fraction: 0.000110011 ...
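Both effects are easy to demonstrate; the following Python sketch is illustrative:

from decimal import Decimal

# Absorption: 1.0 is below the resolution of a 64-bit real near 1e16,
# so adding it changes nothing.
big = 1.0e16
print(big + 1.0 == big)  # True

# "0.1" has no exact binary representation; the nearest double is slightly high.
print(Decimal(0.1))      # 0.1000000000000000055511151231257827021181583404541015625
print(0.1 + 0.2 == 0.3)  # False, because of the accumulated representation error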
Numbers in programming languages
Programming in assembly language requires the programmer to keep track of the representation of numbers. Where the processor does not support a required mathematical operation, the programmer must work out a suitable algorithm and instruction sequence to carry out the operation; on some microprocessors, even integer multiplication must be done in software.
High-level programming languages such as Ruby and Python offer an abstract number that may be an expanded type such as rational, bignum, or complex. Mathematical operations are carried out by library routines provided by the implementation of the language. A given mathematical symbol in the source code, by operator overloading, will invoke different object code appropriate to the representation of the numerical type; mathematical operations on any number—whether signed, unsigned, rational, floating-point, fixed-point, integral, or complex—are written exactly the same way.
Some languages, such as REXX and Java, provide decimal floating-point operations, which introduce rounding errors of a different form.
See also
Arbitrary-precision arithmetic
Binary-coded decimal
Binary-to-text encoding
Binary number
Gray code
Numeral system
Notes and references
Computer arithmetic
Numeral systems | Computer number format | [
"Mathematics"
] | 2,623 | [
"Mathematical objects",
"Computer arithmetic",
"Numeral systems",
"Arithmetic",
"Numbers"
] |
48,668 | https://en.wikipedia.org/wiki/Perlite | Perlite is an amorphous volcanic glass that has a relatively high water content, typically formed by the hydration of obsidian. It occurs naturally and has the unusual property of greatly expanding when heated sufficiently. It is an industrial mineral, suitable "as ceramic flux to lower the sintering temperature", and a commercial product useful for its low density after processing.
Properties
Perlite softens when it reaches temperatures of 850–900 °C (1,560–1,650 °F). Water trapped in the structure of the material vaporises and escapes, and this causes the expansion of the material to 7–16 times its original volume. The expanded material is a brilliant white, due to the reflectivity of the trapped bubbles. Unexpanded ("raw") perlite has a bulk density around 1100 kg/m3 (1.1 g/cm3), while typical expanded perlite has a bulk density of about 30–150 kg/m3 (0.03–0.150 g/cm3).
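As a quick consistency check (illustrative arithmetic only, ignoring packing effects), dividing the raw bulk density by the quoted expansion factors roughly reproduces the expanded-density range; a Python sketch:

raw = 1100  # kg/m^3, unexpanded bulk density
for factor in (7, 16):
    print(f"{factor}x expansion -> about {raw / factor:.0f} kg/m^3")
# 7x -> about 157 kg/m^3, 16x -> about 69 kg/m^3,
# comparable to the quoted 30-150 kg/m^3 for typical expanded perlite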
Typical analysis
70–75% silicon dioxide: SiO2
12–15% aluminium oxide: Al2O3
3–4% sodium oxide: Na2O
3–5% potassium oxide: K2O
0.5-2% iron oxide: Fe2O3
0.2–0.7% magnesium oxide: MgO
0.5–1.5% calcium oxide: CaO
3–5% loss on ignition (chemical / combined water)
Sources and production
Perlite is a non-renewable resource. The world reserves of perlite are estimated at 700 million tonnes.
The confirmed resources of perlite existing in Armenia amount to 150 million m3, whereas the total amount of projected resources reaches up to 3 billion m3. Considering a specific density of 1.1 t/m3, the confirmed reserves in Armenia amount to 165 million tonnes.
Other reported reserves are: Greece - 120 million tonnes, Turkey, USA and Hungary - about 49-57 million tonnes.
Perlite world production, led by China, Turkey, Greece, USA, Armenia and Hungary, summed up to 4.6 million tonnes in 2018.
The Osham hills of Patanvav, Gujarat, are the only source of mineral perlite in India.
Uses
Because of its low density and relatively low price (about US$150 per tonne of unexpanded perlite), many commercial applications for perlite have been developed.
Construction and manufacturing
In the construction and manufacturing fields, it is used in lightweight plasters, concrete and mortar, insulation and ceiling tiles. It may also be used to build composite materials that are sandwich-structured or to create syntactic foam.
Perlite filters are fairly common in filtering beer before it is bottled.
Small quantities of perlite are also used in foundries, cryogenic insulation, and ceramics (as a clay additive). It is also used by the explosives industry.
Aquatic filtration
Perlite is currently used in commercial pool filtration technology as a replacement for diatomaceous earth filters; it is an excellent filtration aid, and its popularity as a filter medium is growing considerably worldwide. Several products exist in the market to provide perlite-based filtration, and several perlite filters and perlite media have met NSF-50 approval (Aquify PMF Series and AquaPerl), which standardizes water quality and technology safety and performance. Perlite can be safely disposed of through existing sewage systems, although some pool operators choose to separate the perlite using settling tanks or screening systems to be disposed of separately.
Biotechnology
Due to thermal and mechanical stability, non-toxicity, and high resistance against microbial attacks and organic solvents, perlite is widely used in biotechnological applications. Perlite was found to be an excellent support for immobilization of biocatalysts such as enzymes for bioremediation and sensing applications.
Agriculture
In horticulture, perlite can be used as a soil amendment or alone as a medium for hydroponics or for starting cuttings. When used as an amendment, it has high permeability and low water retention and helps prevent soil compaction.
Cosmetics
Perlite is used in cosmetics as an absorbent and mechanical exfoliant.
Substitutes
Perlite can be replaced for all of its uses. Substitutes include:
Diatomite, used for filter-aids
Expanded clay, an alternative lightweight filler for building materials
Shale
Pumice
Slag
Vermiculite - many expanders of perlite are also exfoliating vermiculite and belong to both trade associations
Occupational safety
As perlite contains silicon dioxide, goggles and silica filtering masks are recommended when handling large quantities.
United States
The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for perlite exposure in the workplace as 15 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 10 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday.
See also
Biochar
Foam glass
Industrial minerals
Mortar (firestop)
Vermiculite
References
External links
The Perlite Institute
Mineral Information Institute – perlite
"That Wonderful Volcanic Popcorn." Popular Mechanics, December 1954, p. 136.
CDC – NIOSH Pocket Guide to Chemical Hazards
Felsic rocks
Vitreous rocks
Building stone
Soil improvers
Industrial minerals
"Chemistry"
] | 1,121 | [
"Felsic rocks",
"Igneous rocks by composition"
] |
Quantum suicide and immortality
Quantum suicide is a thought experiment in quantum mechanics and the philosophy of physics. Purportedly, it can falsify any interpretation of quantum mechanics other than the Everett many-worlds interpretation by means of a variation of the Schrödinger's cat thought experiment, from the cat's point of view. Quantum immortality refers to the subjective experience of surviving quantum suicide. This concept is sometimes conjectured to be applicable to real-world causes of death as well.
As a thought experiment, quantum suicide is an intellectual exercise in which an abstract setup is followed through to its logical consequences merely to prove a theoretical point. Virtually all physicists and philosophers of science who have described it, especially in popularized treatments, underscore that it relies on contrived, idealized circumstances that may be impossible or exceedingly difficult to realize in real life, and that its theoretical premises are controversial even among supporters of the many-worlds interpretation. Thus, as cosmologist Anthony Aguirre warns, "[...] it would be foolish (and selfish) in the extreme to let this possibility guide one's actions in any life-and-death question."
History
Hugh Everett did not mention quantum suicide or quantum immortality in writing; his work was intended as a solution to the paradoxes of quantum mechanics. Eugene Shikhovtsev's biography of Everett states that "Everett firmly believed that his many-worlds theory guaranteed him immortality: his consciousness, he argued, is bound at each branching to follow whatever path does not lead to death". Peter Byrne, author of a biography of Everett, reports that Everett also privately discussed quantum suicide (such as to play high-stakes Russian roulette and survive in the winning branch), but adds that "[i]t is unlikely, however, that Everett subscribed to this [quantum immortality] view, as the only sure thing it guarantees is that the majority of your copies will die, hardly a rational goal."
Among scientists, the thought experiment was introduced by Euan Squires in 1986. Afterwards, it was published independently by Hans Moravec in 1987 and Bruno Marchal in 1988; it was also described by Huw Price in 1997, who credited it to Dieter Zeh, and independently presented formally by Max Tegmark in 1998. It was later discussed by philosophers Peter J. Lewis in 2000 and David Lewis in 2001.
Thought experiment
The quantum suicide thought experiment involves a similar apparatus to Schrödinger's cat – a box which kills the occupant in a given time frame with probability one-half due to quantum uncertainty. The only difference is to have the experimenter recording observations be the one inside the box. The significance of this thought experiment is that someone whose life or death depends on a qubit could possibly distinguish between interpretations of quantum mechanics. By definition, fixed observers cannot.
At the start of the first iteration, under both interpretations, the probability of surviving the experiment is 50%, as given by the squared norm of the wave function. At the start of the second iteration, assuming a single-world interpretation of quantum mechanics (like the widely-held Copenhagen interpretation) is true, the wave function has already collapsed; thus, if the experimenter is already dead, there is a 0% chance of survival for any further iterations. However, if the many-worlds interpretation is true, a superposition of the live experimenter necessarily exists (as also does the one who dies). Now, barring the possibility of life after death, after every iteration only one of the two experimenter superpositions – the live one – is capable of having any sort of conscious experience. Putting aside the philosophical problems associated with individual identity and its persistence, under the many-worlds interpretation, the experimenter, or at least a version of them, continues to exist through all of their superpositions where the outcome of the experiment is that they live. In other words, a version of the experimenter survives all iterations of the experiment. Since the superpositions where a version of the experimenter lives occur by quantum necessity (under the many-worlds interpretation), it follows that their survival, after any realizable number of iterations, is physically necessary; hence, the notion of quantum immortality.
A version of the experimenter surviving stands in stark contrast to the implications of the Copenhagen interpretation, according to which, although the survival outcome is possible in every iteration, its probability tends towards zero as the number of iterations increases. According to the many-worlds interpretation, the above scenario has the opposite property: the probability of a version of the experimenter living is necessarily one for any number of iterations.
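The contrast can be stated as a one-line computation (standard probability, added here for concreteness): under a single-world interpretation the chance of surviving n independent iterations is

P_{\mathrm{survive}}(n) = \left(\frac{1}{2}\right)^{n} \longrightarrow 0 \quad (n \to \infty),

whereas under the many-worlds interpretation a branch containing a live version of the experimenter exists with certainty for every n.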
In the book Our Mathematical Universe, Max Tegmark lays out three criteria that, in abstract, a quantum suicide experiment must fulfill:
The random number generator must be quantum, not deterministic, so that the experimenter enters a state of superposition of being dead and alive.
The experimenter must be rendered dead (or at least unconscious) on a time scale shorter than that on which they can become aware of the outcome of the quantum measurement.
The experiment must be virtually certain to kill the experimenter, and not merely injure them.
Analysis of real-world feasibility
In response to questions about "subjective immortality" from normal causes of death, Tegmark suggested that the flaw in that reasoning is that dying is not a binary event as in the thought experiment; it is a progressive process, with a continuum of states of decreasing consciousness. He states that in most real causes of death, one experiences such a gradual loss of self-awareness. It is only within the confines of an abstract scenario that an observer finds they defy all odds. Referring to the above criteria, he elaborates as follows: "[m]ost accidents and common causes of death clearly don't satisfy all three criteria, suggesting you won't feel immortal after all. In particular, regarding criterion 2, under normal circumstances dying isn't a binary thing where you're either alive or dead [...] What makes the quantum suicide work is that it forces an abrupt transition."
David Lewis' commentary and subsequent criticism
The philosopher David Lewis explored the possibility of quantum immortality in a 2001 lecture titled "How Many Lives Has Schrödinger's Cat?", his first academic foray into the field of the interpretation of quantum mechanics – and his last, due to his death less than four months afterwards. In the lecture, published posthumously in 2004, Lewis rejected the many-worlds interpretation, allowing that it offers initial theoretical attractions, but also arguing that it suffers from irremediable flaws, mainly regarding probabilities, and came to tentatively endorse the Ghirardi–Rimini–Weber theory instead. Lewis concluded the lecture by stating that the quantum suicide thought experiment, if applied to real-world causes of death, would entail what he deemed a "terrifying corollary": as all causes of death are ultimately quantum-mechanical in nature, if the many-worlds interpretation were true, in Lewis' view an observer should subjectively "expect with certainty to go on forever surviving whatever dangers [he or she] may encounter", as there will always be possibilities of survival, no matter how unlikely; faced with branching events of survival and death, an observer should not "equally expect to experience life and death", as there is no such thing as experiencing death, and should thus divide his or her expectations only among branches where he or she survives. If survival is guaranteed, however, this is not the case for good health or integrity. This would lead to a Tithonus-like deterioration of one's body that continues indefinitely, leaving the subject forever just short of death.
Interviewed for the 2004 book Schrödinger's Rabbits, Tegmark rejected this scenario for the reason that "the fading of consciousness is a continuous process. Although I cannot experience a world line in which I am altogether absent, I can enter one in which my speed of thought is diminishing, my memories and other faculties fading [...] [Tegmark] is confident that even if he cannot die all at once, he can gently fade away." In the same book, philosopher of science and many-worlds proponent David Wallace undermines the case for real-world quantum immortality on the basis that death can be understood as a continuum of decreasing states of consciousness not only in time, as argued by Tegmark, but also in space: "our consciousness is not located at one unique point in the brain, but is presumably a kind of emergent or holistic property of a sufficiently large group of neurons [...] our consciousness might not be able to go out like a light, but it can dwindle exponentially until it is, for all practical purposes, gone."
Directly responding to Lewis' lecture, British philosopher and many-worlds proponent David Papineau, while finding Lewis' other objections to the many-worlds interpretation lacking, strongly denies that any modification to the usual probability rules is warranted in death situations. Assured subjective survival can follow from the quantum suicide idea only if an agent reasons in terms of "what will be experienced next" instead of the more obvious "what will happen next, whether it will be experienced or not". He writes: "[I]t is by no means obvious why Everettians should modify their intensity rule in this way. For it seems perfectly open for them to apply the unmodified intensity rule in life-or-death situations, just as elsewhere. If they do this, then they can expect all futures in proportion to their intensities, whether or not those futures contain any of their live successors. For example, even when you know you are about to be the subject in a fifty-fifty Schrödinger’s experiment, you should expect a future branch where you perish, to just the same degree as you expect a future branch where you survive."
On a similar note, quoting Lewis' position that death should not be expected as an experience, philosopher of science Charles Sebens concedes that, in a quantum suicide experiment, "[i]t is tempting to think you should expect survival with certainty." However, he remarks that expectation of survival could follow only if the quantum branching and death were absolutely simultaneous, otherwise normal chances of death apply: "[i]f death is indeed immediate on all branches but one, the thought has some plausibility. But if there is any delay it should be rejected. In such a case, there is a short period of time when there are multiple copies of you, each (effectively) causally isolated from the others and able to assign a credence to being the one who will live. Only one will survive. Surely rationality does not compel you to be maximally optimistic in such a scenario." Sebens also explores the possibility that death might not be simultaneous to branching, but still faster than a human can mentally realize the outcome of the experiment. Again, an agent should expect to die with normal probabilities: "[d]o the copies need to last long enough to have thoughts to cause trouble? I think not. If you survive, you can consider what credences you should have assigned during the short period after splitting when you coexisted with the other copies."
Writing in the journal Ratio, philosopher István Aranyosi, while noting that "[the] tension between the idea of states being both actual and probable is taken as the chief weakness of the many-worlds interpretation of quantum mechanics," summarizes that most of the critical commentary of Lewis' immortality argument has revolved around its premises. But even if, for the sake of argument, one were willing to entirely accept Lewis' assumptions, Aranyosi strongly denies that the "terrifying corollary" would be the correct implication of said premises. Instead, the two scenarios that would most likely follow would be what Aranyosi describes as the "comforting corollary", in which an observer should never expect to get very sick in the first place, or the "momentary life" picture, in which an observer should expect "eternal life, spent almost entirely in an unconscious state", punctuated by extremely brief, amnesiac moments of consciousness. Thus, Aranyosi concludes that while "[w]e can't assess whether one or the other [of the two alternative scenarios] gets the lion's share of the total intensity associated with branches compatible with self-awareness, [...] we can be sure that they together (i.e. their disjunction) do indeed get the lion's share, which is much reassuring."
Analysis by other proponents of the many-worlds interpretation
Physicist David Deutsch, though a proponent of the many-worlds interpretation, states regarding quantum suicide that "that way of applying probabilities does not follow directly from quantum theory, as the usual one does. It requires an additional assumption, namely that when making decisions one should ignore the histories in which the decision-maker is absent. [...] [M]y guess is that the assumption is false."
Tegmark now believes experimenters should only expect a normal probability of survival, not immortality. The experimenter's probability amplitude in the wavefunction decreases significantly, meaning they exist with a much lower measure than they had before. Per the anthropic principle, a person is less likely to find themselves in a world where they are less likely to exist, that is, a world with a lower measure has a lower probability of being observed by them. Therefore, the experimenter will have a lower probability of observing the world in which they survive than the earlier world in which they set up the experiment. This same problem of reduced measure was pointed out by Lev Vaidman in the Stanford Encyclopedia of Philosophy. In the 2001 paper, "Probability and the many-worlds interpretation of quantum theory", Vaidman writes that an agent should not agree to undergo a quantum suicide experiment: "The large 'measures' of the worlds with dead successors is a good reason not to play." Vaidman argues that it is the instantaneity of death that may seem to imply subjective survival of the experimenter, but that normal probabilities nevertheless must apply even in this special case: "[i]ndeed, the instantaneity makes it difficult to establish the probability postulate, but after it has been justified in the wide range of other situations it is natural to apply the postulate for all cases."
In his 2013 book The Emergent Multiverse, Wallace opines that the reasons for expecting subjective survival in the thought experiment "do not really withstand close inspection", although he concedes that it would be "probably fair to say [...] that precisely because death is philosophically complicated, my objections fall short of being a knock-down refutation". Besides re-stating that there appears to be no motive to reason in terms of expectations of experience instead of expectations of what will happen, he suggests that a decision-theoretic analysis shows that "an agent who prefers certain life to certain death is rationally compelled to prefer life in high-weight branches and death in low-weight branches to the opposite."
Physicist Sean M. Carroll, another proponent of the many-worlds interpretation, states regarding quantum suicide that neither experiences nor rewards should be thought of as being shared between future versions of oneself, as they become distinct persons when the world splits. He further states that one cannot pick out some future versions of oneself as "really you" over others, and that quantum suicide still cuts off the existence of some of these future selves, which would be worth objecting to just as if there were a single world.
Analysis by skeptics of the many-worlds interpretation
Cosmologist Anthony Aguirre, while personally skeptical of most accounts of the many-worlds interpretation, in his book Cosmological Koans writes that "[p]erhaps reality actually is this bizarre, and we really do subjectively 'survive' any form of death that is both instantaneous and binary." Aguirre notes, however, that most causes of death do not fulfill these two requirements: "If there are degrees of survival, things are quite different." If loss of consciousness was binary like in the thought experiment, the quantum suicide effect would prevent an observer from subjectively falling asleep or undergoing anesthesia, conditions in which mental activities are greatly diminished but not altogether abolished. Consequently, upon most causes of death, even outwardly sudden, if the quantum suicide effect holds true an observer is more likely to progressively slip into an attenuated state of consciousness, rather than remain fully awake by some very improbable means. Aguirre further states that quantum suicide as a whole might be characterized as a sort of reductio ad absurdum against the current understanding of both the many-worlds interpretation and theory of mind. He finally hypothesizes that a different understanding of the relationship between the mind and time should remove the bizarre implications of necessary subjective survival.
Physicist and writer Philip Ball, a critic of the many-worlds interpretation, in his book Beyond Weird, describes the quantum suicide experiment as "cognitively unstable" and an example of the difficulties of the many-worlds theory with probabilities. While he acknowledges Lev Vaidman's argument that an experimenter should subjectively expect outcomes in proportion to the "measure of existence" of the worlds in which they happen, Ball ultimately rejects this explanation. "What this boils down to is the interpretation of probabilities in the MWI. If all outcomes occur with 100% probability, where does that leave the probabilistic character of quantum mechanics?" Furthermore, Ball explains that such arguments highlight what he recognizes as another major problem of the many-worlds interpretation, connected but independent from the issue of probability: the incompatibility with the notion of selfhood. Ball ascribes most attempts to justify probabilities in the many-worlds interpretation to "saying that quantum probabilities are just what quantum mechanics look like when consciousness is restricted to only one world" but that "there is in fact no meaningful way to explain or justify such a restriction." Before performing a quantum measurement, an "Alice Before" experimenter "can't use quantum mechanics to predict what will happen to her in a way that can be articulated – because there is no logical way to talk about 'her' at any moment except the conscious present (which, in a frantically splitting universe, doesn't exist). Because it is logically impossible to connect the perceptions of Alice Before to Alice After [the experiment], "Alice" has disappeared. [...] [The MWI] eliminates any coherent notion of what we can experience, or have experienced, or are experiencing right now."
Philosopher of science Peter J. Lewis, a critic of the many-worlds interpretation, considers the whole thought experiment an example of the difficulty of accommodating probability within the many-worlds framework: "Standard quantum mechanics yields probabilities for various future occurrences, and these probabilities can be fed into an appropriate decision theory. But if every physically possible consequence of the current state of affairs is certain to occur, on what basis should I decide what to do? For example, if I point a gun at my head and pull the trigger, it looks like Everett's theory entails that I am certain to survive—and that I am certain to die. This is at least worrying, and perhaps rationally disabling." In his book Quantum Ontology, Lewis explains that for the subjective immortality argument to be drawn out of the many-worlds theory, one has to adopt an understanding of probability – the so-called "branch-counting" approach, in which an observer can meaningfully ask "which post-measurement branch will I end up on?" – that is ruled out by experimental, empirical evidence as it would yield probabilities that do not match the well-confirmed Born rule. Lewis identifies instead in the Deutsch-Wallace decision-theoretic analysis the most promising (although still, to his judgement, incomplete) way of addressing probabilities in the many-worlds interpretation, in which it is not possible to count branches (and, similarly, the persons that "end up" on each branch). Lewis concludes that the immortality argument "is perhaps best viewed as a dramatic demonstration of the fundamental conflict between branch-counting (or person-counting) intuitions about probability and the decision theoretic approach. The many-worlds theory, to the extent that it is viable, does not entail that you should expect to live forever."
See also
Multiverse
Quarantine – novel by Australian sci-fi author Greg Egan which explores the Copenhagen interpretation of quantum mechanics, suicide and immortality.
Immortality
Explanatory notes
References
Consciousness
Immortality
Multiverse
Quantum measurement
Suicide
Thought experiments in quantum mechanics
"Physics",
"Astronomy",
"Biology"
] | 4,308 | [
"Astronomical hypotheses",
"Behavior",
"Human behavior",
"Quantum mechanics",
"Quantum measurement",
"Thought experiments in quantum mechanics",
"Multiverse",
"Suicide"
] |
GNU Octave
GNU Octave is a programming language for scientific computing and numerical computation. Octave helps in solving linear and nonlinear problems numerically, and in performing other numerical experiments using a language that is mostly compatible with MATLAB. It may also be used as a batch-oriented language. As part of the GNU Project, it is free software under the terms of the GNU General Public License.
History
The project was conceived around 1988. At first it was intended to be a companion to a chemical reactor design course. Full development was started by John W. Eaton in 1992. The first alpha release dates back to 4 January 1993 and on 17 February 1994 version 1.0 was released. Version 9.2.0 was released on 7 June 2024.
The program is named after Octave Levenspiel, a former professor of the principal author. Levenspiel was known for his ability to perform quick back-of-the-envelope calculations.
Developments
In addition to use on desktops for personal scientific computing, Octave is used in academia and industry. For example, Octave was used on a massively parallel computer at Pittsburgh Supercomputing Center to find vulnerabilities related to guessing social security numbers.
Acceleration with OpenCL or CUDA is also possible with the use of GPUs.
Technical details
Octave is written in C++ using the C++ standard library.
Octave uses an interpreter to execute the Octave scripting language.
Octave is extensible using dynamically loadable modules.
The Octave interpreter has an OpenGL-based graphics engine to create plots, graphs and charts and to save or print them. Alternatively, gnuplot can be used for the same purpose.
Octave includes a graphical user interface (GUI) in addition to the traditional command-line interface (CLI); see the User interfaces section below for details.
Octave, the language
The Octave language is an interpreted programming language. It is a structured programming language (similar to C) and supports many common C standard library functions, and also certain UNIX system calls and functions. However, it does not support passing arguments by reference, although function arguments are copy-on-write to avoid unnecessary duplication.
Octave programs consist of a list of function calls or a script. The syntax is matrix-based and provides various functions for matrix operations. It supports various data structures and allows object-oriented programming.
Its syntax is very similar to MATLAB, and careful programming of a script will allow it to run on both Octave and MATLAB.
Because Octave is made available under the GNU General Public License, it may be freely changed, copied and used. The program runs on Microsoft Windows and most Unix and Unix-like operating systems, including Linux, Android, and macOS.
Notable features
Command and variable name completion
Typing a TAB character on the command line causes Octave to attempt to complete variable, function, and file names (similar to Bash's tab completion). Octave uses the text before the cursor as the initial portion of the name to complete.
Command history
When running interactively, Octave saves the commands typed in an internal buffer so that they can be recalled and edited.
Data structures
Octave includes a limited amount of support for organizing data in structures. In this example, we see a structure with elements a, b, and c (an integer, an array, and a string, respectively):
octave:1> x.a = 1; x.b = [1, 2; 3, 4]; x.c = "string";
octave:2> x.a
ans = 1
octave:3> x.b
ans =
1 2
3 4
octave:4> x.c
ans = string
octave:5> x
x =
scalar structure containing the fields:
a = 1
b =
1 2
3 4
c = string
Short-circuit Boolean operators
Octave's && and || logical operators are evaluated in a short-circuit fashion (like the corresponding operators in the C language), in contrast to the element-by-element operators & and |.
Increment and decrement operators
Octave includes the C-like increment and decrement operators ++ and -- in both their prefix and postfix forms.
Octave also does augmented assignment, e.g. x += 5.
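For instance (an illustrative session, not from the documentation):

octave:1> x = 5;
octave:2> x++;    # increment x to 6
octave:3> x += 5  # augmented assignment, displays the new value
x = 11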
Unwind-protect
Octave supports a limited form of exception handling modelled after the unwind_protect of Lisp. The general form of an unwind_protect block looks like this:
unwind_protect
body
unwind_protect_cleanup
cleanup
end_unwind_protect
As a general rule, GNU Octave recognizes as termination of a given block either the keyword end (which is compatible with the MATLAB language) or a more specific keyword endblock or, in some cases, end_block. As a consequence, an unwind_protect block can be terminated either with the keyword end_unwind_protect as in the example, or with the more portable keyword end.
The cleanup part of the block is always executed. In case an exception is raised by the body part, cleanup is executed immediately before propagating the exception outside the block unwind_protect.
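As a concrete sketch (the file name and the deliberate error are hypothetical), the cleanup part guarantees that a resource such as a file handle is released even when the body fails:

fid = fopen ("data.txt", "w");
unwind_protect
  fprintf (fid, "value: %d\n", 42);
  error ("simulated failure");   # the body raises an exception here
unwind_protect_cleanup
  fclose (fid);                  # always executed, so the handle is not leaked
end_unwind_protect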
GNU Octave also supports another form of exception handling (compatible with the MATLAB language):
try
body
catch
exception_handling
end
This latter form differs from an unwind_protect block in two ways. First, exception_handling is only executed when an exception is raised by body. Second, after the execution of exception_handling the exception is not propagated outside the block (unless a rethrow( lasterror ) statement is explicitly inserted within the exception_handling code).
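A short sketch of this second form, including the explicit rethrow mentioned above (the out-of-range index is just a convenient way to trigger an exception):

try
  x = [1, 2, 3](5);    # out-of-range index raises an error
catch
  err = lasterror ();  # inspect the error inside the handler
  disp (err.message)
  rethrow (err);       # propagate it outside the block
end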
Variable-length argument lists
Octave has a mechanism for handling functions that take an unspecified number of arguments without explicit upper limit. To specify a list of zero or more arguments, use the special argument varargin as the last (or only) argument in the list. varargin is a cell array containing all the input arguments.
function s = plus (varargin)
if (nargin==0)
s = 0;
else
s = varargin{1} + plus (varargin{2:nargin});
end
end
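Defined this way, the function shadows the built-in plus; each recursive step consumes one argument, so calling it with several arguments sums them all (illustrative session):

octave:1> plus (1, 2, 3, 4)
ans = 10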
Variable-length return lists
A function can be set up to return any number of values by using the special return value varargout. For example:
function varargout = multiassign (data)
for k=1:nargout
varargout{k} = data(:,k);
end
end
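With a two-column matrix and two requested outputs, each output receives one column (illustrative session):

octave:1> [a, b] = multiassign ([1, 2; 3, 4])
a =

   1
   3

b =

   2
   4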
C++ integration
It is also possible to execute Octave code directly in a C++ program. For example, here is a code snippet for calling rand([10,1]):
#include <octave/oct.h>
...
ColumnVector NumRands(2);
NumRands(0) = 10;
NumRands(1) = 1;
octave_value_list f_arg, f_ret;
f_arg(0) = octave_value(NumRands);
f_ret = feval("rand", f_arg, 1);
Matrix unis(f_ret(0).matrix_value());
C and C++ code can be integrated into GNU Octave by creating oct files, or by using the MATLAB-compatible MEX files.
MATLAB compatibility
Octave has been built with MATLAB compatibility in mind, and shares many features with MATLAB:
Matrices as fundamental data type.
Built-in support for complex numbers.
Powerful built-in math functions and extensive function libraries.
Extensibility in the form of user-defined functions.
Octave treats incompatibility with MATLAB as a bug; therefore, it could be considered a software clone, which does not infringe software copyright, as per the Lotus v. Borland court case.
MATLAB scripts from the MathWorks' FileExchange repository are in principle compatible with Octave. However, although they are often provided and uploaded by users under an Octave-compatible and properly open-source BSD license, the FileExchange terms of use prohibit any usage besides MathWorks' proprietary MATLAB.
Syntax compatibility
There are a few purposeful, albeit minor, syntax additions:
Comment lines can be prefixed with the # character as well as the % character;
Various C-based operators ++, --, +=, *=, /= are supported;
Elements can be referenced without creating a new variable by cascaded indexing, e.g. [1:10](3);
Strings can be defined with the double-quote " character as well as the single-quote ' character;
When the variable type is single (a single-precision floating-point number), Octave calculates the "mean" in the single-domain (MATLAB in double-domain) which is faster but gives less accurate results;
Blocks can also be terminated with more specific control structure keywords, i.e., endif, endfor, endwhile, etc.;
Functions can be defined within scripts and at the Octave prompt;
Presence of a do-until loop (similar to do-while in C); several of these additions are exercised in the sketch after this list.
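A short sketch (illustrative, not from the manual) exercising several of the additions above at once:

# a comment prefixed with '#' instead of '%'
msg = "a double-quoted string";
total = 0;
k = 0;
do
  k++;           # C-style increment
  total += k;    # augmented assignment
until (k >= 10)
disp (total)     # prints 55
third = [1:10](3);   # cascaded indexing without a temporary variable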
Function compatibility
Many, but not all, of the numerous MATLAB functions are available in GNU Octave, some of them accessible through packages in Octave Forge. The functions available as part of either core Octave or Forge packages are listed online.
A list of unavailable functions is included in the Octave function __unimplemented.m__. Unimplemented functions are also listed under many Octave Forge packages in the Octave Wiki.
When an unimplemented function is called the following error message is shown:
octave:1> guide
warning: the 'guide' function is not yet implemented in Octave
Please read <http://www.octave.org/missing.html> to learn how you can contribute missing functionality.
error: 'guide' undefined near line 1 column 1
User interfaces
Octave comes with an official graphical user interface (GUI) and an integrated development environment (IDE) based on Qt. It has been available since Octave 3.8, and has become the default interface (over the command-line interface) with the release of Octave 4.0.
It was well-received by an EDN contributor, who wrote "[Octave] now has a very workable GUI" in reviewing the then-new GUI in 2014.
Several third-party graphical front-ends have also been developed, like ToolboX for coding education.
GUI applications
With Octave code, the user can create GUI applications; see the GUI Development chapter of the GNU Octave manual (version 7.1.0). Below are some examples:
Button, edit control, checkbox
# create figure and panel on it
f = figure;
# create a button (default style)
b1 = uicontrol (f, "string", "A Button", "position", [10 10 150 40]);
# create an edit control
e1 = uicontrol (f, "style", "edit", "string", "editable text", "position", [10 60 300 40]);
# create a checkbox
c1 = uicontrol (f, "style", "checkbox", "string", "a checkbox", "position", [10 120 150 40]);
Textbox
prompt = {"Width", "Height", "Depth"};
defaults = {"1.10", "2.20", "3.30"};
rowscols = [1,10; 2,20; 3,30];
dims = inputdlg (prompt, "Enter Box Dimensions", rowscols, defaults);
Listbox with message boxes
my_options = {"An item", "another", "yet another"};
[sel, ok] = listdlg ("ListString", my_options, "SelectionMode", "Multiple");
if (ok == 1)
  msgbox ("You selected:");
  for i = 1:numel (sel)
    msgbox (sprintf ("\t%s", my_options{sel(i)}));
  endfor
else
  msgbox ("You cancelled.");
endif
Radiobuttons
# create figure and panel on it
f = figure;
# create a button group
gp = uibuttongroup (f, "Position", [ 0 0.5 1 1]);
# create buttons in the group
b1 = uicontrol (gp, "style", "radiobutton", "string", "Choice 1", "Position", [ 10 150 100 50 ]);
b2 = uicontrol (gp, "style", "radiobutton", "string", "Choice 2", "Position", [ 10 50 100 30 ]);
# create a button not in the group
b3 = uicontrol (f, "style", "radiobutton", "string", "Not in the group", "Position", [ 10 50 100 50 ]);
Packages
Octave also has many packages available. Those packages are hosted at Octave Forge and on GitHub (Octave Packages). It is also possible for anyone to create and maintain packages.
Comparison with other similar software
Apart from the proprietary MATLAB itself, alternatives to GNU Octave under an open-source license include Scilab and FreeMat. Octave is more compatible with MATLAB than Scilab is, and FreeMat has not been updated since June 2013.
The Julia programming language and its plotting capabilities also have similarities with GNU Octave.
See also
List of numerical-analysis software
Comparison of numerical-analysis software
List of statistical packages
List of numerical libraries
Notes
References
Further reading
External links
Array programming languages
Articles with example MATLAB/Octave code
Cross-platform free software
Data analysis software
Data mining and machine learning software
Free educational software
Free mathematics software
Free software programmed in C++
Octave
Numerical analysis software for Linux
Numerical analysis software for macOS
Numerical analysis software for Windows
Numerical programming languages
Science software that uses Qt
Software that uses Qt
"Mathematics"
] | 2,935 | [
"Free mathematics software",
"Mathematical software"
] |
Henri Poincaré
Jules Henri Poincaré (29 April 1854 – 17 July 1912) was a French mathematician, theoretical physicist, engineer, and philosopher of science. He is often described as a polymath, and in mathematics as "The Last Universalist", since he excelled in all fields of the discipline as it existed during his lifetime. He has further been called "the Gauss of modern mathematics". Due to his success in science, along with his influence and philosophy, he has been called "the philosopher par excellence of modern science."
As a mathematician and physicist, he made many original fundamental contributions to pure and applied mathematics, mathematical physics, and celestial mechanics. In his research on the three-body problem, Poincaré became the first person to discover a chaotic deterministic system, which laid the foundations of modern chaos theory. Poincaré is regarded as the creator of the field of algebraic topology, and is further credited with introducing automorphic forms. He also made important contributions to algebraic geometry, number theory, complex analysis and Lie theory. He famously introduced the Poincaré recurrence theorem, which states that certain systems will, after a sufficiently long time, return to a state arbitrarily close to their initial state, a result with far-reaching consequences. Early in the 20th century he formulated the Poincaré conjecture, which became, over time, one of the famous unsolved problems in mathematics. It was eventually solved in 2002–2003 by Grigori Perelman. Poincaré popularized the use of non-Euclidean geometry in mathematics as well.
Poincaré made clear the importance of paying attention to the invariance of laws of physics under different transformations, and was the first to present the Lorentz transformations in their modern symmetrical form. Poincaré discovered the remaining relativistic velocity transformations and recorded them in a letter to Hendrik Lorentz in 1905. Thus he obtained perfect invariance of all of Maxwell's equations, an important step in the formulation of the theory of special relativity, a theory whose foundations he is also credited with laying down, further writing foundational papers in 1905. He first proposed gravitational waves (ondes gravifiques) emanating from a body and propagating at the speed of light as being required by the Lorentz transformations, doing so in 1905. In 1912, he wrote an influential paper which provided a mathematical argument for quantum mechanics. Poincaré also laid the seeds of the discovery of radioactivity through his interest and study of X-rays, which influenced physicist Henri Becquerel, who then discovered the phenomenon. The Poincaré group used in physics and mathematics was named after him, after he introduced the notion of the group.
Poincaré was considered the dominant figure in mathematics and theoretical physics during his time, and was the most respected mathematician of his time, being described as "the living brain of the rational sciences" by mathematician Paul Painlevé. Philosopher Karl Popper regarded Poincaré as the greatest philosopher of science of all time, with Poincaré also originating the conventionalist view in science. Poincaré was a public intellectual in his time, and personally, he believed in political equality for all, while wary of the influence of anti-intellectual positions that the Catholic Church held at the time. He served as the president of the French Academy of Sciences (1906), the president of Société astronomique de France (1901–1903), and twice the president of Société mathématique de France (1886, 1900).
Life
Poincaré was born on 29 April 1854 in Cité Ducale neighborhood, Nancy, Meurthe-et-Moselle, into an influential French family. His father Léon Poincaré (1828–1892) was a professor of medicine at the University of Nancy. His younger sister Aline married the spiritual philosopher Émile Boutroux. Another notable member of Henri's family was his cousin, Raymond Poincaré, a fellow member of the Académie française, who was President of France from 1913 to 1920, and three-time Prime Minister of France between 1913 and 1929.
Education
During his childhood he was seriously ill for a time with diphtheria and received special instruction from his mother, Eugénie Launois (1830–1897).
In 1862, Henri entered the Lycée in Nancy (now renamed the Lycée Henri-Poincaré in his honour, along with Henri Poincaré University, also in Nancy). He spent eleven years at the Lycée and during this time he proved to be one of the top students in every topic he studied. He excelled in written composition. His mathematics teacher described him as a "monster of mathematics" and he won first prizes in the concours général, a competition between the top pupils from all the Lycées across France. His poorest subjects were music and physical education, where he was described as "average at best". Poor eyesight and a tendency towards absentmindedness may explain these difficulties. He graduated from the Lycée in 1871 with a baccalauréat in both letters and sciences.
During the Franco-Prussian War of 1870, he served alongside his father in the Ambulance Corps.
Poincaré entered the École Polytechnique as the top qualifier in 1873 and graduated in 1875. There he studied mathematics as a student of Charles Hermite, continuing to excel and publishing his first paper (Démonstration nouvelle des propriétés de l'indicatrice d'une surface) in 1874. From November 1875 to June 1878 he studied at the École des Mines, while continuing the study of mathematics in addition to the mining engineering syllabus, and received the degree of ordinary mining engineer in March 1879.
As a graduate of the École des Mines, he joined the Corps des Mines as an inspector for the Vesoul region in northeast France. He was on the scene of a mining disaster at Magny in August 1879 in which 18 miners died. He carried out the official investigation into the accident.
At the same time, Poincaré was preparing for his Doctorate in Science in mathematics under the supervision of Charles Hermite. His doctoral thesis was in the field of differential equations. It was named Sur les propriétés des fonctions définies par les équations aux différences partielles. Poincaré devised a new way of studying the properties of these equations. He not only faced the question of determining the integral of such equations, but also was the first person to study their general geometric properties. He realised that they could be used to model the behaviour of multiple bodies in free motion within the Solar System. He graduated from the University of Paris in 1879.
First scientific achievements
After receiving his degree, Poincaré began teaching as junior lecturer in mathematics at the University of Caen in Normandy (in December 1879). At the same time he published his first major article concerning the treatment of a class of automorphic functions.
There, in Caen, he met his future wife, Louise Poulain d'Andecy (1857–1934), granddaughter of Isidore Geoffroy Saint-Hilaire and great-granddaughter of Étienne Geoffroy Saint-Hilaire and on 20 April 1881, they married. Together they had four children: Jeanne (born 1887), Yvonne (born 1889), Henriette (born 1891), and Léon (born 1893).
Poincaré immediately established himself among the greatest mathematicians of Europe, attracting the attention of many prominent mathematicians. In 1881 Poincaré was invited to take a teaching position at the Faculty of Sciences of the University of Paris; he accepted the invitation. During the years 1883 to 1897, he taught mathematical analysis in the École Polytechnique.
In 1881–1882, Poincaré created a new branch of mathematics: qualitative theory of differential equations. He showed how it is possible to derive the most important information about the behavior of a family of solutions without having to solve the equation (since this may not always be possible). He successfully used this approach to problems in celestial mechanics and mathematical physics.
Career
He never fully abandoned his career in the mining administration for mathematics. He worked at the Ministry of Public Services as an engineer in charge of northern railway development from 1881 to 1885. He eventually became chief engineer of the Corps des Mines in 1893 and inspector general in 1910.
Beginning in 1881 and for the rest of his career, he taught at the University of Paris (the Sorbonne). He was initially appointed as the maître de conférences d'analyse (associate professor of analysis). Eventually, he held the chairs of Physical and Experimental Mechanics, Mathematical Physics and Theory of Probability, and Celestial Mechanics and Astronomy.
In 1887, at the young age of 32, Poincaré was elected to the French Academy of Sciences. He became its president in 1906, and was elected to the Académie française on 5 March 1908.
In 1887, he won Oscar II, King of Sweden's mathematical competition for a resolution of the three-body problem concerning the free motion of multiple orbiting bodies. (See three-body problem section below.)
In 1893, Poincaré joined the French Bureau des Longitudes, which engaged him in the synchronisation of time around the world. In 1897 Poincaré backed an unsuccessful proposal for the decimalisation of circular measure, and hence time and longitude. It was this post which led him to consider the question of establishing international time zones and the synchronisation of time between bodies in relative motion. (See work on relativity section below.)
In 1904, he intervened in the trials of Alfred Dreyfus, attacking the spurious scientific claims regarding evidence brought against Dreyfus.
Poincaré was the President of the Société Astronomique de France (SAF), the French astronomical society, from 1901 to 1903.
Students
Poincaré had two notable doctoral students at the University of Paris, Louis Bachelier (1900) and Dimitrie Pompeiu (1905).
Death
In 1912, Poincaré underwent surgery for a prostate problem and subsequently died from an embolism on 17 July 1912, in Paris. He was 58 years of age. He is buried in the Poincaré family vault in the Cemetery of Montparnasse, Paris, in section 16 close to the gate Rue Émile-Richard.
A former French Minister of Education, Claude Allègre, proposed in 2004 that Poincaré be reburied in the Panthéon in Paris, which is reserved for French citizens of the highest honour.
Work
Summary
Poincaré made many contributions to different fields of pure and applied mathematics and physics, such as celestial mechanics, fluid mechanics, optics, electricity, telegraphy, capillarity, elasticity, thermodynamics, potential theory, quantum mechanics, the theory of relativity and physical cosmology.
Among the specific topics he contributed to are the following:
algebraic topology (a field that Poincaré virtually invented)
the theory of analytic functions of several complex variables
the theory of abelian functions
algebraic geometry
the Poincaré conjecture, proven in 2003 by Grigori Perelman.
Poincaré recurrence theorem
hyperbolic geometry
number theory
the three-body problem
the theory of diophantine equations
electromagnetism
special relativity
the fundamental group
In the field of differential equations Poincaré has given many results that are critical for the qualitative theory of differential equations, for example the Poincaré sphere and the Poincaré map.
Poincaré on "everybody's belief" in the Normal Law of Errors (see normal distribution for an account of that "law")
He also published an influential paper providing a novel mathematical argument in support of quantum mechanics.
Three-body problem
The problem of finding the general solution to the motion of more than two orbiting bodies in the Solar System had eluded mathematicians since Newton's time. This was known originally as the three-body problem and later the n-body problem, where n is any number of more than two orbiting bodies. The n-body solution was considered very important and challenging at the close of the 19th century. Indeed, in 1887, in honour of his 60th birthday, Oscar II, King of Sweden, advised by Gösta Mittag-Leffler, established a prize for anyone who could find the solution to the problem. The announcement was quite specific:
Given a system of arbitrarily many mass points that attract each other according to Newton's law, under the assumption that no two points ever collide, try to find a representation of the coordinates of each point as a series in a variable that is some known function of time and for all of whose values the series converges uniformly.
In case the problem could not be solved, any other important contribution to classical mechanics would then be considered to be prizeworthy. The prize was finally awarded to Poincaré, even though he did not solve the original problem. One of the judges, the distinguished Karl Weierstrass, said, "This work cannot indeed be considered as furnishing the complete solution of the question proposed, but that it is nevertheless of such importance that its publication will inaugurate a new era in the history of celestial mechanics." (The first version of his contribution even contained a serious error; for details see the article by Diacu and the book by Barrow-Green). The version finally printed contained many important ideas which led to the theory of chaos. The problem as stated originally was finally solved by Karl F. Sundman for n = 3 in 1912 and was generalised to the case of n > 3 bodies by Qiudong Wang in the 1990s. The series solutions have very slow convergence. It would take millions of terms to determine the motion of the particles for even very short intervals of time, so they are unusable in numerical work.
Work on relativity
Local time
Poincaré's work at the Bureau des Longitudes on establishing international time zones led him to consider how clocks at rest on the Earth, which would be moving at different speeds relative to absolute space (or the "luminiferous aether"), could be synchronised. At the same time Dutch theorist Hendrik Lorentz was developing Maxwell's theory into a theory of the motion of charged particles ("electrons" or "ions"), and their interaction with radiation. In 1895 Lorentz had introduced an auxiliary quantity (without physical interpretation) called "local time", t′ = t − vx/c2,
and introduced the hypothesis of length contraction to explain the failure of optical and electrical experiments to detect motion relative to the aether (see Michelson–Morley experiment). Poincaré was a constant interpreter (and sometimes friendly critic) of Lorentz's theory. Poincaré as a philosopher was interested in the "deeper meaning". Thus he interpreted Lorentz's theory and in so doing he came up with many insights that are now associated with special relativity. In The Measure of Time (1898), Poincaré said, "A little reflection is sufficient to understand that all these affirmations have by themselves no meaning. They can have one only as the result of a convention." He also argued that scientists have to set the constancy of the speed of light as a postulate to give physical theories the simplest form.
Based on these assumptions he discussed in 1900 Lorentz's "wonderful invention" of local time and remarked that it arose when moving clocks are synchronised by exchanging light signals assumed to travel with the same speed in both directions in a moving frame.
Principle of relativity and Lorentz transformations
In 1881 Poincaré described hyperbolic geometry in terms of the hyperboloid model, formulating transformations leaving invariant the Lorentz interval x2 + y2 − z2 = −1, which makes them mathematically equivalent to the Lorentz transformations in 2+1 dimensions. In addition, Poincaré's other models of hyperbolic geometry (Poincaré disk model, Poincaré half-plane model) as well as the Beltrami–Klein model can be related to the relativistic velocity space (see Gyrovector space).
In 1892 Poincaré developed a mathematical theory of light including polarization. His vision of the action of polarizers and retarders, acting on a sphere representing polarized states, is called the Poincaré sphere. It was shown that the Poincaré sphere possesses an underlying Lorentzian symmetry, by which it can be used as a geometrical representation of Lorentz transformations and velocity additions.
He discussed the "principle of relative motion" in two papers in 1900
and named it the principle of relativity in 1904, according to which no physical experiment can discriminate between a state of uniform motion and a state of rest.
In 1905 Poincaré wrote to Lorentz about Lorentz's paper of 1904, which Poincaré described as a "paper of supreme importance". In this letter he pointed out an error Lorentz had made when he had applied his transformation to one of Maxwell's equations, that for charge-occupied space, and also questioned the time dilation factor given by Lorentz.
In a second letter to Lorentz, Poincaré gave his own reason why Lorentz's time dilation factor was indeed correct after all—it was necessary to make the Lorentz transformation form a group—and he gave what is now known as the relativistic velocity-addition law.
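In modern notation (a standard statement of the law, added here for concreteness rather than quoted from the correspondence), the composition of two collinear velocities u and v is

w = \frac{u + v}{1 + uv/c^{2}},

which never exceeds c as long as u and v are each below c.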
Poincaré later delivered a paper at the meeting of the Academy of Sciences in Paris on 5 June 1905 in which these issues were addressed. In the published version of that he wrote:
The essential point, established by Lorentz, is that the equations of the electromagnetic field are not altered by a certain transformation (which I will call by the name of Lorentz) of the form:
x′ = kℓ(x + εt), y′ = ℓy, z′ = ℓz, t′ = kℓ(t + εx), with k = 1/√(1 − ε2),
and showed that the arbitrary function ℓ(ε) must be unity for all ε (Lorentz had set ℓ = 1 by a different argument) to make the transformations form a group. In an enlarged version of the paper that appeared in 1906 Poincaré pointed out that the combination x2 + y2 + z2 − t2 is invariant. He noted that a Lorentz transformation is merely a rotation in four-dimensional space about the origin by introducing ct√−1 as a fourth imaginary coordinate, and he used an early form of four-vectors. Poincaré expressed a lack of interest in a four-dimensional reformulation of his new mechanics in 1907, because in his opinion the translation of physics into the language of four-dimensional geometry would entail too much effort for limited profit. So it was Hermann Minkowski who worked out the consequences of this notion in 1907.
Mass–energy relation
Like others before, Poincaré (1900) discovered a relation between mass and electromagnetic energy. While studying the conflict between the action/reaction principle and Lorentz ether theory, he tried to determine whether the center of gravity still moves with a uniform velocity when electromagnetic fields are included. He noticed that the action/reaction principle does not hold for matter alone, but that the electromagnetic field has its own momentum. Poincaré concluded that the electromagnetic field energy of an electromagnetic wave behaves like a fictitious fluid (fluide fictif) with a mass density of E/c2. If the center of mass frame is defined by both the mass of matter and the mass of the fictitious fluid, and if the fictitious fluid is indestructible (it is neither created nor destroyed), then the motion of the center of mass frame remains uniform. But electromagnetic energy can be converted into other forms of energy. So Poincaré assumed that there exists a non-electric energy fluid at each point of space, into which electromagnetic energy can be transformed and which also carries a mass proportional to the energy. In this way, the motion of the center of mass remains uniform. Poincaré said that one should not be too surprised by these assumptions, since they are only mathematical fictions.
However, Poincaré's resolution led to a paradox when changing frames: if a Hertzian oscillator radiates in a certain direction, it will suffer a recoil from the inertia of the fictitious fluid. Poincaré performed a Lorentz boost (to order v/c) to the frame of the moving source. He noted that energy conservation holds in both frames, but that the law of conservation of momentum is violated. This would allow perpetual motion, a notion which he abhorred. The laws of nature would have to be different in the frames of reference, and the relativity principle would not hold. Therefore, he argued that also in this case there has to be another compensating mechanism in the ether.
Poincaré himself came back to this topic in his St. Louis lecture (1904). There he rejected the possibility that energy carries mass and criticized his own solution for compensating the above-mentioned problems, invoking the Hertz assumption of total aether entrainment. That assumption had been falsified by the Fizeau experiment, which shows that light is only partially "carried along" with a moving substance. Finally, in 1908 he revisited the problem and ended by abandoning the principle of reaction altogether in favor of a solution based in the inertia of the aether itself.
He also discussed two other unexplained effects: (1) non-conservation of mass implied by Lorentz's variable mass, Abraham's theory of variable mass and Kaufmann's experiments on the mass of fast-moving electrons, and (2) the non-conservation of energy in the radium experiments of Marie Curie.
It was Albert Einstein's concept of mass–energy equivalence (1905) that a body losing energy as radiation or heat was losing mass of amount m = E/c2 that resolved Poincaré's paradox, without using any compensating mechanism within the ether. The Hertzian oscillator loses mass in the emission process, and momentum is conserved in any frame. However, concerning Poincaré's solution of the Center of Gravity problem, Einstein noted that Poincaré's formulation and his own from 1906 were mathematically equivalent.
Gravitational waves
In 1905 Poincaré first proposed gravitational waves (ondes gravifiques) emanating from a body and propagating at the speed of light, as required by the Lorentz transformations.
Poincaré and Einstein
Einstein's first paper on relativity was published three months after Poincaré's short paper, but before Poincaré's longer version. Einstein relied on the principle of relativity to derive the Lorentz transformations and used a similar clock synchronisation procedure (Einstein synchronisation) to the one that Poincaré (1900) had described, but Einstein's paper was remarkable in that it contained no references at all. Poincaré never acknowledged Einstein's work on special relativity. However, Einstein expressed sympathy with Poincaré's outlook obliquely in a letter to Hans Vaihinger on 3 May 1919, when Einstein considered Vaihinger's general outlook to be close to his own and Poincaré's to be close to Vaihinger's. In public, Einstein acknowledged Poincaré posthumously in the text of a lecture in 1921 titled "Geometrie und Erfahrung (Geometry and Experience)" in connection with non-Euclidean geometry, but not in connection with special relativity. A few years before his death, Einstein commented on Poincaré as being one of the pioneers of relativity, saying "Lorentz had already recognized that the transformation named after him is essential for the analysis of Maxwell's equations, and Poincaré deepened this insight still further ....".
Assessments on Poincaré and relativity
Poincaré's work in the development of special relativity is well recognised, though most historians stress that despite many similarities with Einstein's work, the two had very different research agendas and interpretations of the work. Poincaré developed a similar physical interpretation of local time and noticed the connection to signal velocity, but contrary to Einstein he continued to use the ether-concept in his papers and argued that clocks at rest in the ether show the "true" time, and moving clocks show the local time. So Poincaré tried to keep the relativity principle in accordance with classical concepts, while Einstein developed a mathematically equivalent kinematics based on the new physical concepts of the relativity of space and time.
While this is the view of most historians, a minority go much further, such as E. T. Whittaker, who held that Poincaré and Lorentz were the true discoverers of relativity.
Algebra and number theory
Poincaré introduced group theory to physics, and was the first to study the group of Lorentz transformations. He also made major contributions to the theory of discrete groups and their representations.
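For reference (given here in standard modern form rather than Poincaré's 1905 notation), the transformations whose group structure Poincaré analysed are, for a boost with velocity v along the x-axis:

\[
x' = \gamma\,(x - vt), \qquad t' = \gamma\!\left(t - \frac{vx}{c^{2}}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},
\]

and composing two such boosts with velocities \(v_1\) and \(v_2\) yields a boost with velocity \((v_1 + v_2)/(1 + v_1 v_2 / c^{2})\), the closure property that makes these transformations a group.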
Topology
The subject had been clearly delineated by Felix Klein in his "Erlangen Program" (1872): the study of the invariants of arbitrary continuous transformations, a kind of geometry. The term "topology" was introduced, as suggested by Johann Benedict Listing, in place of the previously used "analysis situs". Some important concepts were introduced by Enrico Betti and Bernhard Riemann. But the foundation of this science, for a space of any dimension, was created by Poincaré. His first article on this topic appeared in 1894.
His research in geometry led to the abstract topological definition of homotopy and homology. He also first introduced the basic concepts and invariants of combinatorial topology, such as Betti numbers and the fundamental group. Poincaré proved a formula relating the number of edges, vertices and faces of an n-dimensional polyhedron (the Euler–Poincaré theorem) and gave the first precise formulation of the intuitive notion of dimension.
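In modern notation (a restatement, not Poincaré's original wording), the Euler–Poincaré theorem says that the alternating sum of the numbers \(n_k\) of k-dimensional cells equals the alternating sum of the Betti numbers:

\[
\sum_{k=0}^{n} (-1)^{k}\, n_{k} \;=\; \sum_{k=0}^{n} (-1)^{k}\, b_{k} \;=\; \chi.
\]

For the surface of a cube, for example, \(V - E + F = 8 - 12 + 6 = 2\), the Euler characteristic of the sphere.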
Astronomy and celestial mechanics
Poincaré published two now-classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). In them, he successfully applied the results of his research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotics, and so on). These works introduced the small-parameter method, fixed points, integral invariants, variational equations, and the convergence of asymptotic expansions. Generalizing a theory of Bruns (1887), Poincaré showed that the three-body problem is not integrable. In other words, the general solution of the three-body problem cannot be expressed in terms of algebraic and transcendental functions of the bodies' coordinates and velocities. His work in this area was the first major achievement in celestial mechanics since Isaac Newton.
These monographs include an idea of Poincaré, which later became the basis for mathematical "chaos theory" (see, in particular, the Poincaré recurrence theorem) and the general theory of dynamical systems.
Poincaré authored important works in astronomy on the equilibrium figures of a gravitating rotating fluid. He introduced the important concept of bifurcation points and proved the existence of non-ellipsoidal equilibrium figures, including ring-shaped and pear-shaped figures, and established their stability. For this discovery, Poincaré received the Gold Medal of the Royal Astronomical Society (1900).
Differential equations and mathematical physics
After defending his doctoral thesis on the study of singular points of systems of differential equations, Poincaré wrote a series of memoirs under the title "On curves defined by differential equations" (1881–1882). In these articles, he built a new branch of mathematics, called the "qualitative theory of differential equations". Poincaré showed that even if a differential equation cannot be solved in terms of known functions, a wealth of information about the properties and behavior of the solutions can still be found from the very form of the equation. In particular, Poincaré investigated the nature of the trajectories of integral curves in the plane, gave a classification of singular points (saddle, focus, center, node), introduced the concept of a limit cycle and the loop index, and showed that the number of limit cycles is always finite, except for some special cases. Poincaré also developed a general theory of integral invariants and solutions of the variational equations. For finite-difference equations, he created a new direction: the asymptotic analysis of the solutions. He applied all these achievements to the study of practical problems of mathematical physics and celestial mechanics, and the methods used formed the basis of his topological works.
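A minimal sketch of the classification step, assuming the standard linearization criterion (the code and names below are illustrative, not Poincaré's): the type of a non-degenerate singular point of a planar system can be read off the eigenvalues of the Jacobian there.

import numpy as np

def classify_singular_point(J, tol=1e-9):
    """Classify the fixed point of the linearized planar system x' = J x."""
    eigs = np.linalg.eigvals(np.asarray(J, dtype=float))
    re, im = eigs.real, eigs.imag
    if np.all(np.abs(im) > tol):            # complex-conjugate pair
        if np.all(np.abs(re) < tol):
            return "center"                 # purely imaginary eigenvalues
        return "focus (spiral)"             # complex with nonzero real part
    if re[0] * re[1] < 0:
        return "saddle"                     # real eigenvalues of opposite sign
    return "node"                           # real eigenvalues of the same sign

# Example: the linearized pendulum x'' = -x has a center at the origin.
print(classify_singular_point([[0.0, 1.0], [-1.0, 0.0]]))   # -> center
print(classify_singular_point([[1.0, 0.0], [0.0, -2.0]]))   # -> saddle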
Character
Poincaré's work habits have been compared to a bee flying from flower to flower. Poincaré was interested in the way his mind worked; he studied his habits and gave a talk about his observations in 1908 at the Institute of General Psychology in Paris. He linked his way of thinking to how he made several discoveries.
The mathematician Darboux claimed he was un intuitif (an intuitive), arguing that this is demonstrated by the fact that he worked so often by visual representation. Jacques Hadamard wrote that Poincaré's research demonstrated marvelous clarity, and Poincaré himself wrote that he believed logic was not a way to invent but a way to structure ideas, and that logic limits ideas.
Toulouse's characterisation
Poincaré's mental organisation was interesting not only to Poincaré himself but also to Édouard Toulouse, a psychologist of the Psychology Laboratory of the School of Higher Studies in Paris. Toulouse wrote a book entitled Henri Poincaré (1910). In it, he discussed Poincaré's regular schedule:
He worked during the same times each day, in short periods. He undertook mathematical research for four hours a day, between 10 a.m. and noon and then again from 5 p.m. to 7 p.m. He would read articles in journals later in the evening.
His normal work habit was to solve a problem completely in his head, then commit the completed problem to paper.
He was ambidextrous and nearsighted.
His ability to visualise what he heard proved particularly useful when he attended lectures, since his eyesight was so poor that he could not see properly what the lecturer wrote on the blackboard.
These abilities were offset to some extent by his shortcomings:
He was physically clumsy and artistically inept.
He was always in a rush and disliked going back for changes or corrections.
He never spent a long time on a problem, since he believed that the subconscious would continue working on it while he consciously worked on another.
In addition, Toulouse stated that most mathematicians worked from principles already established while Poincaré started from basic principles each time (O'Connor et al., 2002).
His method of thinking is well summarised as:
Publications
Legacy
Poincaré is credited with laying the foundations of special relativity, with some arguing that he should be credited with its creation. He is said to have "dominated the mathematics and the theoretical physics of his time", and that "he was without a doubt the most admired mathematician while he was alive, and he remains today one of the world's most emblematic scientific figures." Poincaré is regarded as a "universal specialist": he refined celestial mechanics, advanced nearly every branch of the mathematics of his time (creating new subjects along the way), is a father of special relativity, participated in all the great physics debates of his era, was a major actor in the great epistemological debates of his day in relation to the philosophy of science, and, as an engineer, investigated the 1879 Magny shaft firedamp explosion. Due to the breadth of his research, Poincaré was the only member of his time to be elected to every section of the French Academy of Sciences, those being geometry, mechanics, physics, astronomy and navigation.
Physicist Henri Becquerel nominated Poincaré for a Nobel Prize in 1904, as Becquerel took note that "Poincaré's mathematical and philosophical genius surveyed all of physics and was among those that contributed most to human progress by giving researchers a solid basis for their journeys into the unknown." After his death, he was praised by many intellectual figures of his time, as the author Marie Bonaparte wrote to his widowed wife Louise that "He was – as you know better than anyone – not only the greatest thinker, the most powerful genius of our time – but also a deep and incomparable heart; and having been close to him remains the precious memory of a whole life."
The mathematician E. T. Bell dubbed Poincaré "The Last Universalist", and noted his prowess in many fields, stating that:
When philosopher and mathematician Bertrand Russell was asked who was the greatest man that France had produced in modern times, he instantly replied "Poincaré". Bell noted that if Poincaré had been as strong in practical science as he was in theoretical, he might have "made a fourth with the incomparable three, Archimedes, Newton, and Gauss."
Bell further noted his powerful memory, one that was even superior to Leonhard Euler's, stating that:
Bell also noted Poincaré's terrible eyesight: he remembered formulas and theorems almost entirely by ear, and, "unable to see the board distinctly when he became a student of advanced mathematics, he sat back and listened, following and remembering perfectly without taking notes - an easy feat for him, but one incomprehensible to most mathematicians."
Honours
Awards
Oscar II, King of Sweden's mathematical competition (1887)
Foreign member of the Royal Netherlands Academy of Arts and Sciences (1897)
American Philosophical Society (1899)
Gold Medal of the Royal Astronomical Society of London (1900)
Commander of the Legion of Honour (1903)
Bolyai Prize (1905)
Matteucci Medal (1905)
French Academy of Sciences (1906)
Académie française (1909)
Bruce Medal (1911)
Named after him
Institut Henri Poincaré (mathematics and theoretical physics centre)
Maison Poincaré, a mathematics museum in the 5th arrondissement of Paris
Poincaré Prize (Mathematical Physics International Prize)
Annales Henri Poincaré (Scientific Journal)
Poincaré Seminar (nicknamed "Bourbaphy")
The crater Poincaré on the Moon
Asteroid 2021 Poincaré
List of things named after Henri Poincaré
Henri Poincaré did not receive the Nobel Prize in Physics, but he had influential advocates such as Henri Becquerel and committee member Gösta Mittag-Leffler. The nomination archive reveals that Poincaré received a total of 51 nominations between 1904 and 1912, the year of his death. Of the 58 nominations for the 1910 Nobel Prize, 34 named Poincaré. Nominators included the Nobel laureates Hendrik Lorentz and Pieter Zeeman (both of 1902), Marie Curie (of 1903), Albert Michelson (of 1907), Gabriel Lippmann (of 1908) and Guglielmo Marconi (of 1909).
The fact that renowned theoretical physicists like Poincaré, Boltzmann or Gibbs were not awarded the Nobel Prize is seen as evidence that the Nobel committee had more regard for experimentation than theory. In Poincaré's case, several of those who nominated him pointed out that the greatest problem was to name a specific discovery, invention, or technique.
Philosophy
Poincaré had philosophical views opposite to those of Bertrand Russell and Gottlob Frege, who believed that mathematics was a branch of logic. Poincaré strongly disagreed, claiming that intuition was the life of mathematics. Poincaré gives an interesting point of view in his 1902 book Science and Hypothesis:
Poincaré believed that arithmetic is synthetic. He argued that Peano's axioms cannot be proven non-circularly with the principle of induction (Murzi, 1998), therefore concluding that arithmetic is a priori synthetic and not analytic. Poincaré then went on to say that mathematics cannot be deduced from logic since it is not analytic. His views were similar to those of Immanuel Kant (Kolak, 2001, Folina 1992). He strongly opposed Cantorian set theory, objecting to its use of impredicative definitions.
However, Poincaré did not share Kantian views in all branches of philosophy and mathematics. For example, in geometry, Poincaré believed that the structure of non-Euclidean space can be known analytically. Poincaré held that convention plays an important role in physics. His view (and some later, more extreme versions of it) came to be known as "conventionalism". Poincaré believed that Newton's first law was not empirical but a conventional framework assumption for mechanics (Gargani, 2012). He also believed that the geometry of physical space is conventional. He considered examples in which the same observations can be described either as a non-Euclidean space measured by rigid rulers, or as a Euclidean space in which the rulers are expanded or shrunk by a variable heat distribution. However, Poincaré thought that we were so accustomed to Euclidean geometry that we would prefer to change the physical laws to save Euclidean geometry rather than shift to a non-Euclidean physical geometry.
Free will
Poincaré's famous lectures before the Société de Psychologie in Paris (published as Science and Hypothesis, The Value of Science, and Science and Method) were cited by Jacques Hadamard as the source for the idea that creativity and invention consist of two mental stages, first random combinations of possible solutions to a problem, followed by a critical evaluation.
Although he most often spoke of a deterministic universe, Poincaré said that the subconscious generation of new possibilities involves chance.
It is certain that the combinations which present themselves to the mind in a kind of sudden illumination after a somewhat prolonged period of unconscious work are generally useful and fruitful combinations... all the combinations are formed as a result of the automatic action of the subliminal ego, but those only which are interesting find their way into the field of consciousness... A few only are harmonious, and consequently at once useful and beautiful, and they will be capable of affecting the geometrician's special sensibility I have been speaking of; which, once aroused, will direct our attention upon them, and will thus give them the opportunity of becoming conscious... In the subliminal ego, on the contrary, there reigns what I would call liberty, if one could give this name to the mere absence of discipline and to disorder born of chance.
Poincaré's two stages—random combinations followed by selection—became the basis for Daniel Dennett's two-stage model of free will.
Bibliography
Poincaré's writings in English translation
Popular writings on the philosophy of science:
; reprinted in 1921; this book includes the English translations of Science and Hypothesis (1902), The Value of Science (1905), Science and Method (1908).
1905. "", The Walter Scott Publishing Co.
1906. "", Athenæum
1913. "The New Mechanics", The Monist, Vol. XXIII.
1913. "The Relativity of Space", The Monist, Vol. XXIII.
1913.
1956. Chance. In James R. Newman, ed., The World of Mathematics (4 Vols).
1958. The Value of Science, New York: Dover.
On algebraic topology:
1895. . The first systematic study of topology.
On celestial mechanics:
1890.
1892–99. New Methods of Celestial Mechanics, 3 vols. English trans., 1967. .
1905. "The Capture Hypothesis of J. J. See", The Monist, Vol. XV.
1905–10. Lessons of Celestial Mechanics.
On the philosophy of mathematics:
Ewald, William B., ed., 1996. From Kant to Hilbert: A Source Book in the Foundations of Mathematics, 2 vols. Oxford Univ. Press. Contains the following works by Poincaré:
1894, "On the Nature of Mathematical Reasoning", 972–981.
1898, "On the Foundations of Geometry", 982–1011.
1900, "Intuition and Logic in Mathematics", 1012–1020.
1905–06, "Mathematics and Logic, I–III", 1021–1070.
1910, "On Transfinite Numbers", 1071–1074.
1905. "The Principles of Mathematical Physics", The Monist, Vol. XV.
1910. "The Future of Mathematics", The Monist, Vol. XX.
1910. "Mathematical Creation", The Monist, Vol. XX.
Other:
1904. Maxwell's Theory and Wireless Telegraphy, New York, McGraw Publishing Company.
1905. "The New Logics", The Monist, Vol. XV.
1905. "The Latest Efforts of the Logisticians", The Monist, Vol. XV.
Exhaustive bibliography of English translations:
1892–2017. .
See also
Concepts
Poincaré–Andronov–Hopf bifurcation
Poincaré complex – an abstraction of the singular chain complex of a closed, orientable manifold
Poincaré duality
Poincaré disk model
Poincaré expansion
Poincaré gauge
Poincaré group
Poincaré half-plane model
Poincaré homology sphere
Poincaré inequality
Poincaré lemma
Poincaré map
Poincaré residue
Poincaré series (modular form)
Poincaré space
Poincaré metric
Poincaré plot
Poincaré polynomial
Poincaré series
Poincaré sphere
Poincaré–Einstein synchronisation
Poincaré–Lelong equation
Poincaré–Lindstedt method
Poincaré–Lindstedt perturbation theory
Poincaré–Steklov operator
Euler–Poincaré characteristic
Neumann–Poincaré operator
Reflecting Function
Theorems
Here is a list of theorems proved by Poincaré:
Poincaré's recurrence theorem: certain systems will, after a sufficiently long but finite time, return to a state very close to the initial state (a toy illustration follows this list).
Poincaré–Bendixson theorem: a statement about the long-term behaviour of orbits of continuous dynamical systems on the plane, cylinder, or two-sphere.
Poincaré–Hopf theorem: a generalization of the hairy-ball theorem, which states that there is no smooth vector field on a sphere having no sources or sinks.
Poincaré–Lefschetz duality theorem: a version of Poincaré duality in geometric topology, applying to a manifold with boundary
Poincaré separation theorem: gives the upper and lower bounds of eigenvalues of a real symmetric matrix B'AB that can be considered as the orthogonal projection of a larger real symmetric matrix A onto a linear subspace spanned by the columns of B.
Poincaré–Birkhoff theorem: every area-preserving, orientation-preserving homeomorphism of an annulus that rotates the two boundaries in opposite directions has at least two fixed points.
Poincaré–Birkhoff–Witt theorem: an explicit description of the universal enveloping algebra of a Lie algebra.
Poincaré–Bjerknes circulation theorem: a theorem about the conservation of circulation in a rotating frame.
Poincaré conjecture (now a theorem): Every simply connected, closed 3-manifold is homeomorphic to the 3-sphere.
Poincaré–Miranda theorem: a generalization of the intermediate value theorem to n dimensions.
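A toy illustration of the recurrence theorem, referenced in the list above (an illustrative sketch, not Poincaré's proof): rotation of the circle by an irrational angle preserves arc length, a measure, so by recurrence every orbit returns arbitrarily close to its starting point.

import math

alpha = math.sqrt(2) - 1          # irrational rotation angle (fraction of a turn)
x0 = 0.123                        # arbitrary starting point on the circle [0, 1)
x, eps = x0, 1e-3

for n in range(1, 200_000):
    x = (x + alpha) % 1.0         # one step of the measure-preserving map
    dist = min(abs(x - x0), 1.0 - abs(x - x0))  # distance on the circle
    if dist < eps:
        print(f"returned within {eps} of the start after {n} steps")
        break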
Other
French epistemology
History of special relativity
List of things named after Henri Poincaré
Institut Henri Poincaré, Paris
Brouwer fixed-point theorem
Relativity priority dispute
Epistemic structural realism
References
Footnotes
Sources
Bell, Eric Temple, 1986. Men of Mathematics (reissue edition). Touchstone Books. .
Belliver, André, 1956. Henri Poincaré ou la vocation souveraine. Paris: Gallimard.
Bernstein, Peter L, 1996. "Against the Gods: A Remarkable Story of Risk". (pp. 199–200). John Wiley & Sons.
Boyer, Carl B., 1968. A History of Mathematics: Henri Poincaré, John Wiley & Sons.
Grattan-Guinness, Ivor, 2000. The Search for Mathematical Roots 1870–1940. Princeton Uni. Press.
. Internet version published in Journal of the ACMS 2004.
Folina, Janet, 1992. Poincaré and the Philosophy of Mathematics. Macmillan, New York.
Gray, Jeremy, 1986. Linear differential equations and group theory from Riemann to Poincaré, Birkhauser
Gray, Jeremy, 2013. Henri Poincaré: A scientific biography. Princeton University Press
Kolak, Daniel, 2001. Lovers of Wisdom, 2nd ed. Wadsworth.
Gargani, Julien, 2012. Poincaré, le hasard et l'étude des systèmes complexes, L'Harmattan.
Murzi, 1998. "Henri Poincaré".
O'Connor, John J., and Robertson, Edmund F., 2002, "Jules Henri Poincaré". University of St. Andrews, Scotland.
Peterson, Ivars, 1995. Newton's Clock: Chaos in the Solar System (reissue edition). W H Freeman & Co. .
Sageret, Jules, 1911. Henri Poincaré. Paris: Mercure de France.
Toulouse, E., 1910. Henri Poincaré – (Source biography in French) at University of Michigan Historic Math Collection.
Verhulst, Ferdinand, 2012 Henri Poincaré. Impatient Genius. N.Y.: Springer.
Henri Poincaré, l'œuvre scientifique, l'œuvre philosophique, by Vito Volterra, Jacques Hadamard, Paul Langevin and Pierre Boutroux, Felix Alcan, 1914.
Henri Poincaré, l'œuvre mathématique, by Vito Volterra.
Henri Poincaré, le problème des trois corps, by Jacques Hadamard.
Henri Poincaré, le physicien, by Paul Langevin.
Henri Poincaré, l'œuvre philosophique, by Pierre Boutroux.
Further reading
Secondary sources to work on relativity
Non-mainstream sources
External links
Henri Poincaré's Bibliography
Internet Encyclopedia of Philosophy: "Henri Poincaré " – by Mauro Murzi.
Internet Encyclopedia of Philosophy: "Poincaré’s Philosophy of Mathematics" – by Janet Folina.
Henri Poincaré on Information Philosopher
A timeline of Poincaré's life University of Nantes (in French).
Henri Poincaré Papers University of Nantes (in French).
Bruce Medal page
Collins, Graham P., "Henri Poincaré, His Conjecture, Copacabana and Higher Dimensions," Scientific American, 9 June 2004.
BBC in Our Time, "Discussion of the Poincaré conjecture," 2 November 2006, hosted by Melvyn Bragg.
Poincare Contemplates Copernicus at MathPages
High Anxieties – The Mathematics of Chaos (2008) BBC documentary directed by David Malone looking at the influence of Poincaré's discoveries on 20th Century mathematics.
1854 births
1912 deaths
19th-century French essayists
19th-century French male writers
19th-century French mathematicians
19th-century French non-fiction writers
19th-century French philosophers
20th-century French essayists
20th-century French male writers
20th-century French mathematicians
20th-century French philosophers
Algebraic geometers
Burials at Montparnasse Cemetery
Chaos theorists
Continental philosophers
Corps des mines
Corresponding members of the Saint Petersburg Academy of Sciences
Deaths from embolism
Determinists
Dynamical systems theorists
École Polytechnique alumni
French fluid dynamicists
Foreign associates of the National Academy of Sciences
Foreign members of the Royal Society
French male essayists
French male non-fiction writers
French male writers
French military personnel of the Franco-Prussian War
French mining engineers
French geometers
Hyperbolic geometers
French lecturers
French mathematical analysts
Members of the Académie Française
Members of the Royal Netherlands Academy of Arts and Sciences
Mines Paris - PSL alumni
Officers of the French Academy of Sciences
Scientists from Nancy, France
Philosophers of logic
Philosophers of mathematics
Philosophers of psychology
French philosophers of science
French philosophy academics
Philosophy writers
Recipients of the Bruce Medal
Recipients of the Gold Medal of the Royal Astronomical Society
French relativity theorists
Thermodynamicists
Topologists
Academic staff of the University of Paris
Recipients of the Matteucci Medal | Henri Poincaré | ["Physics", "Chemistry", "Mathematics"] | 9,885 | ["Topologists", "Topology", "Thermodynamics", "Thermodynamicists", "Philosophers of mathematics", "Dynamical systems theorists", "Dynamical systems"] |
48,778 | https://en.wikipedia.org/wiki/Action%20theory%20%28philosophy%29 | Action theory or theory of action is an area in philosophy concerned with theories about the processes causing willful human bodily movements of a more or less complex kind. This area of thought involves epistemology, ethics, metaphysics, jurisprudence, and philosophy of mind, and has attracted the strong interest of philosophers ever since Aristotle's Nicomachean Ethics (Third Book). With the advent of psychology and later neuroscience, many theories of action are now subject to empirical testing.
Philosophical action theory, or the philosophy of action, should not be confused with sociological theories of social action, such as the action theory established by Talcott Parsons. Nor should it be confused with activity theory.
Overview
Basic action theory typically describes action as intentional behavior caused by an agent in a particular situation. The agent's desires and beliefs (e.g. a person wanting a glass of water and believing that the clear liquid in the cup in front of them is water) lead to bodily behavior (e.g. reaching across for the glass). In the simple theory (see Donald Davidson), the desire and belief jointly cause the action. Michael Bratman has raised problems for such a view and argued that we should take the concept of intention as basic and not analyzable into beliefs and desires.
Aristotle held that a thorough explanation must give an account of both the efficient cause, the agent, and the final cause, the intention.
In some theories a desire plus a belief about the means of satisfying that desire are always what is behind an action. Agents aim, in acting, to maximize the satisfaction of their desires. Such a theory of prospective rationality underlies much of economics and other social sciences within the more sophisticated framework of rational choice. However, many theories of action argue that rationality extends far beyond calculating the best means to achieve one's ends. For instance, a belief that I ought to do X, in some theories, can directly cause me to do X without my having to want to do X (i.e. have a desire to do X). Rationality, in such theories, also involves responding correctly to the reasons an agent perceives, not just acting on wants.
While action theorists generally employ the language of causality in their theories of what the nature of action is, the issue of what causal determination comes to has been central to controversies about the nature of free will.
Conceptual discussions also revolve around a precise definition of action in philosophy. Scholars may disagree on which bodily movements fall under this category, e.g. whether thinking should be analysed as action, and how complex actions involving several steps to be taken and diverse intended consequences are to be summarised or decomposed.
See also
Praxeology
Free will
Cybernetics
References
Further reading
Maurice Blondel (1893). L'Action - Essai d'une critique de la vie et d'une science de la pratique
G. E. M. Anscombe (1957). Intention, Basil Blackwell, Oxford.
James Sommerville (1968). Total Commitment, Blondel's L'Action, Corpus Books.
Michel Crozier, & Erhard Friedberg (1980). Actors and Systems Chicago: [University of Chicago Press].
Donald Davidson (1980). Essays on Actions and Events, Clarendon Press, Oxford.
Jonathan Dancy & Constantine Sandis (eds.) (2015). Philosophy of Action: An Anthology, Wiley-Blackwell, Oxford.
Jennifer Hornsby (1980). Actions, Routledge, London.
Lilian O'Brien (2014). Philosophy of Action, Palgrave, Basingstoke.
Christine Korsgaard (2008). The Constitution of Agency, Oxford University Press, Oxford.
Alfred R. Mele (ed.) (1997). The Philosophy of Action, Oxford University Press, Oxford.
John Hyman & Helen Steward (eds.) (2004). Agency and Action, Cambridge University Press, Cambridge.
Anton Leist (ed.) (2007). Action in Context, Walter de Gruyter, Berlin.
Timothy O'Connor & Constantine Sandis (eds.) (2010). A Companion to the Philosophy of Action, Wiley-Blackwell, Oxford.
Sarah Paul (2020). The Philosophy of Action: A Contemporary Introduction, London, Routledge.
Peter Šajda et al. (eds.) (2012). Affectivity, Agency and Intersubjectivity, L'Harmattan, Paris.
Constantine Sandis (ed.) (2009). New Essays on the Explanation of Action, Palgrave Macmillan, Basingstoke.
Constantine Sandis (ed.) (2019). Philosophy of Action from Suarez to Anscombe, London, Routledge.
Michael Thompson (2012). Life and Action: Elementary Structures of Practice and Practical Thought, Boston, MA, Harvard University Press.
Lawrence H. Davis (1979). Theory of Action, Prentice-Hall, (Foundations of Philosophy Series), Englewood Cliffs, NJ.
External links
The Meaning of Action by Various Authors at PhilosophersAnswer.com
Free will
Subfields of metaphysics
Metaphysics of mind
Neuroscience
Ontology
Theory of mind
Epistemological theories | Action theory (philosophy) | ["Biology"] | 1,059 | ["Neuroscience"] |
48,781 | https://en.wikipedia.org/wiki/Philosophi%C3%A6%20Naturalis%20Principia%20Mathematica | Philosophiæ Naturalis Principia Mathematica (English: The Mathematical Principles of Natural Philosophy), often referred to as simply the Principia, is a book by Isaac Newton that expounds Newton's laws of motion and his law of universal gravitation. The Principia is written in Latin and comprises three volumes; it was authorized, imprimatur, by Samuel Pepys, then-President of the Royal Society, on 5 July 1686 and first published in 1687.
The Principia is considered one of the most important works in the history of science. The French mathematical physicist Alexis Clairaut assessed it in 1747: "The famous book of Mathematical Principles of Natural Philosophy marked the epoch of a great revolution in physics. The method followed by its illustrious author Sir Newton ... spread the light of mathematics on a science which up to then had remained in the darkness of conjectures and hypotheses." The French scientist Joseph-Louis Lagrange described it as "the greatest production of the human mind". The French polymath Pierre-Simon Laplace stated that "The Principia is pre-eminent above any other production of human genius". Newton's work has also been called the "greatest scientific work in history" and "the supreme expression in human thought of the mind's ability to hold the universe fixed as an object of contemplation".
A more recent assessment has been that while acceptance of Newton's laws was not immediate, by the end of the century after publication in 1687, "no one could deny that [out of the Principia] a science had emerged that, at least in certain respects, so far exceeded anything that had ever gone before that it stood alone as the ultimate exemplar of science generally".
The Principia forms a mathematical foundation for the theory of classical mechanics. Among other achievements, it explains Johannes Kepler's laws of planetary motion, which Kepler had first obtained empirically. In formulating his physical laws, Newton developed and used mathematical methods now included in the field of calculus, expressing them in the form of geometric propositions about "vanishingly small" shapes. In a revised conclusion to the Principia, Newton emphasized the empirical nature of the work with the expression Hypotheses non fingo ("I frame/feign no hypotheses").
After annotating and correcting his personal copy of the first edition, Newton published two further editions: one in 1713, with the errors of the 1687 edition corrected, and an improved version in 1726.
Contents
Expressed aim and topics covered
The Preface of the work states:
Newton situates himself within the contemporary scientific movement, which had "omit[ted] substantial forms and the occult qualities" and instead endeavoured to explain the world by empirical investigation and the outlining of empirical regularities.
The Principia deals primarily with massive bodies in motion, initially under a variety of conditions and hypothetical laws of force in both non-resisting and resisting media, thus offering criteria to decide, by observations, which laws of force are operating in phenomena that may be observed. It attempts to cover hypothetical or possible motions both of celestial bodies and of terrestrial projectiles. It explores difficult problems of motions perturbed by multiple attractive forces. Its third and final book deals with the interpretation of observations about the movements of planets and their satellites.
The book:
shows how astronomical observations verify the inverse square law of gravitation (to an accuracy that was high by the standards of Newton's time);
offers estimates of relative masses for the known giant planets and for the Earth and the Sun;
defines the motion of the Sun relative to the Solar System barycenter;
shows how the theory of gravity can account for irregularities in the motion of the Moon;
identifies the oblateness of the shape of the Earth;
accounts approximately for marine tides including phenomena of spring and neap tides by the perturbing (and varying) gravitational attractions of the Sun and Moon on the Earth's waters;
explains the precession of the equinoxes as an effect of the gravitational attraction of the Moon on the Earth's equatorial bulge; and
gives theoretical basis for numerous phenomena about comets and their elongated, near-parabolic orbits.
The opening sections of the Principia contain, in revised and extended form, nearly all of the content of Newton's 1684 tract De motu corporum in gyrum.
The Principia begins with "Definitions" and "Axioms or Laws of Motion", and continues in three books:
Book 1, De motu corporum
Book 1, subtitled De motu corporum (On the motion of bodies) concerns motion in the absence of any resisting medium. It opens with a collection of mathematical lemmas on "the method of first and last ratios", a geometrical form of infinitesimal calculus.
The second section establishes the relationship between centripetal forces and the law of areas now known as Kepler's second law (Propositions 1–3), relates circular velocity and radius of path-curvature to radial force (Proposition 4), and establishes relationships between centripetal forces varying as the inverse square of the distance to the center and orbits of conic-section form (Propositions 5–10).
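In modern terms (a standard restatement rather than Newton's geometric limit argument), any central force conserves the angular momentum L, so the radius vector sweeps out area at a constant rate, which is Kepler's second law:

\[
\frac{dA}{dt} = \tfrac{1}{2}\, r^{2}\dot{\theta} = \frac{L}{2m} = \text{constant}.
\]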
Propositions 11–31 establish properties of motion in paths of eccentric conic-section form including ellipses, and their relationship with inverse-square central forces directed to a focus and include Newton's theorem about ovals (lemma 28).
Propositions 43–45 demonstrate that in an eccentric orbit under centripetal force where the apse may move, a steady, non-moving orientation of the line of apses is an indicator of an inverse-square law of force.
Book 1 contains some proofs with little connection to real-world dynamics. But there are also sections with far-reaching application to the solar system and universe:
Propositions 57–69 deal with the "motion of bodies drawn to one another by centripetal forces". This section is of primary interest for its application to the Solar System, and includes Proposition 66 along with its 22 corollaries: here Newton took the first steps in the definition and study of the problem of the movements of three massive bodies subject to their mutually perturbing gravitational attractions, a problem which later gained name and fame (among other reasons, for its great difficulty) as the three-body problem.
Propositions 70–84 deal with the attractive forces of spherical bodies. The section contains Newton's proof that a massive spherically symmetrical body attracts other bodies outside itself as if all its mass were concentrated at its centre. This fundamental result, called the Shell theorem, enables the inverse square law of gravitation to be applied to the real solar system to a very close degree of approximation.
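Stated in modern notation (a restatement of the result, not Newton's geometric proof), the shell theorem gives, for a spherically symmetric body of radius R and total mass M:

\[
g(r) = \frac{GM}{r^{2}} \quad (r \ge R), \qquad g(r) = \frac{G\,M(r)}{r^{2}} \quad (r < R),
\]

where M(r) is the mass enclosed within radius r; in particular, a uniform shell exerts no net force on a point in its interior.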
Book 2, part 2 of De motu corporum
Part of the contents originally planned for the first book was divided out into a second book, which largely concerns motion through resisting mediums. Just as Newton examined consequences of different conceivable laws of attraction in Book 1, here he examines different conceivable laws of resistance; thus Section 1 discusses resistance in direct proportion to velocity, and Section 2 goes on to examine the implications of resistance in proportion to the square of velocity. Book 2 also discusses (in Section 5) hydrostatics and the properties of compressible fluids; Newton also derives Boyle's law. The effects of air resistance on pendulums are studied in Section 6, along with Newton's account of experiments that he carried out, to try to find out some characteristics of air resistance in reality by observing the motions of pendulums under different conditions. Newton compares the resistance offered by a medium against motions of globes with different properties (material, weight, size). In Section 8, he derives rules to determine the speed of waves in fluids and relates them to the density and condensation (Proposition 48; this would become very important in acoustics). He assumes that these rules apply equally to light and sound and estimates that the speed of sound is around 1088 feet per second and can increase depending on the amount of water in air.
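Newton's derivation corresponds, in modern notation, to the isothermal formula below (a modern gloss, not Newton's own reasoning); the adiabatic correction supplied later by Laplace raises the prediction to the observed value:

\[
c_{\text{Newton}} = \sqrt{\frac{p}{\rho}} \approx 290\ \text{m/s}, \qquad
c_{\text{Laplace}} = \sqrt{\frac{\gamma\, p}{\rho}} \approx 343\ \text{m/s} \qquad (\gamma \approx 1.4\ \text{for air}).
\]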
Less of Book 2 has stood the test of time than of Books 1 and 3, and it has been said that Book 2 was largely written to refute a theory of Descartes which had some wide acceptance before Newton's work (and for some time after). According to Descartes's theory of vortices, planetary motions were produced by the whirling of fluid vortices that filled interplanetary space and carried the planets along with them. Newton concluded Book 2 by commenting that the hypothesis of vortices was completely at odds with the astronomical phenomena, and served not so much to explain as to confuse them.
Book 3, De mundi systemate
Book 3, subtitled De mundi systemate (On the system of the world), is an exposition of many consequences of universal gravitation, especially its consequences for astronomy. It builds upon the propositions of the previous books and applies them with further specificity than in Book 1 to the motions observed in the Solar System. Here (introduced by Proposition 22, and continuing in Propositions 25–35) are developed several of the features and irregularities of the orbital motion of the Moon, especially the variation. Newton lists the astronomical observations on which he relies, and establishes in a stepwise manner that the inverse square law of mutual gravitation applies to Solar System bodies, starting with the satellites of Jupiter and going on by stages to show that the law is of universal application. He also gives starting at Lemma 4 and Proposition 40 the theory of the motions of comets, for which much data came from John Flamsteed and Edmond Halley, and accounts for the tides, attempting quantitative estimates of the contributions of the Sun and Moon to the tidal motions; and offers the first theory of the precession of the equinoxes. Book 3 also considers the harmonic oscillator in three dimensions, and motion in arbitrary force laws.
In Book 3 Newton also made clear his heliocentric view of the Solar System, modified in a somewhat modern way, since already in the mid-1680s he recognised the "deviation of the Sun" from the centre of gravity of the Solar System. For Newton, "the common centre of gravity of the Earth, the Sun and all the Planets is to be esteem'd the Centre of the World", and that this centre "either is at rest, or moves uniformly forward in a right line". Newton rejected the second alternative after adopting the position that "the centre of the system of the world is immoveable", which "is acknowledg'd by all, while some contend that the Earth, others, that the Sun is fix'd in that centre". Newton estimated the mass ratios Sun:Jupiter and Sun:Saturn, and pointed out that these put the centre of the Sun usually a little way off the common center of gravity, but only a little, the distance at most "would scarcely amount to one diameter of the Sun".
Commentary on the Principia
The sequence of definitions used in setting up dynamics in the Principia is recognisable in many textbooks today. Newton first set out the definition of mass
This was then used to define the "quantity of motion" (today called momentum) and the principle of inertia, in which mass replaces the previous Cartesian notion of intrinsic force. This then set the stage for the introduction of forces through the change in momentum of a body. Curiously, to today's readers the exposition looks dimensionally incorrect, since Newton does not introduce the dimension of time in rates of change of quantities.
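In modern symbols, with the time dependence made explicit (bookkeeping that Newton left implicit), these definitions read:

\[
\mathbf{p} = m\mathbf{v}, \qquad \mathbf{F} = \frac{d\mathbf{p}}{dt} = m\mathbf{a} \quad (\text{for constant } m).
\]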
He defined space and time "not as they are well known to all". Instead, he defined "true" time and space as "absolute" and explained:
To some modern readers it can appear that some dynamical quantities recognised today were used in the Principia but not named. The mathematical aspects of the first two books were so clearly consistent that they were easily accepted; for example, Locke asked Huygens whether he could trust the mathematical proofs and was assured about their correctness.
However, the concept of an attractive force acting at a distance received a cooler response. In his notes, Newton wrote that the inverse square law arose naturally due to the structure of matter. However, he retracted this sentence in the published version, where he stated that the motion of planets is consistent with an inverse square law, but refused to speculate on the origin of the law. Huygens and Leibniz noted that the law was incompatible with the notion of the aether. From a Cartesian point of view, therefore, this was a faulty theory. Newton's defence has been adopted since by many famous physicists—he pointed out that the mathematical form of the theory had to be correct since it explained the data, and he refused to speculate further on the basic nature of gravity. The sheer number of phenomena that could be organised by the theory was so impressive that younger "philosophers" soon adopted the methods and language of the Principia.
Rules of Reason
Perhaps to reduce the risk of public misunderstanding, Newton included at the beginning of Book 3 (in the second (1713) and third (1726) editions) a section titled "Rules of Reasoning in Philosophy". In the four rules, as they came finally to stand in the 1726 edition, Newton effectively offers a methodology for handling unknown phenomena in nature and reaching towards explanations for them. The four Rules of the 1726 edition run as follows (omitting some explanatory comments that follow each):
We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.
Therefore to the same natural effects we must, as far as possible, assign the same causes.
The qualities of bodies, which admit neither intensification nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.
In experimental philosophy we are to look upon propositions inferred by general induction from phenomena as accurately or very nearly true, not withstanding any contrary hypothesis that may be imagined, till such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions.
This section of Rules for philosophy is followed by a listing of "Phenomena", in which are listed a number of mainly astronomical observations, that Newton used as the basis for inferences later on, as if adopting a consensus set of facts from the astronomers of his time.
Both the "Rules" and the "Phenomena" evolved from one edition of the Principia to the next. Rule 4 made its appearance in the third (1726) edition; Rules 1–3 were present as "Rules" in the second (1713) edition, and predecessors of them were also present in the first edition of 1687, but there they had a different heading: they were not given as "Rules", but rather in the first (1687) edition the predecessors of the three later "Rules", and of most of the later "Phenomena", were all lumped together under a single heading "Hypotheses" (in which the third item was the predecessor of a heavy revision that gave the later Rule 3).
From this textual evolution, it appears that Newton wanted by the later headings "Rules" and "Phenomena" to clarify for his readers his view of the roles to be played by these various statements.
In the third (1726) edition of the Principia, Newton explains each rule in an alternative way and/or gives an example to back up what the rule is claiming. The first rule is explained as a philosophers' principle of economy. The second rule states that if one cause is assigned to a natural effect, then the same cause so far as possible must be assigned to natural effects of the same kind: for example, respiration in humans and in animals, fires in the home and in the Sun, or the reflection of light whether it occurs terrestrially or from the planets. An extensive explanation is given of the third rule, concerning the qualities of bodies, and Newton discusses here the generalisation of observational results, with a caution against making up fancies contrary to experiments, and use of the rules to illustrate the observation of gravity and space.
General Scholium
The General Scholium is a concluding essay added to the second edition, 1713 (and amended in the third edition, 1726). It is not to be confused with the General Scholium at the end of Book 2, Section 6, which discusses his pendulum experiments and resistance due to air, water, and other fluids.
Here Newton used the expression hypotheses non fingo, "I formulate no hypotheses", in response to criticisms of the first edition of the Principia. ("Fingo" is sometimes nowadays translated "feign" rather than the traditional "frame," although "feign" does not properly translate "fingo"). Newton's gravitational attraction, an invisible force able to act over vast distances, had led to criticism that he had introduced "occult agencies" into science. Newton firmly rejected such criticisms and wrote that it was enough that the phenomena implied gravitational attraction, as they did; but the phenomena did not so far indicate the cause of this gravity, and it was both unnecessary and improper to frame hypotheses of things not implied by the phenomena: such hypotheses "have no place in experimental philosophy", in contrast to the proper way in which "particular propositions are inferr'd from the phenomena and afterwards rendered general by induction".
Newton also underlined his criticism of the vortex theory of planetary motions, of Descartes, pointing to its incompatibility with the highly eccentric orbits of comets, which carry them "through all parts of the heavens indifferently".
Newton also gave a theological argument. From the system of the world, he inferred the existence of a god, along lines similar to what is sometimes called the argument from intelligent or purposive design. It has been suggested that Newton gave "an oblique argument for a unitarian conception of God and an implicit attack on the doctrine of the Trinity". The General Scholium does not address or attempt to refute the church doctrine; it simply does not mention Jesus, the Holy Ghost, or the hypothesis of the Trinity.
Publishing the book
Halley and Newton's initial stimulus
In January 1684, Edmond Halley, Christopher Wren and Robert Hooke had a conversation in which Hooke claimed to not only have derived the inverse-square law but also all the laws of planetary motion. Wren was unconvinced, Hooke did not produce the claimed derivation although the others gave him time to do it, and Halley, who could derive the inverse-square law for the restricted circular case (by substituting Kepler's relation into Huygens' formula for the centrifugal force) but failed to derive the relation generally, resolved to ask Newton.
Halley's visits to Newton in 1684 thus resulted from Halley's debates about planetary motion with Wren and Hooke, and they seem to have provided Newton with the incentive and spur to develop and write what became Philosophiae Naturalis Principia Mathematica. Halley was at that time a Fellow and Council member of the Royal Society in London (positions that in 1686 he resigned to become the Society's paid Clerk). Halley's visit to Newton in Cambridge in 1684 probably occurred in August. When Halley asked Newton's opinion on the problem of planetary motions discussed earlier that year between Halley, Hooke and Wren, Newton surprised Halley by saying that he had already made the derivations some time ago; but that he could not find the papers. (Matching accounts of this meeting come from Halley and Abraham De Moivre to whom Newton confided.) Halley then had to wait for Newton to "find" the results, and in November 1684 Newton sent Halley an amplified version of whatever previous work Newton had done on the subject. This took the form of a 9-page manuscript, De motu corporum in gyrum (Of the motion of bodies in an orbit): the title is shown on some surviving copies, although the (lost) original may have been without a title.
Newton's tract De motu corporum in gyrum, which he sent to Halley in late 1684, derived what is now known as the three laws of Kepler, assuming an inverse square law of force, and generalised the result to conic sections. It also extended the methodology by adding the solution of a problem on the motion of a body through a resisting medium. The contents of De motu so excited Halley by their mathematical and physical originality and far-reaching implications for astronomical theory, that he immediately went to visit Newton again, in November 1684, to ask Newton to let the Royal Society have more of such work. The results of their meetings clearly helped to stimulate Newton with the enthusiasm needed to take his investigations of mathematical problems much further in this area of physical science, and he did so in a period of highly concentrated work that lasted at least until mid-1686.
Newton's single-minded attention to his work generally, and to his project during this time, is shown by later reminiscences from his secretary and copyist of the period, Humphrey Newton. His account tells of Isaac Newton's absorption in his studies, how he sometimes forgot his food, or his sleep, or the state of his clothes, and how when he took a walk in his garden he would sometimes rush back to his room with some new thought, not even waiting to sit before beginning to write it down. Other evidence also shows Newton's absorption in the Principia: Newton for years kept up a regular programme of chemical or alchemical experiments, and he normally kept dated notes of them, but for a period from May 1684 to April 1686, Newton's chemical notebooks have no entries at all. So, it seems that Newton abandoned pursuits to which he was formally dedicated and did very little else for well over a year and a half, but concentrated on developing and writing what became his great work.
The first of the three constituent books was sent to Halley for the printer in spring 1686, and the other two books somewhat later. The complete work, published by Halley at his own financial risk, appeared in July 1687. Newton had also communicated De motu to Flamsteed, and during the period of composition, he exchanged a few letters with Flamsteed about observational data on the planets, eventually acknowledging Flamsteed's contributions in the published version of the Principia of 1687.
Preliminary version
The process of writing that first edition of the Principia went through several stages and drafts: some parts of the preliminary materials still survive, while others are lost except for fragments and cross-references in other documents.
Surviving materials show that Newton (up to some time in 1685) conceived his book as a two-volume work. The first volume was to be titled De motu corporum, Liber primus, with contents that later appeared in extended form as Book 1 of the Principia.
A fair-copy draft of Newton's planned second volume De motu corporum, Liber Secundus survives, its completion dated to about the summer of 1685. It covers the application of the results of Liber primus to the Earth, the Moon, the tides, the Solar System, and the universe; in this respect, it has much the same purpose as the final Book 3 of the Principia, but it is written much less formally and is more easily read.
It is not known just why Newton changed his mind so radically about the final form of what had been a readable narrative in De motu corporum, Liber Secundus of 1685, but he largely started afresh in a new, tighter, and less accessible mathematical style, eventually to produce Book 3 of the Principia as we know it. Newton frankly admitted that this change of style was deliberate when he wrote that he had (first) composed this book "in a popular method, that it might be read by many", but to "prevent the disputes" by readers who could not "lay aside the[ir] prejudices", he had "reduced" it "into the form of propositions (in the mathematical way) which should be read by those only, who had first made themselves masters of the principles established in the preceding books". The final Book 3 also contained in addition some further important quantitative results arrived at by Newton in the meantime, especially about the theory of the motions of comets, and some of the perturbations of the motions of the Moon.
The result was numbered Book 3 of the Principia rather than Book 2 because in the meantime, drafts of Liber primus had expanded and Newton had divided it into two books. The new and final Book 2 was concerned largely with the motions of bodies through resisting mediums.
But the Liber Secundus of 1685 can still be read today. Even after it was superseded by Book 3 of the Principia, it survived complete, in more than one manuscript. After Newton's death in 1727, the relatively accessible character of its writing encouraged the publication of an English translation in 1728 (by persons still unknown, not authorised by Newton's heirs). It appeared under the English title A Treatise of the System of the World. This had some amendments relative to Newton's manuscript of 1685, mostly to remove cross-references that used obsolete numbering to cite the propositions of an early draft of Book 1 of the Principia. Newton's heirs shortly afterwards published the Latin version in their possession, also in 1728, under the (new) title De Mundi Systemate, amended to update cross-references, citations and diagrams to those of the later editions of the Principia, making it look superficially as if it had been written by Newton after the Principia, rather than before. The System of the World was sufficiently popular to stimulate two revisions (with similar changes as in the Latin printing), a second edition (1731), and a "corrected" reprint of the second edition (1740).
Halley's role as publisher
The text of the first of the three books of the Principia was presented to the Royal Society at the close of April 1686. Hooke made some priority claims (but failed to substantiate them), causing some delay. When Hooke's claim was made known to Newton, who hated disputes, Newton threatened to withdraw and suppress Book 3 altogether, but Halley, showing considerable diplomatic skills, tactfully persuaded Newton to withdraw his threat and let it go forward to publication. Samuel Pepys, as president, gave his imprimatur on 30 June 1686, licensing the book for publication. The Society had just spent its book budget on De Historia piscium, and the cost of publication was borne by Edmund Halley (who was also then acting as publisher of the Philosophical Transactions of the Royal Society): the book appeared in summer 1687. After Halley had personally financed the publication of Principia, he was informed that the society could no longer afford to provide him the promised annual salary of £50. Instead, Halley was paid with leftover copies of De Historia piscium.
Historical context
Beginnings of the Scientific Revolution
Nicolaus Copernicus had moved the Earth away from the center of the universe with the heliocentric theory for which he presented evidence in his book De revolutionibus orbium coelestium (On the revolutions of the heavenly spheres) published in 1543. Johannes Kepler wrote the book Astronomia nova (A new astronomy) in 1609, setting out the evidence that planets move in elliptical orbits with the Sun at one focus, and that planets do not move with constant speed along this orbit. Rather, their speed varies so that the line joining the centres of the sun and a planet sweeps out equal areas in equal times. To these two laws he added a third a decade later, in his 1619 book Harmonices Mundi (Harmonies of the world). This law sets out a proportionality between the third power of the characteristic distance of a planet from the Sun and the square of the length of its year.
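In modern notation (the symbols are a conventional editorial gloss, not Kepler's own formulation), the third law can be written as a proportionality between the orbital period T and the characteristic distance a:

\[
T^{2} \propto a^{3}, \qquad \text{i.e.} \qquad \frac{a^{3}}{T^{2}} = \text{the same constant for every planet orbiting the Sun.}
\]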
The foundation of modern dynamics was set out in Galileo's book Dialogo sopra i due massimi sistemi del mondo (Dialogue on the two main world systems) where the notion of inertia was implicit and used. In addition, Galileo's experiments with inclined planes had yielded precise mathematical relations between elapsed time and acceleration, velocity or distance for uniform and uniformly accelerated motion of bodies.
Descartes' book of 1644 Principia philosophiae (Principles of philosophy) stated that bodies can act on each other only through contact: a principle that induced people, among them Descartes himself, to hypothesize a universal medium as the carrier of interactions such as light and gravity—the aether. Newton was criticized for apparently introducing forces that acted at a distance without any medium. Descartes' notion was not vindicated until the development of particle theory, which describes the strong, weak, and electromagnetic fundamental interactions through mediating gauge bosons, and gravity, hypothetically, through gravitons.
Newton's role
Newton had studied these books, or, in some cases, secondary sources based on them, and taken notes entitled Quaestiones quaedam philosophicae (Certain philosophical questions) during his days as an undergraduate. During this period (1664–1666) he created the basis of calculus and performed the first experiments in the optics of colour. At this time, his proof that white light was a combination of primary colours (found via prismatics) replaced the prevailing theory of colours; it received an overwhelmingly favourable response but also occasioned bitter disputes with Robert Hooke and others, which forced him to sharpen his ideas to the point where he had already composed sections of his later book Opticks by the 1670s in response. Work on calculus is shown in various papers and letters, including two to Leibniz. He became a fellow of the Royal Society and the second Lucasian Professor of Mathematics (succeeding Isaac Barrow) at Trinity College, Cambridge.
Newton's early work on motion
In the 1660s Newton studied the motion of colliding bodies and deduced that the centre of mass of two colliding bodies remains in uniform motion. Surviving manuscripts of the 1660s also show Newton's interest in planetary motion and that by 1669 he had shown, for a circular case of planetary motion, that the force he called "endeavour to recede" (now called centrifugal force) had an inverse-square relation with distance from the center. After his 1679–1680 correspondence with Hooke, described below, Newton adopted the language of inward or centripetal force. According to Newton scholar J. Bruce Brackenridge, although much has been made of the change in language and difference of point of view, as between centrifugal or centripetal forces, the actual computations and proofs remained the same either way; they also involved the combination of tangential and radial displacements, which Newton was already making in the 1660s. The difference between the centrifugal and centripetal points of view, though a significant change of perspective, did not alter the analysis. Newton also clearly expressed the concept of linear inertia in the 1660s: for this Newton was indebted to Descartes' work published in 1644.
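A minimal sketch of the circular-orbit computation alluded to above, in modern notation (the symbols, and the use of Kepler's third law as the empirical input, are editorial assumptions rather than Newton's own presentation): the outward "endeavour" scales as v²/r, and eliminating the speed via the orbital period yields the inverse square.

\[
F \propto \frac{v^{2}}{r}, \qquad v = \frac{2\pi r}{T}, \qquad T^{2} \propto r^{3}
\quad\Longrightarrow\quad
F \propto \frac{r}{T^{2}} \propto \frac{r}{r^{3}} = \frac{1}{r^{2}}.
\]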
Controversy with Hooke
Hooke published his ideas about gravitation in the 1660s and again in 1674. He argued for an attracting principle of gravitation in Micrographia of 1665, in a 1666 Royal Society lecture On gravity, and again in 1674, when he published his ideas about the System of the World in somewhat developed form, as an addition to An Attempt to Prove the Motion of the Earth from Observations. Hooke clearly postulated mutual attractions between the Sun and planets, in a way that increased with nearness to the attracting body, along with a principle of linear inertia. Hooke's statements up to 1674 made no mention, however, that an inverse square law applies or might apply to these attractions. Hooke's gravitation was also not yet universal, though it approached universality more closely than previous hypotheses. Hooke also did not provide accompanying evidence or mathematical demonstration. On these two aspects, Hooke stated in 1674: "Now what these several degrees [of gravitational attraction] are I have not yet experimentally verified" (indicating that he did not yet know what law the gravitation might follow); and as to his whole proposal: "This I only hint at present", "having my self many other things in hand which I would first compleat, and therefore cannot so well attend it" (i.e., "prosecuting this Inquiry").
In November 1679, Hooke began an exchange of letters with Newton, of which the full text is now published. Hooke told Newton that Hooke had been appointed to manage the Royal Society's correspondence, and wished to hear from members about their researches, or their views about the researches of others; and as if to whet Newton's interest, he asked what Newton thought about various matters, giving a whole list, mentioning "compounding the celestial motions of the planets of a direct motion by the tangent and an attractive motion towards the central body", and "my hypothesis of the lawes or causes of springinesse", and then a new hypothesis from Paris about planetary motions (which Hooke described at length), and then efforts to carry out or improve national surveys, the difference of latitude between London and Cambridge, and other items. Newton's reply offered "a fansy of my own" about a terrestrial experiment (not a proposal about celestial motions) which might detect the Earth's motion, by the use of a body first suspended in air and then dropped to let it fall. The main point was to indicate how Newton thought the falling body could experimentally reveal the Earth's motion by its direction of deviation from the vertical, but he went on hypothetically to consider how its motion could continue if the solid Earth had not been in the way (on a spiral path to the centre). Hooke disagreed with Newton's idea of how the body would continue to move. A short further correspondence developed, and towards the end of it Hooke, writing on 6 January 1680 to Newton, communicated his "supposition ... that the Attraction always is in a duplicate proportion to the Distance from the Center Reciprocall, and Consequently that the Velocity will be in a subduplicate proportion to the Attraction and Consequently as Kepler Supposes Reciprocall to the Distance." (Hooke's inference about the velocity was actually incorrect.)
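Hooke's 17th-century vocabulary can be glossed in modern notation (an editorial restatement, not Hooke's own symbols): a "duplicate proportion to the Distance ... Reciprocall" means F ∝ 1/r², and a velocity "in a subduplicate proportion to the Attraction" means v ∝ √F, from which Hooke's chain gives v ∝ 1/r. For a circular orbit under an inverse-square force, however, the speed actually scales as 1/√r; a relation of the form v ∝ 1/r holds only for the component of velocity perpendicular to the radius, by Kepler's area law.

\[
F \propto \frac{1}{r^{2}}, \qquad v \propto \sqrt{F} \;\Rightarrow\; v \propto \frac{1}{r} \quad \text{(Hooke's incorrect inference)},
\qquad \text{whereas for a circular orbit} \quad v \propto \frac{1}{\sqrt{r}}.
\]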
In 1686, when the first book of Newton's Principia was presented to the Royal Society, Hooke claimed that Newton had obtained from him the "notion" of "the rule of the decrease of Gravity, being reciprocally as the squares of the distances from the Center". At the same time (according to Edmond Halley's contemporary report) Hooke agreed that "the Demonstration of the Curves generated therby" was wholly Newton's.
A recent assessment about the early history of the inverse square law is that "by the late 1660s", the assumption of an "inverse proportion between gravity and the square of distance was rather common and had been advanced by a number of different people for different reasons". Newton himself had shown in the 1660s that for planetary motion under a circular assumption, force in the radial direction had an inverse-square relation with distance from the center. Newton, faced in May 1686 with Hooke's claim on the inverse square law, denied that Hooke was to be credited as author of the idea, giving reasons including the citation of prior work by others before Hooke. Newton also firmly claimed that even if it had happened that he had first heard of the inverse square proportion from Hooke, which it had not, he would still have some rights to it in view of his mathematical developments and demonstrations, which enabled observations to be relied on as evidence of its accuracy, while Hooke, without mathematical demonstrations and evidence in favour of the supposition, could only guess (according to Newton) that it was approximately valid "at great distances from the center".
The background described above shows there was basis for Newton to deny deriving the inverse square law from Hooke. On the other hand, Newton did accept and acknowledge, in all editions of the Principia, that Hooke (but not exclusively Hooke) had separately appreciated the inverse square law in the Solar System. Newton acknowledged Wren, Hooke and Halley in this connection in the Scholium to Proposition 4 in Book 1. Newton also acknowledged to Halley that his correspondence with Hooke in 1679–80 had reawakened his dormant interest in astronomical matters, but that did not mean, according to Newton, that Hooke had told Newton anything new or original: "yet am I not beholden to him for any light into that business but only for the diversion he gave me from my other studies to think on these things & for his dogmaticalness in writing as if he had found the motion in the Ellipsis, which inclined me to try it ...". Newton's reawakening interest in astronomy received further stimulus from the appearance of a comet in the winter of 1680/1681, on which he corresponded with John Flamsteed.
In 1759, decades after the deaths of both Newton and Hooke, Alexis Clairaut, mathematical astronomer eminent in his own right in the field of gravitational studies, made his assessment after reviewing what Hooke had published on gravitation. "One must not think that this idea ... of Hooke diminishes Newton's glory", Clairaut wrote; "The example of Hooke" serves "to show what a distance there is between a truth that is glimpsed and a truth that is demonstrated".
Location of early edition copies
It has been estimated that as many as 750 copies of the first edition were printed by the Royal Society, and "it is quite remarkable that so many copies of this small first edition are still in existence ... but it may be because the original Latin text was more revered than read". A survey published in 1953 by Henry Macomber located 189 surviving copies, with nearly 200 further copies located by the most recent survey, published in 2020, suggesting that the initial print run was larger than previously thought. However, more recent book-historical and bibliographical research has examined those prior claims and concludes that Macomber's earlier estimate of 500 copies is likely correct.
Cambridge University Library has Newton's own copy of the first edition, with handwritten notes for the second edition.
The Earl Gregg Swem Library at the College of William & Mary has a first edition copy of the Principia. Throughout are Latin annotations written by Thomas S. Savage. These handwritten notes are currently being researched at the college.
The Frederick E. Brasch Collection of Newton and Newtoniana in Stanford University also has a first edition of the Principia.
A first edition forms part of the Crawford Collection, housed at the Royal Observatory, Edinburgh.
The Uppsala University Library owns a first edition copy, which was stolen in the 1960s and returned to the library in 2009.
The Folger Shakespeare Library in Washington, D.C. owns a first edition, as well as a 1713 second edition.
The Huntington Library in San Marino, California owns Isaac Newton's personal copy, with annotations in Newton's own hand.
The Bodmer Library in Switzerland keeps a copy of the original edition that was owned by Leibniz. It contains handwritten notes by Leibniz, in particular concerning the controversy over who first formulated calculus (Newton argued that he had developed it earlier, although he published it later than Leibniz).
The Iron Library in Switzerland holds a first edition copy that was formerly in the library of the physicist Ernst Mach. The copy contains critical marginalia in Mach's hand.
The University of St Andrews Library holds both variants of the first edition, as well as copies of the 1713 and 1726 editions.
The Fisher Library in the University of Sydney has a first-edition copy, annotated by a mathematician of uncertain identity, with corresponding notes from Newton himself.
The Linda Hall Library holds the first edition, as well as a copy of the 1713 and 1726 editions.
The Teleki-Bolyai Library of Târgu-Mureș holds a 2-line imprint first edition.
One book is also located at Vasaskolan, Gävle, in Sweden.
Dalhousie University has a copy as part of the William I. Morse collection.
McGill University in Montreal has the copy once owned by Sir William Osler.
The University of Toronto has a copy in the Thomas Fisher Rare Book Collection.
University College London Special Collections has a copy previously owned by the lawyer and mathematician John T. Graves.
In 2016, a first edition sold for $3.7 million.
The second edition (1713) was printed in 750 copies, and the third edition (1726) in 1,250 copies.
A facsimile edition (based on the 3rd edition of 1726 but with variant readings from earlier editions and important annotations) was published in 1972 by Alexandre Koyré and I. Bernard Cohen.
Later editions
Second edition, 1713
Two later editions were published by Newton: Newton had been urged to make a new edition of the Principia since the early 1690s, partly because copies of the first edition had already become very rare and expensive within a few years after 1687. Newton referred to his plans for a second edition in correspondence with Flamsteed in November 1694. Newton also maintained annotated copies of the first edition specially bound up with interleaves on which he could note his revisions; two of these copies still survive, but he had not completed the revisions by 1708. Newton had almost severed connections with one would-be editor, Nicolas Fatio de Duillier, and another, David Gregory, seems not to have met with his approval and was also terminally ill, dying in 1708. Nevertheless, reasons were accumulating not to put off the new edition any longer. Richard Bentley, master of Trinity College, persuaded Newton to allow him to undertake a second edition, and in June 1708 Bentley wrote to Newton with a specimen print of the first sheet, at the same time expressing the (unfulfilled) hope that Newton had made progress towards finishing the revisions. It seems that Bentley then realised that the editorship was technically too difficult for him, and with Newton's consent he appointed Roger Cotes, Plumian professor of astronomy at Trinity, to undertake the editorship for him as a kind of deputy (but Bentley still made the publishing arrangements and had the financial responsibility and profit). The correspondence of 1709–1713 shows Cotes reporting to two masters, Bentley and Newton, and managing (and often correcting) a large and important set of revisions to which Newton sometimes could not give his full attention. Under the weight of Cotes' efforts, but impeded by priority disputes between Newton and Leibniz, and by troubles at the Mint, Cotes was able to announce publication to Newton on 30 June 1713. Bentley sent Newton only six presentation copies; Cotes was unpaid; Newton omitted any acknowledgement to Cotes.
Among those who gave Newton corrections for the second edition were Firmin Abauzit, Roger Cotes and David Gregory. However, Newton omitted acknowledgements to some because of the priority disputes; John Flamsteed, the Astronomer Royal, suffered this omission especially.
The Second Edition was the basis of the first edition to be printed abroad, which appeared in Amsterdam in 1714.
Third edition, 1726
After his serious illness in 1722 and after the appearance of a reprint of the second edition in Amsterdam in 1723, the 80-year-old Newton began to revise the Principia once again in the fall of 1723. The third edition was published on 25 March 1726, under the stewardship of Henry Pemberton, M.D., described by Newton as "a man of the greatest skill in these matters"; Pemberton later said that this recognition was worth more to him than the two hundred guinea award from Newton.
In 1739–1742, two French priests, Pères Thomas LeSeur and François Jacquier (of the Minim order, but sometimes erroneously identified as Jesuits), produced with the assistance of J.-L. Calandrini an extensively annotated version of the Principia in the 3rd edition of 1726. Sometimes this is referred to as the Jesuit edition: it was much used, and reprinted more than once in Scotland during the 19th century.
Émilie du Châtelet also made a translation of Newton's Principia into French. Unlike LeSeur and Jacquier's edition, hers was a complete translation of Newton's three books and their prefaces. She also included a Commentary section in which she fused the three books into a much clearer and easier-to-understand summary, and an analytical section in which she applied the new mathematics of calculus to Newton's most controversial theories; previously, geometry had been the standard mathematics used to analyse such theories. Du Châtelet's translation is the only complete one to have been done into French, and hers remains the standard French translation to this day.
Translations
Four full English translations of Newton's Principia have appeared, all based on Newton's 3rd edition of 1726. The first, from 1729, by Andrew Motte, was described by Newton scholar I. Bernard Cohen (in 1968) as "still of enormous value in conveying to us the sense of Newton's words in their own time, and it is generally faithful to the original: clear, and well written". The 1729 version was the basis for several republications, often incorporating revisions, among them a widely used modernised English version of 1934, which appeared under the editorial name of Florian Cajori (though completed and published only some years after his death). Cohen pointed out ways in which the 18th-century terminology and punctuation of the 1729 translation might be confusing to modern readers, but he also made severe criticisms of the 1934 modernised English version, and showed that the revisions had been made without regard to the original, also demonstrating gross errors "that provided the final impetus to our decision to produce a wholly new translation".
The second full English translation, into modern English, is the work that resulted from this decision by collaborating translators I. Bernard Cohen, Anne Whitman, and Julia Budenz; it was published in 1999 with a guide by way of introduction.
The third such translation is due to Ian Bruce, and appears, with many other translations of mathematical works of the 17th and 18th centuries, on his website.
The fourth complete English translation is due to Charles Leedham-Green, professor emeritus of mathematics at Queen Mary University of London, and was published in 2021 by Cambridge University Press. Prof. Leedham-Green was motivated to produce that translation, on which he worked for twenty years, in part because of his dissatisfaction with the work of Cohen, Whitman, and Budenz, whose translation of the Principia he found unnecessarily obscure. Leedham-Green's aim was to convey Newton's own reasoning and arguments in a way intelligible to a modern mathematical scientist. His translation is heavily annotated and his explanatory notes make use of the modern secondary literature on some of the more difficult technical aspects of Newton's work.
Dana Densmore and William H. Donahue published a translation of the work's central argument in 1996, along with expanded proofs and ample commentary. The book was developed as a textbook for classes at St. John's College, and the aim of the translation is to be faithful to the Latin text.
Varia
In 1977, the spacecraft Voyager 1 and Voyager 2 left Earth for interstellar space carrying a picture of a page from Newton's Principia Mathematica, as part of the Golden Record, a collection of messages from humanity to extraterrestrials.
In 2014, British astronaut Tim Peake named his upcoming mission to the International Space Station Principia after the book, in "honour of Britain's greatest scientist". Tim Peake's Principia launched on 15 December 2015 aboard Soyuz TMA-19M.
See also
Atomism
Elements of the Philosophy of Newton
Isaac Newton's occult studies
References
Further reading
Miller, Laura, Reading Popular Newtonianism: Print, the Principia, and the Dissemination of Newtonian Science (University of Virginia Press, 2018) online review
Alexandre Koyré, Newtonian studies (London: Chapman and Hall, 1965).
I. Bernard Cohen, Introduction to Newton's Principia (Harvard University Press, 1971).
Richard S. Westfall, Force in Newton's physics; the science of dynamics in the seventeenth century (New York: American Elsevier, 1971).
S. Chandrasekhar, Newton's Principia for the common reader (New York: Oxford University Press, 1995).
Guicciardini, N., 2005, "Philosophia Naturalis..." in Grattan-Guinness, I., ed., Landmark Writings in Western Mathematics. Elsevier: 59–87.
Andrew Janiak, Newton as Philosopher (Cambridge University Press, 2008).
François De Gandt, Force and geometry in Newton's Principia trans. Curtis Wilson (Princeton, NJ: Princeton University Press, c1995).
Steffen Ducheyne, The main Business of Natural Philosophy: Isaac Newton's Natural-Philosophical Methodology (Dordrecht e.a.: Springer, 2012).
John Herivel, The background to Newton's Principia; a study of Newton's dynamical researches in the years 1664–84 (Oxford, Clarendon Press, 1965).
Brian Ellis, "The Origin and Nature of Newton's Laws of Motion" in Beyond the Edge of Certainty, ed. R. G. Colodny. (Pittsburgh: University Pittsburgh Press, 1965), 29–68.
E.A. Burtt, Metaphysical Foundations of Modern Science (Garden City, NY: Doubleday and Company, 1954).
Colin Pask, Magnificent Principia: Exploring Isaac Newton's Masterpiece (New York: Prometheus Books, 2013).
External links
Latin versions
First edition (1687)
Trinity College Library, Cambridge High resolution digitised version of Newton's own copy of the first edition, with annotations.
Cambridge University, Cambridge Digital Library High resolution digitised version of Newton's own copy of the first edition, interleaved with blank pages for his annotations and corrections.
1687: Newton's Principia, first edition (1687, in Latin). High-resolution presentation of the Gunnerus Library copy.
1687: Newton's Principia, first edition (1687, in Latin).
Project Gutenberg.
ETH-Bibliothek Zürich. From the library of Gabriel Cramer.
Philosophiæ Naturalis Principia Mathematica From the Rare Book and Special Collection Division at the Library of Congress
Second edition (1713)
ETH-Bibliothek Zürich.
ETH-Bibliothek Zürich (pirated Amsterdam reprint of 1723).
Philosophiæ naturalis principia mathematica (Adv.b.39.2), a 1713 edition with annotations by Newton in the collections of Cambridge University Library and fully digitised in Cambridge Digital Library
Third edition (1726)
ETH-Bibliothek Zürich.
Later Latin editions
Principia (in Latin, annotated). 1833 Glasgow reprint (volume 1) with Books 1 and 2 of the Latin edition annotated by Leseur, Jacquier and Calandrini 1739–42 (described above).
Archive.org (1871 reprint of the 1726 edition)
English translations
Andrew Motte, 1729, first English translation of third edition (1726)
WikiSource, Partial
Google books, vol. 1 with Book 1.
Internet Archive, vol. 2 with Books 2 and 3. (Book 3 starts at p.200.) (Google's metadata wrongly labels this vol. 1).
Partial HTML
Robert Thorpe 1802 translation
N. W. Chittenden, ed., 1846 "American Edition" a partly modernised English version, largely the Motte translation of 1729.
Wikisource
Archive.org #1
Archive.org #2
eBooks@Adelaide
Percival Frost 1863 translation with interpolations Archive.org
Florian Cajori 1934 modernisation of 1729 Motte and 1802 Thorpe translations
Ian Bruce has made a complete translation of the third edition, with notes, on his website.
Charles Leedham-Green, 2021: a complete and heavily annotated translation (Cambridge: Cambridge University Press).
Other links
David R. Wilkins of the School of Mathematics at Trinity College, Dublin has transcribed a few sections into TeX and METAPOST and made the source, as well as a formatted PDF available at Extracts from the Works of Isaac Newton.
1680s in science
1687 non-fiction books
1687 in England
1687 in science
17th-century books in Latin
1713 non-fiction books
1726 non-fiction books
18th-century books in Latin
Books by Isaac Newton
Copernican Revolution
Historical physics publications
Prose texts in Latin
Texts in Latin
Mathematics books
Natural philosophy
Physics books
Books about philosophy of mathematics
Treatises
Books about philosophy of physics | Philosophiæ Naturalis Principia Mathematica | ["Astronomy"] | 11,162 | ["Copernican Revolution", "History of astronomy"] |
48,791 | https://en.wikipedia.org/wiki/Pathology | Pathology is the study of disease. The word pathology also refers to the study of disease in general, incorporating a wide range of biology research fields and medical practices. However, when used in the context of modern medical treatment, the term is often used in a narrower fashion to refer to processes and tests that fall within the contemporary medical field of "general pathology", an area that includes a number of distinct but inter-related medical specialties that diagnose disease, mostly through analysis of tissue and human cell samples. Idiomatically, "a pathology" may also refer to the predicted or actual progression of particular diseases (as in the statement "the many different forms of cancer have diverse pathologies", in which case a more proper choice of word would be "pathophysiologies"). The suffix pathy is sometimes used to indicate a state of disease in cases of both physical ailment (as in cardiomyopathy) and psychological conditions (such as psychopathy). A physician practicing pathology is called a pathologist.
As a field of general inquiry and research, pathology addresses components of disease: cause, mechanisms of development (pathogenesis), structural alterations of cells (morphologic changes), and the consequences of changes (clinical manifestations). In common medical practice, general pathology is mostly concerned with analyzing known clinical abnormalities that are markers or precursors for both infectious and non-infectious disease, and is conducted by experts in one of two major specialties, anatomical pathology and clinical pathology. Further divisions in specialty exist on the basis of the involved sample types (comparing, for example, cytopathology, hematopathology, and histopathology), organs (as in renal pathology), and physiological systems (oral pathology), as well as on the basis of the focus of the examination (as with forensic pathology).
Pathology is a significant field in modern medical diagnosis and medical research.
Etymology
The term pathology derives, via Latin, from the Ancient Greek roots pathos (πάθος), meaning "experience" or "suffering", and -logia (-λογία), meaning "study of". The term is of early 16th-century origin, and became increasingly popularized after the 1530s.
History
The study of pathology, including the detailed examination of the body by dissection and inquiry into specific maladies, dates back to antiquity. Rudimentary understanding of many conditions was present in most early societies and is attested to in the records of the earliest historical societies, including those of the Middle East, India, and China. By the Hellenic period of ancient Greece, a concerted causal study of disease was underway (see Medicine in ancient Greece), with many notable early physicians (such as Hippocrates, for whom the modern Hippocratic Oath is named) having developed methods of diagnosis and prognosis for a number of diseases. The medical practices of the Romans and those of the Byzantines continued from these Greek roots, but, as with many areas of scientific inquiry, growth in understanding of medicine stagnated somewhat after the Classical Era, though it continued to develop slowly throughout numerous cultures. Notably, many advances were made in the medieval era of Islam (see Medicine in medieval Islam), during which numerous texts on complex pathologies were developed, also based on the Greek tradition. Even so, growth in complex understanding of disease mostly languished until knowledge and experimentation again began to proliferate in the Renaissance, Enlightenment, and Baroque eras, following the resurgence of the empirical method at new centers of scholarship. By the 17th century, the study of rudimentary microscopy was underway and examination of tissues had led British Royal Society member Robert Hooke to coin the word "cell", setting the stage for later germ theory.
Modern pathology began to develop as a distinct field of inquiry during the 19th century, through natural philosophers and physicians who studied disease and informally pursued what they termed "pathological anatomy" or "morbid anatomy". However, pathology as a formal area of specialty was not fully developed until the late 19th and early 20th centuries, with the advent of the detailed study of microbiology. In the 19th century, physicians had begun to understand that disease-causing pathogens, or "germs" (a catch-all for disease-causing, or pathogenic, microbes, such as bacteria, viruses, fungi, amoebae, molds, protists, and prions), existed and were capable of reproduction and multiplication, replacing earlier beliefs in humors or even spiritual agents that had dominated for much of the previous 1,500 years in European medicine. With the new understanding of causative agents, physicians began to compare the characteristics of one germ's symptoms as they developed within an affected individual to another germ's characteristics and symptoms. This approach led to the foundational understanding that diseases are able to replicate themselves, and that they can have many profound and varied effects on the human host. To determine causes of diseases, medical experts used the most common and widely accepted assumptions or symptoms of their times, a general principle of approach that persists in modern medicine.
Modern medicine was particularly advanced by further developments of the microscope to analyze tissues, to which Rudolf Virchow made a significant contribution, leading to many subsequent research developments.
By the late 1920s to early 1930s pathology was deemed a medical specialty. Combined with developments in the understanding of general physiology, by the beginning of the 20th century, the study of pathology had begun to split into a number of distinct fields, resulting in the development of a large number of modern specialties within pathology and related disciplines of diagnostic medicine.
General pathology
The modern practice of pathology is divided into a number of subdisciplines within the distinct but deeply interconnected aims of biological research and medical practice. Biomedical research into disease incorporates the work of a vast variety of life science specialists, whereas, in most parts of the world, to be licensed to practice pathology as a medical specialty, one has to complete medical school and secure a license to practice medicine. Structurally, the study of disease is divided into many different fields that study or diagnose markers for disease using methods and technologies particular to specific scales, organs, and tissue types.
Anatomical pathology
Anatomical pathology (Commonwealth) or anatomic pathology (United States) is a medical specialty that is concerned with the diagnosis of disease based on the gross, microscopic, chemical, immunologic and molecular examination of organs, tissues, and whole bodies (as in a general examination or an autopsy). Anatomical pathology is itself divided into subfields, the main divisions being surgical pathology, cytopathology, and forensic pathology. Anatomical pathology is one of two main divisions of the medical practice of pathology, the other being clinical pathology, the diagnosis of disease through the laboratory analysis of bodily fluids and tissues. Sometimes, pathologists practice both anatomical and clinical pathology, a combination known as general pathology.
Cytopathology
Cytopathology (sometimes referred to as "cytology") is a branch of pathology that studies and diagnoses diseases on the cellular level. It is usually used to aid in the diagnosis of cancer, but also helps in the diagnosis of certain infectious diseases and other inflammatory conditions as well as thyroid lesions, diseases involving sterile body cavities (peritoneal, pleural, and cerebrospinal), and a wide range of other body sites. Cytopathology is generally used on samples of free cells or tissue fragments (in contrast to histopathology, which studies whole tissues) and cytopathologic tests are sometimes called smear tests because the samples may be smeared across a glass microscope slide for subsequent staining and microscopic examination. However, cytology samples may be prepared in other ways, including cytocentrifugation.
Dermatopathology
Dermatopathology is a subspecialty of anatomic pathology that focuses on the skin and the rest of the integumentary system as an organ. It is unique in that there are two paths a physician can take to obtain the specialization. All general pathologists and general dermatologists train in the pathology of the skin, so the term dermatopathologist denotes either of these who has reached a certain level of accreditation and experience; in the US, either a general pathologist or a dermatologist can undergo a 1- to 2-year fellowship in the field of dermatopathology. The completion of this fellowship allows one to take a subspecialty board examination and become a board-certified dermatopathologist. Dermatologists are able to recognize most skin diseases based on their appearances, anatomic distributions, and behavior. Sometimes, however, those criteria do not lead to a conclusive diagnosis, and a skin biopsy is taken to be examined under the microscope using usual histological tests. In some cases, additional specialized testing needs to be performed on biopsies, including immunofluorescence, immunohistochemistry, electron microscopy, flow cytometry, and molecular-pathologic analysis. One of the greatest challenges of dermatopathology is its scope. More than 1500 different disorders of the skin exist, including cutaneous eruptions ("rashes") and neoplasms. Therefore, dermatopathologists must maintain a broad base of knowledge in clinical dermatology, and be familiar with several other specialty areas in medicine.
Forensic pathology
Forensic pathology focuses on determining the cause of death by post-mortem examination of a corpse or partial remains. An autopsy is typically performed by a coroner or medical examiner, often during criminal investigations; in this role, coroners and medical examiners are also frequently asked to confirm the identity of a corpse. The requirements for becoming a licensed practitioner of forensic pathology vary from country to country (and even within a given nation), but typically a minimal requirement is a medical doctorate with a specialty in general or anatomical pathology with subsequent study in forensic medicine. The methods forensic scientists use to determine the cause of death include examination of tissue specimens to identify the presence or absence of natural disease and other microscopic findings, interpretations of toxicology on body tissues and fluids to determine the chemical cause of overdoses, poisonings or other cases involving toxic agents, and examinations of physical trauma. Forensic pathology is a major component in the trans-disciplinary field of forensic science.
Histopathology
Histopathology refers to the microscopic examination of various forms of human tissue. Specifically, in clinical medicine, histopathology refers to the examination of a biopsy or surgical specimen by a pathologist, after the specimen has been processed and histological sections have been placed onto glass slides. This contrasts with the methods of cytopathology, which uses free cells or tissue fragments. Histopathological examination of tissues starts with surgery, biopsy, or autopsy. The tissue is removed from the body of an organism and then placed in a fixative that stabilizes the tissues to prevent decay. The most common fixative is formalin, although frozen-section processing is also common. To see the tissue under a microscope, the sections are stained with one or more pigments. The aim of staining is to reveal cellular components; counterstains are used to provide contrast. Histochemistry refers to the science of using chemical reactions between laboratory chemicals and components within tissue. The histological slides are then interpreted diagnostically and the resulting pathology report describes the histological findings and the opinion of the pathologist. In the case of cancer, this represents the tissue diagnosis required for most treatment protocols.
Neuropathology
Neuropathology is the study of disease of nervous system tissue, usually in the form of either surgical biopsies or sometimes whole brains in the case of autopsy. Neuropathology is a subspecialty of anatomic pathology, neurology, and neurosurgery. In many English-speaking countries, neuropathology is considered a subfield of anatomical pathology. A physician who specializes in neuropathology, usually by completing a fellowship after a residency in anatomical or general pathology, is called a neuropathologist. In day-to-day clinical practice, a neuropathologist generates diagnoses for patients. If a disease of the nervous system is suspected, and the diagnosis cannot be made by less invasive methods, a biopsy of nervous tissue is taken from the brain or spinal cord to aid in diagnosis. Biopsy is usually requested after a mass is detected by medical imaging. With autopsies, the principal work of the neuropathologist is to help in the post-mortem diagnosis of various conditions that affect the central nervous system. Biopsies may also be taken from the skin. Epidermal nerve fiber density testing (ENFD) is a more recently developed neuropathology test in which a punch skin biopsy is taken to identify small fiber neuropathies by analyzing the nerve fibers of the skin. This test is becoming available in select labs as well as many universities; it replaces the traditional nerve biopsy test as a less invasive alternative.
Pulmonary pathology
Pulmonary pathology is a subspecialty of anatomic (and especially surgical) pathology that deals with diagnosis and characterization of neoplastic and non-neoplastic diseases of the lungs and thoracic pleura. Diagnostic specimens are often obtained via bronchoscopic transbronchial biopsy, CT-guided percutaneous biopsy, or video-assisted thoracic surgery. These tests can be necessary to distinguish between infection, inflammation, and fibrotic conditions.
Renal pathology
Renal pathology is a subspecialty of anatomic pathology that deals with the diagnosis and characterization of disease of the kidneys. In a medical setting, renal pathologists work closely with nephrologists and transplant surgeons, who typically obtain diagnostic specimens via percutaneous renal biopsy. The renal pathologist must synthesize findings from traditional microscope histology, electron microscopy, and immunofluorescence to obtain a definitive diagnosis. Medical renal diseases may affect the glomerulus, the tubules and interstitium, the vessels, or a combination of these compartments.
Surgical pathology
Surgical pathology is one of the primary areas of practice for most anatomical pathologists. Surgical pathology involves the gross and microscopic examination of surgical specimens, as well as biopsies submitted by surgeons and non-surgeons such as general internists, medical subspecialists, dermatologists, and interventional radiologists. Often an excised tissue sample is the best and most definitive evidence of disease (or lack thereof) in cases where tissue is surgically removed from a patient. These determinations are usually accomplished by a combination of gross (i.e., macroscopic) and histologic (i.e., microscopic) examination of the tissue, and may involve evaluations of molecular properties of the tissue by immunohistochemistry or other laboratory tests.
There are two major types of specimens submitted for surgical pathology analysis: biopsies and surgical resections. A biopsy is a small piece of tissue removed primarily for surgical pathology analysis, most often in order to render a definitive diagnosis. Types of biopsies include core biopsies, which are obtained through the use of large-bore needles, sometimes under the guidance of radiological techniques such as ultrasound, CT scan, or magnetic resonance imaging. Incisional biopsies are obtained through diagnostic surgical procedures that remove part of a suspicious lesion, whereas excisional biopsies remove the entire lesion, and are similar to therapeutic surgical resections. Excisional biopsies of skin lesions and gastrointestinal polyps are very common. The pathologist's interpretation of a biopsy is critical to establishing the diagnosis of a benign or malignant tumor, and can differentiate between different types and grades of cancer, as well as determining the activity of specific molecular pathways in the tumor. Surgical resection specimens are obtained by the therapeutic surgical removal of an entire diseased area or organ (and occasionally multiple organs). These procedures are often intended as definitive surgical treatment of a disease in which the diagnosis is already known or strongly suspected, but pathological analysis of these specimens remains important in confirming the previous diagnosis.
Clinical pathology
Clinical pathology is a medical specialty that is concerned with the diagnosis of disease based on the laboratory analysis of bodily fluids such as blood and urine, as well as tissues, using the tools of chemistry, clinical microbiology, hematology and molecular pathology. Clinical pathologists work in close collaboration with medical technologists, hospital administrations, and referring physicians. Clinical pathologists learn to administer a number of visual and microscopic tests and an especially large variety of tests of the biophysical properties of tissue samples involving automated analysers and cultures. Sometimes the general term "laboratory medicine specialist" is used to refer to those working in clinical pathology, including medical doctors, Ph.D.s and doctors of pharmacology. Immunopathology, the study of an organism's immune response to infection, is sometimes considered to fall within the domain of clinical pathology.
Hematopathology
Hematopathology is the study of diseases of blood cells (including constituents such as white blood cells, red blood cells, and platelets) and of the tissues and organs comprising the hematopoietic system. The term hematopoietic system refers to tissues and organs that produce and/or primarily host hematopoietic cells and includes bone marrow, the lymph nodes, thymus, spleen, and other lymphoid tissues. In the United States, hematopathology is a board-certified subspecialty (licensed under the American Board of Pathology) practiced by those physicians who have completed a general pathology residency (anatomic, clinical, or combined) and an additional year of fellowship training in hematopathology. The hematopathologist reviews biopsies of lymph nodes, bone marrows and other tissues involved by an infiltrate of cells of the hematopoietic system. In addition, the hematopathologist may be in charge of flow cytometric and/or molecular hematopathology studies.
Molecular pathology
Molecular pathology is focused upon the study and diagnosis of disease through the examination of molecules within organs, tissues or bodily fluids. Molecular pathology is multidisciplinary by nature and shares some aspects of practice with both anatomic pathology and clinical pathology, molecular biology, biochemistry, proteomics and genetics. It is often applied in a context that is as much scientific as directly medical and encompasses the development of molecular and genetic approaches to the diagnosis and classification of human diseases, the design and validation of predictive biomarkers for treatment response and disease progression, and the susceptibility of individuals of different genetic constitution to particular disorders. The crossover between molecular pathology and epidemiology is represented by a related field, "molecular pathological epidemiology". Molecular pathology is commonly used in the diagnosis of infectious diseases and of cancers such as melanoma, brainstem glioma, and other brain tumors, as well as many other cancer types. Techniques are numerous but include quantitative polymerase chain reaction (qPCR), multiplex PCR, DNA microarray, in situ hybridization, DNA sequencing, antibody-based immunofluorescence tissue assays, molecular profiling of pathogens, and analysis of bacterial genes for antimicrobial resistance. These techniques are based on analyzing samples of DNA and RNA, and such molecular analysis is also widely used in gene therapy.
Oral and maxillofacial pathology
Oral and maxillofacial pathology is one of nine dental specialties recognized by the American Dental Association, and is sometimes considered a specialty of both dentistry and pathology. Oral pathologists must complete three years of postdoctoral training in an accredited program and subsequently obtain diplomate status from the American Board of Oral and Maxillofacial Pathology. The specialty focuses on the diagnosis, clinical management and investigation of diseases that affect the oral cavity and surrounding maxillofacial structures, including but not limited to odontogenic, infectious, epithelial, salivary gland, bone and soft tissue pathologies. It also significantly intersects with the field of dental pathology. Although concerned with a broad variety of diseases of the oral cavity, oral pathologists have roles distinct from otorhinolaryngologists ("ear, nose, and throat" specialists) and speech pathologists, the latter of whom help diagnose many neurological or neuromuscular conditions relevant to speech phonology or swallowing. Owing to the availability of the oral cavity to non-invasive examination, many conditions in the study of oral disease can be diagnosed, or at least suspected, from gross examination, but biopsies, cell smears, and other tissue analysis remain important diagnostic tools in oral pathology.
Medical training and accreditation
Becoming a pathologist generally requires specialty training after medical school, but individual nations vary somewhat in the medical licensing required of pathologists. In the United States, pathologists are physicians (D.O. or M.D.) who have completed a four-year undergraduate program, four years of medical school training, and three to four years of postgraduate training in the form of a pathology residency. Training may be within two primary specialties, as recognized by the American Board of Pathology: anatomical pathology and clinical pathology, each of which requires separate board certification. The American Osteopathic Board of Pathology also recognizes four primary specialties: anatomic pathology, dermatopathology, forensic pathology, and laboratory medicine. Pathologists may pursue specialised fellowship training within one or more subspecialties of either anatomical or clinical pathology. Some of these subspecialties permit additional board certification, while others do not.
In the United Kingdom, pathologists are physicians licensed by the UK General Medical Council. The training to become a pathologist is under the oversight of the Royal College of Pathologists. After four to six years of undergraduate medical study, trainees proceed to a two-year foundation program. Full-time training in histopathology currently lasts between five and five and a half years and includes specialist training in surgical pathology, cytopathology, and autopsy pathology. It is also possible to take a Royal College of Pathologists diploma in forensic pathology, dermatopathology, or cytopathology, recognising additional specialist training and expertise and to get specialist accreditation in forensic pathology, pediatric pathology, and neuropathology. All postgraduate medical training and education in the UK is overseen by the General Medical Council.
In France, pathology is separated into two distinct specialties, anatomical pathology and clinical pathology. Residencies for both last four years. Residency in anatomical pathology is open to physicians only, while clinical pathology is open to both physicians and pharmacists. At the end of the second year of clinical pathology residency, residents can choose between general clinical pathology and a specialization in one of the disciplines, but they cannot practice anatomical pathology, nor can anatomical pathology residents practice clinical pathology.
Overlap with other diagnostic medicine
Though separate fields in terms of medical practice, a number of areas of inquiry in medicine and medical science either overlap greatly with general pathology, work in tandem with it, or contribute significantly to the understanding of the pathology of a given disease or its course in an individual. As a significant portion of all general pathology practice is concerned with cancer, the practice of oncology makes extensive use of both anatomical and clinical pathology in diagnosis and treatment. In particular, biopsy, resection, and blood tests are all examples of pathology work that is essential for the diagnoses of many kinds of cancer and for the staging of cancerous masses. In a similar fashion, the tissue and blood analysis techniques of general pathology are of central significance to the investigation of serious infectious disease and as such inform significantly upon the fields of epidemiology, etiology, immunology, and parasitology. General pathology methods are of great importance to biomedical research into disease, wherein they are sometimes referred to as "experimental" or "investigative" pathology.
Medical imaging is the generating of visual representations of the interior of a body for clinical analysis and medical intervention. Medical imaging reveals details of internal physiology that help medical professionals plan appropriate treatments for tissue infection and trauma. Medical imaging is also central in supplying the biometric data necessary to establish baseline features of anatomy and physiology so as to increase the accuracy with which early or fine-detail abnormalities are detected. These diagnostic techniques are often performed in combination with general pathology procedures and are themselves often essential to developing new understanding of the pathogenesis of a given disease and tracking the progress of disease in specific medical cases. Examples of important subdivisions in medical imaging include radiology (which uses the imaging technologies of X-ray radiography), magnetic resonance imaging, medical ultrasonography (or ultrasound), endoscopy, elastography, tactile imaging, thermography, medical photography, nuclear medicine and functional imaging techniques such as positron emission tomography. Though they do not strictly relay images, readings from diagnostic tests involving electroencephalography, magnetoencephalography, and electrocardiography often give hints as to the state and function of certain tissues in the brain and heart respectively.
Pathology informatics
Pathology informatics is a subfield of health informatics. It is the use of information technology in pathology. It encompasses pathology laboratory operations, data analysis, and the interpretation of pathology-related information.
Key aspects of pathology informatics include:
Laboratory information management systems (LIMS): Implementing and managing computer systems specifically designed for pathology departments. These systems help in tracking and managing patient specimens, results, and other pathology data.
Digital pathology: Involves the use of digital technology to create, manage, and analyze pathology images. This includes slide scanning and automated image analysis.
Telepathology: Using technology to enable remote pathology consultation and collaboration.
Quality assurance and reporting: Implementing informatics solutions to ensure the quality and accuracy of pathology processes.
Psychopathology
Psychopathology is the study of mental illness, particularly of severe disorders. Informed heavily by both psychology and neurology, its purpose is to classify mental illness, elucidate its underlying causes, and guide clinical psychiatric treatment accordingly. Although diagnosis and classification of mental norms and disorders is largely the purview of psychiatry—the results of which are guidelines such as the Diagnostic and Statistical Manual of Mental Disorders, which attempt to classify mental disease mostly on behavioural evidence, though not without controversy—the field is also heavily, and increasingly, informed by neuroscience and the other biological and cognitive sciences. Mental or social disorders or behaviours seen as generally unhealthy or excessive in a given individual, to the point where they cause harm or severe disruption to the person's lifestyle, are often called "pathological" (e.g., pathological gambling or pathological lying).
Non-humans
Although the vast majority of lab work and research in pathology concerns the development of disease in humans, pathology is of significance throughout the biological sciences. Two main catch-all fields exist to represent most complex organisms capable of serving as host to a pathogen or other form of disease: veterinary pathology (concerned with all non-human species of kingdom of Animalia) and phytopathology, which studies disease in plants.
Veterinary pathology
Veterinary pathology covers a vast array of species, but with a significantly smaller number of practitioners, so understanding of disease in non-human animals, especially as regards veterinary practice, varies considerably by species. Nevertheless, significant amounts of pathology research are conducted on animals, for two primary reasons: first, the origins of diseases are typically zoonotic in nature, and many infectious pathogens have animal vectors, so understanding the mechanisms of action for these pathogens in non-human hosts is essential to the understanding and application of epidemiology; and second, animals that share physiological and genetic traits with humans can be used as surrogates for the study of disease and potential treatments, as well as the effects of various synthetic products. For this reason, as well as their roles as livestock and companion animals, mammals generally have the largest body of research in veterinary pathology. Animal testing remains a controversial practice, even in cases where it is used to research treatment for human disease. As in human medical pathology, the practice of veterinary pathology is customarily divided into the two main fields of anatomical and clinical pathology.
Plant pathology
Although the pathogens and their mechanics differ greatly from those of animals, plants are subject to a wide variety of diseases, including those caused by fungi, oomycetes, bacteria, viruses, viroids, virus-like organisms, phytoplasmas, protozoa, nematodes and parasitic plants. Damage caused by insects, mites, vertebrates, and other small herbivores is not considered a part of the domain of plant pathology. The field is connected to plant disease epidemiology and is especially concerned with the horticulture of species that are of high importance to the human diet or other human utility.
See also
Biopsy
Causal inference
Cell (biology)
Disease
Environmental pathology
Epidemiology
Etiology (medicine)
Hematology
Histology
Immunology
List of pathologists
Medical diagnosis
Medical jurisprudence
Medicine
Microbiology
Microscopy
Minimally-invasive procedures
Oncology
Parasitology
Pathogen
Pathogenesis
Pathophysiology
Precision medicine
Spectroscopy
Speech–language pathology
Telepathology
References
External links
American Society for Clinical Pathology (ASCP)
American Society for Investigative Pathology (ASIP)
Pathpedia online pathology resource: Comprehensive pathology website with numerous resources.
College of American Pathologists
humpath.com (Atlas in Human Pathology)
Intersociety Council for Pathology Training (ICPI)
Pathological Society of Great Britain and Ireland
Royal College of Pathologists (UK)
Royal College of Pathologists of Australasia (Australia & Oceania)
United States and Canadian Academy of Pathology
WebPath: The Internet Pathology Laboratory for Medical Education
Atlases: High Resolution Pathology Images
Branches of biology | Pathology | ["Biology"] | 6,133 | ["nan", "Pathology"] |
48,803 | https://en.wikipedia.org/wiki/Gamma-ray%20burst | In gamma-ray astronomy, gamma-ray bursts (GRBs) are immensely energetic events occurring in distant galaxies which represent the brightest and "most powerful class of explosion in the universe." These extreme electromagnetic events are second only to the Big Bang as the most energetic and luminous phenomenon ever known. Gamma-ray bursts can last from a few milliseconds to several hours. After the initial flash of gamma rays, a longer-lived afterglow is emitted, usually in the longer wavelengths of X-ray, ultraviolet, optical, infrared, microwave or radio frequencies.
The intense radiation of most observed GRBs is thought to be released during a supernova or superluminous supernova as a high-mass star implodes to form a neutron star or a black hole. Short-duration (sGRB) events are a subclass of GRB signals that are now known, from gravitational wave observations, to originate from the cataclysmic merger of binary neutron stars.
The sources of most GRBs are billions of light years away from Earth, implying that the explosions are both extremely energetic (a typical burst releases as much energy in a few seconds as the Sun will in its entire 10-billion-year lifetime) and extremely rare (a few per galaxy per million years). All GRBs in recorded history have originated from outside the Milky Way galaxy, although a related class of phenomena, soft gamma repeaters, are associated with magnetars within our galaxy. This may be unsurprising, since a gamma-ray burst in the Milky Way pointed directly at Earth would likely sterilize the planet or cause a mass extinction. The Late Ordovician mass extinction has been hypothesised by some researchers to have occurred as a result of such a gamma-ray burst.
GRB signals were first detected in 1967 by the Vela satellites, which were designed to detect covert nuclear weapons tests; after an "exhaustive" period of analysis, this was published as academic research in 1973. Following their discovery, hundreds of theoretical models were proposed to explain these bursts, such as collisions between comets and neutron stars. Little information was available to verify these models until the 1997 detection of the first X-ray and optical afterglows and direct measurement of their redshifts using optical spectroscopy, and thus their distances and energy outputs. These discoveries—and subsequent studies of the galaxies and supernovae associated with the bursts—clarified the distance and luminosity of GRBs, definitively placing them in distant galaxies.
History
Gamma-ray bursts were first observed in the late 1960s by the U.S. Vela satellites, which were built to detect gamma radiation pulses emitted by nuclear weapons tested in space. The United States suspected that the Soviet Union might attempt to conduct secret nuclear tests after signing the Nuclear Test Ban Treaty in 1963. On July 2, 1967, at 14:19 UTC, the Vela 4 and Vela 3 satellites detected a flash of gamma radiation unlike any known nuclear weapons signature. Uncertain what had happened but not considering the matter particularly urgent, the team at the Los Alamos National Laboratory, led by Ray Klebesadel, filed the data away for investigation. As additional Vela satellites were launched with better instruments, the Los Alamos team continued to find inexplicable gamma-ray bursts in their data. By analyzing the different arrival times of the bursts as detected by different satellites, the team was able to determine rough estimates for the sky positions of 16 bursts and definitively rule out a terrestrial or solar origin. Contrary to popular belief, the data was never classified. After thorough analysis, the findings were published in 1973 as an Astrophysical Journal article entitled "Observations of Gamma-Ray Bursts of Cosmic Origin".
Most early hypotheses of gamma-ray bursts posited nearby sources within the Milky Way Galaxy. From 1991, the Compton Gamma Ray Observatory (CGRO) and its Burst and Transient Source Explorer (BATSE) instrument, an extremely sensitive gamma-ray detector, provided data that showed the distribution of GRBs is isotropic: not biased towards any particular direction in space. If the sources were from within our own galaxy, they would be strongly concentrated in or near the galactic plane. The absence of any such pattern in the case of GRBs provided strong evidence that gamma-ray bursts must come from beyond the Milky Way. However, some Milky Way models are still consistent with an isotropic distribution.
Counterpart objects as candidate sources
For decades after the discovery of GRBs, astronomers searched for a counterpart at other wavelengths: i.e., any astronomical object in positional coincidence with a recently observed burst. Astronomers considered many distinct classes of objects, including white dwarfs, pulsars, supernovae, globular clusters, quasars, Seyfert galaxies, and BL Lac objects. All such searches were unsuccessful, and in a few cases particularly well-localized bursts (those whose positions were determined with what was then a high degree of accuracy) could be clearly shown to have no bright objects of any nature consistent with the position derived from the detecting satellites. This suggested an origin of either very faint stars or extremely distant galaxies. Even the most accurate positions contained numerous faint stars and galaxies, and it was widely agreed that final resolution of the origins of cosmic gamma-ray bursts would require both new satellites and faster communication.
Afterglow
Several models for the origin of gamma-ray bursts postulated that the initial burst of gamma rays should be followed by afterglow: slowly fading emission at longer wavelengths created by collisions between the burst ejecta and interstellar gas. Early searches for this afterglow were unsuccessful, largely because it is difficult to observe a burst's position at longer wavelengths immediately after the initial burst. The breakthrough came in February 1997 when the satellite BeppoSAX detected a gamma-ray burst (GRB 970228) and when the X-ray camera was pointed towards the direction from which the burst had originated, it detected fading X-ray emission. The William Herschel Telescope identified a fading optical counterpart 20 hours after the burst. Once the GRB faded, deep imaging was able to identify a faint, distant host galaxy at the location of the GRB as pinpointed by the optical afterglow.
Because of the very faint luminosity of this galaxy, its exact distance was not measured for several years. Well before then, another major breakthrough occurred with the next event registered by BeppoSAX, GRB 970508. This event was localized within four hours of its discovery, allowing research teams to begin making observations much sooner than any previous burst. The spectrum of the object revealed a redshift of z = 0.835, placing the burst at a distance of roughly 6 billion light years from Earth. This was the first accurate determination of the distance to a GRB, and together with the discovery of the host galaxy of 970228 proved that GRBs occur in extremely distant galaxies. Within a few months, the controversy about the distance scale ended: GRBs were extragalactic events originating within faint galaxies at enormous distances. The following year, GRB 980425 was followed within a day by a bright supernova (SN 1998bw), coincident in location, indicating a clear connection between GRBs and the deaths of very massive stars. This burst provided the first strong clue about the nature of the systems that produce GRBs.
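As a rough check on the quoted distance, the redshift-to-distance conversion can be sketched with a small numerical integration. This is a minimal sketch assuming a flat Lambda-CDM cosmology with illustrative parameters (H0 = 70 km/s/Mpc, matter density 0.3; these are not values from the article). With such modern parameters, the light-travel distance at z = 0.835 comes out nearer 7 billion light-years; the "roughly 6 billion" in the text reflects the cosmological assumptions of the late 1990s.

```python
import math

# Hypothetical flat Lambda-CDM parameters (illustrative, not from the article)
H0 = 70.0                 # Hubble constant, km/s/Mpc
omega_m = 0.3             # matter density parameter
omega_l = 1.0 - omega_m   # dark-energy density (flatness)

# Convert H0 to 1/Gyr: 1 Mpc = 3.0857e19 km, 1 Gyr = 3.156e16 s
H0_per_gyr = H0 / 3.0857e19 * 3.156e16

def lookback_time_gyr(z, steps=10_000):
    """Numerically integrate dt = dz' / [(1+z') H(z')] out to redshift z."""
    total, dz = 0.0, z / steps
    for i in range(steps):
        zi = (i + 0.5) * dz   # midpoint rule
        hz = H0_per_gyr * math.sqrt(omega_m * (1 + zi) ** 3 + omega_l)
        total += dz / ((1 + zi) * hz)
    return total

t = lookback_time_gyr(0.835)
print(f"Lookback time at z = 0.835: {t:.1f} Gyr "
      f"(light-travel distance ~{t:.1f} billion light-years)")
```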
More recent instruments (launched from 2000)
BeppoSAX functioned until 2002 and CGRO (with BATSE) was deorbited in 2000. However, the revolution in the study of gamma-ray bursts motivated the development of a number of additional instruments designed specifically to explore the nature of GRBs, especially in the earliest moments following the explosion. The first such mission, HETE-2, was launched in 2000 and functioned until 2006, providing most of the major discoveries during this period. One of the most successful space missions to date, Swift, was launched in 2004 and as of May 2024 is still operational. Swift is equipped with a very sensitive gamma-ray detector as well as on-board X-ray and optical telescopes, which can be rapidly and automatically slewed to observe afterglow emission following a burst. More recently, the Fermi mission was launched carrying the Gamma-Ray Burst Monitor, which detects bursts at a rate of several hundred per year, some of which are bright enough to be observed at extremely high energies with Fermi's Large Area Telescope. Meanwhile, on the ground, numerous optical telescopes have been built or modified to incorporate robotic control software that responds immediately to signals sent through the Gamma-ray Burst Coordinates Network. This allows the telescopes to rapidly repoint towards a GRB, often within seconds of receiving the signal and while the gamma-ray emission itself is still ongoing.
The Space Variable Objects Monitor is a small X-ray telescope satellite for studying the explosions of massive stars by analysing the resulting gamma-ray bursts, developed by China National Space Administration (CNSA), Chinese Academy of Sciences (CAS) and the French Space Agency (CNES), launched on 22 June 2024 (07:00:00 UTC).
The Taiwan Space Agency is launching a cubesat called The Gamma-ray Transients Monitor to track GRBs and other bright gamma-ray transients with energies ranging from 50 keV to 2 MeV in Q4 2026.
Short bursts and other observations
New developments since the 2000s include the recognition of short gamma-ray bursts as a separate class (likely from merging neutron stars and not associated with supernovae), the discovery of extended, erratic flaring activity at X-ray wavelengths lasting for many minutes after most GRBs, and the discovery of the most luminous and, for a time, the most distant objects in the universe. Prior to a flurry of discoveries from the James Webb Space Telescope, a gamma-ray burst held the record as the most distant known object in the universe.
In October 2018, astronomers reported that GRB 150101B (detected in 2015) and GW170817, a gravitational wave event detected in 2017 (which has been associated with GRB 170817A, a burst detected 1.7 seconds later), may have been produced by the same mechanism—the merger of two neutron stars. The similarities between the two events, in terms of gamma ray, optical, and x-ray emissions, as well as in the nature of the associated host galaxies, were considered "striking", suggesting the two separate events may both be the result of the merger of neutron stars, and both may be a kilonova, which may be more common in the universe than previously understood, according to the researchers.
The highest energy light observed from a gamma-ray burst was one teraelectronvolt, from GRB 190114C in 2019. Although enormous for such a distant event, this energy is around 3 orders of magnitude lower than the highest energy light observed from closer gamma-ray sources within our Milky Way galaxy, for example a 2021 event of 1.4 petaelectronvolts.
Classification
The light curves of gamma-ray bursts are extremely diverse and complex. No two gamma-ray burst light curves are identical, with large variation observed in almost every property: the duration of observable emission can vary from milliseconds to tens of minutes, there can be a single peak or several individual subpulses, and individual peaks can be symmetric or with fast brightening and very slow fading. Some bursts are preceded by a "precursor" event, a weak burst that is then followed (after seconds to minutes of no emission at all) by the much more intense "true" bursting episode. The light curves of some events have extremely chaotic and complicated profiles with almost no discernible patterns.
Although some light curves can be roughly reproduced using certain simplified models, little progress has been made in understanding the full diversity observed. Many classification schemes have been proposed, but these are often based solely on differences in the appearance of light curves and may not always reflect a true physical difference in the progenitors of the explosions. However, plots of the distribution of the observed duration for a large number of gamma-ray bursts show a clear bimodality, suggesting the existence of two separate populations: a "short" population with an average duration of about 0.3 seconds and a "long" population with an average duration of about 30 seconds. Both distributions are very broad with a significant overlap region in which the identity of a given event is not clear from duration alone. Additional classes beyond this two-tiered system have been proposed on both observational and theoretical grounds.
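In practice the two populations are usually separated with a simple cut on the burst duration (conventionally T90, the interval containing 90% of the burst's fluence). The sketch below applies the conventional 2-second threshold to a few made-up durations; the sample values are illustrative, and, as noted above, duration alone is ambiguous in the overlap region.

```python
# Illustrative T90 durations in seconds (hypothetical values, not catalog data)
t90_samples = [0.08, 0.3, 0.9, 1.4, 2.5, 8.0, 25.0, 40.0, 120.0]

def classify(t90, threshold=2.0):
    """Conventional duration cut: < 2 s is 'short', >= 2 s is 'long'.
    Near the threshold the two populations overlap, so duration alone
    is not a reliable discriminator there."""
    return "short" if t90 < threshold else "long"

for t in t90_samples:
    print(f"T90 = {t:6.2f} s -> {classify(t)}")
```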
Short gamma-ray bursts
Events with a duration of less than about two seconds are classified as short gamma-ray bursts (sGRB). These account for about 30% of gamma-ray bursts, but until 2005, no afterglow had been successfully detected from any short event and little was known about their origins. Following this, several dozen short gamma-ray burst afterglows were detected and localized, several of them associated with regions of little or no star formation, such as large elliptical galaxies. This ruled out a link to massive stars, confirming the short events to be physically distinct from long events. In addition, there had been no association with supernovae.
The true nature of these objects was thus initially unknown, but the leading hypothesis was that they originated from the mergers of binary neutron stars or a neutron star with a black hole. Such mergers were hypothesized to produce kilonovae, and evidence for a kilonova associated with short GRB 130603B was reported in 2013. The mean duration of sGRB events of around 200 milliseconds implied (due to causality) that the sources must be of very small physical diameter in stellar terms: less than 0.2 light-seconds (60,000 km or 37,000 miles)—about four times the Earth's diameter. The observation of minutes to hours of X-ray flashes after an sGRB was seen as consistent with small particles of a precursor object like a neutron star initially being swallowed by a black hole in less than two seconds, followed by some hours of lower-energy events as remaining fragments of tidally disrupted neutron star material (no longer neutronium) would remain in orbit, spiraling into the black hole over a longer period of time. The origin of short gamma-ray bursts in kilonovae was finally conclusively established in 2017, when short GRB 170817A co-occurred with the detection of gravitational wave GW170817, a signal from the merger of two neutron stars.
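The causality argument in the previous paragraph is simple enough to check directly: a source that varies on a timescale of Δt can be no larger than c·Δt. A minimal sketch using the 200-millisecond figure from the text:

```python
c = 299_792.458   # speed of light, km/s
dt = 0.2          # typical sGRB duration/variability timescale, seconds

r_max = c * dt    # light-crossing size limit implied by causality
print(f"Maximum source size: {r_max:,.0f} km")                    # ~60,000 km
print(f"In Earth diameters (12,742 km): {r_max / 12_742:.1f}x")   # roughly four to five
```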
Unrelated to these cataclysmic origins, short-duration gamma-ray signals are also produced by giant flares from soft gamma repeaters in our own—or nearby—galaxies.
Long gamma-ray bursts
Most observed events (70%) have a duration of greater than two seconds and are classified as long gamma-ray bursts. Because these events constitute the majority of the population and because they tend to have the brightest afterglows, they have been observed in much greater detail than their short counterparts. Almost every well-studied long gamma-ray burst has been linked to a galaxy with rapid star formation, and in many cases to a core-collapse supernova as well, unambiguously associating long GRBs with the deaths of massive stars. Long GRB afterglow observations, at high redshift, are also consistent with the GRB having originated in star-forming regions.
In December 2022, astronomers reported the observation of GRB 211211A, which lasted 51 seconds, the first evidence of a long GRB likely associated with a merger of "compact binary objects" such as neutron stars or white dwarfs. Following this, GRB 191019A (2019, 64 s) and GRB 230307A (2023, 35 s) have been argued to signify an emerging class of long GRBs which may originate from these types of progenitor events.
Ultra-long gamma-ray bursts
Ultra-long GRBs (ulGRBs) are defined as GRBs lasting more than 10,000 seconds, occupying the extreme upper tail of the GRB duration distribution. They have been proposed to form a separate class, caused by the collapse of a blue supergiant star, a tidal disruption event or a new-born magnetar. Only a small number have been identified to date, their primary characteristic being their gamma-ray emission duration. The most studied ultra-long events include GRB 101225A and GRB 111209A. The low detection rate may be a result of low sensitivity of current detectors to long-duration events, rather than a reflection of their true frequency. A 2013 study, on the other hand, showed that the existing evidence for a separate ultra-long GRB population with a new type of progenitor is inconclusive, and that further multi-wavelength observations are needed to draw a firmer conclusion.
Energetics
Gamma-ray bursts are very bright as observed from Earth despite their typically immense distances. An average long GRB has a bolometric flux comparable to a bright star of our galaxy despite a distance of billions of light years (compared to a few tens of light years for most visible stars). Most of this energy is released in gamma rays, although some GRBs have extremely luminous optical counterparts as well. GRB 080319B, for example, was accompanied by an optical counterpart that peaked at a visible magnitude of 5.8, comparable to that of the dimmest naked-eye stars despite the burst's distance of 7.5 billion light years. This combination of brightness and distance implies an extremely energetic source. Assuming the gamma-ray explosion to be spherical, the energy output of GRB 080319B would be within a factor of two of the rest-mass energy of the Sun (the energy which would be released were the Sun to be converted entirely into radiation).
Gamma-ray bursts are thought to be highly focused explosions, with most of the explosion energy collimated into a narrow jet. The jets of gamma-ray bursts are ultrarelativistic, and are the most relativistic jets in the universe. The matter in gamma-ray burst jets may also become superluminal, or faster than the speed of light in the jet medium, with there also being effects of time reversibility. The approximate angular width of the jet (that is, the degree of spread of the beam) can be estimated directly by observing the achromatic "jet breaks" in afterglow light curves: a time after which the slowly decaying afterglow begins to fade rapidly as the jet slows and can no longer beam its radiation as effectively. Observations suggest significant variation in the jet angle from between 2 and 20 degrees.
Because their energy is strongly focused, the gamma rays emitted by most bursts are expected to miss the Earth and never be detected. When a gamma-ray burst is pointed towards Earth, the focusing of its energy along a relatively narrow beam causes the burst to appear much brighter than it would have been were its energy emitted spherically. The total energy of typical gamma-ray bursts has been estimated at 3 × 10⁴⁴ J, which is larger than the total energy (10⁴⁴ J) of ordinary supernovae (type Ia, Ibc, II), with gamma-ray bursts also being more powerful than the typical supernova. Very bright supernovae have been observed to accompany several of the nearest GRBs. Further support for focusing of the output of GRBs comes from observations of strong asymmetries in the spectra of nearby type Ic supernovae and from radio observations taken long after bursts when their jets are no longer relativistic.
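The collimation correction implied by the jet angles above is straightforward to sketch. For a two-sided jet of half-opening angle θj, the fraction of the sky the beams cover is f_b = 1 − cos θj, and the true energy is the isotropic-equivalent value scaled by f_b. In the sketch below the energy value is illustrative, not taken from a specific burst; the 2–20 degree angle range is from the text.

```python
import math

E_iso = 1e46   # hypothetical isotropic-equivalent energy, J (illustrative value)

for theta_deg in (2, 20):   # range of jet half-opening angles quoted in the text
    f_b = 1 - math.cos(math.radians(theta_deg))   # beaming fraction, two-sided jet
    print(f"theta_j = {theta_deg:2d} deg: f_b = {f_b:.2e}, "
          f"collimation-corrected energy = {f_b * E_iso:.2e} J")
```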
However, a competing model, the binary-driven hypernova model, developed by Remo Ruffini and others at ICRANet, accepts the extreme isotropic energy totals as being true, with there being no need to correct for beaming. They also note that the extreme beaming angles in the standard "fireball" model have never been physically corroborated.
With the discovery of GRB 190114C, astronomers may have been missing half of the total energy that gamma-ray bursts produce, with Konstancja Satalecka, an astrophysicist at the German Electron Synchrotron, stating that "Our measurements show that the energy released in very-high-energy gamma-rays is comparable to the amount radiated at all lower energies taken together".
Short (time duration) GRBs appear to come from a lower-redshift (i.e. less distant) population and are less luminous than long GRBs. The degree of beaming in short bursts has not been accurately measured, but as a population they are likely less collimated than long GRBs or possibly not collimated at all in some cases.
Progenitors
Because of the immense distances of most gamma-ray burst sources from Earth, identification of the progenitors, the systems that produce these explosions, is challenging. The association of some long GRBs with supernovae and the fact that their host galaxies are rapidly star-forming offer very strong evidence that long gamma-ray bursts are associated with massive stars. The most widely accepted mechanism for the origin of long-duration GRBs is the collapsar model, in which the core of an extremely massive, low-metallicity, rapidly rotating star collapses into a black hole in the final stages of its evolution. Matter near the star's core rains down towards the center and swirls into a high-density accretion disk. The infall of this material into a black hole drives a pair of relativistic jets out along the rotational axis, which pummel through the stellar envelope and eventually break through the stellar surface and radiate as gamma rays. Some alternative models replace the black hole with a newly formed magnetar, although most other aspects of the model (the collapse of the core of a massive star and the formation of relativistic jets) are the same.
However, a new model which has gained support and was developed by the Italian astrophysicist Remo Ruffini and other scientists at ICRANet is that of the binary-driven hypernova (BdHN) model. The model succeeds and improves upon both the fireshell model and the induced gravitational collapse (IGC) paradigm suggested before, and explains all aspects of gamma-ray bursts. The model posits long gamma-ray bursts as occurring in binary systems with a carbon–oxygen core and a companion neutron star or a black hole. Furthermore, the energy of GRBs in the model is isotropic instead of collimated. The creators of the model have noted the numerous drawbacks of the standard "fireball" model as motivation for developing the model, such as the markedly different energetics for supernova and gamma-ray bursts, and the fact that the existence of extremely narrow beaming angles have never been observationally corroborated.
The closest analogs within the Milky Way galaxy of the stars producing long gamma-ray bursts are likely the Wolf–Rayet stars, extremely hot and massive stars, which have shed most or all of their hydrogen envelope. Eta Carinae, Apep, and WR 104 have been cited as possible future gamma-ray burst progenitors. It is unclear if any star in the Milky Way has the appropriate characteristics to produce a gamma-ray burst.
The massive-star model probably does not explain all types of gamma-ray burst. There is strong evidence that some short-duration gamma-ray bursts occur in systems with no star formation and no massive stars, such as elliptical galaxies and galaxy halos. The favored hypothesis for the origin of most short gamma-ray bursts is the merger of a binary system consisting of two neutron stars. According to this model, the two stars in a binary slowly spiral towards each other because gravitational radiation carries away orbital energy, until tidal forces suddenly rip the neutron stars apart and they collapse into a single black hole. The infall of matter into the new black hole produces an accretion disk and releases a burst of energy, analogous to the collapsar model. Numerous other models have also been proposed to explain short gamma-ray bursts, including the merger of a neutron star and a black hole, the accretion-induced collapse of a neutron star, or the evaporation of primordial black holes.
An alternative explanation proposed by Friedwardt Winterberg is that in the course of a gravitational collapse and in reaching the event horizon of a black hole, all matter disintegrates into a burst of gamma radiation.
Tidal disruption events
This class of GRB-like events was first discovered through the detection of Swift J1644+57 (originally classified as GRB 110328A) by the Swift Gamma-Ray Burst Mission on 28 March 2011. This event had a gamma-ray duration of about 2 days, much longer than even ultra-long GRBs, and was detected at many frequencies for months and years afterwards. It occurred at the center of a small elliptical galaxy about 3.8 billion light-years away. This event has been accepted as a tidal disruption event (TDE), in which a star wanders too close to a supermassive black hole and is shredded. In the case of Swift J1644+57, an astrophysical jet traveling at near the speed of light was launched, and lasted roughly 1.5 years before turning off.
Since 2011, only 4 jetted TDEs have been discovered, of which 3 were detected in gamma-rays (including Swift J1644+57). It is estimated that just 1% of all TDEs are jetted events.
Emission mechanisms
The means by which gamma-ray bursts convert energy into radiation remains poorly understood, and as of 2010 there was still no generally accepted model for how this process occurs. Any successful model of GRB emission must explain the physical process for generating gamma-ray emission that matches the observed diversity of light curves, spectra, and other characteristics. Particularly challenging is the need to explain the very high efficiencies that are inferred from some explosions: some gamma-ray bursts may convert as much as half (or more) of the explosion energy into gamma-rays. Early observations of the bright optical counterparts to GRB 990123 and to GRB 080319B, whose optical light curves were extrapolations of the gamma-ray light spectra, have suggested that inverse Compton scattering may be the dominant process in some events. In this model, pre-existing low-energy photons are scattered by relativistic electrons within the explosion, augmenting their energy by a large factor and transforming them into gamma-rays.
The nature of the longer-wavelength afterglow emission (ranging from X-ray through radio) that follows gamma-ray bursts is better understood. Any energy released by the explosion not radiated away in the burst itself takes the form of matter or energy moving outward at nearly the speed of light. As this matter collides with the surrounding interstellar gas, it creates a relativistic shock wave that then propagates forward into interstellar space. A second shock wave, the reverse shock, may propagate back into the ejected matter. Extremely energetic electrons within the shock wave are accelerated by strong local magnetic fields and radiate as synchrotron emission across most of the electromagnetic spectrum. This model has generally been successful in modeling the behavior of many observed afterglows at late times (generally, hours to days after the explosion), although there are difficulties explaining all features of the afterglow very shortly after the gamma-ray burst has occurred.
Rate of occurrence and potential effects on life
Gamma ray bursts can have harmful or destructive effects on life. Considering the universe as a whole, the safest environments for life similar to that on Earth are the lowest density regions in the outskirts of large galaxies. Our knowledge of galaxy types and their distribution suggests that life as we know it can only exist in about 10% of all galaxies. Furthermore, galaxies with a redshift, z, higher than 0.5 are unsuitable for life as we know it, because of their higher rate of GRBs and their stellar compactness.
All GRBs observed to date have occurred well outside the Milky Way galaxy and have been harmless to Earth. However, if a GRB were to occur within the Milky Way within 5,000 to 8,000 light-years and its emission were beamed straight towards Earth, the effects could be harmful and potentially devastating for its ecosystems. Currently, orbiting satellites detect on average approximately one GRB per day. The closest observed GRB as of March 2014 was GRB 980425, located at a redshift of z = 0.0085 in an SBc-type dwarf galaxy. GRB 980425 was far less energetic than the average GRB and was associated with the Type Ib supernova SN 1998bw.
Estimating the exact rate at which GRBs occur is difficult; for a galaxy of approximately the same size as the Milky Way, estimates of the expected rate (for long-duration GRBs) range from one burst every 10,000 years to one burst every 1,000,000 years. Only a small percentage of these would be beamed towards Earth. Estimates of the rate of occurrence of short-duration GRBs are even more uncertain because of the unknown degree of collimation, but are probably comparable.
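Combining such an intrinsic rate with a beaming fraction gives the rate of bursts actually aimed at Earth. A minimal sketch, assuming a hypothetical typical jet half-angle of 10 degrees (the rates are from the text; the angle is illustrative):

```python
import math

theta_j = math.radians(10)    # hypothetical typical jet half-opening angle
f_b = 1 - math.cos(theta_j)   # fraction of the sky the two-sided jet covers

for rate in (1e-4, 1e-6):     # bursts per galaxy per year (range from the text)
    aimed = rate * f_b        # rate of bursts beamed towards Earth
    print(f"intrinsic rate {rate:.0e}/yr -> aimed at Earth ~{aimed:.1e}/yr "
          f"(one every ~{1 / aimed:,.0f} years)")
```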
Since GRBs are thought to involve beamed emission along two jets in opposing directions, only planets in the path of these jets would be subjected to the high energy gamma radiation. A GRB could potentially vaporize anything in its beams' paths within a range of around 200 light-years.
Although nearby GRBs hitting Earth with a destructive shower of gamma rays are only hypothetical events, high energy processes across the galaxy have been observed to affect the Earth's atmosphere.
Effects on Earth
Earth's atmosphere is very effective at absorbing high energy electromagnetic radiation such as x-rays and gamma rays, so these types of radiation would not reach any dangerous levels at the surface during the burst event itself. The immediate effect on life on Earth from a GRB within a few kiloparsecs would only be a short increase in ultraviolet radiation at ground level, lasting from less than a second to tens of seconds. This ultraviolet radiation could potentially reach dangerous levels depending on the exact nature and distance of the burst, but it seems unlikely to be able to cause a global catastrophe for life on Earth.
The long-term effects from a nearby burst are more dangerous. Gamma rays cause chemical reactions in the atmosphere involving oxygen and nitrogen molecules, creating first nitric oxide and then nitrogen dioxide gas. The nitrogen oxides cause dangerous effects on three levels. First, they deplete ozone, with models showing a possible global reduction of 25–35%, with as much as 75% in certain locations, an effect that would last for years. This reduction is enough to cause a dangerously elevated UV index at the surface. Secondly, the nitrogen oxides cause photochemical smog, which darkens the sky and blocks out parts of the sunlight spectrum. This would affect photosynthesis, but models show only about a 1% reduction of the total sunlight spectrum, lasting a few years. However, the smog could potentially cause a cooling effect on Earth's climate, producing a "cosmic winter" (similar to an impact winter, but without an impact), but only if it occurs simultaneously with a global climate instability. Thirdly, the elevated nitrogen dioxide levels in the atmosphere would wash out and produce acid rain. Nitric acid is toxic to a variety of organisms, including amphibian life, but models predict that it would not reach levels that would cause a serious global effect. The nitrates might in fact be of benefit to some plants.
All in all, a GRB within a few kiloparsecs, with its energy directed towards Earth, will mostly damage life by raising the UV levels during the burst itself and for a few years thereafter. Models show that the destructive effects of this increase can cause up to 16 times the normal levels of DNA damage. It has proved difficult to assess a reliable evaluation of the consequences of this on the terrestrial ecosystem, because of the uncertainty in biological field and laboratory data.
Hypothetical effects on Earth in the past
There is a very good chance (but no certainty) that at least one lethal GRB took place during the past 5 billion years close enough to Earth as to significantly damage life. There is a 50% chance that such a lethal GRB took place within two kiloparsecs of Earth during the last 500 million years, causing one of the major mass extinction events.
The major Ordovician–Silurian extinction event 450 million years ago may have been caused by a GRB. Estimates suggest that approximately 20–60% of the total phytoplankton biomass in the Ordovician oceans would have perished in a GRB, because the oceans were mostly oligotrophic and clear. The late Ordovician species of trilobites that spent portions of their lives in the plankton layer near the ocean surface were much harder hit than deep-water dwellers, which tended to remain within quite restricted areas. This is in contrast to the usual pattern of extinction events, wherein species with more widely spread populations typically fare better. A possible explanation is that trilobites remaining in deep water would be more shielded from the increased UV radiation associated with a GRB. Also supportive of this hypothesis is the fact that during the late Ordovician, burrowing bivalve species were less likely to go extinct than bivalves that lived on the surface.
A case has been made that the 774–775 carbon-14 spike was the result of a short GRB, though a very strong solar flare is another possibility.
GRB candidates in the Milky Way
No gamma-ray bursts from within our own galaxy, the Milky Way, have been observed, and the question of whether one has ever occurred remains unresolved. In light of evolving understanding of gamma-ray bursts and their progenitors, the scientific literature records a growing number of local, past, and future GRB candidates. Long-duration GRBs are related to superluminous supernovae, or hypernovae, and most luminous blue variables (LBVs) and rapidly spinning Wolf–Rayet stars are thought to end their life cycles in core-collapse supernovae with an associated long-duration GRB. Knowledge of GRBs, however, comes from metal-poor galaxies of former epochs of the universe's evolution, and it is impossible to directly extrapolate to encompass more evolved galaxies and stellar environments with a higher metallicity, such as the Milky Way.
See also
Fast blue optical transient
Fast radio burst
Gamma-ray burst precursor
Gamma-ray Search for Extraterrestrial Intelligence
Horizons: Exploring the Universe
List of gamma-ray bursts
GRB 020813, GRB 031203, GRB 070714B
GRB 080916C, GRB 100621A, GRB 130427A
GRB 190114C, GRB 221009A
Stellar evolution
Terrestrial gamma-ray flashes
Notes
Citations
References
Further reading
External links
GRB mission sites
Swift Gamma-Ray Burst Mission:
Official NASA Swift Homepage
UK Swift Science Data Centre
Swift Mission Operations Center at Penn State
HETE-2: High Energy Transient Explorer (Wiki entry)
INTEGRAL: INTErnational Gamma-Ray Astrophysics Laboratory (Wiki entry)
BATSE: Burst and Transient Source Explorer
Fermi Gamma-ray Space Telescope (Wiki entry)
AGILE: Astro-rivelatore Gamma a Immagini Leggero (Wiki entry)
EXIST: Energetic X-ray Survey Telescope
Gamma Ray Burst Catalog at NASA
GRB follow-up programs
The Gamma-ray bursts Coordinates Network (GCN) (Wiki entry)
BOOTES: Burst Observer and Optical Transient Exploring System (Wiki entry)
GROND: Gamma-Ray Burst Optical Near-infrared Detector (Wiki entry)
KAIT: The Katzman Automatic Imaging Telescope (Wiki entry)
MASTER: Mobile Astronomical System of the Telescope-Robots
ROTSE: Robotic Optical Transient Search Experiment (Wiki entry)
Astronomical events
Stellar phenomena
Cosmic doomsday | Gamma-ray burst | ["Physics", "Astronomy"] | 7,418 | ["Physical phenomena", "Stellar phenomena", "Astronomical events", "Gamma-ray bursts"] |
48,824 | https://en.wikipedia.org/wiki/Gravitational%20lens | A gravitational lens is matter, such as a cluster of galaxies or a point particle, that bends light from a distant source as it travels toward an observer. The amount of gravitational lensing is described by Albert Einstein's general theory of relativity. If light is treated as corpuscles travelling at the speed of light, Newtonian physics also predicts the bending of light, but only half of that predicted by general relativity.
Orest Khvolson (1924) and Frantisek Link (1936) are generally credited with being the first to discuss the effect in print, but it is more commonly associated with Einstein, who made unpublished calculations on it in 1912 and published an article on the subject in 1936.
In 1937, Fritz Zwicky posited that galaxy clusters could act as gravitational lenses, a claim confirmed in 1979 by observation of the Twin QSO SBS 0957+561.
Description
Unlike an optical lens, a point-like gravitational lens produces a maximum deflection of light that passes closest to its center, and a minimum deflection of light that travels furthest from its center. Consequently, a gravitational lens has no single focal point, but a focal line. The term "lens" in the context of gravitational light deflection was first used by O. J. Lodge, who remarked that it is "not permissible to say that the solar gravitational field acts like a lens, for it has no focal length". If the (light) source, the massive lensing object, and the observer lie in a straight line, the original light source will appear as a ring around the massive lensing object (provided the lens has circular symmetry). If there is any misalignment, the observer will see an arc segment instead.
This phenomenon was first mentioned in 1924 by the St. Petersburg physicist Orest Khvolson, and quantified by Albert Einstein in 1936. It is usually referred to in the literature as an Einstein ring, since Khvolson did not concern himself with the flux or radius of the ring image. More commonly, where the lensing mass is complex (such as a galaxy group or cluster) and does not cause a spherical distortion of spacetime, the source will resemble partial arcs scattered around the lens. The observer may then see multiple distorted images of the same source, the number and shape of which depend upon the relative positions of the source, lens, and observer, and the shape of the gravitational well of the lensing object.
There are three classes of gravitational lensing:
Strong lensing Where there are easily visible distortions such as the formation of Einstein rings, arcs, and multiple images. Despite being considered "strong", the effect is in general relatively small, such that even a galaxy with a mass more than 100 billion times that of the Sun will produce multiple images separated by only a few arcseconds. Galaxy clusters can produce separations of several arcminutes. In both cases the galaxies and sources are quite distant, many hundreds of megaparsecs away from our Galaxy.
Weak lensing Where the distortions of background sources are much smaller and can only be detected by analyzing large numbers of sources in a statistical way to find coherent distortions of only a few percent. The lensing shows up statistically as a preferred stretching of the background objects perpendicular to the direction to the centre of the lens. By measuring the shapes and orientations of large numbers of distant galaxies, their orientations can be averaged to measure the shear of the lensing field in any region. This, in turn, can be used to reconstruct the mass distribution in the area: in particular, the background distribution of dark matter can be reconstructed. Since galaxies are intrinsically elliptical and the weak gravitational lensing signal is small, a very large number of galaxies must be used in these surveys. These weak lensing surveys must carefully avoid a number of important sources of systematic error: the intrinsic shape of galaxies, the tendency of a camera's point spread function to distort the shape of a galaxy and the tendency of atmospheric seeing to distort images must be understood and carefully accounted for. The results of these surveys are important for cosmological parameter estimation, to better understand and improve upon the Lambda-CDM model, and to provide a consistency check on other cosmological observations. They may also provide an important future constraint on dark energy.
Microlensing Where no distortion in shape can be seen but the amount of light received from a background object changes in time. The lensing object may be stars in the Milky Way in one typical case, with the background source being stars in a remote galaxy, or, in another case, an even more distant quasar. In extreme cases, a star in a distant galaxy can act as a microlens and magnify another star much farther away. The first example of this was the star MACS J1149 Lensed Star 1 (also known as Icarus), thanks to the boost in flux due to the microlensing effect.
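The microlensing light curve has a simple closed form for a point source and a point lens. The sketch below evaluates the standard Paczyński magnification as a function of the source-lens separation u, in units of the lens's Einstein radius; as u shrinks, the background star brightens sharply, which is the transient signal microlensing surveys look for.

```python
import math

def magnification(u):
    """Point-source, point-lens magnification (Paczynski 1986).
    u is the angular source-lens separation in Einstein radii."""
    return (u**2 + 2) / (u * math.sqrt(u**2 + 4))

for u in (1.0, 0.5, 0.1, 0.01):
    print(f"u = {u:5.2f} -> magnification A = {magnification(u):8.2f}")
```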
Gravitational lenses act equally on all kinds of electromagnetic radiation, not just visible light, and also on non-electromagnetic radiation, such as gravitational waves. Weak lensing effects are being studied for the cosmic microwave background as well as galaxy surveys. Strong lenses have been observed in radio and X-ray regimes as well. If a strong lens produces multiple images, there will be a relative time delay between the two paths: that is, in one image the lensed object will be observed before the other image.
History
Henry Cavendish in 1784 (in an unpublished manuscript) and Johann Georg von Soldner in 1801 (published in 1804) had pointed out that Newtonian gravity predicts that starlight will bend around a massive object as had already been supposed by Isaac Newton in 1704 in his Queries No.1 in his book Opticks. The same value as Soldner's was calculated by Einstein in 1911 based on the equivalence principle alone. However, Einstein noted in 1915, in the process of completing general relativity, that his (and thus Soldner's) 1911-result is only half of the correct value. Einstein became the first to calculate the correct value for light bending.
The first observation of light deflection was performed by noting the change in position of stars as they passed near the Sun on the celestial sphere. The observations were performed in 1919 by Arthur Eddington, Frank Watson Dyson, and their collaborators during the total solar eclipse on May 29. The solar eclipse allowed the stars near the Sun to be observed. Observations were made simultaneously in the cities of Sobral, Ceará, Brazil and in São Tomé and Príncipe on the west coast of Africa. The observations demonstrated that the light from stars passing close to the Sun was slightly bent, so that stars appeared slightly out of position.
The result was considered spectacular news and made the front page of most major newspapers. It made Einstein and his theory of general relativity world-famous. When asked by his assistant what his reaction would have been if general relativity had not been confirmed by Eddington and Dyson in 1919, Einstein said "Then I would feel sorry for the dear Lord. The theory is correct anyway." In 1912, Einstein had speculated that an observer could see multiple images of a single light source, if the light were deflected around a mass. This effect would make the mass act as a kind of gravitational lens. However, as he only considered the effect of deflection around a single star, he seemed to conclude that the phenomenon was unlikely to be observed for the foreseeable future since the necessary alignments between stars and observer would be highly improbable. Several other physicists speculated about gravitational lensing as well, but all reached the same conclusion that it would be nearly impossible to observe.
Although Einstein made unpublished calculations on the subject, the first discussion of the gravitational lens in print was by Khvolson, in a short article discussing the "halo effect" of gravitation when the source, lens, and observer are in near-perfect alignment, now referred to as the Einstein ring.
In 1936, after some urging by Rudi W. Mandl, Einstein reluctantly published the short article "Lens-Like Action of a Star By the Deviation of Light In the Gravitational Field" in the journal Science.
In 1937, Fritz Zwicky first considered the case where the newly discovered galaxies (which were called 'nebulae' at the time) could act as both source and lens, and that, because of the mass and sizes involved, the effect was much more likely to be observed.
In 1963 Yu. G. Klimov, S. Liebes, and Sjur Refsdal recognized independently that quasars are an ideal light source for the gravitational lens effect.
It was not until 1979 that the first gravitational lens would be discovered. It became known as the "Twin QSO" since it initially looked like two identical quasistellar objects. (It is officially named SBS 0957+561.) This gravitational lens was discovered by Dennis Walsh, Bob Carswell, and Ray Weymann using the Kitt Peak National Observatory 2.1 meter telescope.
In the 1980s, astronomers realized that the combination of CCD imagers and computers would allow the brightness of millions of stars to be measured each night. In a dense field, such as the galactic center or the Magellanic clouds, many microlensing events per year could potentially be found. This led to efforts such as Optical Gravitational Lensing Experiment, or OGLE, that have characterized hundreds of such events, including those of OGLE-2016-BLG-1190Lb and OGLE-2016-BLG-1195Lb.
Approximate Newtonian description
Newton wondered whether light, in the form of corpuscles, would be bent due to gravity. The Newtonian prediction for light deflection refers to the amount of deflection a corpuscle would feel under the effect of gravity; one should therefore read "Newtonian" in this context as referring to the following calculations and not to a belief that Newton held in the validity of these calculations.
For a gravitational point-mass lens of mass $M$, a corpuscle of mass $m$ feels a force
$$\vec{F} = -\frac{G M m}{r^{2}}\,\hat{r},$$
where $r$ is the lens-corpuscle separation. If we equate this force with Newton's second law, we can solve for the acceleration that the light undergoes:
$$\vec{a} = -\frac{G M}{r^{2}}\,\hat{r}.$$
The light interacts with the lens from an initial time $t_{0}$ to a final time $t$, and the velocity boost the corpuscle receives is
$$\Delta\vec{v} = \int_{t_{0}}^{t} \vec{a}\;dt'.$$
If one assumes that initially the light is far enough from the lens to neglect gravity, the perpendicular distance between the light's initial trajectory and the lens is $b$ (the impact parameter), and the parallel distance is $z$, such that $r^{2} = b^{2} + z^{2}$. We additionally assume a constant speed of light along the parallel direction, $z = c\,t'$, and that the light is only being deflected a small amount. After plugging these assumptions into the above equation, extending the integration over the whole trajectory, and further simplifying, one can solve for the velocity boost in the perpendicular direction,
$$\Delta v_{\perp} = \int_{-\infty}^{\infty} \frac{G M b}{\left(b^{2} + c^{2} t'^{2}\right)^{3/2}}\;dt' = \frac{2 G M}{b c}.$$
The angle of deflection between the corpuscle's initial and final trajectories is therefore (see, e.g., M. Meneghetti 2021)
$$\theta \approx \frac{\Delta v_{\perp}}{c} = \frac{2 G M}{b c^{2}}.$$
Although this result appears to be half the prediction from general relativity, classical physics predicts that the speed of light is observer-dependent (see, e.g., L. Susskind and A. Friedman 2018), an assumption superseded by a universal speed of light in special relativity.
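The closed form above is easy to verify numerically for a ray grazing the Sun. The sketch below integrates the perpendicular acceleration along the straight-line path and compares the result with 2GM/(bc²); both give the historical Newtonian value of about 0.87 arcseconds, half the general-relativistic 1.75 arcseconds.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8          # SI units
M_sun, R_sun = 1.989e30, 6.957e8   # kg, m (R_sun = grazing impact parameter b)

# Closed form derived above: theta = 2GM / (b c^2)
theta_closed = 2 * G * M_sun / (R_sun * c**2)

# Numerical check: integrate a_perp dt along the straight path, with dt = dz / c
z = np.linspace(-1e12, 1e12, 2_000_001)                 # path coordinate, m
a_perp = G * M_sun * R_sun / (R_sun**2 + z**2) ** 1.5   # perpendicular acceleration
dv_perp = ((a_perp[:-1] + a_perp[1:]) / 2 * np.diff(z)).sum() / c  # trapezoid rule
theta_numeric = dv_perp / c

arcsec = 180 / np.pi * 3600
print(f"closed form: {theta_closed * arcsec:.3f} arcsec")    # ~0.875
print(f"numeric    : {theta_numeric * arcsec:.3f} arcsec")   # ~0.875
```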
Explanation in terms of spacetime curvature
In general relativity, light follows the curvature of spacetime, hence when light passes around a massive object, it is bent. This means that the light from an object on the other side will be bent towards an observer's eye, just like an ordinary lens. In general relativity the path of light depends on the shape of space (i.e. the metric). The gravitational attraction can be viewed as the motion of undisturbed objects in a background curved geometry or alternatively as the response of objects to a force in a flat geometry. The angle of deflection is
$$\theta = \frac{4 G M}{r c^{2}}$$
toward the mass $M$ at a distance $r$ from the affected radiation, where $G$ is the universal constant of gravitation and $c$ is the speed of light in vacuum.
Since the Schwarzschild radius is defined as $r_{\mathrm{s}} = 2 G M / c^{2}$, and the escape velocity is defined as $v_{\mathrm{e}} = \sqrt{2 G M / r}$, this can also be expressed in simple form as
$$\theta = 2\,\frac{r_{\mathrm{s}}}{r} = 2\left(\frac{v_{\mathrm{e}}}{c}\right)^{2}.$$
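Plugging in solar values recovers the classic light-bending figure measured in the 1919 eclipse expedition. A quick check with standard constants (not values from the article):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m

r_s = 2 * G * M_sun / c**2    # Schwarzschild radius of the Sun
theta = 2 * r_s / R_sun       # GR deflection for a ray grazing the limb
print(f"Solar Schwarzschild radius: {r_s / 1e3:.2f} km")       # ~2.95 km
print(f"Deflection at the limb: {theta * 206265:.2f} arcsec")  # ~1.75 arcsec
```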
Search for gravitational lenses
Most of the gravitational lenses in the past have been discovered accidentally. A search for gravitational lenses in the northern hemisphere (Cosmic Lens All Sky Survey, CLASS), done in radio frequencies using the Very Large Array (VLA) in New Mexico, led to the discovery of 22 new lensing systems, a major milestone. This has opened a whole new avenue for research ranging from finding very distant objects to finding values for cosmological parameters so we can understand the universe better.
A similar search in the southern hemisphere would complement the northern hemisphere search and serve further objectives for study. If such a search is done using well-calibrated and well-parameterized instruments and data, results similar to the northern survey can be expected. The Australia Telescope 20 GHz (AT20G) Survey data, collected using the Australia Telescope Compact Array (ATCA), is one such collection of data. As the data were collected using the same instrument, maintaining a very stringent quality of data, good results can be expected from the search. The AT20G survey is a blind survey at 20 GHz frequency in the radio domain of the electromagnetic spectrum. Because of the high frequency used, the chance of finding gravitational lenses increases, as the relative number of compact core objects (e.g. quasars) is higher (Sadler et al. 2006). This is important, as lensing is easier to detect and identify in simple objects compared to objects with complexity in them. This search involves the use of interferometric methods to identify candidates and follow them up at higher resolution to identify them. Full details of the project are currently being prepared for publication.
Microlensing techniques have been used to search for planets outside our solar system. A statistical analysis of specific cases of observed microlensing over the time period of 2002 to 2007 found that most stars in the Milky Way galaxy hosted at least one orbiting planet within 0.5 to 10 AU.
In 2009, weak gravitational lensing was used to extend the mass-X-ray-luminosity relation to older and smaller structures than was previously possible to improve measurements of distant galaxies.
In 2013, the most distant gravitational lens galaxy then known, J1000+0221, was found using NASA's Hubble Space Telescope. While it remains the most distant quad-image lensing galaxy known, an even more distant two-image lensing galaxy was subsequently discovered by an international team of astronomers using a combination of Hubble Space Telescope and Keck telescope imaging and spectroscopy. The discovery and analysis of the IRC 0218 lens was published in the Astrophysical Journal Letters on June 23, 2014.
Research published on September 30, 2013 in the online edition of Physical Review Letters, led by McGill University in Montreal, Québec, Canada, reported the discovery of B-modes that are formed by the gravitational lensing effect, using the National Science Foundation's South Pole Telescope and with help from the Herschel Space Observatory. This discovery opens up the possibility of testing theories of how our universe originated.
Solar gravitational lens
Albert Einstein predicted in 1936 that rays of light from the same direction that skirt the edges of the Sun would converge to a focal point approximately 542 AU from the Sun. Thus, a probe positioned at this distance (or greater) from the Sun could use the Sun as a gravitational lens for magnifying distant objects on the opposite side of the Sun. A probe's location could shift around as needed to select different targets relative to the Sun.
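The 542 AU figure follows from the deflection formula above: a ray with impact parameter b is bent by θ = 4GM/(bc²) and crosses the optical axis at distance d ≈ b/θ, which is smallest for rays grazing the limb (b equal to the solar radius). A quick check with standard constants (the small spread versus 542 AU comes from rounding in the constants used):

```python
G, c = 6.674e-11, 2.998e8          # SI units
M_sun, R_sun = 1.989e30, 6.957e8   # kg, m
AU = 1.496e11                      # astronomical unit, m

# A limb-grazing ray is bent by theta = 4GM/(b c^2) and crosses the axis
# at d = b / theta = b^2 c^2 / (4 G M).
d_focus = R_sun**2 * c**2 / (4 * G * M_sun)
print(f"Minimum focal distance: {d_focus / AU:.0f} AU")  # ~550 AU
```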
This distance is far beyond the progress and equipment capabilities of space probes such as Voyager 1, and beyond the known planets and dwarf planets, though over thousands of years 90377 Sedna will move farther away on its highly elliptical orbit. The high gain for potentially detecting signals through this lens, such as microwaves at the 21-cm hydrogen line, led to the suggestion by Frank Drake in the early days of SETI that a probe could be sent to this distance. A multipurpose probe SETISAIL and later FOCAL was proposed to the ESA in 1993, but is expected to be a difficult task. If a probe does pass 542 AU, magnification capabilities of the lens will continue to act at farther distances, as the rays that come to a focus at larger distances pass further away from the distortions of the Sun's corona. A critique of the concept was given by Landis, who discussed issues including interference of the solar corona, the high magnification of the target, which will make the design of the mission focal plane difficult, and an analysis of the inherent spherical aberration of the lens.
In 2020, NASA physicist Slava Turyshev presented his idea of Direct Multipixel Imaging and Spectroscopy of an Exoplanet with a Solar Gravitational Lens Mission. The lens could reconstruct the exoplanet image with ~25 km-scale surface resolution, enough to see surface features and signs of habitability.
Measuring weak lensing
Kaiser, Squires and Broadhurst (1995), Luppino & Kaiser (1997) and Hoekstra et al. (1998) prescribed a method to invert the effects of the point spread function (PSF) smearing and shearing, recovering a shear estimator uncontaminated by the systematic distortion of the PSF. This method (KSB+) is the most widely used method in weak lensing shear measurements.
Galaxies have random rotations and inclinations. As a result, the shear effects in weak lensing need to be determined by statistically preferred orientations. The primary source of error in lensing measurement is due to the convolution of the PSF with the lensed image. The KSB method measures the ellipticity of a galaxy image. The shear is proportional to the ellipticity. The objects in lensed images are parameterized according to their weighted quadrupole moments. For a perfect ellipse, the weighted quadrupole moments are related to the weighted ellipticity. KSB calculate how a weighted ellipticity measure is related to the shear and use the same formalism to remove the effects of the PSF.
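A stripped-down illustration of the moment-based shape measurement described above: the sketch computes Gaussian-weighted quadrupole moments of an image and forms the two ellipticity components. This is only a sketch in the spirit of KSB, with hypothetical helper names and no PSF correction, noise handling, or iterative centroiding; it is not the KSB+ pipeline itself.

```python
import numpy as np

def weighted_ellipticity(image, sigma):
    """Gaussian-weighted quadrupole moments and ellipticity (e1, e2)
    of a galaxy image, in the spirit of Kaiser, Squires & Broadhurst (1995)."""
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    flux = image.sum()
    xc, yc = (image * x).sum() / flux, (image * y).sum() / flux  # flux-weighted centroid
    # Gaussian weight suppresses noisy outer pixels
    w = np.exp(-((x - xc) ** 2 + (y - yc) ** 2) / (2 * sigma**2))
    norm = (w * image).sum()
    q_xx = (w * image * (x - xc) ** 2).sum() / norm
    q_yy = (w * image * (y - yc) ** 2).sum() / norm
    q_xy = (w * image * (x - xc) * (y - yc)).sum() / norm
    denom = q_xx + q_yy
    return (q_xx - q_yy) / denom, 2 * q_xy / denom

# Example: an elliptical Gaussian "galaxy" elongated along x
y, x = np.mgrid[0:64, 0:64]
img = np.exp(-(((x - 32) / 6.0) ** 2 + ((y - 32) / 4.0) ** 2) / 2)
print(weighted_ellipticity(img, sigma=8.0))  # e1 > 0, e2 ~ 0
```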
KSB's primary advantages are its mathematical ease and relatively simple implementation. However, KSB is based on a key assumption that the PSF is circular with an anisotropic distortion. This is a reasonable assumption for cosmic shear surveys, but the next generation of surveys (e.g. LSST) may need much better accuracy than KSB can provide.
Gallery
See also
References
Notes
Bibliography
"Accidental Astrophysicists ". Science News, June 13, 2008.
"XFGLenses". A Computer Program to visualize Gravitational Lenses, Francisco Frutos-Alfaro
"G-LenS". A Point Mass Gravitational Lens Simulation, Mark Boughen.
Newbury, Pete, "Gravitational Lensing". Institute of Applied Mathematics, The University of British Columbia.
Cohen, N., "Gravity's Lens: Views of the New Cosmology", Wiley and Sons, 1988.
"Q0957+561 Gravitational Lens". Harvard.edu.
Bridges, Andrew, "Most distant known object in universe discovered". Associated Press. February 15, 2004. (Farthest galaxy found by gravitational lensing, using Abell 2218 and Hubble Space Telescope.)
Analyzing Corporations ... and the Cosmos An unusual career path in gravitational lensing.
"HST images of strong gravitational lenses". Harvard-Smithsonian Center for Astrophysics.
"A planetary microlensing event" and "A Jovian-mass Planet in Microlensing Event OGLE-2005-BLG-071", the first extra-solar planet detections using microlensing.
Gravitational lensing on arxiv.org
NRAO CLASS home page
AT20G survey
A diffraction limit on the gravitational lens effect (Bontz, R. J. and Haugan, M. P. "Astrophysics and Space Science" vol. 78, no. 1, p. 199-210. August 1981)
Further reading
Tools for the evaluation of the possibilities of using parallax measurements of gravitationally lensed sources (Stein Vidar Hagfors Haugan. June 2008)
External links
Video: Evalyn Gates – Einstein's Telescope: The Search for Dark Matter and Dark Energy in the Universe, presentation in Portland, Oregon, on April 19, 2009, from the author's recent book tour.
Audio: Fraser Cain and Dr. Pamela Gay – Astronomy Cast: Gravitational Lensing, May 2007
Historical papers
Concepts in astrophysics
Effects of gravity
Large-scale structure of the cosmos
Concepts in astronomy
Articles containing video clips | Gravitational lens | [
"Physics",
"Astronomy"
] | 4,288 | [
"Concepts in astronomy",
"Concepts in astrophysics",
"Astrophysics"
] |
48,837 | https://en.wikipedia.org/wiki/Sidereal%20time | Sidereal time is a system of timekeeping used especially by astronomers. Using sidereal time and the celestial coordinate system, it is easy to locate the positions of celestial objects in the night sky. Sidereal time is a "time scale that is based on Earth's rate of rotation measured relative to the fixed stars".
Viewed from the same location, a star seen at one position in the sky will be seen at the same position on another night at the same time of day (or night), if the day is defined as a sidereal day (also known as the sidereal rotation period). This is similar to how the time kept by a sundial (Solar time) can be used to find the location of the Sun. Just as the Sun and Moon appear to rise in the east and set in the west due to the rotation of Earth, so do the stars. Both solar time and sidereal time make use of the regularity of Earth's rotation about its polar axis: solar time is reckoned according to the position of the Sun in the sky while sidereal time is based approximately on the position of the fixed stars on the theoretical celestial sphere.
More exactly, sidereal time is the angle, measured along the celestial equator, from the observer's meridian to the great circle that passes through the March equinox (the northern hemisphere's vernal equinox) and both celestial poles, and is usually expressed in hours, minutes, and seconds. (In the context of sidereal time, "March equinox" or "equinox" or "first point of Aries" is currently a direction, from the center of the Earth along the line formed by the intersection of the Earth's equator and the Earth's orbit around the Sun, toward the constellation Pisces; during ancient times it was toward the constellation Aries.) Common time on a typical clock (using mean Solar time) measures a slightly longer cycle, affected not only by Earth's axial rotation but also by Earth's orbit around the Sun.
The March equinox itself precesses slowly westward relative to the fixed stars, completing one revolution in about 25,800 years, so the misnamed "sidereal" day ("sidereal" is derived from the Latin sidus meaning "star") is 0.0084 seconds shorter than the stellar day, Earth's actual period of rotation relative to the fixed stars.
The slightly longer stellar period is measured as the Earth rotation angle (ERA), formerly the stellar angle. An increase of 360° in the ERA is a full rotation of the Earth.
A sidereal day on Earth is approximately 86164.0905 seconds (23 h 56 min 4.0905 s or 23.9344696 h).
(Seconds are defined as per International System of Units and are not to be confused with ephemeris seconds.)
Each day, the sidereal time at any given place and time will be about four minutes shorter than local civil time (which is based on solar time), so that for a complete year the number of sidereal "days" is one more than the number of solar days.
Comparison to solar time
Solar time is measured by the apparent diurnal motion of the Sun. Local noon in apparent solar time is the moment when the Sun is exactly due south or north (depending on the observer's latitude and the season). A mean solar day (what we normally measure as a "day") is the average time between local solar noons ("average" since this varies slightly over a year).
Earth makes one rotation around its axis each sidereal day; during that time it moves a short distance (about 1°) along its orbit around the Sun. So after a sidereal day has passed, Earth still needs to rotate slightly more before the Sun reaches local noon according to solar time. A mean solar day is, therefore, nearly 4 minutes longer than a sidereal day.
The stars are so far away that Earth's movement along its orbit makes nearly no difference to their apparent direction (except for the nearest stars if measured with extreme accuracy; see parallax), and so they return to their highest point at the same time each sidereal day.
Another way to understand this difference is to notice that, relative to the stars, as viewed from Earth, the position of the Sun at the same time each day appears to move around Earth once per year. A year has about 365.24 solar days but 366.24 sidereal days. Therefore, there is one fewer solar day per year than there are sidereal days, similar to an observation of the coin rotation paradox. This makes a sidereal day approximately 365.24/366.24 ≈ 0.9973 times the length of the 24-hour solar day.
Effects of precession
Earth's rotation is not a simple rotation around an axis that remains always parallel to itself. Earth's rotational axis itself rotates about a second axis, orthogonal to the plane of Earth's orbit, taking about 25,800 years to perform a complete rotation. This phenomenon is termed the precession of the equinoxes. Because of this precession, the stars appear to move around Earth in a manner more complicated than a simple constant rotation.
For this reason, to simplify the description of Earth's orientation in astronomy and geodesy, it was conventional to chart the positions of the stars in the sky according to right ascension and declination, which are based on a frame of reference that follows Earth's precession, and to keep track of Earth's rotation, through sidereal time, relative to this frame as well. (The conventional reference frame, for purposes of star catalogues, was replaced in 1998 with the International Celestial Reference Frame, which is fixed with respect to extra-galactic radio sources. Because of the great distances, these sources have no appreciable proper motion.) In this frame of reference, Earth's rotation is close to constant, but the stars appear to rotate slowly with a period of about 25,800 years. It is also in this frame of reference that the tropical year (or solar year), the year related to Earth's seasons, represents one orbit of Earth around the Sun. The precise definition of a sidereal day is the time taken for one rotation of Earth in this precessing frame of reference.
Modern definitions
In the past, time was measured by observing stars with instruments such as photographic zenith tubes and Danjon astrolabes, and the passage of stars across defined lines would be timed with the observatory clock. Then, using the right ascension of the stars from a star catalog, the time when the star should have passed through the meridian of the observatory was computed, and a correction to the time kept by the observatory clock was computed. Sidereal time was defined such that the March equinox would transit the meridian of the observatory at 0 hours local sidereal time.
Beginning during the 1970s, the radio astronomy methods very-long-baseline interferometry (VLBI) and pulsar timing overtook optical instruments for the most precise astrometry. This resulted in the determination of UT1 (mean solar time at 0° longitude) using VLBI, a new measure of the Earth Rotation Angle, and new definitions of sidereal time. These changes became effective 1 January 2003.
Earth rotation angle
The Earth rotation angle (ERA) measures the rotation of the Earth from an origin on the celestial equator, the Celestial Intermediate Origin, also termed the Celestial Ephemeris Origin, that has no instantaneous motion along the equator; it was originally referred to as the non-rotating origin. This point is very close to the equinox of J2000.
ERA, measured in radians, is related to UT1 by a simple linear relation:

θ = 2π (0.7790572732640 + 1.00273781191135448 tU)

where tU is the Julian UT1 date (JD) minus 2451545.0.
The linear coefficient represents the Earth's rotation speed around its own axis.
ERA replaces Greenwich Apparent Sidereal Time (GAST). The origin on the celestial equator for GAST, termed the true equinox, does move, due to the movement of the equator and the ecliptic. The lack of motion of the origin of ERA is considered a significant advantage.
The ERA may be converted to other units; for example, the Astronomical Almanac for the Year 2017 tabulated it in degrees, minutes, and seconds.
As an example, the Astronomical Almanac for the Year 2017 gave the ERA at 0 h 1 January 2017 UT1 as 100° 37′ 12.4365″. Since Coordinated Universal Time (UTC) is within a second or two of UT1, this can be used as an anchor to give the ERA approximately for a given civil time and date.
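A short Python sketch of this relation, checked against the almanac value quoted above (the Julian date for 0 h 1 January 2017 UT1, JD 2457754.5, is assumed here):

import math

def earth_rotation_angle(jd_ut1):
    """Earth rotation angle in radians for a UT1 Julian date,
    using the linear relation given above."""
    t_u = jd_ut1 - 2451545.0
    turns = (0.7790572732640 + 1.00273781191135448 * t_u) % 1.0
    return 2.0 * math.pi * turns

deg = math.degrees(earth_rotation_angle(2457754.5))  # 0 h 1 January 2017 UT1
d = int(deg)
m = int((deg - d) * 60.0)
s = ((deg - d) * 60.0 - m) * 60.0
print(d, m, round(s, 4))  # 100 37 12.4365, matching the tabulated ERA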
Mean and apparent varieties
Although ERA is intended to replace sidereal time, there is a need to maintain definitions for sidereal time during the transition, and when working with older data and documents.
Similarly to mean solar time, every location on Earth has its own local sidereal time (LST), depending on the longitude of the point. Since it is not feasible to publish tables for every longitude, astronomical tables use Greenwich sidereal time (GST), which is sidereal time on the IERS Reference Meridian, less precisely termed the Greenwich, or Prime meridian. There are two varieties, mean sidereal time if the mean equator and equinox of date are used, and apparent sidereal time if the apparent equator and equinox of date are used. The former ignores the effect of astronomical nutation while the latter includes it. When the choice of location is combined with the choice of including astronomical nutation or not, the acronyms GMST, LMST, GAST, and LAST result.
The following relationships are true:

local sidereal time = Greenwich sidereal time + east longitude
apparent sidereal time = mean sidereal time + equation of the equinoxes
The new definitions of Greenwich mean and apparent sidereal time (since 2003, see above) are:

GMST = θ − EPREC
GAST = θ − E0

where θ is the Earth rotation angle, EPREC is the accumulated precession, and E0 is the equation of the origins, which represents accumulated precession and nutation. The calculation of precession and nutation was described in Chapter 6 of Urban & Seidelmann.
As an example, the Astronomical Almanac for the Year 2017 gave the ERA at 0 h 1 January 2017 UT1 as 100° 37′ 12.4365″ (6 h 42 m 28.8291 s). The GAST was 6 h 43 m 20.7109 s. For GMST the hour and minute were the same but the second was 21.1060.
Relationship between solar time and sidereal time intervals
If a certain interval I is measured in both mean solar time (UT1) and sidereal time, the numerical value will be greater in sidereal time than in UT1, because sidereal days are shorter than UT1 days. The ratio is:

r = 1.002737909350795 + 5.9006 × 10−11 t − 5.9 × 10−15 t2

where t represents the number of Julian centuries elapsed since noon 1 January 2000 Terrestrial Time.
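A minimal Python sketch of this conversion, using the ratio as given above:

def solar_to_sidereal(interval, t=0.0):
    """Convert a mean solar (UT1) interval to the equivalent sidereal
    interval; t is Julian centuries since noon 1 January 2000 TT."""
    r = 1.002737909350795 + 5.9006e-11 * t - 5.9e-15 * t ** 2
    return interval * r

print(solar_to_sidereal(24.0))  # one UT1 day is about 24.0657 sidereal hours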
Sidereal days compared to solar days on other planets
Six of the eight solar planets have prograde rotation—that is, they rotate more than once per year in the same direction as they orbit the Sun, so the Sun rises in the east. Venus and Uranus, however, have retrograde rotation. For prograde rotation, the formula relating the lengths of the sidereal and solar days is:

1/(solar day) = 1/(sidereal day) − 1/(orbital period)

or, equivalently:

solar day = (sidereal day) / (1 − sidereal day/orbital period)
When calculating the formula for a retrograde rotation, the operator of the denominator will be a plus sign (put another way, in the original formula the length of the sidereal day must be treated as negative). This is due to the solar day being shorter than the sidereal day for retrograde rotation, as the rotation of the planet would be against the direction of orbital motion.
If a planet rotates prograde, and the sidereal day exactly equals the orbital period, then the formula above gives an infinitely long solar day (division by zero). This is the case for a planet in synchronous rotation; in the case of zero eccentricity, one hemisphere experiences eternal day, the other eternal night, with a "twilight belt" separating them.
All the solar planets more distant from the Sun than Earth are similar to Earth in that, since they experience many rotations per revolution around the Sun, there is only a small difference between the length of the sidereal day and that of the solar day – the ratio of the former to the latter never being less than Earth's ratio of 0.997. But the situation is quite different for Mercury and Venus. Mercury's sidereal day is about two-thirds of its orbital period, so by the prograde formula its solar day lasts for two revolutions around the Sun – three times as long as its sidereal day. Venus rotates retrograde with a sidereal day lasting about 243.0 Earth days, or about 1.08 times its orbital period of 224.7 Earth days; hence by the retrograde formula its solar day is about 116.8 Earth days, and it has about 1.9 solar days per orbital period.
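A small Python sketch of the prograde/retrograde relation above. Venus's figures are those quoted in this section; Mercury's sidereal day of 58.65 Earth days is an assumed approximate value (about two-thirds of its 87.97-day orbital period, as stated above).

def solar_day(sidereal_day, orbital_period):
    """Solar day length from sidereal day and orbital period (same units).
    For retrograde rotation the sidereal day is entered as negative."""
    return 1.0 / (1.0 / sidereal_day - 1.0 / orbital_period)

print(solar_day(58.65, 87.97))   # Mercury: about 176 days, two orbital periods
print(solar_day(-243.0, 224.7))  # Venus: about -116.8; its magnitude, ~116.8 days, is the solar day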
By convention, rotation periods of planets are given in sidereal terms unless otherwise specified.
See also
Anti-sidereal time
Earth's rotation
International Celestial Reference Frame
Nocturnal (instrument)
Sidereal month
Sidereal year
Synodic day
Transit instrument
Citations
References
External links
Web-based Sidereal time calculator
Horology
Time in astronomy
Time scales
Units of time | Sidereal time | [
"Physics",
"Astronomy",
"Mathematics"
] | 2,759 | [
"Time in astronomy",
"Physical quantities",
"Time",
"Horology",
"Units of time",
"Quantity",
"Astronomical coordinate systems",
"Spacetime",
"Time scales",
"Units of measurement"
] |
48,838 | https://en.wikipedia.org/wiki/Hour%20angle | In astronomy and celestial navigation, the hour angle is the dihedral angle between the meridian plane (containing Earth's axis and the zenith) and the hour circle (containing Earth's axis and a given point of interest).
It may be given in degrees, time, or rotations depending on the application.
The angle may be expressed as negative east of the meridian plane and positive west of the meridian plane, or as positive westward from 0° to 360°. The angle may be measured in degrees or in time, with 24h = 360° exactly.
In celestial navigation, the convention is to measure in degrees westward from the prime meridian (Greenwich hour angle, GHA), from the local meridian (local hour angle, LHA) or from the first point of Aries (sidereal hour angle, SHA).
The hour angle is paired with the declination to fully specify the location of a point on the celestial sphere in the equatorial coordinate system.
Relation with right ascension
The local hour angle (LHA) of an object in the observer's sky is

LHAobject = LST − αobject

or

LHAobject = GST − αobject + λobserver

where LHAobject is the local hour angle of the object, LST is the local sidereal time, αobject is the object's right ascension, GST is Greenwich sidereal time and λobserver is the observer's longitude (positive east from the prime meridian). These angles can be measured in time (24 hours to a circle) or in degrees (360 degrees to a circle)—one or the other, not both.
Negative hour angles (−180° < LHAobject < 0°) indicate the object is approaching the meridian, positive hour angles (0° < LHAobject < 180°) indicate the object is moving away from the meridian; an hour angle of zero means the object is on the meridian.
Right ascension is frequently given in sexagesimal hours-minutes-seconds format (HH:MM:SS) in astronomy, though it may be given in decimal hours, sexagesimal degrees (DDD:MM:SS), or decimal degrees.
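A minimal Python sketch of the relations above, using the convention that negative hour angles mark an object approaching the meridian (the example values are assumed for illustration):

def local_hour_angle(lst_hours, ra_hours):
    """Local hour angle in hours, normalised to (-12, +12]."""
    lha = (lst_hours - ra_hours) % 24.0
    return lha - 24.0 if lha > 12.0 else lha

print(local_hour_angle(lst_hours=5.5, ra_hours=6.75))  # -1.25 h: object east of the meridian, approaching it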
Solar hour angle
Observing the Sun from Earth, the solar hour angle is an expression of time, expressed in angular measurement, usually degrees, from solar noon. At solar noon the hour angle is zero degrees, with the time before solar noon expressed as negative degrees, and the local time after solar noon expressed as positive degrees. For example, at 10:30 AM local apparent time the hour angle is −22.5° (15° per hour times 1.5 hours before noon).
The cosine of the hour angle (cos(h)) is used to calculate the solar zenith angle. At solar noon, h = 0°, so cos(h) = 1; before and after solar noon, the cos(±h) term gives the same value for morning (negative hour angle) as for afternoon (positive hour angle), so that the Sun is at the same altitude in the sky at 11:00 AM and 1:00 PM solar time.
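The following Python sketch illustrates both points: the hour angle at 15° per hour from solar noon, and the standard spherical-astronomy relation cos(zenith) = sin(latitude) sin(declination) + cos(latitude) cos(declination) cos(h), which is symmetric in ±h. The latitude and declination values are assumed for illustration.

import math

def solar_zenith_deg(lat_deg, decl_deg, hours_from_noon):
    """Solar zenith angle from latitude, solar declination and local
    apparent time, with the hour angle at 15 degrees per hour."""
    h = math.radians(15.0 * hours_from_noon)
    lat = math.radians(lat_deg)
    decl = math.radians(decl_deg)
    cos_z = math.sin(lat) * math.sin(decl) + math.cos(lat) * math.cos(decl) * math.cos(h)
    return math.degrees(math.acos(cos_z))

# 10:30 AM and 1:30 PM apparent time (h = -22.5 and +22.5 degrees) give the same zenith angle:
print(solar_zenith_deg(45.0, 20.0, -1.5), solar_zenith_deg(45.0, 20.0, +1.5))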
Sidereal hour angle
The sidereal hour angle (SHA) of a body on the celestial sphere is its angular distance west of the March equinox generally measured in degrees. The SHA of a star varies by less than a minute of arc per year, due to precession, while the SHA of a planet varies significantly from night to night. SHA is often used in celestial navigation and navigational astronomy, and values are published in astronomical almanacs.
See also
Clock position
List of orbits
Notes and references
Astronomical coordinate systems
Angle | Hour angle | [
"Physics",
"Astronomy",
"Mathematics"
] | 715 | [
"Geometric measurement",
"Scalar physical quantities",
"Physical quantities",
"Astronomical coordinate systems",
"Coordinate systems",
"Wikipedia categories named after physical quantities",
"Angle"
] |
48,896 | https://en.wikipedia.org/wiki/Lyrebird | A lyrebird is either of two species of ground-dwelling Australian birds that compose the genus Menura, and the family Menuridae. They are most notable for their impressive ability to mimic natural and artificial sounds from their environment, and the striking beauty of the male bird's huge tail when it is fanned out in courtship display. Lyrebirds have unique plumes of neutral-coloured tailfeathers and are among Australia's best-known native birds.
Taxonomy
The classification of lyrebirds was the subject of much debate after the first specimens reached European scientists after 1798. Based on specimens sent from New South Wales to England, Major-General Thomas Davies illustrated and described this species as the superb lyrebird, which he called Menura superba, in an 1800 presentation to the Linnean Society of London, but this work was not published until 1802; in the intervening time period, however, the species was described and named Menura novaehollandiae by John Latham in 1801, and this is the accepted name by virtue of nomenclatural priority.
The genus name Menura refers to the pattern of repeated transparent crescents (or "lunules") on the superb lyrebird's outer tail-feathers, from the Ancient Greek words mēnē "moon" and ourá "tail".
Lyrebirds are so named because their outer tail feathers are broad and curved in an S shape, and together they resemble the shape of a lyre.
Systematics
Lyrebirds were thought to be Galliformes like the broadly similar looking partridge, junglefowl, and pheasants familiar to Europeans, reflected in the early names given to the superb lyrebird, including native pheasant. They were also called peacock-wrens and Australian birds-of-paradise. The idea that they were related to the pheasants was abandoned when the first chicks, which are altricial, were described. They were not classed with the passerines until a paper was published in 1840, twelve years after they were assigned a discrete family, Menuridae. Within that family they compose a single genus, Menura.
It is generally accepted that the lyrebird family is most closely related to the scrub-birds (Atrichornithidae) and some authorities combine both in a single family, but evidence that they are also related to the bowerbirds remains controversial.
Lyrebirds are ancient Australian animals: the Australian Museum has fossils of lyrebirds dating back to about 15 million years ago. The prehistoric Menura tyawanoides has been described from Early Miocene fossils found at the famous Riversleigh site.
Species
Two species of lyrebird are extant:
Description
The lyrebirds are large passerine birds, amongst the largest in the order. They are ground living birds with strong legs and feet and short rounded wings. They are poor fliers and rarely fly except for periods of downhill gliding. The superb lyrebird is the larger of the two species. Lyrebirds measure 31 to 39 inches in length, including their tail. Males tend to be slightly larger than females. Females weigh around 2 pounds, and males weigh around 2.4 pounds.
Distribution and habitat
The superb lyrebird is found in areas of rainforest in Victoria, New South Wales, and south-east Queensland. It is also found in Tasmania where it was introduced in the 19th century. Many superb lyrebirds live in the Dandenong Ranges National Park and Kinglake National Park around Melbourne, the Royal National Park and Illawarra region south of Sydney, in many other parks along the east coast of Australia, and non protected bushland. Albert's lyrebird is found only in a small area of Southern Queensland rainforest.
Behaviour and ecology
Lyrebirds are shy and difficult to approach, particularly the Albert's lyrebird, with the result that little information about its behaviour has been documented. When lyrebirds detect potential danger, they pause and scan the surroundings, sound an alarm, and either flee the area on foot, or seek cover and freeze. Firefighters sheltering in mine shafts during bushfires have been joined by lyrebirds.
Diet and feeding
Lyrebirds feed on the ground, foraging as individuals. A range of invertebrate prey is taken, including insects such as cockroaches, beetles (both adults and larvae), earwigs, fly larvae, and the adults and larvae of moths. Other prey taken includes centipedes, spiders, and earthworms. Less commonly taken prey includes stick insects, bugs, amphipods, lizards, frogs and, occasionally, seeds. They find food by scratching with their feet through the leaf-litter.
Breeding
Lyrebirds are long-lived birds that can live as long as 30 years. They have long breeding cycles and start breeding later in life than other passerine birds. Female superb lyrebirds start breeding at the age of five or six, and males at the age of six to eight. Males defend territories from other males, and those territories may contain the breeding territories of up to eight females. Within the male territories, the males create or use display platforms; for the superb lyrebird, this is a mound of bare soil; for the Albert's lyrebird, it is a pile of twigs on the forest floor.
Male lyrebirds call mostly during winter, when they construct and maintain an open arena-mound in dense bush, on which they sing and dance in an elaborate courtship display performed for potential mates, of which the male lyrebird has several. The strength, volume, and location of the nest built by the female lyrebird is dependent on the rainfall and predation during the nest building period. It is important for the nest to be water resistant and hidden in secluded areas so predators cannot attack. Once the nest is made in the preferred location, the female lyrebird lays a single egg. The egg is incubated over 50 days solely by the female, and the female also fosters the chick alone.
Vocalizations and mimicry
A lyrebird's song is one of the more distinctive aspects of its behavioural biology. Lyrebirds sing throughout the year, but the peak of the breeding season, from June to August, is when they sing with the most intensity. During this peak males may sing for four hours of the day, almost half the hours of daylight. The song of the lyrebird is a mixture of elements of its own song and mimicry of other species. Lyrebirds render with great fidelity the individual songs of other birds and the chatter of flocks of birds, and also mimic other animals such as possums, koalas and dingoes. Lyrebirds have been recorded mimicking human sounds such as a mill whistle, a cross-cut saw, chainsaws, car engines and car alarms, fire alarms, rifle-shots, camera shutters, dogs barking, crying babies, music, mobile phone ring tones, and even the human voice. However, while the mimicry of human noises is widely reported, the extent to which it happens is exaggerated and the phenomenon is unusual. Parts of the lyrebird's own song can resemble human-made sound effects, which has given rise to the urban legend that they frequently imitate video game or film sounds.
The superb lyrebird's mimicked calls are learned from the local environment, including from other superb lyrebirds. An instructive example is the population of superb lyrebirds in Tasmania, which have retained the calls of species not native to Tasmania in their repertoire, with some local Tasmanian endemic bird songs added. The female lyrebirds of both species are also mimics capable of complex vocalisations. Superb lyrebird females are silent during courtship; however, they regularly produce sophisticated vocal displays during foraging and nest defense. A recording of a superb lyrebird mimicking sounds of an electronic shooting game, workmen and chainsaws was added to the National Film and Sound Archive's Sounds of Australia registry in 2013.
Both species of lyrebird produce elaborate lyrebird-specific vocalisations, including 'whistle songs'. Males also sing songs specifically associated with their song and dance displays.
One researcher, Sydney Curtis, has recorded flute-like lyrebird calls in the vicinity of the New England National Park. Similarly, in 1969, a park ranger, Neville Fenton, recorded a lyrebird song which resembled flute sounds in the New England National Park, near Dorrigo in northern coastal New South Wales. After much detective work by Fenton, it was discovered that in the 1930s, a flute player living on a farm adjoining the park used to play tunes near his pet lyrebird. The lyrebird adopted the tunes into his repertoire, and retained them after release into the park. Neville Fenton forwarded a tape of his recording to Norman Robinson. Because a lyrebird is able to carry two tunes at the same time, Robinson filtered out one of the tunes and put it on the phonograph for the purposes of analysis. One witness suggested that the song represents a modified version of two popular tunes in the 1930s: "The Keel Row" and "Mosquito's Dance". Musicologist David Rothenberg has endorsed this information. However, a "flute lyrebird" research group (including Curtis and Fenton) formed to investigate the veracity of this story found no evidence of "Mosquito Dance" and only remnants of "Keel Row" in contemporary and historical lyrebird recordings from this area. Neither were they able to prove that a lyrebird chick had been a pet, although they acknowledged compelling evidence on both sides of the argument.
Status and conservation
Until the 2019–2020 Australian bushfire season, superb lyrebirds were not considered threatened in the short to medium term. Concern has since grown as early analyses have shown the extent of destruction of the lyrebird's preferred wet-forest habitats, which in less intense previous bushfire seasons have been spared, in large part due to their moisture content. Albert's lyrebird has a very restricted habitat and had been listed as vulnerable by the IUCN, but because the species and its habitat were carefully managed, the species was re-assessed to near threatened in 2009. The superb lyrebird had already been seriously threatened by habitat destruction in the past. Its population had since recovered, but the 2019–2020 bushfires damaged much of its habitat, which may lead to a reclassification of its status from "common" to "threatened". Beyond this new threat are the long-term vulnerabilities to predation by cats and foxes, as well as human population pressure on its habitat.
In culture
Painting by John Gould
The lyrebird is so called because the male bird has a spectacular tail, consisting of 16 highly modified feathers (two long slender lyrates at the centre of the plume, two broader medians on the outside edges and twelve filamentaries arrayed between them), which was originally thought to resemble a lyre. This happened when a superb lyrebird specimen (which had been taken from Australia to England during the early 19th century) was prepared for display at the British Museum by a taxidermist who had never seen a live lyrebird. The taxidermist mistakenly thought that the tail would resemble a lyre, and that the tail would be held in a similar way to that of a peacock during courtship display, and so he arranged the feathers in this way. Later, John Gould (who had also never seen a live lyrebird), painted the lyrebird from the British Museum specimen.
The male lyrebird's tail is not held as in John Gould's painting. Instead, the male lyrebird's tail is fanned over the lyrebird during courtship display, with the tail completely covering his head and back—as can be seen in the image in the "breeding" section of this page, and also the image of the 10-cent coin, where the superb lyrebird's tail (in courtship display) is portrayed accurately.
Lyrebird emblems and logos
The lyrebird has been featured as a symbol and emblem many times, especially in New South Wales and Victoria (where the superb lyrebird has its natural habitat), and in Queensland (where Albert's lyrebird has its natural habitat).
A male superb lyrebird is featured on the reverse of the Australian 10-cent coin.
A superb lyrebird featured on the Australian one shilling postage stamp first issued in 1932.
A stylised superb lyrebird appears in the transparent window of the Australian 100 dollar note.
A silhouette of a male superb lyrebird is the logo of the Australian Film Commission.
An illustration of a male superb lyrebird, in courtship display, is the emblem of the New South Wales National Parks and Wildlife Service.
The pattern on the curtains of the Victorian State Theatre is the image of a male superb lyrebird, in courtship display, as viewed from the front.
A stylised illustration of a male Albert's lyrebird was the logo of the Queensland Conservatorium of Music, before the Conservatorium became part of Griffith University. In the logo, the top part of the lyrebird's tail became a music stave.
Australian band You Am I's 2008 album Dilettantes and its first single, "Erasmus", feature a drawing of a lyrebird by artist Ken Taylor.
A stylised illustration of part of a male superb lyrebird's tail is the logo for the Lyrebird Arts Council of Victoria.
The lyrebird is also featured atop the crest of Panhellenic Sorority Alpha Chi Omega, whose symbol is the lyre.
There are many other companies with the name of Lyrebird, and these also have lyrebird logos.
"Land of the Lyrebird" is an alternative name for the Strzelecki Ranges in the Gippsland region of Victoria.
A silhouetted male superb lyrebird in courtship display features in the masthead of The Betoota Advocate.
See also
The Display
References
Further references
Attenborough, D. (1998). The Life of Birds. p. 212.
External links
Lyrebirds—At the New South Wales Department of Environment and Heritage site
The Albert's lyrebird project at the Queensland Department of Environment and Resource Management site
Lyrebird videos at the Internet Bird Collection
National Film and Sound Archive of Australia (Sounds of Australia) recording of a superb lyrebird imitating workers: https://www.nfsa.gov.au/collection/curated/superb-lyrebird-imitating-workers
Endemic birds of Australia
Mimicry
Taxa named by John Latham (ornithologist) | Lyrebird | [
"Biology"
] | 3,023 | [
"Mimicry",
"Biological defense mechanisms"
] |
48,900 | https://en.wikipedia.org/wiki/Atomic%20radius | The atomic radius of a chemical element is a measure of the size of its atom, usually the mean or typical distance from the center of the nucleus to the outermost isolated electron. Since the boundary is not a well-defined physical entity, there are various non-equivalent definitions of atomic radius. Four widely used definitions of atomic radius are: Van der Waals radius, ionic radius, metallic radius and covalent radius. Typically, because of the difficulty to isolate atoms in order to measure their radii separately, atomic radius is measured in a chemically bonded state; however theoretical calculations are simpler when considering atoms in isolation. The dependencies on environment, probe, and state lead to a multiplicity of definitions.
Depending on the definition, the term may apply to atoms in condensed matter, covalently bonding in molecules, or in ionized and excited states; and its value may be obtained through experimental measurements, or computed from theoretical models. The value of the radius may depend on the atom's state and context.
Electrons do not have definite orbits nor sharply defined ranges. Rather, their positions must be described as probability distributions that taper off gradually as one moves away from the nucleus, without a sharp cutoff; these are referred to as atomic orbitals or electron clouds. Moreover, in condensed matter and molecules, the electron clouds of the atoms usually overlap to some extent, and some of the electrons may roam over a large region encompassing two or more atoms.
Under most definitions the radii of isolated neutral atoms range between 30 and 300 pm (trillionths of a meter), or between 0.3 and 3 ångströms. Therefore, the radius of an atom is more than 10,000 times the radius of its nucleus (1–10 fm), and less than 1/1000 of the wavelength of visible light (400–700 nm).
For many purposes, atoms can be modeled as spheres. This is only a crude approximation, but it can provide quantitative explanations and predictions for many phenomena, such as the density of liquids and solids, the diffusion of fluids through molecular sieves, the arrangement of atoms and ions in crystals, and the size and shape of molecules.
History
The concept of atomic radius was preceded in the 19th century by the concept of atomic volume, a relative measure of how much space an atom would, on average, occupy in a given solid or liquid material. By the end of the century this term was also used in an absolute sense, as the molar volume divided by the Avogadro constant. Such a volume is different for different crystalline forms even of the same compound, but physicists used it for rough, order-of-magnitude estimates of the atomic size, getting 10−8–10−7 cm for copper.
The earliest estimates of atomic size were made by opticians in the 1830s, particularly Cauchy, who developed models of light dispersion assuming a lattice of connected "molecules". In 1857 Clausius developed a gas-kinetic model which included the equation for the mean free path. In the 1870s it was used to estimate gas molecule sizes, along with the aforementioned comparison with the visible light wavelength and an estimate from the thickness of soap bubble film at which its contractile force rapidly diminishes. By 1900, various estimates of the mercury atom diameter averaged around 275±20 pm (modern estimates give 300±10 pm, see below).
In 1920, shortly after it had become possible to determine the sizes of atoms using X-ray crystallography, it was suggested that all atoms of the same element have the same radii. However, in 1923, when more crystal data had become available, it was found that the approximation of an atom as a sphere does not necessarily hold when comparing the same atom in different crystal structures.
Definitions
Widely used definitions of atomic radius include:
Van der Waals radius: In the simplest definition, half the minimum distance between the nuclei of two atoms of the element that are not otherwise bound by covalent or metallic interactions. The Van der Waals radius may be defined even for elements (such as metals) in which Van der Waals forces are dominated by other interactions. Because Van der Waals interactions arise through quantum fluctuations of the atomic polarisation, the polarisability (which can usually be measured or calculated more easily) may be used to define the Van der Waals radius indirectly.
Ionic radius: the nominal radius of the ions of an element in a specific ionization state, deduced from the spacing of atomic nuclei in crystalline salts that include that ion. In principle, the spacing between two adjacent oppositely charged ions (the length of the ionic bond between them) should equal the sum of their ionic radii.
Covalent radius: the nominal radius of the atoms of an element when covalently bound to other atoms, as deduced from the separation between the atomic nuclei in molecules. In principle, the distance between two atoms that are bound to each other in a molecule (the length of that covalent bond) should equal the sum of their covalent radii.
Metallic radius: the nominal radius of atoms of an element when joined to other atoms by metallic bonds.
Bohr radius: the radius of the lowest-energy electron orbit predicted by Bohr model of the atom (1913). It is only applicable to atoms and ions with a single electron, such as hydrogen, singly ionized helium, and positronium. Although the model itself is now obsolete, the Bohr radius for the hydrogen atom is still regarded as an important physical constant, because it is equivalent to the quantum-mechanical most probable distance of the electron from the nucleus.
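As a numerical aside, the Bohr radius can be evaluated directly from fundamental constants (the CODATA values below are assumed for illustration), which confirms the tens-of-picometres scale of atomic radii discussed earlier:

import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
m_e = 9.1093837015e-31   # electron mass, kg
q_e = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

# Bohr radius: a0 = 4*pi*eps0*hbar^2 / (m_e * e^2)
a0 = 4.0 * math.pi * eps0 * hbar ** 2 / (m_e * q_e ** 2)
print(a0)  # ~5.29e-11 m, i.e. about 52.9 pm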
Empirically measured atomic radius
The following table shows empirically measured covalent radii for the elements, as published by J. C. Slater in 1964. The values are in picometers (pm or 1×10−12 m), with an accuracy of about 5 pm. The shade of the box ranges from red to yellow as the radius increases; gray indicates lack of data.
Explanation of the general trends
Electrons in atoms fill electron shells from the lowest available energy level. As a consequence of the Aufbau principle, each new period begins with the first two elements filling the next unoccupied s-orbital. Because an atom's s-orbital electrons are typically farthest from the nucleus, this results in a significant increase in atomic radius with the first elements of each period.
The atomic radius of each element generally decreases across each period due to an increasing number of protons, since an increase in the number of protons increases the attractive force acting on the atom's electrons. The greater attraction draws the electrons closer to the protons, decreasing the size of the atom. Down each group, the atomic radius of each element typically increases because there are more occupied electron energy levels and therefore a greater distance between protons and electrons.
The increasing nuclear charge is partly counterbalanced by the increasing number of electrons—a phenomenon that is known as shielding—which explains why the size of atoms usually increases down each column despite an increase in attractive force from the nucleus. Electron shielding causes the attraction of an atom's nucleus on its electrons to decrease, so electrons occupying higher energy states farther from the nucleus experience reduced attractive force, increasing the size of the atom. However, elements in the 5d-block (lutetium to mercury) are much smaller than this trend predicts due to the weak shielding of the 4f-subshell. This phenomenon is known as the lanthanide contraction. A similar phenomenon exists for actinides; however, the general instability of transuranic elements makes measurements for the remainder of the 5f-block difficult and for transactinides nearly impossible. Finally, for sufficiently heavy elements, the atomic radius may be decreased by relativistic effects. This is a consequence of electrons near the strongly charged nucleus traveling at a sufficient fraction of the speed of light to gain a nontrivial amount of mass.
The following table summarizes the main phenomena that influence the atomic radius of an element:
Lanthanide contraction
The electrons in the 4f-subshell, which is progressively filled from lanthanum (Z = 57) to ytterbium (Z = 70), are not particularly effective at shielding the increasing nuclear charge from the sub-shells further out. The elements immediately following the lanthanides have atomic radii which are smaller than would be expected and which are almost identical to the atomic radii of the elements immediately above them. Hence lutetium is in fact slightly smaller than yttrium, hafnium has virtually the same atomic radius (and chemistry) as zirconium, and tantalum has an atomic radius similar to niobium, and so forth. The effect of the lanthanide contraction is noticeable up to platinum (Z = 78), after which it is masked by a relativistic effect known as the inert-pair effect.
Due to the lanthanide contraction, the following five observations can be drawn:
The size of Ln3+ ions regularly decreases with atomic number. According to Fajans' rules, decrease in size of Ln3+ ions increases the covalent character and decreases the basic character between Ln3+ and OH− ions in Ln(OH)3, to the point that Yb(OH)3 and Lu(OH)3 can dissolve with difficulty in hot concentrated NaOH. Hence the order of size of Ln3+ is given: La3+ > Ce3+ > ..., ... > Lu3+.
There is a regular decrease in their ionic radii.
There is a regular decrease in their tendency to act as a reducing agent, with an increase in atomic number.
The second and third rows of d-block transition elements are quite close in properties.
Consequently, these elements occur together in natural minerals and are difficult to separate.
d-block contraction
The d-block contraction is less pronounced than the lanthanide contraction but arises from a similar cause. In this case, it is the poor shielding capacity of the 3d-electrons which affects the atomic radii and chemistries of the elements immediately following the first row of the transition metals, from gallium (Z = 31) to bromine (Z = 35).
Calculated atomic radius
The following table shows atomic radii computed from theoretical models, as published by Enrico Clementi and others in 1967. The values are in picometres (pm).
See also
Atomic radii of the elements (data page)
Chemical bond
Covalent radius
Bond length
Steric hindrance
Kinetic diameter
References
Atomic radius
Properties of chemical elements | Atomic radius | [
"Physics",
"Chemistry"
] | 2,173 | [
"Atomic radius",
"Properties of chemical elements",
"Atoms",
"Matter"
] |
48,902 | https://en.wikipedia.org/wiki/Kilogram%20per%20cubic%20metre | The kilogram per cubic metre (symbol: kg·m−3, or kg/m3) is the unit of density in the International System of Units (SI). It is defined by dividing the SI unit of mass, the kilogram, by the SI unit of volume, the cubic metre.
Conversions
1 kg/m3 = 1 g/L (exactly)
1 kg/m3 = 0.001 g/cm3 (exactly)
1 kg/m3 ≈ 0.06243 lb/ft3 (approximately)
1 kg/m3 ≈ 0.1335 oz/US gal (approximately)
1 kg/m3 ≈ 0.1604 oz/imp gal (approximately)
1 g/cm3 = 1000 kg/m3 (exactly)
1 lb/ft3 ≈ 16.02 kg/m3 (approximately)
1 oz/(US gal) ≈ 7.489 kg/m3 (approximately)
1 oz/(imp gal) ≈ 6.236 kg/m3 (approximately)
Relation to other measures
The density of water is about 1000 kg/m3 or 1 g/cm3, because the size of the gram was originally based on the mass of a cubic centimetre of water.
In chemistry, g/cm3 is more commonly used.
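A small Python sketch of the conversions tabulated above (the litre and cubic-centimetre factors are exact; the pound, ounce and gallon factors are the rounded values from the table):

TO_UNIT_PER_KG_M3 = {
    "g/L": 1.0,            # exact
    "g/cm3": 0.001,        # exact
    "lb/ft3": 0.06243,     # approximate
    "oz/US gal": 0.1335,   # approximate
    "oz/imp gal": 0.1604,  # approximate
}

def from_kg_per_m3(value, unit):
    """Convert a density given in kg/m3 to the requested unit."""
    return value * TO_UNIT_PER_KG_M3[unit]

print(from_kg_per_m3(1000.0, "g/cm3"))  # water: about 1 g/cm3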
See also
Gram per cubic centimetre
References
External links
Official BIPM definition of the kilogram
Official BIPM definition of the metre
SI derived units
Units of chemical measurement
Units of density | Kilogram per cubic metre | [
"Physics",
"Chemistry",
"Mathematics"
] | 292 | [
"Physical quantities",
"Units of density",
"Quantity",
"Chemical quantities",
"Density",
"Units of chemical measurement",
"Units of measurement"
] |
48,903 | https://en.wikipedia.org/wiki/Nucleosynthesis | Nucleosynthesis is the process that creates new atomic nuclei from pre-existing nucleons (protons and neutrons) and nuclei. According to current theories, the first nuclei were formed a few minutes after the Big Bang, through nuclear reactions in a process called Big Bang nucleosynthesis. After about 20 minutes, the universe had expanded and cooled to a point at which these high-energy collisions among nucleons ended, so only the fastest and simplest reactions occurred, leaving our universe containing mostly hydrogen and helium. The rest consists of traces of other elements such as lithium and the hydrogen isotope deuterium. Nucleosynthesis in stars and their explosions later produced the variety of elements and isotopes that we have today, in a process called cosmic chemical evolution. The amount of total mass in elements heavier than hydrogen and helium (called 'metals' by astrophysicists) remains small (a few percent), so that the universe still has approximately the same composition.
Stars fuse light elements to heavier ones in their cores, giving off energy in the process known as stellar nucleosynthesis. Nuclear fusion reactions create many of the lighter elements, up to and including iron and nickel in the most massive stars. Products of stellar nucleosynthesis remain trapped in stellar cores and remnants except if ejected through stellar winds and explosions. The neutron capture reactions of the r-process and s-process create heavier elements, from iron upwards.
Supernova nucleosynthesis within exploding stars is largely responsible for the elements between oxygen and rubidium: from the ejection of elements produced during stellar nucleosynthesis; through explosive nucleosynthesis during the supernova explosion; and from the r-process (absorption of multiple neutrons) during the explosion.
Neutron star mergers are a recently discovered major source of elements produced in the r-process. When two neutron stars collide, a significant amount of neutron-rich matter may be ejected which then quickly forms heavy elements.
Cosmic ray spallation is a process wherein cosmic rays impact nuclei and fragment them. It is a significant source of the lighter nuclei, particularly 3He, 9Be and 10,11B, that are not created by stellar nucleosynthesis. Cosmic ray spallation can occur in the interstellar medium, on asteroids and meteoroids, or on Earth in the atmosphere or in the ground.
This contributes to the presence on Earth of cosmogenic nuclides.
On Earth new nuclei are also produced by radiogenesis, the decay of long-lived, primordial radionuclides such as uranium, thorium, and potassium-40.
History
Timeline
It is thought that the primordial nucleons themselves were formed from the quark–gluon plasma around 13.8 billion years ago during the Big Bang as it cooled below two trillion degrees. A few minutes afterwards, starting with only protons and neutrons, nuclei up to lithium and beryllium (both with mass number 7) were formed, but hardly any other elements. Some boron may have been formed at this time, but the process stopped before significant carbon could be formed, as this element requires a far higher product of helium density and time than were present in the short nucleosynthesis period of the Big Bang. That fusion process essentially shut down at about 20 minutes, due to drops in temperature and density as the universe continued to expand. This first process, Big Bang nucleosynthesis, was the first type of nucleogenesis to occur in the universe, creating the so-called primordial elements.
A star formed in the early universe produces heavier elements by combining its lighter nuclei (hydrogen, helium, lithium, beryllium, and boron) which were found in the initial composition of the interstellar medium and hence the star. Interstellar gas therefore contains declining abundances of these light elements, which are present only by virtue of their nucleosynthesis during the Big Bang, and also cosmic ray spallation. These lighter elements in the present universe are therefore thought to have been produced through thousands of millions of years of cosmic ray (mostly high-energy proton) mediated breakup of heavier elements in interstellar gas and dust. The fragments of these cosmic-ray collisions include helium-3 and the stable isotopes of the light elements lithium, beryllium, and boron. Carbon was not made in the Big Bang, but was produced later in larger stars via the triple-alpha process.
The subsequent nucleosynthesis of heavier elements (Z ≥ 6, carbon and heavier elements) requires the extreme temperatures and pressures found within stars and supernovae. These processes began as hydrogen and helium from the Big Bang collapsed into the first stars after about 500 million years. Star formation has been occurring continuously in galaxies since that time. The primordial nuclides were created by Big Bang nucleosynthesis, stellar nucleosynthesis, supernova nucleosynthesis, and by nucleosynthesis in exotic events such as neutron star collisions. Other nuclides, such as 40Ar, formed later through radioactive decay. On Earth, mixing and evaporation has altered the primordial composition to what is called the natural terrestrial composition. The heavier elements produced after the Big Bang range in atomic numbers from Z = 6 (carbon) to Z = 94 (plutonium). Synthesis of these elements occurred through nuclear reactions involving the strong and weak interactions among nuclei, called nuclear fusion (including both rapid and slow multiple neutron capture), and also through nuclear fission and radioactive decays such as beta decay. The stability of atomic nuclei of different sizes and composition (i.e. numbers of neutrons and protons) plays an important role in the possible reactions among nuclei. Cosmic nucleosynthesis, therefore, is studied among researchers of astrophysics and nuclear physics ("nuclear astrophysics").
History of nucleosynthesis theory
The first ideas on nucleosynthesis were simply that the chemical elements were created at the beginning of the universe, but no rational physical scenario for this could be identified. Gradually it became clear that hydrogen and helium are much more abundant than any of the other elements. All the rest constitute less than 2% of the mass of the Solar System, and of other star systems as well. At the same time it was clear that oxygen and carbon were the next two most common elements, and also that there was a general trend toward high abundance of the light elements, especially those with isotopes composed of whole numbers of helium-4 nuclei (alpha nuclides).
Arthur Stanley Eddington first suggested in 1920 that stars obtain their energy by fusing hydrogen into helium and raised the possibility that the heavier elements may also form in stars. This idea was not generally accepted, as the nuclear mechanism was not understood. In the years immediately before World War II, Hans Bethe first elucidated those nuclear mechanisms by which hydrogen is fused into helium.
Fred Hoyle's original work on nucleosynthesis of heavier elements in stars, occurred just after World War II. His work explained the production of all heavier elements, starting from hydrogen. Hoyle proposed that hydrogen is continuously created in the universe from vacuum and energy, without need for universal beginning.
Hoyle's work explained how the abundances of the elements increased with time as the galaxy aged. Subsequently, Hoyle's picture was expanded during the 1960s by contributions from William A. Fowler, Alastair G. W. Cameron, and Donald D. Clayton, followed by many others. The seminal 1957 review paper by E. M. Burbidge, G. R. Burbidge, Fowler and Hoyle is a well-known summary of the state of the field in 1957. That paper defined new processes for the transformation of one heavy nucleus into others within stars, processes that could be documented by astronomers.
The Big Bang itself had been proposed in 1931, long before this period, by Georges Lemaître, a Belgian physicist, who suggested that the evident expansion of the Universe in time required that the Universe, if contracted backwards in time, would continue to do so until it could contract no further. This would bring all the mass of the Universe to a single point, a "primeval atom", to a state before which time and space did not exist. Hoyle is credited with coining the term "Big Bang" during a 1949 BBC radio broadcast, saying that Lemaître's theory was "based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past." It is popularly reported that Hoyle intended this to be pejorative, but Hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two models. Lemaître's model was needed to explain the existence of deuterium and nuclides between helium and carbon, as well as the fundamentally high amount of helium present, not only in stars but also in interstellar space. As it happened, both Lemaître and Hoyle's models of nucleosynthesis would be needed to explain the elemental abundances in the universe.
The goal of the theory of nucleosynthesis is to explain the vastly differing abundances of the chemical elements and their several isotopes from the perspective of natural processes. The primary stimulus to the development of this theory was the shape of a plot of the abundances versus the atomic number of the elements. Those abundances, when plotted on a graph as a function of atomic number, have a jagged sawtooth structure that varies by factors up to ten million. A very influential stimulus to nucleosynthesis research was an abundance table created by Hans Suess and Harold Urey that was based on the unfractionated abundances of the non-volatile elements found within unevolved meteorites. Such a graph of the abundances is displayed on a logarithmic scale below, where the dramatically jagged structure is visually suppressed by the many powers of ten spanned in the vertical scale of this graph.
Processes
There are a number of astrophysical processes which are believed to be responsible for nucleosynthesis. The majority of these occur within stars, and the chain of those nuclear fusion processes are known as hydrogen burning (via the proton–proton chain or the CNO cycle), helium burning, carbon burning, neon burning, oxygen burning and silicon burning. These processes are able to create elements up to and including iron and nickel. This is the region of nucleosynthesis within which the isotopes with the highest binding energy per nucleon are created. Heavier elements can be assembled within stars by a neutron capture process known as the s-process or in explosive environments, such as supernovae and neutron star mergers, by a number of other processes. Some of those others include the r-process, which involves rapid neutron captures, the rp-process, and the p-process (sometimes known as the gamma process), which results in the photodisintegration of existing nuclei.
Major types
Big Bang nucleosynthesis
Big Bang nucleosynthesis occurred within the first three minutes of the beginning of the universe and is responsible for much of the abundance of 1H (protium), 2H (D, deuterium), 3He (helium-3), and 4He (helium-4). Although 4He continues to be produced by stellar fusion and alpha decays and trace amounts of 1H continue to be produced by spallation and certain types of radioactive decay, most of the mass of these isotopes in the universe is thought to have been produced in the Big Bang. The nuclei of these elements, along with some 7Li and 7Be, are considered to have been formed between 100 and 300 seconds after the Big Bang when the primordial quark–gluon plasma froze out to form protons and neutrons. Because of the very short period in which nucleosynthesis occurred before it was stopped by expansion and cooling (about 20 minutes), no elements heavier than beryllium (or possibly boron) could be formed. Elements formed during this time were in the plasma state, and did not cool to the state of neutral atoms until much later.
Stellar nucleosynthesis
Stellar nucleosynthesis is the nuclear process by which new nuclei are produced. It occurs in stars during stellar evolution. It is responsible for the galactic abundances of elements from carbon to iron. Stars are thermonuclear furnaces in which H and He are fused into heavier nuclei by increasingly high temperatures as the composition of the core evolves. Of particular importance is carbon because its formation from He is a bottleneck in the entire process. Carbon is produced by the triple-alpha process in all stars. Carbon is also the main element that causes the release of free neutrons within stars, giving rise to the s-process, in which the slow absorption of neutrons converts iron into elements heavier than iron and nickel.
The products of stellar nucleosynthesis are generally dispersed into the interstellar gas through mass loss episodes and the stellar winds of low mass stars. The mass loss events can be witnessed today in the planetary nebulae phase of low-mass star evolution, and the explosive ending of stars, called supernovae, of those with more than eight times the mass of the Sun.
The first direct proof that nucleosynthesis occurs in stars was the astronomical observation that interstellar gas has become enriched with heavy elements as time passed. As a result, stars that were born from it late in the galaxy, formed with much higher initial heavy element abundances than those that had formed earlier. The detection of technetium in the atmosphere of a red giant star in 1952, by spectroscopy, provided the first evidence of nuclear activity within stars. Because technetium is radioactive, with a half-life much less than the age of the star, its abundance must reflect its recent creation within that star. Equally convincing evidence of the stellar origin of heavy elements is the large overabundances of specific stable elements found in stellar atmospheres of asymptotic giant branch stars. Observation of barium abundances some 20–50 times greater than found in unevolved stars is evidence of the operation of the s-process within such stars. Many modern proofs of stellar nucleosynthesis are provided by the isotopic compositions of stardust, solid grains that have condensed from the gases of individual stars and which have been extracted from meteorites. Stardust is one component of cosmic dust and is frequently called presolar grains. The measured isotopic compositions in stardust grains demonstrate many aspects of nucleosynthesis within the stars from which the grains condensed during the star's late-life mass-loss episodes.
Explosive nucleosynthesis
Supernova nucleosynthesis occurs in the energetic environment in supernovae, in which the elements between silicon and nickel are synthesized in quasiequilibrium established during fast fusion that attaches by reciprocating balanced nuclear reactions to 28Si. Quasiequilibrium can be thought of as almost equilibrium except for a high abundance of the 28Si nuclei in the feverishly burning mix. This concept was the most important discovery in nucleosynthesis theory of the intermediate-mass elements since Hoyle's 1954 paper because it provided an overarching understanding of the abundant and chemically important elements between silicon (A = 28) and nickel (A = 60). It replaced the incorrect although much cited alpha process of the B2FH paper, which inadvertently obscured Hoyle's 1954 theory. Further nucleosynthesis processes can occur, in particular the r-process (rapid process) described by the B2FH paper and first calculated by Seeger, Fowler and Clayton, in which the most neutron-rich isotopes of elements heavier than nickel are produced by rapid absorption of free neutrons. The creation of free neutrons by electron capture during the rapid compression of the supernova core along with the assembly of some neutron-rich seed nuclei makes the r-process a primary process, and one that can occur even in a star of pure H and He. This is in contrast to the B2FH designation of the process as a secondary process. This promising scenario, though generally supported by supernova experts, has yet to achieve a satisfactory calculation of r-process abundances. The primary r-process has been confirmed by astronomers who had observed old stars born when galactic metallicity was still small, that nonetheless contain their complement of r-process nuclei; thereby demonstrating that the metallicity is a product of an internal process. The r-process is responsible for our natural cohort of radioactive elements, such as uranium and thorium, as well as the most neutron-rich isotopes of each heavy element.
The rp-process (rapid proton) involves the rapid absorption of free protons as well as neutrons, but its role and its existence are less certain.
Explosive nucleosynthesis occurs too rapidly for radioactive decay to decrease the number of neutrons, so that many abundant isotopes with equal and even numbers of protons and neutrons are synthesized by the silicon quasi-equilibrium process. During this process, the burning of oxygen and silicon fuses nuclei that themselves have equal numbers of protons and neutrons to produce nuclides which consist of whole numbers of helium nuclei, up to 15 (representing 60Ni). Such multiple-alpha-particle nuclides are totally stable up to 40Ca (made of 10 helium nuclei), but heavier nuclei with equal and even numbers of protons and neutrons are tightly bound but unstable. The quasi-equilibrium produces radioactive isobars 44Ti, 48Cr, 52Fe, and 56Ni, which (except 44Ti) are created in abundance but decay after the explosion and leave the most stable isotope of the corresponding element at the same atomic weight. The most abundant and extant isotopes of elements produced in this way are 48Ti, 52Cr, and 56Fe. These decays are accompanied by the emission of gamma-rays (radiation from the nucleus), whose spectroscopic lines can be used to identify the isotope created by the decay. The detection of these emission lines was an important early product of gamma-ray astronomy.
The most convincing proof of explosive nucleosynthesis in supernovae occurred in 1987 when those gamma-ray lines were detected emerging from supernova 1987A. Gamma-ray lines identifying 56Co and 57Co nuclei, whose half-lives limit their age to about a year, proved that their radioactive cobalt parents created them. This nuclear astronomy observation was predicted in 1969 as a way to confirm explosive nucleosynthesis of the elements, and that prediction played an important role in the planning for NASA's Compton Gamma-Ray Observatory.
Other proofs of explosive nucleosynthesis are found within the stardust grains that condensed within the interiors of supernovae as they expanded and cooled. Stardust grains are one component of cosmic dust. In particular, radioactive 44Ti was measured to be very abundant within supernova stardust grains at the time they condensed during the supernova expansion. This confirmed a 1975 prediction of the identification of supernova stardust (SUNOCONs), which became part of the pantheon of presolar grains. Other unusual isotopic ratios within these grains reveal many specific aspects of explosive nucleosynthesis.
Neutron star mergers
The merger of binary neutron stars (BNSs) is now believed to be the main source of r-process elements. Being neutron-rich by definition, mergers of this type had been suspected of being a source of such elements, but definitive evidence was difficult to obtain. In 2017 strong evidence emerged, when LIGO, VIRGO, the Fermi Gamma-ray Space Telescope and INTEGRAL, along with a collaboration of many observatories around the world, detected both gravitational wave and electromagnetic signatures of a likely neutron star merger, GW170817, and subsequently detected signals of numerous heavy elements such as gold as the ejected degenerate matter decays and cools. The first detection of the merger of a neutron star and a black hole (an NSBH) came in July 2021, with more detections following, but analyses seem to favor BNSs over NSBHs as the main contributors to heavy metal production.
Black hole accretion disk nucleosynthesis
Nucleosynthesis may happen in accretion disks of black holes.
Cosmic ray spallation
The cosmic ray spallation process reduces the atomic weight of interstellar matter by the impact with cosmic rays, to produce some of the lightest elements present in the universe (though not a significant amount of deuterium). Most notably spallation is believed to be responsible for the generation of almost all of 3He and the elements lithium, beryllium, and boron, although some 7Li and 7Be are thought to have been produced in the Big Bang. The spallation process results from the impact of cosmic rays (mostly fast protons) against the interstellar medium. These impacts fragment carbon, nitrogen, and oxygen nuclei present. The process results in the light elements beryllium, boron, and lithium in the cosmos at much greater abundances than they are found within solar atmospheres. The quantities of the light elements 1H and 4He produced by spallation are negligible relative to their primordial abundance.
Beryllium and boron are not significantly produced by stellar fusion processes, since 8Be has an extremely short half-life of about 8.2×10⁻¹⁷ seconds.
Empirical evidence
Theories of nucleosynthesis are tested by calculating isotope abundances and comparing those results with observed abundances. Isotope abundances are typically calculated from the transition rates between isotopes in a network. Often these calculations can be simplified as a few key reactions control the rate of other reactions.
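As a concrete illustration of such a network calculation, the following minimal sketch (an illustration, not a production nucleosynthesis code) integrates the simplest possible network: the 56Ni → 56Co → 56Fe decay chain discussed in the explosive nucleosynthesis section above, using commonly quoted half-lives and the standard analytic (Bateman) solution of the coupled rate equations.

```python
import numpy as np

# Toy "network": the 56Ni -> 56Co -> 56Fe decay chain from supernova ejecta.
# Half-lives in days (commonly quoted values, used here as illustrative inputs).
T_NI, T_CO = 6.1, 77.2
LAM_NI, LAM_CO = np.log(2) / T_NI, np.log(2) / T_CO

def abundances(t, n_ni0=1.0):
    """Analytic (Bateman) solution of the two-step chain at time t in days."""
    n_ni = n_ni0 * np.exp(-LAM_NI * t)
    n_co = n_ni0 * LAM_NI / (LAM_CO - LAM_NI) * (
        np.exp(-LAM_NI * t) - np.exp(-LAM_CO * t))
    n_fe = n_ni0 - n_ni - n_co  # the chain ends at stable 56Fe
    return n_ni, n_co, n_fe

# After about a year almost all of the original 56Ni has become 56Fe, which
# is why the cobalt gamma-ray lines of SN 1987A faded on that timescale.
for t in (0, 30, 100, 365):
    ni, co, fe = abundances(t)
    print(f"t = {t:3d} d: Ni={ni:.3f} Co={co:.3f} Fe={fe:.3f}")
```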
Minor mechanisms and processes
Tiny amounts of certain nuclides are produced on Earth by artificial means. Those are our primary source, for example, of technetium. However, some nuclides are also produced by a number of natural means that have continued after primordial elements were in place. These often act to create new elements in ways that can be used to date rocks or to trace the source of geological processes. Although these processes do not produce the nuclides in abundance, they are assumed to be the entire source of the existing natural supply of those nuclides.
These mechanisms include:
Radioactive decay may lead to radiogenic daughter nuclides. The nuclear decay of many long-lived primordial isotopes, especially uranium-235, uranium-238, and thorium-232 produce many intermediate daughter nuclides before they too finally decay to isotopes of lead. The Earth's natural supply of elements like radon and polonium is via this mechanism. The atmosphere's supply of argon-40 is due mostly to the radioactive decay of potassium-40 in the time since the formation of the Earth. Little of the atmospheric argon is primordial. Helium-4 is produced by alpha-decay, and the helium trapped in Earth's crust is also mostly non-primordial. In other types of radioactive decay, such as cluster decay, larger species of nuclei are ejected (for example, neon-20), and these eventually become newly formed stable atoms.
Radioactive decay may lead to spontaneous fission. This is not cluster decay, as the fission products may be split among nearly any type of atom. Thorium-232, uranium-235, and uranium-238 are primordial isotopes that undergo spontaneous fission. Natural technetium and promethium are produced in this manner.
Nuclear reactions. Naturally occurring nuclear reactions powered by radioactive decay give rise to so-called nucleogenic nuclides. This process happens when an energetic particle from radioactive decay, often an alpha particle, reacts with a nucleus of another atom to change the nucleus into another nuclide. This process may also cause the production of further subatomic particles, such as neutrons. Neutrons can also be produced in spontaneous fission and by neutron emission. These neutrons can then go on to produce other nuclides via neutron-induced fission, or by neutron capture. For example, some stable isotopes such as neon-21 and neon-22 are produced by several routes of nucleogenic synthesis, and thus only part of their abundance is primordial.
Nuclear reactions due to cosmic rays. By convention, these reaction-products are not termed "nucleogenic" nuclides, but rather cosmogenic nuclides. Cosmic rays continue to produce new elements on Earth by the same cosmogenic processes discussed above that produce primordial beryllium and boron. One important example is carbon-14, produced from nitrogen-14 in the atmosphere by cosmic rays. Iodine-129 is another example.
See also
Extinct isotopes of superheavy elements
References
Further reading
External links
The Valley of Stability (video) – nucleosynthesis explained in terms of the nuclide chart, by CEA (France)
Astrophysics
Nuclear physics | Nucleosynthesis | [
"Physics",
"Chemistry",
"Astronomy"
] | 5,130 | [
"Nuclear fission",
"Astrophysics",
"Nucleosynthesis",
"Nuclear physics",
"Nuclear fusion",
"Astronomical sub-disciplines"
] |
48,908 | https://en.wikipedia.org/wiki/Apparent%20retrograde%20motion | Apparent retrograde motion is the apparent motion of a planet in a direction opposite to that of other bodies within its system, as observed from a particular vantage point. Direct motion or prograde motion is motion in the same direction as other bodies.
While the terms direct and prograde are equivalent in this context, the former is the traditional term in astronomy. The earliest recorded use of prograde was in the early 18th century, although the term is now less common.
Etymology and history
The term retrograde is from the Latin word retrogradus – "backward-step", the affix retro- meaning "backwards" and gradus "step". Retrograde is most commonly an adjective used to describe the path of a planet as it travels through the night sky, with respect to the zodiac, stars, and other bodies of the celestial canopy. In this context, the term refers to planets, as they appear from Earth, stopping briefly and reversing direction at certain times, though in reality, of course, we now understand that they perpetually orbit in the same uniform direction.
Although planets can sometimes be mistaken for stars as one observes the night sky, the planets actually change position from night to night in relation to the stars. Retrograde (backward) and prograde (forward) are observed as though the stars revolve around the Earth. Ancient Greek astronomer Ptolemy in 150 AD believed that the Earth was the center of the Solar System and therefore used the terms retrograde and prograde to describe the movement of the planets in relation to the stars. Although it is known today that the planets revolve around the Sun, the same terms continue to be used in order to describe the movement of the planets in relation to the stars as they are observed from Earth. Like the Sun, the planets appear to rise in the East and set in the West. When a planet travels eastward in relation to the stars, it is called prograde. When the planet travels westward in relation to the stars (opposite path) it is called retrograde.
This apparent retrogradation puzzled ancient astronomers, and was one reason they named these bodies 'planets' in the first place: 'Planet' comes from the Greek word for 'wanderer'. In the geocentric model of the Solar System proposed by Apollonius in the third century BCE, retrograde motion was explained by having the planets travel in deferents and epicycles. It was not understood to be an illusion until the time of Copernicus, although the Greek astronomer Aristarchus in 240 BCE proposed a heliocentric model for the Solar System.
Galileo's drawings show that he first observed Neptune on December 28, 1612, and again on January 27, 1613. On both occasions, Galileo mistook Neptune for a fixed star when it appeared very close—in conjunction—to Jupiter in the night sky, hence, he is not credited with Neptune's discovery. During the period of his first observation in December 1612, Neptune was stationary in the sky because it had just turned retrograde that very day. Since Neptune was only beginning its yearly retrograde cycle, the motion of the planet was far too slight to be detected with Galileo's small telescope.
Apparent motion
From Earth
When standing on the Earth looking up at the sky, it would appear that the Moon travels from east to west, just as the Sun and the stars do. Day after day however, the Moon appears to move to the east with respect to the stars. In fact, the Moon orbits the Earth from west to east, as do the vast majority of manmade satellites such as the International Space Station. The apparent westward motion of the Moon from the Earth's surface is actually an artifact of its being in a supersynchronous orbit. This means that the Earth completes one sidereal rotation before the Moon is able to complete one orbit. As a result, it looks like the Moon is travelling in the opposite direction, otherwise known as apparent retrograde motion. A person standing on Earth "catches up" to the Moon and passes it because the Earth completes one rotation before the Moon completes one orbit.
This phenomenon also occurs on Mars, which has two natural satellites, Phobos and Deimos. Both moons orbit Mars in an eastward (prograde) direction; however, Deimos has an orbital period of 1.23 Martian sidereal days, making it supersynchronous, whereas Phobos has an orbital period of 0.31 Martian sidereal days, making it subsynchronous. Consequently, although both moons are traveling in an eastward (prograde) direction, they appear to be traveling in opposite directions when viewed from the surface of Mars due to their orbital periods in relation to the rotational period of the planet.
All other planetary bodies in the Solar System also appear to periodically switch direction as they cross Earth's sky. Though all stars and planets appear to move from east to west on a nightly basis in response to the rotation of Earth, the outer planets generally drift slowly eastward relative to the stars. Asteroids and Kuiper Belt objects (including Pluto) exhibit apparent retrograde motion. This motion is normal for the planets, and so is considered direct motion. However, since Earth completes its orbit in a shorter period of time than the planets outside its orbit, it periodically overtakes them, like a faster car on a multi-lane highway. When this occurs, the planet being passed will first appear to stop its eastward drift, and then drift back toward the west. Then, as Earth swings past the planet in its orbit, it appears to resume its normal motion west to east.
The inner planets Venus and Mercury appear to move in retrograde by a similar mechanism, but as they can never be in opposition to the Sun as seen from Earth, their retrograde cycles are tied to their inferior conjunctions with the Sun. They are unobservable in the Sun's glare and in their "new" phase, with mostly their dark sides toward Earth; their retrogrades occur in the transition from evening star to morning star.
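The overtaking geometry described above is straightforward to reproduce numerically. The sketch below assumes circular, coplanar orbits (real ephemerides differ in detail) and flags the intervals in which the geocentric longitude of Mars drifts westward; even this toy model recovers retrograde episodes of roughly 70 to 80 days, spaced by about one synodic period.

```python
import numpy as np

# Assumed circular, coplanar orbits: radius in AU, sidereal period in days.
R_EARTH, P_EARTH = 1.000, 365.25
R_MARS,  P_MARS  = 1.524, 686.98

t = np.arange(0.0, 2000.0, 1.0)                      # sample ~2.5 synodic periods
earth = R_EARTH * np.exp(2j * np.pi * t / P_EARTH)   # heliocentric positions as
mars  = R_MARS * np.exp(2j * np.pi * (t / P_MARS + 0.25))  # complex numbers

# Geocentric longitude of Mars; a negative day-to-day change means the
# planet is drifting westward against the stars, i.e. in apparent retrograde.
lon = np.unwrap(np.angle(mars - earth))
retro = np.diff(lon) < 0

# Locate the start and length (in days) of each retrograde episode.
edges = np.flatnonzero(np.diff(np.r_[False, retro, False].astype(int)))
print([(int(s), int(e - s)) for s, e in zip(edges[::2], edges[1::2])])
```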
The more distant planets retrograde more frequently, as they do not move as much in their orbits while Earth completes an orbit itself. The retrograde motion of a hypothetical extremely distant (and nearly non-moving) planet would take place during a half-year, with the planet's apparent yearly motion being reduced to a parallax ellipse.
The center of the retrograde motion occurs at the planet's opposition which is when the planet is exactly opposite the Sun. This position is halfway, or 6 months, around the ecliptic from the Sun. The planet's height in the sky is opposite that of the Sun's height. The planet is at its highest at the winter solstice, and at its lowest at the summer solstice, on those (rare) occasions when it passes through the center of its retrograde motion near a solstice. Note particularly that the hemisphere the observer is in is critical to what they observe. The December Solstice will place the planet high in the northern hemisphere sky where it is winter and place it low in the southern hemisphere sky where it is summer. The opposite is true if this happens at the June Solstice.
Since the planet's opposition retrograde motion is when the Earth passes closest, the planet appears at its brightest for the year.
The period between the center of such retrogradations is the synodic period of the planet.
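For reference, the synodic period follows from the difference of the two mean motions. A worked example for Mars, assuming sidereal periods of 365.25 and 686.98 days:

$$\frac{1}{T_{\mathrm{syn}}} = \frac{1}{T_{\mathrm{Earth}}} - \frac{1}{T_{\mathrm{planet}}}, \qquad T_{\mathrm{syn}} = \left(\frac{1}{365.25} - \frac{1}{686.98}\right)^{-1} \approx 779.9\ \text{days},$$

in agreement with the roughly 26-month interval between successive oppositions of Mars.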
From Mercury
From any point on the daytime surface of Mercury when the planet is near perihelion (closest approach to the Sun), the Sun undergoes apparent retrograde motion. This occurs because, from approximately four Earth days before perihelion until approximately four Earth days after it, Mercury's angular orbital speed exceeds its angular rotational velocity. Mercury's elliptical orbit is farther from circular than that of any other planet in the Solar System, resulting in a substantially higher orbital speed near perihelion. As a result, at specific points on Mercury's surface an observer would be able to see the Sun rise part way, then reverse and set before rising again, all within the same Mercurian day.
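This can be checked from Kepler's laws: the instantaneous angular rate at perihelion is n√(1+e)/(1−e)^(3/2), where n is the mean motion and e the eccentricity. A minimal sketch with commonly quoted values for Mercury, treated here as illustrative inputs:

```python
import math

# Assumed orbital elements for Mercury (commonly quoted values).
P_ORBIT = 87.969   # sidereal orbital period, days
P_ROT   = 58.646   # sidereal rotation period, days
E       = 0.2056   # orbital eccentricity

n = 2 * math.pi / P_ORBIT                        # mean orbital angular rate
w_peri = n * math.sqrt(1 + E) / (1 - E) ** 1.5   # angular rate at perihelion
w_rot  = 2 * math.pi / P_ROT                     # rotational angular rate

# Near perihelion the orbital rate briefly exceeds the spin rate, so the
# Sun appears to move backwards in Mercury's sky.
print(f"orbital rate at perihelion: {w_peri:.5f} rad/day")
print(f"rotation rate:              {w_rot:.5f} rad/day")
print("Sun retrogrades near perihelion:", w_peri > w_rot)
```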
See also
Deferent and epicycle
Retrograde and prograde motion
Hipparchus
Ptolemy
Shen Kuo
Spherical astronomy
Wei Pu
References
External links
Animated explanation of the mechanics of a retrograde orbit of a planet, University of South Wales
NASA: Mars retrograde motion
Double sunrises, 3DS Max Animation – illustrating the case of Mercury (the animation of an imaginary apparent retrograde motion of the Sun as seen from Earth begins at 1:35)
Mars Looping – The Retrograde Motion of Mars – 2018
Astrodynamics
Dynamics of the Solar System | Apparent retrograde motion | [
"Astronomy",
"Engineering"
] | 1,763 | [
"Astrodynamics",
"Dynamics of the Solar System",
"Solar System",
"Aerospace engineering"
] |
48,909 | https://en.wikipedia.org/wiki/Zenith | The zenith is the imaginary point on the celestial sphere directly "above" a particular location. "Above" means in the vertical direction (plumb line) opposite to the gravity direction at that location (nadir). The zenith is the "highest" point on the celestial sphere.
Origin
The word zenith derives from an inaccurate reading of the Arabic expression samt al-ra's (سمت الرأس), meaning "direction of the head" or "path above the head", by Medieval Latin scribes in the Middle Ages (during the 14th century), possibly through Old Spanish. It was reduced to samt ("direction") and miswritten as senit/cenit, the m being misread as ni. Through the Old French cenith, zenith first appeared in the 17th century.
Relevance and use
The term zenith sometimes means the highest point, way, or level reached by a celestial body on its daily apparent path around a given point of observation. This sense of the word is often used to describe the position of the Sun ("The sun reached its zenith..."), but to an astronomer, the Sun does not have its own zenith and is at the zenith only if it is directly overhead.
In a scientific context, the zenith is the direction of reference for measuring the zenith angle (or zenith angular distance), the angle between a direction of interest (e.g. a star) and the local zenith - that is, the complement of the altitude angle (or elevation angle).
The Sun reaches the observer's zenith when it is 90° above the horizon, and this only happens between the Tropic of Cancer and the Tropic of Capricorn. The point where this occurs is known as the subsolar point. In Islamic astronomy, the passing of the Sun over the zenith of Mecca becomes the basis of the qibla observation by shadows twice a year on 27/28 May and 15/16 July.
At a given location during the course of a day, the Sun reaches not only its zenith but also its nadir, at the antipode of that location 12 hours from solar noon.
In astronomy, the altitude in the horizontal coordinate system and the zenith angle are complementary angles, with the horizon perpendicular to the zenith. The astronomical meridian is also determined by the zenith, and is defined as a circle on the celestial sphere that passes through the zenith, nadir, and the celestial poles.
A zenith telescope is a type of telescope designed to point straight up at or near the zenith, and used for precision measurement of star positions, to simplify telescope construction, or both. The NASA Orbital Debris Observatory and the Large Zenith Telescope are both zenith telescopes, since the use of liquid mirrors meant these telescopes could only point straight up.
On the International Space Station, zenith and nadir are used instead of up and down, referring to directions within and around the station, relative to the earth.
Zenith star
Zenith stars (also "star on top", "overhead star", "latitude star") are stars whose declination equals the latitude of the observer's location, and hence at some time in the day or night culminate (pass) through the zenith. When a star is at the zenith, its right ascension equals the local sidereal time at the observer's location. In celestial navigation this allows latitude to be determined, since the declination of the star equals the latitude of the observer. If the current time at Greenwich is known at the time of the observation, the observer's longitude can also be determined from the right ascension of the star. Hence zenith stars lie on or near the circle of declination equal to the latitude of the observer (the "zenith circle"). Zenith stars are not to be confused with the "steering stars" of a sidereal compass rose.
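A minimal sketch of the navigation procedure just described, assuming the observer can time the zenith passage of a star with known catalogued coordinates; the star values and Greenwich sidereal time below are illustrative assumptions, not a worked historical fix:

```python
def position_from_zenith_star(dec_deg, ra_hours, gst_hours):
    """Observer's latitude and longitude from a star seen exactly at the zenith.

    dec_deg   : star's declination (degrees)   -> equals the latitude
    ra_hours  : star's right ascension (hours) -> equals local sidereal time
    gst_hours : Greenwich sidereal time (hours) at the moment of zenith passage
    """
    latitude = dec_deg
    lon_deg = (ra_hours - gst_hours) * 15.0          # 1 hour = 15 degrees
    lon_deg = ((lon_deg + 180.0) % 360.0) - 180.0    # wrap to [-180, 180)
    return latitude, lon_deg

# Vega-like star (dec ~ +38.78 deg, RA ~ 18.62 h) crossing the zenith when
# Greenwich sidereal time is 20.00 h: about latitude 38.8 N, longitude 20.7 W.
print(position_from_zenith_star(38.78, 18.62, 20.00))
```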
See also
Analemma
Azimuth
Geodesy
History of geodesy
Horizon zenith angle
Horizontal coordinate system
Keyhole problem
Vertical deflection
References
Further reading
Astronomical coordinate systems
Technical factors of astrology
Astrological house systems
Topography
Orientation (geometry) | Zenith | [
"Physics",
"Astronomy",
"Mathematics"
] | 827 | [
"Horizontal coordinate system",
"Astronomical coordinate systems",
"Topology",
"Space",
"Geometry",
"Coordinate systems",
"Spacetime",
"Orientation (geometry)"
] |
48,910 | https://en.wikipedia.org/wiki/Horizon | The horizon is the apparent curve that separates the surface of a celestial body from its sky when viewed from the perspective of an observer on or near the surface of the relevant body. This curve divides all viewing directions based on whether it intersects the relevant body's surface or not.
The true horizon is a theoretical line, which can only be observed to any degree of accuracy when it lies along a relatively smooth surface such as that of Earth's oceans. At many locations, this line is obscured by terrain, and on Earth it can also be obscured by life forms such as trees and/or human constructs such as buildings. The resulting intersection of such obstructions with the sky is called the visible horizon. On Earth, when looking at a sea from a shore, the part of the sea closest to the horizon is called the offing.
The true horizon surrounds the observer and it is typically assumed to be a circle, drawn on the surface of a perfectly spherical model of the relevant celestial body, i.e., a small circle of the local osculating sphere. With respect to Earth, the center of the true horizon is below the observer and below sea level. Its radius or horizontal distance from the observer varies slightly from day to day due to atmospheric refraction, which is greatly affected by weather conditions. Also, the higher the observer's eyes are from sea level, the farther away the horizon is from the observer. For instance, in standard atmospheric conditions, for an observer with eye level above sea level by 1.70 m, the horizon is at a distance of about 5 km.
When observed from very high standpoints, such as a space station, the horizon is much farther away and it encompasses a much larger area of Earth's surface. In this case, the horizon would no longer be a perfect circle, not even a plane curve such as an ellipse, especially when the observer is above the equator, as the Earth's surface can be better modeled as an oblate ellipsoid than as a sphere.
Etymology
The word horizon derives from the Greek ὁρίζων κύκλος (horízōn kýklos) 'separating circle', where ὁρίζων is from the verb ὁρίζω (horízō) 'to divide, to separate', which in turn derives from ὅρος (hóros) 'boundary, landmark'.
Appearance and usage
Historically, the distance to the visible horizon has long been vital to survival and successful navigation, especially at sea, because it determined an observer's maximum range of vision and thus of communication, with all the obvious consequences for safety and the transmission of information that this range implied. This importance lessened with the development of the radio and the telegraph, but even today, when flying an aircraft under visual flight rules, a technique called attitude flying is used to control the aircraft, where the pilot uses the visual relationship between the aircraft's nose and the horizon to control the aircraft. Pilots can also retain their spatial orientation by referring to the horizon.
In many contexts, especially perspective drawing, the curvature of the Earth is disregarded and the horizon is considered the theoretical line to which points on any horizontal plane converge (when projected onto the picture plane) as their distance from the observer increases. For observers near sea level, the difference between this geometrical horizon (which assumes a perfectly flat, infinite ground plane) and the true horizon (which assumes a spherical Earth surface) is imperceptible to the unaided eye. However, for someone on a hill looking out across the sea, the true horizon will be about a degree below a horizontal line.
In astronomy, the horizon is the horizontal plane through the eyes of the observer. It is the fundamental plane of the horizontal coordinate system, the locus of points that have an altitude of zero degrees. While similar in ways to the geometrical horizon, in this context a horizon may be considered to be a plane in space, rather than a line on a picture plane.
Distance to the horizon
Ignoring the effect of atmospheric refraction, distance to the true horizon from an observer close to the Earth's surface is about
$d \approx \sqrt{2Rh},$
where h is height above sea level and R is the Earth radius.
The expression can be simplified as:
$d \approx k\sqrt{h},$
where the constant k equals 3.57 when d is measured in kilometres and h in metres.
In this equation, Earth's surface is assumed to be perfectly spherical, with R equal to about 6,371 km.
Examples
Assuming no atmospheric refraction and a spherical Earth with radius R = 6,371 km:
For an observer standing on the ground with h = 1.70 m, the horizon is at a distance of 4.7 km.
For an observer standing on the ground with h = 2 m, the horizon is at a distance of 5 km.
For an observer standing on a hill or tower 30 m above sea level, the horizon is at a distance of 19.6 km.
For an observer standing on a hill or tower 100 m above sea level, the horizon is at a distance of 35.7 km.
For an observer standing on the roof of the Burj Khalifa, 828 m from the ground, and about 834 m above sea level, the horizon is at a distance of about 103 km.
For an observer atop Mount Everest (8,848 m in altitude), the horizon is at a distance of about 336 km.
For an observer aboard a commercial passenger plane flying at a typical altitude of 11,000 m, the horizon is at a distance of about 374 km.
For a U-2 pilot, whilst flying at its service ceiling of about 21,000 m, the horizon is at a distance of about 517 km.
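These example values follow directly from the formulas of this section; a minimal sketch that reproduces a few of them (heights in metres, distances in kilometres):

```python
import math

R_KM = 6371.0  # assumed spherical Earth radius in kilometres

def horizon_km(h_m):
    """Distance to the horizon (km), no refraction: d = sqrt(h(2R + h))."""
    h_km = h_m / 1000.0
    return math.sqrt(h_km * (2.0 * R_KM + h_km))

for label, h_m in [("standing observer", 1.70),
                   ("hill or tower, 100 m", 100.0),
                   ("Mount Everest, 8848 m", 8848.0),
                   ("airliner, 11 000 m", 11000.0)]:
    print(f"{label:24s} {horizon_km(h_m):6.1f} km")  # 4.7, 35.7, 335.9, 374.5
```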
Other planets
On terrestrial planets and other solid celestial bodies with negligible atmospheric effects, the distance to the horizon for a "standard observer" varies as the square root of the planet's radius. Thus, the horizon on Mercury is 62% as far away from the observer as it is on Earth, on Mars the figure is 73%, on the Moon the figure is 52%, on Mimas the figure is 18%, and so on.
Derivation
If the Earth is assumed to be a featureless sphere (rather than an oblate spheroid) with no atmospheric refraction, then the distance to the horizon can easily be calculated.
The tangent-secant theorem states that
$OC^2 = OA \times OB.$
Make the following substitutions:
d = OC = distance to the horizon
D = AB = diameter of the Earth
h = OB = height of the observer above sea level
D + h = OA = diameter of the Earth plus height of the observer above sea level,
with d, D, and h all measured in the same units. The formula now becomes
$d^2 = h(D + h)$
or
$d = \sqrt{h(D + h)} = \sqrt{h(2R + h)},$
where R is the radius of the Earth.
The same equation can also be derived using the Pythagorean theorem.
At the horizon, the line of sight is a tangent to the Earth and is also perpendicular to Earth's radius.
This sets up a right triangle, with the sum of the radius and the height as the hypotenuse.
With
d = distance to the horizon
h = height of the observer above sea level
R = radius of the Earth
referring to the second figure at the right leads to the following:
$(R + h)^2 = R^2 + d^2$
$R^2 + 2Rh + h^2 = R^2 + d^2$
$d = \sqrt{h(2R + h)}.$
The exact formula above can be expanded as:
$d = \sqrt{2Rh}\,\sqrt{1 + \frac{h}{2R}},$
where R is the radius of the Earth (R and h must be in the same units). For example,
if a satellite is at a height of 2000 km, the distance to the horizon is 5,430 km;
neglecting the second term in parentheses would give a distance of 5,048 km, a 7% error.
Approximation
If the observer is close to the surface of the Earth, then it is valid to disregard h in the term (2R + h), and the formula becomes
$d = \sqrt{2Rh}.$
Using kilometres for d and R, and metres for h, and taking the radius of the Earth as 6371 km, the distance to the horizon is
$d \approx 3.57\sqrt{h}.$
Using imperial units, with d and R in statute miles (as commonly used on land), and h in feet, the distance to the horizon is
$d \approx 1.22\sqrt{h}.$
If d is in nautical miles, and h in feet, the constant factor is about 1.06, which is close enough to 1 that it is often ignored, giving:
$d \approx \sqrt{h}.$
These formulas may be used when h is much smaller than the radius of the Earth (6371 km or 3959 mi), including all views from any mountaintops, airplanes, or high-altitude balloons. With the constants as given, both the metric and imperial formulas are precise to within 1% (see the next section for how to obtain greater precision).
If h is significant with respect to R, as with most satellites, then the approximation is no longer valid, and the exact formula is required.
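The satellite example above is a two-line check; the sketch below uses the same R = 6371 km as the rest of this section:

```python
import math

R, h = 6371.0, 2000.0                    # km: Earth radius, satellite altitude
exact  = math.sqrt(h * (2 * R + h))      # full formula
approx = math.sqrt(2 * R * h)            # h^2 term dropped
print(f"{exact:.0f} km vs {approx:.0f} km "
      f"({(exact - approx) / exact:.1%} error)")  # ~5430 vs ~5048, ~7.0%
```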
Related measures
Arc distance
Another relationship involves the great-circle distance s along the arc over the curved surface of the Earth to the horizon; this is more directly comparable to the geographical distance on a map.
It can be formulated in terms of γ in radians,
$s = R\gamma;$
then
$\cos\gamma = \cos\frac{s}{R} = \frac{R}{R + h}.$
Solving for s gives
$s = R\cos^{-1}\frac{R}{R + h}.$
The distance s can also be expressed in terms of the line-of-sight distance d; from the second figure at the right,
$\tan\gamma = \frac{d}{R};$
substituting for γ and rearranging gives
$s = R\tan^{-1}\frac{d}{R}.$
The distances d and s are nearly the same when the height of the object is negligible compared to the radius (that is, h ≪ R).
Zenith angle
When the observer is elevated, the horizon zenith angle can be greater than 90°. The maximum visible zenith angle occurs when the ray is tangent to Earth's surface; from triangle OCG in the figure at right,
$\cos\gamma = \frac{R}{R + h},$
where h is the observer's height above the surface and γ is the angular dip of the horizon. It is related to the horizon zenith angle z by:
$z = 90° + \gamma.$
For a non-negative height h, the angle z is always ≥ 90°.
Objects above the horizon
To compute the greatest distance DBL at which an observer B can see the top of an object L above the horizon, simply add the distances to the horizon from each of the two points:
DBL = DB + DL
For example, for an observer B with a height of hB = 1.70 m standing on the ground, the horizon is DB = 4.65 km away. For a tower with a height of hL = 100 m, the horizon distance is DL = 35.7 km. Thus an observer on a beach can see the top of the tower as long as it is not more than DBL = 40.35 km away. Conversely, if an observer on a boat (hB = 1.7 m) can just see the tops of trees on a nearby shore (hL = 10 m), the trees are probably about DBL = 16 km away.
Referring to the figure at the right, and using the approximation above, the top of the lighthouse will be visible to a lookout in a crow's nest at the top of a mast of the boat if
$D_{BL} > 3.57(\sqrt{h_B} + \sqrt{h_L}),$
where DBL is in kilometres and hB and hL are in metres.
As another example, suppose an observer, whose eyes are two metres above the level ground, uses binoculars to look at a distant building which he knows to consist of thirty storeys, each 3.5 metres high. He counts the stories he can see and finds there are only ten. So twenty stories or 70 metres of the building are hidden from him by the curvature of the Earth. From this, he can calculate his distance from the building:
$d \approx 3.57(\sqrt{2} + \sqrt{70}),$
which comes to about 35 kilometres.
It is similarly possible to calculate how much of a distant object is visible above the horizon. Suppose an observer's eye is 10 metres above sea level, and he is watching a ship that is 20 km away. His horizon is:
$3.57\sqrt{10} \approx 11.3$
kilometres from him. The ship is a further 8.7 km away. The height of a point on the ship that is just visible to the observer is given by:
$h = \left(\frac{8.7}{3.57}\right)^2 \approx 5.94,$
which comes to almost exactly six metres. The observer can therefore see that part of the ship that is more than six metres above the level of the water. The part of the ship that is below this height is hidden from him by the curvature of the Earth. In this situation, the ship is said to be hull-down.
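The hull-down calculation generalizes to a small helper function; a minimal sketch using the 3.57 constant from this section (the function name is an illustrative choice):

```python
import math

K = 3.57  # km per sqrt(metre); geometric constant used throughout this section

def hidden_height_m(eye_height_m, target_distance_km):
    """Height (m) of the lowest visible point on an object beyond the horizon."""
    beyond_km = max(0.0, target_distance_km - K * math.sqrt(eye_height_m))
    return (beyond_km / K) ** 2          # invert d = K * sqrt(h)

print(hidden_height_m(10.0, 20.0))       # ~5.9 m: the ship example above
print(hidden_height_m(2.0, 34.9))        # ~70 m: the thirty-storey building
```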
Effect of atmospheric refraction
Due to atmospheric refraction the distance to the visible horizon is further than the distance based on a simple geometric calculation. If the ground (or water) surface is colder than the air above it, a cold, dense layer of air forms close to the surface, causing light to be refracted downward as it travels, and therefore, to some extent, to go around the curvature of the Earth. The reverse happens if the ground is hotter than the air above it, as often happens in deserts, producing mirages. As an approximate compensation for refraction, surveyors measuring distances longer than 100 meters subtract 14% from the calculated curvature error and ensure lines of sight are at least 1.5 metres from the ground, to reduce random errors created by refraction.
If the Earth were an airless world like the Moon, the above calculations would be accurate. However, Earth has an atmosphere of air, whose density and refractive index vary considerably depending on the temperature and pressure. This makes the air refract light to varying extents, affecting the appearance of the horizon. Usually, the density of the air just above the surface of the Earth is greater than its density at greater altitudes. This makes its refractive index greater near the surface than at higher altitudes, which causes light that is travelling roughly horizontally to be refracted downward. This makes the actual distance to the horizon greater than the distance calculated with geometrical formulas. With standard atmospheric conditions, the difference is about 8%. This changes the factor of 3.57, in the metric formulas used above, to about 3.86. For instance, if an observer is standing on seashore, with eyes 1.70 m above sea level, according to the simple geometrical formulas given above the horizon should be 4.7 km away. Actually, atmospheric refraction allows the observer to see 300 metres farther, moving the true horizon 5 km away from the observer.
This correction can be, and often is, applied as a fairly good approximation when atmospheric conditions are close to standard. When conditions are unusual, this approximation fails. Refraction is strongly affected by temperature gradients, which can vary considerably from day to day, especially over water. In extreme cases, usually in springtime, when warm air overlies cold water, refraction can allow light to follow the Earth's surface for hundreds of kilometres. Opposite conditions occur, for example, in deserts, where the surface is very hot, so hot, low-density air is below cooler air. This causes light to be refracted upward, causing mirage effects that make the concept of the horizon somewhat meaningless. Calculated values for the effects of refraction under unusual conditions are therefore only approximate. Nevertheless, attempts have been made to calculate them more accurately than the simple approximation described above.
Outside the visual wavelength range, refraction will be different. For radar (e.g. for wavelengths 300 to 3 mm i.e. frequencies between 1 and 100 GHz) the radius of the Earth may be multiplied by 4/3 to obtain an effective radius giving a factor of 4.12 in the metric formula i.e. the radar horizon will be 15% beyond the geometrical horizon or 7% beyond the visual. The 4/3 factor is not exact, as in the visual case the refraction depends on atmospheric conditions.
Integration method—Sweer
If the density profile of the atmosphere is known, the distance d to the horizon is given by
$d = R_E(\psi + \delta),$
where RE is the radius of the Earth, ψ is the dip of the horizon and δ is the refraction of the horizon. The dip is determined fairly simply from
$\cos\psi = \frac{R_E\,\mu_0}{(R_E + h)\,\mu},$
where h is the observer's height above the Earth, μ is the index of refraction of air at the observer's height, and μ0 is the index of refraction of air at Earth's surface.
The refraction must be found by integration of
$\delta = -\int \tan\phi\,\frac{d\mu}{\mu},$
where φ is the angle between the ray and a line through the center of the Earth. The angles ψ and φ are related by
$\phi = 90° + \psi.$
Simple method—Young
A much simpler approach, which produces essentially the same results as the first-order approximation described above, uses the geometrical model but uses a radius $R' = \tfrac{7}{6} R_E$. The distance to the horizon is then
$d = \sqrt{2R'h}.$
Taking the radius of the Earth as 6371 km, with d in km and h in m,
$d \approx 3.86\sqrt{h};$
with d in mi and h in ft,
$d \approx 1.32\sqrt{h}.$
In the case of radar one typically has $R' = \tfrac{4}{3} R_E$, resulting (with d in km and h in m) in
$d \approx 4.12\sqrt{h}.$
Results from Young's method are quite close to those from Sweer's method, and are sufficiently accurate for many purposes.
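The three constants used in this article (3.57 geometric, 3.86 for optical refraction via Young's 7/6 radius, 4.12 for radar via the 4/3 radius) all come from the same d = √(2R′h) expression with different effective radii; a sketch:

```python
import math

R_KM = 6371.0  # assumed Earth radius in kilometres

for label, factor in [("geometric", 1.0),
                      ("optical (Young, 7/6 R)", 7.0 / 6.0),
                      ("radar (4/3 R)", 4.0 / 3.0)]:
    k = math.sqrt(2.0 * factor * R_KM / 1000.0)  # d [km] = k * sqrt(h [m])
    print(f"{label:24s} k = {k:.2f}")             # 3.57, 3.86, 4.12
```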
Vanishing points
The horizon is a key feature of the picture plane in the science of graphical perspective. Assuming the picture plane stands vertical to ground, and P is the perpendicular projection of the eye point O on the picture plane, the horizon is defined as the horizontal line through P. The point P is the vanishing point of lines perpendicular to the picture. If S is another point on the horizon, then it is the vanishing point for all lines parallel to OS. But Brook Taylor (1719) indicated that the horizon plane determined by O and the horizon was like any other plane:
The term of Horizontal Line, for instance, is apt to confine the Notions of a Learner to the Plane of the Horizon, and to make him imagine, that that Plane enjoys some particular Privileges, which make the Figures in it more easy and more convenient to be described, by the means of that Horizontal Line, than the Figures in any other plane;…But in this Book I make no difference between the Plane of the Horizon, and any other Plane whatsoever...
The peculiar geometry of perspective, where parallel lines converge in the distance, stimulated the development of projective geometry, which posits a point at infinity where parallel lines meet. In her book Geometry of an Art (2007), Kirsti Andersen described the evolution of perspective drawing and science up to 1800, noting that vanishing points need not be on the horizon. In a chapter titled "Horizon", John Stillwell recounted how projective geometry has led to incidence geometry, the modern abstract study of line intersection. Stillwell also ventured into foundations of mathematics in a section titled "What are the Laws of Algebra?" The "algebra of points", originally given by Karl von Staudt in deriving the axioms of a field, was deconstructed in the twentieth century, yielding a wide variety of mathematical possibilities. Stillwell states
This discovery from 100 years ago seems capable of turning mathematics upside down, though it has not yet been fully absorbed by the mathematical community. Not only does it defy the trend of turning geometry into algebra, it suggests that both geometry and algebra have a simpler foundation than previously thought.
See also
References
Further reading
Horizontal coordinate system
Astronomical coordinate systems | Horizon | [
"Astronomy",
"Mathematics"
] | 3,741 | [
"Astronomical coordinate systems",
"Horizontal coordinate system",
"Coordinate systems"
] |
48,968 | https://en.wikipedia.org/wiki/Guanxi | Guanxi (关系) is a term used in Chinese culture to describe an individual's social network of mutually beneficial personal and business relationships. The character guan, 关, means "closed" and "caring", while the character xi, 系, means "system"; together the term refers to a closed, caring system of relationships that is somewhat analogous to the term old boy network in the West. In Western media, the pinyin romanization guanxi is more widely used than common translations such as "connections" or "relationships" because those terms do not capture the significance of a person's guanxi to most personal and business dealings in China. Unlike in the West, guanxi relationships are almost never established purely through formal meetings but must also include spending time to get to know each other during tea sessions, dinner banquets, or other personal meetings. Essentially, guanxi requires a personal bond before any business relationship can develop. As a result, guanxi relationships are often more tightly bound than relationships in Western personal social networks. Guanxi has a major influence on the management of businesses based in mainland China, Hong Kong, and those owned by Overseas Chinese people in Southeast Asia (the bamboo network).
Guanxi networks are grounded in Confucian doctrine about the proper structure of family, hierarchical, and friendly relationships in a community, including the need for implicit mutual commitments, reciprocity, and trust.
Guanxi has three sub-dimensions, sometimes abbreviated as GRX, which stand for ganqing, a measure of the emotional attachment in a relationship; renqing (人情, rénqíng/jen-ch'ing), the moral obligation to maintain a relationship through the reciprocal exchange of favors; and xinren, the amount of interpersonal trust. Guanxi is also related to the idea of "face" (面子, miànzi/mien-tzu), which refers to social status, propriety, prestige, or a combination of all three. Other related concepts include wulun (五伦), the five cardinal types of relationships, which supports the idea of a long-term, developing relationship between a business and its client, and yi-ren and ren, which respectively support reciprocity and empathy.
History
The guanxi system developed in imperial, dynastic China. Historically, China lacked a strong rule of law and the government did not hold every citizen subject to the law. As a result, the law did not provide the same legal protection as it did in the West. Chinese people developed guanxi along with the concept of face and personal reputation to help ensure trust between each other in business and personal matters. Today, the power of guanxi resides primarily within the Chinese Communist Party (CCP).
Description and usage
In a personal context
At its most basic, guanxi describes a personal connection between two people in which one is able to prevail upon another to perform a favor or service, or be prevailed upon, that is, one's standing with another. The two people need not be of equal social status. Guanxi can also be used to describe a network of contacts, which an individual can call upon when something needs to be done, and through which he or she can exert influence on behalf of another.
Guanxi also refers to the benefits gained from social connections and usually extends from extended family, school friends, workmates and members of standard clubs or organizations. It is customary for Chinese people to cultivate an intricate web of guanxi relationships, which may expand in a huge number of directions, and includes lifelong relationships. Staying in contact with members of your network is not necessary to bind reciprocal obligations. Reciprocal favors are the key factor to maintaining one's guanxi web. At the same time failure to reciprocate is considered an unforgivable offense (that is, the more one asks of someone, the more one owes them). Guanxi can perpetuate a never-ending cycle of favors.
The term is not generally used to describe interpersonal relationships within a family, although guanxi obligations can sometimes be described in terms of an extended family. Essentially, familial relations are the core of one's interpersonal relations, while the various non-familial interpersonal relations are modifications or extensions of familial relations. Chinese culture's emphasis on familial relations informs guanxi as well, making it such that both familial relations and non-familial interpersonal relations are grounded by similar behavioral norms. An individual may view and interact with other individuals in a way that is similar to their viewing of and interactions with family members; through guanxi, a relationship between two friends can be likened by each friend to being a pseudo elder sibling–younger sibling relationship, with each friend acting accordingly based on that relationship (the friend who sees himself as the "younger sibling" will show more deference to the friend who is the "older sibling"). Guanxi is also based on concepts like loyalty, dedication, reciprocity, and trust, which help to develop non-familial interpersonal relations, while mirroring the concept of filial piety, which is used to ground familial relations.
Ultimately, the relationships formed by guanxi are personal and not transferable.
In a business context
In China, a country where business relations are highly socially embedded, guanxi plays a central role in the shaping and development of day-to-day business transactions by allowing inter-business relationships and relationships between businesses and the government to grow as individuals representing these organizations work with one another. Specifically, in a business context, guanxi occurs through individual interactions first before being applied on a corporate level (e.g., one member of a business may perform a favor for a member of another business because they have interpersonal ties, which helps to facilitate the relationship between the two businesses involved in this interaction). Guanxi also acts as an essential informal governance mechanism, helping leverage Chinese organizations on social and economic platforms. In places in China where institutions, like the structuring of local governments and government policies, may make business interactions less efficient to facilitate, guanxi can serve as a way for businesses to circumvent such institutions by having their members cultivate their interpersonal ties.
Thus, guanxi is important in two domains: social ties with managers of suppliers, buyers, competitors, and other business intermediaries; and social ties with government officials at various national government-regulated agencies. Given its extensive influential power in the shaping of business operations, many see guanxi as a crucial source of social capital and a strategic tool for business success. Thanks to a good knowledge of guanxi, companies obtain secret information, increase their knowledge about precise government regulations, and receive privileged access to stocks and resources. Knowing this, some economists have warned that Western countries and others that trade regularly with China should improve their "cultural competency" in regards to practices such as guanxi. In doing so, such countries can avoid financial fallout caused by a lack of awareness regarding the way practices like guanxi operate.
The nature of guanxi, however, can also form the basis of patron–client relations. As a result, it creates challenges for businesses whose members are obligated to repay favors to members of other businesses when they cannot sufficiently do so. In following these obligations, businesses may also be forced to act in ways detrimental to their future, and start to over-rely on each other. Members within a business may also start to more frequently discuss information that all members knew prior, rather than try and discuss information only known by select members. If the ties fail between two businesses within an overall network built through guanxi, the other ties comprising the overall network have a chance of failing as well. A guanxi network may also violate bureaucratic norms, leading to corporate corruption.
Note that the aforementioned organizational flaws guanxi creates can be diminished by having more efficient institutions (like open market systems that are regulated by formal organizational procedures while promoting competition and innovation) in place to help facilitate business interactions more effectually.
In East Asian societies, the boundary between business and social lives can sometimes be ambiguous as people tend to rely heavily on their closer relations and friends. This can result in nepotism in the workforce being created through guanxi, as it is common for authoritative figures to draw from family and close ties to fill employment opportunities, instead of assessing talent and suitability. This practice often prevents the most suitably qualified person from being employed for the position. However, guanxi only becomes nepotism when individuals start to value their interpersonal relationships as ways to accomplish their goals over the relationships themselves. When interpersonal relationships are seen in this light, then, it is usually the case that individuals are not viewing their cultivation of prospective business relationships without bias. In addition, guanxi and nepotism are distinct in that the former is inherently a social transaction (considering the emphasis on the actual act of building relationships) and not purely based in financial transactions, while the latter is explicitly based in financial transactions and has a higher chance of resulting in legal consequences. However, cronyism is less obvious and can lead to low-risk sycophancy and empire-building bureaucracy within the internal politics of an organisation.
In a political context
For relationship-based networks such as guanxi, reputation plays an important role in shaping interpersonal and political relations. As a result, the government is still the most important stakeholder, despite China's recent efforts to minimise government involvement. Key government officials wield the authority to choose political associates and allies, approve projects, allocate resources, and distribute finances. Thus, it is especially crucial for international companies to develop harmonious personal relationships with government officials. In addition to holding major legislative power, the Chinese government owns vital resources including land, banks, and major media networks and wields major influence over other stakeholders. Thus, it is important to maintain good relations with the central government in order for a business to maintain guanxi. However, the issue of guanxi as a form of government corruption has been raised into question over recent years. This is often the case when businesspeople interpret guanxi's reciprocal obligations as unethical gift-giving in exchange for government approval. The line drawn between ethical and unethical reciprocal obligation is unclear, but China is currently looking into understanding the structural problems inherent in the guanxi system.
In a diasporic context
Guanxi can be used as a school of thought that influences how ethnic Chinese think of and view society. Chinese people in the diaspora are more likely to adhere and connect to groups of people with a shared background. Moreover, diasporic communities might possess ties with individuals in their home country. Guanxi allows the diaspora to maintain their networks and foster close relations with people in their home country and form a subethnic enclave within society. Guanxi could also influence how the diaspora assimilates into the host country, and how the diaspora deals with racism in society. Groups that could be studied include Chinese-Americans and Chinese-Indonesians, who have faced prejudice in their host countries. Marred by the LA massacre in 1871, Saigu in 1992, the Japanese American internment during World War II, and the idea of the "Hindu Invasion", the Asian Americans already in the United States faced discrimination from the wider American society. They had to find solutions based on trial and error, looking for legal, political, and social ways to find their place in society.
Ethical concerns
In recent years, the ethical consequences of guanxi have been brought into question. While guanxi can bring benefits to people directly within the guanxi network, it also has the potential to bring harm to individuals, societies and nations when misused or abused. For example, mutual reciprocal obligation is a major component of guanxi. However, the specific date, time and method are often unspecified. Thus, guanxi can be ethically questionable when one party takes advantage of others' personal favors, without seeking to reciprocate. A common example of unethical reciprocal obligation involves the abuse of business-government relations. In 2013, an official of the CCP criticized government officials for using public funds of over 10,000 yuan for banquets. This totals approximately 48 billion dollars worth of banquets per year. Guanxi may also allow for interpersonal obligations to take precedence over civic duties.
Guanxi is a neutral word, but the use or practice of guanxi can range from 'benign, neutral, to questionable and corrupt'. In mainland China, terms like guanxi practice or la guanxi are used to refer to bribery and corruption. Guanxi practice is commonly employed by favour seekers to seek corrupt benefits from power-holders. Guanxi offers an efficient information transmission channel to help guanxi members to identify potential and trustworthy partners; it also offers a safe and secret platform for illegal transactions. Guanxi norms help buyers and sellers of corrupt benefits justify and rationalize their acts. Li's Performing Bribery in China (2011) as well as Wang's The buying and selling of military positions (2016) analyze how guanxi practice works in corrupt exchanges.
This question is especially critical in cross-cultural business partnerships, when Western firms and auditors are operating within Confucian cultures. Western-based managers must exercise caution in determining whether or not their Chinese colleagues and business partners are in fact practicing guanxi. Caution and extra guidance should be taken to ensure that conflict does not occur as a result of misunderstood cultural agreements.
Other studies argue that guanxi is not in fact unethical, but is rather wrongly accused of an act thought unethical in the eyes of those unacquainted with it and Chinese culture. Just as how the Western juridical system is the image of the Western ethical attitudes, it can be said that the Eastern legal system functions similarly. Also, while Westerners might misunderstand guanxi as a form of corruption, the Chinese recognize guanxi as a subset of renqing, which likens the maintenance of interpersonal relationships to a moral obligation. As such, any relevant actions taken to maintain such relationships are recognized as working within ethical constraints.
The term guanxixue (, the 'art' or 'knowledge' of guanxi) is also used to specifically refer to the manipulation and corruption brought about by a selfish and sometimes illegal utilization of guanxi. In turn, guanxixue distinguishes unethical usage of guanxi from the term guanxi itself. Although many Chinese lament the strong importance of guanxi in their culture because of the unethical use that arises through it, they still consider guanxi as a Chinese element that should not be denied.
Similar concepts in other cultures
Sociologists have linked guanxi with the concept of social capital (it has been described as a Gemeinschaft value structure), and it has been exhaustively described in Western studies of Chinese economic and political behavior.
Blat in Russian culture
Shurobadzhanashtina in Bulgarian society
Wasta in Middle Eastern culture
Sociolismo in Cuban culture
Old boy network in Anglo-Saxon and Finnish culture
Dignitas in ancient Roman culture
Ksharim (literally 'connections') in Israeli culture. Protektsia (from the word 'Protection') is the use of ksharim for personal gain or helping another, also known in slang as 'Vitamin P'.
Enchufe (literally 'plug in' – compare English 'hook up') in Spain, meaning to 'plug' friends or acquaintances 'into' a job or position.
Compadrazgo in Latin American culture
Padrino System in the Philippines (basically "godfather" or patron), also known locally as "kapit" (Filipino word for "to hang on," "to hook on.")
Western vs. Eastern social business relations
Four dimensions underpin successful business networking: trust, bonding, reciprocity, and empathy. However, the ways in which these dimensions are understood and incorporated into business practice differ considerably between East and West.
From the Western point of view, trust is treated as shared reliability, consistency, and reciprocity. From the Eastern point of view, trust is additionally synonymous with obligation, in that guanxi must be maintained through continuous long-term association and interaction. The Chinese system of wulun (the basic norms of guanxi) supports this Eastern attitude, emphasizing that fulfilling the responsibilities attached to one's given role ensures the smooth functioning of Chinese society. Reciprocity is likewise far more emphasized in the East than in the West: according to Confucianism, every individual is encouraged to become a yi-ren (a righteous person) and to repay a favor with considerably more than was received. Finally, empathy is deeply embedded in Eastern business bonds. The Confucian understanding of ren, which also equates to "Do not do to others what one does not want others to do to oneself", stresses the importance for sellers and customers of understanding each other's needs.
Cross-cultural differences in its usage also distinguish Western relationship marketing from Chinese guanxi. Unlike Western relationship marketing, where networking plays a more surface-level impersonal role in shaping larger business relations, guanxi plays a much more central and personal role in shaping social business relations. Chinese culture borrows much of its practices from Confucianism, which emphasizes collectivism and long-term personal relations. Likewise, guanxi functions within Chinese culture and directly reflects the values and behaviors expressed in Chinese business relations. For example, reciprocal obligation plays an intricate role in maintaining harmonious business relations. It is expected that both sides not only stay friendly with each other, but also reciprocate a favor given by the other party. Western relationship marketing, on the other hand, is much more formally constructed, in which no social obligation and further exchanges of favors are expected. Thus, long-term personal relations are more emphasized in Chinese guanxi practice than in Western relationship marketing.
See also
Blat (similar phenomenon in Russia)
Sociolismo (similar phenomenon in Cuba)
Compadrazgo (similar phenomenon in Latin America)
Ubuntu philosophy (similar phenomenon in Africa)
System D (similar concept of informality from European French)
Bamboo network
Chinese social relations
Ganqing
Mianzi
Social capital
Social network
Xenos (guest-friend), an ancient Greek concept
References
External links
China's modern power house, BBC article discussing the role of Guanxi in the modern governance of China.
What is guanxi? Wiki discussion about definitions of guanxi, developed by the publishers of Guanxi: The China Letter.
Guanxi, The art of relationships, by Robert Buderi and Gregory T. Huang.
China Characteristics – Regarding Guanxi, GCiS China Strategic Research
Bamboo network
Business culture
Chinese culture
Society of China
Confucianism in China
Interpersonal relationships | Guanxi | [
"Biology"
] | 3,956 | [
"Behavior",
"Interpersonal relationships",
"Human behavior"
] |
48,975 | https://en.wikipedia.org/wiki/Cladogram | A cladogram (from Greek clados "branch" and gramma "character") is a diagram used in cladistics to show relations among organisms. A cladogram is not, however, an evolutionary tree because it does not show how ancestors are related to descendants, nor does it show how much they have changed, so many differing evolutionary trees can be consistent with the same cladogram. A cladogram uses lines that branch off in different directions ending at a clade, a group of organisms with a last common ancestor. There are many shapes of cladograms but they all have lines that branch off from other lines. The lines can be traced back to where they branch off. These branching off points represent a hypothetical ancestor (not an actual entity) which can be inferred to exhibit the traits shared among the terminal taxa above it. This hypothetical ancestor might then provide clues about the order of evolution of various features, adaptation, and other evolutionary narratives about ancestors. Although traditionally such cladograms were generated largely on the basis of morphological characters, DNA and RNA sequencing data and computational phylogenetics are now very commonly used in the generation of cladograms, either on their own or in combination with morphology.
Generating a cladogram
Molecular versus morphological data
The characteristics used to create a cladogram can be roughly categorized as either morphological (synapsid skull, warm blooded, notochord, unicellular, etc.) or molecular (DNA, RNA, or other genetic information). Prior to the advent of DNA sequencing, cladistic analysis primarily used morphological data. Behavioral data (for animals) may also be used.
As DNA sequencing has become cheaper and easier, molecular systematics has become a more and more popular way to infer phylogenetic hypotheses. Using a parsimony criterion is only one of several methods to infer a phylogeny from molecular data. Approaches such as maximum likelihood, which incorporate explicit models of sequence evolution, are non-Hennigian ways to evaluate sequence data. Another powerful method of reconstructing phylogenies is the use of genomic retrotransposon markers, which are thought to be less prone to the problem of reversion that plagues sequence data. They are also generally assumed to have a low incidence of homoplasies because it was once thought that their integration into the genome was entirely random; this seems at least sometimes not to be the case, however.
Plesiomorphies and synapomorphies
Researchers must decide which character states are "ancestral" (plesiomorphies) and which are derived (synapomorphies), because only synapomorphic character states provide evidence of grouping. This determination is usually done by comparison to the character states of one or more outgroups. States shared between the outgroup and some members of the in-group are symplesiomorphies; states that are present only in a subset of the in-group are synapomorphies. Note that character states unique to a single terminal (autapomorphies) do not provide evidence of grouping. The choice of an outgroup is a crucial step in cladistic analysis because different outgroups can produce trees with profoundly different topologies.
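As a minimal sketch of this polarization step, the following Python snippet classifies the states of a single character by comparison to a chosen outgroup; the character matrix, taxon names, and the helper polarize_character are hypothetical illustrations, not part of any standard package, and real analyses use dedicated phylogenetics software.

    def polarize_character(character, outgroup, ingroup):
        """character maps each taxon name to its observed state."""
        ancestral = character[outgroup]
        counts = {}
        for taxon in ingroup:
            state = character[taxon]
            counts[state] = counts.get(state, 0) + 1
        labels = {}
        for state, n in counts.items():
            if state == ancestral:
                labels[state] = "symplesiomorphy (no grouping evidence)"
            elif n == 1:
                labels[state] = "autapomorphy (no grouping evidence)"
            else:
                labels[state] = "possible synapomorphy (grouping evidence)"
        return labels

    # Hair unites bat and whale within this in-group; its absence is
    # simply the ancestral condition shared with the outgroup.
    hair = {"lizard": "absent", "bat": "present",
            "whale": "present", "crocodile": "absent"}
    print(polarize_character(hair, outgroup="lizard",
                             ingroup=["bat", "whale", "crocodile"]))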
Homoplasies
A homoplasy is a character state that is shared by two or more taxa due to some cause other than common ancestry. The two main types of homoplasy are convergence (evolution of the "same" character in at least two distinct lineages) and reversion (the return to an ancestral character state). Characters that are obviously homoplastic, such as white fur in different lineages of Arctic mammals, should not be included as a character in a phylogenetic analysis as they do not contribute anything to our understanding of relationships. However, homoplasy is often not evident from inspection of the character itself (as in DNA sequence, for example), and is then detected by its incongruence (unparsimonious distribution) on a most-parsimonious cladogram. Note that characters that are homoplastic may still contain phylogenetic signal.
A well-known example of homoplasy due to convergent evolution would be the character, "presence of wings". Although the wings of birds, bats, and insects serve the same function, each evolved independently, as can be seen by their anatomy. If a bird, bat, and a winged insect were scored for the character, "presence of wings", a homoplasy would be introduced into the dataset, and this could potentially confound the analysis, possibly resulting in a false hypothesis of relationships. Of course, the only reason a homoplasy is recognizable in the first place is because there are other characters that imply a pattern of relationships that reveal its homoplastic distribution.
What is not a cladogram
A cladogram is the diagrammatic result of an analysis, which groups taxa on the basis of synapomorphies alone. There are many other phylogenetic algorithms that treat data somewhat differently, and result in phylogenetic trees that look like cladograms but are not cladograms. For example, phenetic algorithms, such as UPGMA and Neighbor-Joining, group by overall similarity, and treat both synapomorphies and symplesiomorphies as evidence of grouping. The resulting diagrams are phenograms, not cladograms. Similarly, the results of model-based methods (Maximum Likelihood or Bayesian approaches) that take into account both branching order and "branch length" count both synapomorphies and autapomorphies as evidence for or against grouping. The diagrams resulting from those sorts of analysis are not cladograms, either.
Cladogram selection
There are several algorithms available to identify the "best" cladogram. Most algorithms use a metric to measure how consistent a candidate cladogram is with the data. Most cladogram algorithms use the mathematical techniques of optimization and minimization.
In general, cladogram generation algorithms must be implemented as computer programs, although some algorithms can be performed manually when the data sets are modest (for example, just a few species and a couple of characteristics).
Some algorithms are useful only when the characteristic data are molecular (DNA, RNA); other algorithms are useful only when the characteristic data are morphological. Other algorithms can be used when the characteristic data includes both molecular and morphological data.
Algorithms for cladograms or other types of phylogenetic trees include least squares, neighbor-joining, parsimony, maximum likelihood, and Bayesian inference.
Biologists sometimes use the term parsimony for a specific kind of cladogram generation algorithm and sometimes as an umbrella term for all phylogenetic algorithms.
Algorithms that perform optimization tasks (such as building cladograms) can be sensitive to the order in which the input data (the list of species and their characteristics) is presented. Inputting the data in various orders can cause the same algorithm to produce different "best" cladograms. In these situations, the user should input the data in various orders and compare the results.
Using different algorithms on a single data set can sometimes yield different "best" cladograms, because each algorithm may have a unique definition of what is "best".
Because of the astronomical number of possible cladograms, algorithms cannot guarantee that the solution is the overall best solution. A nonoptimal cladogram will be selected if the program settles on a local minimum rather than the desired global minimum. To help solve this problem, many cladogram algorithms use a simulated annealing approach to increase the likelihood that the selected cladogram is the optimal one.
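A generic simulated annealing loop of the sort alluded to here can be sketched in a few lines of Python. The functions parsimony_score and random_rearrangement are placeholders, not named by any source: they stand for a real tree-scoring routine and a branch-swapping move such as nearest-neighbor interchange, with lower scores taken to be better.

    import math
    import random

    def anneal(start_tree, parsimony_score, random_rearrangement,
               t_start=10.0, t_end=0.01, cooling=0.95, steps_per_t=100):
        """Search tree space, occasionally accepting worse trees so the
        search can escape local minima; lower scores are better."""
        current, current_score = start_tree, parsimony_score(start_tree)
        best, best_score = current, current_score
        t = t_start
        while t > t_end:
            for _ in range(steps_per_t):
                candidate = random_rearrangement(current)
                score = parsimony_score(candidate)
                # Accept improvements outright; accept worse trees with a
                # probability that shrinks as the temperature drops.
                if (score <= current_score
                        or random.random() < math.exp((current_score - score) / t)):
                    current, current_score = candidate, score
                    if score < best_score:
                        best, best_score = candidate, score
            t *= cooling
        return best, best_score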
The basal position is the direction of the base (or root) of a rooted phylogenetic tree or cladogram. A basal clade is the earliest clade (of a given taxonomic rank) to branch within a larger clade.
Statistics
Incongruence length difference test (or partition homogeneity test)
The incongruence length difference test (ILD) is a measurement of how much the combination of different datasets (e.g. morphological and molecular, plastid and nuclear genes) contributes to a longer tree. It is measured by first calculating the total tree length of each partition and summing these lengths. Then replicates are made by randomly reassigning the pooled characters to partitions of the original sizes, and the tree lengths of each random partition are summed in the same way. A p-value of 0.01 is obtained for 100 replicates if 99 replicates have longer combined tree lengths than the original partitions.
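A minimal sketch of that randomization follows, assuming a placeholder function tree_length that returns the most-parsimonious tree length for a list of characters; such a routine would come from dedicated phylogenetics software and is an assumption here, not a named API.

    import random

    def ild_p_value(partition_a, partition_b, tree_length, replicates=100):
        # Observed statistic: summed lengths of the two original partitions.
        observed = tree_length(partition_a) + tree_length(partition_b)
        pooled = list(partition_a) + list(partition_b)
        not_longer = 0
        for _ in range(replicates):
            random.shuffle(pooled)
            # Random partitions of the same sizes as the originals.
            rand_a = pooled[:len(partition_a)]
            rand_b = pooled[len(partition_a):]
            if tree_length(rand_a) + tree_length(rand_b) <= observed:
                not_longer += 1
        # With 100 replicates, 99 longer replicates leave one that is not,
        # giving p = 0.01 as in the text.
        return not_longer / replicates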
Measuring homoplasy
Some measures attempt to measure the amount of homoplasy in a dataset with reference to a tree, though it is not necessarily clear precisely what property these measures aim to quantify.
Consistency index
The consistency index (CI) measures the consistency of a tree to a set of data – a measure of the minimum amount of homoplasy implied by the tree. It is calculated by counting the minimum number of changes in a dataset and dividing it by the actual number of changes needed for the cladogram. A consistency index can also be calculated for an individual character i, denoted ci.
Besides reflecting the amount of homoplasy, the metric also reflects the number of taxa in the dataset, (to a lesser extent) the number of characters in a dataset, the degree to which each character carries phylogenetic information, and the fashion in which additive characters are coded, rendering it unfit for purpose.
ci occupies a range from 1 down to 1/[n.taxa/2] for binary characters with an even state distribution; its minimum value is larger when states are not evenly spread. In general, a character's ci ranges from 1 down to m/g, where m and g are respectively the minimum and maximum numbers of changes that character can require on any tree.
Retention index
The retention index (RI) was proposed as an improvement of the CI "for certain applications". This metric also purports to measure the amount of homoplasy, but additionally measures how well synapomorphies explain the tree. It is calculated by taking the maximum number of changes on a tree minus the number of changes on the tree, and dividing by the maximum number of changes on the tree minus the minimum number of changes in the dataset.
The rescaled consistency index (RC) is obtained by multiplying the CI by the RI; in effect this stretches the range of the CI such that its minimum theoretically attainable value is rescaled to 0, with its maximum remaining at 1. The homoplasy index (HI) is simply 1 − CI.
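The four indices reduce to a few lines of arithmetic. In the sketch below, m is the minimum number of changes the dataset allows on any tree, s the number of changes actually required on the cladogram, and g the maximum number of changes any tree could require, following the definitions above; the numeric values are hypothetical.

    def tree_indices(m, s, g):
        ci = m / s              # consistency index
        ri = (g - s) / (g - m)  # retention index
        rc = ci * ri            # rescaled consistency index
        hi = 1 - ci             # homoplasy index
        return {"CI": ci, "RI": ri, "RC": rc, "HI": hi}

    # Hypothetical dataset: 20 changes are unavoidable, the cladogram
    # needs 25, and the worst conceivable tree would need 60.
    print(tree_indices(m=20, s=25, g=60))
    # {'CI': 0.8, 'RI': 0.875, 'RC': 0.7, 'HI': 0.19999999999999996}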
Homoplasy Excess Ratio
This measures the amount of homoplasy observed on a tree relative to the maximum amount of homoplasy that could theoretically be present – 1 − (observed homoplasy excess) / (maximum homoplasy excess). A value of 1 indicates no homoplasy; 0 represents as much homoplasy as there would be in a fully random dataset, and negative values indicate more homoplasy still (and tend only to occur in contrived examples). The HER is presented as the best measure of homoplasy currently available.
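Following the formula quoted above, a minimal computation of the HER might look like this. It reads "homoplasy excess" as the number of steps beyond the dataset's minimum, and accepts the maximum excess as a precomputed input estimated from the mean tree length of randomized datasets; both of those readings are assumptions, since the text does not spell out the estimation procedure.

    def homoplasy_excess_ratio(observed_steps, min_steps, randomized_mean_steps):
        observed_excess = observed_steps - min_steps
        maximum_excess = randomized_mean_steps - min_steps
        return 1 - observed_excess / maximum_excess

    # Hypothetical values: the tree needs 25 steps, 20 is the minimum the
    # data allow, and randomly shuffled datasets need 45 steps on average.
    print(homoplasy_excess_ratio(25, 20, 45))  # 0.8; 1 means no homoplasy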
See also
Phylogenetics
Dendrogram
Basal (phylogenetics)
References
External links
Diagrams
Phylogenetics | Cladogram | [
"Biology"
] | 2,331 | [
"Bioinformatics",
"Phylogenetics",
"Taxonomy (biology)"
] |
48,980 | https://en.wikipedia.org/wiki/Basidiomycota | Basidiomycota is one of two large divisions that, together with the Ascomycota, constitute the subkingdom Dikarya (often referred to as the "higher fungi") within the kingdom Fungi. Members are known as basidiomycetes. More specifically, Basidiomycota includes these groups: agarics, puffballs, stinkhorns, bracket fungi, other polypores, jelly fungi, boletes, chanterelles, earth stars, smuts, bunts, rusts, mirror yeasts, and Cryptococcus, the human pathogenic yeast.
Basidiomycota are filamentous fungi composed of hyphae (except for basidiomycete yeasts) and reproduce sexually via the formation of specialized club-shaped end cells called basidia that normally bear external meiospores (usually four). These specialized spores are called basidiospores. However, some Basidiomycota are obligate asexual reproducers. Basidiomycota that reproduce asexually (discussed below) can typically be recognized as members of this division by gross similarity to others, by the formation of a distinctive anatomical feature (the clamp connection), by cell wall components, and definitively by phylogenetic molecular analysis of DNA sequence data.
Classification
A 2007 classification, adopted by a coalition of 67 mycologists, recognized three subphyla (Pucciniomycotina, Ustilaginomycotina, Agaricomycotina) and two other class-level taxa (Wallemiomycetes, Entorrhizomycetes) outside of these, among the Basidiomycota. As now classified, the subphyla join and also cut across various obsolete taxonomic groups (see below) previously commonly used to describe Basidiomycota. According to a 2008 estimate, Basidiomycota comprise three subphyla (including six unassigned classes), 16 classes, 52 orders, 177 families, 1,589 genera, and 31,515 species.
Wijayawardene et al. 2020 produced an update that recognized 19 classes (Agaricomycetes, Agaricostilbomycetes, Atractiellomycetes, Bartheletiomycetes, Classiculomycetes, Cryptomycocolacomycetes, Cystobasidiomycetes, Dacrymycetes, Exobasidiomycetes, Malasseziomycetes, Microbotryomycetes, Mixiomycetes, Monilielliomycetes, Pucciniomycetes, Spiculogloeomycetes, Tremellomycetes, Tritirachiomycetes, Ustilaginomycetes and Wallemiomycetes) with multiple orders and genera.
Traditionally, the Basidiomycota were divided into two classes, now obsolete:
Homobasidiomycetes (alternatively called holobasidiomycetes), including true mushrooms
Heterobasidiomycetes, including the jelly, rust and smut fungi
Nonetheless these former concepts continue to be used as two types of growth habit groupings, the "mushrooms" (e.g. Schizophyllum commune) and the non-mushrooms (e.g. Mycosarcoma maydis).
Agaricomycotina
The Agaricomycotina include what had previously been called the Hymenomycetes (an obsolete morphologically based class of Basidiomycota that formed hymenial layers on their fruitbodies), the Gasteromycetes (another obsolete class that included species mostly lacking hymenia and mostly forming spores in enclosed fruitbodies), as well as most of the jelly fungi. This subphylum also includes the "classic" mushrooms, polypores, corals, chanterelles, crusts, puffballs and stinkhorns. The three classes in the Agaricomycotina are the Agaricomycetes, the Dacrymycetes, and the Tremellomycetes.
The class Wallemiomycetes is not yet placed in a subdivision, but recent genomic evidence suggests that it is a sister group of Agaricomycotina.
Pucciniomycotina
The Pucciniomycotina include the rust fungi, the insect parasitic/symbiotic genus Septobasidium, a former group of smut fungi (in the Microbotryomycetes, which includes mirror yeasts), and a mixture of odd, infrequently seen, or seldom recognized fungi, often parasitic on plants. The eight classes in the Pucciniomycotina are Agaricostilbomycetes, Atractiellomycetes, Classiculomycetes, Cryptomycocolacomycetes, Cystobasidiomycetes, Microbotryomycetes, Mixiomycetes, and Pucciniomycetes.
Ustilaginomycotina
The Ustilaginomycotina are most (but not all) of the former smut fungi and the Exobasidiales. The classes of the Ustilaginomycotina are the Exobasidiomycetes, the Entorrhizomycetes, and the Ustilaginomycetes.
Genera included
There are several genera classified in the Basidiomycota that are 1) poorly known, 2) have not been subjected to DNA analysis, or 3) if analysed phylogenetically do not group with as yet named or identified families, and have not been assigned to a specific family (i.e., they are incertae sedis with respect to familial placement). These include:
Anastomyces W.P.Wu, B.Sutton & Gange (1997)
Anguillomyces Marvanová & Bärl. (2000)
Anthoseptobasidium Rick (1943)
Arcispora Marvanová & Bärl. (1998)
Arrasia Bernicchia, Gorjón & Nakasone (2011)
Brevicellopsis Hjortstam & Ryvarden (2008)
Celatogloea P.Roberts (2005)
Cleistocybe Ammirati, A.D.Parker & Matheny (2007)
Cystogloea P. Roberts (2006)
Dacryomycetopsis Rick (1958)
Eriocybe Vellinga (2011)
Hallenbergia Dhingra & Priyanka (2011)
Hymenoporus Tkalčec, Mešić & Chun Y.Deng (2015)
Kryptastrina Oberw. (1990)
Microstella K.Ando & Tubaki (1984)
Neotyphula Wakef. (1934)
Nodulospora Marvanová & Bärl. (2000)
Paraphelaria Corner (1966)
Punctulariopsis Ghob.-Nejh. (2010)
Radulodontia Hjortstam & Ryvarden (2008)
Restilago Vánky (2008)
Sinofavus W.Y.Zhuang (2008)
Zanchia Rick (1958)
Zygodesmus Corda (1837)
Zygogloea P.Roberts (1994)
Typical life cycle
Unlike animals and plants, which have readily recognizable male and female counterparts, Basidiomycota (except for the rusts (Pucciniales)) tend to have mutually indistinguishable, compatible haploids, which are usually mycelia composed of filamentous hyphae. Typically, haploid Basidiomycota mycelia fuse via plasmogamy, and then the compatible nuclei migrate into each other's mycelia and pair up with the resident nuclei. Karyogamy is delayed, so that the compatible nuclei remain in pairs, called a dikaryon. The hyphae are then said to be dikaryotic. Conversely, the haploid mycelia are called monokaryons. Often, the dikaryotic mycelium is more vigorous than the individual monokaryotic mycelia, and proceeds to take over the substrate in which they are growing. The dikaryons can be long-lived, lasting years, decades, or centuries. The monokaryons are neither male nor female. They have either a bipolar (unifactorial) or a tetrapolar (bifactorial) mating system. Consequently, following meiosis, the resulting haploid basidiospores and resultant monokaryons have nuclei that are compatible with 50% (if bipolar) or 25% (if tetrapolar) of their sister basidiospores (and their resultant monokaryons), because the mating genes must differ for them to be compatible. However, there are sometimes more than two possible alleles for a given locus, and in such species, depending on the specifics, over 90% of monokaryons could be compatible with each other.
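The 50% and 25% figures follow from simple allele counting and can be checked by enumerating the four products of a single meiosis. The Python sketch below assumes one mating locus (alleles A1/A2) for the bipolar case and two independently assorting loci (A and B) for the tetrapolar case; real systems, as noted above, can carry many more alleles per locus.

    from itertools import product

    # The four meiotic products of a dikaryon heterozygous at mating-type
    # loci A (A1/A2) and B (B1/B2).
    spores = list(product(["A1", "A2"], ["B1", "B2"]))

    def compatible(s1, s2, tetrapolar):
        if tetrapolar:                 # alleles must differ at both loci
            return s1[0] != s2[0] and s1[1] != s2[1]
        return s1[0] != s2[0]          # bipolar: only locus A matters

    for tetra in (False, True):
        ok = sum(compatible(x, y, tetra) for x in spores for y in spores)
        print("tetrapolar" if tetra else "bipolar", ok / len(spores) ** 2)
    # prints: bipolar 0.5, tetrapolar 0.25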
The maintenance of the dikaryotic status in dikaryons in many Basidiomycota is facilitated by the formation of clamp connections that physically appear to help coordinate and re-establish pairs of compatible nuclei following synchronous mitotic nuclear divisions. Variations are frequent and multiple. In a typical Basidiomycota lifecycle the long lasting dikaryons periodically (seasonally or occasionally) produce basidia, the specialized usually club-shaped end cells, in which a pair of compatible nuclei fuse (karyogamy) to form a diploid cell. Meiosis follows shortly with the production of 4 haploid nuclei that migrate into 4 external, usually apical basidiospores. Variations occur, however. Typically the basidiospores are ballistic, hence they are sometimes also called ballistospores. In most species, the basidiospores disperse and each can start a new haploid mycelium, continuing the lifecycle. Basidia are microscopic but they are often produced on or in multicelled large fructifications called basidiocarps or basidiomes, or fruitbodies, variously called mushrooms, puffballs, etc. Ballistic basidiospores are formed on sterigmata which are tapered spine-like projections on basidia, and are typically curved, like the horns of a bull. In some Basidiomycota the spores are not ballistic, and the sterigmata may be straight, reduced to stubs, or absent. The basidiospores of these non-ballistosporic basidia may either bud off, or be released via dissolution or disintegration of the basidia.
In summary, meiosis takes place in a diploid basidium. Each one of the four haploid nuclei migrates into its own basidiospore. The basidiospores are ballistically discharged and start new haploid mycelia called monokaryons. There are no males or females, rather there are compatible thalli with multiple compatibility factors. Plasmogamy between compatible individuals leads to delayed karyogamy leading to establishment of a dikaryon. The dikaryon is long lasting but ultimately gives rise to either fruitbodies with basidia or directly to basidia without fruitbodies. The paired dikaryon in the basidium fuse (i.e. karyogamy takes place). The diploid basidium begins the cycle again.
Meiosis
Coprinopsis cinerea is a basidiomycete mushroom. It is particularly suited to the study of meiosis because meiosis progresses synchronously in about 10 million cells within the mushroom cap, and the meiotic prophase stage is prolonged. Burns et al. studied the expression of genes involved in the 15-hour meiotic process, and found that the pattern of gene expression of C. cinerea was similar to two other fungal species, the yeasts Saccharomyces cerevisiae and Schizosaccharomyces pombe. These similarities in the patterns of expression led to the conclusion that the core expression program of meiosis has been conserved in these fungi for over half a billion years of evolution since these species diverged.
Cryptococcus neoformans and Mycosarcoma maydis are examples of pathogenic basidiomycota. Such pathogens must be able to overcome the oxidative defenses of their respective hosts in order to produce a successful infection. The ability to undergo meiosis may provide a survival benefit for these fungi by promoting successful infection. A characteristic central feature of meiosis is recombination between homologous chromosomes. This process is associated with repair of DNA damage, particularly double-strand breaks. The ability of C. neoformans and M. maydis to undergo meiosis may contribute to their virulence by repairing the oxidative DNA damage caused by their host's release of reactive oxygen species.
Variations in lifecycles
Many variations occur: some variations are self-compatible and spontaneously form dikaryons without a separate compatible thallus being involved. These fungi are said to be homothallic, versus the normal heterothallic species with mating types. Others are secondarily homothallic, in that two compatible nuclei following meiosis migrate into each basidiospore, which is then dispersed as a pre-existing dikaryon. Often such species form only two spores per basidium, but that too varies. Following meiosis, mitotic divisions can occur in the basidium. Multiple numbers of basidiospores can result, including odd numbers via degeneration of nuclei, or pairing up of nuclei, or lack of migration of nuclei. For example, the chanterelle genus Craterellus often has six-spored basidia, while some corticioid Sistotrema species can have two-, four-, six-, or eight-spored basidia, and the cultivated button mushroom, Agaricus bisporus, can have one-, two-, three- or four-spored basidia under some circumstances. Occasionally, monokaryons of some taxa can form morphologically fully formed basidiomes and anatomically correct basidia and ballistic basidiospores in the absence of dikaryon formation, diploid nuclei, and meiosis. A small number of taxa have extended diploid lifecycles, but these can be common species. Examples exist in the mushroom genera Armillaria and Xerula, both in the Physalacriaceae. Occasionally, basidiospores are not formed and parts of the "basidia" act as the dispersal agents, e.g. the peculiar mycoparasitic jelly fungus, Tetragoniomyces, or the entire "basidium" acts as a "spore", e.g. in some false puffballs (Scleroderma). In the human pathogenic genus Cryptococcus, four nuclei following meiosis remain in the basidium, but continually divide mitotically, each nucleus migrating into synchronously forming nonballistic basidiospores that are then pushed upwards by another set forming below them, resulting in four parallel chains of dry "basidiospores".
Other variations occur: some as standard lifecycles (that themselves have variations within variations) within specific orders.
Rusts
Rusts (Pucciniales, previously known as Uredinales) at their greatest complexity, produce five different types of spores on two different host plants in two unrelated host families. Such rusts are heteroecious (requiring two hosts) and macrocyclic (producing all five spores types). Wheat stem rust is an example. By convention, the stages and spore states are numbered by Roman numerals. Typically, basidiospores infect host one, also known as the alternate or sexual host, and the mycelium forms pycnidia, which are miniature, flask-shaped, hollow, submicroscopic bodies embedded in the host tissue (such as a leaf). This stage, numbered "0", produces single-celled spores that ooze out in a sweet liquid and that act as nonmotile spermatia, and also protruding receptive hyphae. Insects and probably other vectors such as rain carry the spermatia from spermagonium to spermagonium, cross inoculating the mating types. Neither thallus is male or female. Once crossed, the dikaryons are established and a second spore stage is formed, numbered "I" and called aecia, which form dikaryotic aeciospores in dry chains in inverted cup-shaped bodies embedded in host tissue. These aeciospores then infect the second host, known as the primary or asexual host (in macrocyclic rusts). On the primary host a repeating spore stage is formed, numbered "II", the urediospores in dry pustules called uredinia. Urediospores are dikaryotic and can infect the same host that produced them. They repeatedly infect this host over the growing season. At the end of the season, a fourth spore type, the teliospore, is formed. It is thicker-walled and serves to overwinter or to survive other harsh conditions. It does not continue the infection process, rather it remains dormant for a period and then germinates to form basidia (stage "IV"), sometimes called a promycelium. In the Pucciniales, the basidia are cylindrical and become 3-septate after meiosis, with each of the 4 cells bearing one basidiospore each. The basidiospores disperse and start the infection process on host 1 again. Autoecious rusts complete their life-cycles on one host instead of two, and microcyclic rusts cut out one or more stages.
Smuts
The characteristic part of the life-cycle of smuts is the thick-walled, often darkly pigmented, ornate, teliospore that serves to survive harsh conditions such as overwintering and also serves to help disperse the fungus as dry diaspores. The teliospores are initially dikaryotic but become diploid via karyogamy. Meiosis takes place at the time of germination. A promycelium is formed that consists of a short hypha (equated to a basidium). In some smuts such as Mycosarcoma maydis the nuclei migrate into the promycelium that becomes septate (i.e., divided into cellular compartments separated by cell walls called septa), and haploid yeast-like conidia/basidiospores sometimes called sporidia, bud off laterally from each cell. In various smuts, the yeast phase may proliferate, or they may fuse, or they may infect plant tissue and become hyphal. In other smuts, such as Tilletia caries, the elongated haploid basidiospores form apically, often in compatible pairs that fuse centrally resulting in H-shaped diaspores which are by then dikaryotic. Dikaryotic conidia may then form. Eventually the host is infected by infectious hyphae. Teliospores form in host tissue. Many variations on these general themes occur.
Smuts with both a yeast phase and an infectious hyphal state are examples of dimorphic Basidiomycota. In plant parasitic taxa, the saprotrophic phase is normally the yeast while the infectious stage is hyphal. However, there are examples of animal and human parasites where the species are dimorphic but it is the yeast-like state that is infectious. The genus Filobasidiella forms basidia on hyphae but the main infectious stage is more commonly known by the anamorphic yeast name Cryptococcus, e.g. Cryptococcus neoformans and Cryptococcus gattii.
The dimorphic Basidiomycota with yeast stages and the pleiomorphic rusts are examples of fungi with anamorphs, which are the asexual stages. Some Basidiomycota are only known as anamorphs. Many are called basidiomycetous yeasts, which differentiates them from ascomycetous yeasts in the Ascomycota. Aside from yeast anamorphs and uredinia, aecia, and pycnidia, some Basidiomycota form other distinctive anamorphs as parts of their life cycles. Examples are Collybia tuberosa, with its apple-seed-shaped and coloured sclerotium; Dendrocollybia racemosa, with its sclerotium and its Tilachlidiopsis racemosa conidia; Armillaria, with their rhizomorphs; Hohenbuehelia, with their nematode-infecting Nematoctonus state; and the coffee leaf parasite Mycena citricolor, with its Decapitatus flavidus propagules called gemmae.
See also
Forest pathology
List of Basidiomycota families
Mating in fungi
References
Sources
External links
Basidiomycota at the Tree of Life Web Project
Fungus phyla
Fungi by classification
Mycology
Taxa named by Royall T. Moore
Taxa described in 1980 | Basidiomycota | [
"Biology"
] | 4,456 | [
"Fungi",
"Eukaryotes by classification",
"Fungi by classification",
"Mycology"
] |
48,981 | https://en.wikipedia.org/wiki/Ascomycota | Ascomycota is a phylum of the kingdom Fungi that, together with the Basidiomycota, forms the subkingdom Dikarya. Its members are commonly known as the sac fungi or ascomycetes. It is the largest phylum of Fungi, with over 64,000 species. The defining feature of this fungal group is the "ascus", a microscopic sexual structure in which nonmotile spores, called ascospores, are formed. However, some species of Ascomycota are asexual and thus do not form asci or ascospores. Familiar examples of sac fungi include morels, truffles, brewers' and bakers' yeast, dead man's fingers, and cup fungi. The fungal symbionts in the majority of lichens (loosely termed "ascolichens") such as Cladonia belong to the Ascomycota.
Ascomycota is a monophyletic group (containing all of the descendants of a common ancestor). Previously placed in the Deuteromycota along with asexual species from other fungal taxa, asexual (or anamorphic) ascomycetes are now identified and classified based on morphological or physiological similarities to ascus-bearing taxa, and by phylogenetic analyses of DNA sequences.
Ascomycetes are of particular use to humans as sources of medicinally important compounds such as antibiotics, as well as for fermenting bread, alcoholic beverages, and cheese. Examples of ascomycetes include Penicillium species on cheeses and those producing antibiotics for treating bacterial infectious diseases.
Many ascomycetes are pathogens, both of animals, including humans, and of plants. Examples of ascomycetes that can cause infections in humans include Candida albicans, Aspergillus niger and several tens of species that cause skin infections. The many plant-pathogenic ascomycetes include apple scab, rice blast, the ergot fungi, black knot, and the powdery mildews. The members of the genus Cordyceps are entomopathogenic fungi, meaning that they parasitise and kill insects. Other entomopathogenic ascomycetes have been used successfully in biological pest control, such as Beauveria.
Several species of ascomycetes are biological model organisms in laboratory research. Most famously, Neurospora crassa, several species of yeasts, and Aspergillus species are used in many genetics and cell biology studies.
Sexual reproduction in ascomycetes
Ascomycetes are 'spore shooters'. They are fungi which produce microscopic spores inside special, elongated cells or sacs, known as 'asci', which give the group its name.
Asexual reproduction is the dominant form of propagation in the Ascomycota, and is responsible for the rapid spread of these fungi into new areas. Asexual reproduction of ascomycetes is very diverse from both structural and functional points of view. The most important and general is production of conidia, but chlamydospores are also frequently produced. Furthermore, Ascomycota also reproduce asexually through budding.
Conidia formation
Asexual reproduction may occur through vegetative reproductive spores, the conidia. The asexual, non-motile haploid spores of a fungus, which are named after the Greek word for dust (conia), are hence also known as conidiospores. The conidiospores commonly contain one nucleus and are products of mitotic cell divisions and thus are sometimes called mitospores, which are genetically identical to the mycelium from which they originate. They are typically formed at the ends of specialized hyphae, the conidiophores. Depending on the species they may be dispersed by wind or water, or by animals. Conidiophores may simply branch off from the mycelia or they may be formed in fruiting bodies.
The hypha that creates the sporing (conidiating) tip can be very similar to the normal hyphal tip, or it can be differentiated. The most common differentiation is the formation of a bottle-shaped cell called a phialide, from which the spores are produced. Not all of these asexual structures are a single hypha. In some groups, the conidiophores (the structures that bear the conidia) are aggregated to form a thick structure.
For example, in the order Moniliales, all conidiophores are single hyphae, with the exception of the aggregations termed coremia or synnemata. These produce structures rather like corn-stooks, with many conidia being produced in a mass from the aggregated conidiophores.
The diverse conidia and conidiophores sometimes develop in asexual sporocarps with different characteristics (e.g. acervulus, pycnidium, sporodochium). Some species of ascomycetes form their structures within plant tissue, either as parasites or saprophytes. These fungi have evolved more complex asexual sporing structures, probably influenced by the cultural conditions of plant tissue as a substrate. These structures include the sporodochium, a cushion of conidiophores created from a pseudoparenchymatous stroma in plant tissue. The pycnidium is a globose to flask-shaped parenchymatous structure, lined on its inner wall with conidiophores. The acervulus is a flat saucer-shaped bed of conidiophores produced under a plant cuticle, which eventually erupts through the cuticle for dispersal.
Budding
The asexual reproduction process in ascomycetes also involves budding, which is clearly observed in yeasts. This is termed a "blastic process". It involves the blowing out or blebbing of the hyphal tip wall. The blastic process can involve all wall layers, or there can be a new cell wall synthesized which is extruded from within the old wall.
The initial events of budding can be seen as the development of a ring of chitin around the point where the bud is about to appear. This reinforces and stabilizes the cell wall. Enzymatic activity and turgor pressure act to weaken and extrude the cell wall. New cell wall material is incorporated during this phase. Cell contents are forced into the progeny cell, and as the final phase of mitosis ends, a cell plate, the point from which a new cell wall will grow inwards, forms.
Characteristics of ascomycetes
Ascomycota are morphologically diverse. The group includes organisms from unicellular yeasts to complex cup fungi.
98% of lichens have an Ascomycota as the fungal part of the lichen.
There are 2000 identified genera and 30,000 species of Ascomycota.
The unifying characteristic among these diverse groups is the presence of a reproductive structure known as the ascus, though in some cases it has a reduced role in the life cycle.
Many ascomycetes are of commercial importance. Some play a beneficial role, such as the yeasts used in baking, brewing, and wine fermentation, plus truffles and morels, which are held as gourmet delicacies.
Many of them cause tree diseases, such as Dutch elm disease and apple blights.
Some of the plant pathogenic ascomycetes are apple scab, rice blast, the ergot fungi, black knot, and the powdery mildews.
The yeasts are used to produce alcoholic beverages and breads. The mold Penicillium is used to produce the antibiotic penicillin.
Almost half of all members of the phylum Ascomycota form associations with algae to form lichens.
Others, such as morels (highly prized edible fungi), form important relationships with plants, thereby providing enhanced water and nutrient uptake and, in some cases, protection from insects.
Most ascomycetes are terrestrial or parasitic. However, some have adapted to marine or freshwater environments. As of 2015, there were 805 marine fungi in the Ascomycota, distributed among 352 genera.
The cell walls of the hyphae are variably composed of chitin and β-glucans, just as in Basidiomycota. However, these fibers are set in a matrix of glycoprotein containing the sugars galactose and mannose.
The mycelium of ascomycetes is usually made up of septate hyphae. However, there is not necessarily any fixed number of nuclei in each of the divisions.
The septal walls have septal pores which provide cytoplasmic continuity throughout the individual hyphae. Under appropriate conditions, nuclei may also migrate between septal compartments through the septal pores.
A unique character of the Ascomycota (but not present in all ascomycetes) is the presence of Woronin bodies on each side of the septa separating the hyphal segments which control the septal pores. If an adjoining hypha is ruptured, the Woronin bodies block the pores to prevent loss of cytoplasm into the ruptured compartment. The Woronin bodies are spherical, hexagonal, or rectangular membrane bound structures with a crystalline protein matrix.
Modern classification
There are three subphyla that are described and accepted:
The Pezizomycotina are the largest subphylum and contains all ascomycetes that produce ascocarps (fruiting bodies), except for one genus, Neolecta, in the Taphrinomycotina. It is roughly equivalent to the previous taxon, Euascomycetes. The Pezizomycotina includes most macroscopic "ascos" such as truffles, ergot, ascolichens, cup fungi (discomycetes), pyrenomycetes, lorchels, and caterpillar fungus. It also contains microscopic fungi such as powdery mildews, dermatophytic fungi, and Laboulbeniales.
The Saccharomycotina comprise most of the "true" yeasts, such as baker's yeast and Candida, which are single-celled (unicellular) fungi, which reproduce vegetatively by budding. Most of these species were previously classified in a taxon called Hemiascomycetes.
The Taphrinomycotina include a disparate and basal group within the Ascomycota that was recognized following molecular (DNA) analyses. The taxon was originally named Archiascomycetes (or Archaeascomycetes). It includes hyphal fungi (Neolecta, Taphrina, Archaeorhizomyces), fission yeasts (Schizosaccharomyces), and the mammalian lung parasite Pneumocystis.
Outdated taxon names
Several outdated taxon names—based on morphological features—are still occasionally used for species of the Ascomycota. These include the following sexual (teleomorphic) groups, defined by the structures of their sexual fruiting bodies: the Discomycetes, which included all species forming apothecia; the Pyrenomycetes, which included all sac fungi that formed perithecia or pseudothecia, or any structure resembling these morphological structures; and the Plectomycetes, which included those species that form cleistothecia. Hemiascomycetes included the yeasts and yeast-like fungi that have now been placed into the Saccharomycotina or Taphrinomycotina, while the Euascomycetes included the remaining species of the Ascomycota, which are now in the Pezizomycotina, and the Neolecta, which are in the Taphrinomycotina.
Some ascomycetes do not reproduce sexually or are not known to produce asci and are therefore anamorphic species. Those anamorphs that produce conidia (mitospores) were previously described as mitosporic Ascomycota. Some taxonomists placed this group into a separate artificial phylum, the Deuteromycota (or "Fungi Imperfecti"). Where recent molecular analyses have identified close relationships with ascus-bearing taxa, anamorphic species have been grouped into the Ascomycota, despite the absence of the defining ascus. Sexual and asexual isolates of the same species commonly carry different binomial species names, as, for example, Aspergillus nidulans and Emericella nidulans, for asexual and sexual isolates, respectively, of the same species.
Species of the Deuteromycota were classified as Coelomycetes if they produced their conidia in minute flask- or saucer-shaped conidiomata, known technically as pycnidia and acervuli. The Hyphomycetes were those species where the conidiophores (i.e., the hyphal structures that carry conidia-forming cells at the end) are free or loosely organized. They are mostly isolated but sometimes also appear as bundles of cells aligned in parallel (described as synnematal) or as cushion-shaped masses (described as sporodochial).
Morphology
Most species grow as filamentous, microscopic structures called hyphae or as budding single cells (yeasts). Many interconnected hyphae form a thallus usually referred to as the mycelium, which—when visible to the naked eye (macroscopic)—is commonly called mold. During sexual reproduction, many Ascomycota typically produce large numbers of asci. The ascus is often contained in a multicellular, occasionally readily visible fruiting structure, the ascocarp (also called an ascoma). Ascocarps come in a very large variety of shapes: cup-shaped, club-shaped, potato-like, spongy, seed-like, oozing and pimple-like, coral-like, nit-like, golf-ball-shaped, perforated tennis ball-like, cushion-shaped, plated and feathered in miniature (Laboulbeniales), microscopic classic Greek shield-shaped, stalked or sessile. They can appear solitary or clustered. Their texture can likewise be very variable, including fleshy, like charcoal (carbonaceous), leathery, rubbery, gelatinous, slimy, powdery, or cob-web-like. Ascocarps come in multiple colors such as red, orange, yellow, brown, black, or, more rarely, green or blue. Some ascomyceous fungi, such as Saccharomyces cerevisiae, grow as single-celled yeasts, which—during sexual reproduction—develop into an ascus, and do not form fruiting bodies.
In lichenized species, the thallus of the fungus defines the shape of the symbiotic colony. Some dimorphic species, such as Candida albicans, can switch between growth as single cells and as filamentous, multicellular hyphae. Other species are pleomorphic, exhibiting asexual (anamorphic) as well as a sexual (teleomorphic) growth forms.
Except for lichens, the non-reproductive (vegetative) mycelium of most ascomycetes is usually inconspicuous because it is commonly embedded in the substrate, such as soil, or grows on or inside a living host, and only the ascoma may be seen when fruiting. Pigmentation, such as melanin in hyphal walls, along with prolific growth on surfaces can result in visible mold colonies; examples include Cladosporium species, which form black spots on bathroom caulking and other moist areas. Many ascomycetes cause food spoilage, and, therefore, the pellicles or moldy layers that develop on jams, juices, and other foods are the mycelia of these species or occasionally Mucoromycotina and almost never Basidiomycota. Sooty molds that develop on plants, especially in the tropics are the thalli of many species.
Large masses of yeast cells, asci or ascus-like cells, or conidia can also form macroscopic structures. For example, Pneumocystis species can colonize lung cavities (visible in x-rays), causing a form of pneumonia. Asci of Ascosphaera fill honey bee larvae and pupae, causing mummification with a chalk-like appearance, hence the name "chalkbrood". Yeasts form small colonies in vitro and in vivo, and excessive growth of Candida species in the mouth or vagina causes "thrush", a form of candidiasis.
The cell walls of the ascomycetes almost always contain chitin and β-glucans, and divisions within the hyphae, called "septa", are the internal boundaries of individual cells (or compartments). The cell wall and septa give stability and rigidity to the hyphae and may prevent loss of cytoplasm in case of local damage to cell wall and cell membrane. The septa commonly have a small opening in the center, which functions as a cytoplasmic connection between adjacent cells, also sometimes allowing cell-to-cell movement of nuclei within a hypha. Vegetative hyphae of most ascomycetes contain only one nucleus per cell (uninucleate hyphae), but multinucleate cells—especially in the apical regions of growing hyphae—can also be present.
Metabolism
In common with other fungal phyla, the Ascomycota are heterotrophic organisms that require organic compounds as energy sources. These are obtained by feeding on a variety of organic substrates including dead matter, foodstuffs, or as symbionts in or on other living organisms. To obtain these nutrients from their surroundings, ascomycetous fungi secrete powerful digestive enzymes that break down organic substances into smaller molecules, which are then taken up into the cell. Many species live on dead plant material such as leaves, twigs, or logs. Several species colonize plants, animals, or other fungi as parasites or mutualistic symbionts and derive all their metabolic energy in form of nutrients from the tissues of their hosts.
Owing to their long evolutionary history, the Ascomycota have evolved the capacity to break down almost every organic substance. Unlike most organisms, they are able to use their own enzymes to digest plant biopolymers such as cellulose or lignin. Collagen, an abundant structural protein in animals, and keratin, the protein that forms hair and nails, can also serve as food sources. Unusual examples include Aureobasidium pullulans, which feeds on wall paint, and the kerosene fungus Amorphotheca resinae, which feeds on aircraft fuel (causing occasional problems for the airline industry) and may sometimes block fuel pipes. Other species can resist high osmotic stress and grow, for example, on salted fish, and a few ascomycetes are aquatic.
The Ascomycota is characterized by a high degree of specialization; for instance, certain species of Laboulbeniales attack only one particular leg of one particular insect species. Many Ascomycota engage in symbiotic relationships such as in lichens—symbiotic associations with green algae or cyanobacteria—in which the fungal symbiont directly obtains products of photosynthesis. In common with many basidiomycetes and Glomeromycota, some ascomycetes form symbioses with plants by colonizing the roots to form mycorrhizal associations. The Ascomycota also represents several carnivorous fungi, which have developed hyphal traps to capture small protists such as amoebae, as well as roundworms (Nematoda), rotifers, tardigrades, and small arthropods such as springtails (Collembola).
Distribution and living environment
The Ascomycota are represented in all land ecosystems worldwide, occurring on all continents including Antarctica. Spores and hyphal fragments are dispersed through the atmosphere and freshwater environments, as well as ocean beaches and tidal zones. The distribution of species is variable; while some are found on all continents, others, as for example the white truffle Tuber magnatum, only occur in isolated locations in Italy and Eastern Europe. The distribution of plant-parasitic species is often restricted by host distributions; for example, Cyttaria is only found on Nothofagus (Southern Beech) in the Southern Hemisphere.
Reproduction
Asexual reproduction
Asexual reproduction is the dominant form of propagation in the Ascomycota, and is responsible for the rapid spread of these fungi into new areas. It occurs through vegetative reproductive spores, the conidia. The conidiospores commonly contain one nucleus and are products of mitotic cell divisions and thus are sometimes called mitospores, which are genetically identical to the mycelium from which they originate. They are typically formed at the ends of specialized hyphae, the conidiophores. Depending on the species they may be dispersed by wind or water, or by animals.
Asexual spores
Different types of asexual spores can be identified by colour, shape, and how they are released as individual spores. Spore types can be used as taxonomic characters in the classification within the Ascomycota. The most frequent types are the single-celled spores, which are designated amerospores. If the spore is divided into two by a cross-wall (septum), it is called a didymospore.
When there are two or more cross-walls, the classification depends on spore shape. If the septa are transversal, like the rungs of a ladder, it is a phragmospore, and if they possess a net-like structure it is a dictyospore. In staurospores, ray-like arms radiate from a central body; in others (helicospores) the entire spore is wound up in a spiral like a spring. Very long worm-like spores with a length-to-diameter ratio of more than 15:1 are called scolecospores.
Conidiogenesis and dehiscence
Important characteristics of the anamorphs of the Ascomycota are conidiogenesis, which includes spore formation and dehiscence (separation from the parent structure). Conidiogenesis corresponds to Embryology in animals and plants and can be divided into two fundamental forms of development: blastic conidiogenesis, where the spore is already evident before it separates from the conidiogenic hypha, and thallic conidiogenesis, during which a cross-wall forms and the newly created cell develops into a spore. The spores may or may not be generated in a large-scale specialized structure that helps to spread them.
These two basic types can be further classified as follows:
blastic-acropetal (repeated budding at the tip of the conidiogenic hypha, so that a chain of spores is formed with the youngest spores at the tip),
blastic-synchronous (simultaneous spore formation from a central cell, sometimes with secondary acropetal chains forming from the initial spores),
blastic-sympodial (repeated sideways spore formation from behind the leading spore, so that the oldest spore is at the main tip),
blastic-annellidic (each spore separates and leaves a ring-shaped scar inside the scar left by the previous spore),
blastic-phialidic (the spores arise and are ejected from the open ends of special conidiogenic cells called phialides, which remain constant in length),
basauxic (where a chain of conidia, in successively younger stages of development, is emitted from the mother cell),
blastic-retrogressive (spores separate by formation of crosswalls near the tip of the conidiogenic hypha, which thus becomes progressively shorter),
thallic-arthric (double cell walls split the conidiogenic hypha into cells that develop into short, cylindrical spores called arthroconidia; sometimes every second cell dies off, leaving the arthroconidia free),
thallic-solitary (a large bulging cell separates from the conidiogenic hypha, forms internal walls, and develops to a phragmospore).
Sometimes the conidia are produced in structures visible to the naked eye, which help to distribute the spores. These structures are called "conidiomata" (singular: conidioma), and may take the form of pycnidia (which are flask-shaped and arise in the fungal tissue) or acervuli (which are cushion-shaped and arise in host tissue).
Dehiscence happens in two ways. In schizolytic dehiscence, a double-dividing wall with a central lamella (layer) forms between the cells; the central layer then breaks down thereby releasing the spores. In rhexolytic dehiscence, the cell wall that joins the spores on the outside degenerates and releases the conidia.
Heterokaryosis and parasexuality
Several Ascomycota species are not known to have a sexual cycle. Such asexual species may be able to undergo genetic recombination between individuals by processes involving heterokaryosis and parasexual events.
Parasexuality refers to the process of heterokaryosis, caused by merging of two hyphae belonging to different individuals, by a process called anastomosis, followed by a series of events resulting in genetically different cell nuclei in the mycelium.
The merging of nuclei is not followed by meiotic events, such as gamete formation and results in an increased number of chromosomes per nuclei. Mitotic crossover may enable recombination, i.e., an exchange of genetic material between homologous chromosomes. The chromosome number may then be restored to its haploid state by nuclear division, with each daughter nuclei being genetically different from the original parent nuclei. Alternatively, nuclei may lose some chromosomes, resulting in aneuploid cells. Candida albicans (class Saccharomycetes) is an example of a fungus that has a parasexual cycle (see Candida albicans and Parasexual cycle).
Sexual reproduction
Sexual reproduction in the Ascomycota leads to the formation of the ascus, the structure that defines this fungal group and distinguishes it from other fungal phyla. The ascus is a tube-shaped vessel, a meiosporangium, which contains the sexual spores, called ascospores, that are produced by meiosis.
Apart from a few exceptions, such as Candida albicans, most ascomycetes are haploid, i.e., they contain one set of chromosomes per nucleus. During sexual reproduction there is a diploid phase, which commonly is very short, and meiosis restores the haploid state. The sexual cycle of one well-studied representative species of Ascomycota is described in greater detail in Neurospora crassa. Also, the adaptive basis for the maintenance of sexual reproduction in the Ascomycota fungi was reviewed by Wallen and Perlin. They concluded that the most plausible reason for the maintenance of this capability is the benefit of repairing DNA damage by using recombination that occurs during meiosis. DNA damage can be caused by a variety of stresses such as nutrient limitation.
Formation of sexual spores
The sexual part of the life cycle commences when two hyphal structures mate. In the case of homothallic species, mating is enabled between hyphae of the same fungal clone, whereas in heterothallic species, the two hyphae must originate from fungal clones that differ genetically, i.e., those that are of a different mating type. Mating types are typical of the fungi and correspond roughly to the sexes in plants and animals; however, one species may have more than two mating types, resulting in sometimes complex vegetative incompatibility systems. The adaptive function of mating type is discussed in Neurospora crassa.
Gametangia are sexual structures formed from hyphae, and are the generative cells. A very fine hypha, called a trichogyne, emerges from one gametangium, the ascogonium, and merges with a gametangium (the antheridium) of the other fungal isolate. The nuclei in the antheridium then migrate into the ascogonium, and plasmogamy—the mixing of the cytoplasm—occurs. Unlike in animals and plants, plasmogamy is not immediately followed by the merging of the nuclei (called karyogamy). Instead, the nuclei from the two hyphae form pairs, initiating the dikaryophase of the sexual cycle, during which time the pairs of nuclei synchronously divide. Fusion of the paired nuclei leads to mixing of the genetic material and recombination and is followed by meiosis. A similar sexual cycle is present in the red algae (Rhodophyta). A discarded hypothesis held that a second karyogamy event occurred in the ascogonium prior to ascogeny, resulting in a tetraploid nucleus which divided into four diploid nuclei by meiosis and then into eight haploid nuclei by a supposed process called brachymeiosis, but this hypothesis was disproven in the 1950s.
From the fertilized ascogonium, dinucleate hyphae emerge in which each cell contains two nuclei. These hyphae are called ascogenous or fertile hyphae. They are supported by the vegetative mycelium containing uni- (or mono-) nucleate hyphae, which are sterile. The mycelium containing both sterile and fertile hyphae may grow into a fruiting body, the ascocarp, which may contain millions of fertile hyphae.
An ascocarp is the fruiting body of the sexual phase in Ascomycota. There are five morphologically different types of ascocarp, namely:
Naked asci: these occur in simple ascomycetes; asci are produced on the organism's surface.
Perithecia: Asci are in flask-shaped ascoma (perithecium) with a pore (ostiole) at the top.
Cleistothecia: The ascocarp (a cleistothecium) is spherical and closed.
Apothecia: The asci are in a bowl shaped ascoma (apothecium). These are sometimes called the "cup fungi".
Pseudothecia: Asci with two layers, produced in pseudothecia that look like perithecia. The ascospores are arranged irregularly.
The sexual structures are formed in the fruiting layer of the ascocarp, the hymenium. At one end of ascogenous hyphae, characteristic U-shaped hooks develop, which curve back opposite to the growth direction of the hyphae. The two nuclei contained in the apical part of each hypha divide in such a way that the threads of their mitotic spindles run parallel, creating two pairs of genetically different nuclei. One daughter nucleus migrates close to the hook, while the other daughter nucleus locates to the basal part of the hypha. The formation of two parallel cross-walls then divides the hypha into three sections: one at the hook with one nucleus, one at the base of the original hypha that contains one nucleus, and one that separates the U-shaped part, which contains the other two nuclei.
Fusion of the nuclei (karyogamy) takes place in the U-shaped cells in the hymenium, and results in the formation of a diploid zygote. The zygote grows into the ascus, an elongated tube-shaped or cylinder-shaped capsule. Meiosis then gives rise to four haploid nuclei, usually followed by a further mitotic division that results in eight nuclei in each ascus. The nuclei, along with some cytoplasm, become enclosed within membranes and a cell wall to give rise to ascospores that are aligned inside the ascus like peas in a pod.
Upon opening of the ascus, ascospores may be dispersed by the wind, while in some cases the spores are forcibly ejected from the ascus; certain species have evolved spore cannons, which can eject ascospores up to 30 cm away. When the spores reach a suitable substrate, they germinate and form new hyphae, restarting the fungal life cycle.
The form of the ascus is important for classification and is divided into four basic types: unitunicate-operculate, unitunicate-inoperculate, bitunicate, or prototunicate. See the article on asci for further details.
Ecology
The Ascomycota fulfil a central role in most land-based ecosystems. They are important decomposers, breaking down organic materials, such as dead leaves and animals, and helping the detritivores (animals that feed on decomposing material) to obtain their nutrients. Ascomycetes, along with other fungi, can break down large molecules such as cellulose or lignin, and thus have important roles in nutrient cycling such as the carbon cycle.
The fruiting bodies of the Ascomycota provide food for many animals ranging from insects and slugs and snails (Gastropoda) to rodents and larger mammals such as deer and wild boars.
Many ascomycetes also form symbiotic relationships with other organisms, including plants and animals.
Lichens
Probably since early in their evolutionary history, the Ascomycota have formed symbiotic associations with green algae (Chlorophyta), and other types of algae and cyanobacteria. These mutualistic associations are commonly known as lichens, and can grow and persist in terrestrial regions of the earth that are inhospitable to other organisms and characterized by extremes in temperature and humidity, including the Arctic, the Antarctic, deserts, and mountaintops. While the photoautotrophic algal partner generates metabolic energy through photosynthesis, the fungus offers a stable, supportive matrix and protects cells from radiation and dehydration. Around 42% of the Ascomycota (about 18,000 species) form lichens, and almost all the fungal partners of lichens belong to the Ascomycota.
Mycorrhizal fungi and endophytes
Members of the Ascomycota form two important types of relationship with plants: as mycorrhizal fungi and as endophytes. Mycorrhizae are symbiotic associations of fungi with the root systems of plants, which can be of vital importance for the growth and persistence of the plant. The fine mycelial network of the fungus enables the increased uptake of mineral salts that occur at low levels in the soil. In return, the plant provides the fungus with metabolic energy in the form of photosynthetic products.
Endophytic fungi live inside plants; those that form mutualistic or commensal associations with their host do not damage it. The exact nature of the relationship between endophytic fungus and host depends on the species involved, and in some cases fungal colonization of plants can bestow a higher resistance against insects, roundworms (nematodes), and bacteria; in the case of grass endophytes the fungal symbiont produces poisonous alkaloids, which can affect the health of plant-eating (herbivorous) mammals and deter or kill insect herbivores.
Symbiotic relationships with animals
Several ascomycetes of the genus Xylaria colonize the nests of leafcutter ants and other fungus-growing ants of the tribe Attini, and the fungal gardens of termites (Isoptera). Since they do not generate fruiting bodies until the insects have left the nests, it is suspected that they may be cultivated by the insects, as has been confirmed for several Basidiomycota species.
Bark beetles (family Scolytidae) are important symbiotic partners of ascomycetes. The female beetles transport fungal spores to new hosts in characteristic tucks in their skin, the mycetangia. The beetles tunnel into the wood, excavating large chambers in which they lay their eggs. Spores released from the mycetangia germinate into hyphae, which can break down the wood. The beetle larvae then feed on the fungal mycelium and, on reaching maturity, carry new spores with them to renew the cycle of infection. A well-known example of this is Dutch elm disease, caused by Ophiostoma ulmi, which is carried by the European elm bark beetle, Scolytus multistriatus.
Plant disease interactions
One of their most harmful roles is as the agent of many plant diseases. For instance:
Dutch elm disease, caused by the closely related species Ophiostoma ulmi and Ophiostoma novo-ulmi, has led to the death of many elms in Europe and North America.
The originally Asian Cryphonectria parasitica is responsible for attacking Sweet Chestnuts (Castanea sativa), and virtually eliminated the once-widespread American Chestnut (Castanea dentata).
A disease of maize (Zea mays), which is especially prevalent in North America, is brought about by Cochliobolus heterostrophus.
Taphrina deformans causes leaf curl of peach.
Uncinula necator is responsible for the disease powdery mildew, which attacks grapevines.
Species of Monilinia cause brown rot of stone fruit such as peaches (Prunus persica) and sour cherries (Prunus cerasus).
Members of the Ascomycota such as Stachybotrys chartarum are responsible for fading of woolen textiles, which is a common problem especially in the tropics.
Blue-green, red and brown molds attack and spoil foodstuffs – for instance Penicillium italicum rots oranges.
Cereals infected with Fusarium graminearum, the cause of Fusarium ear blight, contain mycotoxins such as deoxynivalenol (DON), which causes skin and mucous membrane lesions when eaten by pigs.
Human disease interactions
Aspergillus fumigatus is the most common cause of fungal infection in the lungs of immune-compromised patients, often resulting in death. It is also the most frequent cause of allergic bronchopulmonary aspergillosis, which often occurs in patients with cystic fibrosis or asthma.
Candida albicans, a yeast that attacks the mucous membranes, can cause an infection of the mouth or vagina called thrush or candidiasis, and is also blamed for "yeast allergies".
Fungi like Epidermophyton cause skin infections but are not very dangerous for people with healthy immune systems. However, if the immune system is damaged they can be life-threatening; for instance, Pneumocystis jirovecii is responsible for severe lung infections that occur in AIDS patients.
Ergot (Claviceps purpurea) is a direct menace to humans when it attacks wheat or rye and produces highly poisonous alkaloids, causing ergotism if consumed. Symptoms include hallucinations, stomach cramps, and a burning sensation in the limbs ("Saint Anthony's Fire").
Aspergillus flavus, which grows on peanuts and other hosts, generates aflatoxin, which damages the liver and is highly carcinogenic.
Histoplasma capsulatum causes histoplasmosis, which affects immunocompromised patients.
Blastomyces dermatitidis is the causal agent of blastomycosis, an invasive and often serious fungal infection found occasionally in humans and other animals in regions where the fungus is endemic.
Paracoccidioides brasiliensis and Paracoccidioides lutzii are the causal agents of paracoccidioidomycosis.
Coccidioides immitis and Coccidioides posadasii are the causative agent of coccidioidomycosis (valley fever).
Talaromyces marneffei, formerly called Penicillium marneffei, causes talaromycosis.
Beneficial effects for humans
On the other hand, ascus fungi have brought some significant benefits to humanity.
The most famous case may be that of the mold Penicillium chrysogenum (formerly Penicillium notatum), which, probably to attack competing bacteria, produces an antibiotic that, under the name of penicillin, triggered a revolution in the treatment of bacterial infectious diseases in the 20th century.
The medical importance of Tolypocladium niveum as an immunosuppressant can hardly be exaggerated. It excretes ciclosporin, which, as well as being given during organ transplantation to prevent rejection, is also prescribed for auto-immune diseases such as multiple sclerosis. However, there is some doubt over the long-term side effects of the treatment.
Some ascomycete fungi can be easily altered through genetic engineering procedures. They can then produce useful proteins such as insulin, human growth hormone, or tPA (tissue plasminogen activator), which is employed to dissolve blood clots.
Several species are common model organisms in biology, including Saccharomyces cerevisiae, Schizosaccharomyces pombe, and Neurospora crassa. The genomes of some ascomycete fungi have been fully sequenced.
Baker's yeast (Saccharomyces cerevisiae) is used to make bread, beer, and wine, during which process sugars such as glucose or sucrose are fermented to make ethanol and carbon dioxide. Bakers use the yeast for carbon dioxide production, causing the bread to rise, with the ethanol boiling off during baking. Most vintners use it for ethanol production, releasing carbon dioxide into the atmosphere during fermentation. Brewers and traditional producers of sparkling wine use both, with a primary fermentation for the alcohol and a secondary one to produce the carbon dioxide bubbles that give wine its "sparkling" texture and beer its desirable foam.
Enzymes of Penicillium camemberti play a role in the manufacture of the cheeses Camembert and Brie, while those of Penicillium roqueforti do the same for Gorgonzola, Roquefort and Stilton.
In Asia, Aspergillus oryzae is added to a pulp of soaked soya beans to make soy sauce and is used to break down starch in rice and other grains into simple sugars for fermentation into East Asian alcoholic beverages such as huangjiu and sake.
Finally, some members of the Ascomycota are choice edibles; morels (Morchella spp.), truffles (Tuber spp.), and lobster mushroom (Hypomyces lactifluorum) are some of the most sought-after fungal delicacies.
Cordyceps militaris is reputed to have numerous medicinal benefits, including supporting the immune system, reducing inflammation, providing antioxidant effects, enhancing metabolic health, improving athletic performance, and promoting respiratory health. It contains bioactive compounds such as cordycepin, cordycepic acid, adenosine, polysaccharides, beta-glucans, and ergosterol.
See also
List of Ascomycota families incertae sedis
List of Ascomycota genera incertae sedis
Notes
Cited texts
Mycology
Fungus phyla | Ascomycota | [
"Biology"
] | 9,299 | [
"Mycology"
] |
49,008 | https://en.wikipedia.org/wiki/Robert%20Andrews%20Millikan | Robert Andrews Millikan (March 22, 1868 – December 19, 1953) was an American physicist who received the Nobel Prize in Physics in 1923 for the measurement of the elementary charge and for his work on the photoelectric effect.
Millikan graduated from Oberlin College in 1891 and obtained his doctorate at Columbia University in 1895. In 1896 he became an assistant at the University of Chicago, where he became a full professor in 1910. In 1909 Millikan began a series of experiments to determine the electric charge carried by a single electron. He began by measuring the course of charged water droplets in an electric field. The results suggested that the charge on the droplets is a multiple of the elementary electric charge, but the experiment was not accurate enough to be convincing. He obtained more precise results in 1910 with his oil-drop experiment in which he replaced water (which tended to evaporate too quickly) with oil.
In 1914 Millikan took up with similar skill the experimental verification of the equation introduced by Albert Einstein in 1905 to describe the photoelectric effect. He used this same research to obtain an accurate value of the Planck constant. In 1921 Millikan left the University of Chicago to become director of the Norman Bridge Laboratory of Physics at the California Institute of Technology (Caltech) in Pasadena, California. There he undertook a major study of the radiation that the physicist Victor Hess had detected coming from outer space. Millikan proved that this radiation is indeed of extraterrestrial origin, and he named it "cosmic rays." As chairman of the Executive Council of Caltech (the school's governing body at the time) from 1921 until his retirement in 1945, Millikan helped to turn the school into one of the leading research institutions in the United States. He also served on the board of trustees for Science Service, now known as Society for Science & the Public, from 1921 to 1953.
Millikan was an elected member of the American Philosophical Society, the American Academy of Arts and Sciences, and the United States National Academy of Sciences. He was elected an Honorary Member of the Optical Society of America in 1950.
Biography
Education
Robert Andrews Millikan was born on March 22, 1868, in Morrison, Illinois. He went to high school in Maquoketa, Iowa and received a bachelor's degree in the classics from Oberlin College in 1891 and his doctorate in physics from Columbia University in 1895 – he was the first to earn a Ph.D. from that department.
At the close of my sophomore year [...] my Greek professor [...] asked me to teach the course in elementary physics in the preparatory department during the next year. To my reply that I did not know any physics at all, his answer was, "Anyone who can do well in my Greek can teach physics." "All right," said I, "you will have to take the consequences, but I will try and see what I can do with it." I at once purchased an Avery's Elements of Physics, and spent the greater part of my summer vacation of 1889 at home – trying to master the subject. [...] I doubt if I have ever taught better in my life than in my first course in physics in 1889. I was so intensely interested in keeping my knowledge ahead of that of the class that they may have caught some of my own interest and enthusiasm.
Millikan's enthusiasm for education continued throughout his career, and he was the coauthor of a popular and influential series of introductory textbooks, which were ahead of their time in many ways. Compared to other books of the time, they treated the subject more in the way in which it was thought about by physicists. They also included many homework problems that asked conceptual questions, rather than simply requiring the student to plug numbers into a formula.
Charge of the electron
Starting in 1908, while a professor at the University of Chicago, Millikan worked on an oil-drop experiment in which he measured the charge on a single electron. J. J. Thomson had already discovered the charge-to-mass ratio of the electron. However, the actual charge and mass values were unknown. Therefore, if one of these two values were to be discovered, the other could easily be calculated. Millikan and his then graduate student Harvey Fletcher used the oil-drop experiment to measure the charge of the electron (as well as the electron mass and the Avogadro constant, since their relations to the electron charge were known).
Professor Millikan took sole credit, in return for Harvey Fletcher claiming full authorship on a related result for his dissertation. Millikan went on to win the 1923 Nobel Prize for Physics, in part for this work, and Fletcher kept the agreement a secret until his death. After a publication on his first results in 1910, contradictory observations by Felix Ehrenhaft started a controversy between the two physicists. After improving his setup, Millikan published his seminal study in 1913.
The elementary charge is one of the fundamental physical constants, and accurate knowledge of its value is of great importance. His experiment measured the force on tiny charged droplets of oil suspended against gravity between two metal electrodes. Knowing the electric field, the charge on the droplet could be determined. Repeating the experiment for many droplets, Millikan showed that the results could be explained as integer multiples of a common value (1.592 × 10−19 coulomb), which is the charge of a single electron. That this is somewhat lower than the modern value of 1.602 176 53(14) × 10−19 coulomb is probably due to Millikan's use of an inaccurate value for the viscosity of air.
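To make the balancing argument concrete, here is a minimal sketch in Python of the reasoning behind the quantization claim. All numbers (field strength, droplet masses) are hypothetical illustrations, not Millikan's data; the masses are simply chosen so that each droplet carries between one and four electron charges.

```python
# A stationary droplet satisfies qE = mg, so its charge is q = mg / E.
# Dividing many measured charges by a trial value of e reveals
# near-integer ratios if charge is quantized.

g = 9.81      # gravitational acceleration, m/s^2
E = 3.0e5     # applied electric field, V/m (hypothetical)

# Hypothetical micron-scale droplet masses in kg.
masses = [4.9e-15, 9.8e-15, 1.47e-14, 1.96e-14]

charges = [m * g / E for m in masses]   # q = mg / E for each droplet

e_trial = 1.602e-19                     # modern elementary charge, C
for q in charges:
    print(f"q = {q:.3e} C  ->  q/e = {q / e_trial:.2f}")
# The ratios come out close to 1, 2, 3, 4 -- the signature of quantization.
```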
Although at the time of Millikan's oil-drop experiments it was becoming clear that there exist such things as subatomic particles, not everyone was convinced. Experimenting with cathode rays in 1897, J. J. Thomson had discovered negatively charged 'corpuscles', as he called them, with a charge-to-mass ratio 1840 times that of a hydrogen ion. Similar results had been found by George FitzGerald and Walter Kaufmann. Most of what was then known about electricity and magnetism could be explained on the basis that charge is a continuous variable, in much the same way that many of the properties of light can be explained by treating it as a continuous wave rather than as a stream of photons.
The beauty of the oil-drop experiment is that as well as allowing quite accurate determination of the fundamental unit of charge, Millikan's apparatus also provided a 'hands on' demonstration that charge is actually quantized. General Electric Company's Charles Steinmetz, who had previously thought that charge is a continuous variable, became convinced otherwise after working with Millikan's apparatus.
Data selection controversy
There is some controversy over selectivity in Millikan's use of results from his second experiment measuring the electron charge. This issue has been discussed by Allan Franklin, a former high-energy experimentalist and current philosopher of science at the University of Colorado. Franklin contends that Millikan's exclusions of data do not affect the final value of the charge obtained, but that Millikan's substantial "cosmetic surgery" reduced the statistical error. This enabled Millikan to give the charge of the electron to better than one-half of one percent. In fact, if Millikan had included all of the data he discarded, the error would have been less than 2%. While this would still have resulted in Millikan's having measured the charge of e− better than anyone else at the time, the slightly larger uncertainty might have allowed more disagreement with his results within the physics community, which Millikan likely tried to avoid. David Goodstein argues that Millikan's statement, that all drops observed over a 60 day period were used in the paper, was clarified in a subsequent sentence that specified all "drops upon which complete series of observations were made". Goodstein attests that this is indeed the case and notes that five pages of tables separate the two sentences.
Photoelectric effect
When Albert Einstein published his 1905 paper on the particle theory of light, Millikan was convinced that it had to be wrong, because of the vast body of evidence that had already shown that light was a wave. He undertook a decade-long experimental program to test Einstein's theory, which required building what he described as "a machine shop in vacuo" in order to prepare the very clean metal surface of the photoelectrode. His results, published in 1914, confirmed Einstein's predictions in every detail, but Millikan was not convinced of Einstein's interpretation, and as late as 1916 he wrote, "Einstein's photoelectric equation... cannot in my judgment be looked upon at present as resting upon any sort of a satisfactory theoretical foundation," even though "it actually represents very accurately the behavior" of the photoelectric effect. In his 1950 autobiography, however, he declared that his work "scarcely permits of any other interpretation than that which Einstein had originally suggested, namely that of the semi-corpuscular or photon theory of light itself".
Although Millikan's work formed some of the basis for modern particle physics, he was conservative in his opinions about 20th century developments in physics, as in the case of the photon theory. Another example is that his textbook, as late as the 1927 version, unambiguously states the existence of the ether, and mentions Einstein's theory of relativity only in a noncommittal note at the end of the caption under Einstein's portrait, stating as the last in a list of accomplishments that he was "author of the special theory of relativity in 1905 and of the general theory of relativity in 1914, both of which have had great success in explaining otherwise unexplained phenomena and in predicting new ones."
Millikan is also credited with measuring the value of the Planck constant by using photoelectric emission graphs of various metals.
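As a sketch of how such a measurement works: Einstein's relation gives eV_stop = hf − φ, so plotting stopping voltage against light frequency yields a straight line whose slope is h/e. The snippet below generates idealized points from that relation (assuming a hypothetical 2.0 eV work function, with no noise) and recovers h from the fitted slope; it illustrates the method only, not Millikan's data.

```python
# Stopping voltage is linear in frequency: V_stop = (h/e) * f - phi/e.
# Fit the slope and multiply by e to recover the Planck constant.

e = 1.602e-19       # elementary charge, C
phi = 2.0 * e       # assumed work function of 2.0 eV (hypothetical metal)
h_true = 6.626e-34  # used only to generate the illustrative points

freqs = [6.0e14, 7.0e14, 8.0e14, 9.0e14, 1.0e15]   # Hz
v_stop = [(h_true * f - phi) / e for f in freqs]   # volts

# Least-squares slope of V_stop versus f, computed by hand.
n = len(freqs)
fbar = sum(freqs) / n
vbar = sum(v_stop) / n
slope = sum((f - fbar) * (v - vbar) for f, v in zip(freqs, v_stop)) \
        / sum((f - fbar) ** 2 for f in freqs)

print(f"h from slope: {slope * e:.3e} J*s")   # ~6.626e-34
```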
Later life
In 1917, solar astronomer George Ellery Hale convinced Millikan to begin spending several months each year at the Throop College of Technology, a small academic institution in Pasadena, California, that Hale wished to transform into a major center for scientific research and education. A few years later Throop College became the California Institute of Technology (Caltech), and Millikan left the University of Chicago to become Caltech's "chairman of the executive council" (effectively its president). Millikan served in that position from 1921 to 1945. At Caltech, most of his scientific research focused on the study of "cosmic rays" (a term he coined). In the 1930s he entered into a debate with Arthur Compton over whether cosmic rays were composed of high-energy photons (Millikan's view) or charged particles (Compton's view). Millikan thought his cosmic ray photons were the "birth cries" of new atoms continually being created to counteract entropy and prevent the heat death of the universe. Compton was eventually proven right by the observation that cosmic rays are deflected by the Earth's magnetic field (hence must be charged particles).
Millikan was Vice Chairman of the National Research Council during World War I. During that time, he helped to develop anti-submarine and meteorological devices. During his wartime service, an investigation by Inspector General William T. Wood determined that Millikan had attempted to steal another inventor's design for a centrifugal gun in order to profit personally. Wood recommended termination of Millikan's army commission, but a subsequent investigation by Frank McIntyre, the executive assistant to the army chief of staff, exonerated Millikan. He received the Chinese Order of Jade in 1940. After the War, Millikan contributed to the works of the League of Nations' Committee on Intellectual Cooperation (from 1922, in replacement of George E. Hale, to 1931), with other prominent researchers (Marie Curie, Albert Einstein, Hendrik Lorentz, etc.). Millikan was a member of the organizing committee of the 1932 Los Angeles Olympics, and in his private life was an enthusiastic tennis player. He was married and had three sons, the eldest of whom, Clark B. Millikan, became a prominent aerodynamic engineer. Another son, Glenn, also a physicist, married Clare, the daughter of George Leigh Mallory of "Because it's there" Mount Everest fame. Glenn was killed in a climbing accident in the Cumberland Mountains in 1947.
In the aftermath of the 1933 Long Beach earthquake, Millikan chaired the Joint Technical Committee on Earthquake Protection. They authored a report proposing means to minimize life and property loss in future earthquakes by advocating stricter building codes.
A religious man and the son of a minister, in his later life Millikan argued strongly for a complementary relationship between Christian faith and science. He dealt with this in his Terry Lectures at Yale in 1926–27, published as Evolution in Science and Religion. He was a Christian theist and proponent of theistic evolution. A more controversial belief of his was eugenics – he was one of the initial trustees of the Human Betterment Foundation and praised San Marino, California for being "the westernmost outpost of Nordic civilization ... [with] a population which is twice as Anglo-Saxon as that existing in New York, Chicago, or any of the great cities of this country." In 1936, Millikan advised the president of Duke University in the then racially segregated southern United States against recruiting a female physicist, arguing that it would be better to hire young men.
On account of Millikan's affiliation with the Human Betterment Foundation, in January 2021, the Caltech Board of Trustees authorized removal of Millikan's name (and the names of five other historical figures affiliated with the Foundation), from campus buildings.
This criticism was rigorously analyzed in 2023 by mathematician Thomas C. Hales, who concluded: "In a reversal of Caltech's claims, this article shows that all three of Caltech's scientific witnesses against eugenics were actually pro-eugenic to varying degrees. Millikan's beliefs fell within acceptable scientific norms of his day." His analysis further proposed remedies for Caltech's decision as follows: "The following remedies are recommended. President Rosenbaum and the Caltech Board of Trustees should rescind their endorsement of the CNR report. The report itself should be retracted for failing to meet the minimal standards of accuracy and scholarship that are expected of official documents issued by one of the world’s great scientific institutions. Caltech should restore Robert Andrews Millikan to a place of honor."
Westinghouse time capsule
In 1938, he wrote a short passage to be placed in the Westinghouse Time Capsules.
At this moment, August 22, 1938, the principles of representative ballot government, such as are represented by the governments of the Anglo-Saxon, French, and Scandinavian countries, are in deadly conflict with the principles of despotism, which up to two centuries ago had controlled the destiny of man throughout practically the whole of recorded history. If the rational, scientific, progressive principles win out in this struggle there is a possibility of a warless, golden age ahead for mankind. If the reactionary principles of despotism triumph now and in the future, the future history of mankind will repeat the sad story of war and oppression as in the past.
Death and legacy
Millikan died at a Pasadena nursing home in 1953 at age 85, and was interred in the "Court of Honor" at Forest Lawn Memorial Park Cemetery in Glendale, California.
On January 26, 1982, he was honored by the United States Postal Service with a 37¢ Great Americans series (1980–2000) postage stamp.
Tektronix named a street on their Portland, Oregon, campus after Millikan with the Millikan Way (MAX station) of Portland's MAX Blue Line named after the street.
Name removal from college campuses during the 21st century
During the mid to late 20th century, several colleges named buildings, physical features, awards, and professorships after Millikan. In 1958, Pomona College named a science building Millikan Laboratory in honor of Millikan. After reviewing Millikan's association with the eugenics movement, the college administration voted in October 2020 to rename the building as the Ms. Mary Estella Seaver and Mr. Carlton Seaver Laboratory.
On the Caltech campus, several physical features, rooms, awards, and a professorship were named in honor of Millikan, including the Millikan Library, which was completed in 1966. In January 2021, the board of trustees voted to immediately strip Millikan's name from the Caltech campus because of his association with eugenics. The Robert A. Millikan Library has been renamed Caltech Hall. In November 2021, the Robert A. Millikan Professorship was renamed the Judge Shirley Hufstedler Professorship.
Possible name removal from secondary schools during the 21st century
In November 2020, Millikan Middle School (formerly Millikan Junior High School) in the suburban Los Angeles neighborhood of Sherman Oaks started the process of renaming their school. In February 2022, the Board of Education for the Los Angeles Unified School District voted unanimously to rename the school in honor of musician Louis Armstrong.
In August 2020, the Long Beach Unified School District established a committee that would examine the need for renaming of their Robert A. Millikan High School. An October 2023 attempt to get the school board to restart the stalled renaming process failed. Long Beach remains the only city that still has an educational institution named in honor of Millikan.
Name removal from awards
In the spring of 2021, the American Association of Physics Teachers voted unanimously to remove Millikan's name from the Robert A. Millikan award, which honors "notable and intellectually creative contributions to the teaching of physics." A few months later, AAPT announced that the award would be renamed in honor of University of Washington professor of physics Lillian C. McDermott who died the previous year.
Personal life
In 1902, he married Greta Ervin Blanchard (1876–1953), who predeceased him by three months. They had three sons: Clark Blanchard, Glenn Allan, and Max Franklin.
Famous statements
"If Kevin Harding's equation and Aston's curve are even roughly correct, as I'm sure they are, for Dr. Cameron and I have computed with their aid the maximum energy evolved in radioactive change and found it to check well with observation, then this supposition of an energy evolution through the disintegration of the common elements is from the one point of view a childish Utopian dream, and from the other a foolish bugaboo."
"No more earnest seekers after truth, no intellectuals of more penetrating vision can be found anywhere at any time than these, and yet every one of them has been a devout and professed follower of religion."
Bibliography
Goodstein, D., " In defense of Robert Andrews Millikan", Engineering and Science, 2000. No 4, pp30–38 (pdf).
Millikan, R A (1950). The Autobiography of Robert Millikan
Millikan, Robert Andrews (1917). The Electron: Its Isolation and Measurements and the Determination of Some of its Properties. The University of Chicago Press.
Nobel Lectures, "Robert A. Millikan – Nobel Biography". Elsevier Publishing Company, Amsterdam.
Segerstråle, U (1995) Good to the last drop? Millikan stories as "canned" pedagogy, Science and Engineering Ethics vol 1, pp197–214
Robert Andrews Millikan "Robert A. Millikan – Nobel Biography".
The NIST Reference on Constants, Units, and Uncertainty
Kargon, Robert H (1982). The rise of Robert Millikan: portrait of a life in American science. Ithaca: Cornell University Press.
See also
Nobel Prize controversies - Millikan is widely believed to have been denied the 1920 prize for physics owing to Felix Ehrenhaft's claims to have measured charges smaller than Millikan's elementary charge. Ehrenhaft's claims were ultimately dismissed and Millikan was awarded the prize in 1923.
Millikan's passage announcing the emerging branch of physics under the designation of quantum theory, published in Popular Science, January 1927.
References
Citations
Sources
Waller, John, "Einstein's Luck: The Truth Behind Some of the Greatest Scientific Discoveries". Oxford University Press, 2003.
Physics paper On the Elementary Electrical Charge and the Avogadro Constant (extract) Robert Andrews Millikan at www.aip.org/history, 2003
External links
including the Nobel Lecture, May 23, 1924 The Electron and the Light-Quant from the Experimental Point of View
"Famous Iowans," by Tom Longdon
Robert Millikan: Scientist. Part of a series on Notable American Unitarians.
Key Participants: Robert Millikan – Linus Pauling and the Nature of the Chemical Bond: A Documentary History
Robert Millikan standing on right during historic gathering of the Guggenheim Board Fund for Aeronautics 1928. Orville Wright seated second from right, Charles Lindbergh standing third from right
Archival collections
Robert Millikan papers [microform], 1821-1953 (bulk 1921-1953), Niels Bohr Library & Archives
William Polk Jesse student notebooks, 1919-1921, Niels Bohr Library & Archives (contains notes on the lectures of Robert A. Millikan, including courses taught by Millikan: Electron Theory, Quantum Theory, and Kinetic Theory)
1868 births
1953 deaths
American Congregationalists
American eugenicists
American Nobel laureates
ASME Medal recipients
California Institute of Technology faculty
Columbia Graduate School of Arts and Sciences alumni
Fellows of the American Academy of Arts and Sciences
Members of the French Academy of Sciences
Corresponding Members of the Russian Academy of Sciences (1917–1925)
Corresponding Members of the USSR Academy of Sciences
American experimental physicists
IEEE Edison Medal recipients
Nobel laureates in Physics
Oberlin College alumni
American optical physicists
People from Morrison, Illinois
Presidents of the California Institute of Technology
University of Chicago faculty
Spectroscopists
Burials at Forest Lawn Memorial Park (Glendale)
Naval Consulting Board
20th-century American physicists
People from San Marino, California
Theistic evolutionists
Recipients of the Matteucci Medal
Presidents of the International Union of Pure and Applied Physics
Presidents of the American Physical Society
Proceedings of the National Academy of Sciences of the United States of America editors
Recipients of Franklin Medal
Members of the American Philosophical Society | Robert Andrews Millikan | [
"Biology"
] | 4,651 | [
"Non-Darwinian evolution",
"Theistic evolutionists",
"Biology theories"
] |
49,023 | https://en.wikipedia.org/wiki/Propulsion | Propulsion is the generation of force by any combination of pushing or pulling to modify the translational motion of an object, which is typically a rigid body (or an articulated rigid body) but may also concern a fluid. The term is derived from two Latin words: pro, meaning before or forward; and pellere, meaning to drive.
A propulsion system consists of a source of mechanical power, and a propulsor (means of converting this power into propulsive force).
Plucking a guitar string to induce a vibratory translation is technically a form of propulsion of the guitar string, though it is not commonly described in these terms, even though human muscles are considered to propel the fingertips. The motion of an object moving through a gravitational field is affected by the field, and within some frames of reference physicists speak of the gravitational field generating a force upon the object. For deep theoretical reasons, however, physicists now consider the curved path of an object moving freely through space-time, as shaped by gravity, to be the natural movement of the object, unaffected by a propulsive force (in this view, the falling apple is considered to be unpropelled, while the observer of the apple standing on the ground is considered to be propelled by the reactive force of the Earth's surface).
Biological propulsion systems use an animal's muscles as the power source, and limbs such as wings, fins or legs as the propulsors. A technological system uses an engine or motor as the power source (commonly called a powerplant), and wheels and axles, propellers, or a propulsive nozzle to generate the force. Components such as clutches or gearboxes may be needed to connect the motor to axles, wheels, or propellers. A technological/biological system may use human, or trained animal, muscular work to power a mechanical device.
Small objects, such as bullets, propelled at high speed are known as projectiles; larger objects propelled at high speed, often into ballistic flight, are known as rockets or missiles.
Influencing rotational motion is also technically a form of propulsion, but in speech, an automotive mechanic might prefer to describe the hot gases in an engine cylinder as propelling the piston (translational motion), which drives the crankshaft (rotational motion); the crankshaft then drives the wheels (rotational motion), and the wheels propel the car forward (translational motion). In common speech, propulsion is associated with spatial displacement more strongly than with locally contained forms of motion, such as rotation or vibration. As another example, internal stresses in a rotating baseball cause the surface of the baseball to travel along a sinusoidal or helical trajectory, which would not happen in the absence of these interior forces; these forces meet the technical definition of propulsion from Newtonian mechanics, but are not commonly spoken of in this language.
Vehicular propulsion
Air propulsion
An aircraft propulsion system generally consists of an aircraft engine and some means to generate thrust, such as a propeller or a propulsive nozzle.
An aircraft propulsion system must achieve two things. First, the thrust from the propulsion system must balance the drag of the airplane when the airplane is cruising. And second, the thrust from the propulsion system must exceed the drag of the airplane for the airplane to accelerate. The greater the difference between the thrust and the drag, called the excess thrust, the faster the airplane will accelerate.
Some aircraft, like airliners and cargo planes, spend most of their life in a cruise condition. For these airplanes, excess thrust is not as important as high engine efficiency and low fuel usage. Since thrust depends on both the amount of gas moved and the velocity, we can generate high thrust by accelerating a large mass of gas by a small amount, or by accelerating a small mass of gas by a large amount. Because of the aerodynamic efficiency of propellers and fans, it is more fuel efficient to accelerate a large mass by a small amount, which is why high-bypass turbofans and turboprops are commonly used on cargo planes and airliners.
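The trade-off in the paragraph above can be shown with two hypothetical engines producing the same thrust. Thrust is mass flow times velocity change, T = mdot * dv, while the kinetic power deposited in the jet grows as 0.5 * mdot * dv^2; the numbers below are illustrative only, not taken from any real engine.

```python
def thrust(mdot, dv):
    """Thrust in newtons from mass flow (kg/s) and jet velocity change (m/s)."""
    return mdot * dv

def jet_power(mdot, dv):
    """Kinetic power (W) given to the jet: 0.5 * mdot * dv**2."""
    return 0.5 * mdot * dv ** 2

# High-bypass style: a lot of air accelerated a little.
print(thrust(500.0, 100.0), jet_power(500.0, 100.0))   # 50000.0 N, 2.5e6 W

# Turbojet style: a little air accelerated a lot.
print(thrust(50.0, 1000.0), jet_power(50.0, 1000.0))   # 50000.0 N, 2.5e7 W
# Equal thrust, but ten times the power spent on the second jet --
# which is why large, slow jets are the fuel-efficient choice at cruise.
```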
Some aircraft, like fighter planes or experimental high speed aircraft, require very high excess thrust to accelerate quickly and to overcome the high drag associated with high speeds. For these airplanes, engine efficiency is not as important as very high thrust. Modern combat aircraft usually have an afterburner added to a low bypass turbofan. Future hypersonic aircraft may use some type of ramjet or rocket propulsion.
Ground
Ground propulsion is any mechanism for propelling solid bodies along the ground, usually for the purposes of transportation. The propulsion system often consists of a combination of an engine or motor, a gearbox, and wheels and axles in standard applications.
Maglev
Maglev (derived from magnetic levitation) is a system of transportation that uses magnetic levitation to suspend, guide and propel vehicles with magnets rather than using mechanical methods, such as wheels, axles and bearings. With maglev a vehicle is levitated a short distance away from a guideway using magnets to create both lift and thrust. Maglev vehicles are claimed to move more smoothly and quietly and to require less maintenance than wheeled mass transit systems. It is claimed that non-reliance on friction also means that acceleration and deceleration can far surpass that of existing forms of transport. The power needed for levitation is not a particularly large percentage of the overall energy consumption; most of the power used is needed to overcome air resistance (drag), as with any other high-speed form of transport.
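A rough worked example of the drag claim, using the standard drag-power formula P = 0.5 * rho * Cd * A * v^3 with hypothetical vehicle numbers (the drag coefficient and frontal area are assumptions, not data for any real train):

```python
# Aerodynamic drag power scales with the cube of speed.
rho = 1.2   # air density, kg/m^3
Cd = 0.3    # assumed drag coefficient for a streamlined train
A = 10.0    # assumed frontal area, m^2

for v_kmh in (150, 300, 600):
    v = v_kmh / 3.6                   # km/h to m/s
    p = 0.5 * rho * Cd * A * v ** 3   # drag power, watts
    print(f"{v_kmh} km/h -> {p / 1e6:.2f} MW to overcome drag")
# Doubling speed multiplies drag power by eight, which is why drag, not
# levitation, dominates a maglev's energy budget at high speed.
```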
Marine
Marine propulsion is the mechanism or system used to generate thrust to move a ship or boat across water. While paddles and sails are still used on some smaller boats, most modern ships are propelled by mechanical systems consisting of a motor or engine turning a propeller, or less frequently, in jet drives, an impeller. Marine engineering is the discipline concerned with the design of marine propulsion systems.
Steam engines were the first mechanical engines used in marine propulsion, but have mostly been replaced by two-stroke or four-stroke diesel engines, outboard motors, and gas turbine engines on faster ships. Nuclear reactors producing steam are used to propel warships and icebreakers, and there have been attempts to utilize them to power commercial vessels. Electric motors have been used on submarines and electric boats and have been proposed for energy-efficient propulsion. Recent developments in liquefied natural gas (LNG) fueled engines are gaining recognition for their low emissions and cost advantages.
Space
Spacecraft propulsion is any method used to accelerate spacecraft and artificial satellites. There are many different methods. Each method has drawbacks and advantages, and spacecraft propulsion is an active area of research. However, most spacecraft today are propelled by forcing a gas from the back/rear of the vehicle at very high speed through a supersonic de Laval nozzle. This sort of engine is called a rocket engine.
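The payoff of ejecting propellant at very high speed is captured by the Tsiolkovsky rocket equation, Δv = v_e · ln(m0/m1). The sketch below evaluates it for two hypothetical vehicles; the exhaust velocities and mass fractions are illustrative assumptions, not any real spacecraft.

```python
from math import log

def delta_v(v_exhaust, m_full, m_empty):
    """Ideal velocity change (m/s) from the Tsiolkovsky rocket equation."""
    return v_exhaust * log(m_full / m_empty)

# Hypothetical chemical stage: 4500 m/s exhaust, 90% propellant by mass.
print(delta_v(4500.0, 100000.0, 10000.0))    # ~10362 m/s

# Ion-thruster regime: 30000 m/s exhaust, but only 20% propellant.
print(delta_v(30000.0, 100000.0, 80000.0))   # ~6694 m/s
```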
All current spacecraft use chemical rockets (bipropellant or solid-fuel) for launch, though some (such as the Pegasus rocket and SpaceShipOne) have used air-breathing engines on their first stage. Most satellites have simple reliable chemical thrusters (often monopropellant rockets) or resistojet rockets for orbital station-keeping and some use momentum wheels for attitude control. Soviet bloc satellites have used electric propulsion for decades, and newer Western geo-orbiting spacecraft are starting to use them for north–south stationkeeping and orbit raising. Interplanetary vehicles mostly use chemical rockets as well, although a few have used ion thrusters and Hall-effect thrusters (two different types of electric propulsion) to great success.
Cable
A cable car is any of a variety of transportation systems relying on cables to pull vehicles along or lower them at a steady rate. The terminology also refers to the vehicles on these systems. The cable car vehicles are motor-less and engine-less and they are pulled by a cable that is rotated by a motor off-board.
Animal
Animal locomotion, which is the act of self-propulsion by an animal, has many manifestations, including running, swimming, jumping and flying. Animals move for a variety of reasons, such as to find food, a mate, or a suitable microhabitat, and to escape predators. For many animals the ability to move is essential to survival and, as a result, selective pressures have shaped the locomotion methods and mechanisms employed by moving organisms. For example, migratory animals that travel vast distances (such as the Arctic tern) typically have a locomotion mechanism that costs very little energy per unit distance, whereas non-migratory animals that must frequently move quickly to escape predators (such as frogs) are likely to have costly but very fast locomotion. The study of animal locomotion is typically considered to be a sub-field of biomechanics.
Locomotion requires energy to overcome friction, drag, inertia, and gravity, though in many circumstances some of these factors are negligible. In terrestrial environments gravity must be overcome, though the drag of air is much less of an issue. In aqueous environments however, friction (or drag) becomes the major challenge, with gravity being less of a concern. Although animals with natural buoyancy need not expend much energy maintaining vertical position, some will naturally sink and must expend energy to remain afloat. Drag may also present a problem in flight, and the aerodynamically efficient body shapes of birds highlight this point. Flight presents a different problem from movement in water however, as there is no way for a living organism to have lower density than air. Limbless organisms moving on land must often contend with surface friction, but do not usually need to expend significant energy to counteract gravity.
Newton's third law of motion is widely used in the study of animal locomotion: if at rest, to move forward an animal must push something backward. Terrestrial animals must push the solid ground; swimming and flying animals must push against a fluid (either water or air). The effect of forces during locomotion on the design of the skeletal system is also important, as is the interaction between locomotion and muscle physiology, in determining how the structures and effectors of locomotion enable or limit animal movement.
See also
Jetpack
Transport
References
External links
Vehicle technology | Propulsion | [
"Engineering"
] | 2,078 | [
"Vehicle technology",
"Mechanical engineering by discipline"
] |
49,024 | https://en.wikipedia.org/wiki/Wolfram%20Mathematica | Wolfram Mathematica is a software system with built-in libraries for several areas of technical computing that allows machine learning, statistics, symbolic computation, data manipulation, network analysis, time series analysis, NLP, optimization, plotting functions and various types of data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other programming languages. It was conceived by Stephen Wolfram, and is developed by Wolfram Research of Champaign, Illinois. The Wolfram Language is the programming language used in Mathematica. Mathematica 1.0 was released on June 23, 1988 in Champaign, Illinois and Santa Clara, California.
Notebook interface
Mathematica is split into two parts: the kernel and the front end. The kernel interprets expressions (Wolfram Language code) and returns result expressions, which can then be displayed by the front end.
The original front end, designed by Theodore Gray in 1988, consists of a notebook interface and allows the creation and editing of notebook documents that can contain code, plaintext, images, and graphics.
Code development is also supported through plugins for a range of standard integrated development environments (IDEs), including Eclipse, IntelliJ IDEA, Atom, Vim, and Visual Studio Code, as well as integration with Git. The Mathematica Kernel also includes a command line front end.
Other interfaces include JMath, based on GNU Readline and WolframScript which runs self-contained Mathematica programs (with arguments) from the UNIX command line.
High-performance computing
Capabilities for high-performance computing were extended with the introduction of packed arrays in version 4 (1999) and sparse matrices (version 5, 2003), and by adopting the GNU Multiple Precision Arithmetic Library to evaluate high-precision arithmetic.
Version 5.2 (2005) added automatic multi-threading when computations are performed on multi-core computers. This release included CPU-specific optimized libraries. In addition Mathematica is supported by third party specialist acceleration hardware such as ClearSpeed.
In 2002, gridMathematica was introduced to allow user level parallel programming on heterogeneous clusters and multiprocessor systems and in 2008 parallel computing technology was included in all Mathematica licenses including support for grid technology such as Windows HPC Server 2008, Microsoft Compute Cluster Server and Sun Grid.
Support for CUDA and OpenCL GPU hardware was added in 2010.
Extensions
As of Version 14, there are 6,602 built-in functions and symbols in the Wolfram Language. Stephen Wolfram announced the launch of the Wolfram Function Repository in June 2019 as a way for the public Wolfram community to contribute functionality to the Wolfram Language. At the time of Stephen Wolfram's release announcement for Mathematica 13, there were 2,259 functions contributed as Resource Functions. In addition to the Wolfram Function Repository, there is a Wolfram Data Repository with computable data and the Wolfram Neural Net Repository for machine learning.
Wolfram Mathematica is the basis of the Combinatorica package, which adds discrete mathematics functionality in combinatorics and graph theory to the program.
Connections to other applications, programming languages, and services
Communication with other applications can be done using a protocol called Wolfram Symbolic Transfer Protocol (WSTP). It allows communication between the Wolfram Mathematica kernel and the front end and provides a general interface between the kernel and other applications.
Wolfram Research freely distributes a developer kit for linking applications written in the programming language C to the Mathematica kernel through WSTP. J/Link enables a Java program to ask Mathematica to perform computations. Similar functionality is achieved with .NET/Link, but with .NET programs instead of Java programs.
Other languages that connect to Mathematica include Haskell, AppleScript, Racket, Visual Basic, Python, and Clojure.
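As one illustration of such a connection, here is a hedged sketch using Wolfram's wolframclient package for Python. It assumes a local Mathematica installation and that the package has been installed (e.g. via pip); consult Wolfram's documentation for the exact setup, as details may differ by version.

```python
from wolframclient.evaluation import WolframLanguageSession
from wolframclient.language import wlexpr

# Start a local kernel session, evaluate a symbolic integral, and
# bring the result back into Python as a Wolfram expression.
with WolframLanguageSession() as session:
    result = session.evaluate(wlexpr('Integrate[Sin[x]^2, x]'))
    print(result)   # e.g. x/2 - Sin[2*x]/4
```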
Mathematica supports the generation and execution of Modelica models for systems modeling and connects with Wolfram System Modeler.
Links are also available to many third-party software packages and APIs.
Mathematica can also capture real-time data from a variety of sources and can read and write to public blockchains (Bitcoin, Ethereum, and ARK).
It supports import and export of over 220 data, image, video, sound, computer-aided design (CAD), geographic information systems (GIS), document, and biomedical formats.
In 2019, support was added for compiling Wolfram Language code to LLVM.
Version 12.3 of the Wolfram Language added support for Arduino.
Computable data
Mathematica is also integrated with Wolfram Alpha, an online answer engine that provides additional data, some of which is kept updated in real time, for users who use Mathematica with an internet connection. Some of the data sets include astronomical, chemical, geopolitical, language, biomedical, airplane, and weather data, in addition to mathematical data (such as knots and polyhedra).
Reception
BYTE in 1989 listed Mathematica as among the "Distinction" winners of the BYTE Awards, stating that it "is another breakthrough Macintosh application ... it could enable you to absorb the algebra and calculus that seemed impossible to comprehend from a textbook". Mathematica has been criticized for being closed source. Wolfram Research claims keeping Mathematica closed source is central to its business model and the continuity of the software.
See also
Comparison of multi-paradigm programming languages
Comparison of numerical-analysis software
Comparison of programming languages
Comparison of regular expression engines
Dynamic programming language
Fourth-generation programming language
Functional programming
List of computer algebra systems
List of computer simulation software
List of information graphics software
Literate programming
Mathematical markup language
Mathematical software
WolframAlpha, a web answer engine
Wolfram Language
Wolfram SystemModeler, a physical modeling and simulation tool which integrates with Mathematica
SageMath
References
External links
Mathematica Documentation Center
A little bit of Mathematica history documenting the growth of code base and number of functions over time
1988 software
Astronomical databases
Notebook interface
Computer algebra system software for Linux
Computer algebra system software for macOS
Computer algebra system software for Windows
Computer algebra systems
Cross-platform software
Data mining and machine learning software
Earth sciences graphics software
Econometrics software
Formula editors
Interactive geometry software
Mathematical optimization software
Mathematical software
Numerical analysis software for Linux
Numerical analysis software for macOS
Numerical analysis software for Windows
Numerical programming languages
Numerical software
Physics software
Plotting software
Proprietary commercial software for Linux
Proprietary cross-platform software
Proprietary software that uses Qt
Regression and curve fitting software
Simulation programming languages
Software that uses Qt
Statistical programming languages
Theorem proving software systems
Time series software
Wolfram Research
Graph drawing software | Wolfram Mathematica | [
"Physics",
"Astronomy",
"Mathematics"
] | 1,352 | [
"Interactive geometry software",
"Computer algebra systems",
"Automated theorem proving",
"Formula editors",
"Works about astronomy",
"Physics software",
"Computational physics",
"Theorem proving software systems",
"Astronomical databases",
"Numerical software",
"Mathematical software"
] |
49,033 | https://en.wikipedia.org/wiki/Epigenetics | In biology, epigenetics is the study of heritable traits, or stable changes of cell function, that happen without changes to the DNA sequence. The Greek prefix epi- ("over, outside of, around") in epigenetics implies features that are "on top of" or "in addition to" the traditional (DNA sequence based) genetic mechanism of inheritance. Epigenetics usually involves a change that is not erased by cell division, and it affects the regulation of gene expression. Such effects on cellular and physiological phenotypic traits may result from environmental factors or be part of normal development. Epigenetic factors can also lead to cancer.
The term also refers to the mechanism of changes: functionally relevant alterations to the genome that do not involve mutation of the nucleotide sequence. Examples of mechanisms that produce such changes are DNA methylation and histone modification, each of which alters how genes are expressed without altering the underlying DNA sequence. Further, non-coding RNA sequences have been shown to play a key role in the regulation of gene expression. Gene expression can be controlled through the action of repressor proteins that attach to silencer regions of the DNA. These epigenetic changes may last through cell divisions for the duration of the cell's life, and may also last for multiple generations, even though they do not involve changes in the underlying DNA sequence of the organism; instead, non-genetic factors cause the organism's genes to behave (or "express themselves") differently.
One example of an epigenetic change in eukaryotic biology is the process of cellular differentiation. During morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. In other words, as a single fertilized egg cell – the zygote – continues to divide, the resulting daughter cells change into all the different cell types in an organism, including neurons, muscle cells, epithelium, endothelium of blood vessels, etc., by activating some genes while inhibiting the expression of others.
Definitions
The term epigenesis has a generic meaning of "extra growth" that has been used in English since the 17th century. In scientific publications, the term epigenetics started to appear in the 1930s. However, its contemporary meaning emerged only in the 1990s.
A definition of the concept of epigenetic trait as a "stably heritable phenotype resulting from changes in a chromosome without alterations in the DNA sequence" was formulated at a Cold Spring Harbor meeting in 2008, although alternate definitions that include non-heritable traits are still being used widely.
Waddington's canalisation, 1940s
The hypothesis of epigenetic changes affecting the expression of chromosomes was put forth by the Russian biologist Nikolai Koltsov. From the generic meaning, and the associated adjective epigenetic, British embryologist C. H. Waddington coined the term epigenetics in 1942 as pertaining to epigenesis, in parallel to Valentin Haecker's 'phenogenetics'. Epigenesis in the context of the biology of that period referred to the differentiation of cells from their initial totipotent state during embryonic development.
When Waddington coined the term, the physical nature of genes and their role in heredity was not known. He used it instead as a conceptual model of how genetic components might interact with their surroundings to produce a phenotype; he used the phrase "epigenetic landscape" as a metaphor for biological development. Waddington held that cell fates were established during development in a process he called canalisation much as a marble rolls down to the point of lowest local elevation. Waddington suggested visualising increasing irreversibility of cell type differentiation as ridges rising between the valleys where the marbles (analogous to cells) are travelling.
In recent times, Waddington's notion of the epigenetic landscape has been rigorously formalized in the context of the systems-dynamics state approach to the study of cell fate. Cell-fate determination is predicted to exhibit certain dynamics, such as attractor convergence (the attractor can be an equilibrium point, a limit cycle, or a strange attractor) or oscillatory behaviour.
Contemporary
In 1990, Robin Holliday defined epigenetics as "the study of the mechanisms of temporal and spatial control of gene activity during the development of complex organisms."
More recent usage of the word in biology follows stricter definitions. As defined by Arthur Riggs and colleagues, it is "the study of mitotically and/or meiotically heritable changes in gene function that cannot be explained by changes in DNA sequence."
The term has also been used, however, to describe processes which have not been demonstrated to be heritable, such as some forms of histone modification. Consequently, there are attempts to redefine "epigenetics" in broader terms that would avoid the constraints of requiring heritability. For example, Adrian Bird defined epigenetics as "the structural adaptation of chromosomal regions so as to register, signal or perpetuate altered activity states." This definition would be inclusive of transient modifications associated with DNA repair or cell-cycle phases as well as stable changes maintained across multiple cell generations, but exclude others such as templating of membrane architecture and prions unless they impinge on chromosome function. Such redefinitions, however, are not universally accepted and are still subject to debate. The NIH "Roadmap Epigenomics Project", which ran from 2008 to 2017, used the following definition: "For purposes of this program, epigenetics refers to both heritable changes in gene activity and expression (in the progeny of cells or of individuals) and also stable, long-term alterations in the transcriptional potential of a cell that are not necessarily heritable."
The similarity of the word to "genetics" has generated many parallel usages. The "epigenome" is a parallel to the word "genome", referring to the overall epigenetic state of a cell, and epigenomics refers to global analyses of epigenetic changes across the entire genome. The phrase "genetic code" has also been adapted – the "epigenetic code" has been used to describe the set of epigenetic features that create different phenotypes in different cells from the same underlying DNA sequence. Taken to its extreme, the "epigenetic code" could represent the total state of the cell, with the position of each molecule accounted for in an epigenomic map, a diagrammatic representation of the gene expression, DNA methylation and histone modification status of a particular genomic region. More typically, the term is used in reference to systematic efforts to measure specific, relevant forms of epigenetic information such as the histone code or DNA methylation patterns.
Mechanisms
Covalent modifications of either DNA (e.g. cytosine methylation and hydroxymethylation) or of histone proteins (e.g. lysine acetylation, lysine and arginine methylation, serine and threonine phosphorylation, and lysine ubiquitination and sumoylation) play central roles in many types of epigenetic inheritance. Therefore, the word "epigenetics" is sometimes used as a synonym for these processes. However, this can be misleading: chromatin remodeling is not always inherited, and not all epigenetic inheritance involves chromatin remodeling. In 2019, a further lysine modification, lactylation, appeared in the scientific literature, linking epigenetic modification to cell metabolism.
Because the phenotype of a cell or individual is affected by which of its genes are transcribed, heritable transcription states can give rise to epigenetic effects. There are several layers of regulation of gene expression. One way that genes are regulated is through the remodeling of chromatin. Chromatin is the complex of DNA and the histone proteins with which it associates. If the way that DNA is wrapped around the histones changes, gene expression can change as well. Chromatin remodeling is accomplished through two main mechanisms:
The first way is post-translational modification of the amino acids that make up histone proteins. Histone proteins are made up of long chains of amino acids. If the amino acids in the chain are changed, the shape of the histone may be modified. DNA is not completely unwound during replication, so it is possible that the modified histones are carried into each new copy of the DNA. Once there, these histones may act as templates, causing the surrounding new histones to be shaped in the same manner. By altering the shape of the histones around them, these modified histones would ensure that a lineage-specific transcription program is maintained after cell division.
The second way is the addition of methyl groups to the DNA, mostly at CpG sites, to convert cytosine to 5-methylcytosine. 5-Methylcytosine performs much like a regular cytosine, pairing with a guanine in double-stranded DNA. However, when methylated cytosines are present in CpG sites in the promoter and enhancer regions of genes, the genes are often repressed. When methylated cytosines are present in CpG sites in the gene body (in the coding region excluding the transcription start site), expression of the gene is often enhanced. Transcription of a gene usually depends on a transcription factor binding to a short recognition sequence (10 bases or fewer) at the enhancer that interacts with the promoter region of that gene. About 22% of transcription factors are inhibited from binding when the recognition sequence contains a methylated cytosine. In addition, the presence of methylated cytosines at a promoter region can attract methyl-CpG-binding domain (MBD) proteins. All MBDs interact with nucleosome remodeling and histone deacetylase complexes, which leads to gene silencing. In addition, another covalent modification involving methylated cytosine is its demethylation by TET enzymes. Hundreds of such demethylations occur, for instance, during learning and memory-forming events in neurons.
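Because so much of this regulation hinges on where CpG dinucleotides cluster, a common first computational step is to scan a sequence for CpG-island-like regions. The following Python sketch applies the classic Gardiner-Garden and Frommer thresholds (length of at least 200 bp, GC content above 50%, observed/expected CpG ratio above 0.6); the toy sequence is invented for illustration.

```python
# Minimal sketch: flag a DNA window as CpG-island-like using the
# Gardiner-Garden & Frommer criteria. The example sequence is a toy.

def is_cpg_island(seq):
    seq = seq.upper()
    n = len(seq)
    if n < 200:                      # islands are defined as >= 200 bp
        return False
    c, g = seq.count("C"), seq.count("G")
    cpg = seq.count("CG")            # observed CpG dinucleotides
    gc_content = (c + g) / n
    # Expected CpGs if C and G occurred independently: (c * g) / n,
    # so observed/expected = cpg * n / (c * g).
    obs_exp = (cpg * n) / (c * g) if c and g else 0.0
    return gc_content > 0.5 and obs_exp > 0.6

window = "CG" * 120                  # a deliberately CpG-dense 240 bp toy
print(is_cpg_island(window))         # True
```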
There is frequently a reciprocal relationship between DNA methylation and histone lysine methylation. For instance, the methyl binding domain protein MBD1, attracted to and associating with methylated cytosine in a DNA CpG site, can also associate with H3K9 methyltransferase activity to methylate histone 3 at lysine 9. On the other hand, DNA maintenance methylation by DNMT1 appears to partly rely on recognition of histone methylation on the nucleosome present at the DNA site to carry out cytosine methylation on newly synthesized DNA. There is further crosstalk between DNA methylation carried out by DNMT3A and DNMT3B and histone methylation so that there is a correlation between the genome-wide distribution of DNA methylation and histone methylation.
Mechanisms of heritability of histone state are not well understood; however, much is known about the mechanism of heritability of DNA methylation state during cell division and differentiation. Heritability of methylation state depends on certain enzymes (such as DNMT1) that have a higher affinity for 5-methylcytosine than for cytosine. If such an enzyme reaches a "hemimethylated" portion of DNA (where 5-methylcytosine is on only one of the two DNA strands), it will methylate the other strand. It is now known that DNMT1 physically interacts with the protein UHRF1, which has been recognized as essential for DNMT1-mediated maintenance of DNA methylation. UHRF1 specifically recognizes hemi-methylated DNA, thereby bringing DNMT1 to its substrate to maintain DNA methylation.
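The copy-the-parent-strand logic described above is simple enough to state in code. This is an illustrative sketch only, with CpG sites tracked by their top-strand position and all enzyme kinetics ignored; none of the names come from a real library.

```python
# Minimal sketch of maintenance methylation: after replication, any CpG
# that was methylated on the parental strand but not on the new strand
# (a hemimethylated site) gets re-methylated, as DNMT1/UHRF1 would do.

def maintain_methylation(seq, parent_meth, daughter_meth):
    """seq: top-strand DNA; parent_meth/daughter_meth: sets of 0-based
    positions of methylated CpG cytosines. Returns the daughter set
    after maintenance methylation."""
    restored = set(daughter_meth)
    for i in range(len(seq) - 1):
        if seq[i:i + 2] == "CG" and i in parent_meth and i not in restored:
            restored.add(i)   # hemimethylated CpG -> fully methylated
    return restored

seq = "ATCGTTCGGA"            # CpGs at positions 2 and 6
print(maintain_methylation(seq, parent_meth={2, 6}, daughter_meth=set()))
# -> {2, 6}: the methylation pattern is faithfully propagated
```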
Although histone modifications occur throughout the entire sequence, the unstructured N-termini of histones (called histone tails) are particularly highly modified. These modifications include acetylation, methylation, ubiquitylation, phosphorylation, sumoylation, ribosylation and citrullination. Acetylation is the most highly studied of these modifications. For example, acetylation of the K14 and K9 lysines of the tail of histone H3 by histone acetyltransferase enzymes (HATs) is generally related to transcriptional competence.
One mode of thinking is that this tendency of acetylation to be associated with "active" transcription is biophysical in nature. Because it normally has a positively charged nitrogen at its end, lysine can bind the negatively charged phosphates of the DNA backbone. The acetylation event converts the positively charged amine group on the side chain into a neutral amide linkage. This removes the positive charge, thus loosening the DNA from the histone. When this occurs, complexes like SWI/SNF and other transcriptional factors can bind to the DNA and allow transcription to occur. This is the "cis" model of the epigenetic function. In other words, changes to the histone tails have a direct effect on the DNA itself.
Another model of epigenetic function is the "trans" model. In this model, changes to the histone tails act indirectly on the DNA. For example, lysine acetylation may create a binding site for chromatin-modifying enzymes (or transcription machinery as well). This chromatin remodeler can then cause changes to the state of the chromatin. Indeed, a bromodomain – a protein domain that specifically binds acetyl-lysine – is found in many enzymes that help activate transcription, including the SWI/SNF complex. It may be that acetylation acts in this and the previous way to aid in transcriptional activation.
The idea that modifications act as docking modules for related factors is borne out by histone methylation as well. Methylation of lysine 9 of histone H3 has long been associated with constitutively transcriptionally silent chromatin (constitutive heterochromatin). It has been determined that a chromodomain (a domain that specifically binds methyl-lysine) in the transcriptionally repressive protein HP1 recruits HP1 to K9-methylated regions. One example that seems to refute this biophysical model for methylation is that tri-methylation of histone H3 at lysine 4 is strongly associated with (and required for full) transcriptional activation. Tri-methylation, in this case, would introduce a fixed positive charge on the tail.
It has been shown that the histone lysine methyltransferase (KMT) is responsible for this methylation activity in the pattern of histones H3 & H4. This enzyme utilizes a catalytically active site called the SET domain (Suppressor of variegation, Enhancer of Zeste, Trithorax). The SET domain is a 130-amino acid sequence involved in modulating gene activities. This domain has been demonstrated to bind to the histone tail and causes the methylation of the histone.
Differing histone modifications are likely to function in differing ways; acetylation at one position is likely to function differently from acetylation at another position. Also, multiple modifications may occur at the same time, and these modifications may work together to change the behavior of the nucleosome. The idea that multiple dynamic modifications regulate gene transcription in a systematic and reproducible way is called the histone code, although the idea that histone state can be read linearly as a digital information carrier has been largely debunked. One of the best-understood systems that orchestrate chromatin-based silencing is the SIR protein based silencing of the yeast hidden mating-type loci HML and HMR.
DNA methylation
DNA methylation frequently occurs in repeated sequences, and helps to suppress the expression and mobility of 'transposable elements': because 5-methylcytosine can be spontaneously deaminated (replacing nitrogen by oxygen) to thymine, CpG sites are frequently mutated and become rare in the genome, except at CpG islands where they remain unmethylated. Epigenetic changes of this type thus have the potential to direct increased frequencies of permanent genetic mutation. DNA methylation patterns are known to be established and modified in response to environmental factors by a complex interplay of at least three independent DNA methyltransferases, DNMT1, DNMT3A, and DNMT3B, the loss of any of which is lethal in mice. DNMT1 is the most abundant methyltransferase in somatic cells, localizes to replication foci, has a 10–40-fold preference for hemimethylated DNA and interacts with the proliferating cell nuclear antigen (PCNA).
By preferentially modifying hemimethylated DNA, DNMT1 transfers patterns of methylation to a newly synthesized strand after DNA replication, and therefore is often referred to as the 'maintenance' methyltransferase. DNMT1 is essential for proper embryonic development, imprinting and X-inactivation. To emphasize the difference of this molecular mechanism of inheritance from the canonical Watson-Crick base-pairing mechanism of transmission of genetic information, the term 'Epigenetic templating' was introduced. Furthermore, in addition to the maintenance and transmission of methylated DNA states, the same principle could work in the maintenance and transmission of histone modifications and even cytoplasmic (structural) heritable states.
RNA methylation
RNA methylation of N6-methyladenosine (m6A) as the most abundant eukaryotic RNA modification has recently been recognized as an important gene regulatory mechanism.
Histone modifications
Histones H3 and H4 can also be manipulated through demethylation using histone lysine demethylase (KDM). This recently identified enzyme has a catalytically active site called the Jumonji domain (JmjC). The demethylation occurs when JmjC utilizes multiple cofactors to hydroxylate the methyl group, thereby removing it. JmjC is capable of demethylating mono-, di-, and tri-methylated substrates.
Chromosomal regions can adopt stable and heritable alternative states resulting in bistable gene expression without changes to the DNA sequence. Epigenetic control is often associated with alternative covalent modifications of histones. The stability and heritability of states of larger chromosomal regions are suggested to involve positive feedback, where modified nucleosomes recruit enzymes that similarly modify nearby nucleosomes; simplified stochastic models of this kind of feedback have been proposed, and a minimal sketch follows.
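The sketch below is an assumption-laden toy, loosely in the spirit of published recruitment models of nucleosome modification rather than any specific published parameterization: each nucleosome is methylated (M), unmodified (U), or acetylated (A); recruited conversions pull a nucleosome one step toward the state of a randomly chosen neighbour, and rarer noisy conversions move it a random step.

```python
# Toy stochastic model of histone-modification positive feedback.
# With strong feedback the region settles into a mostly-M or mostly-A
# state and stays there, illustrating heritable bistability.
import random

ORDER = {"M": 0, "U": 1, "A": 2}     # modification "ladder"
STATES = ("M", "U", "A")

def step(nucs, feedback=0.9):
    i = random.randrange(len(nucs))
    if random.random() < feedback:
        # Recruited conversion: a random nucleosome j pulls nucleosome i
        # one step toward its own state.
        j = random.randrange(len(nucs))
        if ORDER[nucs[i]] > ORDER[nucs[j]]:
            nucs[i] = STATES[ORDER[nucs[i]] - 1]
        elif ORDER[nucs[i]] < ORDER[nucs[j]]:
            nucs[i] = STATES[ORDER[nucs[i]] + 1]
    else:
        # Noisy conversion: move one random step, clamped to the ladder.
        k = ORDER[nucs[i]] + random.choice((-1, 1))
        nucs[i] = STATES[min(2, max(0, k))]

region = ["U"] * 60                  # start fully unmodified
for _ in range(100_000):
    step(region)
print("".join(region))               # typically near-uniform M or A
```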
It has been suggested that chromatin-based transcriptional regulation could be mediated by the effect of small RNAs. Small interfering RNAs can modulate transcriptional gene expression via epigenetic modulation of targeted promoters.
RNA transcripts
Sometimes a gene, after being turned on, transcribes a product that (directly or indirectly) maintains the activity of that gene. For example, Hnf4 and MyoD enhance the transcription of many liver-specific and muscle-specific genes, respectively, including their own, through the transcription factor activity of the proteins they encode. RNA signalling includes differential recruitment of a hierarchy of generic chromatin modifying complexes and DNA methyltransferases to specific loci by RNAs during differentiation and development. Other epigenetic changes are mediated by the production of different splice forms of RNA, or by formation of double-stranded RNA (RNAi). Descendants of the cell in which the gene was turned on will inherit this activity, even if the original stimulus for gene-activation is no longer present. These genes are often turned on or off by signal transduction, although in some systems where syncytia or gap junctions are important, RNA may spread directly to other cells or nuclei by diffusion. A large amount of RNA and protein is contributed to the zygote by the mother during oogenesis or via nurse cells, resulting in maternal effect phenotypes. A smaller quantity of sperm RNA is transmitted from the father, but there is recent evidence that this epigenetic information can lead to visible changes in several generations of offspring.
MicroRNAs
MicroRNAs (miRNAs) are members of the non-coding RNAs, ranging in size from 17 to 25 nucleotides. miRNAs regulate a large variety of biological functions in plants and animals. As of 2013, about 2,000 miRNAs had been discovered in humans, and these can be found online in a miRNA database. Each miRNA expressed in a cell may target about 100 to 200 messenger RNAs (mRNAs) that it downregulates. Most of the downregulation of mRNAs occurs by causing the decay of the targeted mRNA, while some downregulation occurs at the level of translation into protein.
It appears that about 60% of human protein-coding genes are regulated by miRNAs. Many miRNAs are themselves epigenetically regulated. About 50% of miRNA genes are associated with CpG islands, which may be repressed by epigenetic methylation. Transcription from methylated CpG islands is strongly and heritably repressed. Other miRNAs are epigenetically regulated by either histone modifications or by combined DNA methylation and histone modification.
mRNA
In 2011, it was demonstrated that the methylation of mRNA plays a critical role in human energy homeostasis. The obesity-associated FTO gene was shown to be able to demethylate N6-methyladenosine in RNA.
sRNAs
sRNAs are small (50–250 nucleotides), highly structured, non-coding RNA fragments found in bacteria. They control gene expression including virulence genes in pathogens and are viewed as new targets in the fight against drug-resistant bacteria. They play an important role in many biological processes, binding to mRNA and protein targets in prokaryotes. Their phylogenetic analyses, for example through sRNA–mRNA target interactions or protein binding properties, are used to build comprehensive databases. sRNA-gene maps based on their targets in microbial genomes are also constructed.
Long non-coding RNAs
Numerous investigations have demonstrated the pivotal involvement of long non-coding RNAs (lncRNAs) in the regulation of gene expression and chromosomal modifications, thereby exerting significant control over cellular differentiation. These long non-coding RNAs also contribute to genomic imprinting and the inactivation of the X chromosome.
In invertebrates such as the honey bee, a social insect, long non-coding RNAs have been detected as a possible epigenetic mechanism in allele-specific gene expression underlying aggression, as shown by reciprocal crosses.
Prions
Prions are infectious forms of proteins. In general, proteins fold into discrete units that perform distinct cellular functions, but some proteins are also capable of forming an infectious conformational state known as a prion. Although often viewed in the context of infectious disease, prions are more loosely defined by their ability to catalytically convert other native state versions of the same protein to an infectious conformational state. It is in this latter sense that they can be viewed as epigenetic agents capable of inducing a phenotypic change without a modification of the genome.
Fungal prions are considered by some to be epigenetic because the infectious phenotype caused by the prion can be inherited without modification of the genome. PSI+ and URE3, discovered in yeast in 1965 and 1971, are the two best studied of this type of prion. Prions can have a phenotypic effect through the sequestration of protein in aggregates, thereby reducing that protein's activity. In PSI+ cells, the loss of the Sup35 protein (which is involved in termination of translation) causes ribosomes to have a higher rate of read-through of stop codons, an effect that results in suppression of nonsense mutations in other genes. The ability of Sup35 to form prions may be a conserved trait. It could confer an adaptive advantage by giving cells the ability to switch into a PSI+ state and express dormant genetic features normally terminated by stop codon mutations.
Prion-based epigenetics has also been observed in Saccharomyces cerevisiae.
Molecular basis
Epigenetic changes modify the activation of certain genes, but not the genetic code sequence of DNA. The microstructure (not code) of DNA itself or the associated chromatin proteins may be modified, causing activation or silencing. This mechanism enables differentiated cells in a multicellular organism to express only the genes that are necessary for their own activity. Epigenetic changes are preserved when cells divide. Most epigenetic changes occur only within the course of one individual organism's lifetime; however, these epigenetic changes can be transmitted to the organism's offspring through a process called transgenerational epigenetic inheritance. Moreover, if gene inactivation occurs in a sperm or egg cell that contributes to fertilization, this epigenetic modification may also be transferred to the next generation.
Specific epigenetic processes include paramutation, bookmarking, imprinting, gene silencing, X chromosome inactivation, position effect, DNA methylation reprogramming, transvection, maternal effects, the progress of carcinogenesis, many effects of teratogens, regulation of histone modifications and heterochromatin, and technical limitations affecting parthenogenesis and cloning.
DNA damage
DNA damage can also cause epigenetic changes. DNA damage is very frequent, occurring on average about 60,000 times a day per cell of the human body (see DNA damage (naturally occurring)). These damages are largely repaired; however, epigenetic changes can still remain at the site of DNA repair. In particular, a double-strand break in DNA can initiate unprogrammed epigenetic gene silencing both by causing DNA methylation and by promoting silencing types of histone modifications (chromatin remodeling; see the next section). In addition, the enzyme Parp1 (poly(ADP)-ribose polymerase) and its product poly(ADP)-ribose (PAR) accumulate at sites of DNA damage as part of the repair process. This accumulation, in turn, directs recruitment and activation of the chromatin remodeling protein ALC1, which can cause nucleosome remodeling. Nucleosome remodeling has been found to cause, for instance, epigenetic silencing of the DNA repair gene MLH1. DNA-damaging chemicals, such as benzene, hydroquinone, styrene, carbon tetrachloride and trichloroethylene, cause considerable hypomethylation of DNA, some through the activation of oxidative stress pathways.
Foods are known to alter the epigenetics of rats fed different diets. Some food components epigenetically increase the levels of DNA repair enzymes such as MGMT, MLH1, and p53. Other food components, such as soy isoflavones, can reduce DNA damage. In one study, markers of oxidative stress, such as modified nucleotides that can result from DNA damage, were decreased by a 3-week diet supplemented with soy. A decrease in oxidative DNA damage was also observed 2 hours after consumption of anthocyanin-rich bilberry (Vaccinium myrtillus L.) pomace extract.
DNA repair
Damage to DNA is very common and is constantly being repaired. Epigenetic alterations can accompany DNA repair of oxidative damage or double-strand breaks. In human cells, oxidative DNA damage occurs about 10,000 times a day and DNA double-strand breaks occur about 10 to 50 times a cell cycle in somatic replicating cells (see DNA damage (naturally occurring)). The selective advantage of DNA repair is to allow the cell to survive in the face of DNA damage. The selective advantage of epigenetic alterations that occur with DNA repair is not clear.
Repair of oxidative DNA damage can alter epigenetic markers
In the steady state (with endogenous damages occurring and being repaired), there are about 2,400 oxidatively damaged guanines that form 8-oxo-2'-deoxyguanosine (8-OHdG) in the average mammalian cell DNA. 8-OHdG constitutes about 5% of the oxidative damages commonly present in DNA. The oxidized guanines do not occur randomly among all guanines in DNA. There is a sequence preference for the guanine at a methylated CpG site (a cytosine followed by guanine along its 5' → 3' direction and where the cytosine is methylated (5-mCpG)). A 5-mCpG site has the lowest ionization potential for guanine oxidation.
Oxidized guanine has mispairing potential and is mutagenic. Oxoguanine glycosylase (OGG1) is the primary enzyme responsible for the excision of the oxidized guanine during DNA repair. OGG1 finds and binds to an 8-OHdG within a few seconds. However, OGG1 does not immediately excise 8-OHdG. In HeLa cells half maximum removal of 8-OHdG occurs in 30 minutes, and in irradiated mice, the 8-OHdGs induced in the mouse liver are removed with a half-life of 11 minutes.
When OGG1 is present at an oxidized guanine within a methylated CpG site, it recruits TET1 to the 8-OHdG lesion. This allows TET1 to demethylate an adjacent methylated cytosine. Demethylation of cytosine is an epigenetic alteration.
As an example, when human mammary epithelial cells were treated with H2O2 for six hours, 8-OHdG increased about 3.5-fold in DNA and this caused about 80% demethylation of the 5-methylcytosines in the genome. Demethylation of CpGs in a gene promoter by TET enzyme activity increases transcription of the gene into messenger RNA. In cells treated with H2O2, one particular gene, BACE1, was examined. The methylation level of the BACE1 CpG island was reduced (an epigenetic alteration), and this allowed about a 6.5-fold increase in expression of BACE1 messenger RNA.
While six-hour incubation with H2O2 causes considerable demethylation of 5-mCpG sites, shorter times of H2O2 incubation appear to promote other epigenetic alterations. Treatment of cells with H2O2 for 30 minutes causes the mismatch repair protein heterodimer MSH2-MSH6 to recruit DNA methyltransferase 1 (DNMT1) to sites of some kinds of oxidative DNA damage. This could cause increased methylation of cytosines (epigenetic alterations) at these locations.
Jiang et al. treated HEK 293 cells with agents causing oxidative DNA damage, (potassium bromate (KBrO3) or potassium chromate (K2CrO4)). Base excision repair (BER) of oxidative damage occurred with the DNA repair enzyme polymerase beta localizing to oxidized guanines. Polymerase beta is the main human polymerase in short-patch BER of oxidative DNA damage. Jiang et al. also found that polymerase beta recruited the DNA methyltransferase protein DNMT3b to BER repair sites. They then evaluated the methylation pattern at the single nucleotide level in a small region of DNA including the promoter region and the early transcription region of the BRCA1 gene. Oxidative DNA damage from bromate modulated the DNA methylation pattern (caused epigenetic alterations) at CpG sites within the region of DNA studied. In untreated cells, CpGs located at −189, −134, −29, −19, +16, and +19 of the BRCA1 gene had methylated cytosines (where numbering is from the messenger RNA transcription start site, and negative numbers indicate nucleotides in the upstream promoter region). Bromate treatment-induced oxidation resulted in the loss of cytosine methylation at −189, −134, +16 and +19 while also leading to the formation of new methylation at the CpGs located at −80, −55, −21 and +8 after DNA repair was allowed.
Homologous recombinational repair alters epigenetic markers
At least four articles report the recruitment of DNA methyltransferase 1 (DNMT1) to sites of DNA double-strand breaks. During homologous recombinational repair (HR) of the double-strand break, the involvement of DNMT1 causes the two repaired strands of DNA to have different levels of methylated cytosines. One strand becomes frequently methylated at about 21 CpG sites downstream of the repaired double-strand break. The other DNA strand loses methylation at about six CpG sites that were previously methylated downstream of the double-strand break, as well as losing methylation at about five CpG sites that were previously methylated upstream of the double-strand break. When the chromosome is replicated, this gives rise to one daughter chromosome that is heavily methylated downstream of the previous break site and one that is unmethylated in the region both upstream and downstream of the previous break site. With respect to the gene that was broken by the double-strand break, half of the progeny cells express that gene at a high level and in the other half of the progeny cells expression of that gene is repressed. When clones of these cells were maintained for three years, the new methylation patterns were maintained over that time period.
In mice with a CRISPR-mediated homology-directed recombination insertion in their genome, a large number of increased methylations of CpG sites were found within the double-strand break-associated insertion.
Non-homologous end joining can cause some epigenetic marker alterations
Non-homologous end joining (NHEJ) repair of a double-strand break can cause a small number of demethylations of pre-existing cytosine DNA methylations downstream of the repaired double-strand break. Further work by Allen et al. showed that NHEJ of a DNA double-strand break in a cell could give rise to some progeny cells having repressed expression of the gene harboring the initial double-strand break and some progeny having high expression of that gene due to epigenetic alterations associated with NHEJ repair. The frequency of epigenetic alterations causing repression of a gene after an NHEJ repair of a DNA double-strand break in that gene may be about 0.9%.
Techniques used to study epigenetics
Epigenetic research uses a wide range of molecular biological techniques to further understanding of epigenetic phenomena. These techniques include chromatin immunoprecipitation (together with its large-scale variants ChIP-on-chip and ChIP-Seq), fluorescent in situ hybridization, methylation-sensitive restriction enzymes, DNA adenine methyltransferase identification (DamID) and bisulfite sequencing. Furthermore, the use of bioinformatics methods has a role in computational epigenetics.
Chromatin Immunoprecipitation
Chromatin immunoprecipitation (ChIP) has helped bridge the gap between DNA and epigenetic interactions. With ChIP, researchers can make findings regarding gene regulation, transcription mechanisms, and chromatin structure.
Fluorescent in situ hybridization
Fluorescent in situ hybridization (FISH) is an important tool for understanding epigenetic mechanisms. FISH can be used to find the location of genes on chromosomes, as well as to find non-coding RNAs. FISH is predominantly used for detecting chromosomal abnormalities in humans.
Methylation-sensitive restriction enzymes
Methylation-sensitive restriction enzymes paired with PCR provide a way to evaluate methylation in DNA, specifically at CpG sites. If the DNA is methylated, the restriction enzyme will not cleave the strand, and PCR across the site yields a product; if the DNA is unmethylated, the enzyme cleaves the strand and amplification across the cut site fails, as in the sketch below.
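As an illustrative sketch only (toy sequence and positions invented here), the classic comparison uses the isoschizomer pair HpaII and MspI, which both recognize CCGG: HpaII is blocked by CpG methylation of the internal cytosine, whereas MspI cuts regardless, so sites present in the MspI digest but absent from the HpaII digest were methylated.

```python
# Simulated HpaII/MspI digest of a toy sequence: comparing a
# methylation-sensitive cut list with an insensitive one reveals
# which CCGG sites carried a methylated internal cytosine.

def digest(seq, methylated, methylation_sensitive):
    """Return start positions of cleavable CCGG sites. `methylated`
    holds 0-based positions of methylated cytosines; the internal
    CpG cytosine of a CCGG site sits at offset +1."""
    cuts = []
    for i in range(len(seq) - 3):
        if seq[i:i + 4] == "CCGG":
            if methylation_sensitive and (i + 1) in methylated:
                continue               # HpaII-like enzyme is blocked
            cuts.append(i)
    return cuts

seq = "AACCGGTTAACCGGTT"               # CCGG sites at positions 2 and 10
meth = {11}                            # internal C of the second site
hpaii = digest(seq, meth, methylation_sensitive=True)    # [2]
mspi = digest(seq, meth, methylation_sensitive=False)    # [2, 10]
print(sorted(set(mspi) - set(hpaii)))  # [10] -> the methylated site
```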
Bisulfite sequencing
Bisulfite sequencing is another way to evaluate DNA methylation. Treatment with sodium bisulfite converts unmethylated cytosine to uracil (read as thymine after amplification), whereas methylated cytosines are unaffected, so comparing the converted sequence with the reference reveals which cytosines were methylated; a minimal sketch follows.
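As an illustrative sketch with invented toy data, the code simulates one strand's conversion and then calls methylation by comparing the converted read against the reference.

```python
# Toy bisulfite simulation: unmethylated C -> T (via uracil and PCR),
# methylated C survives; surviving Cs therefore mark methylated sites.

def bisulfite_convert(seq, methylated):
    """Simulate bisulfite conversion of one strand."""
    return "".join(
        "T" if base == "C" and i not in methylated else base
        for i, base in enumerate(seq)
    )

def call_methylation(reference, converted_read):
    """Infer methylated cytosine positions from a converted read."""
    return {
        i for i, (r, c) in enumerate(zip(reference, converted_read))
        if r == "C" and c == "C"       # C survived => was methylated
    }

ref = "ACGTCGCCAT"
read = bisulfite_convert(ref, methylated={1, 4})
print(read)                            # ACGTCGTTAT
print(call_methylation(ref, read))     # {1, 4}
```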
Nanopore sequencing
Certain sequencing methods, such as nanopore sequencing, allow sequencing of native DNA. Native (unamplified) DNA retains the epigenetic modifications that would otherwise be lost during an amplification step. Nanopore basecaller models can distinguish between the signals obtained for epigenetically modified bases and unaltered bases, providing an epigenetic profile in addition to the sequencing result.
Structural inheritance
In ciliates such as Tetrahymena and Paramecium, genetically identical cells show heritable differences in the patterns of ciliary rows on their cell surface. Experimentally altered patterns can be transmitted to daughter cells. It seems that existing structures act as templates for new structures. The mechanisms of such inheritance are unclear, but there is reason to assume that multicellular organisms also use existing cell structures to assemble new ones.
Nucleosome positioning
Eukaryotic genomes have numerous nucleosomes. Nucleosome position is not random, and it determines the accessibility of DNA to regulatory proteins. Promoters active in different tissues have been shown to have different nucleosome positioning features. This determines differences in gene expression and cell differentiation. It has been shown that at least some nucleosomes are retained in sperm cells (where most, but not all, histones are replaced by protamines). Thus nucleosome positioning is to some degree inheritable. Recent studies have uncovered connections between nucleosome positioning and other epigenetic factors, such as DNA methylation and hydroxymethylation.
Histone variants
Different histone variants are incorporated into specific regions of the genome non-randomly. Their differential biochemical characteristics can affect genome functions via their roles in gene regulation, and maintenance of chromosome structures.
Genomic architecture
The three-dimensional configuration of the genome (the 3D genome) is complex, dynamic and crucial for regulating genomic function and nuclear processes such as DNA replication, transcription and DNA-damage repair.
Functions and consequences
In the brain
Memory
Memory formation and maintenance are due to epigenetic alterations that cause the required dynamic changes in gene transcription that create and renew memory in neurons.
An event can set off a chain of reactions that results in altered methylation of a large set of genes in neurons, and these altered methylations give a representation of the event: a memory.
Areas of the brain important in the formation of memories include the hippocampus, the medial prefrontal cortex (mPFC), the anterior cingulate cortex, and the amygdala.
When a strong memory is created, as in a rat subjected to contextual fear conditioning (CFC), one of the earliest events to occur is that more than 100 DNA double-strand breaks are formed by topoisomerase IIB in neurons of the hippocampus and the medial prefrontal cortex (mPFC). These double-strand breaks are at specific locations that allow activation of transcription of immediate early genes (IEGs) that are important in memory formation, allowing their expression in mRNA, with peak mRNA transcription at seven to ten minutes after CFC.
Two important IEGs in memory formation are EGR1 and the alternative promoter variant of DNMT3A, DNMT3A2. EGR1 protein binds to DNA at its binding motifs, 5′-GCGTGGGCG-3′ or 5′-GCGGGGGCGG-3′, and there are about 12,000 genome locations at which EGR1 protein can bind. EGR1 protein binds to DNA in gene promoter and enhancer regions. EGR1 associates with and recruits the demethylating enzyme TET1, bringing it to about 600 locations on the genome where TET1 can then demethylate and activate the associated genes.
The DNA methyltransferases DNMT3A1, DNMT3A2 and DNMT3B can all methylate cytosines at CpG sites in or near the promoters of genes. As shown by Manzo et al., these three DNA methyltransferases differ in their genomic binding locations and DNA methylation activity at different regulatory sites. Manzo et al. located 3,970 genome regions exclusively enriched for DNMT3A1, 3,838 regions for DNMT3A2 and 3,432 regions for DNMT3B. When DNMT3A2 is newly induced as an IEG (when neurons are activated), many new cytosine methylations occur, presumably in the target regions of DNMT3A2. Oliveira et al. found that the neuronal activity-inducible IEG levels of Dnmt3a2 in the hippocampus determined the ability to form long-term memories.
Rats form long-term associative memories after contextual fear conditioning (CFC). Duke et al. found that 24 hours after CFC in rats, in hippocampus neurons, 2,097 genes (9.17% of the genes in the rat genome) had altered methylation. When newly methylated cytosines are present in CpG sites in the promoter regions of genes, the genes are often repressed, and when newly demethylated cytosines are present the genes may be activated. After CFC, there were 1,048 genes with reduced mRNA expression and 564 genes with upregulated mRNA expression. Similarly, when mice undergo CFC, one hour later in the hippocampus region of the mouse brain there are 675 demethylated genes and 613 hypermethylated genes. However, memories do not remain in the hippocampus, but after four or five weeks the memories are stored in the anterior cingulate cortex. In the studies on mice after CFC, Halder et al. showed that four weeks after CFC there were at least 1,000 differentially methylated genes and more than 1,000 differentially expressed genes in the anterior cingulate cortex, while at the same time the altered methylations in the hippocampus were reversed.
The epigenetic alteration of methylation after a new memory is established creates a different pool of nuclear mRNAs. As reviewed by Bernstein, the epigenetically determined new mix of nuclear mRNAs are often packaged into neuronal granules, or messenger RNP, consisting of mRNA, small and large ribosomal subunits, translation initiation factors and RNA-binding proteins that regulate mRNA function. These neuronal granules are transported from the neuron nucleus and are directed, according to 3′ untranslated regions of the mRNA in the granules (their "zip codes"), to neuronal dendrites. Roughly 2,500 mRNAs may be localized to the dendrites of hippocampal pyramidal neurons and perhaps 450 transcripts are in excitatory presynaptic nerve terminals (dendritic spines). The altered assortments of transcripts (dependent on epigenetic alterations in the neuron nucleus) have different sensitivities in response to signals, which is the basis of altered synaptic plasticity. Altered synaptic plasticity is often considered the neurochemical foundation of learning and memory.
Aging
Epigenetics plays a major role in brain aging and age-related cognitive decline, with relevance to life extension.
Other and general
In adulthood, changes in the epigenome are important for various higher cognitive functions. Dysregulation of epigenetic mechanisms is implicated in neurodegenerative disorders and diseases. Epigenetic modifications in neurons are dynamic and reversible. Epigenetic regulation impacts neuronal action, affecting learning, memory, and other cognitive processes.
Early events, including during embryonic development, can influence development, cognition, and health outcomes through epigenetic mechanisms.
Epigenetic mechanisms have been proposed as "a potential molecular mechanism for effects of endogenous hormones on the organization of developing brain circuits".
Nutrients could interact with the epigenome to "protect or boost cognitive processes across the lifespan".
A review suggests neurobiological effects of physical exercise via epigenetics seem "central to building an 'epigenetic memory' to influence long-term brain function and behavior" and may even be heritable.
At the axo-ciliary synapse, serotonergic axons communicate with the antenna-like primary cilia of CA1 pyramidal neurons, altering the neurons' epigenetic state in the nucleus via signalling that is distinct from, and longer-lasting than, signalling at the plasma membrane.
Epigenetics also plays a major role in brain evolution, including the evolution of the human brain.
Development
Developmental epigenetics can be divided into predetermined and probabilistic epigenesis. Predetermined epigenesis is a unidirectional movement from structural development in DNA to the functional maturation of the protein; "predetermined" here means that development is scripted and predictable. Probabilistic epigenesis, on the other hand, is a bidirectional structure-function development, with experience and external factors molding development.
Somatic epigenetic inheritance, particularly through DNA and histone covalent modifications and nucleosome repositioning, is very important in the development of multicellular eukaryotic organisms. The genome sequence is static (with some notable exceptions), but cells differentiate into many different types, which perform different functions, and respond differently to the environment and intercellular signaling. Thus, as individuals develop, morphogens activate or silence genes in an epigenetically heritable fashion, giving cells a memory. In mammals, most cells terminally differentiate, with only stem cells retaining the ability to differentiate into several cell types ("totipotency" and "multipotency"). In mammals, some stem cells continue producing newly differentiated cells throughout life, such as in neurogenesis, but mammals are not able to respond to loss of some tissues, for example, the inability to regenerate limbs, which some other animals are capable of. Epigenetic modifications regulate the transition from neural stem cells to glial progenitor cells (for example, differentiation into oligodendrocytes is regulated by the deacetylation and methylation of histones). Unlike animals, plant cells do not terminally differentiate, remaining totipotent with the ability to give rise to a new individual plant. While plants do utilize many of the same epigenetic mechanisms as animals, such as chromatin remodeling, it has been hypothesized that some kinds of plant cells do not use or require "cellular memories", resetting their gene expression patterns using positional information from the environment and surrounding cells to determine their fate.
Epigenetic changes can occur in response to environmental exposure – for example, maternal dietary supplementation with genistein (250 mg/kg) can produce epigenetic changes affecting expression of the agouti gene, which affects fur color, weight, and propensity to develop cancer. Ongoing research is focused on exploring the impact of other known teratogens, such as diabetic embryopathy, on methylation signatures.
Controversial results from one study suggested that traumatic experiences might produce an epigenetic signal that is capable of being passed to future generations. Mice were trained, using foot shocks, to fear a cherry blossom odor. The investigators reported that the mouse offspring had an increased aversion to this specific odor. They suggested epigenetic changes that increase gene expression, rather than changes in the DNA itself, in a gene, M71, that governs the functioning of an odor receptor in the nose that responds specifically to this cherry blossom smell. There were physical changes that correlated with olfactory (smell) function in the brains of the trained mice and their descendants. Several criticisms were reported, including the study's low statistical power, cited as evidence of some irregularity such as bias in reporting results. Owing to the limited sample size, there is a probability that an effect will not be demonstrated to within statistical significance even if it exists. The criticism suggested that the probability that all the experiments reported would show positive results, if an identical protocol was followed and the claimed effects exist, is merely 0.4%. The authors also did not indicate which mice were siblings, and treated all of the mice as statistically independent. The original researchers pointed out negative results in the paper's appendix that the criticism omitted from its calculations, and undertook to track which mice were siblings in the future.
Transgenerational
Epigenetic mechanisms were a necessary part of the evolutionary origin of cell differentiation. Although epigenetics in multicellular organisms is generally thought to be a mechanism involved in differentiation, with epigenetic patterns "reset" when organisms reproduce, there have been some observations of transgenerational epigenetic inheritance (e.g., the phenomenon of paramutation observed in maize). Although most of these multigenerational epigenetic traits are gradually lost over several generations, the possibility remains that multigenerational epigenetics could be another aspect to evolution and adaptation.
As mentioned above, some define epigenetics as heritable.
A sequestered germ line or Weismann barrier is specific to animals, and epigenetic inheritance is more common in plants and microbes. Eva Jablonka, Marion J. Lamb and Étienne Danchin have argued that these effects may require enhancements to the standard conceptual framework of the modern synthesis and have called for an extended evolutionary synthesis. Other evolutionary biologists, such as John Maynard Smith, have incorporated epigenetic inheritance into population-genetics models or are openly skeptical of the extended evolutionary synthesis (Michael Lynch). Thomas Dickins and Qazi Rahman state that epigenetic mechanisms such as DNA methylation and histone modification are genetically inherited under the control of natural selection and therefore fit under the earlier "modern synthesis".
Two important ways in which epigenetic inheritance can differ from traditional genetic inheritance, with important consequences for evolution, are:
rates of epimutation can be much faster than rates of mutation
the epimutations are more easily reversible
In plants, heritable DNA methylation mutations are 100,000 times more likely to occur than DNA mutations. An epigenetically inherited element such as the PSI+ system can act as a "stop-gap", good enough for short-term adaptation that allows the lineage to survive for long enough for mutation and/or recombination to genetically assimilate the adaptive phenotypic change. The existence of this possibility increases the evolvability of a species.
More than 100 cases of transgenerational epigenetic inheritance phenomena have been reported in a wide range of organisms, including prokaryotes, plants, and animals. For instance, mourning-cloak butterflies will change color through hormone changes in response to experimental variation in temperature.
The filamentous fungus Neurospora crassa is a prominent model system for understanding the control and function of cytosine methylation. In this organism, DNA methylation is associated with relics of a genome-defense system called RIP (repeat-induced point mutation) and silences gene expression by inhibiting transcription elongation.
The yeast prion PSI is generated by a conformational change of a translation termination factor, which is then inherited by daughter cells. This can provide a survival advantage under adverse conditions, exemplifying epigenetic regulation which enables unicellular organisms to respond rapidly to environmental stress. Prions can be viewed as epigenetic agents capable of inducing a phenotypic change without modification of the genome.
Direct detection of epigenetic marks in microorganisms is possible with single molecule real time sequencing, in which polymerase sensitivity allows for measuring methylation and other modifications as a DNA molecule is being sequenced. Several projects have demonstrated the ability to collect genome-wide epigenetic data in bacteria.
Epigenetics in bacteria
While epigenetics is of fundamental importance in eukaryotes, especially metazoans, it plays a different role in bacteria. Most importantly, eukaryotes use epigenetic mechanisms primarily to regulate gene expression, which bacteria rarely do. However, bacteria make widespread use of postreplicative DNA methylation for the epigenetic control of DNA-protein interactions. Bacteria also use DNA adenine methylation (rather than DNA cytosine methylation) as an epigenetic signal. DNA adenine methylation is important in bacterial virulence in organisms such as Escherichia coli, Salmonella, Vibrio, Yersinia, Haemophilus, and Brucella. In Alphaproteobacteria, methylation of adenine regulates the cell cycle and couples gene transcription to DNA replication. In Gammaproteobacteria, adenine methylation provides signals for DNA replication, chromosome segregation, mismatch repair, packaging of bacteriophage, transposase activity and regulation of gene expression. There exists a genetic switch controlling Streptococcus pneumoniae (the pneumococcus) that allows the bacterium to randomly change its characteristics into six alternative states, which could pave the way to improved vaccines. Each form is randomly generated by a phase-variable methylation system. The ability of the pneumococcus to cause deadly infections is different in each of these six states. Similar systems exist in other bacterial genera. In Bacillota such as Clostridioides difficile, adenine methylation regulates sporulation, biofilm formation and host adaptation.
Medicine
Epigenetics has many and varied potential medical applications.
Twins
Direct comparisons of identical twins constitute an optimal model for interrogating environmental epigenetics. In the case of humans with different environmental exposures, monozygotic (identical) twins were epigenetically indistinguishable during their early years, while older twins had remarkable differences in the overall content and genomic distribution of 5-methylcytosine DNA and histone acetylation. The twin pairs who had spent less of their lifetime together and/or had greater differences in their medical histories were those who showed the largest differences in their levels of 5-methylcytosine DNA and acetylation of histones H3 and H4.
Dizygotic (fraternal) and monozygotic (identical) twins show evidence of epigenetic influence in humans. DNA sequence differences that would be abundant in a singleton-based study do not interfere with the analysis. Environmental differences can produce long-term epigenetic effects, and different developmental monozygotic twin subtypes may be different with respect to their susceptibility to be discordant from an epigenetic point of view.
A high-throughput study, meaning one using technology that assays extensive genetic markers at once, focused on epigenetic differences between monozygotic twins to compare global and locus-specific changes in DNA methylation and histone modifications in a sample of 40 monozygotic twin pairs. In this case, only healthy twin pairs were studied, but a wide range of ages was represented, between 3 and 74 years. One of the major conclusions from this study was that there is an age-dependent accumulation of epigenetic differences between the two siblings of twin pairs. This accumulation suggests the existence of epigenetic "drift". Epigenetic drift is the term given to epigenetic modifications as they occur as a direct function of age. While age is a known risk factor for many diseases, age-related methylation has been found to occur differentially at specific sites along the genome. Over time, this can result in measurable differences between biological and chronological age. Epigenetic changes have been found to reflect lifestyle and may act as functional biomarkers of disease before a clinical threshold is reached; a toy sketch of the biological-age idea follows.
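DNA-methylation "clocks" operationalize this by regressing age on methylation levels (beta values between 0 and 1) at selected CpG sites. The sketch below is purely illustrative: the site names, weights, and intercept are invented, whereas real clocks such as Horvath's are fitted on hundreds of CpGs with penalized regression.

```python
# Toy "epigenetic clock": predicted age as a linear combination of
# methylation beta values. All coefficients here are hypothetical.

TOY_WEIGHTS = {"cg_a": 35.0, "cg_b": -20.0, "cg_c": 48.0}   # invented
TOY_INTERCEPT = 12.0                                        # invented

def predict_epigenetic_age(betas):
    """Linear clock: intercept + sum of weight * beta over CpG sites."""
    return TOY_INTERCEPT + sum(
        w * betas[site] for site, w in TOY_WEIGHTS.items()
    )

sample = {"cg_a": 0.80, "cg_b": 0.30, "cg_c": 0.25}         # one sample
print(round(predict_epigenetic_age(sample), 1))             # 46.0
```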
A more recent study, in which 114 monozygotic twins and 80 dizygotic twins were analyzed for the DNA methylation status of around 6,000 unique genomic regions, concluded that epigenetic similarity at the time of blastocyst splitting may also contribute to phenotypic similarities in monozygotic co-twins. This supports the notion that the microenvironment at early stages of embryonic development can be quite important for the establishment of epigenetic marks. Congenital genetic disease is well understood, and it is clear that epigenetics can play a role, for example in the cases of Angelman syndrome and Prader–Willi syndrome. These are ordinary genetic diseases caused by gene deletions or inactivation of the genes, but they are unusually common because, owing to genomic imprinting, individuals are essentially hemizygous at the affected loci, so a single gene knockout is sufficient to cause the disease, whereas most such diseases would require both copies to be knocked out.
Genomic imprinting
Some human disorders are associated with genomic imprinting, a phenomenon in mammals where the father and mother contribute different epigenetic patterns for specific genomic loci in their germ cells. The best-known case of imprinting in human disorders is that of Angelman syndrome and Prader–Willi syndrome – both can be produced by the same genetic mutation, chromosome 15q partial deletion, and the particular syndrome that will develop depends on whether the mutation is inherited from the child's mother or from their father.
In the Överkalix study, paternal (but not maternal) grandsons of Swedish men who were exposed during preadolescence to famine in the 19th century were less likely to die of cardiovascular disease. If food was plentiful, then diabetes mortality in the grandchildren increased, suggesting that this was a transgenerational epigenetic inheritance. The opposite effect was observed for females – the paternal (but not maternal) granddaughters of women who experienced famine while in the womb (and therefore while their eggs were being formed) lived shorter lives on average.
Examples of drugs altering gene expression from epigenetic events
Beta-lactam antibiotics can alter glutamate receptor activity, and cyclosporine acts on multiple transcription factors. Additionally, lithium can impact autophagy of aberrant proteins, and chronic use of opioid drugs can increase the expression of genes associated with addictive phenotypes.
Parental nutrition, in utero exposure to stress or to endocrine-disrupting chemicals, male-induced maternal effects such as the attraction of mates of differing quality, maternal and paternal age, and offspring sex could all possibly influence whether a germline epimutation is ultimately expressed in offspring and the degree to which intergenerational inheritance remains stable throughout posterity. However, whether and to what extent epigenetic effects can be transmitted across generations remains unclear, particularly in humans.
Addiction
Addiction is a disorder of the brain's reward system which arises through transcriptional and neuroepigenetic mechanisms and occurs over time from chronically high levels of exposure to an addictive stimulus (e.g., morphine, cocaine, sexual intercourse, gambling). Transgenerational epigenetic inheritance of addictive phenotypes has been noted to occur in preclinical studies. However, robust evidence in support of the persistence of epigenetic effects across multiple generations has yet to be established in humans; an example would be an epigenetic effect of prenatal exposure to smoking observed in great-grandchildren who had not themselves been exposed.
Research
The two forms of heritable information, namely genetic and epigenetic, are collectively called dual inheritance. Members of the APOBEC/AID family of cytosine deaminases may concurrently influence genetic and epigenetic inheritance using similar molecular mechanisms, and may be a point of crosstalk between these conceptually compartmentalized processes.
Fluoroquinolone antibiotics induce epigenetic changes in mammalian cells through iron chelation. This leads to epigenetic effects through inhibition of α-ketoglutarate-dependent dioxygenases that require iron as a co-factor.
Various pharmacological agents are applied for the production of induced pluripotent stem cells (iPSC) or to maintain the embryonic stem cell (ESC) phenotype via epigenetic approaches. Adult stem cells such as bone marrow stem cells have also shown a potential to differentiate into cardiac-competent cells when treated with the G9a histone methyltransferase inhibitor BIX01294.
Cell plasticity, which is the adaptation of cells to stimuli without changes in their genetic code, requires epigenetic changes. These have been observed in cell plasticity in cancer cells during the epithelial-to-mesenchymal transition and also in immune cells, such as macrophages. Interestingly, metabolic changes underlie these adaptations, since various metabolites play crucial roles in the chemistry of epigenetic marks. These include, for instance, alpha-ketoglutarate, which is required for histone demethylation, and acetyl-coenzyme A, which is required for histone acetylation.
Epigenome editing
Mechanisms of epigenetic regulation of gene expression that can be altered or exploited in epigenome editing include mRNA/lncRNA modification, DNA methylation modification, and histone modification.
CpG sites, SNPs and biological traits
Methylation is a widely characterized mechanism of genetic regulation that can determine biological traits. However, strong experimental evidence identifies methylation patterns at SNPs as an important additional feature beyond the classical activation/inhibition epigenetic dogma. Molecular interaction data, supported by colocalization analyses, identify multiple nuclear regulatory pathways, linking sequence variation to disturbances in DNA methylation and to molecular and phenotypic variation.
UBASH3B locus
UBASH3B encodes a protein with tyrosine phosphatase activity, which has been previously linked to advanced neoplasia. SNP rs7115089 was identified as influencing DNA methylation and expression of this locus, as well as body mass index (BMI). In fact, SNP rs7115089 is strongly associated with BMI and with genetic variants linked to other cardiovascular and metabolic traits in GWASs. New studies suggest UBASH3B as a potential mediator of adiposity and cardiometabolic disease. In addition, animal models demonstrated that UBASH3B expression is an indicator of caloric restriction that may drive programmed susceptibility to obesity, and it is associated with other measures of adiposity in human peripheral blood.
NFKBIE locus
SNP rs730775 is located in the first intron of NFKBIE and is a cis eQTL for NFKBIE in whole blood. Nuclear factor (NF)-κB inhibitor ε (NFKBIE) directly inhibits NF-κB1 activity, is significantly co-expressed with NF-κB1, and is associated with rheumatoid arthritis. Colocalization analysis supports the idea that, for the majority of the CpG sites, variation at SNP rs730775 causes genetic variation at the NFKBIE locus, which is suggested to be linked to rheumatoid arthritis through trans-acting regulation of DNA methylation by NF-κB.
FADS1 locus
Fatty acid desaturase 1 (FADS1) is a key enzyme in the metabolism of fatty acids. Moreover, rs174548 in the FADS1 gene shows increased correlation with DNA methylation in people with a high abundance of CD8+ T cells. SNP rs174548 is strongly associated with concentrations of arachidonic acid and other metabolites in fatty acid metabolism, with blood eosinophil counts, and with inflammatory diseases such as asthma. Interaction results indicated a correlation between rs174548 and asthma, providing new insights into fatty acid metabolism in CD8+ T cells and its relationship to immune phenotypes.
Pseudoscience
As epigenetics is in the early stages of development as a science and is surrounded by sensationalism in the public media, David Gorski and geneticist Adam Rutherford have advised caution against the proliferation of false and pseudoscientific conclusions by new age authors making unfounded suggestions that a person's genes and health can be manipulated by mind control. Misuse of the scientific term by quack authors has produced misinformation among the general public.
See also
Baldwin effect
Behavioral epigenetics
Biological effects of radiation on the epigenome
Computational epigenetics
Contribution of epigenetic modifications to evolution
DAnCER database (2010)
Epigenesis (biology)
Epigenetics in forensic science
Epigenetics of autoimmune disorders
Epiphenotyping
Epigenetic therapy
Epigenetics of neurodegenerative diseases
Genetics
Lamarckism
Nutriepigenomics
Position-effect variegation
Preformationism
Somatic epitype
Synthetic genetic array
Sleep epigenetics
Transcriptional memory
Transgenerational epigenetic inheritance
References
Further reading
External links
The Human Epigenome Project (HEP)
The Epigenome Network of Excellence (NoE)
Canadian Epigenetics, Environment and Health Research Consortium (CEEHRC)
The Epigenome Network of Excellence (NoE) – public international site
"DNA Is Not Destiny" – Discover magazine cover story
"The Ghost In Your Genes", Horizon (2005), BBC
Epigenetics article at Hopkins Medicine
Towards a global map of epigenetic variation
Genetic mapping
Lamarckism | Epigenetics | [
"Biology"
] | 13,929 | [
"Non-Darwinian evolution",
"Biology theories",
"Obsolete biology theories",
"Lamarckism"
] |
49,072 | https://en.wikipedia.org/wiki/Francis%20Galton | Sir Francis Galton (; 16 February 1822 – 17 January 1911) was an English polymath and the originator of eugenics during the Victorian era; his ideas later became the basis of behavioural genetics.
Galton produced over 340 papers and books. He also developed the statistical concept of correlation and widely promoted regression toward the mean. He was the first to apply statistical methods to the study of human differences and inheritance of intelligence, and introduced the use of questionnaires and surveys for collecting data on human communities, which he needed for genealogical and biographical works and for his anthropometric studies. He coined the phrase "nature versus nurture". His book Hereditary Genius (1869) was the first social scientific attempt to study genius and greatness.
As an investigator of the human mind, he founded psychometrics and differential psychology, as well as the lexical hypothesis of personality. He devised a method for classifying fingerprints that proved useful in forensic science. He also conducted research on the power of prayer, concluding it had none due to its null effects on the longevity of those prayed for. His quest for the scientific principles of diverse phenomena extended even to the optimal method for making tea. As the initiator of scientific meteorology, he devised the first weather map, proposed a theory of anticyclones, and was the first to establish a complete record of short-term climatic phenomena on a European scale. He also invented the Galton whistle for testing differential hearing ability. Galton was knighted in 1909 for his contributions to science. He was Charles Darwin's half-cousin.
In recent years, he has received significant criticism for being a proponent of social Darwinism, eugenics, and biological racism; he was a pioneer of eugenics, coining the term itself in 1883.
Early life
Galton was born at "The Larches", a large house in the Sparkbrook area of Birmingham, England, built on the site of "Fair Hill", the former home of Joseph Priestley, which the botanist William Withering had renamed. He was Charles Darwin's half-cousin, sharing the common grandparent Erasmus Darwin. His father was Samuel Tertius Galton, son of Samuel Galton Jr. He was also a cousin of Douglas Strutt Galton. The Galtons were Quaker gun-manufacturers and bankers, while the Darwins were involved in medicine and science.
Both the Galton and Darwin families included Fellows of the Royal Society and members who loved to invent in their spare time. Both Erasmus Darwin and Samuel Galton were founding members of the Lunar Society of Birmingham, which included Matthew Boulton, James Watt, Josiah Wedgwood, Joseph Priestley and Richard Lovell Edgeworth. Both families were known for their literary talent. Erasmus Darwin composed lengthy technical treatises in verse. Galton's aunt Mary Anne Galton wrote on aesthetics and religion, and her autobiography detailed the environment of her childhood populated by Lunar Society members.
Galton was a child prodigy – he was reading by the age of two; at age five he knew some Greek, Latin and long division, and by the age of six he had moved on to adult books, including Shakespeare for pleasure, and poetry, which he quoted at length. Galton attended King Edward's School, Birmingham, but chafed at the narrow classical curriculum and left at 16. His parents pressed him to enter the medical profession, and he studied for two years at Birmingham General Hospital and King's College London Medical School. He followed this up with mathematical studies at Trinity College, Cambridge, from 1840 to early 1844.
According to the records of the United Grand Lodge of England, it was in February 1844 that Galton became a freemason at the Scientific lodge, held at the Red Lion Inn in Cambridge, progressing through the three masonic degrees: Apprentice, 5 February 1844; Fellow Craft, 11 March 1844; Master Mason, 13 May 1844. A note in the record states: "Francis Galton Trinity College student, gained his certificate 13 March 1845". One of Galton's masonic certificates from Scientific lodge can be found among his papers at University College, London.
A nervous breakdown prevented Galton's intent to try for honours. He elected instead to take a "poll" (pass) B.A. degree, like his half-cousin Charles Darwin. (Following the Cambridge custom, he was awarded an M.A. without further study, in 1847.) He briefly resumed his medical studies but the death of his father in 1844 left him emotionally destitute, though financially independent, and he terminated his medical studies entirely, turning to foreign travel, sport and technical invention.
In his early years Galton was an enthusiastic traveller, and made a solo trip through Eastern Europe to Istanbul, before going up to Cambridge. In 1845 and 1846, he went to Egypt and travelled up the Nile to Khartoum in the Sudan, and from there to Beirut, Damascus and down to Jordan.
In 1850 he joined the Royal Geographical Society, and over the next two years mounted a long and difficult expedition into then little-known South West Africa (now Namibia). He wrote a book on his experience, Narrative of an Explorer in Tropical South Africa. He was awarded the Royal Geographical Society's Founder's Medal in 1853 and the Silver Medal of the French Geographical Society for his pioneering cartographic survey of the region. This established his reputation as a geographer and explorer. He proceeded to write the best-selling The Art of Travel, a handbook of practical advice for the Victorian on the move, which went through many editions and is still in print.
Middle years
Early scientific career
Galton was a polymath who made important contributions in many fields, including meteorology (the anticyclone and the first popular weather maps), statistics (regression and correlation), psychology (synaesthesia), biology (the nature and mechanism of heredity), and criminology (fingerprints). Much of this was influenced by his penchant for counting and measuring. Galton prepared the first weather map published in The Times (1 April 1875, showing the weather from the previous day, 31 March), now a standard feature in newspapers worldwide.
He became very active in the British Association for the Advancement of Science, presenting many papers on a wide variety of topics at its meetings from 1858 to 1899. He was the general secretary from 1863 to 1867, president of the Geographical section in 1867 and 1872, and president of the Anthropological Section in 1877 and 1885. He was active on the council of the Royal Geographical Society for over forty years, in various committees of the Royal Society, and on the Meteorological Council.
James McKeen Cattell, a student of Wilhelm Wundt who had been reading Galton's articles, decided he wanted to study under him. He eventually built a professional relationship with Galton, measuring subjects and working together on research.
In 1888, Galton established a lab in the science galleries of the South Kensington Museum. In Galton's lab, participants could be measured to gain knowledge of their strengths and weaknesses. Galton also used these data for his own research. He would typically charge people a small fee for his services.
Heredity and eugenics
The publication by his cousin Charles Darwin of The Origin of Species in 1859 was an event that changed Galton's life. He came to be gripped by the work, especially the first chapter on "Variation under Domestication", concerning animal breeding.
Galton devoted much of the rest of his life to exploring variation in human populations and its implications, at which Darwin had only hinted in The Origin of Species, although he returned to it in his 1871 book The Descent of Man, drawing on his cousin's work in the intervening period. Galton established a research program which embraced multiple aspects of human variation, from mental characteristics to height; from facial images to fingerprint patterns. This required inventing novel measures of traits, devising large-scale collection of data using those measures, and in the end, the discovery of new statistical techniques for describing and understanding the data.
Galton was interested at first in the question of whether human ability was hereditary, and proposed to count the number of the relatives of various degrees of eminent men. If the qualities were hereditary, he reasoned, there should be more eminent men among the relatives than among the general population. To test this, he invented the methods of historiometry. Galton obtained extensive data from a broad range of biographical sources which he tabulated and compared in various ways. This pioneering work was described in detail in his book Hereditary Genius in 1869. Here he showed, among other things, that the numbers of eminent relatives dropped off when going from the first degree to the second degree relatives, and from the second degree to the third. He took this as evidence of the inheritance of abilities.
Galton recognised the limitations of his methods in these two works, and believed the question could be better studied by comparisons of twins. His method envisaged testing to see if twins who were similar at birth diverged in dissimilar environments, and whether twins dissimilar at birth converged when reared in similar environments. He again used the method of questionnaires to gather various sorts of data, which were tabulated and described in a paper The history of twins in 1875. In so doing he anticipated the modern field of behaviour genetics, which relies heavily on twin studies. He concluded that the evidence favoured nature rather than nurture. He also proposed adoption studies, including trans-racial adoption studies, to separate the effects of heredity and environment.
Galton recognised that cultural circumstances influenced the capability of a civilisation's citizens, and their reproductive success. In Hereditary Genius, he envisaged a situation conducive to resilient and enduring civilisation as follows:
Galton invented the term eugenics in 1883 and set down many of his observations and conclusions in a book, Inquiries into Human Faculty and Its Development. In the book's introduction, he wrote:
He believed that a scheme of 'marks' for family merit should be defined, and early marriage between families of high rank be encouraged via provision of monetary incentives. He pointed out some of the tendencies in British society, such as the late marriages of eminent people, and the paucity of their children, which he thought were dysgenic. He advocated encouraging eugenic marriages by supplying able couples with incentives to have children. On 29 October 1901, Galton chose to address eugenic issues when he delivered the second Huxley lecture at the Royal Anthropological Institute.
The Eugenics Review, the journal of the Eugenics Education Society, commenced publication in 1909. Galton, the Honorary President of the society, wrote the foreword for the first volume. The First International Congress of Eugenics was held in July 1912. Winston Churchill and Charles Eliot were among the attendees.
Model for population stability
Galton's formulation of regression and its link to the bivariate normal distribution can be traced to his attempts at developing a mathematical model for population stability. Although Galton's first attempt to study Darwinian questions, Hereditary Genius, generated little enthusiasm at the time, the text led to his further studies in the 1870s concerning the inheritance of physical traits. This text contains some crude notions of the concept of regression, described in a qualitative manner. For example, he wrote of dogs: "If a man breeds from strong, well-shaped dogs, but of mixed pedigree, the puppies will be sometimes, but rarely, the equals of their parents. They will commonly be of a mongrel, nondescript type, because ancestral peculiarities are apt to crop out in the offspring."
This notion created a problem for Galton, as he could not reconcile the tendency of a population to maintain a normal distribution of traits from generation to generation with the notion of inheritance. It seemed that a large number of factors operated independently on offspring, leading to the normal distribution of a trait in each generation. However, this provided no explanation as to how a parent can have a significant impact on his offspring, which was the basis of inheritance.
Galton's solution to this problem was presented in his Presidential Address at the September 1885 meeting of the British Association for the Advancement of Science, for he was serving at the time as President of Section H: Anthropology. The address was published in Nature, and Galton further developed the theory in "Regression toward mediocrity in hereditary stature" and "Hereditary Stature". An elaboration of this theory was published in 1889 in Natural Inheritance. There were three key developments that helped Galton develop this theory: the development of the law of error in 1874–1875, the formulation of an empirical law of reversion in 1877, and the development of a mathematical framework encompassing regression using human population data during 1885.
Galton's development of the law of regression to the mean, or reversion, was due to insights from the Galton board ('bean machine') and his studies of sweet peas. While Galton had invented the quincunx prior to February 1874, the 1877 version had a new feature that helped Galton demonstrate that a normal mixture of normal distributions is also normal. Galton demonstrated this using a new version of the quincunx, adding chutes to the apparatus to represent reversion. When the pellets passed through the curved chutes (representing reversion) and then the pins (representing family variability), the result was a stable population. On Friday 19 February 1877, Galton gave a lecture entitled Typical Laws of Heredity at the Royal Institution in London. In this lecture, he posited that there must be a counteracting force to maintain population stability. However, this model required a much larger degree of intergenerational natural selection than was plausible.
In 1875, Galton began growing sweet peas, and addressed the Royal Institution on his findings on 9 February 1877. He found that each group of progeny seeds followed a normal curve, and the curves were equally dispersed. Each group was not centred on the parent's weight, but rather at a weight closer to the population average. Galton called this reversion, as every progeny group was distributed at a value that was closer to the population average than the parent. The deviation from the population average was in the same direction, but the magnitude of the deviation was only one-third as large. In doing so, he demonstrated that there was variability among each of the families, yet the families combined to produce a stable, normally distributed population. When he addressed the British Association for the Advancement of Science in 1885, he said of his investigation of sweet peas, "I was then blind to what I now perceive to be the simple explanation of the phenomenon."
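Galton's one-third reversion can be illustrated with a small simulation: offspring retain a fraction of the parental deviation from the mean and add independent within-family variability, and the population spread settles to a stable value. The parameter values below are hypothetical, chosen only to make the stability visible; this is an illustration, not a reconstruction of Galton's data.

```python
import random

# Illustrative sketch of Galton's "reversion": each offspring retains
# one-third of its parent's deviation from the population mean, plus
# independent within-family noise. Parameter values are hypothetical.
POP_MEAN = 100.0
REVERSION = 1.0 / 3.0   # fraction of parental deviation retained
FAMILY_SD = 10.0        # within-family variability
# Stationary population SD satisfies var = REVERSION**2 * var + FAMILY_SD**2
POP_SD = FAMILY_SD / (1.0 - REVERSION**2) ** 0.5

def offspring(parent: float) -> float:
    """Partial reversion toward the mean plus family variability."""
    return POP_MEAN + REVERSION * (parent - POP_MEAN) + random.gauss(0.0, FAMILY_SD)

population = [random.gauss(POP_MEAN, POP_SD) for _ in range(100_000)]
for _ in range(10):  # the spread stays stable from generation to generation
    population = [offspring(p) for p in population]

mean = sum(population) / len(population)
sd = (sum((x - mean) ** 2 for x in population) / len(population)) ** 0.5
print(round(mean, 1), round(sd, 1))  # ~100.0 and ~10.6 each generation
```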
Galton was able to further his notion of regression by collecting and analysing data on human stature. Galton asked the mathematician J. Hamilton Dickson for help in investigating the geometric relationship of the data. He determined that the regression coefficient did not ensure population stability by chance, but rather that the regression coefficient, conditional variance, and population were interdependent quantities related by a simple equation. Thus Galton identified that the linearity of regression was not coincidental but rather was a necessary consequence of population stability.
The model for population stability resulted in Galton's formulation of the Law of Ancestral Heredity. This law, which was published in Natural Inheritance, states that the two parents of an offspring jointly contribute one half of an offspring's heritage, while the other, more-removed ancestors constitute a smaller proportion of the offspring's heritage. Galton viewed reversion as a spring, that when stretched, would return the distribution of traits back to the normal distribution. He concluded that evolution would have to occur via discontinuous steps, as reversion would neutralise any incremental steps. When Mendel's principles were rediscovered in 1900, this resulted in a fierce battle between the followers of Galton's Law of Ancestral Heredity, the biometricians, and those who advocated Mendel's principles.
Empirical test of pangenesis and Lamarckism
Galton conducted wide-ranging inquiries into heredity which led him to challenge Charles Darwin's hypothesis of pangenesis. Darwin had proposed as part of this model that certain particles, which he called "gemmules" moved throughout the body and were also responsible for the inheritance of acquired characteristics. Galton, in consultation with Darwin, set out to see if they were transported in the blood. In a long series of experiments in 1869 to 1871, he transfused the blood between dissimilar breeds of rabbits, and examined the features of their offspring. He found no evidence of characters transmitted in the transfused blood.
Darwin challenged the validity of Galton's experiment, giving his reasons in an article published in Nature where he wrote:
Galton explicitly rejected the idea of the inheritance of acquired characteristics (Lamarckism), and was an early proponent of "hard heredity" through selection alone. He came close to rediscovering Mendel's particulate theory of inheritance, but was prevented from making the final breakthrough in this regard because of his focus on continuous, rather than discrete, traits (now regarded as polygenic traits). He went on to found the biometric approach to the study of heredity, distinguished by its use of statistical techniques to study continuous traits and population-scale aspects of heredity.
This approach was later taken up enthusiastically by Karl Pearson and W. F. R. Weldon; together, they founded the highly influential journal Biometrika in 1901. (R. A. Fisher would later show how the biometrical approach could be reconciled with the Mendelian approach.) The statistical techniques that Galton developed (correlation and regression—see below) and phenomena he established (regression to the mean) formed the basis of the biometric approach and are now essential tools in all social sciences.
1884 International Health Exhibition
Anthropometric Laboratory
In 1884, London hosted the International Health Exhibition. This exhibition placed much emphasis on highlighting Victorian developments in sanitation and public health, and allowed the nation to display its advanced public health outreach, compared to other countries at the time. Francis Galton took advantage of this opportunity to set up his anthropometric laboratory. He stated that the purpose of this laboratory was to "show the public the simplicity of the instruments and methods by which the chief physical characteristics of man may be measured and recorded." The laboratory was an interactive walk-through in which physical characteristics such as height, weight, and eyesight, would be measured for each subject after payment of an admission fee. Upon entering the laboratory, a subject would visit the following stations in order.
First, they would fill in a form with personal and family history (age, birthplace, marital status, residence, and occupation), then visit stations that recorded hair and eye colour, followed by the keenness, colour-sense, and depth perception of sight. Next, the keenness, or relative acuteness, of their hearing and their highest audible note would be examined, followed by an examination of their sense of touch. However, because the surrounding area was noisy, the apparatus intended to measure hearing was rendered ineffective by the noise and echoes in the building. Their breathing capacity would also be measured, as well as their ability to throw a punch. The next stations would examine strength of both pulling and squeezing with both hands. Lastly, subjects' heights in various positions (sitting, standing, etc.) as well as arm span and weight would be measured.
One excluded characteristic of interest was the size of the head. Galton notes in his analysis that this omission was mostly for practical reasons. For instance, it would not be very accurate and additionally it would require much time for women to disassemble and reassemble their hair and bonnets. The patrons would then be given a souvenir containing all their biological data, while Galton would also keep a copy for future statistical research.
Although the laboratory did not employ any revolutionary measurement techniques, it was unique because of the simple logistics of constructing such a demonstration within a limited space, and because of the speed and efficiency with which all the necessary data were gathered. The laboratory itself was a see-through (lattice-walled), fenced-off gallery measuring 36 feet long by 6 feet wide. To collect data efficiently, Galton had to make the process as simple as possible for people to understand. As a result, subjects were taken through the laboratory in pairs so that explanations could be given to two at a time, also in the hope that one of the two would confidently take the initiative to go through all the tests first, encouraging the other. With this design, the total time spent in the exhibit was fourteen minutes for each pair.
Galton states that the measurements of human characteristics are useful for two reasons. First, he states that measuring physical characteristics is useful in order to ensure, on a more domestic level, that children are developing properly. A useful example he gives for the practicality of these domestic measurements is regularly checking a child's eyesight, in order to correct any deficiencies early on. The second use for the data from his anthropometric laboratory is for statistical studies. He comments on the usefulness of the collected data to compare attributes across occupations, residences, races, etc. The exhibit at the health exhibition allowed Galton to collect a large amount of raw data from which to conduct further comparative studies. He had 9,337 respondents, each measured in 17 categories, creating a rather comprehensive statistical database.
After the conclusion of the International Health Exhibition, Galton used these data to confirm in humans his theory of linear regression, posed after studying sweet peas. The accumulation of this human data allowed him to observe the correlation between forearm length and height, head width and head breadth, and head length and height. With these observations he was able to write Co-relations and their Measurements, chiefly from Anthropometric Data. In this publication, Galton defined co-relation as a phenomenon that occurs when "the variation of the one [variable] is accompanied on the average by more or less variation of the other, and in the same direction."
Statistical innovation and psychological theory
Historiometry
The method used in Hereditary Genius has been described as the first example of historiometry. To bolster these results, and to attempt to make a distinction between 'nature' and 'nurture' (he was the first to apply this phrase to the topic), he devised a questionnaire that he sent out to 190 Fellows of the Royal Society. He tabulated characteristics of their families, such as birth order and the occupation and race of their parents. He attempted to discover whether their interest in science was 'innate' or due to the encouragements of others. The studies were published as a book, English men of science: their nature and nurture, in 1874. In the end, it promoted the nature versus nurture question, though it did not settle it, and provided some fascinating data on the sociology of scientists of the time.
The lexical hypothesis
Galton was the first scientist to recognise what is now known as the lexical hypothesis. This is the idea that the most salient and socially relevant personality differences in people's lives will eventually become encoded into language. The hypothesis further suggests that by sampling language, it is possible to derive a comprehensive taxonomy of human personality traits.
The questionnaire
Galton's inquiries into the mind involved detailed recording of people's subjective accounts of whether and how their minds dealt with phenomena such as mental imagery. To better elicit this information, he pioneered the use of the questionnaire. In one study, he asked his fellow members of the Royal Society of London to describe mental images that they experienced. In another, he collected in-depth surveys from eminent scientists for a work examining the effects of nature and nurture on the propensity toward scientific thinking.
Variance and standard deviation
Core to any statistical analysis is the concept that measurements vary: they have both a central tendency, or mean, and a spread around this central value, or variance. In the late 1860s, Galton conceived of a measure to quantify normal variation: the standard deviation.
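In modern notation, for $n$ measurements $x_1, \dots, x_n$ with mean $\bar{x}$, the standard deviation is

$$ \sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^{2}}, $$

and the variance is its square, $\sigma^{2}$.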
Galton was a keen observer. In 1906, visiting a livestock fair, he stumbled upon an intriguing contest. An ox was on display, and the villagers were invited to guess the animal's weight after it was slaughtered and dressed. Nearly 800 participated, and Galton was able to study their individual entries after the event. Galton stated that "the middlemost estimate expresses the vox populi, every other estimate being condemned as too low or too high by a majority of the voters", and reported this value (the median, in terminology he himself had introduced, but chose not to use on this occasion) as 1,207 pounds. To his surprise, this was within 0.8% of the weight measured by the judges. Soon afterwards, in response to an enquiry, he reported the mean of the guesses as 1,197 pounds, but did not comment on its improved accuracy. Recent archival research has found some slips in transmitting Galton's calculations to the original article in Nature: the median was actually 1,208 pounds, and the dressed weight of the ox 1,197 pounds, so the mean estimate had zero error. James Surowiecki uses this weight-judging competition as his opening example: had he known the true result, his conclusion on the wisdom of the crowd would no doubt have been more strongly expressed.
The same year, Galton suggested in a letter to the journal Nature a better method of cutting a round cake by avoiding making radial incisions.
Experimental derivation of the normal distribution
Studying variation, Galton invented the Galton board, a pachinko-like device also known as the bean machine, as a tool for demonstrating the law of error and the normal distribution.
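The board is straightforward to simulate: each pellet makes a sequence of independent left/right bounces, so the final bin counts follow a binomial distribution that approximates the normal curve. The numbers of pins and pellets below are arbitrary choices for illustration.

```python
import random
from collections import Counter

def galton_board(pins: int = 12, pellets: int = 10_000) -> Counter:
    """Each pellet bounces right (1) or left (0) at every pin row, so its
    final bin is a Binomial(pins, 1/2) draw; the histogram is bell-shaped."""
    return Counter(sum(random.randint(0, 1) for _ in range(pins))
                   for _ in range(pellets))

bins = galton_board()
for k in range(13):  # crude text histogram of the 13 bins
    print(f"{k:2d} {'#' * (bins[k] // 50)}")
```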
Bivariate normal distribution
He also discovered the properties of the bivariate normal distribution and its relationship to correlation and regression analysis.
Correlation and regression
In 1846, the French physicist Auguste Bravais (1811–1863) first developed what would become the correlation coefficient. After examining forearm and height measurements, Galton independently rediscovered the concept of correlation in 1888 and demonstrated its application in the study of heredity, anthropology, and psychology. Galton's later statistical study of the probability of extinction of surnames led to the concept of Galton–Watson stochastic processes.
Galton invented the use of the regression line and was responsible for the choice of r (for reversion or regression) to represent the correlation coefficient.
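In the form later standardised by Karl Pearson, the correlation coefficient that grew out of Galton's work is computed from paired measurements $(x_i, y_i)$ as

$$ r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^{2}}\;\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^{2}}}, $$

which equals the slope of the regression line when both variables are measured in units of their own standard deviations.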
In the 1870s and 1880s he was a pioneer in the use of normal theory to fit histograms and ogives to actual tabulated data, much of which he collected himself: for instance large samples of sibling and parental height. Consideration of the results from these empirical studies led to his further insights into evolution, natural selection, and regression to the mean.
Regression toward the mean
Galton was the first to describe and explain the common phenomenon of regression toward the mean, which he first observed in his experiments on the size of the seeds of successive generations of sweet peas.
The conditions under which regression toward the mean occurs depend on the way the term is mathematically defined. Galton first observed the phenomenon in the context of simple linear regression of data points. Galton developed the following model: pellets fall through a quincunx or "bean machine" forming a normal distribution centred directly under their entrance point. These pellets could then be released down into a second gallery (corresponding to a second measurement occasion). Galton then asked the reverse question "from where did these pellets come?"
Theories of perception
Galton went beyond measurement and summary to attempt to explain the phenomena he observed. Among such developments, he proposed an early theory of ranges of sound and hearing, and collected large quantities of anthropometric data from the public through his popular and long-running Anthropometric Laboratory, which he established in 1884, and where he studied over 9,000 people. It was not until 1985 that these data were analysed in their entirety.
He made a beauty map of Britain, based on a secret grading of the local women on a scale from attractive to repulsive. The lowest point was in Aberdeen.
Differential psychology
Galton's study of human abilities ultimately led to the foundation of differential psychology and the formulation of the first mental tests. He was interested in measuring humans in every way possible. This included measuring their ability to make sensory discriminations, which he assumed was linked to intellectual prowess. Galton suggested that individual differences in general ability are reflected in performance on relatively simple sensory capacities and in speed of reaction to a stimulus, variables that could be objectively measured by tests of sensory discrimination and reaction time. He also measured how quickly people reacted, which he later linked to internal wiring that he believed ultimately limited intelligence. Throughout his research Galton assumed that people who reacted faster were more intelligent than others.
Composite photography
Galton also devised a technique called "composite portraiture", produced by superimposing multiple photographic portraits of individuals' faces, registered on their eyes, onto the same photographic plate, thereby yielding a blended whole, or "composite", that he hoped could generalise the facial appearance of a group, such as Jewish men, criminals, or patients with tuberculosis, into an "average" or "central type" (see averageness). In the 1990s, a hundred years after his discovery, much psychological research examined the attractiveness of these faces, an aspect that Galton had remarked on in his original lecture. Others, including Sigmund Freud in his work on dreams, picked up Galton's suggestion that these composites might represent a useful metaphor for an Ideal type or a concept of a "natural kind" (see Eleanor Rosch). (See also entry Modern physiognomy under Physiognomy.)
This work began in the 1880s while the Jewish scholar Joseph Jacobs studied anthropology and statistics with Francis Galton. Jacobs asked Galton to create a composite photograph of a Jewish type. One of Jacobs' first publications that used Galton's composite imagery was "The Jewish Type, and Galton's Composite Photographs", Photographic News, 29, (24 April 1885): 268–269.
Galton hoped his technique would aid medical diagnosis, and even criminology through the identification of typical criminal faces. However, his technique did not prove useful and fell into disuse, despite much further work on it, including by the photographers Lewis Hine, John L. Lovell, and Arthur Batut.
Fingerprints
The method of identifying criminals by their fingerprints had been introduced in the 1860s by Sir William James Herschel in India, and their potential use in forensic work was first proposed by Dr Henry Faulds in 1880. Galton was introduced to the field by his half-cousin Charles Darwin, who was a friend of Faulds, and he went on to create the first scientific footing for the study (which assisted its acceptance by the courts), although Galton never acknowledged that the original idea was not his.
In a Royal Institution paper in 1888 and three books (Finger Prints, 1892; Decipherment of Blurred Finger Prints, 1893; and Fingerprint Directories, 1895), Galton estimated the probability of two persons having the same fingerprint and studied the heritability and racial differences in fingerprints. He wrote about the technique (inadvertently sparking a controversy between Herschel and Faulds that was to last until 1917), identifying common patterns in fingerprints and devising a classification system that survives to this day. He described and classified them into eight broad categories: 1: plain arch, 2: tented arch, 3: simple loop, 4: central pocket loop, 5: double loop, 6: lateral pocket loop, 7: plain whorl, and 8: accidental.
Views
In 1873, Galton wrote a letter to The Times titled "Africa for the Chinese", where he argued that the Chinese, as a race capable of high civilisation and only temporarily stunted by the recent failures of Chinese dynasties, should be encouraged to immigrate to Africa and displace the inferior aboriginal blacks.
According to an editorial in Nature: "Galton also constructed a racial hierarchy, in which white people were considered superior. He wrote that the average intellectual standard of the negro race is some two grades below our own (the Anglo Saxon)." According to the Encyclopedia of Genocide, Galton bordered on the justification of genocide when he stated: "There exists a sentiment, for the most part quite unreasonable, against the gradual extinction of an inferior race."
In an effort to reach a wider audience, Galton worked on a novel entitled Kantsaywhere from May until December 1910. The novel described a utopia organised by a eugenic religion, designed to breed fitter and smarter humans. His unpublished notebooks show that this was an expansion of material he had been composing since at least 1901. He offered it to Methuen for publication, but they showed little enthusiasm. Galton wrote to his niece that it should be either "smothered or superseded". His niece appears to have burnt most of the novel, offended by the love scenes, but large fragments survived, and it was published online by University College, London.
Personal life and character
In January 1853, Galton met Louisa Jane Butler (1822–1897) at his neighbour's home, and they were married on 1 August 1853. The union of 43 years proved childless.
It has been written of Galton that "On his own estimation he was a supremely intelligent man." Later in life, Galton proposed a connection between genius and insanity based on his own experience. Attestations and descriptions of Galton's character were made by Beatrice Webb, James Arthur Harris, and Karl Pearson. He also corresponded with Beatrix Lucia Catherine Tollemache.
Galton is buried in the family tomb in the churchyard of St Michael and All Angels, in the village of Claverdon, Warwickshire.
Awards and influence
Over the course of his career Galton received many awards, including the Copley Medal of the Royal Society (1910). He received in 1853 the Founder's Medal, the highest award of the Royal Geographical Society, for his explorations and map-making of southwest Africa. He was elected a member of the Athenaeum Club in 1855 and made a Fellow of the Royal Society in 1860. His autobiography also lists:
Silver Medal, French Geographical Society (1854)
Gold Medal of the Royal Society (1886)
Officier de l'Instruction Publique, France (1891)
D.C.L. Oxford (1894)
Sc.D. (Honorary), Cambridge (1895)
Huxley Medal, Royal Anthropological Institute (1901)
Elected Hon. Fellow Trinity College, Cambridge (1902)
Darwin Medal, Royal Society (1902)
Linnean Society of London's Darwin–Wallace Medal (1908)
Galton was knighted in 1909.
His statistical heir Karl Pearson, first holder of the Galton Chair of Eugenics at University College, London (now Galton Chair of Genetics), wrote a three-volume biography of Galton, in four parts, after his death.
The flowering plant genus Galtonia was named after Galton.
University College London has in the twenty-first century been involved in a historical inquiry into its role as the institutional birthplace of eugenics. Galton established a laboratory at UCL in 1904. Some students and staff have called on the university to rename its Galton lecture theatre, with journalist Angela Saini stating, "Galton's seductive promise was of a bold new world filled only with beautiful, intelligent, productive people. The scientists in its thrall claimed this could be achieved by controlling reproduction, policing borders to prevent certain types of immigrants, and locking away "undesirables", including disabled people."
In June 2020, University College London (UCL) announced the renaming of a lecture theatre named after Galton because of his connection with eugenics.
Published works
See also
Eugenics
Eugenics in the United States
Founders of statistics
Galton Laboratory
Historiometry
Hereditarianism
History of evolutionary thought
New eugenics
Social darwinism
Social effects of evolutionary theory
References
Citations
Sources
Further reading
External links
Catalogue of the Galton papers held at University College London
1822 births
1911 deaths
19th-century British geographers
19th-century British explorers
19th-century English scientists
19th-century English mathematicians
19th-century British anthropologists
19th-century British inventors
19th-century British biologists
Alumni of King's College London
Alumni of Trinity College, Cambridge
Anthropometry
Biopolitics
Critics of Lamarckism
Darwin–Wedgwood family
English anthropologists
British eugenicists
English geographers
English inventors
English meteorologists
English statisticians
British evolutionary biologists
British explorers of Africa
Fellows of the Royal Geographical Society
Fellows of the Royal Society
Fellows of the Royal Anthropological Institute of Great Britain and Ireland
Presidents of the Royal Anthropological Institute of Great Britain and Ireland
Freemasons of the United Grand Lodge of England
Independent scientists
Intelligence researchers
Knights Bachelor
People educated at King Edward's School, Birmingham
People from Birmingham, West Midlands
Probability theorists
Psychometricians
Recipients of the Copley Medal
Royal Medal winners | Francis Galton | [
"Engineering",
"Biology"
] | 7,642 | [
"Biopolitics",
"Genetic engineering"
] |
49,089 | https://en.wikipedia.org/wiki/Cox%27s%20theorem | Cox's theorem, named after the physicist Richard Threlkeld Cox, is a derivation of the laws of probability theory from a certain set of postulates. This derivation justifies the so-called "logical" interpretation of probability, as the laws of probability derived by Cox's theorem are applicable to any proposition. Logical (also known as objective Bayesian) probability is a type of Bayesian probability. Other forms of Bayesianism, such as the subjective interpretation, are given other justifications.
Cox's assumptions
Cox wanted his system to satisfy the following conditions:
Divisibility and comparability – The plausibility of a proposition is a real number and is dependent on information we have related to the proposition.
Common sense – Plausibilities should vary sensibly with the assessment of plausibilities in the model.
Consistency – If the plausibility of a proposition can be derived in many ways, all the results must be equal.
The postulates as stated here are taken from Arnborg and Sjödin.
"Common sense" includes consistency with Aristotelian logic in the sense that logically equivalent propositions shall have the same plausibility.
The postulates as originally stated by Cox were not mathematically rigorous (although more so than the informal description above), as noted by Halpern. However it appears to be possible to augment them with various mathematical assumptions made either implicitly or explicitly by Cox to produce a valid proof.
Cox's notation:
The plausibility of a proposition $A$ given some related information $X$ is denoted by $A \mid X$.
Cox's postulates and functional equations are:
The plausibility of the conjunction $A \wedge B$ of two propositions $A$, $B$, given some related information $X$, is determined by the plausibility of $A$ given $X$ and that of $B$ given $A \wedge X$.
In the form of a functional equation: $(A \wedge B) \mid X = g\big(A \mid X,\; B \mid (A \wedge X)\big)$
Because of the associative nature of the conjunction in propositional logic, the consistency with logic gives a functional equation saying that the function $g$ is an associative binary operation.
Additionally, Cox postulates the function $g$ to be monotonic.
All strictly increasing associative binary operations on the real numbers are isomorphic to multiplication of numbers in a subinterval of $[0, +\infty]$, which means that there is a monotonic function $w$ mapping plausibilities to $[0, +\infty]$ such that $w(A \wedge B \mid X) = w(A \mid X)\, w(B \mid A \wedge X).$
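Concretely, this isomorphism means that there is a continuous, strictly increasing function $W$ with

$$ g(x, y) = W^{-1}\big(W(x)\, W(y)\big), $$

so that taking $w = W$ turns the conjunction postulate into ordinary multiplication; this is the content of the associativity functional equation first used by Abel and proved by Aczél, as discussed below.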
In case $A$ given $X$ is certain, we have $(A \wedge B) \mid X = B \mid X$ and $B \mid (A \wedge X) = B \mid X$ due to the requirement of consistency. The general equation then leads to $w(B \mid X) = w(A \mid X)\, w(B \mid X).$
This shall hold for any proposition $B$, which leads to $w(A \mid X) = 1.$
In case $A$ given $X$ is impossible, we have $(A \wedge B) \mid X = A \mid X$ and $A \mid (B \wedge X) = A \mid X$ due to the requirement of consistency. The general equation (with the $A$ and $B$ factors switched) then leads to $w(A \mid X) = w(B \mid X)\, w(A \mid X).$
This shall hold for any proposition $B$, which, without loss of generality, leads to the solution $w(A \mid X) = 0.$
Due to the requirement of monotonicity, this means that $w$ maps plausibilities to the interval $[0, 1]$.
The plausibility of a proposition determines the plausibility of the proposition's negation.
This postulates the existence of a function $f$ such that $w(\neg A \mid X) = f\big(w(A \mid X)\big).$
Because "a double negative is an affirmative", consistency with logic gives the functional equation $f\big(f(x)\big) = x,$ saying that the function $f$ is an involution, i.e., it is its own inverse.
Furthermore, Cox postulates the function $f$ to be monotonic.
The above functional equations and consistency with logic imply that $w(A \wedge B \mid X) = w(A \mid X)\, w(B \mid A \wedge X) = w(A \mid X)\, f\big(w(\neg B \mid A \wedge X)\big) = w(A \mid X)\, f\!\left(\frac{w(A \wedge \neg B \mid X)}{w(A \mid X)}\right).$
Since $A \wedge B$ is logically equivalent to $B \wedge A$, we also get $w(A \mid X)\, f\!\left(\frac{w(A \wedge \neg B \mid X)}{w(A \mid X)}\right) = w(B \mid X)\, f\!\left(\frac{w(B \wedge \neg A \mid X)}{w(B \mid X)}\right).$
If, in particular, $\neg B$ implies $A$ (and hence $\neg A$ implies $B$), then also $A \wedge \neg B = \neg B$ and $B \wedge \neg A = \neg A$ and we get $w(A \mid X)\, f\!\left(\frac{w(\neg B \mid X)}{w(A \mid X)}\right) = w(B \mid X)\, f\!\left(\frac{w(\neg A \mid X)}{w(B \mid X)}\right)$
and $w(A \mid X)\, f\!\left(\frac{f\big(w(B \mid X)\big)}{w(A \mid X)}\right) = w(B \mid X)\, f\!\left(\frac{f\big(w(A \mid X)\big)}{w(B \mid X)}\right).$
Abbreviating $w(A \mid X) = x$ and $w(B \mid X) = y$ we get the functional equation $x\, f\!\left(\frac{f(y)}{x}\right) = y\, f\!\left(\frac{f(x)}{y}\right).$
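Under Cox's regularity assumptions, the general monotonic solution of this functional equation is

$$ f(x) = \left(1 - x^{m}\right)^{1/m} $$

for some positive constant $m$; one can check directly that it is an involution and that both sides of the functional equation reduce to $(x^{m} + y^{m} - 1)^{1/m}$. This is what yields the negation rule $w^{m}(A \mid B) + w^{m}(\neg A \mid B) = 1$ stated in the next section.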
Implications of Cox's postulates
The laws of probability derivable from these postulates are the following. Let $A \mid B$ be the plausibility of the proposition $A$ given $B$ satisfying Cox's postulates. Then there is a function $w$ mapping plausibilities to the interval $[0,1]$ and a positive number $m$ such that
1. Certainty is represented by $w(A \mid B) = 1$.
2. $w^{m}(A \mid B) + w^{m}(\neg A \mid B) = 1.$
3. $w(A \wedge B \mid C) = w(A \mid C)\, w(B \mid A \wedge C).$
It is important to note that the postulates imply only these general properties. We may recover the usual laws of probability by setting a new function, conventionally denoted $P$ or $\Pr$, equal to $w^{m}$. Then we obtain the laws of probability in a more familiar form:
1. Certain truth is represented by $P(A \mid B) = 1$, and certain falsehood by $P(A \mid B) = 0.$
2. $P(A \mid B) + P(\neg A \mid B) = 1.$
3. $P(A \wedge B \mid C) = P(A \mid C)\, P(B \mid A \wedge C).$
Rule 2 is a rule for negation, and rule 3 is a rule for conjunction. Given that any proposition containing conjunction, disjunction, and negation can be equivalently rephrased using conjunction and negation alone (the conjunctive normal form), we can now handle any compound proposition.
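For example, the rule for disjunction follows from the rules for negation and conjunction via De Morgan's law $A \vee B = \neg(\neg A \wedge \neg B)$:

$$ P(A \vee B \mid X) = 1 - P(\neg A \wedge \neg B \mid X) = 1 - P(\neg A \mid X)\, P(\neg B \mid \neg A \wedge X) = P(A \mid X) + P(B \mid X) - P(A \wedge B \mid X). $$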
The laws thus derived yield finite additivity of probability, but not countable additivity. The measure-theoretic formulation of Kolmogorov assumes that a probability measure is countably additive. This slightly stronger condition is necessary for certain results. An elementary example (in which this assumption merely simplifies the calculation rather than being necessary for it) is that the probability of seeing heads for the first time after an even number of flips in a sequence of coin flips is $\tfrac{1}{3}$.
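The value follows from a geometric series, assuming a fair coin: the first head occurring on flip $2k$ requires $2k - 1$ tails followed by a head, an event of probability $(1/2)^{2k}$, so

$$ \sum_{k=1}^{\infty} \left(\frac{1}{2}\right)^{2k} = \sum_{k=1}^{\infty} \frac{1}{4^{k}} = \frac{1/4}{1 - 1/4} = \frac{1}{3}. $$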
Interpretation and further discussion
Cox's theorem has come to be used as one of the justifications for the use of Bayesian probability theory. For example, in Jaynes it is discussed in detail in chapters 1 and 2 and is a cornerstone for the rest of the book. Probability is interpreted as a formal system of logic, the natural extension of Aristotelian logic (in which every statement is either true or false) into the realm of reasoning in the presence of uncertainty.
It has been debated to what degree the theorem excludes alternative models for reasoning about uncertainty. For example, if certain "unintuitive" mathematical assumptions were dropped then alternatives could be devised, e.g., an example provided by Halpern. However Arnborg and Sjödin suggest additional "common sense" postulates, which would allow the assumptions to be relaxed in some cases while still ruling out the Halpern example. Other approaches were devised by Hardy or Dupré and Tipler.
The original formulation of Cox's theorem is in Cox (1946), which is extended with additional results and more discussion in Cox (1961), The Algebra of Probable Inference. Jaynes cites Abel for the first known use of the associativity functional equation. János Aczél provides a long proof of the "associativity equation" (pages 256-267). Jaynes reproduces the shorter proof by Cox in which differentiability is assumed. A guide to Cox's theorem by Van Horn aims at comprehensively introducing the reader to all these references.
Baoding Liu, the founder of uncertainty theory, criticizes Cox's theorem for presuming that the truth value of the conjunction $P \wedge Q$ is a twice differentiable function $f$ of the truth values of the two propositions $P$ and $Q$, i.e., $T(P \wedge Q) = f\big(T(P), T(Q)\big)$, which excludes uncertainty theory's "uncertain measure" from its start, because the function $T(P \wedge Q) = \min\big(T(P), T(Q)\big)$, used in uncertainty theory, is not differentiable with respect to $T(P)$ and $T(Q)$. According to Liu, "there does not exist any evidence that the truth value of conjunction is completely determined by the truth values of individual propositions, let alone a twice differentiable function."
See also
Probability axioms
Probability logic
Notes
References
Further reading
Probability theorems
Probability interpretations
Theorems in statistics | Cox's theorem | [
"Mathematics"
] | 1,428 | [
"Theorems in statistics",
"Probability interpretations",
"Theorems in probability theory",
"Mathematical problems",
"Mathematical theorems"
] |
49,090 | https://en.wikipedia.org/wiki/Ohm%27s%20law | Ohm's law states that the electric current through a conductor between two points is directly proportional to the voltage across the two points. Introducing the constant of proportionality, the resistance, one arrives at the three mathematical equations used to describe this relationship:
where $I$ is the current through the conductor, $V$ is the voltage measured across the conductor and $R$ is the resistance of the conductor. More specifically, Ohm's law states that the $R$ in this relation is constant, independent of the current. If the resistance is not constant, the previous equation cannot be called Ohm's law, but it can still be used as a definition of static/DC resistance. Ohm's law is an empirical relation which accurately describes the conductivity of the vast majority of electrically conductive materials over many orders of magnitude of current. However some materials do not obey Ohm's law; these are called non-ohmic.
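As a simple worked example, a 6 Ω resistor connected across a 12 V battery carries a current of

$$ I = \frac{V}{R} = \frac{12\ \mathrm{V}}{6\ \Omega} = 2\ \mathrm{A}, $$

and the same relation rearranged gives either of the other two quantities when the remaining two are known.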
The law was named after the German physicist Georg Ohm, who, in a treatise published in 1827, described measurements of applied voltage and current through simple electrical circuits containing various lengths of wire. Ohm explained his experimental results by a slightly more complex equation than the modern form above (see below).
In physics, the term Ohm's law is also used to refer to various generalizations of the law; for example the vector form of the law used in electromagnetics and material science: $\mathbf{J} = \sigma \mathbf{E},$
where J is the current density at a given location in a resistive material, E is the electric field at that location, and σ (sigma) is a material-dependent parameter called the conductivity, defined as the inverse of resistivity ρ (rho). This reformulation of Ohm's law is due to Gustav Kirchhoff.
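The two forms agree for a uniform conductor of length $\ell$ and cross-sectional area $A$: with a uniform field $E = V/\ell$ and current density $J = I/A$, the relation $J = \sigma E$ gives

$$ I = \frac{\sigma A}{\ell}\, V = \frac{V}{R}, \qquad R = \frac{\rho\, \ell}{A}, $$

recovering the familiar expression for the resistance of a wire in terms of its resistivity and geometry.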
History
In January 1781, before Georg Ohm's work, Henry Cavendish experimented with Leyden jars and glass tubes of varying diameter and length filled with salt solution. He measured the current by noting how strong a shock he felt as he completed the circuit with his body. Cavendish wrote that the "velocity" (current) varied directly as the "degree of electrification" (voltage). He did not communicate his results to other scientists at the time, and his results were unknown until James Clerk Maxwell published them in 1879.
Francis Ronalds delineated "intensity" (voltage) and "quantity" (current) for the dry pile—a high voltage source—in 1814 using a gold-leaf electrometer. He found for a dry pile that the relationship between the two parameters was not proportional under certain meteorological conditions.
Ohm did his work on resistance in the years 1825 and 1826, and published his results in 1827 as the book Die galvanische Kette, mathematisch bearbeitet ("The galvanic circuit investigated mathematically"). He drew considerable inspiration from Joseph Fourier's work on heat conduction in the theoretical explanation of his work. For experiments, he initially used voltaic piles, but later used a thermocouple as this provided a more stable voltage source in terms of internal resistance and constant voltage. He used a galvanometer to measure current, and knew that the voltage between the thermocouple terminals was proportional to the junction temperature. He then added test wires of varying length, diameter, and material to complete the circuit. He found that his data could be modeled through the equation $x = \frac{a}{b + \ell},$
where x was the reading from the galvanometer, ℓ was the length of the test conductor, a depended on the thermocouple junction temperature, and b was a constant of the entire setup. From this, Ohm determined his law of proportionality and published his results.
In modern notation we would write,
where is the open-circuit emf of the thermocouple, is the internal resistance of the thermocouple and is the resistance of the test wire. In terms of the length of the wire this becomes,
where is the resistance of the test wire per unit length. Thus, Ohm's coefficients are,
Ohm's law was probably the most important of the early quantitative descriptions of the physics of electricity. We consider it almost obvious today. When Ohm first published his work, this was not the case; critics reacted to his treatment of the subject with hostility. They called his work a "web of naked fancies" and the Minister of Education proclaimed that "a professor who preached such heresies was unworthy to teach science." The prevailing scientific philosophy in Germany at the time asserted that experiments need not be performed to develop an understanding of nature because nature is so well ordered, and that scientific truths may be deduced through reasoning alone. Also, Ohm's brother Martin, a mathematician, was battling the German educational system. These factors hindered the acceptance of Ohm's work, and his work did not become widely accepted until the 1840s. However, Ohm received recognition for his contributions to science well before he died.
In the 1850s, Ohm's law was widely known and considered proved. Alternatives such as "Barlow's law" were discredited in terms of real applications to telegraph system design, as discussed by Samuel F. B. Morse in 1855.
The electron was discovered in 1897 by J. J. Thomson, and it was quickly realized that it was the particle (charge carrier) that carried electric currents in electric circuits. In 1900, the first (classical) model of electrical conduction, the Drude model, was proposed by Paul Drude, which finally gave a scientific explanation for Ohm's law. In this model, a solid conductor consists of a stationary lattice of atoms (ions), with conduction electrons moving randomly in it. A voltage across a conductor causes an electric field, which accelerates the electrons in the direction of the electric field, causing a drift of electrons which is the electric current. However the electrons collide with atoms which causes them to scatter and randomizes their motion, thus converting kinetic energy to heat (thermal energy). Using statistical distributions, it can be shown that the average drift velocity of the electrons, and thus the current, is proportional to the electric field, and thus the voltage, over a wide range of voltages.
The development of quantum mechanics in the 1920s modified this picture somewhat, but in modern theories the average drift velocity of electrons can still be shown to be proportional to the electric field, thus deriving Ohm's law. In 1927 Arnold Sommerfeld applied the quantum Fermi-Dirac distribution of electron energies to the Drude model, resulting in the free electron model. A year later, Felix Bloch showed that electrons move in waves (Bloch electrons) through a solid crystal lattice, so scattering off the lattice atoms as postulated in the Drude model is not a major process; the electrons scatter off impurity atoms and defects in the material. The final successor, the modern quantum band theory of solids, showed that the electrons in a solid cannot take on any energy as assumed in the Drude model but are restricted to energy bands, with gaps between them of energies that electrons are forbidden to have. The size of the band gap is a characteristic of a particular substance which has a great deal to do with its electrical resistivity, explaining why some substances are electrical conductors, some semiconductors, and some insulators.
While the old term for electrical conductance, the mho (the inverse of the resistance unit ohm), is still used, a new name, the siemens, was adopted in 1971, honoring Ernst Werner von Siemens. The siemens is preferred in formal papers.
In the 1920s, it was discovered that the current through a practical resistor actually has statistical fluctuations, which depend on temperature, even when voltage and resistance are exactly constant; this fluctuation, now known as Johnson–Nyquist noise, is due to the discrete nature of charge. This thermal effect implies that measurements of current and voltage that are taken over sufficiently short periods of time will yield ratios of V/I that fluctuate from the value of R implied by the time average or ensemble average of the measured current; Ohm's law remains correct for the average current, in the case of ordinary resistive materials.
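To give a sense of the scale of this effect, here is a minimal Python sketch using the standard Johnson–Nyquist formula V_rms = sqrt(4·kB·T·R·Δf); the resistor value, temperature and measurement bandwidth below are illustrative assumptions, not values from the text.
```python
import math

# Sketch of the magnitude of Johnson–Nyquist noise using the standard
# formula V_rms = sqrt(4 * kB * T * R * bandwidth). The resistance,
# temperature and bandwidth are assumed, illustrative inputs.
KB = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_vrms(R, T, bandwidth):
    return math.sqrt(4.0 * KB * T * R * bandwidth)

v_n = thermal_noise_vrms(R=10e3, T=300.0, bandwidth=10e3)
print(f"rms noise voltage ~ {v_n * 1e6:.2f} uV")  # ~1.3 uV for 10 kohm
```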
Ohm's work long preceded Maxwell's equations and any understanding of frequency-dependent effects in AC circuits. Modern developments in electromagnetic theory and circuit theory do not contradict Ohm's law when they are evaluated within the appropriate limits.
Scope
Ohm's law is an empirical law, a generalization from many experiments that have shown that current is approximately proportional to electric field for most materials. It is less fundamental than Maxwell's equations and is not always obeyed. Any given material will break down under a strong-enough electric field, and some materials of interest in electrical engineering are "non-ohmic" under weak fields.
Ohm's law has been observed on a wide range of length scales. In the early 20th century, it was thought that Ohm's law would fail at the atomic scale, but experiments have not borne out this expectation. As of 2012, researchers have demonstrated that Ohm's law works for silicon wires as small as four atoms wide and one atom high.
Microscopic origins
The dependence of the current density on the applied electric field is essentially quantum mechanical in nature (see Classical and quantum conductivity). A qualitative description leading to Ohm's law can be based upon classical mechanics using the Drude model developed by Paul Drude in 1900.
The Drude model treats electrons (or other charge carriers) like pinballs bouncing among the ions that make up the structure of the material. Electrons will be accelerated in the opposite direction to the electric field by the average electric field at their location. With each collision, though, the electron is deflected in a random direction with a velocity that is much larger than the velocity gained by the electric field. The net result is that electrons take a zigzag path due to the collisions, but generally drift in a direction opposing the electric field.
The drift velocity then determines the electric current density and its relationship to E and is independent of the collisions. Drude calculated the average drift velocity from p = −eEτ where p is the average momentum, −e is the charge of the electron and τ is the average time between the collisions. Since both the momentum and the current density are proportional to the drift velocity, the current density becomes proportional to the applied electric field; this leads to Ohm's law.
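A minimal sketch of the Drude estimate follows; the carrier density and collision time are assumed, illustrative values for copper, not figures from the text above.
```python
# Drude-model sketch: conductivity sigma = n e^2 tau / m and the drift
# velocity v_d = e E tau / m. Parameter values are assumed (roughly
# appropriate for copper) and purely illustrative.
E_CHARGE = 1.602e-19    # electron charge, C
M_ELECTRON = 9.109e-31  # electron mass, kg

def drude_conductivity(n, tau):
    """Conductivity from carrier density n (1/m^3) and collision time tau (s)."""
    return n * E_CHARGE**2 * tau / M_ELECTRON

def drift_velocity(E_field, tau):
    """Average drift velocity for an applied field E_field (V/m)."""
    return E_CHARGE * E_field * tau / M_ELECTRON

n_cu = 8.5e28     # assumed conduction-electron density for copper, 1/m^3
tau_cu = 2.5e-14  # assumed mean time between collisions, s
print(f"sigma ~ {drude_conductivity(n_cu, tau_cu):.2e} S/m")
print(f"v_d at 1 V/m ~ {drift_velocity(1.0, tau_cu):.2e} m/s")
```
The drift velocity that comes out (fractions of a millimetre per second) is tiny compared with the random thermal motion, which is the point of the zigzag picture above.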
Hydraulic analogy
A hydraulic analogy is sometimes used to describe Ohm's law. Water pressure, measured in pascals (or PSI), is the analog of voltage because establishing a water pressure difference between two points along a (horizontal) pipe causes water to flow. The water volume flow rate, as in liters per second, is the analog of current, as in coulombs per second. Finally, flow restrictors—such as apertures placed in pipes between points where the water pressure is measured—are the analog of resistors. We say that the rate of water flow through an aperture restrictor is proportional to the difference in water pressure across the restrictor. Similarly, the rate of flow of electrical charge, that is, the electric current, through an electrical resistor is proportional to the difference in voltage measured across the resistor. More generally, the hydraulic head may be taken as the analog of voltage, and Ohm's law is then analogous to Darcy's law which relates hydraulic head to the volume flow rate via the hydraulic conductivity.
Flow and pressure variables can be calculated in fluid flow network with the use of the hydraulic ohm analogy. The method can be applied to both steady and transient flow situations. In the linear laminar flow region, Poiseuille's law describes the hydraulic resistance of a pipe, but in the turbulent flow region the pressure–flow relations become nonlinear.
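A short sketch of the "hydraulic ohm" in the laminar region: Poiseuille's law gives a round pipe the resistance R = 8μL/(πr⁴), so Δp = RQ mirrors V = IR. The viscosity and pipe dimensions below are assumed, illustrative inputs.
```python
import math

# "Hydraulic ohm" sketch: in laminar flow, pressure drop = R * Q, with
# R = 8*mu*L/(pi*r^4) from Poiseuille's law. Values are illustrative.
def hydraulic_resistance(mu, length, radius):
    """Hydraulic resistance of a round pipe, Pa·s/m^3 (laminar flow only)."""
    return 8.0 * mu * length / (math.pi * radius**4)

mu_water = 1.0e-3  # dynamic viscosity of water, Pa·s (approximate)
R = hydraulic_resistance(mu_water, length=2.0, radius=0.01)
Q = 1.0e-4         # volume flow rate, m^3/s
print(f"pressure drop = {R * Q:.1f} Pa")  # analog of V = I * R
```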
The hydraulic analogy to Ohm's law has been used, for example, to approximate blood flow through the circulatory system.
Circuit analysis
In circuit analysis, three equivalent expressions of Ohm's law are used interchangeably:
V = IR,  I = V/R,  R = V/I
Each equation is quoted by some sources as the defining relationship of Ohm's law, or all three are quoted, or derived from a proportional form, or even just the two that do not correspond to Ohm's original statement may sometimes be given.
The interchangeability of the equations may be represented by a triangle, where V (voltage) is placed on the top section, the I (current) is placed in the left section, and the R (resistance) is placed in the right. The divider between the top and bottom sections indicates division (hence the division bar).
Resistive circuits
Resistors are circuit elements that impede the passage of electric charge in agreement with Ohm's law, and are designed to have a specific resistance value R. In schematic diagrams, a resistor is shown as a long rectangle or zig-zag symbol. An element (resistor or conductor) that behaves according to Ohm's law over some operating range is referred to as an ohmic device (or an ohmic resistor) because Ohm's law and a single value for the resistance suffice to describe the behavior of the device over that range.
Ohm's law holds for circuits containing only resistive elements (no capacitances or inductances) for all forms of driving voltage or current, regardless of whether the driving voltage or current is constant (DC) or time-varying such as AC. At any instant of time Ohm's law is valid for such circuits.
Resistors which are in series or in parallel may be grouped together into a single "equivalent resistance" in order to apply Ohm's law in analyzing the circuit.
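A minimal sketch of this workflow, reducing a small assumed network to a single equivalent resistance and then applying I = V/R:
```python
# Equivalent-resistance sketch: resistors in series add; resistors in
# parallel add as reciprocals. The example network is an assumption.
def series(*resistors):
    return sum(resistors)

def parallel(*resistors):
    return 1.0 / sum(1.0 / r for r in resistors)

# Example: 100 ohm in series with (220 ohm || 330 ohm), driven at 12 V.
r_eq = series(100.0, parallel(220.0, 330.0))
v = 12.0
i = v / r_eq  # Ohm's law: I = V / R
print(f"R_eq = {r_eq:.1f} ohm, I = {i * 1000:.1f} mA")
```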
Reactive circuits with time-varying signals
When reactive elements such as capacitors, inductors, or transmission lines are involved in a circuit to which AC or time-varying voltage or current is applied, the relationship between voltage and current becomes the solution to a differential equation, so Ohm's law (as defined above) does not directly apply since that form contains only resistances having value R, not complex impedances which may contain capacitance (C) or inductance (L).
Equations for time-invariant AC circuits take the same form as Ohm's law. However, the variables are generalized to complex numbers and the current and voltage waveforms are complex exponentials.
In this approach, a voltage or current waveform takes the form Ae^(st), where t is time, s is a complex parameter, and A is a complex scalar. In any linear time-invariant system, all of the currents and voltages can be expressed with the same s parameter as the input to the system, allowing the time-varying complex exponential term to be canceled out and the system described algebraically in terms of the complex scalars in the current and voltage waveforms.
The complex generalization of resistance is impedance, usually denoted Z; it can be shown that for an inductor,
Z = sL
and for a capacitor,
Z = 1/(sC)
We can now write,
V = IZ
where V and I are the complex scalars in the voltage and current respectively and Z is the complex impedance.
This form of Ohm's law, with Z taking the place of R, generalizes the simpler form. When Z is complex, only the real part is responsible for dissipating heat.
In a general AC circuit, Z varies strongly with the frequency parameter s, and so also will the relationship between voltage and current.
For the common case of a steady sinusoid, the s parameter is taken to be jω, corresponding to a complex sinusoid Ae^(jωt). The real parts of such complex current and voltage waveforms describe the actual sinusoidal currents and voltages in a circuit, which can be in different phases due to the different complex scalars.
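A minimal sketch of the phasor form V = IZ, using Python's built-in complex numbers with s = jω; the series RLC circuit and its component values are assumed for illustration.
```python
import math

# Phasor-form Ohm's law sketch: compute the complex impedance of an
# assumed series RLC circuit at a steady sinusoidal frequency f, then
# apply I = V / Z. Component values are illustrative.
def series_rlc_impedance(R, L, C, f):
    s = 1j * 2 * math.pi * f        # s = j*omega for a steady sinusoid
    return R + s * L + 1 / (s * C)  # Z_R + Z_L + Z_C

V = 10.0  # drive amplitude (complex scalar), volts
Z = series_rlc_impedance(R=50.0, L=10e-3, C=1e-6, f=1000.0)
I = V / Z  # generalized Ohm's law
print(f"|Z| = {abs(Z):.1f} ohm, |I| = {abs(I) * 1000:.1f} mA, "
      f"phase = {math.degrees(math.atan2(I.imag, I.real)):.1f} deg")
```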
Linear approximations
Ohm's law is one of the basic equations used in the analysis of electrical circuits. It applies to both metal conductors and circuit components (resistors) specifically made for this behaviour. Both are ubiquitous in electrical engineering. Materials and components that obey Ohm's law are described as "ohmic" which means they produce the same value for resistance (R = V/I) regardless of the value of V or I which is applied and whether the applied voltage or current is DC (direct current) of either positive or negative polarity or AC (alternating current).
In a true ohmic device, the same value of resistance will be calculated from R = V/I regardless of the value of the applied voltage V. That is, the ratio of V/I is constant, and when current is plotted as a function of voltage the curve is linear (a straight line). If voltage is forced to some value V, then that voltage V divided by measured current I will equal R. Or if the current is forced to some value I, then the measured voltage V divided by that current I is also R. Since the plot of I versus V is a straight line, then it is also true that for any set of two different voltages V1 and V2 applied across a given device of resistance R, producing currents I1 = V1/R and I2 = V2/R, that the ratio (V1 − V2)/(I1 − I2) is also a constant equal to R. The operator "delta" (Δ) is used to represent a difference in a quantity, so we can write ΔV = V1 − V2 and ΔI = I1 − I2. Summarizing, for any truly ohmic device having resistance R, V/I = ΔV/ΔI = R for any applied voltage or current or for the difference between any set of applied voltages or currents.
There are, however, components of electrical circuits which do not obey Ohm's law; that is, their relationship between current and voltage (their I–V curve) is nonlinear (or non-ohmic). An example is the p–n junction diode (curve at right). As seen in the figure, the current does not increase linearly with applied voltage for a diode. One can determine a value of current (I) for a given value of applied voltage (V) from the curve, but not from Ohm's law, since the value of "resistance" is not constant as a function of applied voltage. Further, the current only increases significantly if the applied voltage is positive, not negative. The ratio V/I for some point along the nonlinear curve is sometimes called the static, or chordal, or DC, resistance, but as seen in the figure the value of total V over total I varies depending on the particular point along the nonlinear curve which is chosen. This means the "DC resistance" V/I at some point on the curve is not the same as what would be determined by applying an AC signal having peak amplitude ΔV volts or ΔI amps centered at that same point along the curve and measuring ΔV/ΔI. However, in some diode applications, the AC signal applied to the device is small and it is possible to analyze the circuit in terms of the dynamic, small-signal, or incremental resistance, defined as one over the slope of the I–V curve at the average value (DC operating point) of the voltage (that is, one over the derivative of current with respect to voltage). For sufficiently small signals, the dynamic resistance allows the Ohm's law small-signal resistance to be calculated as approximately one over the slope of a line drawn tangentially to the I–V curve at the DC operating point.
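The following sketch contrasts static and dynamic resistance for a diode. It assumes the Shockley diode equation with illustrative parameters; the saturation current Is, ideality factor n and thermal voltage Vt are assumptions, not values from the text.
```python
import math

# Static (V/I) vs dynamic (dV/dI) resistance for a non-ohmic device,
# modeled with the Shockley diode equation I = Is*(exp(V/(n*Vt)) - 1).
# IS, N and VT are assumed, illustrative parameters.
IS, N, VT = 1e-12, 1.0, 0.02585

def diode_current(v):
    return IS * (math.exp(v / (N * VT)) - 1.0)

v_op = 0.65                 # DC operating point, volts (assumed)
i_op = diode_current(v_op)
static_r = v_op / i_op      # chordal / DC resistance: total V over total I
# dI/dV = Is/(N*Vt) * exp(V/(N*Vt)); dynamic resistance is its reciprocal
dynamic_r = (N * VT) / (IS * math.exp(v_op / (N * VT)))
print(f"static R = {static_r:.1f} ohm, dynamic r = {dynamic_r:.2f} ohm")
```
The two numbers differ by more than an order of magnitude at a typical operating point, which is exactly why a single "resistance" cannot describe the device.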
Temperature effects
Ohm's law has sometimes been stated as, "for a conductor in a given state, the electromotive force is proportional to the current produced." That is, the resistance, the ratio of the applied electromotive force (or voltage) to the current, "does not vary with the current strength." The qualifier "in a given state" is usually interpreted as meaning "at a constant temperature," since the resistivity of materials is usually temperature dependent. Because the conduction of current is related to Joule heating of the conducting body, according to Joule's first law, the temperature of a conducting body may change when it carries a current. The dependence of resistance on temperature therefore makes resistance depend upon the current in a typical experimental setup, making the law in this form difficult to directly verify. Maxwell and others worked out several methods to test the law experimentally in 1876, controlling for heating effects. Usually, the measurements of a sample resistance are carried out at low currents to prevent Joule heating. However, even a small current causes heating (cooling) at the first (second) sample contact due to the Peltier effect. The temperatures at the sample contacts become different, and their difference is linear in current. The voltage drop across the circuit additionally includes the Seebeck thermoelectromotive force, which is again linear in current. As a result, there is a thermal correction to the sample resistance even at negligibly small current. The magnitude of the correction may be comparable with the sample resistance.
Relation to heat conduction
Ohm's principle predicts the flow of electrical charge (i.e. current) in electrical conductors when subjected to the influence of voltage differences; Jean-Baptiste-Joseph Fourier's principle predicts the flow of heat in heat conductors when subjected to the influence of temperature differences.
The same equation describes both phenomena, the equation's variables taking on different meanings in the two cases. Specifically, solving a heat conduction (Fourier) problem with temperature (the driving "force") and flux of heat (the rate of flow of the driven "quantity", i.e. heat energy) variables also solves an analogous electrical conduction (Ohm) problem having electric potential (the driving "force") and electric current (the rate of flow of the driven "quantity", i.e. charge) variables.
The basis of Fourier's work was his clear conception and definition of thermal conductivity. He assumed that, all else being the same, the flux of heat is strictly proportional to the gradient of temperature. Although undoubtedly true for small temperature gradients, strictly proportional behavior will be lost when real materials (e.g. ones having a thermal conductivity that is a function of temperature) are subjected to large temperature gradients.
A similar assumption is made in the statement of Ohm's law: other things being alike, the strength of the current at each point is proportional to the gradient of electric potential. The accuracy of the assumption that flow is proportional to the gradient is more readily tested, using modern measurement methods, for the electrical case than for the heat case.
Other versions
Ohm's law, in the form above, is an extremely useful equation in the field of electrical/electronic engineering because it describes how voltage, current and resistance are interrelated on a "macroscopic" level, that is, commonly, as circuit elements in an electrical circuit. Physicists who study the electrical properties of matter at the microscopic level use a closely related and more general vector equation, sometimes also referred to as Ohm's law, having variables that are closely related to the V, I, and R scalar variables of Ohm's law, but which are each functions of position within the conductor. Physicists often use this continuum form of Ohm's Law:
E = ρJ
where E is the electric field vector with units of volts per meter (analogous to V of Ohm's law which has units of volts), J is the current density vector with units of amperes per unit area (analogous to I of Ohm's law which has units of amperes), and ρ "rho" is the resistivity with units of ohm·meters (analogous to R of Ohm's law which has units of ohms). The above equation is also written as J = σE where σ "sigma" is the conductivity which is the reciprocal of ρ.
The voltage between two points is defined as:
V = −∫ E · dl
with dl the element of path along the integration of the electric field vector E. If the applied E field is uniform and oriented along the length of the conductor as shown in the figure, then defining the voltage V in the usual convention of being opposite in direction to the field (see figure), and with the understanding that the voltage V is measured differentially across the length of the conductor allowing us to drop the Δ symbol, the above vector equation reduces to the scalar equation:
V = Eℓ, or equivalently, E = V/ℓ
Since the field is uniform in the direction of wire length, for a conductor having uniformly consistent resistivity ρ, the current density will also be uniform in any cross-sectional area and oriented in the direction of wire length, so we may write:
J = I/a
Substituting the above 2 results (for E and J respectively) into the continuum form shown at the beginning of this section:
V/ℓ = ρ(I/a), that is, V = (ρℓ/a)I
The electrical resistance of a uniform conductor is given in terms of resistivity by:
R = ρℓ/a
where ℓ is the length of the conductor in SI units of meters, a is the cross-sectional area (for a round wire a = πr² if r is the radius) in units of meters squared, and ρ is the resistivity in units of ohm·meters.
After substitution of R from the above equation into the equation preceding it, the continuum form of Ohm's law for a uniform field (and uniform current density) oriented along the length of the conductor reduces to the more familiar form:
V = IR
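A minimal sketch of this reduced form, assuming a commonly quoted room-temperature resistivity for copper (an assumed figure, not from the text above):
```python
import math

# Resistance of a uniform round wire from the reduced continuum form
# R = rho * length / area. The copper resistivity is an assumed,
# commonly quoted approximate value.
RHO_COPPER = 1.68e-8  # ohm·m

def wire_resistance(rho, length, radius):
    area = math.pi * radius**2  # cross-sectional area of a round wire
    return rho * length / area

R = wire_resistance(RHO_COPPER, length=10.0, radius=0.5e-3)
print(f"10 m of 1 mm-diameter copper wire: R ~ {R:.3f} ohm")
```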
A perfect crystal lattice, with low enough thermal motion and no deviations from periodic structure, would have no resistivity, but a real metal has crystallographic defects, impurities, multiple isotopes, and thermal motion of the atoms. Electrons scatter from all of these, resulting in resistance to their flow.
The more complex generalized forms of Ohm's law are important to condensed matter physics, which studies the properties of matter and, in particular, its electronic structure. In broad terms, they fall under the topic of constitutive equations and the theory of transport coefficients.
Magnetic effects
If an external B-field is present and the conductor is not at rest but moving at velocity v, then an extra term must be added to account for the current induced by the Lorentz force on the charge carriers:
J = σ(E + v × B)
In the rest frame of the moving conductor this term drops out because v = 0. There is no contradiction because the electric field in the rest frame differs from the E-field in the lab frame: E′ = E + v × B.
Electric and magnetic fields are relative, see Lorentz transformation.
If the current is alternating because the applied voltage or E-field varies in time, then reactance must be added to resistance to account for self-inductance, see electrical impedance. The reactance may be strong if the frequency is high or the conductor is coiled.
Conductive fluids
In a conductive fluid, such as a plasma, there is a similar effect. Consider a fluid moving with the velocity v in a magnetic field B. The relative motion induces an electric field which exerts electric force on the charged particles, giving rise to an electric current J. The equation of motion for the electron gas, with a number density n_e, is written as
m_e n_e dv_e/dt = −n_e e (E + v × B) − m_e n_e ν (v_e − v_i)
where e, m_e and v_e are the charge, mass and velocity of the electrons, respectively. Also, ν is the frequency of collisions of the electrons with ions which have a velocity field v_i. Since the electron has a very small mass compared with that of ions, we can ignore the left hand side of the above equation to write
σ(E + v × B) = J
where we have used the definition of the current density, and also put σ = n_e e²/(m_e ν), which is the electrical conductivity. This equation can also be equivalently written as
E + v × B = ρJ
where ρ = 1/σ is the electrical resistivity. It is also common to write η instead of ρ, which can be confusing since it is the same notation used for the magnetic diffusivity defined as η = 1/(μ₀σ).
See also
Fick's law of diffusion
Hopkinson's law ("Ohm's law for magnetics")
Maximum power transfer theorem
Norton's theorem
Electric power
Sheet resistance
Superposition theorem
Thermal noise
Thévenin's theorem
Uses
LED-Resistor circuit
References
Further reading
Ohm's Law chapter from Lessons In Electric Circuits Vol 1 DC book and series.
John C. Shedd and Mayo D. Hershey, "The History of Ohm's Law", Popular Science, December 1913, pp. 599–614, Bonnier Corporation; gives the history of Ohm's investigations, prior work, Ohm's false equation in the first paper, and an illustration of Ohm's experimental apparatus.
Explores the conceptual change underlying Ohm's experimental work.
Kenneth L. Caneva, "Ohm, Georg Simon." Complete Dictionary of Scientific Biography. 2008
s:Scientific Memoirs/2/The Galvanic Circuit investigated Mathematically, a translation of Ohm's original paper.
External links
Ohms Law Calculator
Electronic engineering
Circuit theorems
Empirical laws
Eponymous laws of physics
Electrical resistance and conductance
Voltage
Law | Ohm's law | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 5,935 | [
"Equations of physics",
"Physical quantities",
"Electrical systems",
"Computer engineering",
"Quantity",
"Physical systems",
"Electrical engineering",
"Electronic engineering",
"Circuit theorems",
"Voltage",
"Wikipedia categories named after physical quantities",
"Electrical resistance and con... |
49,091 | https://en.wikipedia.org/wiki/Optical%20character%20recognition | Optical character recognition or optical character reader (OCR) is the electronic or mechanical conversion of images of typed, handwritten or printed text into machine-encoded text, whether from a scanned document, a photo of a document, a scene photo (for example the text on signs and billboards in a landscape photo) or from subtitle text superimposed on an image (for example: from a television broadcast).
Widely used as a form of data entry from printed paper data records – whether passport documents, invoices, bank statements, computerized receipts, business cards, mail, printed data, or any suitable documentation – it is a common method of digitizing printed texts so that they can be electronically edited, searched, stored more compactly, displayed online, and used in machine processes such as cognitive computing, machine translation, (extracted) text-to-speech, key data and text mining. OCR is a field of research in pattern recognition, artificial intelligence and computer vision.
Early versions needed to be trained with images of each character, and worked on one font at a time. Advanced systems capable of producing a high degree of accuracy for most fonts are now common, and with support for a variety of image file format inputs. Some systems are capable of reproducing formatted output that closely approximates the original page including images, columns, and other non-textual components.
History
Early optical character recognition may be traced to technologies involving telegraphy and creating reading devices for the blind. In 1914, Emanuel Goldberg developed a machine that read characters and converted them into standard telegraph code. Concurrently, Edmund Fournier d'Albe developed the Optophone, a handheld scanner that when moved across a printed page, produced tones that corresponded to specific letters or characters.
In the late 1920s and into the 1930s, Emanuel Goldberg developed what he called a "Statistical Machine" for searching microfilm archives using an optical code recognition system. In 1931, he was granted US Patent number 1,838,389 for the invention. The patent was acquired by IBM.
Visually impaired users
In 1974, Ray Kurzweil started the company Kurzweil Computer Products, Inc. and continued development of omni-font OCR, which could recognize text printed in virtually any font. (Kurzweil is often credited with inventing omni-font OCR, but it was in use by companies, including CompuScan, in the late 1960s and 1970s.) Kurzweil used the technology to create a reading machine for blind people to have a computer read text to them out loud. The device included a CCD-type flatbed scanner and a text-to-speech synthesizer. On January 13, 1976, the finished product was unveiled during a widely reported news conference headed by Kurzweil and the leaders of the National Federation of the Blind. In 1978, Kurzweil Computer Products began selling a commercial version of the optical character recognition computer program. LexisNexis was one of the first customers, and bought the program to upload legal paper and news documents onto its nascent online databases. Two years later, Kurzweil sold his company to Xerox, which eventually spun it off as Scansoft, which merged with Nuance Communications.
In the 2000s, OCR was made available online as a service (WebOCR), in a cloud computing environment, and in mobile applications like real-time translation of foreign-language signs on a smartphone. With the advent of smartphones and smartglasses, OCR can be used in internet connected mobile device applications that extract text captured using the device's camera. Devices that do not have built-in OCR functionality will typically use an OCR API to extract the text from the image file captured by the device. The OCR API returns the extracted text, along with information about the location of the detected text in the original image, back to the device app for further processing (such as text-to-speech) or display.
Various commercial and open source OCR systems are available for most common writing systems, including Latin, Cyrillic, Arabic, Hebrew, Indic, Bengali (Bangla), Devanagari, Tamil, Chinese, Japanese, and Korean characters.
Applications
OCR engines have been developed into software applications specializing in various subjects such as receipts, invoices, checks, and legal billing documents.
The software can be used for:
Entering data for business documents, e.g. checks, passports, invoices, bank statements and receipts
Automatic number-plate recognition
Passport recognition and information extraction in airports
Automatically extracting key information from insurance documents
Traffic-sign recognition
Extracting business card information into a contact list
Creating textual versions of printed documents, e.g. book scanning for Project Gutenberg
Making electronic images of printed documents searchable, e.g. Google Books
Converting handwriting in real-time to control a computer (pen computing)
Defeating or testing the robustness of CAPTCHA anti-bot systems, though these are specifically designed to prevent OCR.
Assistive technology for blind and visually impaired users
Writing instructions for vehicles by identifying CAD images in a database that are appropriate to the vehicle design as it changes in real time
Making scanned documents searchable by converting them to PDFs
Types
Optical character recognition (OCR) – targets typewritten text, one glyph or character at a time.
Optical word recognition – targets typewritten text, one word at a time (for languages that use a space as a word divider). Usually just called "OCR".
Intelligent character recognition (ICR) – also targets handwritten printscript or cursive text, one glyph or character at a time, usually involving machine learning.
Intelligent word recognition (IWR) – also targets handwritten printscript or cursive text, one word at a time. This is especially useful for languages where glyphs are not separated in cursive script.
OCR is generally an offline process, which analyses a static document. There are cloud based services which provide an online OCR API service. Handwriting movement analysis can be used as input to handwriting recognition. Instead of merely using the shapes of glyphs and words, this technique is able to capture motion, such as the order in which segments are drawn, the direction, and the pattern of putting the pen down and lifting it. This additional information can make the process more accurate. This technology is also known as "online character recognition", "dynamic character recognition", "real-time character recognition", and "intelligent character recognition".
Techniques
Pre-processing
OCR software often pre-processes images to improve the chances of successful recognition. Techniques include:
De-skewing – if the document was not aligned properly when scanned, it may need to be tilted a few degrees clockwise or counterclockwise in order to make lines of text perfectly horizontal or vertical.
Despeckling – removal of positive and negative spots, smoothing edges.
Binarization – conversion of an image from color or greyscale to black-and-white (called a binary image because there are two colors). The task is performed as a simple way of separating the text (or any other desired image component) from the background. Binarization is necessary since most commercial recognition algorithms work only on binary images, as these are simpler to process. In addition, the effectiveness of binarization influences the quality of character recognition to a significant extent, so careful decisions are made in the choice of the binarization employed for a given input image type, since the quality of the method used to obtain the binary result depends on the type of image (scanned document, scene text image, degraded historical document, etc.).
Line removal – cleaning up non-glyph boxes and lines.
Layout analysis or zoning – identification of columns, paragraphs, captions, etc. as distinct blocks. Especially important in multi-column layouts and tables.
Line and word detection – establishment of a baseline for word and character shapes, separating words as necessary.
Script recognition – in multilingual documents, the script may change at the level of the words and hence identification of the script is necessary before the right OCR can be invoked to handle the specific script.
Character isolation or segmentation – for per-character OCR, multiple characters that are connected due to image artifacts must be separated; single characters that are broken into multiple pieces due to artifacts must be connected.
Normalization of aspect ratio and scale
Segmentation of fixed-pitch fonts is accomplished relatively simply by aligning the image to a uniform grid based on where vertical grid lines will least often intersect black areas. For proportional fonts, more sophisticated techniques are needed because whitespace between letters can sometimes be greater than that between words, and vertical lines can intersect more than one character.
Text recognition
There are two basic types of core OCR algorithm, which may produce a ranked list of candidate characters.
Matrix matching involves comparing an image to a stored glyph on a pixel-by-pixel basis; it is also known as pattern matching, pattern recognition, or image correlation. This relies on the input glyph being correctly isolated from the rest of the image, and the stored glyph being in a similar font and at the same scale. This technique works best with typewritten text and does not work well when new fonts are encountered. Early physical photocell-based OCR implemented this technique quite directly.
Feature extraction decomposes glyphs into "features" like lines, closed loops, line direction, and line intersections. Extracting features reduces the dimensionality of the representation and makes the recognition process computationally efficient. These features are compared with an abstract vector-like representation of a character, which might reduce to one or more glyph prototypes. General techniques of feature detection in computer vision are applicable to this type of OCR, which is commonly seen in "intelligent" handwriting recognition and most modern OCR software. Nearest neighbour classifiers such as the k-nearest neighbors algorithm are used to compare image features with stored glyph features and choose the nearest match.
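A toy sketch of matrix matching follows. The 3×3 "font" of templates is an assumption for illustration; real engines first normalize and isolate each glyph, but the pixel-agreement scoring idea is the same.
```python
# Matrix-matching sketch: score a binary glyph bitmap against stored
# templates pixel by pixel and pick the best match. The tiny 3x3
# template "font" is an assumed, illustrative toy.
TEMPLATES = {
    "-": ["000", "111", "000"],
    "|": ["010", "010", "010"],
    "+": ["010", "111", "010"],
}

def match_score(glyph, template):
    """Fraction of pixels on which the two bitmaps agree."""
    agree = sum(g == t
                for grow, trow in zip(glyph, template)
                for g, t in zip(grow, trow))
    return agree / sum(len(row) for row in glyph)

def recognize(glyph):
    return max(TEMPLATES, key=lambda ch: match_score(glyph, TEMPLATES[ch]))

print(recognize(["010", "111", "010"]))  # exact match -> "+"
print(recognize(["010", "010", "011"]))  # noisy input, nearest -> "|"
```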
Software such as Cuneiform and Tesseract use a two-pass approach to character recognition. The second pass is known as adaptive recognition and uses the letter shapes recognized with high confidence on the first pass to better recognize the remaining letters on the second pass. This is advantageous for unusual fonts or low-quality scans where the font is distorted (e.g. blurred or faded).
Modern OCR software includes Google Docs OCR, ABBYY FineReader, and Transym. Others like OCRopus and Tesseract use neural networks which are trained to recognize whole lines of text instead of focusing on single characters.
A technique known as iterative OCR automatically crops a document into sections based on the page layout. OCR is then performed on each section individually using variable character confidence level thresholds to maximize page-level OCR accuracy. A patent from the United States Patent Office has been issued for this method.
The OCR result can be stored in the standardized ALTO format, a dedicated XML schema maintained by the United States Library of Congress. Other common formats include hOCR and PAGE XML.
For a list of optical character recognition software, see Comparison of optical character recognition software.
Post-processing
OCR accuracy can be increased if the output is constrained by a lexicon – a list of words that are allowed to occur in a document. This might be, for example, all the words in the English language, or a more technical lexicon for a specific field. This technique can be problematic if the document contains words not in the lexicon, like proper nouns. Tesseract uses its dictionary to influence the character segmentation step, for improved accuracy.
The output stream may be a plain text stream or file of characters, but more sophisticated OCR systems can preserve the original layout of the page and produce, for example, an annotated PDF that includes both the original image of the page and a searchable textual representation.
Near-neighbor analysis can make use of co-occurrence frequencies to correct errors, by noting that certain words are often seen together. For example, "Washington, D.C." is generally far more common in English than "Washington DOC".
Knowledge of the grammar of the language being scanned can also help determine if a word is likely to be a verb or a noun, for example, allowing greater accuracy.
The Levenshtein Distance algorithm has also been used in OCR post-processing to further optimize results from an OCR API.
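A minimal sketch of such post-processing: snap an OCR token to the nearest word in a lexicon by Levenshtein edit distance. The small lexicon here is an assumed, illustrative stand-in.
```python
# Lexicon-based OCR post-processing sketch: replace an OCR token with
# the closest word in a lexicon using Levenshtein edit distance.
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

LEXICON = ["Washington", "resistor", "character", "recognition"]  # assumed

def correct(token):
    return min(LEXICON, key=lambda w: levenshtein(token, w))

print(correct("recogn1tion"))  # -> "recognition" (one substitution away)
```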
Application-specific optimizations
In recent years, the major OCR technology providers began to tweak OCR systems to deal more efficiently with specific types of input. Beyond an application-specific lexicon, better performance may be had by taking into account business rules, standard expressions, or rich information contained in color images. This strategy is called "Application-Oriented OCR" or "Customized OCR", and has been applied to OCR of license plates, invoices, screenshots, ID cards, driver's licenses, and automobile manufacturing.
The New York Times has adapted the OCR technology into a proprietary tool entitled Document Helper, which enables its interactive news team to accelerate the processing of documents that need to be reviewed. They note that it enables them to process what amounts to as many as 5,400 pages per hour in preparation for reporters to review the contents.
Workarounds
There are several techniques for solving the problem of character recognition by means other than improved OCR algorithms.
Forcing better input
Special fonts like OCR-A, OCR-B, or MICR fonts, with precisely specified sizing, spacing, and distinctive character shapes, allow a higher accuracy rate during transcription in bank check processing. Several prominent OCR engines were designed to capture text in popular fonts such as Arial or Times New Roman, and are incapable of capturing text in specialized fonts that are very different from popularly used ones. As Google Tesseract can be trained to recognize new fonts, it can recognize OCR-A, OCR-B and MICR fonts.
Comb fields are pre-printed boxes that encourage humans to write more legibly – one glyph per box. These are often printed in a dropout color which can be easily removed by the OCR system.
Palm OS used a special set of glyphs, known as Graffiti, which are similar to printed English characters but simplified or modified for easier recognition on the platform's computationally limited hardware. Users would need to learn how to write these special glyphs.
Zone-based OCR restricts the image to a specific part of a document. This is often referred to as Template OCR.
Crowdsourcing
Crowdsourcing humans to perform the character recognition can process images nearly as quickly as computer-driven OCR, but with higher accuracy for recognizing images than that obtained via computers. Practical systems include the Amazon Mechanical Turk and reCAPTCHA. The National Library of Finland has developed an online interface for users to correct OCRed texts in the standardized ALTO format. Crowdsourcing has also been used not to perform character recognition directly but to invite software developers to develop image processing algorithms, for example, through the use of rank-order tournaments.
Accuracy
Commissioned by the U.S. Department of Energy (DOE), the Information Science Research Institute (ISRI) had the mission to foster the improvement of automated technologies for understanding machine printed documents, and it conducted the most authoritative of the Annual Test of OCR Accuracy from 1992 to 1996.
Recognition of typewritten, Latin script text is still not 100% accurate even where clear imaging is available. One study based on recognition of 19th- and early 20th-century newspaper pages concluded that character-by-character OCR accuracy for commercial OCR software varied from 81% to 99%; total accuracy can be achieved by human review or Data Dictionary Authentication. Other areas – including recognition of hand printing, cursive handwriting, and printed text in other scripts (especially those East Asian language characters which have many strokes for a single character) – are still the subject of active research. The MNIST database is commonly used for testing systems' ability to recognize handwritten digits.
Accuracy rates can be measured in several ways, and how they are measured can greatly affect the reported accuracy rate. For example, if word context (a lexicon of words) is not used to correct software finding non-existent words, a character error rate of 1% (99% accuracy) may result in an error rate of 5% or worse if the measurement is based on whether each whole word was recognized with no incorrect letters. Using a large enough dataset is important in neural-network-based handwriting recognition solutions. On the other hand, producing natural datasets is very complicated and time-consuming.
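A small sketch of the arithmetic behind that claim, assuming independent character errors and five-letter words (both simplifying assumptions):
```python
# Why a small character error rate (CER) implies a much larger word
# error rate (WER): if character errors are independent, a word of
# length L is fully correct with probability (1 - cer)**L.
def word_error_rate(cer, word_len):
    return 1.0 - (1.0 - cer) ** word_len

for cer in (0.01, 0.02):
    print(f"CER {cer:.0%} -> WER ~ {word_error_rate(cer, word_len=5):.1%}")
# CER 1% -> WER ~ 4.9% for 5-letter words, matching the 5%-or-worse figure
```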
An example of the difficulties inherent in digitizing old text is the inability of OCR to differentiate between the "long s" and "f" characters.
Web-based OCR systems for recognizing hand-printed text on the fly have become well known as commercial products in recent years (see Tablet PC history). Accuracy rates of 80% to 90% on neat, clean hand-printed characters can be achieved by pen computing software, but that accuracy rate still translates to dozens of errors per page, making the technology useful only in very limited applications.
Recognition of cursive text is an active area of research, with recognition rates even lower than that of hand-printed text. Higher rates of recognition of general cursive script will likely not be possible without the use of contextual or grammatical information. For example, recognizing entire words from a dictionary is easier than trying to parse individual characters from script. Reading the Amount line of a check (which is always a written-out number) is an example where using a smaller dictionary can increase recognition rates greatly. The shapes of individual cursive characters themselves simply do not contain enough information to accurately (greater than 98%) recognize all handwritten cursive script.
Most programs allow users to set "confidence rates". This means that if the software does not achieve their desired level of accuracy, a user can be notified for manual review.
An error introduced by OCR scanning is sometimes termed a scanno (by analogy with the term typo).
Unicode
Characters to support OCR were added to the Unicode Standard in June 1993, with the release of version 1.1.
Some of these characters are mapped from fonts specific to MICR, OCR-A or OCR-B.
See also
References
External links
Unicode OCR – Hex Range: 2440–245F Optical Character Recognition in Unicode
Annotated bibliography of references to handwriting character recognition and pen computing
Applications of computer vision
Automatic identification and data capture
Computational linguistics
Unicode
Symbols
Machine learning task | Optical character recognition | [
"Mathematics",
"Technology"
] | 3,857 | [
"Symbols",
"Computational linguistics",
"Data",
"Automatic identification and data capture",
"Natural language and computing"
] |
49,105 | https://en.wikipedia.org/wiki/Vacuum%20cleaner | A vacuum cleaner, also known simply as a vacuum, is a device that uses suction, and often agitation, in order to remove dirt and other debris from carpets and hard floors.
The dirt is collected into a dust bag or a plastic bin. Vacuum cleaners, which are used in homes as well as in commercial settings, exist in a variety of sizes and types, including stick vacuums, handheld vacuums, upright vacuums, and canister vacuums. Specialized shop vacuums can be used to clean both solid debris and liquids.
Name
Although vacuum cleaner and the short form vacuum are neutral names, in some countries (UK, Ireland) hoover is used instead as a genericized trademark, and as a verb. The name comes from the Hoover Company, one of the first and most influential companies in the development of the device. In New Zealand, particularly the Southland region, it is sometimes called a lux, likewise a genericized trademark and used as a verb. The device is also sometimes called a sweeper although the same term also refers to a carpet sweeper, a similar invention.
History
The vacuum cleaner evolved from the carpet sweeper via manual vacuum cleaners. The first manual models, using bellows, were developed in the 1860s, and the first motorized designs appeared at the turn of the 20th century, with the first decade being the boom decade.
Manual vacuums
In 1860 a manual vacuum cleaner was invented by Daniel Hess of West Union, Iowa. Called a "carpet sweeper", it gathered dust with a rotating brush and had a bellows for generating suction.
Another early model (1869) was the "Whirlwind", invented in Chicago in 1868 by Ives W. McGaffey. The bulky device worked with a belt-driven fan cranked by hand, which made it awkward to operate, although it was commercially marketed with mixed success.
A similar model was constructed by Melville R. Bissell of Grand Rapids, Michigan in 1876, who also manufactured carpet sweepers. The company later added portable vacuum cleaners to its line of cleaning tools.
Powered vacuum cleaners
The end of the 19th century saw the introduction of powered cleaners, although early types used some variation of blowing air to clean instead of suction. One appeared in 1898 when John S. Thurman of St. Louis, Missouri, submitted a patent (U.S. No. 634,042) for a "pneumatic carpet renovator" which blew dust into a receptacle. Thurman's system, powered by an internal combustion engine, traveled to the customer's residence on a horse-drawn wagon as part of a door-to-door cleaning service. Corrine Dufour of Savannah, Georgia, received two patents in 1899 and 1900 for another blown-air system that seems to have featured the first use of an electric motor.
In 1901 powered vacuum cleaners using suction were invented independently by British engineer Hubert Cecil Booth and American inventor David T. Kenney. Booth also may have coined the word "vacuum cleaner". Booth's horse-drawn combustion-engine-powered "Puffing Billy", possibly derived from Thurman's blown-air design, relied upon just suction with air pumped through a cloth filter and was offered as part of his cleaning services. Kenney's was a stationary steam-engine-powered system with pipes and hoses reaching into all parts of the building.
Domestic vacuum cleaner
The first vacuum-cleaning device to be portable and marketed at the domestic market was built in 1905 by Walter Griffiths, a manufacturer in Birmingham, England. His Griffith's Improved Vacuum Apparatus for Removing Dust from Carpets resembled modern-day cleaners; it was portable, easy to store, and powered by "any one person (such as the ordinary domestic servant)", who would have the task of compressing a bellows-like contraption to suck up dust through a removable, flexible pipe, to which a variety of shaped nozzles could be attached.
In 1906 James B. Kirby developed his first of many vacuums called the "Domestic Cyclone". It used water for dirt separation. Later revisions came to be known as the Kirby Vacuum Cleaner. The Cleveland, Ohio factory was built in 1916 and remains open currently, and all Kirby vacuum cleaners are manufactured in the United States.
In 1907 department store janitor James Murray Spangler (1848–1915) of Canton, Ohio, invented the first portable electric vacuum cleaner, obtaining a patent for the Electric Suction Sweeper on 2 June 1908. Crucially, in addition to suction from an electric fan that blew the dirt and dust into a soap box and one of his wife's pillow cases, Spangler's design utilized a rotating brush to loosen debris. Unable to produce the design himself due to lack of funding, he sold the patent in 1908 to local leather goods manufacturer William Henry Hoover (1849–1932), who had Spangler's machine redesigned with a steel casing, casters, and attachments, founding the company that in 1922 was renamed the Hoover Company. Their first vacuum was the 1908 Model O, which sold for $60. Subsequent innovations included the beater bar in 1919 ("It beats as it sweeps as it cleans"), disposable filter bags in the 1920s, and an upright vacuum cleaner in 1926.
In Continental Europe, the Fisker and Nielsen company in Denmark was the first to sell vacuum cleaners in 1910. The design was light enough to be operated by a single person. The Swedish company Electrolux launched their Model V in 1921 with the innovation of being able to lie on the floor on two thin metal runners. In the 1930s the German company Vorwerk started marketing vacuum cleaners of their own design which they sold through direct sales.
Post-Second World War
For many years after their introduction, vacuum cleaners remained a luxury item, but after the Second World War, they became common among the middle classes. Vacuums tend to be more common in Western countries, because in most other parts of the world, wall-to-wall carpeting is uncommon and homes have tile or hardwood floors, which are easily swept, wiped or mopped manually without power assist.
The last decades of the 20th century saw the more widespread use of technologies developed earlier, including filterless cyclonic dirt separation, central vacuum systems and rechargeable hand-held vacuums. In addition, miniaturized computer technology and improved batteries allowed the development of a new type of machine—the autonomous robotic vacuum cleaner. In 1997 Electrolux of Sweden demonstrated the Electrolux Trilobite, the first autonomous cordless robotic vacuum cleaner on the BBC-TV program Tomorrow's World, introducing it to the consumer market in 2001.
Recent developments
In 2004 a British company released AiRider, a hovering vacuum cleaner that floats on a cushion of air, similar to a hovercraft, to make it light-weight and easier to maneuver (compared to using wheels).
A British inventor has developed a new cleaning technology known as Air Recycling Technology, which, instead of using a vacuum, uses an air stream to collect dust from the carpet. This technology was tested by the Market Transformation Programme (MTP) and shown to be more energy-efficient than the vacuum method. Although working prototypes exist, Air Recycling Technology is not currently used in any production cleaner.
Modern configurations
A wide variety of technologies, designs, and configurations are available for both domestic and commercial cleaning jobs.
Upright
Upright vacuum cleaners are popular in the US, UK, and numerous Commonwealth countries, but unusual in some Continental European countries. They take the form of a cleaning head, onto which a handle and bag are attached. Upright designs generally employ a rotating brushroll or beater bar, which removes dirt through a combination of sweeping and vibration. There are two types of upright vacuums: dirty-air/direct fan (found mostly on commercial vacuums), and clean-air/fan-bypass (found on most of today's domestic vacuums).
The older of the two designs, direct-fan cleaners have a large impeller (fan) mounted close to the suction opening, through which the dirt passes directly, before being blown into a bag. The motor is often cooled by a separate cooling fan. Because of their large-bladed fans, and comparatively short airpaths, direct-fan cleaners create a very efficient airflow from a low amount of power, and make effective carpet cleaners. Their "above-floor" cleaning power is less efficient, since the airflow is lost when it passes through a long hose, and the fan has been optimized for airflow volume and not suction.
Fan-bypass uprights have their motor mounted after the filter bag. Dust is removed from the airstream by the bag, and usually a filter, before it passes through the fan. The fans are smaller, and are usually a combination of several moving and stationary turbines working in sequence to boost power. The motor is cooled by the airstream passing through it. Fan-bypass vacuums are good for both carpet and above-floor cleaning, since their suction does not significantly diminish over the distance of a hose, as it does in direct-fan cleaners. However, their air-paths are much less efficient, and can require more than twice as much power as direct-fan cleaners to achieve the same results.
The most common upright vacuum cleaners use a drive-belt powered by the suction motor to rotate the brush-roll. However, dual-motor upright designs are also available. In these cleaners, the suction is provided via a large motor, while the brushroll is powered by a separate, smaller motor, which does not create any suction. The brush-roll motor can sometimes be switched off, so hard floors can be cleaned without the brush-roll scattering the dirt. It may also have an automatic cut-off feature which shuts the motor off if the brush-roll becomes jammed, protecting it from damage.
Canister
Canister models (in the UK also often called cylinder models) dominate the European market. They have the motor and dust collectors (using a bag or bagless) in a separate unit, usually mounted on wheels, which is connected to the vacuum head by a flexible hose. Their main advantage is flexibility, as the user can attach different heads for different tasks, and maneuverability (the head can reach under furniture and makes it very easy to vacuum stairs and vertical surfaces). Many cylinder models have power heads as standard or add-on equipment containing the same sort of mechanical beaters as in upright units, making them as efficient on carpets as upright models. Such beaters are driven by a separate electric motor or a turbine which uses the suction power to spin the brushroll via a drive belt.
Drum
Drum or shop vac models are essentially heavy-duty industrial versions of cylinder vacuum cleaners, where the canister consists of a large vertically positioned drum which can be stationary or on wheels. Smaller versions, for use in garages or small workshops, are usually electrically powered. Larger models, which can store much greater volumes, are often hooked up to compressed air, utilizing the Venturi effect to produce a partial vacuum. Built-in dust collection systems are also used in many workshops.
Wet/dry
Wet or wet/dry vacuum cleaners are a specialized form of cylinder/drum models that can be used to clean up wet or liquid spills. They are generally designed to be used both indoors and outdoors and to accommodate both wet and dry debris; some are also equipped with an exhaust port or detachable blower for reversing the airflow, a useful function for everything from clearing a clogged hose to blowing dust into a corner for easy collection.
Shop vacs are able to collect large, bulky or otherwise inconvenient material that would damage or foul household vacuum cleaners, like sawdust, swarf, and liquids.
They use wide hoses, which open directly into the collection chamber (usually a bucket-like cylinder constituting the body of the vacuum). As the airstream enters the larger volume, its flow slows down, allowing the material to drop into the chamber before air is sucked out through the filter and to the vacuum's exhaust.
Shop vacs' performance can be evaluated by a number of metrics. Commonly used ones include the motor's rating (using power measurements like watts or horsepower), the vacuum's ability to develop suction (using pressure measurements like inches of water), and total airflow through the system (using volume rate measurements like cubic feet per minute).
Related to the wet vacuum is the extraction vacuum cleaner used mainly in hot water extraction, a method of cleaning hard-to-move pieces of fabric like carpets. These machines are able to spray hot soapy water and then suck it back out of the fabric, removing dirt in the process.
Wet vacuum cleaners have been modified by end users, adding an internally-mounted sump pump for continuous removal of liquids without having to stop to empty the tank.
Pneumatic
Pneumatic or pneumatic wet/dry vacuum cleaners are a specialized form of wet/dry models that hook up to compressed air. They commonly can accommodate both wet and dry soilage, a useful feature in industrial plants and manufacturing facilities.
Backpack
Backpack vacuum cleaners are commonly used for commercial cleaning: they allow the user to move rapidly about a large area. They are essentially small canister vacuums strapped onto the user's back.
Hand-held
Lightweight hand-held vacuum cleaners, either powered from rechargeable batteries or mains power, are also popular for cleaning up smaller spills. Frequently seen examples include the Black & Decker DustBuster, which was introduced in 1979, and numerous handheld models by Dirt Devil, which were first introduced in 1984. Some battery-powered handheld vacuums are wet/dry rated; the appliance must be partially disassembled and cleaned after picking up wet materials to avoid developing unpleasant odors.
Robotic
In the late 1990s and early 2000s, several companies developed robotic vacuum cleaners, a form of carpet sweeper usually equipped with limited suction power. Some prominent brands are Roomba, Neato, and bObsweep. These machines move autonomously while collecting surface dust and debris into a dustbin. They can usually navigate around furniture and come back to a docking station to charge their batteries, and a few are able to empty their dust containers into the dock as well. Most models are equipped with motorized brushes and a vacuum motor to collect dust and debris. While most robotic vacuum cleaners are designed for home use, some models are appropriate for operation in offices, hotels, hospitals, etc.
In December 2009, Neato Robotics launched the world's first robotic vacuum cleaner to use a rotating laser-based range-finder (a form of lidar) to scan and map its surroundings. It uses this map to clean the floor methodically, even if the robot must return to its base multiple times to recharge. In many cases it will notice when an area of the floor that was previously inaccessible becomes reachable, such as when a dog wakes up from a nap, and return to vacuum that area.
Cyclonic
Portable vacuum cleaners working on the cyclonic separation principle became popular in the 1990s. This dirt separation principle was well known and often used in central vacuum systems. Cleveland's P.A. Geier Company had obtained a patent on a cyclonic vacuum cleaner as early as 1928, which was later sold to Health-Mor in 1939, introducing the Filter Queen cyclonic canister vacuum cleaner.
In 1979, James Dyson introduced a portable unit with cyclonic separation, adapting this design from industrial saw mills. He launched his cyclone cleaner first in Japan in the 1980s at a cost of about US$1800 and in 1993 released the Dyson DC01 upright in the UK for £200. Critics expected that people would not buy a vacuum cleaner at twice the price of a conventional unit, but the Dyson design later became the most popular cleaner in the UK.
Cyclonic cleaners do not use filtration bags. Instead, the dust is separated in a detachable cylindrical collection vessel or bin. Air and dust are sucked at high speed into the collection vessel at a direction tangential to the vessel wall, creating a fast-spinning vortex. The dust particles and other debris move to the outside of the vessel by centrifugal force, where they fall due to gravity.
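To illustrate why the fast-spinning vortex separates dust from air, a rough estimate of the centrifugal acceleration at the vessel wall can be made from the air speed and the vessel radius. The sketch below uses assumed example figures, not measurements from any particular cleaner:

```python
# Rough estimate of the centrifugal acceleration on a dust particle
# carried by the vortex in a cyclonic collection vessel: a = v^2 / r.
# Both input figures below are assumptions for illustration only.

G = 9.81  # m/s^2, standard gravity

def centrifugal_acceleration(air_speed_m_s: float, radius_m: float) -> float:
    """Centripetal acceleration in m/s^2 for a given tangential speed and radius."""
    return air_speed_m_s ** 2 / radius_m

a = centrifugal_acceleration(air_speed_m_s=20.0, radius_m=0.05)
print(f"{a:.0f} m/s^2, roughly {a / G:.0f} times gravity")  # ~8000 m/s^2, ~800 g
```

Even with these modest assumed figures, the acceleration pushing particles toward the wall is hundreds of times stronger than gravity, which is why debris drops out of the airstream so readily.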
In fixed-installation central vacuum cleaners, the cleaned air may be exhausted directly outside without need for further filtration. A well-designed cyclonic filtration system loses suction power due to airflow restriction only when the collection vessel is almost full. This is in marked contrast to filter bag systems, which lose suction when pores in the filter become clogged as dirt and dust are collected.
In portable cyclonic models, the cleaned air from the center of the vortex is expelled from the machine after passing through a number of successively finer filters at the top of the container. The first filter is intended to trap particles which could damage the subsequent filters that remove fine dust particles. The filters must regularly be cleaned or replaced to ensure that the machine continues to perform efficiently.
Since Dyson's success in raising public awareness of cyclonic separation, several other companies have introduced cyclone models. Competing manufacturers include Hoover, Bissell, Bosch, Eureka, Electrolux and Vax. This high level of competition means the cheapest models are generally no more expensive than a conventional cleaner.
Central
Central vacuum cleaners, also known as built-in or ducted, are a type of canister/cylinder model which has the motor and dirt filtration unit located in a central location in a building, and connected by pipes to fixed vacuum inlets installed throughout the building. Only the hose and cleaning head need be carried from room to room, and the hose is commonly 8 m (25 ft) long, allowing a large range of movement without changing vacuum inlets. Plastic or metal piping connects the inlets to the central unit. The vacuum head may be unpowered, or have beaters operated by an electric motor or by an air-driven turbine.
The dirt bag or collection bin in a central vacuum system is usually so large that emptying or changing needs to be done less often, perhaps a few times per year for an ordinary household. The central unit usually stays in standby, and is turned on by a switch on the handle of the hose. Alternatively, the unit powers up when the hose is plugged into the wall inlet, when the metal hose connector makes contact with two prongs in the wall inlet and control current is transmitted through low-voltage wires to the main unit.
A central vacuum typically produces greater suction than common portable vacuum cleaners because a larger fan and more powerful motor can be used when they are not required to be portable. A cyclonic separation system, if used, does not lose suction as the collection container fills up, until the container is nearly full. This is in marked contrast to filter-bag designs, which start losing suction immediately as pores in the filter become clogged by accumulated dirt and dust.
A benefit to allergy sufferers is that unlike a standard vacuum cleaner, which must blow some of the dirt collected back into the room being cleaned (no matter how efficient its filtration), a central vacuum removes all the dirt collected to the central unit. Since this central unit is usually located outside the living area, no dust is recirculated back into the room being cleaned. It is also possible on most newer models to vent the exhaust entirely outside, even with the unit inside the living quarters.
Another benefit of the central vacuum is that, because of the remote location of the motor unit, there is much less noise in the room being cleaned than with a standard vacuum cleaner.
Constellation
Introduced in 1954, The Hoover Company's Constellation was of the cylinder type and lacked wheels. Instead, the vacuum cleaner floated on its exhaust, operating as a hovercraft. This was not true of the earliest models, which had a rotating hose; the intention was that the user would place the unit in the center of the room and work around the cleaner.
The Constellation was changed and updated over the years until it was discontinued in 1975. Later Constellations routed all of the exhaust under the vacuum using an airfoil. The updated design was quiet even by modern standards, particularly on carpet, which muffled the sound. Those models float on carpet or bare floor, although on hard flooring the exhaust air tends to scatter any fluff or debris around.
Hoover re-released an updated version of the later-model Constellation in the US (model # S3341 in Pearl White and # S3345 in stainless steel). Changes included a HEPA filtration bag, a 12-amp motor, a turbine-powered brush roll, and a redesigned version of the handle. The same model was marketed in the UK under the Maytag brand, called the Satellite because of licensing restrictions. It was sold from 2006 to 2009.
Vehicles
See vacuum truck for very large vacuum cleaners mounted on vehicles.
Other
Some vacuum cleaners include an electric mop in the same machine, allowing a dry vacuuming pass followed by a wet clean.
The iRobot company developed the Scooba, a robotic wet vacuum cleaner that carries its own cleaning solution, applies it and scrubs the floor, and vacuums the dirty water into a collection tank.
Technology
A vacuum's suction is caused by a difference in air pressure. A fan driven by an electric motor (often a universal motor) reduces the pressure inside the machine. Atmospheric pressure then pushes the air through the carpet and into the nozzle, and so the dust is literally pushed into the bag.
Tests have shown that vacuuming can kill 100% of young fleas and 96% of adult fleas.
Exhaust filtration
Vacuums by their nature cause dust to become airborne, by exhausting air that is not completely filtered. This can cause health problems since the operator ends up inhaling respirable dust, which is also redeposited into the area being cleaned. There are several methods manufacturers use to control this problem, some of which may be combined in a single appliance. Typically a filter is positioned so that the incoming air passes through it before it reaches the fan, and then the filtered air passes through the motor for cooling purposes. Some other designs use a completely separate air intake for cooling.
It is nearly impossible for a practical air filter to completely remove all ultrafine particles from a dirt-laden airstream. An ultra-efficient air filter will immediately clog up and become ineffective during everyday use, and practical filters are a compromise between filtering effectiveness and restriction of airflow. One way to sidestep this problem is to exhaust partially filtered air to the outdoors, which is a design feature of some central vacuum systems. Specially engineered portable vacuums may also utilize this design, but are more awkward to set up and use, requiring temporary installation of a separate exhaust hose to an exterior window.
Bag: The most common method to capture the debris vacuumed up involves a paper or fabric bag that allows air to pass through, but attempts to trap most of the dust and debris. The bag may become clogged with fine dust before it is full. The bag may be disposable, or designed to be cleaned and re-used.
Bagless: In non-cyclonic bagless models, the role of the bag is taken by a removable container and a reusable filter, equivalent to a reusable fabric bag.
Cyclonic separation: A vacuum cleaner employing this method is also bagless. It causes intake air to be cycled or spun so fast that most of the dust is forced out of the air and falls into a collection bin. The operation is similar to that of a centrifuge. Centrifugal separators eliminate the problem of a bag becoming clogged with fine dust.
Water filtration: First seen commercially in the 1920s in the form of the Newcombe Separator (later to become the Rexair Rainbow), a water filtration vacuum cleaner uses a water bath as a filter. It forces the dirt-laden intake air to pass through water before it is exhausted, so that wet dust cannot become airborne. The water trap filtration and low speed may also allow the user to use the machine as a stand-alone air purifier and humidifier unit. The dirty water must be dumped out and the appliance must be cleaned after each use, to avoid the growth of bacteria and mold, which can cause unpleasant odors.
Ultra fine air filter: Also called HEPA filtered, this method is used as a secondary filter after the air has passed through the rest of the machine. It is meant to remove any remaining dust that could harm the operator. Some vacuum cleaners also use an activated charcoal filter to remove odors.
Ordinary vacuum cleaners should never be used to clean up asbestos fibers, even if fitted with a HEPA filter. Specially-designed machines are required to safely clean up asbestos.
Attachments
Most vacuum cleaners are supplied with numerous specialized attachments, such as tools, brushes and extension wands, which allow them to reach otherwise inaccessible places or to be used for cleaning a variety of surfaces. The most common of these tools are:
Hard floor brush (for non-upright designs)
Powered floor nozzle (for canister designs)
Dusting brush
Crevice tool
Upholstery nozzle
Specifications
The performance of a vacuum cleaner can be measured by several parameters:
Airflow, in litres per second [l/s] or cubic feet per minute (CFM or ft³/min)
Air speed, in metres per second [m/s] or miles per hour [mph]
Suction, vacuum, or water lift, in pascals [Pa] or inches of water
Other specifications of a vacuum cleaner are:
Weight, in kilograms [kg] or pounds [lb]
Noise, in decibels [dB]
Power cord length and hose length (as applicable)
Suction (Pa)
The suction is the maximum pressure difference that the pump can create. For example, a typical domestic model has a suction of about negative 20 kPa. This means that it can lower the pressure inside the hose from normal atmospheric pressure (about 100 kPa) by 20 kPa. The higher the suction rating, the more powerful the cleaner. One inch of water is equivalent to about 249 Pa; hence, the typical suction of 20 kPa is equivalent to about 80 inches of water.
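As a worked check of the figures above, here is a minimal sketch using only the rounded conversion factor quoted in the text:

```python
# Convert a vacuum cleaner's suction from pascals to inches of water,
# using the approximate factor quoted above: 1 inch of water ~ 249 Pa.

PA_PER_INCH_OF_WATER = 249.0

def pascals_to_inches_of_water(pascals: float) -> float:
    """Express a pressure difference in inches of water."""
    return pascals / PA_PER_INCH_OF_WATER

# A typical domestic cleaner lowers hose pressure by about 20 kPa:
print(f"{pascals_to_inches_of_water(20_000):.0f} inches of water")  # ~80
```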
Input power (W)
The power consumption of a vacuum cleaner, in watts, is often the only figure stated. Many North American vacuum manufacturers give the current only in amperes (e.g. "6 amps"), and the consumer is left to multiply that by the line voltage of 120 volts to get the approximate power ratings in watts. The rated input power does not indicate the effectiveness of the cleaner, only how much electricity it consumes.
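A minimal sketch of that multiplication; the "6 amps" figure is simply the example quoted above, not a particular product's rating:

```python
# Approximate input power of a North American vacuum cleaner from its
# nameplate current rating, as described above.

LINE_VOLTAGE_V = 120  # North American mains voltage

def input_power_watts(current_amps: float) -> float:
    """Approximate electrical input power in watts."""
    return current_amps * LINE_VOLTAGE_V

print(input_power_watts(6))  # 720 W for a "6 amp" cleaner
```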
After August 2014, due to EU rules, the manufacture of vacuum cleaners with a power consumption greater than 1600 watts was banned within the EU, and from 2017 no vacuum cleaner with a power consumption greater than 900 watts was permitted.
Output power (AW)
The amount of input power that is converted into airflow at the end of the cleaning hose is sometimes stated, and is measured in airwatts: the measurement units are simply watts. The word "air" is used to clarify that this is output power, not input electrical power.
The airwatt is derived from English units. ASTM International defines the airwatt as 0.117354 × F × S, where F is the rate of air flow in ft³/min and S is the pressure in inches of water. This makes one airwatt equal to 0.9983 watts.
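A minimal sketch of the ASTM formula above; the airflow and suction figures used here are assumed example values, not a specific cleaner's ratings:

```python
# Output power in airwatts, per the ASTM International definition quoted
# above: airwatts = 0.117354 * F * S, where F is airflow in ft^3/min
# and S is suction in inches of water.

ASTM_COEFFICIENT = 0.117354

def airwatts(flow_cfm: float, suction_inches_h2o: float) -> float:
    """Output power in airwatts (numerically equal to watts)."""
    return ASTM_COEFFICIENT * flow_cfm * suction_inches_h2o

# Assumed example figures: 100 CFM of airflow at 20 inches of water.
print(f"{airwatts(100, 20):.0f} airwatts")  # ~235
```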
Peak horsepower
The peak horsepower of a vacuum cleaner is often measured by removing any cooling fans and calculating power based on the motor's output plus the rotational inertial energy stored in the motor armature and centrifugal blower. A peak horsepower rating is often an impractical figure and is only valid for a very short period. Continuous power is typically far lower.
Cultural references
Vacuum cleaners have become closely associated with housecleaning, and artists have sometimes used them to symbolize the banality and routine of everyday life and culture. Visual artist Jeff Koons exhibited his The New series of household vacuums enshrined in museum-quality vitrines, such as New Shelton Wet/Dry Doubledecker (1981) at the Museum of Modern Art and New Hoover Convertibles, Green, Blue; New Hoover Convertibles, Green, Blue; Doubledecker (1981–1987) at the Whitney Museum of American Art. In 2002, fashion designer Tara Subkoff used topless models wielding upright vacuum cleaners to promote her controversial fashion label "Imitation of Christ".
In 2018, Paulius Markevičius organized performances of Dance for the Vacuum-Cleaner and Father choreographed by Greta Grinevičiūtė, and premiered in Vilnius, Lithuania. In 2019, Sandrina Lindgren choreographed dancers in Requiem for Vacuum Cleaning in the Barker Theatre of Turku, Finland, with each performer operating multiple machines simultaneously.
Musician Frank Zappa used vacuum cleaners in many of his different performances and on promotional artwork. Other performers have used a vacuum cleaner hose or wand as a modernized version of the Australian Aboriginal didgeridoo, or used the whine of the motor for techno music.
In 1996, Mister Rogers' Neighborhood episode #1702 featured vacuum cleaners, including dancing, magic, and a segment showing how a small Dirt Devil canister vacuum was manufactured.
See also
Home appliance
Hypoallergenic vacuum cleaner
List of vacuum cleaners
Street sweeper
Suction excavator
References
Further reading
Booth, H. Cecil "The origin of the vacuum cleaner", Transactions of the Newcomen Society, 1934–1935, Volume 15.
Gantz, Carroll. The Vacuum Cleaner: A History (McFarland, 2012), 230 pp
External links
Vacuum Cleaner at HowStuffWorks
HEPA & ULPA vacuum cleaners – what they can and can't do for IAQ
1860 introductions
19th-century inventions
American inventions
Cleaning tools
English inventions
Floors
Gas technologies
Home appliances
Home automation | Vacuum cleaner | [
"Physics",
"Technology",
"Engineering"
] | 6,243 | [
"Home automation",
"Structural engineering",
"Machines",
"Floors",
"Physical systems",
"Home appliances"
] |
49,123 | https://en.wikipedia.org/wiki/Phoenix%20%28constellation%29 | Phoenix is a minor constellation in the southern sky. Named after the mythical phoenix, it was first depicted on a celestial atlas by Johann Bayer in his 1603 Uranometria. The French explorer and astronomer Nicolas Louis de Lacaille charted the brighter stars and gave their Bayer designations in 1756. The constellation stretches from roughly −39° to −57° declination, and from 23.5h to 2.5h of right ascension. The constellations Phoenix, Grus, Pavo and Tucana are known as the Southern Birds.
The brightest star, Alpha Phoenicis, is named Ankaa, an Arabic word meaning 'the Phoenix'. It is an orange giant of apparent magnitude 2.4. Next is Beta Phoenicis, actually a binary system composed of two yellow giants with a combined apparent magnitude of 3.3. Nu Phoenicis has a dust disk, while the constellation has ten star systems with known planets and the recently discovered galaxy clusters El Gordo and the Phoenix Cluster—located 7.2 and 5.7 billion light years away respectively, two of the largest objects in the visible universe. Phoenix is the radiant of two annual meteor showers: the Phoenicids in December, and the July Phoenicids.
History
Phoenix was the largest of the 12 constellations established by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman. It first appeared on a 35-cm diameter celestial globe published in 1597 (or 1598) in Amsterdam by Plancius with Jodocus Hondius. The first depiction of this constellation in a celestial atlas was in Johann Bayer's Uranometria of 1603. De Houtman included it in his southern star catalog the same year under the Dutch name Den voghel Fenicx, "The Bird Phoenix", symbolising the phoenix of classical mythology. One name of the brightest star Alpha Phoenicis—Ankaa—is derived from the Arabic al-'anqā' ("the phoenix"), and was coined sometime after 1800 in relation to the constellation.
Celestial historian Richard Allen noted that unlike the other constellations introduced by Plancius and Lacaille, Phoenix has actual precedent in ancient astronomy, as the Arabs saw this formation as representing young ostriches, Al Ri'āl, or as a griffin or eagle. In addition, the same group of stars was sometimes imagined by the Arabs as a boat, Al Zaurak, on the nearby river Eridanus. He observed, "the introduction of a Phoenix into modern astronomy was, in a measure, by adoption rather than by invention."
The Chinese incorporated Phoenix's brightest star, Ankaa (Alpha Phoenicis), and stars from the adjacent constellation Sculptor to depict Bakui, a net for catching birds. Phoenix and the neighbouring constellation of Grus together were seen by Julius Schiller as portraying Aaron the High Priest. These two constellations, along with nearby Pavo and Tucana, are called the Southern Birds.
Characteristics
Phoenix is a small constellation bordered by Fornax and Sculptor to the north, Grus to the west, Tucana to the south, touching on the corner of Hydrus to the south, and Eridanus to the east and southeast. The bright star Achernar is nearby. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Phe". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 10 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between approximately 23.5h and 2.5h, while the declination coordinates are between −39.31° and −57.84°. This means it remains below the horizon to anyone living north of the 40th parallel in the Northern Hemisphere, and remains low in the sky for anyone living north of the equator. It is most visible from locations such as Australia and South Africa during late Southern Hemisphere spring. Most of the constellation lies within the triangle formed by the bright stars Achernar, Fomalhaut and Beta Ceti, with Ankaa lying roughly at its centre.
Features
Stars
A curved line of stars comprising Alpha, Kappa, Mu, Beta, Nu and Gamma Phoenicis was seen as a boat by the ancient Arabs. French explorer and astronomer Nicolas Louis de Lacaille charted and designated 27 stars with the Bayer designations Alpha through to Omega in 1756. Of these, he labelled two stars close together Lambda, and assigned Omicron, Psi and Omega to three stars, which subsequent astronomers such as Benjamin Gould felt were too dim to warrant their letters. A different star was subsequently labelled Psi Phoenicis, while the other two designations fell out of use.
Ankaa is the brightest star in the constellation. It is an orange giant of apparent visual magnitude 2.37 and spectral type K0.5IIIb, 77 light years distant from Earth and orbited by a secondary object about which little is known. Lying close to Ankaa is Kappa Phoenicis, a main sequence star of spectral type A5IVn and apparent magnitude 3.90. Located centrally in the asterism, Beta Phoenicis is the second brightest star in the constellation and another binary star. Together the stars, both yellow giants of spectral type G8, shine with an apparent magnitude of 3.31, though the components are of individual apparent magnitudes of 4.0 and 4.1 and orbit each other every 168 years. Zeta Phoenicis or Wurren is an Algol-type eclipsing binary, with an apparent magnitude fluctuating between 3.9 and 4.4 with a period of around 1.7 days (40 hours); its dimming results from its two component blue-white B-type stars, which orbit and eclipse each other as seen from Earth. The two stars are 0.05 AU from each other, while a third star is around 600 AU away from the pair, and has an orbital period exceeding 5000 years. The system is around 300 light years distant. In 1976, researchers Clausen, Gyldenkerne, and Grønbech calculated that a nearby 8th magnitude star is a fourth member of the system.
AI Phe is an eclipsing binary star identified in 1972. Its long mutual eclipses and the combination of spectroscopic and astrometric data allow precise measurement of the masses and radii of the stars, which is viewed as a potential cross-check on stellar properties and distances that is independent of Cepheid variables and similar techniques. The long eclipse events require space-based observations to avoid solar interference.
Gamma Phoenicis is a red giant of spectral type M0IIIa and varies between magnitudes 3.39 and 3.49. It lies 235 light years away. Psi Phoenicis is another red giant, this time of spectral type M4III, and has an apparent magnitude that ranges between 4.3 and 4.5 over a period of around 30 days. Lying 340 light years away, it has around 85 times the diameter, but only 85% of the mass, of the Sun. W Phoenicis is a Mira variable, ranging from magnitude 8.1 to 14.4 over 333.95 days. A red giant, its spectrum ranges between M5e and M6e. Located 6.5 degrees west of Ankaa is SX Phoenicis, a variable star which ranges from magnitude 7.1 to 7.5 over a period of a mere 79 minutes. Its spectral type varies between A2 and F4. It gives its name to a group of stars known as SX Phoenicis variables. Rho and BD Phoenicis are Delta Scuti variables—short period (six hours at most) pulsating stars that have been used as standard candles and as subjects to study asteroseismology. Rho is spectral type F2III, and ranges between magnitudes 5.20 and 5.26 over a period of 2.85 hours. BD is of spectral type A1V, and ranges between magnitudes 5.90 and 5.94.
Nu Phoenicis is a yellow-white main sequence star of spectral type F9V and magnitude 4.96. Lying some 49 light years distant, it is around 1.2 times as massive as the Sun, and likely to be surrounded by a disk of dust. It is the closest star in the constellation that is visible with the unaided eye. Gliese 915 is a white dwarf only 26 light years away. It is of magnitude 13.05, too faint to be seen with the naked eye. White dwarfs are extremely dense stars compacted into a volume the size of the Earth. With around 85% of the mass of the Sun, Gliese 915 has a surface gravity of 10^(8.39 ± 0.01) cm·s⁻² (about 2.45 × 10⁸ cm·s⁻²), or approximately 250,000 times that of Earth.
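The quoted ratio follows from a line of arithmetic. Earth's surface gravity of about 981 cm·s⁻² is assumed here, as it is not stated in the text:

```python
# Ratio of Gliese 915's surface gravity to Earth's, from the figures above.
g_star = 10 ** 8.39   # cm/s^2 as quoted, about 2.45e8
g_earth = 981.0       # cm/s^2, standard Earth surface gravity (assumed)

print(f"{g_star / g_earth:,.0f} times Earth's gravity")  # roughly 250,000
```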
Ten stars have been found to have planets to date, and four planetary systems have been discovered with the SuperWASP project. HD 142 is a yellow giant that has an apparent magnitude of 5.7, and has a planet (HD 142 b) 1.36 times the mass of Jupiter which orbits every 328 days. HD 2039 is a yellow subgiant with an apparent magnitude of 9.0 around 330 light years away which has a planet (HD 2039 b) six times the mass of Jupiter. WASP-18 is a star of magnitude 9.29 which was discovered to have a hot Jupiter-like planet (WASP-18b) taking less than a day to orbit the star. The planet is suspected to be causing WASP-18 to appear older than it really is. WASP-4 and WASP-5 are solar-type yellow stars around 1000 light years distant and of 13th magnitude, each with a single planet larger than Jupiter. WASP-29 is an orange dwarf of spectral type K4V and visual magnitude 11.3, which has a planetary companion of similar size and mass to Saturn. The planet completes an orbit every 3.9 days.
WISE J003231.09-494651.4 and WISE J001505.87-461517.6 are two brown dwarfs discovered by the Wide-field Infrared Survey Explorer, and are 63 and 49 light years away respectively. Hypothesised long before they were discovered, brown dwarfs are objects more massive than planets, but of insufficient mass for the hydrogen fusion characteristic of stars to occur. Many are being found by sky surveys.
Phoenix contains HE0107-5240, possibly one of the oldest stars yet discovered. It has around 1/200,000 the metallicity that the Sun has and hence must have formed very early in the history of the universe. With a visual magnitude of 15.17, it is around 10,000 times dimmer than the faintest stars visible to the naked eye and is 36,000 light years distant.
Deep-sky objects
The constellation does not lie on the galactic plane of the Milky Way, and there are no prominent star clusters. NGC 625 is a dwarf irregular galaxy of apparent magnitude 11.0, lying some 12.7 million light years distant. Only 24,000 light years in diameter, it is an outlying member of the Sculptor Group. NGC 625 is thought to have been involved in a collision and is experiencing a burst of active star formation. NGC 37 is a lenticular galaxy of apparent magnitude 14.66. It is approximately 42 kiloparsecs (137,000 light-years) in diameter and about 12.9 billion years old. Robert's Quartet (composed of the irregular galaxy NGC 87, and three spiral galaxies NGC 88, NGC 89 and NGC 92) is a group of four galaxies located around 160 million light-years away which are in the process of colliding and merging. They are within a circle of radius of 1.6 arcmin, corresponding to about 75,000 light-years. Located in the galaxy ESO 243-49 is HLX-1, an intermediate-mass black hole—the first one of its kind identified. It is thought to be a remnant of a dwarf galaxy that was absorbed in a collision with ESO 243-49. Before its discovery, this class of black hole was only hypothesized.
Lying within the bounds of the constellation is the gigantic Phoenix cluster, which is around 7.3 million light years wide and 5.7 billion light years away, making it one of the most massive galaxy clusters. It was first discovered in 2010, and the central galaxy is producing an estimated 740 new stars a year. Larger still is El Gordo, or officially ACT-CL J0102-4915, whose discovery was announced in 2012. Located around 7.2 billion light years away, it is composed of two subclusters in the process of colliding, resulting in the spewing out of hot gas, seen in X-rays and infrared images.
Meteor showers
Phoenix is the radiant of two annual meteor showers. The Phoenicids, also known as the December Phoenicids, were first observed on 3 December 1887. The shower was particularly intense in December 1956, and is thought to be related to the breakup of the short-period comet 289P/Blanpain. It peaks around 4–5 December, though it is not seen every year. A very minor meteor shower peaks around July 14 with around one meteor an hour, though meteors can be seen anytime from July 3 to 18; this shower is referred to as the July Phoenicids.
See also
IAU-recognized constellations
Phoenix (Chinese astronomy)
References
External links
The clickable Phoenix
Southern constellations
Phoenixes in popular culture
Constellations listed by Petrus Plancius | Phoenix (constellation) | [
"Astronomy"
] | 2,843 | [
"Phoenix (constellation)",
"Constellations listed by Petrus Plancius",
"Southern constellations",
"Constellations"
] |
49,139 | https://en.wikipedia.org/wiki/Decentralization | Decentralization or decentralisation is the process by which the activities of an organization, particularly those related to planning and decision-making, are distributed or delegated away from a central, authoritative location or group and given to smaller factions within it.
Concepts of decentralization have been applied to group dynamics and management science in private businesses and organizations, political science, law and public administration, technology, economics and money.
History
The word "centralisation" came into use in France in 1794 as the post-Revolution French Directory leadership created a new government structure. The word "décentralisation" came into usage in the 1820s. "Centralization" entered written English in the first third of the 1800s; mentions of decentralization also first appear during those years. In the mid-1800s Tocqueville would write that the French Revolution began with "a push towards decentralization" but became, "in the end, an extension of centralization." In 1863, retired French bureaucrat Maurice Block wrote an article called "Decentralization" for a French journal that reviewed the dynamics of government and bureaucratic centralization and recent French efforts at decentralization of government functions.
Ideas of liberty and decentralization were carried to their logical conclusions during the 19th and 20th centuries by anti-state political activists calling themselves "anarchists", "libertarians", and even decentralists. Tocqueville was an advocate, writing: "Decentralization has, not only an administrative value but also a civic dimension since it increases the opportunities for citizens to take interest in public affairs; it makes them get accustomed to using freedom. And from the accumulation of these local, active, persnickety freedoms, is born the most efficient counterweight against the claims of the central government, even if it were supported by an impersonal, collective will." Pierre-Joseph Proudhon (1809–1865), influential anarchist theorist wrote: "All my economic ideas as developed over twenty-five years can be summed up in the words: agricultural-industrial federation. All my political ideas boil down to a similar formula: political federation or decentralization."
In the early 20th century, America's response to the centralization of economic wealth and political power was a decentralist movement. It blamed large-scale industrial production for destroying middle-class shop keepers and small manufacturers and promoted increased property ownership and a return to small scale living. The decentralist movement attracted Southern Agrarians like Robert Penn Warren, as well as journalist Herbert Agar. New Left and libertarian individuals who identified with social, economic, and often political decentralism through the ensuing years included Ralph Borsodi, Wendell Berry, Paul Goodman, Carl Oglesby, Karl Hess, Donald Livingston, Kirkpatrick Sale (author of Human Scale), Murray Bookchin, Dorothy Day, Senator Mark O. Hatfield, Mildred J. Loomis and Bill Kauffman.
Leopold Kohr, author of the 1957 book The Breakdown of Nations – known for its statement "Whenever something is wrong, something is too big" – was a major influence on E. F. Schumacher, author of the 1973 bestseller Small Is Beautiful: A Study of Economics As If People Mattered. In the next few years a number of best-selling books promoted decentralization.
Daniel Bell's The Coming of Post-Industrial Society discussed the need for decentralization and a "comprehensive overhaul of government structure to find the appropriate size and scope of units", as well as the need to detach functions from current state boundaries, creating regions based on functions like water, transport, education and economics which might have "different 'overlays' on the map." Alvin Toffler published Future Shock (1970) and The Third Wave (1980). Discussing the books in a later interview, Toffler said that industrial-style, centralized, top-down bureaucratic planning would be replaced by a more open, democratic, decentralized style which he called "anticipatory democracy". Futurist John Naisbitt's 1982 book "Megatrends" was on The New York Times Best Seller list for more than two years and sold 14 million copies. Naisbitt's book outlines 10 "megatrends", the fifth of which is from centralization to decentralization. In 1996 David Osborne and Ted Gaebler had a best selling book Reinventing Government proposing decentralist public administration theories which became labeled the "New Public Management".
Stephen Cummings wrote that decentralization became a "revolutionary megatrend" in the 1980s. In 1983 Diana Conyers asked if decentralization was the "latest fashion" in development administration. Cornell University's project on Restructuring Local Government states that decentralization refers to the "global trend" of devolving responsibilities to regional or local governments. Robert J. Bennett's Decentralization, Intergovernmental Relations and Markets: Towards a Post-Welfare Agenda describes how after World War II governments pursued a centralized "welfarist" policy of entitlements which now has become a "post-welfare" policy of intergovernmental and market-based decentralization.
In 1983, "Decentralization" was identified as one of the "Ten Key Values" of the Green Movement in the United States.
These trends were also examined in a 1999 United Nations Development Programme report, discussed below.
Overview
Systems approach
Those studying the goals and processes of implementing decentralization often use a systems theory approach, which according to the United Nations Development Programme report applies to the topic of decentralization "a whole systems perspective, including levels, spheres, sectors and functions and seeing the community level as the entry point at which holistic definitions of development goals are taken from the people themselves and where it is most practical to support them. It involves seeing multi-level frameworks and continuous, synergistic processes of interaction and iteration of cycles as critical for achieving wholeness in a decentralized system and for sustaining its development."
Decentralization has also been seen as part of a systems approach. Norman Johnson of Los Alamos National Laboratory wrote in a 1999 paper: "A decentralized system is where some decisions by the agents are made without centralized control or processing. An important property of agent systems is the degree of connectivity or connectedness between the agents, a measure of the global flow of information or influence. If each agent is connected (exchanging states or influence) to all other agents, then the system is highly connected."
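To make Johnson's "degree of connectivity" concrete, the sketch below treats agents as nodes in a graph and measures connectivity as the fraction of possible agent-to-agent links that exist. The representation and metric are illustrative assumptions, not Johnson's own formalism:

```python
# Illustrative connectivity measure for a decentralized agent system.
# Agents are nodes; an edge means two agents exchange state or influence.

def connectivity(n_agents: int, edges: set) -> float:
    """Fraction of possible pairwise links that are present (0 to 1)."""
    possible = n_agents * (n_agents - 1) / 2
    return len(edges) / possible if possible else 0.0

# Four agents in a ring, where each agent influences only its neighbours:
ring = {frozenset(pair) for pair in [(0, 1), (1, 2), (2, 3), (3, 0)]}
print(round(connectivity(4, ring), 2))  # 0.67 -- partially connected

# Four fully interconnected agents, "highly connected" in Johnson's sense:
full = {frozenset((i, j)) for i in range(4) for j in range(i + 1, 4)}
print(connectivity(4, full))  # 1.0
```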
University of California, Irvine's Institute for Software Research's "PACE" project is creating an "architectural style for trust management in decentralized applications". It adopted Rohit Khare's definition of decentralization, "A decentralized system is one which requires multiple parties to make their own independent decisions", and applies it to peer-to-peer software creation.
Goals
Decentralization in any area is a response to the problems of centralized systems. Decentralization in government, the topic most studied, has been seen as a solution to problems like economic decline, government inability to fund services and their general decline in performance of overloaded services, the demands of minorities for a greater say in local governance, the general weakening legitimacy of the public sector and global and international pressure on countries with inefficient, undemocratic, overly centralized systems. The following four goals or objectives are frequently stated in various analyses of decentralization.
Participation
In decentralization, the principle of subsidiarity is often invoked. It holds that the lowest or least centralized authority that is capable of addressing an issue effectively should do so. According to one definition: "Decentralization, or decentralizing governance, refers to the restructuring or reorganization of authority so that there is a system of co-responsibility between institutions of governance at the central, regional and local levels according to the principle of subsidiarity, thus increasing the overall quality and effectiveness of the system of governance while increasing the authority and capacities of sub-national levels."
Decentralization is often linked to concepts of participation in decision-making, democracy, equality and liberty from a higher authority. Decentralization is said to enhance the democratic voice. Theorists believe that local representative authorities with actual discretionary powers are the basis of decentralization that can lead to local efficiency, equity and development. Columbia University's Earth Institute identified one of three major trends relating to decentralization: "increased involvement of local jurisdictions and civil society in the management of their affairs, with new forms of participation, consultation, and partnerships."
Decentralization has been described as a "counterpoint to globalization [which] removes decisions from the local and national stage to the global sphere of multi-national or non-national interests. Decentralization brings decision-making back to the sub-national levels". Decentralization strategies must account for the interrelations of global, regional, national, sub-national, and local levels.
Diversity
Norman L. Johnson writes that diversity plays an important role in decentralized systems like ecosystems, social groups, large organizations, political systems. "Diversity is defined to be unique properties of entities, agents, or individuals that are not shared by the larger group, population, structure. Decentralized is defined as a property of a system where the agents have some ability to operate "locally." Both decentralization and diversity are necessary attributes to achieve the self-organizing properties of interest."
Advocates of political decentralization hold that greater participation by better informed diverse interests in society will lead to more relevant decisions than those made only by authorities on the national level. Decentralization has been described as a response to demands for diversity.
Efficiency
In business, decentralization leads to a management by results philosophy which focuses on definite objectives to be achieved by unit results. Decentralization of government programs is said to increase efficiency – and effectiveness – due to reduction of congestion in communications, quicker reaction to unanticipated problems, improved ability to deliver services, improved information about local conditions, and more support from beneficiaries of programs.
Firms may prefer decentralization because it ensures efficiency by making sure that managers closest to the local information make decisions, and in a more timely fashion; that their taking responsibility frees upper management for long-term strategy rather than day-to-day decision-making; that managers get hands-on training to prepare them to move up the management hierarchy; that managers are motivated by having the freedom to exercise their own initiative and creativity; and that managers and divisions are encouraged to prove that they are profitable, instead of allowing their failures to be masked by the overall profitability of the company.
The same principles can be applied to the government. Decentralization promises to enhance efficiency through both inter-governmental competitions with market features and fiscal discipline which assigns tax and expenditure authority to the lowest level of government possible. It works best where members of the subnational government have strong traditions of democracy, accountability, and professionalism.
Conflict resolution
Economic and/or political decentralization can help prevent or reduce conflict because it reduces actual or perceived inequities between various regions or between a region and the central government. Dawn Brancati finds that political decentralization reduces intrastate conflict unless politicians create political parties that mobilize minority and even extremist groups to demand more resources and power within national governments. However, the likelihood this will be done depends on factors like how democratic transitions happen and features like a regional party's proportion of legislative seats, a country's number of regional legislatures, electoral procedures, and the order in which national and regional elections occur. Brancati holds that decentralization can promote peace if it encourages statewide parties to incorporate regional demands and limit the power of regional parties.
Processes
Initiation
The processes by which entities move from a more to a less centralized state vary. They can be initiated from the centers of authority ("top-down") or from individuals, localities or regions ("bottom-up"), or from a "mutually desired" combination of authorities and localities working together. Bottom-up decentralization usually stresses political values like local responsiveness and increased participation and tends to increase political stability. Top-down decentralization may be motivated by the desire to "shift deficits downwards" and find more resources to pay for services or pay off government debt. Some hold that decentralization should not be imposed, but done in a respectful manner.
Appropriate size
Gauging the appropriate size or scale of decentralized units has been studied in relation to the size of sub-units of hospitals and schools, road networks, administrative units in business and public administration, and especially town and city governmental areas and decision-making bodies.
In creating planned communities ("new towns"), it is important to determine the appropriate population and geographical size. While in earlier years small towns were considered appropriate, by the 1960s, 60,000 inhabitants was considered the size necessary to support a diversified job market and an adequate shopping center and array of services and entertainment. Appropriate size of governmental units for revenue raising also is a consideration.
Even in bioregionalism, which seeks to reorder many functions and even the boundaries of governments according to physical and environmental features, including watershed boundaries and soil and terrain characteristics, appropriate size must be considered. The unit may be larger than many decentralist-bioregionalists prefer.
Inadvertent or silent
Decentralization ideally happens as a careful, rational, and orderly process, but it often takes place during times of economic and political crisis, the fall of a regime and the resultant power struggles. Even when it happens slowly, there is a need for experimentation, testing, adjusting, and replicating successful experiments in other contexts. There is no one blueprint for decentralization since it depends on the initial state of a country and the power and views of political interests and whether they support or oppose decentralization.
Decentralization usually is a conscious process based on explicit policies. However, it may occur as "silent decentralization" in the absence of reforms, as changes in networks, policy emphases and resource availability lead inevitably to a more decentralized system.
Asymmetry
Decentralization may be uneven and "asymmetric" given any one country's population, political, ethnic and other forms of diversity. In many countries, political, economic and administrative responsibilities may be decentralized to the larger urban areas, while rural areas are administered by the central government. Decentralization of responsibilities to provinces may be limited only to those provinces or states which want or are capable of handling responsibility. Some privatization may be more appropriate to an urban than a rural area; some types of privatization may be more appropriate for some states and provinces but not others.
Determinants
The academic literature frequently mentions the following factors as determinants of decentralization:
"The number of major ethnic groups"
"The degree of territorial concentration of those groups"
"The existence of ethnic networks and communities across the border of the state"
"The country's dependence on natural resources and the degree to which those resources are concentrated in the region's territory"
"The country's per capita income relative to that in other regions"
The presence of self-determination movements
In government policy
Historians have described the history of governments and empires in terms of centralization and decentralization. In his 1910 The History of Nations Henry Cabot Lodge wrote that Persian king Darius I (550–486 BC) was a master of organization and "for the first time in history centralization becomes a political fact." He also noted that this contrasted with the decentralization of Ancient Greece. Since the 1980s a number of scholars have written about cycles of centralization and decentralization. Stephen K. Sanderson wrote that over the last 4000 years chiefdoms and actual states have gone through sequences of centralization and decentralization of economic, political and social power. Yildiz Atasoy writes this process has been going on "since the Stone Age" through not just chiefdoms and states, but empires and today's "hegemonic core states". Christopher K. Chase-Dunn and Thomas D. Hall review other works that detail these cycles, including works which analyze the concept of core elites which compete with state accumulation of wealth and how their "intra-ruling-class competition accounts for the rise and fall of states" and their phases of centralization and decentralization.
Rising government expenditures, poor economic performance and the rise of free market-influenced ideas have convinced governments to decentralize their operations, to induce competition within their services, to contract out to private firms operating in the market, and to privatize some functions and services entirely.
Government decentralization has both political and administrative aspects. Its decentralization may be territorial, moving power from a central city to other localities, and it may be functional, moving decision-making from the top administrator of any branch of government to lower level officials, or divesting of the function entirely through privatization. It has been called the "new public management" which has been described as decentralization, management by objectives, contracting out, competition within government and consumer orientation.
Political
Political decentralization signifies a reduction in the authority of national governments over policy-making. This process is accomplished by the institution of reforms that either delegate a certain degree of meaningful decision-making autonomy to sub-national tiers of government, or grant citizens the right to elect lower-level officials, like local or regional representatives. Depending on the country, this may require constitutional or statutory reforms, the development of new political parties, increased power for legislatures, the creation of local political units, and encouragement of advocacy groups.
A national government may decide to decentralize its authority and responsibilities for a variety of reasons. Decentralization reforms may occur for administrative reasons, when government officials decide that certain responsibilities and decisions would be handled best at the regional or local level. In democracies, traditionally conservative parties include political decentralization as a directive in their platforms because rightist parties tend to advocate for a decrease in the role of central government. There is also strong evidence to support the idea that government stability increases the probability of political decentralization, since instability brought on by gridlock between opposing parties in legislatures often impedes a government's overall ability to enact sweeping reforms.
The rise of regional ethnic parties in the national politics of parliamentary democracies is also heavily associated with the implementation of decentralization reforms. Ethnic parties may endeavor to transfer more autonomy to their respective regions, and as a partisan strategy, ruling parties within the central government may cooperate by establishing regional assemblies in order to curb the rise of ethnic parties in national elections. This phenomenon famously occurred in 1999, when the United Kingdom's Labour Party appealed to Scottish constituents by creating a semi-autonomous Scottish Parliament in order to neutralize the threat from the increasingly popular Scottish National Party at the national level.
In addition to increasing the administrative efficacy of government and endowing citizens with more power, there are many projected advantages to political decentralization. Individuals who take advantage of their right to elect local and regional authorities have been shown to have more positive attitudes toward politics, and increased opportunities for civic decision-making through participatory democracy mechanisms like public consultations and participatory budgeting are believed to help legitimize government institutions in the eyes of marginalized groups. Moreover, political decentralization is perceived as a valid means of protecting marginalized communities at a local level from the detrimental aspects of development and globalization driven by the state, like the degradation of local customs, codes, and beliefs. In his 2013 book, Democracy and Political Ignorance, George Mason University law professor Ilya Somin argued that political decentralization in a federal democracy confronts the widespread issue of political ignorance by allowing citizens to engage in foot voting, or moving to other jurisdictions with more favorable laws. He cites the mass migration of over one million southern-born African Americans to the North or the West to evade discriminatory Jim Crow laws in the late 19th century and early 20th century.
The European Union follows the principle of subsidiarity, which holds that decision-making should be made by the most local competent authority. The EU should decide only on enumerated issues that a local or member state authority cannot address itself. Furthermore, enforcement is exclusively the domain of member states. In Finland, the Centre Party explicitly supports decentralization. For example, government departments have been moved from the capital Helsinki to the provinces. The party supports substantial subsidies that limit potential economic and political centralization in Helsinki.
Political decentralization does not come without its drawbacks. A study by Fan concludes that there is an increase in corruption and rent-seeking when there are more vertical tiers in the government, as well as when there are higher levels of subnational government employment. Other studies warn of high-level politicians that may intentionally deprive regional and local authorities of power and resources when conflicts arise. In order to combat these negative forces, experts believe that political decentralization should be supplemented with other conflict management mechanisms like power-sharing, particularly in regions with ethnic tensions.
Administrative
Four major forms of administrative decentralization have been described.
Deconcentration, the weakest form of decentralization, shifts responsibility for decision-making, finance and implementation of certain public functions from officials of central governments to those in existing districts or, if necessary, new ones under direct control of the central government.
Delegation passes down responsibility for decision-making, finance and implementation. It involves the creation of public-private enterprises or corporations, or of "authorities", special projects or service districts. All of them will have a great deal of decision-making discretion and they may be exempt from civil service requirements and may be permitted to charge users for services.
Devolution transfers responsibility for decision-making, finance and implementation of certain public functions to the sub-national level, such as a regional, local, or state government.
Divestment, also called privatization, may mean merely contracting out services to private companies. Or it may mean relinquishing totally all responsibility for decision-making, finance and implementation of certain public functions. Facilities will be sold off, workers transferred or fired, and private companies or not-for-profit organizations allowed to provide the services. Many of these functions originally were performed by private individuals, companies, or associations and later taken over by the government, either directly or by regulating out of business the entities which competed with newly created government programs.
Fiscal
Fiscal decentralization means decentralizing revenue raising and/or expenditure of moneys to a lower level of government while maintaining financial responsibility. While this process usually is called fiscal federalism, it may be relevant to unitary, federal, or confederal governments. Fiscal federalism also concerns the "vertical imbalances" where the central government gives too much or too little money to the lower levels. It actually can be a way of increasing central government control of lower levels of government, if it is not linked to other kinds of responsibilities and authority.
Fiscal decentralization can be achieved through user fees, user participation through monetary or labor contributions, expansion of local property or sales taxes, intergovernmental transfers of central government tax monies to local governments through transfer payments or grants, and authorization of municipal borrowing with national government loan guarantees. Transfers of money may be given conditionally with instructions or unconditionally without them.
Market
Market decentralization can be done through privatization of publicly owned functions and businesses, as described briefly above. But it is also done through deregulation, the abolition of restrictions on businesses competing with government services, for example, postal services, schools, garbage collection. Even as private companies and corporations have worked to have such services contracted out to or privatized by them, others have worked to have these turned over to non-profit organizations or associations.
From the 1970s to the 1990s, there was deregulation of some industries, like banking, trucking, airlines and telecommunications, which resulted generally in more competition and lower prices. According to the Cato Institute, an American libertarian think tank, in some cases deregulation in some aspects of an industry was offset by increased regulation in other aspects, the electricity industry being a prime example. In banking, for example, the Cato Institute holds that some deregulation allowed banks to compete across state lines, increasing consumer choice, while an actual increase in regulators and regulations forced banks to make loans to individuals incapable of repaying them, leading eventually to the financial crisis of 2007–2008.
One example of economic decentralization, which is based on a libertarian socialist model, is decentralized economic planning. Decentralized planning is a type of economic system in which decision-making is distributed amongst various economic agents or localized within production agents. An example of this method in practice is in Kerala, India which experimented in 1996 with the People's Plan campaign.
Emmanuelle Auriol and Michel Benaim write about the "comparative benefits" of decentralization versus government regulation in the setting of standards. They find that while there may be a need for public regulation if public safety is at stake, private creation of standards usually is better because "regulators or 'experts' might misrepresent consumers' tastes and needs." As long as companies are averse to incompatible standards, standards will be created that satisfy needs of a modern economy.
Environmental
Central governments themselves may own large tracts of land and control the forest, water, mineral, wildlife and other resources they contain. They may manage them through government operations or by leasing them to private businesses; or they may neglect them to be exploited by individuals or groups who defy non-enforced laws against exploitation. They also may control most private land through land-use, zoning, environmental and other regulations. Selling off or leasing lands can be profitable for governments willing to relinquish control, but such programs can face public scrutiny because of fear of a loss of heritage or of environmental damage. Devolution of control to regional or local governments has been found to be an effective way of dealing with these concerns. Such decentralization has happened in India and other developing nations.
In economic ideology
Libertarian socialism
Libertarian socialism is a political philosophy that promotes a non-hierarchical, non-bureaucratic society without private ownership of the means of production. Libertarian socialists believe in converting present-day private productive property into common or public goods. The philosophy promotes free association in place of government and non-coercive forms of social organization, and opposes the social relations of capitalism, such as wage slavery. The term libertarian socialism is used by some socialists to differentiate their philosophy from state socialism, and by some as a synonym for left anarchism.
Accordingly, libertarian socialists believe that "the exercise of power in any institutionalized form – whether economic, political, religious, or sexual – brutalizes both the wielder of power and the one over whom it is exercised". Libertarian socialists generally place their hopes in decentralized means of direct democracy such as libertarian municipalism, citizens' assemblies, or workers' councils. Libertarian socialists are strongly critical of coercive institutions, which often leads them to reject the legitimacy of the state in favor of anarchism. Adherents propose achieving this through decentralization of political and economic power, usually involving the socialization of most large-scale private property and enterprise (while retaining respect for personal property). Libertarian socialism tends to deny the legitimacy of most forms of economically significant private property, viewing capitalist property relations as forms of domination that are antagonistic to individual freedom.
Free market
Free market ideas popular in the 19th century, such as those of Adam Smith, returned to prominence in the 1970s and 1980s. Austrian School economist Friedrich von Hayek argued that free markets themselves are decentralized systems where outcomes are produced without explicit agreement or coordination, by individuals who use prices as their guide. Eleanor Doyle writes that "[e]conomic decision-making in free markets is decentralized across all the individuals dispersed in each market and is synchronized or coordinated by the price system," and holds that an individual right to property is part of this decentralized system. Hayek criticized central government control along these lines in The Road to Serfdom.
According to Bruce M. Owen, this does not mean that all firms themselves have to be equally decentralized. He writes: "markets allocate resources through arms-length transactions among decentralized actors. Much of the time, markets work very efficiently, but there is a variety of conditions under which firms do better. Hence, goods and services are produced and sold by firms with various degrees of horizontal and vertical integration." Additionally, he writes that the "economic incentive to expand horizontally or vertically is usually, but not always, compatible with the social interest in maximizing long-run consumer welfare."
It is often claimed that free markets and private property generate centralized monopolies and other ills; free market advocates counter with the argument that government is the source of monopoly. Historian Gabriel Kolko, in his book The Triumph of Conservatism, argued that in the first decade of the 20th century businesses were highly decentralized and competitive, with new businesses constantly entering existing industries. In his view, there was no trend towards concentration and monopolization. While there was a wave of mergers of companies trying to corner markets, they found there was too much competition to do so. According to Kolko, this was also true in banking and finance, whose leaders saw decentralization as leading to instability as state and local banks competed with the big New York City firms. He argues that, as a result, the largest firms turned to the power of the state and worked with leaders like United States Presidents Theodore Roosevelt, William H. Taft and Woodrow Wilson to pass, as "progressive reforms", centralizing measures: the Federal Reserve Act of 1913, which gave control of the monetary system to the wealthiest bankers; the formation of monopoly "public utilities" that made competition with those monopolies illegal; federal inspection of meat packers, biased against small companies; the extension of the Interstate Commerce Commission to regulating telephone companies, keeping rates high to benefit AT&T; and the use of the Sherman Antitrust Act against companies which might combine to threaten larger or monopoly companies.
Author and activist Jane Jacobs's influential 1961 book The Death and Life of Great American Cities criticized large-scale redevelopment projects which were part of government-planned decentralization of population and businesses to suburbs. She believed it destroyed cities' economies and impoverished remaining residents. Her 1980 book The Question of Separatism: Quebec and the Struggle over Sovereignty supported the secession of Quebec from Canada. Her 1984 book Cities and the Wealth of Nations proposed a solution to the problems faced by cities whose economies were being ruined by centralized national governments: decentralization through the "multiplication of sovereignties", meaning an acceptance of the right of cities to secede from the larger nation states that were greatly limiting their ability to produce wealth.
In the organizational structure of a firm
In response to incentive and information conflicts, a firm can either centralize its organizational structure by concentrating decision-making in upper management, or decentralize its organizational structure by delegating authority throughout the organization. The delegation of authority comes with a basic trade-off: while it can increase efficiency and information flow, the central authority consequently suffers a loss of control. However, through creating an environment of trust and allocating authority formally in the firm, coupled with a stronger rule of law in the geographical location of the firm, the negative consequences of the trade-off can be minimized.
In having a decentralized organizational structure, a firm can remain agile in the face of external shocks and competing trends. Decision-making in a centralized organization can face information flow inefficiencies and barriers to effective communication, which decrease the speed and accuracy with which decisions are made. A decentralized firm is said to hold greater flexibility given the efficiency with which it can analyze information and implement relevant outcomes. Additionally, having decision-making power spread across different areas allows local knowledge to inform decisions, increasing their relevance and the effectiveness of their implementation. In the process of developing new products or services, decentralization enables the firm to meet each division's particular needs more closely.
Decentralization also impacts human resource management. The high level of individual agency that workers experience within a decentralized firm can create job enrichment. Studies have shown this enhances the development of new ideas and innovations, given the sense of involvement that comes from responsibility. The impacts of decentralization on innovation are furthered by the ease of information flow that comes from this organizational structure. With increased knowledge sharing, workers are more able to use relevant information to inform decision-making. These benefits are enhanced in firms with skill-intensive environments: skilled workers are more able to analyze information, they pose less risk of information duplication given increased communication abilities, and the productivity cost of multi-tasking is lower. These outcomes of decentralization make it a particularly effective organizational structure for entrepreneurial and competitive firm environments, such as start-up companies. The flexibility, efficiency of information flow and higher worker autonomy complement the rapid growth and innovation seen in successful start-up companies.
In technology and the Internet
Technological decentralization can be defined as a shift from concentrated to distributed modes of production and consumption of goods and services. Generally, such shifts are accompanied by transformations in technology and different technologies are applied for either system. Technology includes tools, materials, skills, techniques and processes by which goals are accomplished in the public and private spheres. Concepts of decentralization of technology are used throughout all types of technology, including especially information technology and appropriate technology.
Technologies often mentioned as best implemented in a decentralized manner include water purification, delivery and waste water disposal, agricultural technology and energy technology. Advances in technology may create opportunities for decentralized and privatized replacements for what had traditionally been public services or utilities, such as power, water, mail, telecommunications, consumer product safety, banking, medical licensure, parking meters, and auto emissions. However, in terms of technology, a clear distinction between fully centralized and fully decentralized technical solutions is often not possible, which makes finding an optimal degree of centralization difficult from an infrastructure planning perspective.
Information technology
Information technology encompasses computers and computer networks, as well as information distribution technologies such as television and telephones. The whole computer industry, including computer hardware, software, electronics, the Internet, telecommunications equipment, e-commerce and computer services, is included.
Executives and managers face a constant tension between centralizing and decentralizing information technology for their organizations. They must find the right balance of centralizing which lowers costs and allows more control by upper management, and decentralizing which allows sub-units and users more control. This will depend on analysis of the specific situation. Decentralization is particularly applicable to business or management units which have a high level of independence, complicated products and customers, and technology less relevant to other units.
Information technology applied to government communications with citizens, often called e-Government, is supposed to support decentralization and democratization. Various forms have been instituted in most nations worldwide.
The Internet is an example of an extremely decentralized network, having no owners at all (although some have argued that this is less the case in recent years). "No one is in charge of internet, and everyone is." As long as they follow a certain minimal number of rules, anyone can be a service provider or a user. Voluntary boards establish protocols, but cannot stop anyone from developing new ones. Other examples of open source or decentralized movements are Wikis which allow users to add, modify, or delete content via the internet. Wikipedia has been described as decentralized (although it is a centralized web site, with a single entity operating the servers). Smartphones have been described as being an important part of the decentralizing effects of smaller and cheaper computers worldwide.
Decentralization continues throughout the industry, for example as the decentralized architecture of wireless routers installed in homes and offices supplement and even replace phone companies' relatively centralized long-range cell towers.
Inspired by system and cybernetics theorists like Norbert Wiener, Marshall McLuhan and Buckminster Fuller, in the 1960s Stewart Brand started the Whole Earth Catalog and later computer networking efforts to bring Silicon Valley computer technologists and entrepreneurs together with countercultural ideas. This resulted in ideas like personal computing, virtual communities and the vision of an "electronic frontier" which would be a more decentralized, egalitarian and free-market libertarian society. Related ideas coming out of Silicon Valley included the free software and creative commons movements which produced visions of a "networked information economy".
Because human interactions in cyberspace transcend physical geography, there is a necessity for new theories in legal and other rule-making systems to deal with decentralized decision-making processes in such systems. For example, what rules should apply to conduct on the global digital network and who should set them? The laws of which nations govern issues of Internet transactions (like seller disclosure requirements or definitions of "fraud"), copyright and trademark?
Decentralized computing
Centralization and re-decentralization of the Internet
The New Yorker reports that although the Internet was originally decentralized, by 2013 it had become less so: "a staggering percentage of communications flow through a small set of corporations – and thus, under the profound influence of those companies and other institutions [...] One solution, espoused by some programmers, is to make the Internet more like it used to be – less centralized and more distributed."
Examples of projects that attempt to contribute to the re-decentralization of the Internet include ArkOS, Diaspora, FreedomBox, IndieWeb, Namecoin, SAFE Network, twtxt and ZeroNet as well as advocacy group Redecentralize.org, which provides support for projects that aim to make the Web less centralized.
In an interview with BBC Radio 5 Live, one of the co-founders of Redecentralize.org made the case for returning the web to a more distributed architecture.
Blockchain technology
In blockchain, decentralization refers to the transfer of control and decision-making from a centralized entity (individual, organization, or group thereof) to a distributed network. Decentralized networks strive to reduce the level of trust that participants must place in one another, and deter their ability to exert authority or control over one another in ways that degrade the functionality of the network.
Decentralized protocols, applications, and ledgers (used in Web3) could be more difficult for governments to regulate, similar to difficulties regulating BitTorrent (which is not a blockchain technology).
Criticism
Factors hindering decentralization include weak local administrative or technical capacity, which may result in inefficient or ineffective services; inadequate financial resources available to perform new local responsibilities, especially in the start-up phase when they are most needed; or inequitable distribution of resources. Decentralization can make national policy coordination too complex; it may allow local elites to capture functions; local cooperation may be undermined by any distrust between private and public sectors; decentralization may result in higher enforcement costs and conflict for resources if there is no higher level of authority. Additionally, decentralization may not be as efficient for standardized, routine, network-based services, as opposed to those that need more complicated inputs. If there is a loss of economies of scale in procurement of labor or resources, the expense of decentralization can rise, even as central governments lose control over financial resources.
It has been noted that while decentralization may increase "productive efficiency", it may undermine "allocative efficiency" by making redistribution of wealth more difficult. Decentralization can cause greater disparities between rich and poor regions, especially during times of crisis, when the national government may not be able to help regions needing it.
See also
Centralization
Federalism
Subsidiarity
References
Further reading
Aucoin, Peter, and Herman Bakvis. The Centralization-Decentralization Conundrum: Organization and Management in the Canadian Government (IRPP, 1988).
Campbell, Tim. Quiet Revolution: Decentralization and the Rise of Political Participation in Latin American Cities (University of Pittsburgh Press, 2003).
Faguet, Jean-Paul. Decentralization and Popular Democracy: Governance from Below in Bolivia (University of Michigan Press, 2012).
Fisman, Raymond, and Roberta Gatti (2000). Decentralization and Corruption: Evidence Across Countries, Journal of Public Economics, Vol. 83, No. 3, pp. 325–45.
Frischmann, Eva. Decentralization and Corruption. A Cross-Country Analysis (Grin Verlag, 2010).
Miller, Michelle Ann, ed. Autonomy and Armed Separatism in South and Southeast Asia (Singapore: ISEAS, 2012).
Miller, Michelle Ann. Rebellion and Reform in Indonesia. Jakarta's Security and Autonomy Policies in Aceh (London and New York: Routledge, 2009).
Rosen, Harvey S., ed. Fiscal Federalism: Quantitative Studies (National Bureau of Economic Research Project Report, University of Chicago Press, 2008).
Taylor, Jeff. Politics on a Human Scale: The American Tradition of Decentralism (Lanham, Md.: Lexington Books, 2013).
Richard M. Burton, Børge Obel, Design Models for Hierarchical Organizations: Computation, Information, and Decentralization (Springer, 1995).
Merilee Serrill Grindle, Going Local: Decentralization, Democratization, and the Promise of Good Governance (Princeton University Press, 2007).
Daniel Treisman, The Architecture of Government: Rethinking Political Decentralization (Cambridge University Press, 2007).
Ryan McMaken, Breaking Away: The Case for Secession, Radical Decentralization, and Smaller Polities (Ludwig von Mises Institute, 2022).
Schakel, Arjan H. (2008). Validation of the Regional Authority Index, Regional and Federal Studies, Routledge, Vol. 18 (2).
Decentralization, article at the "Restructuring local government project" of Dr. Mildred Warner, Cornell University; includes a number of articles on decentralization trends and theories.
Robert J. Bennett, ed., Decentralization, Intergovernmental Relations and Markets: Towards a Post-Welfare Agenda (Clarendon, 1990), pp. 1–26.
External links
Organization design
Cyberpunk themes
Military tactics | Decentralization | [
"Engineering"
] | 8,675 | [
"Design",
"Organization design"
] |
49,172 | https://en.wikipedia.org/wiki/Interval%20%28mathematics%29 | In mathematics, a real interval is the set of all real numbers lying between two fixed endpoints with no "gaps". Each endpoint is either a real number or positive or negative infinity, indicating the interval extends without a bound. A real interval can contain neither endpoint, either endpoint, or both endpoints, excluding any endpoint which is infinite.
For example, the set of real numbers consisting of 0, 1, and all numbers in between is an interval, denoted [0, 1] and called the unit interval; the set of all positive real numbers is an interval, denoted (0, ∞); the set of all real numbers is an interval, denoted (−∞, ∞); and any single real number a is an interval, denoted [a, a].
Intervals are ubiquitous in mathematical analysis. For example, they occur implicitly in the epsilon-delta definition of continuity; the intermediate value theorem asserts that the image of an interval by a continuous function is an interval; integrals of real functions are defined over an interval; etc.
Interval arithmetic consists of computing with intervals instead of real numbers for providing a guaranteed enclosure of the result of a numerical computation, even in the presence of uncertainties of input data and rounding errors.
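As a minimal illustration of that idea, here is a sketch in Python of interval arithmetic propagating an input uncertainty through a computation (the Interval class and its operations are illustrative, not a standard library API):

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # [a, b] + [c, d] = [a + c, b + d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # the extremes of [a, b] * [c, d] occur at endpoint products
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

# a measurement x = 2.0 +/- 0.1 propagated through x * (x + 1):
x = Interval(1.9, 2.1)
print(x * (x + Interval(1.0, 1.0)))  # Interval(lo=5.51, hi=6.51), enclosing 6.0

A production implementation would additionally round each lower bound down and each upper bound up, so that floating-point rounding never shrinks the enclosure.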
Intervals are likewise defined on an arbitrary totally ordered set, such as integers or rational numbers. The notation of integer intervals is considered in the special section below.
Definitions and terminology
An interval is a subset of the real numbers that contains all real numbers lying between any two numbers of the subset.
The endpoints of an interval are its supremum, and its infimum, if they exist as real numbers. If the infimum does not exist, one often says that the corresponding endpoint is −∞. Similarly, if the supremum does not exist, one says that the corresponding endpoint is +∞.
Intervals are completely determined by their endpoints and whether each endpoint belongs to the interval. This is a consequence of the least-upper-bound property of the real numbers. This characterization is used to specify intervals by means of interval notation, which is described below.
An open interval does not include any endpoint, and is indicated with parentheses. For example, (0, 1) is the interval of all real numbers greater than 0 and less than 1. (This interval can also be denoted by ]0, 1[, see below.) The open interval (0, +∞) consists of real numbers greater than 0, i.e., positive real numbers. The open intervals are thus one of the forms
(a, b), (−∞, b), (a, +∞), or (−∞, +∞),
where a and b are real numbers such that a ≤ b. When a = b in the first case, the resulting interval is the empty set (a, a) = ∅, which is a degenerate interval (see below). The open intervals are those intervals that are open sets for the usual topology on the real numbers.
A closed interval is an interval that includes all its endpoints and is denoted with square brackets. For example, [0, 1] means greater than or equal to 0 and less than or equal to 1. Closed intervals have one of the following forms, in which a and b are real numbers such that a ≤ b:
[a, b], (−∞, b], [a, +∞), or (−∞, +∞).
The closed intervals are those intervals that are closed sets for the usual topology on the real numbers. The empty set and ℝ = (−∞, +∞) are the only intervals that are both open and closed.
A half-open interval has two endpoints and includes only one of them. It is said to be left-open or right-open depending on whether the excluded endpoint is on the left or on the right. These intervals are denoted by mixing notations for open and closed intervals. For example, (0, 1] means greater than 0 and less than or equal to 1, while [0, 1) means greater than or equal to 0 and less than 1. The half-open intervals have the form (a, b] or [a, b).
Every closed interval is a closed set of the real line, but an interval that is a closed set need not be a closed interval. For example, intervals (−∞, b] and [a, +∞) are also closed sets in the real line. Intervals (a, b] and [a, b) are neither open sets nor closed sets. If one allows an endpoint in the closed side to be an infinity (such as (0, +∞]), the result will not be an interval, since it is not even a subset of the real numbers. Instead, the result can be seen as an interval in the extended real line, which occurs in measure theory, for example.
In summary, a set of the real numbers is an interval, if and only if it is an open interval, a closed interval, or a half-open interval.
A degenerate interval is any set consisting of a single real number (i.e., an interval of the form [a, a]). Some authors include the empty set in this definition. A real interval that is neither empty nor degenerate is said to be proper, and has infinitely many elements.
An interval is said to be left-bounded or right-bounded, if there is some real number that is, respectively, smaller than or larger than all its elements. An interval is said to be bounded, if it is both left- and right-bounded; and is said to be unbounded otherwise. Intervals that are bounded at only one end are said to be half-bounded. The empty set is bounded, and the set of all reals is the only interval that is unbounded at both ends. Bounded intervals are also commonly known as finite intervals.
Bounded intervals are bounded sets, in the sense that their diameter (which is equal to the absolute difference between the endpoints) is finite. The diameter may be called the length, width, measure, range, or size of the interval. The size of unbounded intervals is usually defined as +∞, and the size of the empty interval may be defined as 0 (or left undefined).
The centre (midpoint) of a bounded interval with endpoints a and b is (a + b)/2, and its radius is the half-length |a − b|/2. These concepts are undefined for empty or unbounded intervals.
An interval is said to be left-open if and only if it contains no minimum (an element that is smaller than all other elements); right-open if it contains no maximum; and open if it contains neither. The interval [0, 1), for example, is left-closed and right-open. The empty set and the set of all reals are both open and closed intervals, while the set of non-negative reals [0, +∞) is a closed interval that is right-open but not left-open. The open intervals are open sets of the real line in its standard topology, and form a base of the open sets.
An interval is said to be left-closed if it has a minimum element or is left-unbounded, right-closed if it has a maximum or is right-unbounded; it is simply closed if it is both left-closed and right-closed. So, the closed intervals coincide with the closed sets in that topology.
The interior of an interval I is the largest open interval that is contained in I; it is also the set of points in I which are not endpoints of I. The closure of I is the smallest closed interval that contains I, which is also the set I augmented with its finite endpoints.
For any set X of real numbers, the interval enclosure or interval span of X is the unique interval that contains X, and does not properly contain any other interval that also contains X.
An interval I is a subinterval of interval J if I is a subset of J. An interval I is a proper subinterval of J if I is a proper subset of J.
However, there is conflicting terminology for the terms segment and interval, which have been employed in the literature in two essentially opposite ways, resulting in ambiguity when these terms are used. The Encyclopedia of Mathematics defines interval (without a qualifier) to exclude both endpoints (i.e., open interval) and segment to include both endpoints (i.e., closed interval), while Rudin's Principles of Mathematical Analysis calls sets of the form [a, b] intervals and sets of the form (a, b) segments throughout. These terms tend to appear in older works; modern texts increasingly favor the term interval (qualified by open, closed, or half-open), regardless of whether endpoints are included.
Notations for intervals
The interval of numbers between a and b, including a and b, is often denoted [a, b]. The two numbers are called the endpoints of the interval. In countries where numbers are written with a decimal comma, a semicolon may be used as a separator to avoid ambiguity.
Including or excluding endpoints
To indicate that one of the endpoints is to be excluded from the set, the corresponding square bracket can be either replaced with a parenthesis, or reversed. Both notations are described in International standard ISO 31-11. Thus, in set builder notation,

(a, b) = ]a, b[ = {x ∈ ℝ | a < x < b},
[a, b) = [a, b[ = {x ∈ ℝ | a ≤ x < b},
(a, b] = ]a, b] = {x ∈ ℝ | a < x ≤ b},
[a, b] = {x ∈ ℝ | a ≤ x ≤ b}.

Each interval (a, a), [a, a), and (a, a] represents the empty set, whereas [a, a] denotes the singleton set {a}. When a > b, all four notations are usually taken to represent the empty set.
Both notations may overlap with other uses of parentheses and brackets in mathematics. For instance, the notation (a, b) is often used to denote an ordered pair in set theory, the coordinates of a point or vector in analytic geometry and linear algebra, or (sometimes) a complex number in algebra. That is why Bourbaki introduced the notation ]a, b[ to denote the open interval. The notation [a, b] too is occasionally used for ordered pairs, especially in computer science.
Some authors, such as Yves Tillé, use ]a, b[ to denote the complement of the interval (a, b); namely, the set of all real numbers that are either less than or equal to a, or greater than or equal to b.
Infinite endpoints
In some contexts, an interval may be defined as a subset of the extended real numbers, the set of all real numbers augmented with −∞ and +∞.
In this interpretation, the notations [−∞, b], (−∞, b], [−∞, b), and (−∞, b) are all meaningful and distinct. In particular, (−∞, +∞) denotes the set of all ordinary real numbers, while [−∞, +∞] denotes the extended reals.
Even in the context of the ordinary reals, one may use an infinite endpoint to indicate that there is no bound in that direction. For example, (0, +∞) is the set of positive real numbers, also written as ℝ₊. The context affects some of the above definitions and terminology. For instance, the interval (−∞, +∞) = ℝ is closed in the realm of ordinary reals, but not in the realm of the extended reals.
Integer intervals
When a and b are integers, the notation ⟦a, b⟧, or [a .. b] or {a .. b} or just a .. b, is sometimes used to indicate the interval of all integers between a and b included. The notation [a .. b] is used in some programming languages; in Pascal, for example, it is used to formally define a subrange type, most frequently used to specify lower and upper bounds of valid indices of an array.
Another way to interpret integer intervals is as sets defined by enumeration, using ellipsis notation.
An integer interval that has a finite lower or upper endpoint always includes that endpoint. Therefore, the exclusion of endpoints can be explicitly denoted by writing a .. b − 1, a + 1 .. b, or a + 1 .. b − 1. Alternate-bracket notations like [a .. b) or [a .. b[ are rarely used for integer intervals.
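For illustration, Python's built-in range (an example of the half-open convention; any language with a similar construct would do) enumerates the integer interval [a .. b):

a, b = 3, 7
# range(a, b) enumerates the half-open integer interval [a .. b)
print(list(range(a, b)))      # [3, 4, 5, 6]
# the closed integer interval [a .. b] therefore needs b + 1
print(list(range(a, b + 1)))  # [3, 4, 5, 6, 7]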
Properties
The intervals are precisely the connected subsets of ℝ. It follows that the image of an interval by any continuous function from ℝ to ℝ is also an interval. This is one formulation of the intermediate value theorem.
The intervals are also the convex subsets of ℝ. The interval enclosure of a subset X ⊆ ℝ is also the convex hull of X.
The closure of an interval is the union of the interval and the set of its finite endpoints, and hence is also an interval. (The latter also follows from the fact that the closure of every connected subset of a topological space is a connected subset.) In other words, we have cl(I) = I ∪ ({inf I, sup I} ∩ ℝ).
The intersection of any collection of intervals is always an interval. The union of two intervals is an interval if and only if they have a non-empty intersection or an open endpoint of one interval is a closed endpoint of the other; for example, (a, b) ∪ [b, c] = (a, c].
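Computationally, the intersection rule is direct; a sketch in Python, restricted to closed bounded intervals represented as (lo, hi) pairs (an illustrative helper, not a library function):

def intersect(i, j):
    # the overlap is bounded below by the larger lower endpoint and
    # above by the smaller upper endpoint; empty if these cross
    lo, hi = max(i[0], j[0]), min(i[1], j[1])
    return (lo, hi) if lo <= hi else None

print(intersect((0, 5), (3, 8)))  # (3, 5)
print(intersect((0, 1), (2, 3)))  # None: disjoint, so the union is not an interval
print(intersect((0, 2), (2, 3)))  # (2, 2): they touch, so the union [0, 3] is an interval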
If ℝ is viewed as a metric space, its open balls are the open bounded intervals (c − r, c + r), and its closed balls are the closed bounded intervals [c − r, c + r]. In particular, the metric and order topologies in the real line coincide, which is the standard topology of the real line.
Any element x of an interval I defines a partition of I into three disjoint intervals I₁, I₂, I₃: respectively, the elements of I that are less than x, the singleton [x, x] = {x}, and the elements that are greater than x. The parts I₁ and I₃ are both non-empty (and have non-empty interiors), if and only if x is in the interior of I. This is an interval version of the trichotomy principle.
Dyadic intervals
A dyadic interval is a bounded real interval whose endpoints are j/2ⁿ and (j + 1)/2ⁿ, where j and n are integers. Depending on the context, either endpoint may or may not be included in the interval.
Dyadic intervals have the following properties:
The length of a dyadic interval is always an integer power of two.
Each dyadic interval is contained in exactly one dyadic interval of twice the length.
Each dyadic interval is spanned by two dyadic intervals of half the length.
If two open dyadic intervals overlap, then one of them is a subset of the other.
The dyadic intervals consequently have a structure that reflects that of an infinite binary tree.
Dyadic intervals are relevant to several areas of numerical analysis, including adaptive mesh refinement, multigrid methods and wavelet analysis. Another way to represent such a structure is p-adic analysis (for p = 2).
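As a sketch of how concrete these are, the level-n dyadic interval containing a given x can be computed directly in Python (half-open convention [j/2ⁿ, (j + 1)/2ⁿ); the function name is illustrative):

import math

def dyadic_interval(x, n):
    # the unique j with j/2**n <= x < (j + 1)/2**n
    j = math.floor(x * 2**n)
    return (j / 2**n, (j + 1) / 2**n)

# successive levels nest like an infinite binary tree:
print(dyadic_interval(0.3, 1))  # (0.0, 0.5)
print(dyadic_interval(0.3, 2))  # (0.25, 0.5)
print(dyadic_interval(0.3, 3))  # (0.25, 0.375)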
Generalizations
Balls
An open finite interval (a, b) is a 1-dimensional open ball with a center at (a + b)/2 and a radius of (b − a)/2. The closed finite interval [a, b] is the corresponding closed ball, and the interval's two endpoints {a, b} form a 0-dimensional sphere. Generalized to n-dimensional Euclidean space, a ball is the set of points whose distance from the center is less than the radius. In the 2-dimensional case, a ball is called a disk.
If a half-space is taken as a kind of degenerate ball (without a well-defined center or radius), a half-space can be taken as analogous to a half-bounded interval, with its boundary plane as the (degenerate) sphere corresponding to the finite endpoint.
Multi-dimensional intervals
A finite interval is (the interior of) a 1-dimensional hyperrectangle. Generalized to real coordinate space ℝⁿ, an axis-aligned hyperrectangle (or box) is the Cartesian product of n finite intervals. For n = 2 this is a rectangle; for n = 3 this is a rectangular cuboid (also called a "box").
Allowing for a mix of open, closed, and infinite endpoints, the Cartesian product of any n intervals, I = I₁ × I₂ × ⋯ × Iₙ, is sometimes called an n-dimensional interval.
A facet of such an interval I is the result of replacing any non-degenerate interval factor Iₖ by a degenerate interval consisting of a finite endpoint of Iₖ. The faces of I comprise I itself and all faces of its facets. The corners of I are the faces that consist of a single point of I.
Convex polytopes
Any finite interval can be constructed as the intersection of half-bounded intervals (with an empty intersection taken to mean the whole real line), and the intersection of any number of half-bounded intervals is a (possibly empty) interval. Generalized to n-dimensional affine space, an intersection of half-spaces (of arbitrary orientation) is (the interior of) a convex polytope, or in the 2-dimensional case a convex polygon.
Domains
An open interval is a connected open set of real numbers. Generalized to topological spaces in general, a non-empty connected open set is called a domain.
Complex intervals
Intervals of complex numbers can be defined as regions of the complex plane, either rectangular or circular.
Intervals in posets and preordered sets
Definitions
The concept of intervals can be defined in arbitrary partially ordered sets or, more generally, in arbitrary preordered sets. For a preordered set (X, ≲) and two elements a, b ∈ X, one similarly defines the intervals [a, b], (a, b), [a, b), (a, b], (−∞, b], (−∞, b), [a, +∞), (a, +∞), and (−∞, +∞), where x < y means x ≲ y and not y ≲ x. Actually, the intervals with single or no endpoints are the same as the intervals with two endpoints in the larger preordered set obtained by adding new smallest and greatest elements −∞ and +∞ (even if there were ones); these intervals are subsets of X. In the case of X = ℝ, one may take the larger set to be the extended real line.
Convex sets and convex components in order theory
A subset A of the preordered set (X, ≲) is (order-)convex if for every x, y ∈ A and every z ∈ X, x ≲ z ≲ y implies z ∈ A. Unlike in the case of the real line, a convex set of a preordered set need not be an interval. For example, in the totally ordered set ℚ of rational numbers, the set

{x ∈ ℚ : x² < 2}

is convex, but not an interval of ℚ, since there is no square root of two in ℚ.
Let X be a preordered set and let Y ⊆ X. The convex sets of X contained in Y form a poset under inclusion. A maximal element of this poset is called a convex component of Y. By Zorn's lemma, any convex set of X contained in Y is contained in some convex component of Y, but such components need not be unique. In a totally ordered set, such a component is always unique. That is, the convex components of a subset of a totally ordered set form a partition.
Properties
A generalization of the characterizations of the real intervals follows. For a non-empty subset I of a linear continuum L, the following conditions are equivalent.
The set I is an interval.
The set I is order-convex.
The set I is a connected subset when L is endowed with the order topology.
For a subset A of a lattice L, the following conditions are equivalent.
The set A is a sublattice and an (order-)convex set.
There is an ideal I and a filter F such that A = I ∩ F.
Applications
In general topology
Every Tychonoff space is embeddable into a product space of copies of the closed unit interval [0, 1]. Actually, every Tychonoff space that has a base of cardinality κ is embeddable into the product of κ copies of the interval.
The concepts of convex sets and convex components are used in a proof that every totally ordered set endowed with the order topology is completely normal or moreover, monotonically normal.
Topological algebra
Intervals can be associated with points of the plane, and hence regions of intervals can be associated with regions of the plane. Generally, an interval in mathematics corresponds to an ordered pair (x, y) taken from the direct product ℝ × ℝ of real numbers with itself, where it is often assumed that x ≤ y. For purposes of mathematical structure, this restriction is discarded, and "reversed intervals" where x > y are allowed. Then, the collection of all intervals [x, y] can be identified with the topological ring formed by the direct sum of ℝ with itself, where addition and multiplication are defined component-wise.
The direct sum algebra ℝ ⊕ ℝ has two ideals, { [x, 0] : x ∈ ℝ } and { [0, y] : y ∈ ℝ }. The identity element of this algebra is the condensed interval [1, 1]. If interval [x, y] is not in one of the ideals, then it has multiplicative inverse [1/x, 1/y]. Endowed with the usual topology, the algebra of intervals forms a topological ring. The group of units of this ring consists of four quadrants determined by the axes, or ideals in this case. The identity component of this group is quadrant I.
Every interval can be considered a symmetric interval around its midpoint. In a reconfiguration published in 1956 by M. Warmus, the axis of "balanced intervals" is used along with the axis of intervals that reduce to a point. Instead of the direct sum, the ring of intervals has been identified with the hyperbolic numbers by M. Warmus and D. H. Lehmer through the identification

z = (x + y)/2 + j (x − y)/2,

where j² = +1.
This linear mapping of the plane, which amounts to a ring isomorphism, provides the plane with a multiplicative structure having some analogies to ordinary complex arithmetic, such as polar decomposition.
See also
Arc (geometry)
Inequality
Interval graph
Interval finite element
Interval (statistics)
Line segment
Partition of an interval
Unit interval
References
Bibliography
T. Sunaga, "Theory of interval algebra and its application to numerical analysis", In: Research Association of Applied Geometry (RAAG) Memoirs, Gakujutsu Bunken Fukyu-kai. Tokyo, Japan, 1958, Vol. 2, pp. 29–46 (547–564); reprinted in Japan Journal on Industrial and Applied Mathematics, 2009, Vol. 26, No. 2–3, pp. 126–143.
External links
A Lucid Interval by Brian Hayes: An American Scientist article provides an introduction.
Interval computations website
Interval computations research centers
Interval Notation by George Beck, Wolfram Demonstrations Project.
Sets of real numbers
Order theory
Topology | Interval (mathematics) | [
"Physics",
"Mathematics"
] | 4,075 | [
"Topology",
"Space",
"Geometry",
"Spacetime",
"Order theory"
] |
49,176 | https://en.wikipedia.org/wiki/Conjugacy%20class | In mathematics, especially group theory, two elements and of a group are conjugate if there is an element in the group such that This is an equivalence relation whose equivalence classes are called conjugacy classes. In other words, each conjugacy class is closed under for all elements in the group.
Members of the same conjugacy class cannot be distinguished by using only the group structure, and therefore share many properties. The study of conjugacy classes of non-abelian groups is fundamental for the study of their structure. For an abelian group, each conjugacy class is a set containing one element (singleton set).
Functions that are constant for members of the same conjugacy class are called class functions.
Definition
Let G be a group. Two elements a, b ∈ G are conjugate if there exists an element g ∈ G such that b = gag⁻¹, in which case b is called a conjugate of a and a is called a conjugate of b.
In the case of the general linear group of invertible matrices, the conjugacy relation is called matrix similarity.
It can be easily shown that conjugacy is an equivalence relation and therefore partitions G into equivalence classes. (This means that every element of the group belongs to precisely one conjugacy class, and the classes Cl(a) and Cl(b) are equal if and only if a and b are conjugate, and disjoint otherwise.) The equivalence class that contains the element a ∈ G is

Cl(a) = { gag⁻¹ : g ∈ G }
and is called the conjugacy class of a. The class number of G is the number of distinct (nonequivalent) conjugacy classes. All elements belonging to the same conjugacy class have the same order.
Conjugacy classes may be referred to by describing them, or more briefly by abbreviations such as "6A", meaning "a certain conjugacy class with elements of order 6", and "6B" would be a different conjugacy class with elements of order 6; the conjugacy class 1A is the conjugacy class of the identity which has order 1. In some cases, conjugacy classes can be described in a uniform way; for example, in the symmetric group they can be described by cycle type.
Examples
The symmetric group consisting of the 6 permutations of three elements, has three conjugacy classes:
No change (abc → abc). The single member has order 1.
Transposing two (abc → acb, abc → bac, abc → cba). The 3 members all have order 2.
A cyclic permutation of all three (abc → bca, abc → cab). The 2 members both have order 3.
These three classes also correspond to the classification of the isometries of an equilateral triangle.
The symmetric group consisting of the 24 permutations of four elements, has five conjugacy classes, listed with their description, cycle type, member order, and members:
No change. Cycle type = [14]. Order = 1. Members = { (1, 2, 3, 4) }. The single row containing this conjugacy class is shown as a row of black circles in the adjacent table.
Interchanging two (other two remain unchanged). Cycle type = [1221]. Order = 2. Members = { (1, 2, 4, 3), (1, 4, 3, 2), (1, 3, 2, 4), (4, 2, 3, 1), (3, 2, 1, 4), (2, 1, 3, 4) }). The 6 rows containing this conjugacy class are highlighted in green in the adjacent table.
A cyclic permutation of three (other one remains unchanged). Cycle type = [1131]. Order = 3. Members = { (1, 3, 4, 2), (1, 4, 2, 3), (3, 2, 4, 1), (4, 2, 1, 3), (4, 1, 3, 2), (2, 4, 3, 1), (3, 1, 2, 4), (2, 3, 1, 4) }). The 8 rows containing this conjugacy class are shown with normal print (no boldface or color highlighting) in the adjacent table.
A cyclic permutation of all four. Cycle type = [41]. Order = 4. Members = { (2, 3, 4, 1), (2, 4, 1, 3), (3, 1, 4, 2), (3, 4, 2, 1), (4, 1, 2, 3), (4, 3, 1, 2) }). The 6 rows containing this conjugacy class are highlighted in orange in the adjacent table.
Interchanging two, and also the other two. Cycle type = [22]. Order = 2. Members = { (2, 1, 4, 3), (4, 3, 2, 1), (3, 4, 1, 2) }). The 3 rows containing this conjugacy class are shown with boldface entries in the adjacent table.
The proper rotations of the cube, which can be characterized by permutations of the body diagonals, are also described by conjugation in
In general, the number of conjugacy classes in the symmetric group Sₙ is equal to the number of integer partitions of n. This is because each conjugacy class corresponds to exactly one partition of {1, 2, ..., n} into cycles, up to permutation of the elements of {1, 2, ..., n}.
In general, the Euclidean group can be studied by conjugation of isometries in Euclidean space.
Example
Let G = S₃, the symmetric group on three elements, with permutations written in cycle notation, and take
a = (2 3)
x = (1 2 3)
x⁻¹ = (3 2 1)
Then
xax⁻¹ = (1 2 3)(2 3)(3 2 1) = (3 1),
so (3 1) is a conjugate of (2 3).
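The same computation can be checked by brute force; a sketch in Python, with permutations of {1, 2, 3} written in one-line tuple notation (helper names are illustrative):

from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p(q(i)); a tuple p encodes p(i) = p[i - 1]
    return tuple(p[q[i - 1] - 1] for i in range(1, 4))

def inverse(p):
    inv = [0, 0, 0]
    for i, v in enumerate(p, start=1):
        inv[v - 1] = i
    return tuple(inv)

group = list(permutations((1, 2, 3)))  # all six elements of S3

def conjugacy_class(a):
    return {compose(compose(x, a), inverse(x)) for x in group}

a = (1, 3, 2)  # the transposition (2 3) in one-line notation
print(sorted(conjugacy_class(a)))
# [(1, 3, 2), (2, 1, 3), (3, 2, 1)] -- all three transpositions,
# including (3 1), whose one-line form is (3, 2, 1), as computed above

# the classes partition S3 into sizes 1 (identity), 3 (transpositions), 2 (3-cycles):
classes = {tuple(sorted(conjugacy_class(g))) for g in group}
print(sorted(len(c) for c in classes))  # [1, 2, 3]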
Properties
The identity element is always the only element in its class, that is Cl(e) = {e}.
If G is abelian, then gag⁻¹ = a for all a, g ∈ G, i.e. Cl(a) = {a} for all a ∈ G (and the converse is also true: if all conjugacy classes are singletons, then G is abelian).
If two elements a, b ∈ G belong to the same conjugacy class (that is, if they are conjugate), then they have the same order. More generally, every statement about a can be translated into a statement about b = gag⁻¹, because the map φ(x) = gxg⁻¹ is an automorphism of G called an inner automorphism. See the next property for an example.
If a and b are conjugate, then so are their powers aᵏ and bᵏ. (Proof: if a = gbg⁻¹, then aᵏ = gbᵏg⁻¹.) Thus taking kth powers gives a map on conjugacy classes, and one may consider which conjugacy classes are in its preimage. For example, in the symmetric group, the square of an element of type (3)(2) (a 3-cycle and a 2-cycle) is an element of type (3), therefore one of the power-up classes of (3) is the class (3)(2) (where the class of a is called a power-up class of the class of aᵏ).
An element a ∈ G lies in the center of G if and only if its conjugacy class has only one element, a itself. More generally, if C_G(a) denotes the centralizer of a, i.e., the subgroup consisting of all elements g such that ga = ag, then the index [G : C_G(a)] is equal to the number of elements in the conjugacy class of a (by the orbit-stabilizer theorem).
Take σ ∈ Sₙ and let m_1, m_2, ..., m_s be the distinct integers which appear as lengths of cycles in the cycle type of σ (including 1-cycles). Let k_i be the number of cycles of length m_i in σ for each i (so that Σ_i k_i m_i = n). Then the number of conjugates of σ is:

n! / (m_1^{k_1} k_1! · m_2^{k_2} k_2! ⋯ m_s^{k_s} k_s!)
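A sketch of this count in Python (the function name is illustrative); the denominator is the order of the centralizer:

from math import factorial
from collections import Counter

def class_size(cycle_type, n):
    # cycle_type lists every cycle length, including 1-cycles, summing to n;
    # a length m occurring k times contributes m**k * k! to the centralizer order
    assert sum(cycle_type) == n
    centralizer = 1
    for m, k in Counter(cycle_type).items():
        centralizer *= m**k * factorial(k)
    return factorial(n) // centralizer

print(class_size([2, 1, 1], 4))  # 6 transpositions in S4, matching the table above
print(class_size([3, 2], 5))     # 20 elements of type (3)(2) in S5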
Conjugacy as group action
For any two elements g, x ∈ G, let

g · x := gxg⁻¹.

This defines a group action of G on G. The orbits of this action are the conjugacy classes, and the stabilizer of a given element is the element's centralizer.
Similarly, we can define a group action of G on the set of all subsets of G by writing

g · S := gSg⁻¹,

or on the set of the subgroups of G.
Conjugacy class equation
If G is a finite group, then for any group element a, the elements in the conjugacy class of a are in one-to-one correspondence with cosets of the centralizer C_G(a). This can be seen by observing that any two elements b and c belonging to the same coset (and hence, b = cz for some z in the centralizer C_G(a)) give rise to the same element when conjugating a:

bab⁻¹ = (cz)a(cz)⁻¹ = c(zaz⁻¹)c⁻¹ = cac⁻¹.
That can also be seen from the orbit-stabilizer theorem, when considering the group as acting on itself through conjugation, so that orbits are conjugacy classes and stabilizer subgroups are centralizers. The converse holds as well.
Thus the number of elements in the conjugacy class of a is the index [G : C_G(a)] of the centralizer C_G(a) in G; hence the size of each conjugacy class divides the order of the group.
Furthermore, if we choose a single representative element x_i from every conjugacy class, we infer from the disjointness of the conjugacy classes that

|G| = Σ_i [G : C_G(x_i)],

where C_G(x_i) is the centralizer of the element x_i. Observing that each element of the center Z(G) forms a conjugacy class containing just itself gives rise to the class equation:

|G| = |Z(G)| + Σ_i [G : C_G(x_i)],

where the sum is over a representative element from each conjugacy class that is not in the center.
Knowledge of the divisors of the group order can often be used to gain information about the order of the center or of the conjugacy classes.
Example
Consider a finite p-group G (that is, a group with order pⁿ, where p is a prime number and n > 0). We are going to prove that |Z(G)| > 1.
Since the order of any conjugacy class of G must divide the order of G, it follows that each conjugacy class that is not in the center has order p^{k_i} for some k_i > 0. But then the class equation requires that pⁿ = |G| = |Z(G)| + Σ_i p^{k_i}. From this we see that p must divide |Z(G)|, so |Z(G)| > 1.
In particular, when n = 2, G is an abelian group, since any non-trivial group element is of order p or p². If some element of G is of order p², then G is isomorphic to the cyclic group of order p², hence abelian. On the other hand, suppose every non-trivial element in G is of order p; hence, by the conclusion above, |Z(G)| = p or p². We only need to consider the case |Z(G)| = p: then there is an element a of G which is not in the center of G. Note that the centralizer C_G(a) includes a and the center, which does not contain a but contains at least p elements. Hence the order of C_G(a) is strictly larger than p, therefore |C_G(a)| = p², therefore a is an element of the center of G, a contradiction. Hence |Z(G)| = p², so G is abelian and in fact isomorphic to the direct product of two cyclic groups each of order p.
Conjugacy of subgroups and general subsets
More generally, given any subset S ⊆ G (S not necessarily a subgroup), define a subset T ⊆ G to be conjugate to S if there exists some g ∈ G such that T = gSg⁻¹. Let Cl(S) be the set of all subsets T ⊆ G such that T is conjugate to S.
A frequently used theorem is that, given any subset S ⊆ G, the index of N(S) (the normalizer of S) in G equals the cardinality of Cl(S):

|Cl(S)| = [G : N(S)].
This follows since, if g, h ∈ G, then gSg⁻¹ = hSh⁻¹ if and only if g⁻¹h ∈ N(S), in other words, if and only if g and h are in the same coset of N(S).
By using S = {a}, this formula generalizes the one given earlier for the number of elements in a conjugacy class.
The above is particularly useful when talking about subgroups of G. The subgroups can thus be divided into conjugacy classes, with two subgroups belonging to the same class if and only if they are conjugate.
Conjugate subgroups are isomorphic, but isomorphic subgroups need not be conjugate. For example, an abelian group may have two different subgroups which are isomorphic, but they are never conjugate.
Geometric interpretation
Conjugacy classes in the fundamental group of a path-connected topological space can be thought of as equivalence classes of free loops under free homotopy.
Conjugacy class and irreducible representations in finite group
In any finite group, the number of nonisomorphic irreducible representations over the complex numbers is precisely the number of conjugacy classes.
See also
Notes
References
External links
Group theory | Conjugacy class | [
"Mathematics"
] | 2,472 | [
"Group theory",
"Fields of abstract algebra"
] |
49,180 | https://en.wikipedia.org/wiki/Fuzzy%20logic | Fuzzy logic is a form of many-valued logic in which the truth value of variables may be any real number between 0 and 1. It is employed to handle the concept of partial truth, where the truth value may range between completely true and completely false. By contrast, in Boolean logic, the truth values of variables may only be the integer values 0 or 1.
The term fuzzy logic was introduced with the 1965 proposal of fuzzy set theory by mathematician Lotfi Zadeh. Fuzzy logic had, however, been studied since the 1920s, as infinite-valued logic—notably by Łukasiewicz and Tarski.
Fuzzy logic is based on the observation that people make decisions based on imprecise and non-numerical information. Fuzzy models or fuzzy sets are mathematical means of representing vagueness and imprecise information (hence the term fuzzy). These models have the capability of recognising, representing, manipulating, interpreting, and using data and information that are vague and lack certainty.
Fuzzy logic has been applied to many fields, from control theory to artificial intelligence.
Overview
Classical logic only permits conclusions that are either true or false. However, there are also propositions with variable answers, which one might find when asking a group of people to identify a color. In such instances, the truth appears as the result of reasoning from inexact or partial knowledge in which the sampled answers are mapped on a spectrum.
Both degrees of truth and probabilities range between 0 and 1 and hence may seem identical at first, but fuzzy logic uses degrees of truth as a mathematical model of vagueness, while probability is a mathematical model of ignorance.
Applying truth values
A basic application might characterize various sub-ranges of a continuous variable. For instance, a temperature measurement for anti-lock brakes might have several separate membership functions defining particular temperature ranges needed to control the brakes properly. Each function maps the same temperature value to a truth value in the 0 to 1 range. These truth values can then be used to determine how the brakes should be controlled. Fuzzy set theory provides a means for representing uncertainty.
Linguistic variables
In fuzzy logic applications, non-numeric values are often used to facilitate the expression of rules and facts.
A linguistic variable such as age may accept values such as young and its antonym old. Because natural languages do not always contain enough value terms to express a fuzzy value scale, it is common practice to modify linguistic values with adjectives or adverbs. For example, we can use the hedges rather and somewhat to construct the additional values rather old or somewhat young.
Fuzzy systems
Mamdani
The most well-known system is the Mamdani rule-based one. It proceeds in the following steps:
Fuzzify all input values into fuzzy membership functions.
Execute all applicable rules in the rulebase to compute the fuzzy output functions.
De-fuzzify the fuzzy output functions to get "crisp" output values.
Fuzzification
Fuzzification is the process of assigning the numerical input of a system to fuzzy sets with some degree of membership. This degree of membership may be anywhere within the interval [0,1]. If it is 0 then the value does not belong to the given fuzzy set, and if it is 1 then the value completely belongs within the fuzzy set. Any value between 0 and 1 represents the degree of uncertainty that the value belongs in the set. These fuzzy sets are typically described by words, and so by assigning the system input to fuzzy sets, we can reason with it in a linguistically natural manner.
For example, in the image below, the meanings of the expressions cold, warm, and hot are represented by functions mapping a temperature scale. A point on that scale has three "truth values"—one for each of the three functions. The vertical line in the image represents a particular temperature that the three arrows (truth values) gauge. Since the red arrow points to zero, this temperature may be interpreted as "not hot"; i.e. this temperature has zero membership in the fuzzy set "hot". The orange arrow (pointing at 0.2) may describe it as "slightly warm" and the blue arrow (pointing at 0.8) "fairly cold". Therefore, this temperature has 0.2 membership in the fuzzy set "warm" and 0.8 membership in the fuzzy set "cold". The degree of membership assigned for each fuzzy set is the result of fuzzification.
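A sketch of such a fuzzifier in Python (the membership breakpoints are made-up values for illustration, not taken from any real controller):

def trapezoid(x, a, b, c, d):
    # 1 on the plateau [b, c], 0 outside (a, d), linear on the slopes
    if b <= x <= c:
        return 1.0
    if x <= a or x >= d:
        return 0.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# illustrative temperature sets (breakpoints in degrees are arbitrary):
def cold(t): return trapezoid(t, -40, -40, 10, 20)
def warm(t): return trapezoid(t, 10, 20, 25, 35)
def hot(t):  return trapezoid(t, 25, 35, 60, 60)

t = 16.0
print(cold(t), warm(t), hot(t))  # 0.4 0.6 0.0 -- the fuzzified input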
Fuzzy sets are often defined as triangle or trapezoid-shaped curves, as each value will have a slope where the value is increasing, a peak where the value is equal to 1 (which can have a length of 0 or greater) and a slope where the value is decreasing. They can also be defined using a sigmoid function. One common case is the standard logistic function defined as

S(x) = 1 / (1 + e^−x),

which has the following symmetry property:

S(x) + S(−x) = 1.

From this it follows that

(S(x) + S(−x)) · (S(y) + S(−y)) · (S(z) + S(−z)) = 1.
Fuzzy logic operators
Fuzzy logic works with membership values in a way that mimics Boolean logic. To this end, replacements for basic operators ("gates") AND, OR, NOT must be available. There are several ways to do this. A common replacement is called the Zadeh operators:

x AND y = MIN(x, y)
x OR y = MAX(x, y)
NOT x = 1 − x
For TRUE/1 and FALSE/0, the fuzzy expressions produce the same result as the Boolean expressions.
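In code these operators are one-liners; a Python sketch:

def f_and(x, y): return min(x, y)  # fuzzy AND
def f_or(x, y):  return max(x, y)  # fuzzy OR
def f_not(x):    return 1.0 - x    # fuzzy NOT

print(f_and(1, 0), f_or(1, 0), f_not(1))            # 0 1 0.0 -- Boolean behaviour on crisp values
print(f_and(0.8, 0.4), f_or(0.8, 0.4), f_not(0.8))  # 0.4 0.8 0.2 (approximately)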
There are also other operators, more linguistic in nature, called hedges that can be applied. These are generally adverbs such as very, or somewhat, which modify the meaning of a set using a mathematical formula.
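One common convention (an assumption for illustration; other definitions exist) implements very as squaring a membership value and somewhat as taking a square root, concentrating or dilating the set:

def very(mu):      # concentration: intermediate memberships shrink
    return mu ** 2

def somewhat(mu):  # dilation: intermediate memberships grow
    return mu ** 0.5

mu_old = 0.64
print(very(mu_old))      # 0.4096 -- "very old" is a stricter claim
print(somewhat(mu_old))  # 0.8 -- "somewhat old" is a looser one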
However, an arbitrary choice table does not always define a fuzzy logic function. In the paper by Zaitsev et al., a criterion has been formulated to recognize whether a given choice table defines a fuzzy logic function, and a simple algorithm of fuzzy logic function synthesis has been proposed, based on introduced concepts of constituents of minimum and maximum. A fuzzy logic function represents a disjunction of constituents of minimum, where a constituent of minimum is a conjunction of variables of the current area greater than or equal to the function value in this area (to the right of the function value in the inequality, including the function value).
Another set of AND/OR operators is based on multiplication, where
x AND y = x*y
NOT x = 1 - x
Hence,
x OR y = NOT( AND( NOT(x), NOT(y) ) )
x OR y = NOT( AND(1-x, 1-y) )
x OR y = NOT( (1-x)*(1-y) )
x OR y = 1-(1-x)*(1-y)
x OR y = x+y-xy
Given any two of AND/OR/NOT, it is possible to derive the third. The generalization of AND is an instance of a t-norm.
IF-THEN rules
IF-THEN rules map input or computed truth values to desired output truth values. Example:
IF temperature IS very cold THEN fan_speed is stopped
IF temperature IS cold THEN fan_speed is slow
IF temperature IS warm THEN fan_speed is moderate
IF temperature IS hot THEN fan_speed is high
Given a certain temperature, the fuzzy variable hot has a certain truth value, which is copied to the high variable.
Should an output variable occur in several THEN parts, then the values from the respective IF parts are combined using the OR operator.
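The following minimal Python sketch evaluates the fan-speed rule base above; the membership values are hypothetical, standing in for the output of a prior fuzzification step:

# Hypothetical memberships of the current temperature, produced by fuzzification.
temperature = {"very cold": 0.0, "cold": 0.3, "warm": 0.7, "hot": 0.0}

# Each rule copies the truth value of its IF part to its THEN part.
rules = [("very cold", "stopped"), ("cold", "slow"),
         ("warm", "moderate"), ("hot", "high")]

fan_speed = {}
for antecedent, consequent in rules:
    truth = temperature[antecedent]
    # If the same output term occurs in several THEN parts, combine with OR (max).
    fan_speed[consequent] = max(fan_speed.get(consequent, 0.0), truth)

print(fan_speed)  # {'stopped': 0.0, 'slow': 0.3, 'moderate': 0.7, 'high': 0.0}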
Defuzzification
The goal is to get a continuous variable from fuzzy truth values.
This would be easy if the output truth values were exactly those obtained from fuzzification of a given number.
Since, however, all output truth values are computed independently, in most cases they do not represent such a set of numbers.
One then has to decide on a number that best matches the "intention" encoded in the truth value.
For example, for several truth values of fan_speed, an actual speed must be found that best fits the computed truth values of the variables 'slow', 'moderate' and so on.
There is no single algorithm for this purpose.
A common algorithm is
For each truth value, cut the membership function at this value
Combine the resulting curves using the OR operator
Find the center-of-weight of the area under the curve
The x position of this center is then the final output.
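A minimal Python sketch of this cut-combine-centroid procedure, evaluated numerically; the fan-speed membership functions and truth values are hypothetical:

# Hypothetical triangular membership functions over a 0-100 % speed range.
def slow(s):     return max(0.0, 1.0 - abs(s - 20) / 20)
def moderate(s): return max(0.0, 1.0 - abs(s - 50) / 20)
def high(s):     return max(0.0, 1.0 - abs(s - 80) / 20)

truths = {slow: 0.3, moderate: 0.7, high: 0.0}  # from the rule evaluation step

num = den = 0.0
for s in range(101):
    # Cut each membership function at its truth value, then combine with OR (max).
    mu = max(min(f(s), t) for f, t in truths.items())
    num += s * mu
    den += mu

print(num / den)  # centre of weight of the combined area = crisp fan speed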
Takagi–Sugeno–Kang (TSK)
The TSK system is similar to Mamdani, but the defuzzification process is included in the execution of the fuzzy rules. These are also adapted, so that the consequent of the rule is instead represented through a polynomial function (usually constant or linear). An example of a rule with a constant output would be:

IF temperature IS very cold = 2

In this case, the output will be equal to the constant of the consequent (e.g. 2). In most scenarios we would have an entire rule base, with 2 or more rules. If this is the case, the output of the entire rule base will be the average of the consequents Yi of each rule i, weighted according to the membership value hi of its antecedent:

output = Σi (hi · Yi) / Σi hi
An example of a rule with a linear output would instead be:

IF temperature IS very cold AND humidity IS high = 2 * temperature + 1 * humidity

In this case, the output of the rule will be the result of the function in the consequent. The variables within the function represent the membership values after fuzzification, not the crisp values. As before, if the rule base contains 2 or more rules, the total output is the weighted average of the outputs of the individual rules.
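A minimal Python sketch of a zero-order (constant-consequent) TSK rule base using the weighted average above; the membership values and constants are hypothetical:

# Hypothetical rules: (antecedent membership h_i, constant consequent Y_i).
rules = [(0.8, 2.0),   # IF temperature IS very cold = 2
         (0.2, 6.0)]   # IF temperature IS cold      = 6

# Weighted average of the consequents, weighted by antecedent membership.
output = sum(h * y for h, y in rules) / sum(h for h, _ in rules)
print(output)  # (0.8*2 + 0.2*6) / (0.8 + 0.2) = 2.8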
The main advantage of using TSK over Mamdani is that it is computationally efficient and works well within other algorithms, such as PID control and optimization algorithms. It can also guarantee the continuity of the output surface. However, Mamdani is more intuitive and easier for people to work with. Hence, TSK is usually used within other complex methods, such as adaptive neuro-fuzzy inference systems.
Forming a consensus of inputs and fuzzy rules
Since the fuzzy system output is a consensus of all of the inputs and all of the rules, fuzzy logic systems can be well behaved when input values are not available or are not trustworthy. Weightings can be optionally added to each rule in the rulebase and weightings can be used to regulate the degree to which a rule affects the output values. These rule weightings can be based upon the priority, reliability or consistency of each rule. These rule weightings may be static or can be changed dynamically, even based upon the output from other rules.
Applications
Fuzzy logic is used in control systems to allow experts to contribute vague rules such as "if you are close to the destination station and moving fast, increase the train's brake pressure"; these vague rules can then be numerically refined within the system.
Many of the early successful applications of fuzzy logic were implemented in Japan. A first notable application was on the Sendai Subway 1000 series, in which fuzzy logic was able to improve the economy, comfort, and precision of the ride. It has also been used for handwriting recognition in Sony pocket computers, helicopter flight aids, subway system controls, improving automobile fuel efficiency, single-button washing machine controls, automatic power controls in vacuum cleaners, and early recognition of earthquakes through the Institute of Seismology Bureau of Meteorology, Japan.
Artificial intelligence
Neural-network-based artificial intelligence and fuzzy logic are, when analyzed, the same thing—the underlying logic of neural networks is fuzzy. A neural network will take a variety of valued inputs, give them different weights in relation to each other, combine intermediate values a certain number of times, and arrive at a decision with a certain value. Nowhere in that process is there anything like the sequences of either-or decisions which characterize non-fuzzy mathematics, computer programming, and digital electronics. In the 1980s, researchers were divided about the most effective approach to machine learning: decision tree learning or neural networks. The former approach uses binary logic, matching the hardware on which it runs, but despite great efforts it did not result in intelligent systems. Neural networks, by contrast, did result in accurate models of complex situations and soon found their way onto a multitude of electronic devices. They can also now be implemented directly on analog microchips, as opposed to the previous pseudo-analog implementations on digital chips. The greater efficiency of these compensates for the intrinsically lower accuracy of analog computation in various use cases.
Medical decision making
Fuzzy logic is an important concept in medical decision making. Since medical and healthcare data can be subjective or fuzzy, applications in this domain have great potential to benefit from fuzzy-logic-based approaches.
Fuzzy logic can be used in many different aspects within the medical decision making framework. Such aspects include medical image analysis, biomedical signal analysis, segmentation of images or signals, and feature extraction/selection from images or signals.
The biggest question in this application area is how much useful information can be derived when using fuzzy logic. A major challenge is how to derive the required fuzzy data; this is even more challenging when the data must be elicited from humans (usually patients). How to elicit fuzzy data, and how to validate its accuracy, are still ongoing efforts, strongly related to the application of fuzzy logic. The problem of assessing the quality of fuzzy data is a difficult one. This is why fuzzy logic is a highly promising possibility within the medical decision making application area but still requires more research to achieve its full potential.
Image-based computer-aided diagnosis
One of the common application areas of fuzzy logic is image-based computer-aided diagnosis in medicine. Computer-aided diagnosis is a computerized set of inter-related tools that can be used to aid physicians in their diagnostic decision-making.
Fuzzy databases
Once fuzzy relations are defined, it is possible to develop fuzzy relational databases. The first fuzzy relational database, FRDB, appeared in Maria Zemankova's dissertation (1983). Later, some other models arose like the Buckles-Petry model, the Prade-Testemale Model, the Umano-Fukami model or the GEFRED model by J. M. Medina, M. A. Vila et al.
Fuzzy querying languages have been defined, such as the SQLf by P. Bosc et al. and the FSQL by J. Galindo et al. These languages define some structures in order to include fuzzy aspects in the SQL statements, like fuzzy conditions, fuzzy comparators, fuzzy constants, fuzzy constraints, fuzzy thresholds, linguistic labels etc.
Logical analysis
In mathematical logic, there are several formal systems of "fuzzy logic", most of which are in the family of t-norm fuzzy logics.
Propositional fuzzy logics
The most important propositional fuzzy logics are:
Monoidal t-norm-based propositional fuzzy logic MTL is an axiomatization of logic where conjunction is defined by a left continuous t-norm and implication is defined as the residuum of the t-norm. Its models correspond to MTL-algebras that are pre-linear commutative bounded integral residuated lattices.
Basic propositional fuzzy logic BL is an extension of MTL logic where conjunction is defined by a continuous t-norm, and implication is also defined as the residuum of the t-norm. Its models correspond to BL-algebras.
Łukasiewicz fuzzy logic is the extension of basic fuzzy logic BL where standard conjunction is the Łukasiewicz t-norm. It has the axioms of basic fuzzy logic plus an axiom of double negation, and its models correspond to MV-algebras.
Gödel fuzzy logic is the extension of basic fuzzy logic BL where conjunction is the Gödel t-norm (that is, minimum). It has the axioms of BL plus an axiom of idempotence of conjunction, and its models are called G-algebras.
Product fuzzy logic is the extension of basic fuzzy logic BL where conjunction is the product t-norm. It has the axioms of BL plus another axiom for cancellativity of conjunction, and its models are called product algebras.
Fuzzy logic with evaluated syntax (sometimes also called Pavelka's logic), denoted by EVŁ, is a further generalization of mathematical fuzzy logic. While the above kinds of fuzzy logic have traditional syntax and many-valued semantics, in EVŁ syntax is also evaluated. This means that each formula has an evaluation. Axiomatization of EVŁ stems from Łukasiewicz fuzzy logic. A generalization of the classical Gödel completeness theorem is provable in EVŁ.
Predicate fuzzy logics
Similar to the way predicate logic is created from propositional logic, predicate fuzzy logics extend fuzzy systems by universal and existential quantifiers. The semantics of the universal quantifier in t-norm fuzzy logics is the infimum of the truth degrees of the instances of the quantified subformula, while the semantics of the existential quantifier is the supremum of the same.
Decidability issues
The notions of a "decidable subset" and "recursively enumerable subset" are basic ones for classical mathematics and classical logic. Thus the question of a suitable extension of them to fuzzy set theory is a crucial one. The first proposal in such a direction was made by E. S. Santos by the notions of fuzzy Turing machine, Markov normal fuzzy algorithm and fuzzy program (see Santos 1970). Successively, L. Biacino and G. Gerla argued that the proposed definitions are rather questionable. For example, in one shows that the fuzzy Turing machines are not adequate for fuzzy language theory since there are natural fuzzy languages intuitively computable that cannot be recognized by a fuzzy Turing Machine. Then they proposed the following definitions. Denote by Ü the set of rational numbers in [0,1]. Then a fuzzy subset s : S [0,1] of a set S is recursively enumerable if a recursive map h : S×N Ü exists such that, for every x in S, the function h(x,n) is increasing with respect to n and s(x) = lim h(x,n).
We say that s is decidable if both s and its complement –s are recursively enumerable. An extension of such a theory to the general case of the L-subsets is possible (see Gerla 2006).
The proposed definitions are well related to fuzzy logic. Indeed, the following theorem holds true (provided that the deduction apparatus of the considered fuzzy logic satisfies some obvious effectiveness property).
Any "axiomatizable" fuzzy theory is recursively enumerable. In particular, the fuzzy set of logically true formulas is recursively enumerable in spite of the fact that the crisp set of valid formulas is not recursively enumerable, in general. Moreover, any axiomatizable and complete theory is decidable.
It is an open question whether the proposed notion of recursive enumerability for fuzzy subsets is the adequate one, i.e., whether it can support a "Church thesis" for fuzzy mathematics. In order to settle this, extensions of the notions of fuzzy grammar and fuzzy Turing machine are necessary. Another open question is to start from this notion to find an extension of Gödel's theorems to fuzzy logic.
Compared to other logics
Probability
Fuzzy logic and probability address different forms of uncertainty. While both fuzzy logic and probability theory can represent degrees of certain kinds of subjective belief, fuzzy set theory uses the concept of fuzzy set membership, i.e., how much an observation is within a vaguely defined set, whereas probability theory uses the concept of subjective probability, i.e., the frequency of occurrence or likelihood of some event or condition. The concept of fuzzy sets was developed in the mid-twentieth century at Berkeley as a response to the lack of a probability theory for jointly modelling uncertainty and vagueness.
Bart Kosko claims in Fuzziness vs. Probability that probability theory is a subtheory of fuzzy logic, as questions of degrees of belief in mutually-exclusive set membership in probability theory can be represented as certain cases of non-mutually-exclusive graded membership in fuzzy theory. In that context, he also derives Bayes' theorem from the concept of fuzzy subsethood. Lotfi A. Zadeh argues that fuzzy logic is different in character from probability, and is not a replacement for it. He fuzzified probability to fuzzy probability and also generalized it to possibility theory.
More generally, fuzzy logic is one of many different extensions to classical logic intended to deal with issues of uncertainty outside of the scope of classical logic, the inapplicability of probability theory in many domains, and the paradoxes of Dempster–Shafer theory.
Ecorithms
Computational theorist Leslie Valiant uses the term ecorithms to describe how many less exact systems and techniques like fuzzy logic (and "less robust" logic) can be applied to learning algorithms. Valiant essentially redefines machine learning as evolutionary. In general use, ecorithms are algorithms that learn from their more complex environments (hence eco-) to generalize, approximate and simplify solution logic. Like fuzzy logic, they are methods used to overcome continuous variables or systems too complex to completely enumerate or understand discretely or exactly. Ecorithms and fuzzy logic also have the common property of dealing with possibilities more than probabilities, although feedback and feed forward, basically stochastic weights, are a feature of both when dealing with, for example, dynamical systems.
Gödel G∞ logic
Another logical system where truth values are real numbers between 0 and 1 and where the AND and OR operators are replaced with MIN and MAX is Gödel's G∞ logic. This logic has many similarities with fuzzy logic but defines negation differently and has an internal implication. Negation and implication are defined as follows:

NOT(x) = 1 if x = 0, and NOT(x) = 0 if x > 0
x → y = 1 if x ≤ y, and x → y = y if x > y

This turns the resulting logical system into a model for intuitionistic logic, making it particularly well-behaved among all possible choices of logical systems with real numbers between 0 and 1 as truth values. In this case, implication may be interpreted as "x is less true than y" and negation as "x is less true than 0" or "x is strictly false", and for any x and y we have that AND(x, x → y) = AND(x, y). In particular, in Gödel logic negation is no longer an involution and double negation maps any nonzero value to 1.
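A minimal Python sketch of these definitions, checking the identity above and the behaviour of double negation (the function names are hypothetical):

def g_not(x):        return 1.0 if x == 0 else 0.0
def g_implies(x, y): return 1.0 if x <= y else y
def g_and(x, y):     return min(x, y)   # conjunction is still the minimum

x, y = 0.7, 0.4
assert g_and(x, g_implies(x, y)) == g_and(x, y)   # both equal 0.4
assert g_not(g_not(0.3)) == 1.0   # double negation sends any nonzero value to 1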
Compensatory fuzzy logic
Compensatory fuzzy logic (CFL) is a branch of fuzzy logic with modified rules for conjunction and disjunction. When the truth value of one component of a conjunction or disjunction is increased or decreased, the truth value of the other component can be decreased or increased to compensate. This offsetting may be blocked when certain thresholds are met. Proponents claim that CFL allows for better computational semantic behaviors and mimics natural language.
According to Jesús Cejas Montero (2011), compensatory fuzzy logic consists of four continuous operators: conjunction (c), disjunction (d), fuzzy strict order (or), and negation (n). The conjunction is the geometric mean, and the disjunction is its dual.
Markup language standardization
IEEE 1855 (IEEE Standard 1855–2016) specifies a language named Fuzzy Markup Language (FML), developed by the IEEE Standards Association. FML allows modelling a fuzzy logic system in a human-readable and hardware-independent way. FML is based on eXtensible Markup Language (XML). Designers of fuzzy systems who use FML have a unified, high-level methodology for describing interoperable fuzzy systems. IEEE Standard 1855–2016 uses the W3C XML Schema definition language to define the syntax and semantics of FML programs.
Prior to the introduction of FML, fuzzy logic practitioners could exchange information about their fuzzy algorithms by adding to their software functions the ability to read, correctly parse, and store the result of their work in a form compatible with the Fuzzy Control Language (FCL) described and specified by Part 7 of IEC 61131.
See also
Bayesian inference
Expert system
False dilemma
Fuzzy architectural spatial analysis
Fuzzy classification
Fuzzy concept
Fuzzy control system
Fuzzy electronics
Fuzzy subalgebra
FuzzyCLIPS
High performance fuzzy computing
IEEE Transactions on Fuzzy Systems
Interval finite element
Noise-based logic
Paraconsistent logic
Rough set
Sorites paradox
Trinary logic
Type-2 fuzzy sets and systems
Vector logic
References
Bibliography
External links
IEC 1131-7 CD1 – PDF
Fuzzy Logic – article at Scholarpedia
Modeling With Words – article at Scholarpedia
Fuzzy logic – article at Stanford Encyclopedia of Philosophy
Fuzzy Math – Beginner level introduction to Fuzzy Logic
Fuzziness and exactness – Fuzziness in everyday life, science, religion, ethics, politics, etc.
Fuzzylite – A cross-platform, free open-source Fuzzy Logic Control Library written in C++. Also has a very useful graphic user interface in QT4.
More Flexible Machine Learning – MIT describes one application.
Semantic Similarity MIT provides details about fuzzy semantic similarity.
Logic in computer science
Non-classical logic
Probability interpretations | Fuzzy logic | ["Mathematics"] | 5,197 | ["Mathematical logic", "Logic in computer science", "Probability interpretations"] |
49,197 | https://en.wikipedia.org/wiki/Antiviral%20drug | Antiviral drugs are a class of medication used for treating viral infections. Most antivirals target specific viruses, while a broad-spectrum antiviral is effective against a wide range of viruses. Antiviral drugs are a class of antimicrobials, a larger group which also includes antibiotic (also termed antibacterial), antifungal and antiparasitic drugs, or antiviral drugs based on monoclonal antibodies. Most antivirals are considered relatively harmless to the host, and therefore can be used to treat infections. They should be distinguished from virucides, which are not medication but deactivate or destroy virus particles, either inside or outside the body. Natural virucides are produced by some plants such as eucalyptus and Australian tea trees.
Medical uses
Most of the antiviral drugs now available are designed to help deal with HIV, herpes viruses, the hepatitis B and C viruses, and influenza A and B viruses.
Viruses use the host's cells to replicate and this makes it difficult to find targets for the drug that would interfere with the virus without also harming the host organism's cells. Moreover, the major difficulty in developing vaccines and antiviral drugs is due to viral variation.
The emergence of antivirals is the product of a greatly expanded knowledge of the genetic and molecular function of organisms, allowing biomedical researchers to understand the structure and function of viruses, major advances in the techniques for finding new drugs, and the pressure placed on the medical profession to deal with the human immunodeficiency virus (HIV), the cause of acquired immunodeficiency syndrome (AIDS).
The first experimental antivirals were developed in the 1960s, mostly to deal with herpes viruses, and were found using traditional trial-and-error drug discovery methods. Researchers grew cultures of cells and infected them with the target virus. They then introduced into the cultures chemicals which they thought might inhibit viral activity and observed whether the level of virus in the cultures rose or fell. Chemicals that seemed to have an effect were selected for closer study.
This was a very time-consuming, hit-or-miss procedure, and in the absence of a good knowledge of how the target virus worked, it was not efficient in discovering effective antivirals which had few side effects. Only in the 1980s, when the full genetic sequences of viruses began to be unraveled, did researchers begin to learn how viruses worked in detail, and exactly what chemicals were needed to thwart their reproductive cycle.
Antiviral drug design
Antiviral targeting
The general idea behind modern antiviral drug design is to identify viral proteins, or parts of proteins, that can be disabled. These "targets" should generally be as unlike any proteins or parts of proteins in humans as possible, to reduce the likelihood of side effects and toxicity. The targets should also be common across many strains of a virus, or even among different species of virus in the same family, so a single drug will have broad effectiveness. For example, a researcher might target a critical enzyme synthesized by the virus, but not by the patient, that is common across strains, and see what can be done to interfere with its operation.
Once targets are identified, candidate drugs can be selected, either from drugs already known to have appropriate effects or by actually designing the candidate at the molecular level with a computer-aided design program.
The target proteins can be manufactured in the lab for testing with candidate treatments by inserting the gene that synthesizes the target protein into bacteria or other kinds of cells. The cells are then cultured for mass production of the protein, which can then be exposed to various treatment candidates and evaluated with "rapid screening" technologies.
Approaches by virus life cycle stage
Viruses consist of a genome and sometimes a few enzymes stored in a capsule made of protein (called a capsid), and sometimes covered with a lipid layer (sometimes called an 'envelope'). Viruses cannot reproduce on their own and instead propagate by subjugating a host cell to produce copies of themselves, thus producing the next generation.
Researchers working on such "rational drug design" strategies for developing antivirals have tried to attack viruses at every stage of their life cycles. Some species of mushrooms have been found to contain multiple antiviral chemicals with similar synergistic effects.
Compounds isolated from fruiting bodies and filtrates of various mushrooms have broad-spectrum antiviral activities, but successful production and availability of such compounds as frontline antiviral is a long way away.
Viral life cycles vary in their precise details depending on the type of virus, but they all share a general pattern:
Attachment to a host cell.
Release of viral genes and possibly enzymes into the host cell.
Replication of viral components using host-cell machinery.
Assembly of viral components into complete viral particles.
Release of viral particles to infect new host cells.
Before cell entry
One antiviral strategy is to interfere with the ability of a virus to infiltrate a target cell. The virus must go through a sequence of steps to do this, beginning with binding to a specific "receptor" molecule on the surface of the host cell and ending with the virus "uncoating" inside the cell and releasing its contents. Viruses that have a lipid envelope must also fuse their envelope with the target cell, or with a vesicle that transports them into the cell before they can uncoat.
This stage of viral replication can be inhibited in two ways:
Using agents which mimic the virus-associated protein (VAP) and bind to the cellular receptors. This may include VAP anti-idiotypic antibodies, natural ligands of the receptor and anti-receptor antibodies.
Using agents which mimic the cellular receptor and bind to the VAP. This includes anti-VAP antibodies, receptor anti-idiotypic antibodies, extraneous receptor and synthetic receptor mimics.
This strategy of designing drugs can be very expensive, and since the process of generating anti-idiotypic antibodies is partly trial and error, it can be a relatively slow process until an adequate molecule is produced.
Entry inhibitor
A very early stage of viral infection is viral entry, when the virus attaches to and enters the host cell. A number of "entry-inhibiting" or "entry-blocking" drugs are being developed to fight HIV. HIV most heavily targets a specific type of lymphocyte known as "helper T cells", and identifies these target cells through T-cell surface receptors designated "CD4" and "CCR5". Attempts to interfere with the binding of HIV with the CD4 receptor have failed to stop HIV from infecting helper T cells, but research continues on trying to interfere with the binding of HIV to the CCR5 receptor in hopes that it will be more effective.
HIV infects a cell through fusion with the cell membrane, which requires two different cellular molecular participants, CD4 and a chemokine receptor (differing depending on the cell type). Approaches to blocking this virus/cell fusion have shown some promise in preventing entry of the virus into a cell. At least one of these entry inhibitors—a biomimetic peptide called Enfuvirtide, or the brand name Fuzeon—has received FDA approval and has been in use for some time. Potentially, one of the benefits from the use of an effective entry-blocking or entry-inhibiting agent is that it potentially may not only prevent the spread of the virus within an infected individual but also the spread from an infected to an uninfected individual.
One possible advantage of the therapeutic approach of blocking viral entry (as opposed to the currently dominant approach of viral enzyme inhibition) is that it may prove more difficult for the virus to develop resistance to this therapy than for the virus to mutate or evolve its enzymatic protocols.
Uncoating inhibitors
Inhibitors of uncoating have also been investigated.
Amantadine and rimantadine have been introduced to combat influenza. These agents act on penetration and uncoating.
Pleconaril works against rhinoviruses, which cause the common cold, by blocking a pocket on the surface of the virus that controls the uncoating process. This pocket is similar in most strains of rhinoviruses and enteroviruses, which can cause diarrhea, meningitis, conjunctivitis, and encephalitis.
Some scientists are making the case that a vaccine against rhinoviruses, the predominant cause of the common cold, is achievable.
Vaccines that combine dozens of varieties of rhinovirus at once are effective in stimulating antiviral antibodies in mice and monkeys, researchers reported in Nature Communications in 2016.
Rhinoviruses are the most common cause of the common cold; other viruses such as respiratory syncytial virus, parainfluenza virus and adenoviruses can cause them too. Rhinoviruses also exacerbate asthma attacks. Although rhinoviruses come in many varieties, they do not drift to the same degree that influenza viruses do. A mixture of 50 inactivated rhinovirus types should be able to stimulate neutralizing antibodies against all of them to some degree.
During viral synthesis
A second approach is to target the processes that synthesize virus components after a virus invades a cell.
Reverse transcription
One way of doing this is to develop nucleotide or nucleoside analogues that look like the building blocks of RNA or DNA, but deactivate the enzymes that synthesize the RNA or DNA once the analogue is incorporated. This approach is more commonly associated with the inhibition of reverse transcriptase (RNA to DNA) than with "normal" transcriptase (DNA to RNA).
The first successful antiviral, aciclovir, is a nucleoside analogue, and is effective against herpesvirus infections. The first antiviral drug to be approved for treating HIV, zidovudine (AZT), is also a nucleoside analogue.
An improved knowledge of the action of reverse transcriptase has led to better nucleoside analogues to treat HIV infections. One of these drugs, lamivudine, has been approved to treat hepatitis B, which uses reverse transcriptase as part of its replication process. Researchers have gone further and developed inhibitors that do not look like nucleosides, but can still block reverse transcriptase.
Another target being considered for HIV antivirals is RNase H—a component of reverse transcriptase that splits the synthesized DNA from the original viral RNA.
Integrase
Another target is integrase, which integrates the synthesized DNA into the host cell genome. Examples of integrase inhibitors include raltegravir, elvitegravir, and dolutegravir.
Transcription
Once a virus genome becomes operational in a host cell, it then generates messenger RNA (mRNA) molecules that direct the synthesis of viral proteins. Production of mRNA is initiated by proteins known as transcription factors. Several antivirals are now being designed to block attachment of transcription factors to viral DNA.
Translation/antisense
Genomics has not only helped find targets for many antivirals, it has provided the basis for an entirely new type of drug, based on "antisense" molecules. These are segments of DNA or RNA designed as complementary molecules to critical sections of viral genomes; the binding of these antisense segments to their target sections blocks the operation of those genomes. A phosphorothioate antisense drug named fomivirsen has been introduced, used to treat opportunistic eye infections in AIDS patients caused by cytomegalovirus, and other antisense antivirals are in development. An antisense structural type that has proven especially valuable in research is morpholino antisense.
Morpholino oligos have been used to experimentally suppress many viral types:
caliciviruses
flaviviruses (including West Nile virus)
dengue
HCV
coronaviruses
Translation/ribozymes
Yet another antiviral technique inspired by genomics is a set of drugs based on ribozymes, which are enzymes that will cut apart viral RNA or DNA at selected sites. In their natural course, ribozymes are used as part of the viral manufacturing sequence, but these synthetic ribozymes are designed to cut RNA and DNA at sites that will disable them.
A ribozyme antiviral to deal with hepatitis C has been suggested, and ribozyme antivirals are being developed to deal with HIV. An interesting variation of this idea is the use of genetically modified cells that can produce custom-tailored ribozymes. This is part of a broader effort to create genetically modified cells that can be injected into a host to attack pathogens by generating specialized proteins that block viral replication at various phases of the viral life cycle.
Protein processing and targeting
Interference with post translational modifications or with targeting of viral proteins in the cell is also possible.
Protease inhibitors
Some viruses include an enzyme known as a protease that cuts viral protein chains apart so they can be assembled into their final configuration. HIV includes a protease, and so considerable research has been performed to find "protease inhibitors" to attack HIV at that phase of its life cycle. Protease inhibitors became available in the 1990s and have proven effective, though they can have unusual side effects, for example causing fat to build up in unusual places. Improved protease inhibitors are now in development.
Protease inhibitors have also been seen in nature. A protease inhibitor was isolated from the shiitake mushroom (Lentinus edodes). The presence of this may explain the Shiitake mushrooms' noted antiviral activity in vitro.
Long dsRNA helix targeting
Most viruses produce long dsRNA helices during transcription and replication. In contrast, uninfected mammalian cells generally produce dsRNA helices of fewer than 24 base pairs during transcription. DRACO (double-stranded RNA activated caspase oligomerizer) is a group of experimental antiviral drugs initially developed at the Massachusetts Institute of Technology. In cell culture, DRACO was reported to have broad-spectrum efficacy against many infectious viruses, including dengue flavivirus, Amapari and Tacaribe arenavirus, Guama bunyavirus, H1N1 influenza and rhinovirus, and was additionally found effective against influenza in vivo in weanling mice. It was reported to induce rapid apoptosis selectively in virus-infected mammalian cells, while leaving uninfected cells unharmed. DRACO effects cell death via one of the last steps in the apoptosis pathway in which complexes containing intracellular apoptosis signalling molecules simultaneously bind multiple procaspases. The procaspases transactivate via cleavage, activate additional caspases in the cascade, and cleave a variety of cellular proteins, thereby killing the cell.
Assembly
Rifampicin acts at the assembly phase.
Release phase
The final stage in the life cycle of a virus is the release of completed viruses from the host cell, and this step has also been targeted by antiviral drug developers. Two drugs named zanamivir (Relenza) and oseltamivir (Tamiflu) that have been recently introduced to treat influenza prevent the release of viral particles by blocking a molecule named neuraminidase that is found on the surface of flu viruses, and also seems to be constant across a wide range of flu strains.
Immune system stimulation
Rather than attacking viruses directly, a second category of tactics for fighting viruses involves encouraging the body's immune system to attack them. Some antivirals of this sort do not focus on a specific pathogen, instead stimulating the immune system to attack a range of pathogens.
Among the best-known of this class of drugs are interferons, which inhibit viral synthesis in infected cells. One form of human interferon, named "interferon alpha", is well-established as part of the standard treatment for hepatitis B and C, and other interferons are also being investigated as treatments for various diseases.
A more specific approach is to synthesize antibodies, protein molecules that can bind to a pathogen and mark it for attack by other elements of the immune system. Once researchers identify a particular target on the pathogen, they can synthesize quantities of identical "monoclonal" antibodies that bind to that target. A monoclonal drug is now being sold to help fight respiratory syncytial virus in babies, and antibodies purified from infected individuals are also used as a treatment for hepatitis B.
Antiviral drug resistance
Antiviral resistance can be defined as a decreased susceptibility to a drug caused by changes in viral genotypes. In cases of antiviral resistance, drugs have either diminished or no effectiveness against their target virus. The issue inevitably remains a major obstacle to antiviral therapy, as resistance has developed to almost all specific and effective antimicrobials, including antiviral agents.
The Centers for Disease Control and Prevention (CDC) recommends that everyone six months and older get a yearly vaccination to protect them from influenza A viruses (H1N1 and H3N2) and up to two influenza B viruses (depending on the vaccination). Comprehensive protection starts by ensuring vaccinations are current and complete. However, vaccines are preventative and are not generally used once a patient has been infected with a virus. Additionally, the availability of these vaccines can be limited for financial or locational reasons, which can undermine the effectiveness of herd immunity, making effective antivirals a necessity.
The three FDA-approved neuraminidase antiviral flu drugs available in the United States, recommended by the CDC, are oseltamivir (Tamiflu), zanamivir (Relenza), and peramivir (Rapivab). Influenza antiviral resistance often results from changes occurring in neuraminidase and hemagglutinin proteins on the viral surface. Currently, neuraminidase inhibitors (NAIs) are the most frequently prescribed antivirals because they are effective against both influenza A and B. However, antiviral resistance is known to develop if mutations to the neuraminidase proteins prevent NAI binding. This was seen in the His274Tyr mutation, which was responsible for oseltamivir resistance in H1N1 strains in 2009. The inability of NA inhibitors to bind to the virus allowed this strain of virus with the resistance mutation to spread due to natural selection. Furthermore, a study published in 2009 in Nature Biotechnology emphasized the urgent need for augmentation of oseltamivir stockpiles with additional antiviral drugs, including zanamivir. This finding was based on a performance evaluation of these drugs supposing the 2009 H1N1 'swine flu' neuraminidase (NA) were to acquire the oseltamivir-resistance (His274Tyr) mutation, which is currently widespread in seasonal H1N1 strains.
Origin of antiviral resistance
The genetic makeup of viruses is constantly changing, which can cause a virus to become resistant to currently available treatments. Viruses can become resistant through spontaneous or intermittent mechanisms throughout the course of an antiviral treatment. Immunocompromised patients, more often than immunocompetent patients, hospitalized with pneumonia are at the highest risk of developing oseltamivir resistance during treatment. Subsequent to exposure to someone else with the flu, those who received oseltamivir for "post-exposure prophylaxis" are also at higher risk of resistance.
The mechanisms for antiviral resistance development depend on the type of virus in question. RNA viruses such as hepatitis C and influenza A have high error rates during genome replication because RNA polymerases lack proofreading activity. RNA viruses also have small genome sizes that are typically less than 30 kb, which allow them to sustain a high frequency of mutations. DNA viruses, such as HPV and herpesvirus, hijack host cell replication machinery, which gives them proofreading capabilities during replication. DNA viruses are therefore less error prone, are generally less diverse, and are more slowly evolving than RNA viruses. In both cases, the likelihood of mutations is exacerbated by the speed with which viruses reproduce, which provides more opportunities for mutations to occur in successive replications. Billions of viruses are produced every day during the course of an infection, with each replication giving another chance for mutations that encode for resistance to occur.
Multiple strains of one virus can be present in the body at one time, and some of these strains may contain mutations that cause antiviral resistance. This effect, called the quasispecies model, results in immense variation in any given sample of virus, and gives the opportunity for natural selection to favor viral strains with the highest fitness every time the virus is spread to a new host. Recombination, the joining of two different viral variants, and reassortment, the swapping of viral gene segments among viruses in the same cell, also play a role in resistance, especially in influenza.
Antiviral resistance has been reported in antivirals for herpes, HIV, hepatitis B and C, and influenza, but antiviral resistance is a possibility for all viruses. Mechanisms of antiviral resistance vary between virus types.
Detection of antiviral resistance
National and international surveillance is performed by the CDC to determine effectiveness of the current FDA-approved antiviral flu drugs. Public health officials use this information to make current recommendations about the use of flu antiviral medications. WHO further recommends in-depth epidemiological investigations to control potential transmission of the resistant virus and prevent future progression. As novel treatments and detection techniques to antiviral resistance are enhanced so can the establishment of strategies to combat the inevitable emergence of antiviral resistance.
Treatment options for antiviral resistant pathogens
If a virus is not fully wiped out during a regimen of antivirals, treatment creates a bottleneck in the viral population that selects for resistance, and there is a chance that a resistant strain may repopulate the host. Viral treatment mechanisms must therefore account for the selection of resistant viruses.
The most commonly used method for treating resistant viruses is combination therapy, which uses multiple antivirals in one treatment regimen. This is thought to decrease the likelihood that one mutation could cause antiviral resistance, as the antivirals in the cocktail target different stages of the viral life cycle. This is frequently used in retroviruses like HIV, but a number of studies have demonstrated its effectiveness against influenza A, as well. Viruses can also be screened for resistance to drugs before treatment is started. This minimizes exposure to unnecessary antivirals and ensures that an effective medication is being used. This may improve patient outcomes and could help detect new resistance mutations during routine scanning for known mutants. However, this has not been consistently implemented in treatment facilities at this time.
Direct-acting antivirals
The term direct-acting antivirals (DAA) has long been associated with the combination of antiviral drugs used to treat hepatitis C infections. These are more effective than older treatments such as ribavirin (partially indirect-acting) and interferon (indirect-acting). The DAA drugs against hepatitis C are taken orally, as tablets, for 8 to 12 weeks. The treatment depends on the type or types (genotypes) of hepatitis C virus that are causing the infection. Both during and at the end of treatment, blood tests are used to monitor the effectiveness of the treatment and subsequent cure.
The DAA combination drugs used include:
Harvoni (sofosbuvir and ledipasvir)
Epclusa (sofosbuvir and velpatasvir)
Vosevi (sofosbuvir, velpatasvir, and voxilaprevir)
Zepatier (elbasvir and grazoprevir)
Mavyret (glecaprevir and pibrentasvir)
The United States Food and Drug Administration approved DAAs on the basis of a surrogate endpoint called sustained virological response (SVR). SVR is achieved in a patient when hepatitis C virus RNA remains undetectable 12–24 weeks after treatment ends. Whether through DAAs or older interferon-based regimens, SVR is associated with improved health outcomes and significantly decreased mortality. For those who already have advanced liver disease (including hepatocellular carcinoma), however, the benefits of achieving SVR may be less pronounced, though still substantial.
Despite its historical roots in hepatitis C research, the term "direct-acting antivirals" is becoming more broadly used to also include other anti-viral drugs with a direct viral target such as aciclovir (against herpes simplex virus), letermovir (against cytomegalovirus), or AZT (against human immunodeficiency virus). In this context it serves to distinguish these drugs from those with an indirect mechanism of action such as immune modulators like interferon alfa. This difference is of particular relevance for potential drug resistance mutation development.
Public policy
Use and distribution
Guidelines regarding viral diagnoses and treatments change frequently and limit quality care. Even when physicians diagnose older patients with influenza, use of antiviral treatment can be low. Provider knowledge of antiviral therapies can improve patient care, especially in geriatric medicine. Furthermore, in local health departments (LHDs) with access to antivirals, guidelines may be unclear, causing delays in treatment. With time-sensitive therapies, delays could lead to lack of treatment.
Overall, national guidelines, regarding infection control and management, standardize care and improve healthcare worker and patient safety. Guidelines, such as those provided by the Centers for Disease Control and Prevention (CDC) during the 2009 flu pandemic caused by the H1N1 virus, recommend, among other things, antiviral treatment regimens, clinical assessment algorithms for coordination of care, and antiviral chemoprophylaxis guidelines for exposed persons. Roles of pharmacists and pharmacies have also expanded to meet the needs of public during public health emergencies.
Stockpiling
Public Health Emergency Preparedness initiatives are managed by the CDC via the Office of Public Health Preparedness and Response. Funds aim to support communities in preparing for public health emergencies, including pandemic influenza. Also managed by the CDC, the Strategic National Stockpile (SNS) consists of bulk quantities of medicines and supplies for use during such emergencies. Antiviral stockpiles prepare for shortages of antiviral medications in cases of public health emergencies. During the H1N1 pandemic in 2009–2010, guidelines for SNS use by local health departments were unclear, revealing gaps in antiviral planning. For example, local health departments that received antivirals from the SNS did not have transparent guidance on the use of the treatments. The gap made it difficult to create plans and policies for their use and future availabilities, causing delays in treatment.
See also
Antiretroviral drug (especially HAART for HIV)
CRISPR-Cas13
Discovery and development of CCR5 receptor antagonists (for HIV)
Monoclonal antibody
List of antiviral drugs
Virucide
Antiprion drugs and Astemizole
Discovery and development of NS5A inhibitors
COVID-19 drug repurposing research
References
Biocides | Antiviral drug | ["Biology", "Environmental_science"] | 5,647 | ["Antiviral drugs", "Biocides", "Toxicology"] |
49,209 | https://en.wikipedia.org/wiki/Carburetor | A carburetor (also spelled carburettor or carburetter) is a device used by a gasoline internal combustion engine to control and mix air and fuel entering the engine. The primary method of adding fuel to the intake air is through the Venturi effect or Bernoulli's principle in the main metering circuit, though various other components are also used to provide extra fuel or air in specific circumstances.
Since the 1990s, carburetors have been largely replaced by fuel injection for cars and trucks, but carburetors are still used by some small engines (e.g. lawnmowers, generators, and concrete mixers) and motorcycles. In addition, they are still widely used on piston-engine–driven aircraft. Diesel engines have always used fuel injection instead of carburetors, as the compression-based combustion of diesel requires the greater precision and pressure of fuel injection.
Etymology
The term carburetor is derived from the verb carburet, which means "to combine with carbon", or, in particular, "to enrich a gas by combining it with carbon or hydrocarbons". Thus a carburetor mixes intake air with hydrocarbon-based fuel, such as petrol or autogas (LPG).
The name is spelled carburetor in American English and carburettor in British English. Colloquial abbreviations include carb in the UK and North America or carby in Australia.
Operating principle
Air from the atmosphere enters the carburetor (usually via an air cleaner), has fuel added within the carburetor, passes into the inlet manifold, then through the inlet valve(s), and finally into the combustion chamber. Most engines use a single carburetor shared between all of the cylinders, though some high-performance engines historically had multiple carburetors.
The simplest carburetors work on Bernoulli's principle: the static pressure of the intake air reduces at higher speeds compared to the pressure in the float chamber which is vented to ambient air pressure, with the pressure difference then forcing more fuel into the airstream. In most cases (except for the accelerator pump), the driver pressing the throttle pedal does not directly increase the fuel entering the engine. Instead, the airflow through the carburetor increases, which in turn increases the amount of fuel drawn into the intake mixture.
Bernoulli's principle applies (neglecting friction, viscosity, etc.) to both the air and the fuel: the pressure reduction in the air flow tends to be proportional to the square of the intake airspeed, while the fuel in the main jets leaves at a speed proportional to the square root of the pressure reduction, so the two flows tend to remain proportional to each other. If the pressure reduction is taken as arising from a change of cross-sectional area along the air flow, rather than from ambient pressure to the fuel entry point, the effect can be described as the Venturi effect, but that is simply the Bernoulli principle applied at two positions. The actual fuel and air flows are more complicated and need correction. This might be done variously at lower speeds or higher speeds, or over the whole range by a variable emulsion device that adds air to the fuel after the main jet(s). In SU and other (e.g. Zenith-Stromberg) variable-jet carburetors, it was mainly controlled by varying the jet size.
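The proportionality can be illustrated numerically. The following Python sketch is only an idealized model under the stated frictionless assumptions; the densities, speeds, and 4:1 area ratio are hypothetical round numbers, not data for any real carburetor:

from math import sqrt

RHO_AIR = 1.2      # kg/m^3, approximate air density at sea level
RHO_FUEL = 740.0   # kg/m^3, approximate gasoline density

def pressure_drop(v_throat, v_inlet):
    # Bernoulli, neglecting friction and viscosity: dp = 0.5 * rho * (v2^2 - v1^2)
    return 0.5 * RHO_AIR * (v_throat**2 - v_inlet**2)

def fuel_speed(dp):
    # Fuel leaves the jet at a speed proportional to the square root of dp.
    return sqrt(2.0 * dp / RHO_FUEL)

for v in (20.0, 40.0, 80.0):            # throat air speeds in m/s
    dp = pressure_drop(v, v / 4.0)      # assume a 4:1 inlet:throat area ratio
    print(v, round(dp, 1), round(fuel_speed(dp), 2))
# Doubling the air speed quadruples dp and doubles the fuel speed,
# so fuel flow stays roughly proportional to air flow.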
The orientation of the carburetor is a key design consideration. Older engines used updraft carburetors, where the air enters from below the carburetor and exits through the top. From the late 1930s, downdraft carburetors become more commonly used (especially in the United States), along with side draft carburetors (especially in Europe).
Fuel circuits
Main metering circuit
The main metering circuit usually consists of one or more barrels, each narrowing to a throat where the air reaches its highest speed before widening again, forming a venturi. Fuel is introduced into the air stream through small tubes leading from the main jet, driven by the difference between the pressure at the venturi and that in the float bowl.
Downstream of the venturi is a throttle (usually in the form of a butterfly valve) which is used to control the amount of air entering the carburetor. In a car, this throttle is usually mechanically connected to the vehicle's throttle pedal, which varies engine speed.
At lesser throttle openings, the air speed through the venturi may be insufficient to maintain the fuel flow, so then the fuel may be supplied by the carburetor's idle and off-idle circuits.
At greater throttle openings, the speed of air passing through the venturi increases, which lowers the pressure of the air and draws more fuel into the airstream. At the same time, the reduced manifold vacuum results in less fuel flow through the idle and off-idle circuits.
Choke
During cold weather fuel vaporizes less readily and tends to condense on the walls of the intake manifold, starving the cylinders of fuel and making cold starts difficult. Additional fuel is required (for a given amount of air) to start and run the engine until it warms up, provided by a choke valve.
While the engine is warming up the choke valve is partially closed, restricting the flow of air at the entrance to the carburetor. This increases the vacuum in the main metering circuit, causing more fuel to be supplied to the engine via the main jets. Prior to the late 1950s the choke was manually operated by the driver, often using a lever or knob on the dashboard. Since then, automatic chokes became more commonplace. These either use a bimetallic thermostat to automatically regulate the choke based on the temperature of the engine's coolant liquid, an electrical resistance heater to do so, or air drawn through a tube connected to an engine exhaust source. A choke left closed after the engine has warmed up increases the engine's fuel consumption and exhaust gas emissions, and causes the engine to run rough and lack power due to an over-rich fuel mixture.
However, excessive fuel can flood an engine and prevent it from starting. To remove the excess fuel, many carburetors with automatic chokes allow the choke to be held open (by manually depressing the accelerator pedal to the floor and briefly holding it there while cranking the starter) to allow extra air into the engine until the excess fuel is cleared out.
Another method used by carburetors to improve the operation of a cold engine is a fast idle cam, which is connected to the choke and prevents the throttle from closing fully while the choke is in operation. The resulting increase in idle speed provides a more stable idle for a cold engine (by better atomizing the cold fuel) and helps the engine warm up quicker.
Idle circuit
The idle circuit is the system within a carburetor that meters fuel when the engine is running at low RPM. It is generally activated by vacuum near the (nearly closed) throttle plate, where the air speed increases to create a low-pressure area in the idle passage/port, causing fuel to flow through the idle jet. The idle jet is set at some constant value by the carburetor manufacturer, thus flowing a specified amount of fuel.
Off-idle circuit
Many carburetors use an off-idle circuit, which includes an additional fuel jet which is briefly used as the throttle starts to open. This jet is located in a low-pressure area caused by the high air speed near the (partly closed) throttle. The additional fuel it provides is used to compensate for the reduced vacuum that occurs when the throttle is opened, thus smoothing the transition from the idle circuit to the main metering circuit.
Power valve
In a four-stroke engine it is often desirable to provide extra fuel to the engine at high loads (to increase the power output and reduce engine knocking). A 'power valve', which is a spring-loaded valve in the carburetor that is held shut by engine vacuum, is often used to do so. As the airflow through the carburetor increases the reduced manifold vacuum pulls the power valve open, allowing more fuel into the main metering circuit.
In a two-stroke engine, the carburetor power valve operates in the opposite manner: in most circumstances the valve allows extra fuel into the engine, then at a certain engine RPM it closes to reduce the fuel entering the engine. This is done in order to extend the engine's maximum RPM, since many two-stroke engines can temporarily achieve higher RPM with a leaner air-fuel ratio.
This is not to be confused with the unrelated exhaust power valve arrangements used on two-stroke engines.
Metering rod / step-up rod
A metering rod or step-up rod system is sometimes used as an alternative to a power valve in a four-stroke engine in order to supply extra fuel at high loads. One end of each rod is tapered and sits in a main metering jet, acting as a valve for fuel flow through the jet. At high engine loads, the rods are lifted away from the jets (either mechanically or using manifold vacuum), increasing the volume of fuel flow through the jets. These systems have been used by the Rochester Quadrajet and by Carter carburetors from the 1950s.
Accelerator pump
While the main metering circuit can adequately supply fuel to the engine in steady-state conditions, the inertia of fuel (being higher than that of air) causes a temporary shortfall as the throttle is opened. Therefore, an accelerator pump is often used to briefly provide extra fuel as the throttle is opened. When the driver presses the throttle pedal, a small piston or diaphragm pump injects extra fuel directly into the carburetor throat.
The accelerator pump can also be used to "prime" an engine with extra fuel prior to attempting a cold start.
Fuel supply
Float chamber
In order to ensure an adequate supply at all times, carburetors include a reservoir of fuel, called a "float chamber" or "float bowl". Fuel is delivered to the float chamber by a fuel pump, or by gravity if the fuel tank is located higher than the carburetor. A floating inlet valve regulates the fuel entering the float chamber, assuring a constant level. Some small engines dispense with a float chamber and instead use a fuel tank located just below the carburetor, relying on the intake suction to draw up the fuel.
Unlike in a fuel injected engine, the fuel system in a carbureted engine is not pressurized. For engines where the intake air travelling through the carburetor is pressurized (such as where the carburetor is downstream of a supercharger) the entire carburetor must be contained in an airtight pressurized box to operate. However, this is not necessary where the carburetor is upstream of the supercharger.
Problems of fuel boiling and vapor lock can occur in carbureted engines, especially in hotter climates. Since the float chamber is located close to the engine, heat from the engine (including for several hours after the engine is shut off) can cause the fuel to heat up to the point of vaporization. This causes air bubbles in the fuel (similar to the air bubbles that necessitate brake bleeding), which prevents the flow of fuel and is known as 'vapor lock'.
To avoid pressurizing the float chamber, vent tubes allow ambient air to enter and exit it. These tubes may instead open into the carburetor air flow upstream of the point where the fuel enters, so that the metering pressure difference is produced by the venturi effect alone rather than being referenced to ambient air pressure.
Diaphragm chamber
If an engine must be operated when the carburetor is not in an upright orientation (for example in a chainsaw or an airplane), a float chamber and a gravity-operated float valve would not be suitable. Instead, a diaphragm chamber is typically used. This consists of a flexible diaphragm on one side of the fuel chamber, connected to a needle valve which regulates the fuel entering the chamber. As the flow of air through the carburetor (controlled by the throttle butterfly valve) decreases, the diaphragm moves inward, closing the needle valve to admit less fuel; as the airflow increases, the diaphragm moves outward, opening the needle valve to admit more fuel and allowing the engine to generate more power. A balanced state is reached which creates a steady fuel reservoir level that remains constant in any orientation.
Other components
Other components that have been used on carburetors include:
Air bleeds, which admit air into various portions of the fuel passages to premix air and fuel (aiding vaporization) and to correct the air/fuel ratio over a large range; this arrangement is typically referred to as the emulsion system.
Fuel flow restrictors in aircraft engines, to prevent fuel starvation during inverted flight.
Heated vaporizers to assist with the atomization of the fuel, particularly for engines using kerosene, tractor vaporizing oil or in petrol-paraffin engines
Early fuel evaporators
Feedback carburetors, which adjusted the fuel/air mixture in response to signals from an oxygen sensor, in order to allow a catalytic converter to be used
Constant vacuum carburetors (also called variable choke carburetors), in which a piston or slide varies the effective venturi opening in response to engine vacuum, keeping the air velocity past the fuel jet roughly constant across the engine's operating range.
Constant velocity carburetors use a variable opening in the intake air stream after movement of the throttle plate from the accelerator pedal. This variable opening is controlled by pressure/vacuum at the variable opening itself. This pressure-controlled opening provides relatively even intake pressure throughout the engine's speed and load ranges.
Two-barrel and four-barrel designs
The basic design for a carburetor consists of a single venturi (main metering circuit), though designs with two or four venturi (two-barrel and four-barrel carburetors respectively) are also quite commonplace. Typically the barrels consist of "primary" barrel(s) used for lower load situations and secondary barrel(s) activating when required to provide additional air/fuel at higher loads. The primary and secondary venturi are often sized differently and incorporate different features to suit the situations in which they are used.
Many four-barrel carburetors use two primary and two secondary barrels. This arrangement was commonly used on V8 engines, conserving fuel at low engine speeds while still affording an adequate supply at high speeds.
The use of multiple carburetors (e.g., a carburetor for each cylinder or pair of cylinders) also results in the intake air being drawn through multiple venturi. Some high-performance engines have used multiple two-barrel or four-barrel carburetors, for example six two-barrel carburetors on Ferrari V12s.
History
In 1826, American engineer Samuel Morey received a patent for a "gas or vapor engine", which had a carburetor that mixed turpentine and air. The design did not reach production. In 1875 German engineer Siegfried Marcus produced a car powered by the first petrol engine (which also debuted the first magneto ignition system). Karl Benz introduced his single-cylinder four-stroke powered Benz Patent-Motorwagen in 1885.
All three of these engines used surface carburetors, which operated by moving air across the top of a vessel containing the fuel.
The first float-fed carburetor design, which used an atomizer nozzle, was introduced by German engineers Wilhelm Maybach and Gottlieb Daimler in their 1885 Grandfather Clock engine. The Butler Petrol Cycle car—built in England in 1888—also used a float-fed carburetor.
The first carburetor for a stationary engine was patented in 1893 by Hungarian engineers János Csonka and Donát Bánki.
The first four-barrel carburetors were the Carter Carburetor WCFB and the identical Rochester 4GC, introduced in various General Motors models for 1952. Oldsmobile referred to the new carburetor as the "Quadri-Jet" (original spelling), while Buick called it the "Airpower".
In the United States, carburetors were the common method of fuel delivery for most US-made gasoline (petrol) engines until the late 1980s, when fuel injection became the preferred method. One of the last motorsport users of carburetors was NASCAR, which switched to electronic fuel injection after the 2011 Sprint Cup series. NASCAR still uses the four-barrel carburetor in the NASCAR Xfinity Series.
In Europe, carburetors were largely replaced by fuel injection in the late 1980s, although fuel injection had been increasingly used in luxury cars and sports cars since the 1970s. EEC legislation required all vehicles sold and produced in member countries to have a catalytic converter after December 1992. This legislation had been in the pipeline for some time, with many cars becoming available with catalytic converters or fuel injection from around 1990.
Icing in aircraft engine carburetors
A significant concern for aircraft engines is the formation of ice inside the carburetor. The temperature of air within the carburetor can be reduced by up to 40 °C (72 °F), due to a combination of the reduced air pressure in the venturi and the latent heat of the evaporating fuel. The conditions during the descent to landing are particularly conducive to icing, since the engine is run at idle for a prolonged period with the throttle closed. Icing can also occur in cruise conditions at altitude.
A carburetor heat system is often used to prevent icing. This system consists of a secondary air intake which passes around the exhaust, in order to heat the air before it enters the carburetor. Typically, the system is operated by the pilot manually switching the intake air to travel via the heated intake path as required. The carburetor heat system reduces the power output (due to the lower density of heated air) and causes the intake air filter to be bypassed, therefore the system is only used when there is a risk of icing.
If the engine is operating at idle RPM, another method to prevent icing is to periodically open the throttle, which increases the air temperature within the carburetor.
Carburetor icing also occurs in other applications, and various methods have been employed to address it. On inline engines, the intake and exhaust manifolds are on the same side of the cylinder head, so heat from the exhaust is used to warm the intake manifold and, in turn, the carburetor. On V-configuration engines, exhaust gases were directed from one cylinder head through a crossover passage in the intake manifold to the other head. One method of regulating the exhaust flow through the crossover for intake warming was a weighted eccentric butterfly valve called a heat riser, which remained closed at idle and opened at higher exhaust flow. Some vehicles used a heat stove around the exhaust manifold, connected to the air-filter intake via tubing to supply warmed air. A vacuum-controlled butterfly valve in the pre-heat tube on the intake horn of the air cleaner would open to admit cooler air as engine load increased.
See also
Bernoulli's principle
Fuel injection
Humidifier
List of auto parts
List of carburetor manufacturers
Venturi effect
References
German inventions
Carburettors
Engine fuel system technology
Engine components | Carburetor | [
"Technology"
] | 4,045 | [
"Engine components",
"Engines"
] |
49,234 | https://en.wikipedia.org/wiki/Chromatic%20scale | The chromatic scale (or twelve-tone scale) is a set of twelve pitches (more completely, pitch classes) used in tonal music, with notes separated by the interval of a semitone. Chromatic instruments, such as the piano, are made to produce the chromatic scale, while other instruments capable of continuously variable pitch, such as the trombone and violin, can also produce microtones, or notes between those available on a piano.
Most music uses subsets of the chromatic scale such as diatonic scales. While the chromatic scale is fundamental in western music theory, it is seldom directly used in its entirety in musical compositions or improvisation.
Definition
The chromatic scale is a musical scale with twelve pitches, each a semitone, also known as a half-step, above or below its adjacent pitches. As a result, in 12-tone equal temperament (the most common tuning in Western music), the chromatic scale covers all 12 of the available pitches. Thus, there is only one chromatic scale. The ratio of the frequency of one note in the scale to that of the preceding note is the twelfth root of two, 2^(1/12) (approximately 1.05946).
In equal temperament, all the semitones have the same size (100 cents), and there are twelve semitones in an octave (1200 cents). As a result, the notes of an equal-tempered chromatic scale are equally-spaced.
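For illustration, the equal-tempered scale's frequencies can be generated directly from this ratio; a minimal Python sketch (the A4 = 440 Hz reference and the note spellings are conventional choices, not part of the definition):

# One octave of the equal-tempered chromatic scale from A4 = 440 Hz.
NAMES = ["A", "A#/Bb", "B", "C", "C#/Db", "D",
         "D#/Eb", "E", "F", "F#/Gb", "G", "G#/Ab"]
SEMITONE = 2 ** (1 / 12)   # frequency ratio between adjacent notes
freq = 440.0
for name in NAMES:
    print(f"{name:6s} {freq:8.2f} Hz")
    freq *= SEMITONE       # each step multiplies the frequency by 2^(1/12)

Twelve such steps multiply the frequency by exactly 2, returning to the starting pitch class one octave higher.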
The ascending and descending chromatic scale is shown below.
Notation
The chromatic scale has no set enharmonic spelling that is always used. Its spelling is, however, often dependent upon major or minor key signatures and whether the scale is ascending or descending. In general, the chromatic scale is usually notated with sharp signs when ascending and flat signs when descending. It is also notated so that no scale degree is used more than twice in succession (for instance, G♭ – G♮ – G♯).
Similarly, some notes of the chromatic scale have enharmonic equivalents in solfège. The rising scale is Do, Di, Re, Ri, Mi, Fa, Fi, Sol, Si, La, Li, Ti, and the descending is Ti, Te/Ta, La, Le/Lo, Sol, Se, Fa, Mi, Me/Ma, Re, Ra, Do. However, once a note is assigned the number 0, octave equivalence allows the chromatic scale to be indicated unambiguously by the numbers 0–11 modulo twelve. Thus two perfect fifths above 0 give the sequence 0–7–2. Tone rows, orderings used in the twelve-tone technique, are often considered this way due to the increased ease of comparing inverse intervals and forms (inversional equivalence).
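The mod-12 arithmetic can be checked directly; a small Python sketch (nothing here is specific to any library):

# Pitch classes are integers mod 12; a perfect fifth spans 7 semitones.
start, fifth = 0, 7
print((start + fifth) % 12)       # 7
print((start + 2 * fifth) % 12)   # 2 -- hence the sequence 0-7-2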
Pitch-rational tunings
Pythagorean
The most common conception of the chromatic scale before the 13th century was the Pythagorean chromatic scale. Due to a different tuning technique, the twelve semitones in this scale have two slightly different sizes. Thus, the scale is not perfectly symmetric. Many other tuning systems, developed in the ensuing centuries, share a similar asymmetry.
In Pythagorean tuning (i.e. 3-limit just intonation) the chromatic scale is tuned as follows, in perfect fifths from G♭ to A♯ centered on D (in bold) (G♭–D♭–A♭–E♭–B♭–F–C–G–D–A–E–B–F♯–C♯–G♯–D♯–A♯), with sharps higher than their enharmonic flats (cents rounded to one decimal):
{| class="wikitable" style="text-align: center"
|-
!width=4%|
!width=4%| C
!width=4%| D
!width=4%| C
!width=4%| D
!width=4%| E
!width=4%| D
!width=4%| E
!width=4%| F
!width=4%| G
!width=4%| F
!width=4%| G
!width=4%| A
!width=4%| G
!width=4%| A
!width=4%| B
!width=4%| A
!width=4%| B
!width=4%| C
|-
!Pitchratio
| 1 || || || || || || || || || || || || || || || || || 2
|-
!Cents
| 0 || 90.2 || 113.7 || 203.9 || 294.1 || 317.6 || 407.8 || 498 || 588.3 || 611.7 || 702 || 792.2 || 815.6 || 905.9 || 996.1 || 1019.6 || 1109.8 || 1200
|}
where 256/243 is a diatonic semitone (the Pythagorean limma) and 2187/2048 is a chromatic semitone (the Pythagorean apotome).
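The cents figures in the table follow from the frequency ratios via cents(r) = 1200 · log2(r); a quick Python check for the two semitone sizes (the helper name cents is my own):

import math

def cents(ratio):
    # Interval size in cents: 1200 * log2(ratio).
    return 1200 * math.log2(ratio)

print(round(cents(256 / 243), 1))    # 90.2  -- Pythagorean limma
print(round(cents(2187 / 2048), 1))  # 113.7 -- Pythagorean apotome
print(round(cents(3 / 2), 1))        # 702.0 -- the generating perfect fifth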
The chromatic scale in Pythagorean tuning can be tempered to the 17-EDO tuning (P5 = 10 steps = 705.88 cents).
Just intonation
In 5-limit just intonation the chromatic scale, Ptolemy's intense chromatic scale, is as follows, with flats higher than their enharmonic sharps, and new notes between E–F and B–C (cents rounded to one decimal):
{| class="wikitable" style="text-align: center"
|-
!
! C !! C !! D !! D !! D !! E !! E !! E/F !! F !! F !! G !! G !! G !! A !! A !! A !! B !! B !! B/C !! C
|-
!Pitch ratio
| 1 || || || || || || || || || || || || || || || || || || || 2
|-
!Cents
| 0 || 70.7 || 111.7 || 203.9 || 274.6 || 315.6 || 386.3 || 427.4 || 498 || 568.7 || 631.3 || 702 || 772.6 || 813.7 || 884.4 || 955 || 1017.6 || 1088.3 || 1129.3 || 1200
|}
The fractions 9/8 and 10/9, 6/5 and 32/27, 5/4 and 81/64, 4/3 and 27/20, and many other pairs are interchangeable, as 81/80 (the syntonic comma) is tempered out.
Just intonation tuning can be approximated by 19-EDO tuning (P5 = 11 steps = 694.74 cents).
Non-Western cultures
The ancient Chinese chromatic scale is called Shí-èr-lǜ. However, "it should not be imagined that this gamut ever functioned as a scale, and it is erroneous to refer to the 'Chinese chromatic scale', as some Western writers have done. The series of twelve notes known as the twelve lü were simply a series of fundamental notes from which scales could be constructed." However, "from the standpoint of tonal music [the chromatic scale] is not an independent scale, but derives from the diatonic scale," making the Western chromatic scale a gamut of fundamental notes from which scales could be constructed as well.
See also
Atonality
Chromaticism
Twelve-tone technique
20th century music#Classical
"All Through the Night" (Cole Porter song)
Notes
Sources
Further reading
Hewitt, Michael. 27 January 2013. Musical Scales of the World. The Note Tree.
External links
The Chromatic Scale arranged for guitar in several fingerings. (Formatted for easy printing)
The 12 golden notes of music
Chromatic Scale – Analysis
Chromaticism
Musical scales
Post-tonal music theory
Musical symmetry
Hemitonic scales
Tritonic scales | Chromatic scale | [
"Physics"
] | 1,681 | [
"Symmetry",
"Musical symmetry"
] |
49,243 | https://en.wikipedia.org/wiki/The%20Planets | The Planets, Op. 32, is a seven-movement orchestral suite by the English composer Gustav Holst, written between 1914 and 1917. In the last movement the orchestra is joined by a wordless female chorus. Each movement of the suite is named after a planet of the Solar System and its supposed astrological character.
The premiere of The Planets was at the Queen's Hall, London, on 29 September 1918, conducted by Holst's friend Adrian Boult before an invited audience of about 250 people. Three concerts at which movements from the suite were played were given in 1919 and early 1920. The first complete performance at a public concert was given at the Queen's Hall on 15 November 1920 by the London Symphony Orchestra conducted by Albert Coates.
The innovative nature of Holst's music caused some initial hostility among a minority of critics, but the suite quickly became and has remained popular, influential and widely performed. The composer conducted two recordings of the work, and it has been recorded at least 80 times subsequently by conductors, choirs and orchestras from the UK and internationally.
Background and composition
The Planets was composed over nearly three years, between 1914 and 1917. The work had its origins in March and April 1913, when Gustav Holst and his friend and benefactor Balfour Gardiner holidayed in Spain with the composer Arnold Bax and his brother, the author Clifford Bax. A discussion about astrology piqued Holst's interest in the subject. Clifford Bax later commented that Holst became "a remarkably skilled interpreter of horoscopes". Shortly after the holiday Holst wrote to a friend: "I only study things that suggest music to me. That's why I worried at Sanskrit. Then recently the character of each planet suggested lots to me, and I have been studying astrology fairly closely". He told Clifford Bax in 1926 that The Planets:
Imogen Holst, the composer's daughter, wrote that her father had difficulty with large-scale orchestral structures such as symphonies, and the idea of a suite with a separate character for each movement was an inspiration to him. Holst's biographer Michael Short and the musicologist Richard Greene both think it likely that another inspiration for the composer to write a suite for large orchestra was the example of Schoenberg's Five Pieces for Orchestra. That suite had been performed in London in 1912 and again in 1914; Holst was at one of the performances, and he is known to have owned a copy of the score.
Holst described The Planets as "a series of mood pictures", acting as "foils to one another", with "very little contrast in any one of them". Short writes that some of the characteristics the composer attributed to the planets may have been suggested by Alan Leo's booklet What Is a Horoscope?, which he was reading at the time. Holst took the title of two movements – "Mercury, the Winged Messenger" and "Neptune, the Mystic" – from Leo's books. But although astrology was Holst's starting point, he arranged the planets to suit his own plan:
In an early sketch for the suite Holst listed Mercury as "no. 1", which Greene suggests raises the possibility that the composer's first idea was simply to depict the planets in the obvious order, from nearest the sun to the farthest. "However, opening with the more disturbing character of Mars allows a more dramatic and compelling working out of the musical material".
Holst had a heavy workload as head of music at St Paul's Girls' School, Hammersmith, and director of music at Morley College, and had limited time for composing. Imogen Holst wrote, "Weekends and holidays were the only times when he could really get on with his own work, which is why it took him over two years to finish The Planets." She added that Holst's chronic neuritis in his right arm was troubling him considerably and he would have found it impossible to complete the 198 pages of the large full score without the help of two colleagues at St Paul's, Vally Lasker and Nora Day, whom he called his "scribes".
The first movement to be written was Mars in mid-1914, followed by Venus and Jupiter in the latter part of the year, Saturn and Uranus in mid-1915, Neptune later in 1915 and Mercury in early 1916. Holst completed the orchestration during 1917.
First performances
The premiere of The Planets, conducted at Holst's request by Adrian Boult, was held at short notice on 29 September 1918, during the last weeks of the First World War, in the Queen's Hall with the financial support of Gardiner. It was hastily rehearsed; the musicians of the Queen's Hall Orchestra first saw the complicated music only two hours before the performance, and the choir for Neptune was recruited from Holst's students at Morley College and St Paul's Girls' School. It was a comparatively intimate affair, attended by around 250 invited associates, but Holst regarded it as the public premiere, inscribing Boult's copy of the score, "This copy is the property of Adrian Boult who first caused the Planets to shine in public and thereby earned the gratitude of Gustav Holst."
At a Royal Philharmonic Society concert at the Queen's Hall on 27 February 1919 conducted by Boult, five of the seven movements were played in the order Mars, Mercury, Saturn, Uranus, and Jupiter. It was Boult's decision not to play all seven movements at this concert. Although Holst would have liked the suite to be played complete, Boult's view was that when the public were being presented with a completely new language of this kind, "half an hour of it was as much as they could take in". Imogen Holst recalled that her father "hated incomplete performances of The Planets, though on several occasions he had to agree to conduct three or four movements at Queen's Hall concerts. He particularly disliked having to finish with Jupiter, to make a 'happy ending', for, as he himself said, 'in the real world the end is not happy at all'".
At a Queen's Hall concert on 22 November 1919, Holst conducted Venus, Mercury and Jupiter. There was another incomplete public performance, in Birmingham, on 10 October 1920, with five movements (Mars, Venus, Mercury, Saturn and Jupiter), conducted by the composer.
The first complete performance of the suite at a public concert was on 15 November 1920; the London Symphony Orchestra was conducted by Albert Coates. The first complete performance conducted by the composer was on 13 October 1923, with the Queen's Hall Orchestra.
Instrumentation
The work is scored for a large orchestra. Holst's fellow composer Ralph Vaughan Williams wrote in 1920, "Holst uses a very large orchestra in the Planets not to make his score look impressive, but because he needs the extra tone colour and knows how to use it". The score calls for the following instrumentation. The movements vary in the combinations of instruments used.
Woodwinds: four flutes (third doubling first piccolo and fourth doubling second piccolo and "bass flute in G", actually an alto flute), three oboes (third doubling bass oboe), one cor anglais, three clarinets in B and A, one bass clarinet in B, three bassoons, one contrabassoon
Brass: six horns in F, four trumpets in C, two trombones, one bass trombone, one tenor tuba in B (often played on a euphonium), one tuba
Percussion: six timpani (two players); triangle, snare drum, tambourine, cymbals, bass drum, gong, tubular bells, glockenspiel, xylophone
Keyboards: organ, celesta
Strings: two harps, violins i, ii, violas, cellos, double basses (Low C appears in the score)
In Neptune, two three-part women's choruses (each comprising two soprano sections and one alto section) located in an adjoining room which is to be screened from the audience are added.
Source: Published score.
Structure
1. Mars, the Bringer of War
Mars is marked allegro and is in a relentless ostinato for most of its duration. It opens quietly, the first two bars played by percussion, harp and col legno strings. The music builds to a quadruple-forte, dissonant climax. Although Mars is often thought to portray the horrors of mechanised warfare, it was completed before the First World War started. The composer Colin Matthews writes that for Holst, Mars would have been "an experiment in rhythm and clashing keys", and its violence in performance "may have surprised him as much as it galvanised its first audiences". Short comments, "harmonic dissonances abound, often resulting from clashes between moving chords and static pedal-points", which he compares to a similar effect at the end of Stravinsky's The Firebird, and adds that although battle music had been written before, notably by Richard Strauss in Ein Heldenleben, "it had never expressed such violence and sheer terror".
2. Venus, the Bringer of Peace
The second movement begins adagio. According to Imogen Holst, Venus "has to try and bring the right answer to Mars". The movement opens with a solo horn theme answered quietly by the flutes and oboes. A second theme is given to solo violin. The music proceeds tranquilly with oscillating chords from flutes and harps, with decoration from the celesta. Between the opening adagio and the central largo there is a flowing andante section with a violin melody (solo then tutti) accompanied by gentle syncopation in the woodwind. The oboe solo in the central largo is one of the last romantic melodies Holst allowed himself before turning to a more austere manner in later works. Leo called the planet "the most fortunate star under which to be born"; Short calls Holst's Venus "one of the most sublime evocations of peace in music".
3. Mercury, the Winged Messenger
Mercury is marked vivace throughout. The composer R. O. Morris thought it the nearest of the movements to "the domain of programme music pure and simple ... it is essentially pictorial in idea. Mercury is a mere activity whose character is not defined". This movement, the last of the seven to be written, contains Holst's first experiments with bitonality. He juxtaposes melodic fragments in B♭ major and E major, in a fast-moving scherzo. Solo violin, high-pitched harp, flute and glockenspiel are prominently featured. It is the shortest of the seven movements, typically taking between 3½ and 4 minutes in performance.
4. Jupiter, the Bringer of Jollity
In this movement Holst portrays Jupiter's supposedly characteristic "abundance of life and vitality" with music that is buoyant and exuberant. Nobility and generosity are allegedly characteristics of those born under Jupiter, and in the slower middle section Holst provides a broad tune embodying those traits. In the view of Imogen Holst, it has been compromised by its later use as the melody for a solemn patriotic hymn, "I Vow to Thee, My Country"; the musicologist Lewis Foreman comments that the composer did not think of it in those terms, as shown by his own recordings of the movement. The opening section of the movement is marked allegro giocoso. The second theme, at the same tempo, is in the same metre as the broad melody of the middle section, marked andante maestoso, which Holst marks to be taken at half the speed of the opening section. The opening section returns and, after a reappearance of the maestoso tune – its expected final cadence unresolved, as in its first appearance – the movement ends with a triple-forte quaver chord for the full orchestra.
5. Saturn, the Bringer of Old Age
Saturn was Holst's favourite movement of the suite. Matthews describes it as "a slow processional which rises to a frightening climax before fading away as if into the outer reaches of space". The movement opens as a quiet adagio, and the basic pace remains slow throughout, with short bursts of animato in the first part and a switch to andante in the later section. Apart from the timpani, no percussion is used in this movement except for tubular bells at climactic points. At the beginning, flutes, bassoons and harps play a theme suggesting a ticking clock. A solemn melody is introduced by the trombones (Holst's own main instrument) and taken up by the full orchestra. A development of the ticking theme leads to a clangorous triple-forte climax, after which the music dies away and ends quietly.
6. Uranus, the Magician
Matthews describes the character of the movement as that of "a clumsy dance, which gradually gets more and more out of hand (not unlike Dukas's Sorcerer's Apprentice) until, with what seems like a magic wand, all is abruptly swept away into the far distance". The movement, which begins with what Short calls "a tremendous four-note brass motif", is marked allegro. The music proceeds in "a series of merry pranks" with occasional interjections, building to a quadruple-forte climax with a prominent organ glissando, after which the music suddenly drops to a pianissimo lento before alternating quick and slow sections bring the movement to its pianissimo conclusion.
7. Neptune, the Mystic
The music of the last movement is quiet throughout, in a swaying, irregular metre, opening with flutes joined by piccolo and oboes, with harps and celesta prominent later. Holst makes much use of dissonance in this movement. Before the premiere his colleague Geoffrey Toye said that a bar where the brass play chords of E minor and G minor together was "going to sound frightful". Holst agreed, and said it had made him shudder when he wrote it down but, "What are you to do when they come like that?" As the movement develops, the orchestra is joined by an offstage female chorus singing a soft wordless line: this was unusual in orchestral works at the time, although Debussy had used the same device in his Nocturnes (1900). The orchestra falls silent and the unaccompanied voices bring the work to a pianissimo conclusion in an uncertain tonality, as a door between the singers and the auditorium is gradually closed.
Reception
Imogen Holst wrote of the 1918 premiere under Boult:
When the music was first introduced to the general public in February 1919, critical opinion was divided. Greene prints a summary of reviews of the first four public performances of the suite (or movements from it) in February and November 1919 and October and November 1920. Positive reviews are recorded in 28 of the 37 papers, magazines and journals cited. A small minority of reviewers were particularly hostile, among them those of The Globe ("Noisy and pretentious"); The Sunday Times ("Pompous, noisy and unalluring"); and The Times ("a great disappointment … elaborately contrived and painful to hear"). The critic in The Saturday Review wrote that Holst evidently regarded the planets "as objectionable nuisances that he would oust from our orbit if he could".
The Times rapidly changed its mind; in July 1919 it called Holst the most intriguing of his compeers and commented, "The Planets still leaves us gasping"; after hearing Holst conduct three of the movements in November 1919 the paper's critic declared the piece "the first music by an Englishman we have heard for some time which is neither conventional nor negligible", and by the time of Holst's death in 1934 the paper's assessment of the piece was "Holst's greatest work":
The Sunday Times, too, quickly changed its line. In 1920 its new music critic, Ernest Newman, said that Holst could do "easily, without a fuss" what some other composers could only do "with an effort and a smirk", and that in The Planets he showed "one of the subtlest and most original minds of our time. It begins working at a musical problem where most other minds would leave off". Newman compared Holst's harmonic innovations to those of Stravinsky, to the latter's disadvantage, and expressed none of the reservations that qualified his admiration of Schoenberg's Five Pieces for Orchestra.
Recordings
There have been at least 80 commercial recordings of The Planets. Holst conducted the London Symphony Orchestra in the first two recorded performances: the first was an acoustic recording made in sessions between September 1922 and November 1923; the second was made in 1926 using the new electrical recording process. Holst's tempi are in general faster than those of most of his successors on record. This may have been due to the need to fit the music on 78 rpm discs, although later 78 versions are slower. Holst's later recording is quicker than the acoustic version, possibly because the electrical process required wider grooves, reducing the available playing time. Other, slower, recordings from the 78 era include those conducted by Leopold Stokowski (1943) and Sir Adrian Boult (1945). Recordings from the LP age are also typically longer than the composer's, but from the digital era a 2010 recording by the London Philharmonic Orchestra conducted by Vladimir Jurowski is quicker than Holst's acoustic version and comes close to matching his 1926 speeds, and in two movements (Venus and Uranus) surpasses them. There were no commercial recordings of the work in the 1930s; timings are given below of a recording representing each subsequent decade up to the 2020s:
Source: Naxos Music Library.
Additions, adaptations and influences
There have been many adaptations of the suite, and several attempts to add an eighth planet – Pluto – in the time between its discovery in 1930 and its downgrading to "dwarf planet" in 2006. The most prominent of these was Matthews's 2000 composition, "Pluto, the Renewer", commissioned by the Hallé Orchestra. Dedicated posthumously to Imogen Holst, it was first performed in Manchester on 11 May 2000, with Kent Nagano conducting. Matthews changed the ending of Neptune slightly so that the movement would segue into Pluto. Matthews's Pluto has been recorded, coupled with Holst's suite, on at least four occasions. Others who have produced versions of Pluto for The Planets include Leonard Bernstein and Jun Nagao.
The suite has been adapted for numerous instruments and instrumental combinations, including organ, synthesiser, brass band, and jazz orchestra. Holst used the melody of the central section of "Jupiter" for a setting ("Thaxted") of the hymn "I Vow to Thee, My Country" in 1921.
The Planets has been taken as an influence by various rock bands, and for film scores such as those for the Star Wars series. There have been numerous references to the suite in popular culture, from films to television and computer games.
Notes, references and sources
Notes
References
Sources
External links
Links to public domain scores of The Planets:
The Planets: Suite for Large Orchestra (Score in the Public Domain)
1916 compositions
Concert band pieces
Compositions that use extended techniques
History of astrology
Music for orchestra and organ
Suites by Gustav Holst
Orchestral compositions with chorus | The Planets | [
"Astronomy"
] | 4,088 | [
"History of astrology",
"History of astronomy"
] |
49,244 | https://en.wikipedia.org/wiki/NaN | In computing, NaN (), standing for Not a Number, is a particular value of a numeric data type (often a floating-point number) which is undefined as a number, such as the result of 0/0. Systematic use of NaNs was introduced by the IEEE 754 floating-point standard in 1985, along with the representation of other non-finite quantities such as infinities.
In mathematics, the result of 0/0 is typically not defined as a number and may therefore be represented by NaN in computing systems.
The square root of a negative number is not a real number, and is therefore also represented by NaN in compliant computing systems. NaNs may also be used to represent missing values in computations.
Two separate kinds of NaNs are provided, termed quiet NaNs and signaling NaNs. Quiet NaNs are used to propagate errors resulting from invalid operations or values. Signaling NaNs can support advanced features such as mixing numerical and symbolic computation or other extensions to basic floating-point arithmetic.
Floating point
In floating-point calculations, NaN is not the same as infinity, although both are typically handled as special cases in floating-point representations of real numbers as well as in floating-point operations. An invalid operation is also not the same as an arithmetic overflow (which would return an infinity or the largest finite number in magnitude) or an arithmetic underflow (which would return the smallest normal number in magnitude, a subnormal number, or zero).
In the IEEE 754 binary interchange formats, NaNs are encoded with the exponent field filled with ones (like infinity values), and some non-zero number in the trailing significand field (to make them distinct from infinity values); this allows the definition of multiple distinct NaN values, depending on which bits are set in the trailing significand field, but also on the value of the leading sign bit (but applications are not required to provide distinct semantics for those distinct NaN values).
For example, an IEEE 754 single precision (32-bit) NaN would be encoded as s111 1111 1xxx xxxx xxxx xxxx xxxx xxxx
where s is the sign (most often ignored in applications) and the x sequence represents a non-zero number (the value zero encodes infinities). In practice, the most significant bit from x is used to determine the type of NaN: "quiet NaN" or "signaling NaN" (see details in Encoding). The remaining bits encode a payload (most often ignored in applications).
Floating-point operations other than ordered comparisons normally propagate a quiet NaN (qNaN). Most floating-point operations on a signaling NaN (sNaN) signal the invalid-operation exception; the default exception action is then the same as for qNaN operands and they produce a qNaN if producing a floating-point result.
The propagation of quiet NaNs through arithmetic operations allows errors to be detected at the end of a sequence of operations without extensive testing during intermediate stages. For example, if one starts with a NaN and adds 1 five times in a row, each addition results in a NaN, but there is no need to check each calculation because one can just note that the final result is NaN. However, depending on the language and the function, NaNs can silently be removed from a chain of calculations where one calculation in the chain would give a constant result for all other floating-point values. For example, the calculation x^0 may produce the result 1, even where x is NaN, so checking only the final result would obscure the fact that a calculation before the x^0 resulted in a NaN. In general, then, a later test for a set invalid flag is needed to detect all cases where NaNs are introduced (see Function definition below for further details).
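A minimal sketch of both behaviours in Python, whose float type follows IEEE 754 binary64 on common platforms:

import math

x = float("nan")
for _ in range(5):
    x = x + 1.0           # each addition propagates the quiet NaN
print(math.isnan(x))      # True -- a single check at the end suffices

# A constant-result operation can silently absorb the NaN:
print(float("nan") ** 0)  # 1.0 -- obscures the NaN earlier in the chain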
In section 6.2 of the old IEEE 754-2008 standard, there are two anomalous functions (the maxNum and minNum functions, which return the maximum and the minimum, respectively, of two operands that are expected to be numbers) that favor numbers: if just one of the operands is a NaN then the value of the other operand is returned. The IEEE 754-2019 revision has replaced these functions as they are not associative (when a signaling NaN appears in an operand).
Comparison with NaN
Comparisons are specified by the IEEE 754 standard to take into account possible NaN operands. When comparing two real numbers, or extended real numbers (as in the IEEE 754 floating-point formats), the first number may be either less than, equal to, or greater than the second number. This gives three possible relations. But when at least one operand of a comparison is NaN, this trichotomy does not apply, and a fourth relation is needed: unordered. In particular, two NaN values compare as unordered, not as equal.
As specified, the predicates associated with the <, ≤, =, ≥, > mathematical symbols (or equivalent notation in programming languages) return false on an unordered relation. So, for instance, NOT(x < y) is not logically equivalent to x ≥ y: when x or y is NaN (the unordered case), the former returns true while the latter returns false. However, ≠ is defined as the negation of =, and thus it returns true on unordered.
From these rules, comparing x with itself, using x = x or x ≠ x, can be used to test whether x is NaN or non-NaN.
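These rules can be observed in Python (the helper name is_nan is hypothetical; the standard library already provides math.isnan):

nan = float("nan")
print(nan == nan)             # False -- NaN compares unordered, not equal
print(nan != nan)             # True  -- the negation of =
print(nan < 1.0, nan >= 1.0)  # False False -- both orderings are false

def is_nan(x):
    # Self-comparison test: only NaN is unequal to itself.
    return x != x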
The comparison predicates are either signaling or non-signaling on quiet NaN operands; the signaling versions signal the invalid-operation exception for such comparisons (i.e., by default, this just sets the corresponding status flag in addition to the behavior of the non-signaling versions). The equality and inequality predicates are non-signaling. The other standard comparison predicates associated with the above mathematical symbols are all signaling if they receive a NaN operand. The standard also provides non-signaling versions of these other predicates. The predicate isNaN(x) determines whether a value is a NaN and never signals an exception, even if x is a signaling NaN.
The IEEE floating-point standard requires that NaN ≠ NaN hold. In contrast, the 2022 private standard of posit arithmetic has a similar concept, NaR (Not a Real), where NaR = NaR holds.
Operations generating NaN
There are three kinds of operations that can return NaN:
Most operations with at least one NaN operand.
Indeterminate forms:
The divisions 0/0 and ±∞/±∞.
The multiplications 0 × ±∞ and ±∞ × 0.
The remainder x REM y when x is an infinity or y is zero.
The additions ∞ + (−∞) and (−∞) + ∞, and the equivalent subtractions ∞ − ∞ and (−∞) − (−∞).
The standard has alternative functions for powers:
The standard pow function and the integer exponent pown function define 0^0, 1^∞, and ∞^0 as 1.
The powr function defines all three indeterminate forms as invalid operations and so returns NaN.
Real operations with complex results, for example:
The square root of a negative number.
The logarithm of a negative number.
The inverse sine or inverse cosine of a number that is less than −1 or greater than 1.
NaNs may also be explicitly assigned to variables, typically as a representation for missing values. Prior to the IEEE standard, programmers often used a special value (such as −99999999) to represent undefined or missing values, but there was no guarantee that they would be handled consistently or correctly.
NaNs are not necessarily generated in all the above cases. If an operation can produce an exception condition and traps are not masked then the operation will cause a trap instead. If an operand is a quiet NaN, and there is also no signaling NaN operand, then there is no exception condition and the result is a quiet NaN. Explicit assignments will not cause an exception even for signaling NaNs.
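Python illustrates both outcomes: some invalid operations return a quiet NaN, while others raise an exception, analogous to the trapping behaviour described above. A sketch:

import math

inf = float("inf")
print(inf - inf)   # nan -- subtraction of opposing infinities
print(inf / inf)   # nan -- infinity divided by infinity
print(inf * 0.0)   # nan -- zero times infinity

try:
    0.0 / 0.0                 # Python raises rather than returning NaN
except ZeroDivisionError as e:
    print("raised:", e)
try:
    math.sqrt(-1.0)           # a real operation with a complex result
except ValueError as e:
    print("raised:", e)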
Quiet NaN
In general, quiet NaNs, or qNaNs, do not raise any additional exceptions, as they propagate through most operations. But the invalid-operation exception is signaled by some operations that do not return a floating-point value, such as format conversions or certain comparison operations.
Signaling NaN
Signaling NaNs, or sNaNs, are special forms of a NaN that, when consumed by most operations, should raise the invalid operation exception and then, if appropriate, be "quieted" into a qNaN that may then propagate. They were introduced in IEEE 754. There have been several ideas for how these might be used:
Filling uninitialized memory with signaling NaNs would produce the invalid operation exception if the data is used before it is initialized
Using an sNaN as a placeholder for a more complicated object, such as:
A representation of a number that has underflowed
A representation of a number that has overflowed
Number in a higher precision format
A complex number
When encountered, a trap handler could decode the sNaN and return an index to the computed result. In practice, this approach is faced with many complications. The treatment of the sign bit of NaNs for some simple operations (such as absolute value) is different from that for arithmetic operations. Traps are not required by the standard.
Payload operations
IEEE 754-2019 recommends the operations getPayload, setPayload, and setPayloadSignaling be implemented, standardizing the access to payloads to streamline application use. According to the IEEE 754-2019 background document, this recommendation should be interpreted as "required for new implementations, with reservation for backward compatibility".
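Python has no getPayload, but for a binary64 value the payload bits can be read by reinterpreting the float's bit pattern; a sketch under that assumption (read_payload is my own name, not a standard function):

import struct

def read_payload(x):
    # Reinterpret the float as a 64-bit unsigned integer (big-endian).
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    exponent = (bits >> 52) & 0x7FF         # 11-bit exponent field
    significand = bits & ((1 << 52) - 1)    # 52-bit trailing significand
    if exponent != 0x7FF or significand == 0:
        return None                         # not a NaN (finite or infinity)
    # Drop the most significant significand bit (the quiet bit),
    # leaving the 51 payload bits.
    return significand & ((1 << 51) - 1)

print(read_payload(float("nan")))  # typically 0: a zero-payload quiet NaN
print(read_payload(1.0))           # None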
Encoding
In IEEE 754 interchange formats, NaNs are identified by specific, pre-defined bit patterns unique to NaNs. The sign bit does not matter. For the binary formats, NaNs are represented with the exponent field filled with ones (like infinity values), and some non-zero number in the trailing significand field (to make them distinct from infinity values). The original IEEE 754 standard from 1985 (IEEE 754-1985) only described binary floating-point formats, and did not specify how the signaling/quiet state was to be tagged. In practice, the most significant bit of the trailing significand field determined whether a NaN is signaling or quiet. Two different implementations, with reversed meanings, resulted:
most processors (including those of the Intel and AMD x86 family, the Motorola 68000 family, the AIM PowerPC family, the ARM family, the Sun SPARC family, and optionally new MIPS processors) set the signaling/quiet bit to non-zero if the NaN is quiet, and to zero if the NaN is signaling. Thus, on these processors, the bit represents an is_quiet flag;
in NaNs generated by the PA-RISC and old MIPS processors, the signaling/quiet bit is zero if the NaN is quiet, and non-zero if the NaN is signaling. Thus, on these processors, the bit represents an is_signaling flag.
The former choice has been preferred as it allows the implementation to quiet a signaling NaN by just setting the signaling/quiet bit to 1. The reverse is not possible with the latter choice because setting the signaling/quiet bit to 0 could yield an infinity.
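The convention can be inspected by packing a NaN into its 32-bit pattern; a Python sketch (the exact bits are platform-dependent, but this is the typical result on hardware using the is_quiet convention):

import struct

(bits,) = struct.unpack(">I", struct.pack(">f", float("nan")))
print(f"{bits:032b}")
# Typically 01111111110000000000000000000000: sign 0, exponent all
# ones, and the top significand (quiet) bit set.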
The 2008 and 2019 revisions of the IEEE 754 standard make formal requirements and recommendations for the encoding of the signaling/quiet state.
For binary interchange formats, the most significant bit of the trailing significand field is exclusively used to distinguish between quiet and signaling NaNs. (This requirement was added in the 2019 revision.) Moreover, it should be an is_quiet flag. That is, this bit is non-zero if the NaN is quiet, and zero if the NaN is signaling.
For decimal interchange formats, whether binary or decimal encoded, a NaN is identified by having the top five bits of the combination field after the sign bit set to ones. The sixth bit of the field is the is_signaling flag. That is, this bit is zero if the NaN is quiet, and non-zero if the NaN is signaling.
For IEEE 754-2008 conformance, the meaning of the signaling/quiet bit in recent MIPS processors is now configurable via the NAN2008 field of the FCSR register. This support is optional in MIPS Release 3 and required in Release 5.
The state of the remaining bits of the trailing significand field is not defined by the standard. These bits encode a value called the 'payload' of the NaN. For the binary formats, the encoding is unspecified. For the decimal formats, the usual encoding of unsigned integers is used. If an operation has a single NaN input and propagates it to the output, the result NaN's payload should be that of the input NaN (this is not always possible for binary formats when the signaling/quiet state is encoded by an is_signaling flag, as explained above). If there are multiple NaN inputs, the result NaN's payload should be from one of the input NaNs; the standard does not specify which.
Canonical NaN
A number of systems have the concept of a "canonical NaN", where one specific NaN value is chosen to be the only possible qNaN generated by floating-point operations not having a NaN input. The value is usually chosen to be a quiet NaN with an all-zero payload and an arbitrarily-defined sign bit.
On RISC-V, most floating-point operations only ever generate the canonical NaN, even if a NaN is given as the operand (the payload is not propagated). ARM can enable a "default NaN" mode for this behavior. WebAssembly has the same behavior, though it allows two canonical values.
A number of languages do not distinguish among different NaN values, without requiring their implementations to force a certain NaN value. ECMAScript (JavaScript) treats all NaN as if they are the same value. Java has the same treatment "for the most part".
Using a limited amount of NaN representations allows the system to use other possible NaN values for non-arithmetic purposes, the most important being "NaN-boxing", i.e. using the payload for arbitrary data. (This concept of "canonical NaN" is not the same as the concept of a "canonical encoding" in IEEE 754.)
Function definition
There are differences of opinion about the proper definition for the result of a numeric function that receives a quiet NaN as input. One view is that the NaN should propagate to the output of the function in all cases to propagate the indication of an error. Another view, and the one taken by the ISO C99 and IEEE 754-2008 standards in general, is that if the function has multiple arguments and the output is uniquely determined by all the non-NaN inputs (including infinity), then that value should be the result. Thus for example the value returned by hypot(±∞, qNaN) and hypot(qNaN, ±∞) is +∞.
The problem is particularly acute for the exponentiation function pow(x, y) = x^y. The expressions 0^0, ∞^0 and 1^∞ are considered indeterminate forms when they occur as limits (just like ∞ × 0), and the question of whether zero to the zero power should be defined as 1 has divided opinion.
If the output is considered as undefined when a parameter is undefined, then pow(1, qNaN) should produce a qNaN. However, math libraries have typically returned 1 for pow(1, y) for any real number y, even when y is an infinity. Similarly, they produce 1 for pow(x, 0) even when x is 0 or an infinity. The 2008 version of the IEEE 754 standard says that pow(1, qNaN) and pow(qNaN, 0) should both return 1, since they return 1 whatever else is used instead of quiet NaN. Moreover, ISO C99, and later IEEE 754-2008, chose to specify pow(−1, ±∞) = 1 instead of qNaN; the reason for this choice is given in the C rationale: "Generally, C99 eschews a NaN result where a numerical value is useful. ... The result of pow(−2, ∞) is +∞, because all large positive floating-point values are even integers."
To satisfy those wishing a more strict interpretation of how the power function should act, the 2008 standard defines two additional power functions: pown(x, n), where the exponent must be an integer, and powr(x, y), which returns a NaN whenever a parameter is a NaN or the exponentiation would give an indeterminate form.
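Python's math.pow follows the C99 convention described here for these special cases; a quick check:

import math

nan = float("nan")
print(math.pow(nan, 0.0))  # 1.0 -- pow(x, 0) is 1 even for NaN x
print(math.pow(1.0, nan))  # 1.0 -- pow(1, y) is 1 even for NaN y
print(nan ** 0)            # 1.0 -- the ** operator agrees for this case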
Integer NaN
Most fixed-size integer formats cannot explicitly indicate invalid data. In such a case, when converting NaN to an integer type, the IEEE 754 standard requires that the invalid-operation exception be signaled. For example in Java, such operations throw instances of . In C, they lead to undefined behavior, but if annex F is supported, the operation yields an "invalid" floating-point exception (as required by the IEEE standard) and an unspecified value.
Perl's Math::BigInt package uses "NaN" for the result of strings that do not represent valid integers.
> perl -mMath::BigInt -e "print Math::BigInt->new('foo')"
NaN
Display
Different operating systems and programming languages may have different string representations of NaN.
nan (C, C++, Python)
NaN (ECMAScript, Rust, C#, Julia). Julia may show an alternative NaN depending on precision (NaN32, NaN16); plain NaN corresponds to the Float64 type.
NaN%
NAN (C, C++, Rust)
NaNQ (IBM XL and AIX: Fortran, C++ proposal n2290)
NaNS (ditto)
qNaN
sNaN
1.#SNAN (Excel)
1.#QNAN (Excel)
-1.#IND (Excel)
+nan.0 (Scheme)
Since, in practice, encoded NaNs have a sign, a quiet/signaling bit and optional 'diagnostic information' (sometimes called a payload), these will occasionally be found in string representations of NaNs, too. Some examples are:
For the C and C++ languages, the sign bit is always shown by the standard-library functions (e.g. printf) when present. There is no standard display of the payload nor of the signaling status, but a quiet NaN value of a specific payload may either be constructed by providing the string nan(char-sequence) to a number-parsing function (e.g. strtod) or by providing the char-sequence string to nan (or nans for sNaN), both interpreted in an implementation-defined manner.
GCC and LLVM provide built-in implementations of __builtin_nan and __builtin_nans. They parse the char-sequence as an integer, as if by strtoull (or a differently-sized equivalent), with its detection of integer bases.
The GNU C Library's float-parser uses the char-sequence string in "some unspecified fashion". In practice, this parsing has been equivalent to GCC/LLVM's for up to 64 bits of payload.
Newlib does not implement parsing, but accepts a hexadecimal format without prefix.
musl does not implement any payload parsing.
Not all languages admit the existence of multiple NaNs. For example, ECMAScript only uses one NaN value throughout.
References
Notes
Citations
Standards
External links
Not a Number, foldoc.org
IEEE 754-2008 Standard for Floating-Point Arithmetic
IEEE 754-2019 Standard for Floating-Point Arithmetic
Computer arithmetic
Software anomalies | null | [
"Mathematics",
"Technology"
] | 3,914 | [
"Technological failures",
"Software anomalies",
"Computer arithmetic",
"Computer errors",
"Arithmetic"
] |
49,253 | https://en.wikipedia.org/wiki/Urysohn%27s%20lemma | In topology, Urysohn's lemma is a lemma that states that a topological space is normal if and only if any two disjoint closed subsets can be separated by a continuous function.
Urysohn's lemma is commonly used to construct continuous functions with various properties on normal spaces. It is widely applicable since all metric spaces and all compact Hausdorff spaces are normal. The lemma is generalised by (and usually used in the proof of) the Tietze extension theorem.
The lemma is named after the mathematician Pavel Samuilovich Urysohn.
Discussion
Two subsets A and B of a topological space X are said to be separated by neighbourhoods if there are neighbourhoods U of A and V of B that are disjoint. In particular, A and B are necessarily disjoint.
Two plain subsets A and B are said to be separated by a continuous function if there exists a continuous function f from X into the unit interval [0, 1] such that f(a) = 0 for all a in A and f(b) = 1 for all b in B. Any such function is called a Urysohn function for A and B. In particular, A and B are necessarily disjoint.
It follows that if two subsets A and B are separated by a function then so are their closures. Also it follows that if two subsets A and B are separated by a function then A and B are separated by neighbourhoods.
A normal space is a topological space in which any two disjoint closed sets can be separated by neighbourhoods. Urysohn's lemma states that a topological space is normal if and only if any two disjoint closed sets can be separated by a continuous function.
The sets A and B need not be precisely separated by f, i.e., it is not required (nor guaranteed) that f(x) ≠ 0 and f(x) ≠ 1 for x outside A and B. A topological space X in which every two disjoint closed subsets A and B are precisely separated by a continuous function is perfectly normal.
Urysohn's lemma has led to the formulation of other topological properties such as the 'Tychonoff property' and 'completely Hausdorff spaces'. For example, a corollary of the lemma is that normal T1 spaces are Tychonoff.
Formal statement
A topological space X is normal if and only if, for any two non-empty closed disjoint subsets A and B of X, there exists a continuous map f : X → [0, 1] such that f(A) = {0} and f(B) = {1}.
Proof sketch
The proof proceeds by repeatedly applying the following alternate characterization of normality. If X is a normal space, W is an open subset of X, and F ⊆ W is closed, then there exist an open set U and a closed set V such that F ⊆ U ⊆ V ⊆ W.
Let A and B be disjoint closed subsets of X. The main idea of the proof is to repeatedly apply this characterization of normality, starting with the pair A and X ∖ B, continuing with the new sets built on every step.
The sets we build are indexed by dyadic fractions. For every dyadic fraction r ∈ (0, 1), we construct an open subset U(r) and a closed subset V(r) of X such that:
A ⊆ U(r) and V(r) ⊆ X ∖ B for all r,
U(r) ⊆ V(r) for all r,
For r < s, V(r) ⊆ U(s).
Intuitively, the sets U(r) and V(r) expand outwards in layers from A: A ⊆ U(r) ⊆ V(r) ⊆ U(s) ⊆ V(s) ⊆ X ∖ B for r < s.
This construction proceeds by mathematical induction. For the base step, we define two extra sets U(1) = X ∖ B and V(0) = A.
Now assume that n ≥ 0 and that the sets U(k/2^n) and V(k/2^n) have already been constructed for k = 1, …, 2^n − 1. Note that this is vacuously satisfied for n = 0. Since X is normal, for any a ∈ {0, 1, …, 2^n − 1}, we can find an open set U((2a + 1)/2^(n+1)) and a closed set V((2a + 1)/2^(n+1)) such that V(a/2^n) ⊆ U((2a + 1)/2^(n+1)) ⊆ V((2a + 1)/2^(n+1)) ⊆ U((a + 1)/2^n).
The above three conditions are then verified.
Once we have these sets, we define f(x) = 1 if x ∉ U(r) for every r; otherwise f(x) = inf{r : x ∈ U(r)} for every x ∈ X, where inf denotes the infimum. Using the fact that the dyadic rationals are dense, it is then not too hard to show that f is continuous and has the property f(A) ⊆ {0} and f(B) ⊆ {1}. This step requires the sets V(r) in order to work.
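In the notation above, the inductive step and the resulting function can be restated compactly (a summary of the construction just described, not an additional argument):

V\!\left(\tfrac{a}{2^n}\right) \subseteq U\!\left(\tfrac{2a+1}{2^{n+1}}\right) \subseteq V\!\left(\tfrac{2a+1}{2^{n+1}}\right) \subseteq U\!\left(\tfrac{a+1}{2^n}\right),
\qquad
f(x) =
\begin{cases}
1 & \text{if } x \notin U(r) \text{ for all } r, \\
\inf\{\, r : x \in U(r) \,\} & \text{otherwise.}
\end{cases}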
The Mizar project has completely formalised and automatically checked a proof of Urysohn's lemma in the URYSOHN3 file.
See also
Mollifier
Notes
References
External links
Mizar system proof: http://mizar.org/version/current/html/urysohn3.html#T20
Articles containing proofs
Theory of continuous functions
Lemmas
Separation axioms
Theorems in topology | Urysohn's lemma | [
"Mathematics"
] | 823 | [
"Theory of continuous functions",
"Theorems in topology",
"Topology",
"Mathematical problems",
"Articles containing proofs",
"Mathematical theorems",
"Lemmas"
] |
49,281 | https://en.wikipedia.org/wiki/Hydronium | In chemistry, hydronium (hydroxonium in traditional British English) is the cation , also written as , the type of oxonium ion produced by protonation of water. It is often viewed as the positive ion present when an Arrhenius acid is dissolved in water, as Arrhenius acid molecules in solution give up a proton (a positive hydrogen ion, ) to the surrounding water molecules (). In fact, acids must be surrounded by more than a single water molecule in order to ionize, yielding aqueous and conjugate base.
Three main structures for the aqueous proton have garnered experimental support:
the Eigen cation, which is a tetrahydrate, H3O+(H2O)3
the Zundel cation, which is a symmetric dihydrate, H+(H2O)2
and the Stoyanov cation, an expanded Zundel cation, which is a hexahydrate: H+(H2O)2(H2O)4
Spectroscopic evidence from well-defined IR spectra overwhelmingly supports the Stoyanov cation as the predominant form. For this reason, it has been suggested that wherever possible, the symbol H+(aq) should be used instead of the hydronium ion.
Relation to pH
The molar concentration of hydronium or H+ ions determines a solution's pH according to
pH = −log([H3O+]/M)
where M = mol/L. The concentration of hydroxide ions analogously determines a solution's pOH. The molecules in pure water auto-dissociate into aqueous protons and hydroxide ions in the following equilibrium: H2O(l) ⇌ H+(aq) + OH−(aq)
In pure water, there is an equal number of hydroxide and hydronium ions, so it is a neutral solution. At 25 °C, pure water has a pH of 7 and a pOH of 7 (this varies when the temperature changes: see self-ionization of water). A pH value less than 7 indicates an acidic solution, and a pH value more than 7 indicates a basic solution.
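A numerical sketch of these relations in Python (the helper name pH is my own; the 10^−7 M value assumes 25 °C):

import math

def pH(h3o_molar):
    # pH = -log10([H3O+] / M), with the concentration in mol/L
    return -math.log10(h3o_molar)

print(pH(1.0e-7))         # 7.0 -- neutral water at 25 degrees C
print(pH(1.0e-3))         # 3.0 -- an acidic solution
print(14.0 - pH(1.0e-7))  # 7.0 -- pOH, since pH + pOH = 14 at 25 degrees C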
Nomenclature
According to IUPAC nomenclature of organic chemistry, the hydronium ion should be referred to as oxonium. Hydroxonium may also be used unambiguously to identify it.
An oxonium ion is any cation containing a trivalent oxygen atom.
Structure
Since O+ and N have the same number of electrons, H3O+ is isoelectronic with ammonia. H3O+ has a trigonal pyramidal molecular geometry with the oxygen atom at its apex. The H–O–H bond angle is approximately 113°, and the center of mass is very close to the oxygen atom. Because the base of the pyramid is made up of three identical hydrogen atoms, the H3O+ molecule's symmetric top configuration is such that it belongs to the C3v point group. Because of this symmetry and the fact that it has a dipole moment, the rotational selection rules are ΔJ = ±1 and ΔK = 0. The transition dipole lies along the c-axis and, because the negative charge is localized near the oxygen atom, the dipole moment points to the apex, perpendicular to the base plane.
Acids and acidity
The hydrated proton is very acidic: at 25 °C, its pKa is approximately 0. The values commonly given for pKa,aq(H3O+) are 0 or −1.74. The former uses the convention that the activity of the solvent in a dilute solution (in this case, water) is 1, while the latter uses the value of the concentration of water in the pure liquid of 55.5 M. Silverstein has shown that the latter value is thermodynamically unsupportable. The disagreement comes from the ambiguity that, to define pKa of H3O+ in water, H2O has to act simultaneously as a solute and the solvent. The IUPAC has not given an official definition of pKa that would resolve this ambiguity. Burgot has argued that H3O+(aq) + H2O(l) ⇄ H2O(aq) + H3O+(aq) is simply not a thermodynamically well-defined process. For an estimate of pKa,aq(H3O+), Burgot suggests taking the measured value pKa,EtOH(H3O+) = 0.3, the pKa of H3O+ in ethanol, and applying the correlation equation pKa,aq = pKa,EtOH − 1.0 (± 0.3) to convert the ethanol pKa to an aqueous value, to give a value of pKa,aq(H3O+) = −0.7 (± 0.3). On the other hand, Silverstein has shown that Ballinger and Long's experimental results support a pKa of 0.0 for the aqueous proton. Neils and Schaertel provide added arguments for a pKa of 0.0.
The aqueous proton is the most acidic species that can exist in water (assuming sufficient water for dissolution): any stronger acid will ionize and yield a hydrated proton. The acidity of H+(aq) is the implicit standard used to judge the strength of an acid in water: strong acids must be better proton donors than H+(aq), as otherwise a significant portion of acid will exist in a non-ionized state (i.e., a weak acid). Unlike H+(aq) in neutral solutions that result from water's autodissociation, in acidic solutions, H+(aq) is long-lasting and concentrated, in proportion to the strength of the dissolved acid.
pH was originally conceived to be a measure of the hydrogen ion concentration of aqueous solution. Virtually all such free protons are quickly hydrated; the acidity of an aqueous solution is therefore more accurately characterized by its concentration of H+(aq). In organic syntheses, such as acid catalyzed reactions, the hydronium ion (H3O+) is used interchangeably with the H+ ion; choosing one over the other has no significant effect on the mechanism of reaction.
Solvation
Researchers have yet to fully characterize the solvation of hydronium ion in water, in part because many different meanings of solvation exist. A freezing-point depression study determined that the mean hydration ion in cold water is approximately H3O+(H2O)6: on average, each hydronium ion is solvated by 6 water molecules which are unable to solvate other solute molecules.
Some hydration structures are quite large: the magic ion number structure H3O+(H2O)20 (called magic number because of its increased stability with respect to hydration structures involving a comparable number of water molecules – this is a similar usage of the term magic number as in nuclear physics) might place the hydronium inside a dodecahedral cage. However, more recent ab initio method molecular dynamics simulations have shown that, on average, the hydrated proton resides on the surface of the cluster. Further, several disparate features of these simulations agree with their experimental counterparts, suggesting an alternative interpretation of the experimental results.
Two other well-known structures are the Zundel cation and the Eigen cation. The Eigen solvation structure has the hydronium ion at the center of an H9O4+ complex in which the hydronium is strongly hydrogen-bonded to three neighbouring water molecules. In the Zundel H5O2+ complex the proton is shared equally by two water molecules in a symmetric hydrogen bond. Work published in 1999 indicates that both of these complexes represent ideal structures in a more general hydrogen bond network defect.
Isolation of the hydronium ion monomer in liquid phase was achieved in a nonaqueous, low-nucleophilicity superacid solution. The ion was characterized by high-resolution nuclear magnetic resonance.
A 2007 calculation of the enthalpies and free energies of the various hydrogen bonds around the hydronium cation in liquid protonated water at room temperature and a study of the proton hopping mechanism using molecular dynamics showed that the hydrogen-bonds around the hydronium ion (formed with the three water ligands in the first solvation shell of the hydronium) are quite strong compared to those of bulk water.
A new model was proposed by Stoyanov based on infrared spectroscopy in which the proton exists as an H13O6+ ion. The positive charge is thus delocalized over 6 water molecules.
Solid hydronium salts
For many strong acids, it is possible to form crystals of their hydronium salt that are relatively stable. These salts are sometimes called acid monohydrates. As a rule, only acids with sufficiently large ionization constants may do this; acids with smaller ionization constants generally cannot form stable salts. For example, nitric acid mixtures with water at all proportions are liquid at room temperature. However, perchloric acid is a much stronger acid, and if liquid anhydrous perchloric acid and water are combined in a 1:1 molar ratio, they react to form solid hydronium perchlorate (H3O+·ClO4−).
The hydronium ion also forms stable compounds with carborane superacids. X-ray crystallography shows a C3v symmetry for the hydronium ion, with each proton interacting with a bromine atom from three carborane anions 320 pm apart on average. The salt is also soluble in benzene. In crystals grown from a benzene solution the solvent co-crystallizes and the cation is completely separated from the anion. In the cation three benzene molecules surround hydronium, forming pi-cation interactions with the hydrogen atoms. The closest (non-bonding) approach of the anion at chlorine to the cation at oxygen is 348 pm.
There are also many known examples of salts containing hydrated hydronium ions, found in various crystalline acid hydrates.
Sulfuric acid is also known to form a hydronium salt at sufficiently low temperatures.
Interstellar H3O+
Hydronium is an abundant molecular ion in the interstellar medium and is found in diffuse and dense molecular clouds as well as the plasma tails of comets. Interstellar sources of hydronium observations include the regions of Sagittarius B2, Orion OMC-1, Orion BN–IRc2, Orion KL, and the comet Hale–Bopp.
Interstellar hydronium is formed by a chain of reactions started by the ionization of H2 into H2+ by cosmic radiation. H3O+ can produce either OH or H2O through dissociative recombination reactions, which occur very quickly even at the low (≥10 K) temperatures of dense clouds. This leads to hydronium playing a very important role in interstellar ion-neutral chemistry.
Astronomers are especially interested in determining the abundance of water in various interstellar climates due to its key role in the cooling of dense molecular gases through radiative processes. However, H2O does not have many favorable transitions for ground-based observations. Although observations of HDO (the deuterated version of water) could potentially be used for estimating H2O abundances, the ratio of HDO to H2O is not known very accurately.
Hydronium, on the other hand, has several transitions that make it a superior candidate for detection and identification in a variety of situations. This information has been used in conjunction with laboratory measurements of the branching ratios of the various H3O+ dissociative recombination reactions to provide what are believed to be relatively accurate OH and H2O abundances without requiring direct observation of these species.
Interstellar chemistry
As mentioned previously, H3O+ is found in both diffuse and dense molecular clouds. By applying the reaction rate constants (α, β, and γ) corresponding to all of the currently available characterized reactions involving H3O+, it is possible to calculate k(T) for each of these reactions. By multiplying these k(T) by the relative abundances of the products, the relative rates (in cm3/s) for each reaction at a given temperature can be determined. These relative rates can be made into absolute rates by multiplying them by the appropriate number densities. By assuming a temperature of 10 K for a dense cloud and a representative higher temperature for a diffuse cloud, the results indicate that the most dominant formation and destruction mechanisms were the same for both cases. It should be mentioned that the relative abundances used in these calculations correspond to TMC-1, a dense molecular cloud, and that the calculated relative rates are therefore expected to be more accurate at 10 K. Together, the three fastest formation mechanisms and the three fastest destruction mechanisms make up approximately 99% of the hydronium ion's chemical interactions under these conditions; all three destruction mechanisms are classified as dissociative recombination reactions.
It is also worth noting that the relative rates for the formation reactions are the same for a given reaction at both temperatures. This is due to the reaction rate constants for these reactions having β and γ constants of 0, resulting in k = α, which is independent of temperature.
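The rate constants referred to here follow the modified Arrhenius (Kooij) form used in astrochemical databases such as UMIST, k(T) = α (T/300 K)^β exp(−γ/T); a short sketch with made-up coefficients:

```python
import math

def k(T, alpha, beta=0.0, gamma=0.0):
    """Modified Arrhenius (Kooij) rate coefficient: alpha * (T/300)**beta * exp(-gamma/T)."""
    return alpha * (T / 300.0) ** beta * math.exp(-gamma / T)

# With beta = gamma = 0, the rate is temperature independent: k = alpha.
print(k(10.0, alpha=1.0e-9), k(50.0, alpha=1.0e-9))  # identical values

# Illustrative temperature-dependent case (coefficients are hypothetical):
print(k(10.0, alpha=1.0e-9, beta=-0.5), k(50.0, alpha=1.0e-9, beta=-0.5))
```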
Since all three of these reactions produce either H2O or OH, these results reinforce the strong connection between their relative abundances and that of H3O+.
Astronomical detections
As early as 1973 and before the first interstellar detection, chemical models of the interstellar medium (the first corresponding to a dense cloud) predicted that hydronium was an abundant molecular ion and that it played an important role in ion-neutral chemistry. However, before an astronomical search could get underway there was still the matter of determining hydronium's spectroscopic features in the gas phase, which at this point were unknown. The first studies of these characteristics came in 1977, which was followed by other, higher-resolution spectroscopy experiments. Once several lines had been identified in the laboratory, the first interstellar detection of H3O+ was made by two groups almost simultaneously in 1986. The first, published in June 1986, reported observation of the J = 1 − 2 transition in OMC-1 and Sgr B2. The second, published in August, reported observation of the same transition toward the Orion-KL nebula.
These first detections have been followed by observations of a number of additional transitions. The first observations of each subsequent transition detection are given below in chronological order:
In 1991, the 3 − 2 transition was observed in OMC-1 and Sgr B2. One year later, the 3 − 2 transition was observed in several regions, the clearest of which was the W3 IRS 5 cloud.
The first observation of the far-IR 4 − 3 transition, at 69.524 μm (4.3121 THz), was made in 1996 near Orion BN-IRc2. In 2001, three additional transitions of H3O+ were observed in the far infrared in Sgr B2: the 2 − 1 transition at 100.577 μm (2.98073 THz), the 1 − 1 at 181.054 μm (1.65582 THz), and the 2 − 1 at 100.869 μm (2.9721 THz).
See also
Hydron (hydrogen cation)
Hydride
Hydrogen anion
Hydrogen ion
Grotthus mechanism
Trifluorooxonium
Law of dilution
References
External links
J Phys Chem infrared spectra of hydronium
Acids
Oxonium compounds
Water chemistry | Hydronium | [
"Chemistry"
] | 3,157 | [
"Acids",
"nan"
] |
49,295 | https://en.wikipedia.org/wiki/Fine-structure%20constant | In physics, the fine-structure constant, also known as the Sommerfeld constant, commonly denoted by α (the Greek letter alpha), is a fundamental physical constant that quantifies the strength of the electromagnetic interaction between elementary charged particles.
It is a dimensionless quantity (dimensionless physical constant), independent of the system of units used, which is related to the strength of the coupling of an elementary charge e with the electromagnetic field, by the formula 4πε0ħcα = e^2. Its numerical value is approximately 0.0072973525693 ≈ 1/137.035999084, with a relative uncertainty of 1.5 × 10^−10.
The constant was named by Arnold Sommerfeld, who introduced it in 1916 when extending the Bohr model of the atom. α quantified the gap in the fine structure of the spectral lines of the hydrogen atom, which had been measured precisely by Michelson and Morley in 1887.
Why the constant should have this value is not understood, but there are a number of ways to measure its value.
Definition
In terms of other physical constants, α may be defined as:
α = e^2 / (4πε0ħc)
where
e is the elementary charge (1.602176634 × 10^−19 C);
h is the Planck constant (6.62607015 × 10^−34 J⋅s);
ħ is the reduced Planck constant, ħ = h/2π (1.054571817... × 10^−34 J⋅s);
c is the speed of light (299792458 m/s);
ε0 is the electric constant (8.8541878128 × 10^−12 F⋅m^−1).
Since the 2019 revision of the SI, the only quantity in this list that does not have an exact value in SI units is the electric constant (vacuum permittivity).
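Since the first four of these constants are exact by definition, α can be reproduced numerically from them together with the measured vacuum permittivity; a quick sanity check in Python:

```python
import math

e    = 1.602176634e-19    # elementary charge, C (exact since 2019)
h    = 6.62607015e-34     # Planck constant, J*s (exact since 2019)
hbar = h / (2 * math.pi)  # reduced Planck constant, J*s
c    = 299792458.0        # speed of light, m/s (exact)
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m (measured, CODATA 2018)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha)      # ~0.0072973525693
print(1 / alpha)  # ~137.035999084
```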
Alternative systems of units
The electrostatic CGS system implicitly sets 4πε0 = 1, as commonly found in older physics literature, where the expression of the fine-structure constant becomes
α = e^2 / (ħc).
A nondimensionalised system commonly used in high energy physics sets ε0 = c = ħ = 1, where the expression for the fine-structure constant becomes
α = e^2 / (4π).
As such, the fine-structure constant is chiefly a quantity determining (or determined by) the elementary charge: e = √(4πα) ≈ 0.30282212, in terms of such a natural unit of charge.
In the system of atomic units, which sets e = me = ħ = 4πε0 = 1, the expression for the fine-structure constant becomes
α = 1/c.
Measurement
The CODATA recommended value of α is
α = 0.0072973525693(11).
This has a relative standard uncertainty of 1.5 × 10^−10.
This value for α gives a value for the vacuum magnetic permeability μ0 that is 0.8 times the standard uncertainty away from its old defined value of 4π × 10^−7 H/m, with the mean differing from the old value by only 0.13 parts per billion.
Historically the value of the reciprocal of the fine-structure constant is often given. The CODATA recommended value is
1/α = 137.035999084(21).
While the value of α can be determined from estimates of the constants that appear in any of its definitions, the theory of quantum electrodynamics (QED) provides a way to measure α directly using the quantum Hall effect or the anomalous magnetic moment of the electron. Other methods include the A.C. Josephson effect and photon recoil in atom interferometry.
There is general agreement for the value of α, as measured by these different methods. The preferred methods in 2019 are measurements of electron anomalous magnetic moments and of photon recoil in atom interferometry. The theory of QED predicts a relationship between the dimensionless magnetic moment of the electron and the fine-structure constant α (the magnetic moment of the electron is also referred to as the electron g-factor ge). One of the most precise values of α obtained experimentally (as of 2023) is based on a measurement of ge using a one-electron so-called "quantum cyclotron" apparatus, together with a calculation via the theory of QED that involved tenth-order Feynman diagrams:
α^−1 = 137.035999166(15).
This measurement of α has a relative standard uncertainty of 1.1 × 10^−10. This value and uncertainty are about the same as the latest experimental results.
Further refinement of the experimental value was published by the end of 2020, giving the value
α^−1 = 137.035999206(11),
with a relative accuracy of 8.1 × 10^−11, which has a significant discrepancy from the previous experimental value.
Physical interpretations
The fine-structure constant, α, has several physical interpretations.
When perturbation theory is applied to quantum electrodynamics, the resulting perturbative expansions for physical results are expressed as sets of power series in . Because is much less than one, higher powers of are soon unimportant, making the perturbation theory practical in this case. On the other hand, the large value of the corresponding factors in quantum chromodynamics makes calculations involving the strong nuclear force extremely difficult.
Variation with energy scale
In quantum electrodynamics, the more thorough quantum field theory underlying the electromagnetic coupling, the renormalization group dictates how the strength of the electromagnetic interaction grows logarithmically as the relevant energy scale increases. The value of the fine-structure constant α is linked to the observed value of this coupling associated with the energy scale of the electron mass: the electron's mass gives a lower bound for this energy scale, because it (and the positron) is the lightest charged object whose quantum loops can contribute to the running. Therefore, 1/137.036 is the asymptotic value of the fine-structure constant at zero energy.
At higher energies, such as the scale of the Z boson, about 90 GeV, one instead measures an effective α ≈ 1/127.
As the energy scale increases, the strength of the electromagnetic interaction in the Standard Model approaches that of the other two fundamental interactions, a feature important for grand unification theories. If quantum electrodynamics were an exact theory, the fine-structure constant would actually diverge at an energy known as the Landau pole – this fact undermines the consistency of quantum electrodynamics beyond perturbative expansions.
History
Based on the precise measurement of the hydrogen atom spectrum by Michelson and Morley in 1887,
Arnold Sommerfeld extended the Bohr model to include elliptical orbits and relativistic dependence of mass on velocity. He introduced a term for the fine-structure constant in 1916.
The first physical interpretation of the fine-structure constant was as the ratio of the velocity of the electron in the first circular orbit of the relativistic Bohr atom to the speed of light in the vacuum.
Equivalently, it was the quotient between the minimum angular momentum allowed by relativity for a closed orbit, and the minimum angular momentum allowed for it by quantum mechanics. It appears naturally in Sommerfeld's analysis, and determines the size of the splitting or fine-structure of the hydrogenic spectral lines. This constant was not seen as significant until Paul Dirac's linear relativistic wave equation in 1928, which gave the exact fine structure formula.
With the development of quantum electrodynamics (QED) the significance of α has broadened from a spectroscopic phenomenon to a general coupling constant for the electromagnetic field, determining the strength of the interaction between electrons and photons. The term α/2π is engraved on the tombstone of one of the pioneers of QED, Julian Schwinger, referring to his calculation of the anomalous magnetic dipole moment.
History of measurements
The CODATA values in the above table are computed by averaging other measurements; they are not independent experiments.
Potential variation over time
Physicists have pondered whether the fine-structure constant is in fact constant, or whether its value differs by location and over time. A varying α has been proposed as a way of solving problems in cosmology and astrophysics. String theory and other proposals for going beyond the Standard Model of particle physics have led to theoretical interest in whether the accepted physical constants (not just α) actually vary.
In the experiments below, Δα represents the change in α over time, which can be computed by Δα = α_prev − α_now. If the fine-structure constant really is a constant, then any experiment should show that
Δα/α = (α_prev − α_now)/α_now = 0,
or as close to zero as experiment can measure. Any value far away from zero would indicate that α does change over time. So far, most experimental data is consistent with α being constant.
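A trivial numeric sketch of the quantity being tested (the values are purely illustrative, not measured ones):

```python
def fractional_change(alpha_prev, alpha_now):
    """Delta-alpha over alpha: (alpha_prev - alpha_now) / alpha_now."""
    return (alpha_prev - alpha_now) / alpha_now

alpha_now  = 0.0072973525693           # present-day CODATA value
alpha_prev = alpha_now * (1 - 5.7e-6)  # hypothetical: 5.7 ppm smaller in the past
print(fractional_change(alpha_prev, alpha_now))  # about -5.7e-06
```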
Past rate of change
The first experimenters to test whether the fine-structure constant might actually vary examined the spectral lines of distant astronomical objects and the products of radioactive decay in the Oklo natural nuclear fission reactor. Their findings were consistent with no variation in the fine-structure constant between these two vastly separated locations and times.
Improved technology at the dawn of the 21st century made it possible to probe the value of α at much larger distances and to a much greater accuracy. In 1999, a team led by John K. Webb of the University of New South Wales claimed the first detection of a variation in α.
Using the Keck telescopes and a data set of 128 quasars at redshifts 0.5 < z < 3, Webb et al. found that their spectra were consistent with a slight increase in α over the last 10–12 billion years. Specifically, they found that
Δα/α = (−5.7 ± 1.0) × 10^−6.
In other words, they measured the value to be somewhere between −4.7 × 10^−6 and −6.7 × 10^−6. This is a very small value, but the error bars do not actually include zero. This result either indicates that α is not constant or that there is experimental error unaccounted for.
In 2004, a smaller study of 23 absorption systems by Chand et al., using the Very Large Telescope, found no measurable variation:
Δα/α = (−0.6 ± 0.6) × 10^−6.
However, in 2007 simple flaws were identified in the analysis method of Chand et al., discrediting those results.
King et al. have used Markov chain Monte Carlo methods to investigate the algorithm used by the UNSW group to determine Δα/α from the quasar spectra, and have found that the algorithm appears to produce correct uncertainties and maximum likelihood estimates for Δα/α for particular models. This suggests that the statistical uncertainties and best estimate for Δα/α stated by Webb et al. and Murphy et al. are robust.
Lamoreaux and Torgerson analyzed data from the Oklo natural nuclear fission reactor in 2004, and concluded that α has changed in the past 2 billion years by 45 parts per billion. They claimed that this finding was "probably accurate to within 20%". Accuracy is dependent on estimates of impurities and temperature in the natural reactor. These conclusions have yet to be verified.
In 2007, Khatri and Wandelt of the University of Illinois at Urbana-Champaign realized that the 21 cm hyperfine transition in neutral hydrogen of the early universe leaves a unique absorption line imprint in the cosmic microwave background radiation.
They proposed using this effect to measure the value of α during the epoch before the formation of the first stars. In principle, this technique provides enough information to measure a variation of 1 part in 10^9 (4 orders of magnitude better than the current quasar constraints). However, the constraint which can be placed on α is strongly dependent upon effective integration time, going as t^−1/2. The European LOFAR radio telescope would only be able to constrain Δα/α to about 0.3%. The collecting area required to constrain Δα/α to the current level of quasar constraints is on the order of 100 square kilometers, which is economically impracticable at present.
Present rate of change
In 2008, Rosenband et al. used the frequency ratio of Al+ and Hg+ in single-ion optical atomic clocks to place a very stringent constraint on the present-time temporal variation of α, namely α̇/α = (−1.6 ± 2.3) × 10^−17 per year. A present day null constraint on the time variation of alpha does not necessarily rule out time variation in the past. Indeed, some theories
that predict a variable fine-structure constant also predict that the value of the fine-structure constant should become practically fixed in its value once the universe enters its current dark energy-dominated epoch.
Spatial variation – Australian dipole
Researchers from Australia have said they had identified a variation of the fine-structure constant across the observable universe.
These results have not been replicated by other researchers. In September and October 2010, after the release of Webb et al.'s research, physicists C. Orzel and S.M. Carroll separately suggested various approaches of how Webb's observations may be wrong. Orzel argues that the study may contain wrong data due to subtle differences in the two telescopes, while Carroll takes a totally different approach; he looks at the fine-structure constant as a scalar field and claims that if the telescopes are correct and the fine-structure constant varies smoothly over the universe, then the scalar field must have a very small mass. However, previous research has shown that the mass is not likely to be extremely small. Both of these scientists' early criticisms point to the fact that different techniques are needed to confirm or contradict the results, a conclusion Webb, et al., previously stated in their study.
Other research finds no meaningful variation in the fine structure constant.
Anthropic explanation
The anthropic principle is an argument about the reason the fine-structure constant has the value it does: stable matter, and therefore life and intelligent beings, could not exist if its value were very different. One example is that, if modern grand unified theories are correct, then α needs to be between around 1/180 and 1/85 for proton decay to be slow enough for life to be possible.
Numerological explanations
As a dimensionless constant which does not seem to be directly related to any mathematical constant, the fine-structure constant has long fascinated physicists.
Arthur Eddington argued that the value could be "obtained by pure deduction" and he related it to the Eddington number, his estimate of the number of protons in the universe.
This led him in 1929 to conjecture that the reciprocal of the fine-structure constant was not approximately but precisely the integer 137.
By the 1940s experimental values for 1/α deviated sufficiently from 137 to refute Eddington's arguments.
Physicist Wolfgang Pauli commented on the appearance of certain numbers in physics, including the fine-structure constant, whose reciprocal he noted approximates the prime number 137. This constant so intrigued him that he collaborated with psychoanalyst Carl Jung in a quest to understand its significance. Similarly, Max Born believed that if the value of α differed, the universe would degenerate, and thus that α = 1/137 is a law of nature.
Richard Feynman, one of the originators and early developers of the theory of quantum electrodynamics (QED), referred to the fine-structure constant in these terms:
Conversely, statistician I. J. Good argued that a numerological explanation would only be acceptable if it could be based on a good theory that is not yet known but "exists" in the sense of a Platonic Ideal.
Attempts to find a mathematical basis for this dimensionless constant have continued up to the present time. However, no numerological explanation has ever been accepted by the physics community.
In the late 20th century, multiple physicists, including Stephen Hawking in his 1988 book A Brief History of Time, began exploring the idea of a multiverse, and the fine-structure constant was one of several universal constants that suggested the idea of a fine-tuned universe.
Quotes
See also
Dimensionless physical constant
Hyperfine structure
Footnotes
References
External links
(adapted from the Encyclopædia Britannica, 15th ed. by NIST)
Physicists Nail Down the ‘Magic Number’ That Shapes the Universe (Natalie Wolchover, Quanta magazine, December 2, 2020). The value of this constant is given here as 1/137.035999206 (note the difference in the last three digits). It was determined by a team of four physicists led by Saïda Guellati-Khélifa at the Kastler Brossel Laboratory in Paris.
Dimensionless constants
Electromagnetism
Fundamental constants
Arnold Sommerfeld | Fine-structure constant | [
"Physics"
] | 3,076 | [
"Dimensionless constants",
"Electromagnetism",
"Physical phenomena",
"Physical quantities",
"Physical constants",
"Fundamental interactions",
"Fundamental constants"
] |
49,298 | https://en.wikipedia.org/wiki/Haidinger%27s%20brush | Haidinger's brush, more commonly known as Haidinger's brushes, is an image produced by the eye, an entoptic phenomenon, first described by Austrian physicist Wilhelm Karl von Haidinger in 1844. Haidinger saw it when he looked through various minerals that polarized light.
Many people are able to perceive polarization of light. Haidinger's brushes may be seen as a yellowish horizontal bar or bow-tie shape (with "fuzzy" ends, hence the name "brush") visible in the center of the visual field against the blue sky viewed while facing away from the sun, or on any bright background. It typically occupies roughly 3–5 degrees of vision, about two to three times the width of one's thumb held at arm's length. The direction of light polarization is perpendicular to the yellow bar (i.e., vertical if the bar is horizontal). Fainter bluish or purplish areas may be visible between the yellow brushes. Haidinger's brush may also be seen by looking at a white area on many LCD flat panel computer screens (due to the polarization effect of the display), in which case it is often diagonal.
Physiological causes
Haidinger's brush is usually attributed to the dichroism of the xanthophyll pigment found in the macula lutea. As described by the Fresnel laws, the behavior and distribution of oblique rays in the cylindrical geometry of the foveal blue cones produce an extrinsic dichroism. The size of the brush is consistent with the size of the macula.
It is thought that the macula's dichroism arises from some of its pigment molecules being arranged circularly; the small proportion of circularly arranged molecules accounts for the faintness of the phenomenon. Xanthophyll pigments tend to lie parallel to the visual nerve fibers, which (because the fovea is not flat) are almost orthogonal to the fovea in its central part but nearly parallel in its outer region. As a result, two different areas of the fovea can be sensitive to two different degrees of polarization.
Seeing Haidinger's brush
Many people find it difficult to see Haidinger's brush initially. It is very faint, much more so than generally indicated in illustrations, and, like other stabilized images, tends to appear and disappear.
It is most easily seen when it can be made to move. Because it is always positioned on the macula, there is no way to make it move laterally, but it can be made to rotate, by viewing a white surface through a rotating polarizer, or by slowly tilting one's head to one side.
To see Haidinger's brush, start by using a polarizer, such as a lens from a pair of polarizing sunglasses. Gaze at an evenly lit, textureless surface through the lens and rotate the polarizer.
An option is to use the polarizer built into a computer's LCD screen. Look at a white area on the screen, and slowly tilt the head (this method generally works only with LCDs, as most other electronic visual display technologies do not emit polarized light).
It appears with more distinctness against a blue background. With practice, it is possible to see it in the naturally polarized light of a blue sky. Minnaert recommended practicing first with a polarizer, then trying it without. The areas of the sky with the strongest polarization are those 90 degrees away from the sun. Minnaert said that after a minute of gazing at the sky, "a kind of marble effect will appear. This is followed shortly by Haidinger's brush." He commented that not all observers see it in the same way. Some see the yellow pattern as solid and the blue pattern as interrupted, as in the illustrations on this page. Some see the blue as solid and the yellow as interrupted, and some see it alternating between the two states.
Use
The fact that the sensation of Haidinger's brush corresponds with the visual field of the macula means that it can be utilised in training people to look at objects with their macula. People with certain types of strabismus may undergo an adaptation whereby they look at the object of attention not with their fovea (at the centre of the macula) but with an eccentric region of the retina. This adaptation is known as eccentric fixation. To aid in training a person to look at an object with their fovea rather than their eccentric retinal zone, a training device can be used. One such apparatus utilises a rotating polarised plate backlit with a bright white light. Wearing blue spectacles (to enhance the Haidinger's brush image) and an occluder over the other eye, the user will hopefully notice the Haidinger's brush where their macula correlates with their visual field. The goal of the training is for the user to learn to look at the test object in such a way that the Haidinger's brush overlaps the test object (and the viewer is thus now looking at it with their fovea). The reason for such training is that the healthy fovea is far greater in its resolving power than any other part of the retina. Another diagnostic method that utilises birefringent properties of the retinal tissue is retinal birefringence scanning, which can be used in cases of severe amblyopia or when the specialist lacks cooperation from the patient.
See also
Floater
Haidinger fringe
Isolation tank
Prisoner's cinema
References
Further reading
W. Haidinger: Beobachtung der Lichtpolarisationsbündel im geradlinig polarisirten Lichte. Poggendorfs Annalen, Bd. 68, 1846 S. 73-87 (Original communication at the Bibliothèque nationale de France.)
Minnaert, M. G. J. (1993) Light and Color in the Outdoors. (translated by Len Seymour from the 1974 Dutch edition). , Springer-Verlag, New York.
External links
How to see polarization with the naked eye
Polarization (waves)
Vision | Haidinger's brush | [
"Physics"
] | 1,298 | [
"Polarization (waves)",
"Astrophysics"
] |
49,324 | https://en.wikipedia.org/wiki/Unit%20interval | In mathematics, the unit interval is the closed interval [0,1], that is, the set of all real numbers that are greater than or equal to 0 and less than or equal to 1. It is often denoted I (capital letter I). In addition to its role in real analysis, the unit interval is used to study homotopy theory in the field of topology.
In the literature, the term "unit interval" is sometimes applied to the other shapes that an interval from 0 to 1 could take: (0,1], [0,1), and (0,1). However, the notation I is most commonly reserved for the closed interval [0,1].
Properties
The unit interval is a complete metric space, homeomorphic to the extended real number line. As a topological space, it is compact, contractible, path connected and locally path connected. The Hilbert cube is obtained by taking a topological product of countably many copies of the unit interval.
In mathematical analysis, the unit interval is a one-dimensional analytical manifold whose boundary consists of the two points 0 and 1. Its standard orientation goes from 0 to 1.
The unit interval is a totally ordered set and a complete lattice (every subset of the unit interval has a supremum and an infimum).
Cardinality
The size or cardinality of a set is the number of elements it contains.
The unit interval is a subset of the real numbers. However, it has the same size as the whole set: the cardinality of the continuum. Since the real numbers can be used to represent points along an infinitely long line, this implies that a line segment of length 1, which is a part of that line, has the same number of points as the whole line. Moreover, it has the same number of points as a square of area 1, as a cube of volume 1, and even as an unbounded n-dimensional Euclidean space (see Space filling curve).
The number of elements (either real numbers or points) in all the above-mentioned sets is uncountable, as it is strictly greater than the number of natural numbers.
Orientation
The unit interval is a curve. The open interval (0,1) is a subset of the positive real numbers and inherits an orientation from them. The orientation is reversed when the interval is entered from 1, such as in the integral of dt/t from 1 to x used to define the natural logarithm for x in the interval, thus yielding negative values for the logarithm of such x. In fact, this integral is evaluated as a signed area, yielding negative area over the unit interval due to the reversed orientation there.
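A quick numeric check of this sign convention (a sketch using a simple midpoint rule):

```python
import math

def integral_reciprocal(a, b, n=100_000):
    """Midpoint-rule approximation of the integral of 1/t from a to b."""
    h = (b - a) / n
    return sum(h / (a + (i + 0.5) * h) for i in range(n))

x = 0.5
print(integral_reciprocal(1.0, x))  # about -0.6931: negative, since we enter from 1
print(math.log(x))                  # matches ln(0.5) = -0.6931...
```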
Generalizations
The interval [−1,1], with length two, demarcated by the positive and negative units, occurs frequently, such as in the range of the trigonometric functions sine and cosine and the hyperbolic function tanh. This interval may be used for the domain of inverse functions. For instance, when θ is restricted to [−π/2, π/2], then sin θ is in this interval and arcsine is defined there.
Sometimes, the term "unit interval" is used to refer to objects that play a role in various branches of mathematics analogous to the role that [0,1] plays in homotopy theory. For example, in the theory of quivers, the (analogue of the) unit interval is the graph whose vertex set is {0, 1} and which contains a single edge e whose source is 0 and whose target is 1. One can then define a notion of homotopy between quiver homomorphisms analogous to the notion of homotopy between continuous maps.
Fuzzy logic
In logic, the unit interval can be interpreted as a generalization of the Boolean domain {0,1}, in which case rather than only taking values 0 or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation (NOT) is replaced with 1 − x; conjunction (AND) is replaced with multiplication (xy); and disjunction (OR) is defined, per De Morgan's laws, as 1 − (1 − x)(1 − y).
Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic and probabilistic logic. In these interpretations, a value is interpreted as the "degree" of truth – to what extent a proposition is true, or the probability that the proposition is true.
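A minimal sketch of these operations on the unit interval:

```python
def f_not(x):
    """Fuzzy NOT: 1 - x."""
    return 1.0 - x

def f_and(x, y):
    """Fuzzy AND: multiplication."""
    return x * y

def f_or(x, y):
    """Fuzzy OR via De Morgan: NOT(NOT x AND NOT y)."""
    return 1.0 - (1.0 - x) * (1.0 - y)

x, y = 0.7, 0.4     # degrees of truth between 0 and 1
print(f_not(x))     # 0.3
print(f_and(x, y))  # 0.28
print(f_or(x, y))   # 0.82
```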
See also
Interval notation
Unit square, cube, circle, hyperbola and sphere
Unit impulse
Unit vector
References
Robert G. Bartle, 1964, The Elements of Real Analysis, John Wiley & Sons.
Sets of real numbers
1 (number)
Topology | Unit interval | [
"Physics",
"Mathematics"
] | 895 | [
"Spacetime",
"Topology",
"Space",
"Geometry"
] |
49,326 | https://en.wikipedia.org/wiki/KStars | KStars is a free and open-source planetarium program built using the KDE Frameworks. It is available for Linux, BSD, macOS, and Microsoft Windows. A light version of KStars is available for Android devices. It provides an accurate graphical representation of the night sky, from any location on Earth, at any date and time. The display includes up to 100 million stars (with additional addons), 13,000 deep sky objects, constellations from different cultures, all 8 planets, the Sun and Moon, and thousands of comets, asteroids, satellites, and supernovae. It has features to appeal to users of all levels, from informative hypertext articles about astronomy, to robust control of telescopes and CCD cameras, and logging of observations of specific objects.
KStars supports adjustable simulation speeds in order to view phenomena that happen over long timescales. For astronomical calculations, Astrocalculator can be used to predict conjunctions, lunar eclipses, and perform many common astronomical calculations. The following tools are included:
Observation planner
Sky calendar tool
Script Builder
Solar System
Jupiter Moons
Flags: Custom flags superimposed on the sky map.
FOV editor to calculate the field of view of equipment and display it.
Altitude vs. Time tool to plot altitude vs. time graphs for any object.
Hierarchical Progress Surveys (HiPS) overlay.
High quality print outs for sky charts.
Ekos is an astrophotography suite, a complete astrophotography solution that can control all INDI devices including numerous telescopes, CCDs, DSLRs, focusers, filters, and a lot more. Ekos supports highly accurate tracking using online and offline astrometry solvers, auto-focus and auto-guiding capabilities, and capture of single or multiple images using the powerful built-in sequence manager.
KStars has been packaged by many Linux/BSD distributions, including Red Hat Linux, OpenSUSE, Arch Linux, and Debian. Some distributions package KStars as a separate application, some just provide a kdeedu package, which includes KStars. KStars is distributed with the KDE Software Compilation as part of the kdeedu "Edutainment" module.
KStars participated in Google Summer of Code in 2008, 2009, 2010, 2011, 2012, 2015 and 2016. It has also participated in the first run of ESA's Summer of Code in Space in 2011.
It has been identified as one of the three best "Linux stargazing apps" in a Linux.com review.
See also
Space flight simulation game
List of space flight simulation games
Planetarium software
List of observatory software
References
External links
MPC Elements for Comets and Minor Planets in KStars
Download source code and Windows and Mac versions
Astronomy software
Free and open-source software
Free astronomy software
Free educational software
KDE Education Project
KDE software
Linux software
Planetarium software for Linux
Science education software
Software that uses Qt
Software using the GNU General Public License | KStars | [
"Astronomy"
] | 605 | [
"Astronomy software",
"Works about astronomy"
] |
49,365 | https://en.wikipedia.org/wiki/LGM-30%20Minuteman | The LGM-30 Minuteman is an American land-based intercontinental ballistic missile (ICBM) in service with the Air Force Global Strike Command. The LGM-30G (Version 3) is currently the only land-based ICBM in service in the United States and represents the land leg of the U.S. nuclear triad, along with the Trident II submarine-launched ballistic missile (SLBM) and nuclear weapons carried by long-range strategic bombers.
Development of the Minuteman began in the mid-1950s when basic research indicated that a solid-fuel rocket motor could stand ready to launch for long periods of time, in contrast to liquid-fueled rockets that required fueling before launch and so might be destroyed in a surprise attack. The missile was named for the colonial minutemen of the American Revolutionary War, who could be ready to fight on short notice.
The Minuteman entered service in 1962 as a deterrence weapon that could hit Soviet cities with a second strike and countervalue counterattack if the U.S. was attacked. However, the development of the United States Navy (USN) UGM-27 Polaris, which addressed the same role, allowed the Air Force to modify the Minuteman, boosting its accuracy enough to attack hardened military targets, including Soviet missile silos. The Minuteman II entered service in 1965 with a host of upgrades to improve its accuracy and survivability in the face of an anti-ballistic missile (ABM) system the Soviets were known to be developing. In 1970, the Minuteman III became the first deployed ICBM with multiple independently targetable reentry vehicles (MIRV): three smaller warheads that improved the missile's ability to strike targets defended by ABMs. However, the Minuteman III missiles were later "de-MIRVed"; since 2016 they have had only a single warhead per missile, either a W78 (335 kT) or W87 (300 kT).
By the 1970s, 1,000 Minuteman missiles were deployed. This force has since shrunk to 400 Minuteman III missiles, deployed in missile silos around Malmstrom AFB, Montana; Minot AFB, North Dakota; and Francis E. Warren AFB, Wyoming. The Minuteman III will be progressively replaced by the new LGM-35 Sentinel ICBM, to be built by Northrop Grumman, beginning in 2030.
History
Edward Hall and solid fuels
Minuteman owes its existence largely to Air Force Colonel Edward N. Hall, who in 1956 was given charge of the solid-fuel-propulsion division of General Bernard Schriever's Western Development Division, created to lead development of the SM-65 Atlas and HGM-25A Titan I ICBMs. Solid fuels were already commonly used in short-range rockets. Hall's superiors were interested in short- and medium-range missiles with solids, especially for use in Europe where the fast reaction time was an advantage for weapons that might be attacked by Soviet aircraft. But Hall was convinced that they could be used for a true ICBM with intercontinental range.
To achieve the required energy, that year Hall began funding research at Boeing and Thiokol into the use of ammonium perchlorate composite propellant. Adapting a concept developed in the UK, they cast the fuel into large cylinders with a star-shaped hole running along the inner axis. This allowed the fuel to burn along the entire length of the cylinder, rather than just the end as in earlier designs. The increased burn rate meant increased thrust. This also meant the heat was spread across the entire motor, instead of the end, and because it burned from the inside out it did not reach the wall of the missile fuselage until the fuel was finished burning. In comparison, older designs burned primarily from one end to the other, meaning that at any instant one small section of the fuselage was being subjected to extreme loads and temperatures.
Guidance of an ICBM is based not only on the direction the missile is traveling but the precise instant that thrust is cut off. Too much thrust and the warhead will overshoot its target, too little and it will fall short. Solids are normally very hard to predict in terms of burn time and their instantaneous thrust during the burn, which made them questionable for the sort of accuracy required to hit a target at intercontinental range. While this initially appeared to be an insurmountable problem, it ended up being solved in an almost trivial fashion. A series of ports were added inside the rocket nozzle that were opened when the guidance systems called for engine cut-off. The reduction in pressure was so abrupt that the remaining fuel broke up and blew out the nozzle without contributing to the thrust.
The first to use these developments was the US Navy. It had been involved in a joint program with the US Army to develop the liquid-fueled PGM-19 Jupiter, but had always been skeptical of the system. The Navy felt that liquid fuels were too dangerous to use onboard ships, especially submarines. Rapid success in the solids development program, combined with Edward Teller's promise of much lighter nuclear warheads during Project Nobska, led the Navy to abandon Jupiter and begin development of their own solid-fuel missile. Aerojet's work with Hall was adapted for their UGM-27 Polaris starting in December 1956.
Missile farm concept
The US Air Force saw no pressing need for a solid fuel ICBM. Development of the SM-65 Atlas and SM-68 Titan ICBMs was progressing, and "storable" (hypergolic) liquid propellants were being developed that would allow missiles to be left in a ready-to-shoot form for extended periods. These could be placed in missile silos for added protection, and launch in minutes. This met their need for a weapon that would be safe from sneak attacks; hitting all of the silos within a limited time window before they could launch simply did not seem possible.
But Hall saw solid fuels not only as a way to improve launch times or survivability, but part of a radical plan to greatly reduce the cost of ICBMs so that thousands could be built. He envisioned a future where ICBMs were the primary weapon of the US, not in the supporting role of "last ditch backup" as the Air Force saw them at the time. This would require huge deployments, which would not be possible with existing weapons due to their high cost and operational manpower requirements. A solid fuel design would be simpler to build, and easier to maintain.
Hall's ultimate plan was to build a number of integrated missile "farms" that included factories, missile silos, transport and recycling. He was aware that new computerized assembly lines would allow continual production, and that similar equipment would allow a small team to oversee operations for dozens or hundreds of missiles, radically reducing the manpower requirements. Each farm would support between 1,000 and 1,500 missiles being produced in a continuous low rate cycle. Systems in a missile would detect failures, at which point it would be removed and recycled, while a newly built missile would take its place. The missile design was based purely on lowest possible cost, reducing its size and complexity because "the basis of the weapon's merit was its low cost per completed mission; all other factors – accuracy, vulnerability, and reliability – were secondary."
Hall's plan did not go unopposed, especially by the more established names in the ICBM field. Ramo-Wooldridge pressed for a system with higher accuracy, but Hall countered that the missile's role was to attack Soviet cities, and that "a force which provides numerical superiority over the enemy will provide a much stronger deterrent than a numerically inferior force of greater accuracy." Hall was known for his "friction with others" and in 1958 Schriever removed him from the Minuteman project, sending him to the UK to oversee deployment of the Thor IRBM. On his return to the US in 1959, Hall retired from the Air Force. He received his second Legion of Merit in 1960 for his work on solid fuels.
Although he was removed from the Minuteman project, Hall's work on cost reduction had already produced a new design much smaller in diameter than the Atlas and Titan, which meant smaller and cheaper silos. Hall's goal of dramatic cost reduction was a success, although many of the other concepts of his missile farm were abandoned.
Guidance system
Previous long-range missiles used liquid fuels that could be loaded only just prior to firing. The loading process took from 30 to 60 minutes in typical designs. Although lengthy, this was not considered to be a problem at the time, because it took about the same amount of time to spin up the inertial guidance system, set the initial position, and program in the target coordinates.
Minuteman was designed from the outset to be launched in minutes. While solid fuel eliminated the fueling delays, the delays in starting and aligning the guidance system remained. For the desired quick launch, the guidance system would have to be kept running and aligned at all times. This was a serious problem for the mechanical systems, especially the gyroscopes which used ball bearings.
Autonetics had an experimental design using air bearings that they claimed had been running continually from 1952 to 1957. Autonetics further advanced the state of the art by building the platform in the form of a ball which could rotate in two directions. Conventional solutions used a shaft with ball bearings at either end that allowed it to rotate around a single axis only. Autonetics' design meant that only two gyros would be needed for the inertial platform, instead of the typical three.
The last major advance was to use a general-purpose digital computer in place of the analog or custom designed digital computers. Previous missile designs normally used two single-purpose and very simple electromechanical computers; one ran the autopilot that kept the missile flying along a programmed course, and the second compared the information from the inertial platform to the target coordinates and sent any needed corrections to the autopilot. To reduce the total number of parts used in Minuteman, a single faster computer was used, running separate subroutines for these functions.
Since the guidance program would not be running while the missile sat in the silo, the same computer was also used to run a program that monitored the various sensors and test equipment. With older designs this had been handled by external systems, requiring miles of extra wiring and many connectors to locations where test instruments could be connected during servicing. Now these could all be accomplished by communicating with the computer through a single connection. In order to store multiple programs, the computer, the D-17B, was built in the form of a drum machine but used a hard disk in place of the drum.
Building a computer with the required performance, size and weight demanded the use of transistors, which were at that time very expensive and not very reliable. Earlier efforts to use computers for guidance, BINAC and the system on the SM-64 Navaho, had failed and were abandoned. The Air Force and Autonetics spent millions on a program to improve transistor and component reliability 100 times, leading to the "Minuteman high-rel parts" specifications. The techniques developed during this program were equally useful for improving all transistor construction, and greatly reduced the failure rate of transistor production lines in general. This improved yield, which had the effect of greatly lowering production costs, had enormous spin-off effects in the electronics industry.
Using a general-purpose computer also had long-lasting effects on the Minuteman program and the US's nuclear stance in general. With Minuteman, the targeting could be easily changed by loading new trajectory information into the computer's hard drive, a task that could be completed in a few hours. Earlier ICBMs' custom wired computers, on the other hand, could have attacked only a single target, whose precise trajectory information was hard-coded directly in the system's logic.
Missile gap
In 1957, a series of intelligence reports suggested the Soviet Union was far ahead in the missile race and would be able to overwhelm the US by the early 1960s. If the Soviets were building missiles in the numbers being predicted by the CIA and others within the defense establishment, by as early as 1961 they would have enough to attack all SAC and ICBM bases in the US in a single first strike. It was later demonstrated that this "missile gap" was just as fictional as the "bomber gap" of a few years earlier, but through the late 1950s, it was a serious concern.
The Air Force responded by beginning research into survivable strategic missiles, starting the WS-199 program. Initially, this focused on air-launched ballistic missiles, which would be carried aboard aircraft flying far from the Soviet Union, and thus impossible to attack by either ICBM, because they were moving, or long-range interceptor aircraft, because they were too far away. In the shorter term, looking to rapidly increase the number of missiles in its force, Minuteman was given crash development status starting in September 1958. Advanced surveying of the potential silo sites had already begun in late 1957.
Adding to their concerns was a Soviet anti-ballistic missile system which was known to be under development at Sary Shagan. WS-199 was expanded to develop a maneuvering reentry vehicle (MARV), which greatly complicated the problem of shooting down a warhead. Two designs were tested in 1957, Alpha Draco and the Boost Glide Reentry Vehicle. These used long and skinny arrow-like shapes that provided aerodynamic lift in the high atmosphere, and could be fitted to existing missiles like Minuteman.
The shape of these reentry vehicles required more room on the front of the missile than a traditional reentry vehicle design. To allow for this future expansion, the Minuteman silos were revised to be built deeper. Although Minuteman would not deploy a boost-glide warhead, the extra space proved invaluable in the future, as it allowed the missile to be extended and carry more fuel and payload.
Polaris
During Minuteman's early development, the Air Force maintained the policy that the manned strategic bomber was the primary weapon of nuclear war. High blind-bombing accuracy was expected, and the weapons were sized to ensure that even the hardest targets would be destroyed as long as the weapon fell within the expected miss distance. The USAF had enough bombers to attack every military and industrial target in the USSR and was confident that its bombers would survive in sufficient numbers that such a strike would utterly destroy the country.
Soviet ICBMs upset this equation to a degree. Their accuracy was known to be low, but they carried large warheads that would be useful against Strategic Air Command's bombers, which were parked in the open. Since there was no system to detect the ICBMs being launched, the possibility was raised that the Soviets could launch a sneak attack with a few dozen missiles that would take out a significant portion of SAC's bomber fleet.
In this environment, the Air Force saw their own ICBMs not as a primary weapon of war, but as a way to ensure that the Soviets would not risk a sneak attack. ICBMs, especially newer models that were housed in silos, could be expected to survive an attack by a single Soviet missile. In any conceivable scenario where both sides had similar numbers of ICBMs, the US forces would survive a sneak attack in sufficient numbers to ensure the destruction of all major Soviet cities in return. The Soviets would not risk an attack under these conditions.
Considering this countervalue attack concept, strategic planners calculated that an attack of "400 equivalent megatons" aimed at the largest Soviet cities would promptly kill 30% of their population and destroy 50% of their industry. Larger attacks raised these numbers only slightly, as all of the larger targets would already have been hit. This suggested that there was a "finite deterrent" level around 400 megatons that would be enough to prevent a Soviet attack no matter how many missiles they had of their own. All that had to be ensured was that the US missiles survived, which seemed likely given the low accuracy of the Soviet weapons. Reversing the problem, the addition of ICBMs to the US Air Force's arsenal did not eliminate the need, or desire, to attack Soviet military targets, and the Air Force maintained that bombers were the only suitable platform in that role.
Into this argument came the Navy's UGM-27 Polaris. Launched from submarines, Polaris was effectively invulnerable and had enough accuracy to attack Soviet cities. If the Soviets improved the accuracy of their missiles, this would present a serious threat to the Air Force's bombers and missiles, but none at all to the Navy's submarines. Based on the same 400-equivalent-megaton calculation, the Navy set about building a fleet of 41 submarines carrying 16 missiles each, giving it a finite deterrent that was unassailable.
This presented a serious problem for the Air Force. They were still pressing for the development of newer bombers, like the supersonic B-70, for attacks against military targets, but this role seemed increasingly unlikely in a nuclear war scenario. A February 1960 memo by RAND, entitled "The Puzzle of Polaris", was passed around among high-ranking Air Force officials. It suggested that Polaris negated any need for Air Force ICBMs if they were also being aimed at Soviet cities. If the role of the missile was to present an unassailable threat to the Soviet population, Polaris was a far better solution than Minuteman. The document had long-lasting effects on the future of the Minuteman program, which, by 1961, was firmly evolving towards a counterforce capability.
Kennedy
Minuteman's final tests coincided with the start of John F. Kennedy's presidency. His new Secretary of Defense, Robert McNamara, was tasked with continuing the expansion and modernization of the US nuclear deterrent while limiting spending. McNamara began to apply cost/benefit analysis, and Minuteman's low production cost ensured its selection. Atlas and Titan were soon scrapped, and deployment of the storable liquid-fueled Titan II was severely curtailed. McNamara also canceled the XB-70 bomber project.
Minuteman's low cost had spin-off effects on non-ICBM programs. The Army's LIM-49 Nike Zeus, an interceptor missile capable of shooting down Soviet warheads, provided another way to prevent a sneak attack. This had initially been proposed as a way to defend the SAC bomber fleet. The Army argued that upgraded Soviet missiles might be able to attack US missiles in their silos, and Zeus would be able to blunt such an attack. Zeus was expensive and the Air Force said it was more cost-effective to build another Minuteman missile. Given the large size and complexity of the Soviet liquid-fueled missiles, an ICBM building race was one the Soviets could not afford. Zeus was canceled in 1963.
Counterforce
Minuteman's selection as the primary Air Force ICBM was initially based on the same "second strike" logic as their earlier missiles: that the weapon was primarily one designed to survive any potential Soviet attack and ensure they would be hit in return. But Minuteman had a combination of features that led to its rapid evolution into the US's primary weapon of nuclear war.
Chief among these qualities was its digital computer, the D-17B. This could be updated in the field with new targets and better information about the flight paths with relative ease, gaining accuracy for little cost. One of the unavoidable influences on the warhead's trajectory is the Earth's gravity field, which contains many local mass concentrations that pull on the warhead as it passes over them. Through the 1960s, the Defense Mapping Agency (now part of the National Geospatial-Intelligence Agency) mapped these with increasing accuracy, feeding that information back into the Minuteman fleet. The Minuteman was initially deployed with a circular error probable (CEP) of about , but this had improved to about by 1965. This was accomplished without any mechanical changes to the missile or its navigation system.
At those levels, the ICBM begins to approach the manned bomber in terms of accuracy; a small upgrade, roughly doubling the accuracy of the INS, would give it the same CEP as the manned bomber. Autonetics began such development even before the original Minuteman entered fleet service, and the Minuteman II had a CEP of . Additionally, the computers were upgraded with more memory, allowing them to store information for eight targets, which the missile crews could select among almost instantly, greatly increasing their flexibility. From that point, Minuteman became the US's primary deterrent weapon, until its performance was matched by the Navy's Trident missile of the 1980s.
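The practical meaning of CEP is probabilistic: by definition, half of all warheads land within one CEP of the aim point, so each improvement in guidance translates directly into hit probability against a point target. The following is a minimal sketch under the standard circular-normal (Rayleigh) miss model; the numbers are illustrative, not actual Minuteman figures.

```python
def hit_probability(r: float, cep: float) -> float:
    """P(miss distance <= r) for circular-normal guidance error with the given CEP.

    By definition P(r = CEP) = 0.5; the Rayleigh miss model then gives
    P = 1 - 2**(-(r/cep)**2), so only the ratio r/cep matters.
    """
    return 1.0 - 2.0 ** (-(r / cep) ** 2)

# Halving the CEP against a target whose lethal radius is 1 unit:
for cep in (2.0, 1.0):
    print(f"CEP {cep}: hit probability {hit_probability(1.0, cep):.3f}")
# CEP 2.0: hit probability 0.159
# CEP 1.0: hit probability 0.500
```

Under this model, a twofold accuracy improvement more than triples the single-shot hit probability in the example above, which is why software-only refinements such as better gravity maps were so valuable.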
Questions about the need for the manned bomber were quickly raised. The Air Force advanced a number of reasons why the bomber retained value, in spite of costing more to buy and being much more expensive to operate and maintain. Newer bombers with better survivability, like the B-70, cost many times more than the Minuteman and, in spite of great efforts through the 1960s, became increasingly vulnerable to surface-to-air missiles. The B-1 of the early 1970s eventually emerged with a price tag around $200 million, while the Minuteman IIIs built during the 1970s cost only $7 million.
The Air Force countered that having a variety of platforms complicated the defense; if the Soviets built an effective anti-ballistic missile system of some sort, the ICBM and SLBM fleet might be rendered useless, while the bombers would remain. This became the nuclear triad concept, which survives into the present. Although this argument was successful, the number of manned bombers has been repeatedly cut and the deterrent role increasingly passed to missiles.
Minuteman I (LGM-30A/B or SM-80/HSM-80A)
See also W56 warhead
Deployment
The LGM-30A Minuteman I was first test-fired on 1 February 1961 at Cape Canaveral, entering the Strategic Air Command's arsenal in 1962. Once the first batch of Minuteman I missiles was fully developed and ready for stationing, the United States Air Force (USAF) originally decided to base them at Vandenberg AFB in California, but before the missiles were officially moved there it was discovered that this first set had defective boosters, which limited their range from their initial to . This defect would cause the missiles to fall short of their targets if launched over the North Pole as planned. The decision was made to station the missiles at Malmstrom AFB in Montana instead. This change would allow the missiles, even with their defective boosters, to reach their intended targets in the event of a launch.
The "improved" LGM-30B Minuteman I became operational at Ellsworth AFB, South Dakota, Minot AFB, North Dakota, F.E. Warren AFB, Wyoming, and Whiteman AFB, Missouri, in 1963 and 1964. All 800 Minuteman I missiles were delivered by June 1965. Each of the bases had 150 missiles emplaced; F.E. Warren had 200 of the Minuteman IB missiles. Malmstrom had 150 of the Minuteman I, and about five years later added 50 of the Minuteman II similar to those installed at Grand Forks AFB, ND.
Specifications
The Minuteman I's length varied by version: the Minuteman I/A had a length of and the Minuteman I/B had a length of . The Minuteman I weighed roughly , had an operational range of , and had an accuracy of about .
Guidance
The Minuteman I's Autonetics D-17 flight computer used a rotating air-bearing magnetic disk holding 2,560 "cold-stored" 24-bit words in 20 tracks (write heads disabled after program fill) and one alterable track of 128 words. The time for a D-17 disk revolution was 10 ms. The D-17 also used a number of short loops for faster access to intermediate results storage. The D-17 computational minor cycle was three disk revolutions, or 30 ms. During that time all recurring computations were performed. For ground operations, the inertial platform was aligned and gyro correction rates updated.
During flight, filtered command outputs were sent to the engine nozzles once each minor cycle. Unlike modern computers, which use descendants of that technology only for secondary storage on hard disks, the D-17 used the disk as its active memory. The disk storage was considered hardened against radiation from nearby nuclear explosions, making it an ideal storage medium. To improve computational speed, the D-17 borrowed an instruction look-ahead feature from the Autonetics-built Field Artillery Data Computer (M18 FADAC) that permitted simple instruction execution every word time.
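The stated figures imply the machine's basic timing budget. The following is a back-of-the-envelope sketch; only the 10 ms revolution, the 2,560 words in 20 tracks, and the three-revolution minor cycle come from the description above, and the rest is derived arithmetic.

```python
# Timing of a serial rotating-disk memory using the D-17's stated figures.
REVOLUTION_MS = 10.0              # one disk revolution
WORDS_PER_TRACK = 2560 // 20      # 128 words pass under each head per revolution

word_time_us = REVOLUTION_MS * 1000 / WORDS_PER_TRACK  # time for one 24-bit word
avg_wait_ms = REVOLUTION_MS / 2    # expected rotational wait for a random word,
                                   # which is why short loops sped up access
minor_cycle_ms = 3 * REVOLUTION_MS # window for all recurring computations

print(f"word time:           {word_time_us:.1f} us")   # ~78.1 us
print(f"avg rotational wait: {avg_wait_ms:.1f} ms")    # 5.0 ms
print(f"minor cycle:         {minor_cycle_ms:.0f} ms") # 30 ms
```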
Warhead
At its introduction into service in 1962, Minuteman I was fitted with the W59 warhead, with a yield of 1 Mt. Production of the 1.2 Mt W56 warhead began in March 1963, and W59 production ended in July 1963 after a run of only 150 warheads; the W59 was retired in June 1969. The W56 continued in production until May 1969, with a run of 1,000 warheads. W56 Mods 0 to 3 were retired by September 1966, and the Mod 4 version remained in service until the 1990s.
It is not clear exactly why the W59 was replaced by the W56 after deployment, but issues with "... one-point safety" and "performance under aged conditions" were cited in a 1987 congressional report regarding the warhead. Chuck Hansen alleged that all weapons sharing the "Tsetse" nuclear primary design, including the W59, suffered from a critical one-point safety issue as well as premature tritium aging problems that needed to be corrected after entry into service.
Minuteman II (LGM-30F)
See also W56 warhead
The LGM-30F Minuteman II was an improved version of the Minuteman I missile. Its first test launch took place on September 24, 1964. Development of the Minuteman II began in 1962 as the Minuteman I entered the Strategic Air Command's nuclear force. Minuteman II production and deployment began in 1965 and was completed in 1967. It had an increased range, greater throw weight, and a guidance system with better azimuthal coverage, providing military planners with better accuracy and a wider range of targets. Some missiles also carried penetration aids, giving the warhead a higher probability of reaching its target through Moscow's anti-ballistic missile defenses. The payload consisted of a single Mk-11C reentry vehicle containing a W56 nuclear warhead with a yield of 1.2 megatons of TNT (5 PJ).
Specifications
The Minuteman II had a length of , weighed roughly , had an operational range of with an accuracy of about .
The major new features provided by Minuteman II were:
An improved first-stage motor to increase reliability.
A novel, single, fixed nozzle with liquid injection thrust vector control on a larger second-stage motor to increase missile range. Additional motor improvements to increase reliability.
An improved guidance system (the D-37 flight computer), incorporating microchips and miniaturized discrete electronic parts. Minuteman II was the first program to make a major commitment to these new devices. Their use made possible multiple target selection, greater accuracy and reliability, a reduction in the overall size and weight of the guidance system, and an increase in the survivability of the guidance system in a nuclear environment. The guidance system contained 2,000 microchips made by Texas Instruments.
A penetration aids system to camouflage the warhead during its reentry into an enemy environment. In addition, the Mk-11C reentry vehicle incorporated stealth features to reduce its radar signature and make it more difficult to distinguish from decoys. The Mk-11C was no longer made of titanium for this and other reasons.
A larger warhead in the reentry vehicle to increase kill probability.
System modernization was concentrated on launch facilities and command and control facilities. This provided decreased reaction time and increased survivability when under nuclear attack. Final changes to the system were performed to increase compatibility with the expected LGM-118A Peacekeeper. These newer missiles were later deployed into modified Minuteman silos.
The Minuteman II program was the first mass-produced system to use a computer constructed from integrated circuits (the Autonetics D-37C). The Minuteman II integrated circuits were diode–transistor logic and diode logic made by Texas Instruments. The other major customer of early integrated circuits was the Apollo Guidance Computer, which had similar weight and ruggedness constraints. The Apollo integrated circuits were resistor–transistor logic made by Fairchild Semiconductor. The Minuteman II flight computer continued to use rotating magnetic disks for primary storage. The Minuteman II included diodes by Microsemi Corporation.
Minuteman III (LGM-30G)
See also W62 warhead
The LGM-30G Minuteman III program started in 1966 and included several improvements over the previous versions. Its first test launch took place on August 16, 1968. It was first deployed in 1970. Most modifications related to the final stage and reentry system (RS). The final (third) stage was improved with a new fluid-injected motor, giving finer control than the previous four-nozzle system.
Performance improvements realized in Minuteman III include increased flexibility in reentry vehicle (RV) and penetration aids deployment, increased survivability after a nuclear attack, and increased payload capacity. The missile retains a gimballed inertial navigation system.
Minuteman III originally contained the following distinguishing features:
Armed with up to three W62 Mk-12 warheads, each having a yield of only 170 kilotons of TNT, instead of the previous W56's yield of 1.2 megatons.
It was the first missile equipped with multiple independently targetable reentry vehicles (MIRV). A single missile was then able to target three separate locations. This was an improvement from the Minuteman I and Minuteman II models, which were able to carry only one large warhead.
An RS capable of deploying, in addition to the warheads, penetration aids such as chaff and decoys.
Minuteman III introduced in the post-boost-stage ("bus") an additional liquid-fuel propulsion system rocket engine (PSRE) that is used to slightly adjust the trajectory. This enables it to dispense decoys or – with MIRV – dispense individual RVs to separate targets. For the PSRE it uses the bipropellant Rocketdyne RS-14 engine.
The Hercules M57 third stage of Minuteman I and Minuteman II had thrust termination ports on the sides. These ports, when opened by detonation of shaped charges, reduced the chamber pressure so abruptly that the interior flame was blown out. This allowed a precisely timed termination of thrust for targeting accuracy. The larger Minuteman III third-stage motor also has thrust termination ports although the final velocity is determined by PSRE.
A fixed nozzle with a liquid injection thrust vector control system on the new third-stage motor (similar to the second-stage Minuteman II nozzle) additionally increased range.
A flight computer (Autonetics D37D) with larger disk memory and enhanced capability.
A Honeywell HDC-701 flight computer which employed non-destructive readout plated-wire memory instead of rotating magnetic disk for primary storage was developed as a backup for the D37D but was never adopted.
The Guidance Replacement Program, initiated in 1993, replaced the disk-based D37D flight computer with a new one that uses radiation-resistant semiconductor RAM.
The Minuteman III missiles used D-37D computers and completed the 1,000-missile deployment of this system. The initial costs of these computers ranged from about $139,000 (D-37C) to $250,000 (D-17B).
The existing Minuteman III missiles have been further improved over the decades in service, with more than $7 billion spent in the 2010s to upgrade the 450 missiles.
Specifications
The Minuteman III has a length of , weighs , and has an operational range of with an accuracy of about .
W78 warhead
In December 1979 the higher-yield W78 warhead (335–350 kilotons) began replacing a number of the W62s deployed on the Minuteman IIIs. These were delivered in the Mark 12A reentry vehicle. A small, unknown number of the previous Mark 12 RVs were retained operationally, however, to maintain a capability to attack more-distant targets in the south-central Asian republics of the USSR (the Mark 12 RV weighed slightly less than the Mark 12A).
Guidance Replacement Program
The Guidance Replacement Program replaces the NS20A Missile Guidance Set with the NS50 Missile Guidance Set. The newer system extends the service life of the Minuteman missile beyond the year 2030 by replacing aging parts and assemblies with current, high reliability technology while maintaining the current accuracy performance. The replacement program was completed 25 February 2008.
Propulsion Replacement Program
Beginning in 1998 and continuing through 2009, the Propulsion Replacement Program extended the life of the missiles and maintained their performance by replacing the aging solid-propellant boosters (downstages).
Single Reentry Vehicle
The Single Reentry Vehicle modification enabled the United States ICBM force to abide by the now-voided START II treaty requirements by reconfiguring Minuteman III missiles from three reentry vehicles down to one. Though it was eventually ratified by both parties, START II never entered into force and was essentially superseded by follow-on agreements such as SORT and New START, which do not limit MIRV capability. Minuteman III remains fitted with a single warhead due to the warhead limitations in New START.
Safety Enhanced Reentry Vehicle
Beginning in 2005, Mk-21/W87 RVs from the deactivated Peacekeeper missile replaced the older RVs on the Minuteman III force under the Safety Enhanced Reentry Vehicle (SERV) program. The older W78 lacked many of the safety features of the newer W87, such as insensitive high explosives and more advanced safety devices. In addition to implementing these safety features in at least a portion of the future Minuteman III force, the decision to transfer W87s onto the missile was based on two features that improved the targeting capabilities of the weapon: more fuzing options, which allowed for greater targeting flexibility, and the most accurate reentry vehicle available, which provided a greater probability of damage to the designated targets.
Deployment
The Minuteman III missile entered service in 1970, with weapon systems upgrades included during the production run from 1970 to 1978 to increase accuracy and payload capacity. The USAF plans to operate it until the mid-2030s.
The LGM-118A Peacekeeper (MX) ICBM, which was to have replaced the Minuteman, was retired in 2005 as part of START II.
A total of 450 LGM-30G missiles are emplaced at F.E. Warren Air Force Base, Wyoming (90th Missile Wing), Minot Air Force Base, North Dakota (91st Missile Wing), and Malmstrom Air Force Base, Montana (341st Missile Wing). All Minuteman I and Minuteman II missiles have been retired. The United States prefers to keep its MIRV deterrent on submarine-launched Trident missiles. In 2014, the Air Force decided to put fifty Minuteman III silos into "warm" unarmed status, taking up half of the 100 slots in America's allowable nuclear reserve. These can be reloaded in the future if necessary.
Testing
Minuteman III missiles are regularly tested with launches from Vandenberg Space Force Base in order to validate the effectiveness, readiness, and accuracy of the weapon system, as well as to support the system's primary purpose, nuclear deterrence. The safety features installed on the Minuteman III for each test launch allow the flight controllers to terminate the flight at any time if the systems indicate that its course may take it unsafely over inhabited areas. Since these flights are for test purposes only, even terminated flights can send back valuable information to correct a potential problem with the system.
The test of an unarmed Minuteman III failed on November 1, 2023, from Vandenberg Space Force Base, California. The U.S. Air Force said it had blown up the missile over the Pacific Ocean after an anomaly was detected following its launch.
The 576th Flight Test Squadron is responsible for planning, preparing, conducting, and assessing all ICBM ground and flight tests.
Airborne Launch Control System (ALCS)
The Airborne Launch Control System (ALCS) is an integral part of the Minuteman ICBM command and control system and provides a survivable launch capability for the Minuteman ICBM force if ground-based launch control centers (LCCs) are destroyed.
When the Minuteman ICBM was first placed on alert, the Soviet Union did not have the number of weapons, accuracy, nor significant nuclear yield to completely destroy the Minuteman ICBM force during an attack. However, starting in the mid-1960s, the Soviets began to gain parity with the US and potentially had the capability to target and successfully attack the Minuteman force with an increased number of ICBMs that had greater yields and accuracy than were previously available.
Studying the problem, SAC realized that in order to prevent the US from launching all 1,000 Minuteman ICBMs, the Soviets did not have to target all 1,000 Minuteman missile silos. The Soviets needed to launch only a disarming decapitation strike against the 100 Minuteman LCCs – the command and control sites – in order to prevent the launch of all Minuteman ICBMs. Even though the Minuteman ICBMs would have been left unscathed in their missile silos following an LCC decapitation strike, the Minuteman missiles could not be launched without a command and control capability.
In other words, the Soviets needed only 100 warheads to eliminate command and control of the Minuteman ICBMs. Even if the Soviets chose to expend two to three warheads per LCC for assured damage expectancy, the Soviets would have had to expend only up to 300 warheads to disable the Minuteman ICBM force – far less than the total number of Minuteman silos. The Soviets could have then used the remaining warheads to strike other targets they chose.
Faced with only a few Minuteman LCC targets, the Soviets could have concluded that a decapitation strike against the LCCs offered better odds and less risk than the almost insurmountable task of successfully attacking and destroying 1,000 Minuteman silos and 100 Minuteman LCCs to ensure Minuteman was disabled. This theory motivated SAC to design a survivable means to launch Minuteman, even if all the ground-based command and control sites were destroyed.
After thorough testing and modification of EC-135 command post aircraft, the ALCS demonstrated its capability on 17 April 1967 by launching an ERCS-configured Minuteman II out of Vandenberg AFB, CA. Afterward, ALCS achieved Initial Operational Capability on 31 May 1967. From that point on, airborne missileers stood alert with ALCS-capable EC-135 aircraft for several decades. All Minuteman ICBM launch facilities were modified and built to have the capability to receive commands from ALCS. With ALCS standing alert around the clock, the Soviets could no longer successfully launch a Minuteman LCC decapitation strike. Even if the Soviets attempted to do so, EC-135s equipped with the ALCS could fly overhead and launch the remaining Minuteman ICBMs in retaliation.
With the ALCS on alert, the Soviet war planning was complicated by forcing them to target not only the 100 LCCs, but also the 1,000 silos with more than one warhead in order to guarantee destruction. This would have required upwards of 3,000 warheads to complete such an attack. The odds of being successful in such an attack on the Minuteman ICBM force would have been extremely low.
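The force-sizing arithmetic in the preceding paragraphs reduces to a few multiplications. The sketch below assumes three warheads per target for assured damage expectancy, as the text suggests; that per-target factor is an assumption, not an official planning figure.

```python
LCCS, SILOS = 100, 1000
WARHEADS_PER_TARGET = 3  # assumed damage-expectancy factor

decapitation_only = LCCS * WARHEADS_PER_TARGET    # LCCs alone: 300 warheads
with_alcs = (LCCS + SILOS) * WARHEADS_PER_TARGET  # LCCs plus every silo: 3,300

print(f"LCC decapitation strike: {decapitation_only} warheads")
print(f"with ALCS on alert:      {with_alcs} warheads")
```

Once every silo must also be covered, the total lands in the "upwards of 3,000 warheads" range the text describes.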
The ALCS is operated by airborne missileers from Air Force Global Strike Command's (AFGSC) 625th Strategic Operations Squadron (STOS) and United States Strategic Command (USSTRATCOM). The weapon system is also located on board the United States Navy's E-6B Mercury. The ALCS crews are integrated into the battle staff of the USSTRATCOM "Looking Glass" Airborne Command Post (ABNCP) and are on alert around the clock. Although the Minuteman ICBM force has been reduced since the end of the Cold War, the ALCS continues to act as a force multiplier by ensuring that an enemy cannot launch a successful Minuteman LCC decapitation strike.
Other roles
Mobile Minuteman
Mobile Minuteman was a program for rail-based ICBMs intended to increase survivability; the USAF released details of it on 12 October 1959. Minuteman Mobility Test Trains were first exercised from 20 June to 27 August 1960 at Hill Air Force Base, and the 4062nd Strategic Missile Wing (Mobile) was organized on 1 December 1960. It was planned to include three missile train squadrons, each with 10 trains carrying 3 missiles per train. During the Kennedy/McNamara force reductions, the Department of Defense announced "that it has abandoned the plan for a mobile Minuteman ICBM. The concept called for 600 to be placed in service: 450 in silos and 150 on special trains, each train carrying 5 missiles." Kennedy announced on 18 March 1961 that the 3 squadrons were to be replaced with "fixed-base squadrons", and Strategic Air Command discontinued the 4062nd Strategic Missile Wing on 20 February 1962.
Air-Launched ICBM
Air-Launched ICBM was a STRAT-X proposal in which SAMSO (Space & Missile Systems Organization) successfully conducted an Air Mobile Feasibility Test that airdropped a Minuteman 1b from a C-5A Galaxy aircraft from over the Pacific Ocean. The missile fired at , and the 10-second engine burn carried the missile to 20,000 feet again before it dropped into the ocean. Operational deployment was discarded due to engineering and security difficulties, and the capability was a negotiating point in the Strategic Arms Limitation Talks.
Emergency Rocket Communications System (ERCS)
From 1963 through 1991, the National Command Authority communication relay system included the Emergency Rocket Communication System (ERCS). Specially designed rockets called BLUE SCOUT carried radio-transmitting payloads high above the continental United States, to relay messages to units within line-of-sight. In the event of a nuclear attack, ERCS payloads would relay pre-programmed messages giving the "go-order" to SAC units.
BLUE SCOUT launch sites were located at Wisner, West Point and Tekamah, Nebraska. These locations were vital for ERCS effectiveness due to their centralized position in the US, within range of all missile complexes. In 1968, ERCS configurations were placed on the top of modified Minuteman II ICBMs (LGM-30Fs) under the control of the 510th Strategic Missile Squadron located at Whiteman Air Force Base, Missouri.
The Minuteman ERCS may have been assigned the designation LEM-70A.
Satellite launching role
The U.S. Air Force has considered using some decommissioned Minuteman missiles in a satellite launching role. These missiles would be stored in silos, for launch upon short notice. The payload would be variable and would have the ability to be replaced quickly. This would allow a surge capability in times of emergency.
During the 1980s, surplus Minuteman missiles were used to power the Conestoga rocket produced by Space Services Inc. of America. It was the first privately funded rocket, but saw only three flights and was discontinued due to a lack of business. More recently, converted Minuteman missiles have been used to power the Minotaur line of rockets produced by Orbital Sciences (nowadays Northrop Grumman Innovation Systems).
Ground and air launch targets
L-3 Communications currently uses SR-19 SRBs, Minuteman II second-stage solid rocket boosters, as delivery vehicles for a range of different reentry vehicles serving as targets for the THAAD and ASIP interceptor missile programs, as well as for radar testing.
Operators
The United States Air Force has been the only operator of the Minuteman ICBM weapons system, currently with three operational wings and one test squadron operating the LGM-30G. The active inventory in FY 2009 is 450 missiles and 45 Missile Alert Facilities (MAF).
Operational units
The basic tactical unit of a Minuteman wing is the squadron, consisting of five flights. Each flight consists of ten unmanned launch facilities (LFs) which are remotely controlled by a manned launch control center (LCC). A two-officer crew is on duty in the LCC, typically for 24 hours. The five flights are interconnected and status from any LF may be monitored by any of the five LCCs. Each LF is located at least three nautical miles (5.6 km) from any LCC.
Control does not extend outside the squadron (thus the 319th Missile Squadron's five LCCs cannot control the 320th Missile Squadron's 50 LFs even though they are part of the same Missile Wing). Each Minuteman wing is assisted logistically by a nearby Missile Support Base (MSB). If the ground-based LCCs are destroyed or incapacitated, the Minuteman ICBMs can be launched by airborne missileers utilizing the Airborne Launch Control System.
Active
90th Missile Wing – "Mighty Ninety"
at Francis E. Warren AFB, Wyoming, (1 July 1963 – present)
Units:
319th Missile Squadron – "Screaming Eagles"
320th Missile Squadron – "G.N.I."
321st Missile Squadron – "Greentails"
150 missiles, 15 MAF – Launch sites
LGM-30B Minuteman I, 1964–74
LGM-30G Minuteman III, 1973–present
91st Missile Wing – "Roughriders"
at Minot AFB, North Dakota (25 June 1968 – present)
Units:
740th Missile Squadron – "Vulgar Vultures"
741st Missile Squadron – "Gravelhaulers"
742d Missile Squadron – "Wolf Pack"
150 Missiles, 15 MAF – Launch sites
LGM-30B Minuteman I, 1968–72
LGM-30G Minuteman III, 1972–present
341st Missile Wing
at Malmstrom AFB, Montana (15 July 1961 – present)
Units:
10th Missile Squadron – "First Aces"
12th Missile Squadron – "Red Dawgs"
490th Missile Squadron – "Farsiders"
150 Missiles, 15 MAF – Launch sites
LGM-30A Minuteman I, 1962–69
LGM-30F Minuteman II, 1967–94
LGM-30G Minuteman III, 1975–present
625th Strategic Operations Squadron
at Offutt AFB, Nebraska
Historical
44th Strategic Missile (later Missile) Wing "Black Hills Bandits"
Ellsworth AFB, South Dakota (24 November 1961 – 5 July 1994)
LGM-30B Minuteman I, 1963–73
LGM-30F Minuteman II, 1971–94
66th Missile Squadron
67th Missile Squadron
68th Missile Squadron
44th Missile Wing LGM-30 Minuteman Missile Launch Sites
Inactivated 1994 when Minuteman II phased out of inventory. All retired between 3 December 1991 and April 1994, with destruction of silos and alert facilities finishing in 1996.
90th Missile Wing
400th Missile Squadron (Converted to LGM-118A Peacekeeper in 1987. Inactivated 2005. Peacekeepers retired.)
321st Strategic Missile (later Missile) Wing (later Group)
Grand Forks AFB, North Dakota (14 August 1964 – 30 September 1998)
LGM-30F Minuteman II, 1965–73
LGM-30G Minuteman III, 1972–98
446th Missile Squadron
447th Missile Squadron
448th Missile Squadron
321st Missile Wing LGM-30 Minuteman Missile Launch Sites
Inactivated by BRAC 1995; missiles were reassigned to the 341st SMW, but in 1995 it was decided to retire the Grand Forks missiles, and the last missile was pulled from its silo in June 1998. Destruction of silos and control facilities began in October 1999; the last silo (H-22) was imploded on 24 August 2001 (the last US silo destroyed under the 1991 START-I treaty).
341st Missile Wing
564th Missile Squadron (Inactivated 2008, WS-133B system retired, missiles recycled into inventory)
351st Strategic Missile (later Missile) Wing
Whiteman AFB, Missouri (1 February 1963 – 31 July 1995)
LGM-30B Minuteman I, 1963–65
LGM-30F Minuteman II, 1965–95
508th Missile Squadron
509th Missile Squadron
510th Missile Squadron
351st Missile Wing LGM-30 Minuteman Missile Launch Sites
The 510th SMS operated Emergency Rocket Communication System (ERCS) missiles in addition to Minuteman II ICBMs. The 351st SMW was inactivated under START-I. The first silo was imploded on 8 December 1993 and the last on 15 December 1997.
455th Strategic Missile Wing
Minot AFB, North Dakota (28 June 1962 – 25 June 1968)
LGM-30B Minuteman I, 1962–68
Replaced by the 91st Strategic Missile Wing in June 1968
Historical Airborne Launch Control System Units
68th Strategic Missile Squadron (Ellsworth AFB, SD: 1967–1970)
91st Strategic Missile Wing (Minot AFB, ND: 1967–1969)
4th Airborne Command and Control Squadron (Ellsworth AFB, SD: 1970–1992)
2nd Airborne Command and Control Squadron (Offutt AFB, NE: 1970–1994)
7th Airborne Command and Control Squadron (Offutt AFB, NE: 1994–1998)
625th Missile Operations Flight/USSTRATCOM (Offutt AFB, NE: 1998–2007)
Converted to the 625th Strategic Operations Squadron in 2007, where the ALCS mission continues to this day
Support
532d Training Squadron – Vandenberg AFB, California (Missile Maintenance Training and Missile Initial Qualification Course)
315th Weapons Squadron – Nellis AFB, Nevada (ICBM Weapons Instructor Course)
526th ICBM Systems Wing – Hill Air Force Base, Utah
576th Flight Test Squadron – Vandenberg Air Force Base, California – "Top Hand"
625th Strategic Operations Squadron – Offutt AFB, Nebraska
Replacement
A request for proposal for development and maintenance of a Ground Based Strategic Deterrent (GBSD), a next-generation nuclear ICBM, was made by the US Air Force Nuclear Weapons Center, ICBM Systems Directorate, GBSD Division on 29 July 2016. The GBSD would replace the MMIII in the land-based portion of the US nuclear triad. The new missiles, to be phased in over a decade from the late 2020s, are estimated to cost around $86 billion over a fifty-year life cycle. Boeing, Lockheed Martin, and Northrop Grumman competed for the contract.
On 21 August 2017, the US Air Force awarded 3-year development contracts to Boeing and Northrop Grumman, for $349 million and $329 million, respectively. One of these companies was to be selected in 2020 to produce the ground-based nuclear ICBM. The GBSD program is expected to enter service in 2027 and remain active until 2075.
On 14 December 2019, it was announced that Northrop Grumman had won the competition to build the future ICBM. Northrop won by default, as their bid was at the time the only bid left to be considered for the GBSD program (Boeing had dropped out of the bidding contest earlier in 2019). The US Air Force stated that it would "proceed with an aggressive and effective sole-source negotiation."
Surviving decommissioned sites
Oscar One Alert Facility at Whiteman AFB
Delta One Alert Facility at Minuteman Missile National Historic Site
Delta Nine Silo at Minuteman Missile National Historic Site
Minuteman II missile Training Launch Facility at Ellsworth AFB
Oscar Zero Alert Facility at Ronald Reagan Minuteman Missile State Historic Site
November 33 Silo (topside only) at Ronald Reagan Minuteman Missile State Historic Site
Quebec-One Missile Alert Facility at Cheyenne, Wyoming (modified for Peacekeeper ICBM in 1986)
Preservation
The Minuteman Missile National Historic Site in South Dakota preserves a Launch Control Facility (D-01) and a launch facility (D-09) under the control of the National Park Service. The North Dakota State Historical Society maintains the Ronald Reagan Minuteman Missile Site, preserving a Missile Alert Facility, Launch Control Center and Launch Facility in the WS-133B "Deuce" configuration, near Cooperstown, North Dakota.
Comparable missiles
RS-28 Sarmat
DF-5
DF-41
PGM-17 Thor
R-36
RS-24 Yars
RT-2
RT-2PM2 Topol-M
UR-100N
Agni-VI
See also
Airborne Launch Control Center
LGM-30 Minuteman chronology
Missile combat crew
Missile launch control center
Nuclear weapons and the United States
Single Integrated Operational Plan
List of missiles
Notes
References
Citations
Sources
Further reading
External links
CSIS Missile Threat – Minuteman III
Minuteman Information Site
Strategic-Air-Command.com Minuteman Missile History
Directory of U.S. Military Rockets and Missiles
Nuclear Weapon Archive
Minuteman Missile National Historic Site
Federation of American Scientists
60 Minutes shocked to find 8-inch floppies drive nuclear deterrent – Ars Technica
1974 in spaceflight
Cold War weapons of the United States
Embedded systems
LGM-030
Nuclear weapons of the United States
MIRV capable missiles
Military equipment introduced in the 1960s | LGM-30 Minuteman | [
"Technology",
"Engineering"
] | 11,041 | [
"Embedded systems",
"Computer science",
"Computer engineering",
"Computer systems"
] |
49,375 | https://en.wikipedia.org/wiki/Larynx | The larynx, commonly called the voice box, is an organ in the top of the neck involved in breathing, producing sound, and protecting the trachea against food aspiration. The opening of the larynx into the pharynx, known as the laryngeal inlet, is about 4–5 centimeters in diameter. The larynx houses the vocal cords and manipulates pitch and volume, which is essential for phonation. It is situated just below where the tract of the pharynx splits into the trachea and the esophagus. The word 'larynx' (plural: larynges) comes from the Ancient Greek word lárunx ʻlarynx, gullet, throatʼ.
Structure
The triangle-shaped larynx consists largely of cartilages that are attached to one another, and to surrounding structures, by muscles or by fibrous and elastic tissue components. The larynx is lined by a ciliated columnar epithelium except for the vocal folds. The cavity of the larynx extends from its triangle-shaped inlet, at the epiglottis, to the circular outlet at the lower border of the cricoid cartilage, where it is continuous with the lumen of the trachea. The mucous membrane lining the larynx forms two pairs of lateral folds that project inward into its cavity. The upper folds are called the vestibular folds. They are also sometimes called the false vocal cords because they play no part in vocalization. The Kargyraa style of Tuvan throat singing makes use of these folds to sing an octave lower, and they are used in Umngqokolo, a type of Xhosa throat singing. The lower pair of folds are known as the vocal cords, which produce sounds needed for speech and other vocalizations. The slit-like space between the left and right vocal cords, called the rima glottidis, is the narrowest part of the larynx. The vocal cords and the rima glottidis are together designated as the glottis. The laryngeal cavity above the vestibular folds is called the vestibule. The middle portion of the cavity, between the vestibular folds and the vocal cords, is the ventricle of the larynx, or laryngeal ventricle. The infraglottic cavity is the open space below the glottis.
Location
In adult humans, the larynx is found in the anterior neck at the level of the cervical vertebrae C3–C6. It connects the inferior part of the pharynx (hypopharynx) with the trachea. The laryngeal skeleton consists of nine cartilages: three single (epiglottic, thyroid and cricoid) and three paired (arytenoid, corniculate, and cuneiform). The hyoid bone is not part of the larynx, though the larynx is suspended from the hyoid. The larynx extends vertically from the tip of the epiglottis to the inferior border of the cricoid cartilage. Its interior can be divided into supraglottis, glottis and subglottis.
Cartilages
There are nine cartilages, three unpaired and three paired (six in total), that support the mammalian larynx and form its skeleton.
Unpaired cartilages:
Thyroid cartilage: This forms the Adam's apple (also called the laryngeal prominence). It is usually larger in males than in females. The thyrohyoid membrane is a ligament associated with the thyroid cartilage that connects it with the hyoid bone. It supports the front portion of the larynx.
Cricoid cartilage: A ring of hyaline cartilage that forms the inferior wall of the larynx. It is attached to the top of the trachea. The median cricothyroid ligament connects the cricoid cartilage to the thyroid cartilage.
Epiglottis: A large, spoon-shaped piece of elastic cartilage. During swallowing, the pharynx and larynx rise. Elevation of the pharynx widens it to receive food and drink; elevation of the larynx causes the epiglottis to move down and form a lid over the glottis, closing it off.
Paired cartilages:
Arytenoid cartilages: Of the paired cartilages, the arytenoid cartilages are the most important because they influence the position and tension of the vocal cords. These are triangular pieces of mostly hyaline cartilage located at the posterosuperior border of the cricoid cartilage.
Corniculate cartilages: Horn-shaped pieces of elastic cartilage located at the apex of each arytenoid cartilage.
Cuneiform cartilages: Club-shaped pieces of elastic cartilage located anterior to the corniculate cartilages.
Muscles
The muscles of the larynx are divided into intrinsic and extrinsic muscles. The extrinsic muscles act on the region and pass between the larynx and parts around it but have their origin elsewhere; the intrinsic muscles are confined entirely within the larynx and have their origin and insertion there.
The intrinsic muscles are divided into respiratory and the phonatory muscles (the muscles of phonation). The respiratory muscles move the vocal cords apart and serve breathing. The phonatory muscles move the vocal cords together and serve the production of voice. The main respiratory muscles are the posterior cricoarytenoid muscles. The phonatory muscles are divided into adductors (lateral cricoarytenoid muscles, arytenoid muscles) and tensors (cricothyroid muscles, thyroarytenoid muscles).
Intrinsic
The intrinsic laryngeal muscles are responsible for controlling sound production.
Cricothyroid muscles lengthen and tense the vocal cords.
Posterior cricoarytenoid muscles abduct and externally rotate the arytenoid cartilages, resulting in abducted vocal cords.
Lateral cricoarytenoid muscles adduct and internally rotate the arytenoid cartilages, increasing medial compression.
Transverse arytenoid muscle adducts the arytenoid cartilages, resulting in adducted vocal cords.
Oblique arytenoid muscles narrow the laryngeal inlet by constricting the distance between the arytenoid cartilages.
Thyroarytenoid muscles narrow the laryngeal inlet, shorten the vocal cords, and lower voice pitch. The internal thyroarytenoid is the portion of the thyroarytenoid that vibrates to produce sound.
Notably, the only muscle capable of separating the vocal cords for normal breathing is the posterior cricoarytenoid. If this muscle is incapacitated on both sides, the inability to pull the vocal cords apart (abduct) will cause difficulty breathing. Bilateral injury to the recurrent laryngeal nerve would cause this condition. All of the intrinsic muscles are innervated by the recurrent laryngeal branch of the vagus except the cricothyroid muscle, which is innervated by the external laryngeal branch of the superior laryngeal nerve (a branch of the vagus).
Additionally, intrinsic laryngeal muscles present a constitutive Ca2+-buffering profile that predicts a better ability to handle calcium changes than other muscles. This profile is in agreement with their function as very fast muscles with a well-developed capacity for prolonged work. Studies suggest that mechanisms involved in the prompt sequestering of Ca2+ (sarcoplasmic reticulum Ca2+-reuptake proteins, plasma membrane pumps, and cytosolic Ca2+-buffering proteins) are particularly elevated in laryngeal muscles, indicating their importance for myofiber function and protection against disease, such as Duchenne muscular dystrophy. Furthermore, the different levels of Orai1 in rat intrinsic laryngeal muscles and extraocular muscles relative to limb muscle suggest a role for store-operated calcium entry channels in those muscles' functional properties and signaling mechanisms.
Extrinsic
The extrinsic laryngeal muscles support and position the larynx within the mid-cervical region.
Sternothyroid muscles depress the larynx. (Innervated by ansa cervicalis)
Omohyoid muscles depress the larynx. (Ansa cervicalis)
Sternohyoid muscles depress the larynx. (Ansa cervicalis)
Inferior constrictor muscles. (CN X)
Thyrohyoid muscles elevate the larynx. (C1)
Digastric elevates the larynx. (CN V3, CN VII)
Stylohyoid elevates the larynx. (CN VII)
Mylohyoid elevates the larynx. (CN V3)
Geniohyoid elevates the larynx. (C1)
Hyoglossus elevates the larynx. (CN XII)
Genioglossus elevates the larynx. (CN XII)
Nerve supply
The larynx is innervated by branches of the vagus nerve on each side. Sensory innervation to the glottis and laryngeal vestibule is by the internal branch of the superior laryngeal nerve. The external branch of the superior laryngeal nerve innervates the cricothyroid muscle. Motor innervation to all other muscles of the larynx and sensory innervation to the subglottis is by the recurrent laryngeal nerve. While the sensory input described above is (general) visceral sensation (diffuse, poorly localized), the vocal cords also receive general somatic sensory innervation (proprioceptive and touch) by the superior laryngeal nerve.
Injury to the external branch of the superior laryngeal nerve causes weakened phonation because the vocal cords cannot be tightened. Injury to one of the recurrent laryngeal nerves produces hoarseness; if both are damaged, the voice may or may not be preserved, but breathing becomes difficult.
Development
In newborn infants, the larynx is initially at the level of the C2–C3 vertebrae, and is further forward and higher relative to its position in the adult body. The larynx descends as the child grows.
Laryngeal cavity
The laryngeal cavity (cavity of the larynx) extends from the laryngeal inlet downwards to the lower border of the cricoid cartilage where it is continuous with that of the trachea.
It is divided into two parts by the projection of the vocal folds, between which is a narrow triangular opening, the rima glottidis.
The portion of the cavity of the larynx above the vocal folds is called the laryngeal vestibule; it is wide and triangular in shape, its base (anterior wall) presenting, near its center, the backward projection of the tubercle of the epiglottis.
It contains the vestibular folds, and between these and the vocal folds are the laryngeal ventricles.
The portion below the vocal folds is called the infraglottic cavity. It is at first of an elliptical form, but lower down it widens out, assumes a circular form, and is continuous with the tube of the trachea.
Function
Sound generation
Sound is generated in the larynx, and that is where pitch and volume are manipulated. The strength of expiration from the lungs also contributes to loudness.
Manipulation of the larynx is used to generate a source sound with a particular fundamental frequency, or pitch. This source sound is altered as it travels through the vocal tract, configured differently based on the position of the tongue, lips, mouth, and pharynx. The process of altering a source sound as it passes through the filter of the vocal tract creates the many different vowel and consonant sounds of the world's languages as well as tone, certain realizations of stress and other types of linguistic prosody. The larynx also has a similar function to the lungs in creating pressure differences required for sound production; a constricted larynx can be raised or lowered affecting the volume of the oral cavity as necessary in glottalic consonants.
The vocal cords can be held close together (by adducting the arytenoid cartilages) so that they vibrate (see phonation). The muscles attached to the arytenoid cartilages control the degree of opening. Vocal cord length and tension can be controlled by rocking the thyroid cartilage forward and backward on the cricoid cartilage (either directly by contracting the cricothyroids or indirectly by changing the vertical position of the larynx), by manipulating the tension of the muscles within the vocal cords, and by moving the arytenoids forward or backward. This causes the pitch produced during phonation to rise or fall. In most males the vocal cords are longer and have a greater mass than most females' vocal cords, producing a lower pitch.
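A crude first approximation of this length-and-tension control treats the vibrating portion of each vocal fold as an ideal string, which real folds only loosely resemble; under that assumption the fundamental frequency is

```latex
f_0 \approx \frac{1}{2L}\sqrt{\frac{T}{\mu}}
```

where L is the vibrating length, T the longitudinal tension, and μ the mass per unit length. This is consistent with the mechanisms described above: cricothyroid contraction stretches the folds, and the resulting increase in tension outweighs the increase in length, so pitch rises, while the longer, more massive folds typical of adult males yield a lower fundamental frequency.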
The vocal apparatus consists of two pairs of folds, the vestibular folds (false vocal cords) and the true vocal cords. The vestibular folds are covered by respiratory epithelium, while the vocal cords are covered by stratified squamous epithelium. The vestibular folds are not responsible for sound production, but rather for resonance. The exceptions to this are found in Tibetan chanting and Kargyraa, a style of Tuvan throat singing. Both make use of the vestibular folds to create an undertone. These false vocal cords do not contain muscle, while the true vocal cords do have skeletal muscle.
Other
The most important role of the larynx is its protective function, the prevention of foreign objects from entering the lungs by coughing and other reflexive actions. A cough is initiated by a deep inhalation through the vocal cords, followed by the elevation of the larynx and the tight adduction (closing) of the vocal cords. The forced expiration that follows, assisted by tissue recoil and the muscles of expiration, blows the vocal cords apart, and the high pressure expels the irritating object out of the throat. Throat clearing is less violent than coughing, but is a similar increased respiratory effort countered by the tightening of the laryngeal musculature. Both coughing and throat clearing are predictable and necessary actions because they clear the respiratory passageway, but both place the vocal cords under significant strain.
Another important role of the larynx is abdominal fixation, a kind of Valsalva maneuver in which the lungs are filled with air in order to stiffen the thorax so that forces applied for lifting can be translated down to the legs. This is achieved by a deep inhalation followed by the adduction of the vocal cords. Grunting while lifting heavy objects is the result of some air escaping through the adducted vocal cords ready for phonation.
Abduction of the vocal cords is important during physical exertion. The vocal cords are separated by about during normal respiration, but this width is doubled during forced respiration.
During swallowing, elevation of the posterior portion of the tongue levers (inverts) the epiglottis over the glottis' opening to prevent swallowed material from entering the larynx which leads to the lungs, and provides a path for a food or liquid bolus to "slide" into the esophagus; the hyo-laryngeal complex is also pulled upwards to assist this process. Stimulation of the larynx by aspirated food or liquid produces a strong cough reflex to protect the lungs.
In addition, intrinsic laryngeal muscles are spared in some muscle-wasting disorders, such as Duchenne muscular dystrophy, which may facilitate the development of novel strategies for the prevention and treatment of muscle wasting in a variety of clinical scenarios. ILM have a calcium-regulation system profile suggestive of a better ability to handle calcium changes than other muscles, and this may provide a mechanistic insight into their unique pathophysiological properties.
Clinical significance
Disorders
Several conditions can cause the larynx to function improperly. Symptoms include hoarseness, loss of voice, pain in the throat or ears, and breathing difficulties.
Acute laryngitis is the sudden inflammation and swelling of the larynx. It is caused by the common cold or by excessive shouting. It is not serious.
Chronic laryngitis is caused by smoking, dust, frequent yelling, or prolonged exposure to polluted air. It is much more serious than acute laryngitis.
Presbylarynx is a condition in which age-related atrophy of the soft tissues of the larynx results in weak voice and restricted vocal range and stamina. Bowing of the anterior portion of the vocal cords is found on laryngoscopy.
Ulcers may be caused by the prolonged presence of an endotracheal tube.
Polyps and vocal cord nodules are small bumps caused by prolonged exposure to tobacco smoke and vocal misuse, respectively.
Two related types of cancer of the larynx, namely squamous cell carcinoma and verrucous carcinoma, are strongly associated with repeated exposure to cigarette smoke and alcohol.
Vocal cord paresis is weakness of one or both vocal cords that can greatly impact daily life.
Idiopathic laryngeal spasm.
Laryngopharyngeal reflux is a condition in which acid from the stomach irritates and burns the larynx. Similar damage can occur with gastroesophageal reflux disease (GERD).
Laryngomalacia is a very common condition of infancy, in which the soft, immature cartilage of the upper larynx collapses inward during inhalation, causing airway obstruction.
Laryngeal perichondritis, the inflammation of the perichondrium of laryngeal cartilages, causing airway obstruction.
Laryngeal paralysis is a condition seen in some mammals (including dogs) in which the larynx no longer opens as wide as required for the passage of air, and impedes respiration. In mild cases it can lead to exaggerated or "raspy" breathing or panting, and in serious cases can pose a considerable need for treatment.
In Duchenne muscular dystrophy, intrinsic laryngeal muscles (ILM) are spared despite the lack of dystrophin and may serve as a useful model to study the mechanisms of muscle sparing in neuromuscular diseases. Dystrophic ILM presented a significant increase in the expression of calcium-binding proteins. The increase of calcium-binding proteins in dystrophic ILM may permit better maintenance of calcium homeostasis, with the consequent absence of myonecrosis. The results further support the concept that abnormal calcium buffering is involved in these neuromuscular diseases.
Treatments
Patients who have lost the use of their larynx are typically prescribed the use of an electrolarynx device. Larynx transplants are a rare procedure. The world's first successful operation took place in 1998 at the Cleveland Clinic, and the second took place in October 2010 at the University of California Davis Medical Center in Sacramento.
Other animals
Pioneering work on the structure and evolution of the larynx was carried out in the 1920s by the British comparative anatomist Victor Negus, culminating in his monumental work The Mechanism of the Larynx (1929). Negus, however, pointed out that the descent of the larynx reflected the reshaping and descent of the human tongue into the pharynx. This process is not complete until age six to eight years. Some researchers, such as Philip Lieberman, Dennis Klatt, Bart de Boer and Kenneth Stevens using computer-modeling techniques have suggested that the species-specific human tongue allows the vocal tract (the airway above the larynx) to assume the shapes necessary to produce speech sounds that enhance the robustness of human speech. Sounds such as the vowels of the words and , [i] and [u] (in phonetic notation), have been shown to be less subject to confusion in classic studies such as the 1950 Peterson and Barney investigation of the possibilities for computerized speech recognition.
In contrast, though other species have low larynges, their tongues remain anchored in their mouths and their vocal tracts cannot produce the range of speech sounds of humans. The ability to lower the larynx transiently in some species extends the length of their vocal tract, which as Fitch showed creates the acoustic illusion that they are larger. Research at Haskins Laboratories in the 1960s showed that speech allows humans to achieve a vocal communication rate that exceeds the fusion frequency of the auditory system by fusing sounds together into syllables and words. The additional speech sounds that the human tongue enables us to produce, particularly [i], allow humans to unconsciously infer the length of the vocal tract of the person who is talking, a critical element in recovering the phonemes that make up a word.
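The inference of vocal tract length from speech sounds can be illustrated with the standard idealization of the tract as a uniform tube, closed at the glottis and open at the lips, whose resonances (formants) fall at odd multiples of a quarter-wavelength; the specific lengths and speed of sound below are generic textbook values, not measurements from the studies cited.

```latex
F_n = \frac{(2n-1)\,c}{4L}, \qquad n = 1, 2, 3, \ldots
```

With c ≈ 350 m/s and L ≈ 0.17 m, a typical adult male, the first three formants fall near 515, 1,545, and 2,575 Hz. Because every formant scales as 1/L, the formant pattern of a vowel such as [i] carries the tract-length information a listener needs to normalize the speaker's phonemes.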
Non-mammals
Most tetrapod species possess a larynx, but its structure is typically simpler than that found in mammals. The cartilages surrounding the larynx are apparently a remnant of the original gill arches in fish, and are a common feature, but not all are always present. For example, the thyroid cartilage is found only in mammals. Similarly, only mammals possess a true epiglottis, although a flap of non-cartilaginous mucosa is found in a similar position in many other groups. In modern amphibians, the laryngeal skeleton is considerably reduced; frogs have only the cricoid and arytenoid cartilages, while salamanders possess only the arytenoids.
An example of a frog that possesses a larynx is the túngara frog. While the larynx is the main sound-producing organ in túngara frogs, it has particular significance for its contribution to the mating call, which consists of two components: a 'whine' and a 'chuck'. While the 'whine' induces female phonotaxis and allows species recognition, the 'chuck' increases mating attractiveness. The túngara frog produces the 'chuck' by vibrating the fibrous mass attached to the larynx.
Vocal folds are found only in mammals, and a few lizards. As a result, many reptiles and amphibians are essentially voiceless; frogs use ridges in the trachea to modulate sound, while birds have a separate sound-producing organ, the syrinx.
History
The ancient Greek physician Galen was the first to describe the larynx, calling it the "first and supremely most important instrument of the voice".
Additional images
See also
Articulatory phonetics
Electrolarynx
Histology of the vocal cords
Origin of speech
References
Notes
Sources
Human head and neck
Human voice
Phonetics
Human throat
Respiratory system
Speech organs | Larynx | [
"Biology"
] | 4,837 | [
"Organ systems",
"Respiratory system"
] |
49,392 | https://en.wikipedia.org/wiki/Affirmative%20action | Affirmative action (also sometimes called reservations, alternative access, positive discrimination or positive action in various countries' laws and policies) refers to a set of policies and practices within a government or organization seeking to address systemic discrimination. Historically and internationally, support for affirmative action has been justified by the idea that it may help to bridge inequalities in employment and pay, increase access to education, promote diversity, social equity, and social inclusion, and redress alleged wrongs, harms, or hindrances, an aim also called substantive equality.
The nature of affirmative-action policies varies from region to region and exists on a spectrum from a hard quota to merely targeting encouragement for increased participation. Some countries use a quota system, reserving a certain percentage of government jobs, political positions, and school vacancies for members of a certain group; an example of this is the reservation system in India. In some other jurisdictions where quotas are not used, minority-group members are given preference or special consideration in selection processes. In the United States, affirmative action by executive order originally meant selection without regard to race but preferential treatment was widely used in college admissions, as upheld in the 2003 Supreme Court case Grutter v. Bollinger, until 2023, when this was overturned in Students for Fair Admissions v. Harvard.
A variant of affirmative action more common in Europe is known as positive action, wherein equal opportunity is promoted by encouraging underrepresented groups into a field. This is often described as being "color blind", but some American sociologists have argued that this is insufficient to achieve substantive equality of outcomes based on race.
In the United States, affirmative action is controversial and public opinion on the subject is divided. Supporters of affirmative action argue that it promotes substantive equality for group outcomes and representation for groups, which are socio-economically disadvantaged or have faced historical discrimination or oppression. Opponents of affirmative action have argued that it is a form of reverse discrimination, that it tends to benefit the most privileged within minority groups at the expense of the least fortunate within majority groups, or that—when applied to universities—it can hinder minority students by placing them in courses for which they have not been adequately prepared.
Origins
The term "affirmative action" was first used in the United States in "Executive Order No. 10925", signed by President John F. Kennedy on 6 March 1961, which included a provision that government contractors "take affirmative action to ensure that applicants are employed, and employees are treated [fairly] during employment, without regard to their race, creed, color, or national origin". In 1965, President Lyndon B. Johnson issued Executive Order 11246 which required government employers to "hire without regard to race, religion and national origin" and "take affirmative action to ensure that applicants are employed and that employees are treated during employment, without regard to their race, color, religion, sex or national origin." The Civil Rights Act of 1964 prohibited discrimination on the basis of race, color, religion, sex or national origin. Neither executive order nor The Civil Rights Act authorized group preferences. The Senate floor manager of the bill, Senator Hubert Humphrey, declared that the bill “would prohibit preferential treatment for any particular group” adding “I will eat my hat if this leads to racial quotas.”
However, affirmative action in practice eventually became synonymous with preferences, goals, and quotas, as upheld or struck down by Supreme Court decisions, even though no law had been passed explicitly permitting discrimination in favor of disadvantaged groups.
Some state laws explicitly ban racial preferences, and attempts to pass laws explicitly legalizing racial preferences have failed.
Affirmative action is intended to alleviate under-representation and to promote the opportunities of defined minority groups within a society to give them equal access to that of the majority population. The philosophical basis of the policy has various rationales, including but not limited to compensation for past discrimination, correction of current discrimination, and the diversification of society. It is often implemented in governmental and educational settings to ensure that designated groups within a society can participate in all promotional, educational, and training opportunities.
The stated justification for affirmative action by its proponents is to help compensate for past discrimination, persecution or exploitation by the ruling class of a culture, and to address existing discrimination. More recently, concepts have moved beyond discrimination to include diversity, equity, and inclusion as motives for preferring historically underrepresented groups.
Methods of implementation
Quotas
Specific scholarships and financial aid for certain groups
Marketing and advertising aimed at the groups whose participation affirmative action is intended to increase
Specific training or emulation actions for identified audiences
Relaxation of selection criteria applied to target audiences
Women
Several studies have investigated the effect of affirmative action on women. In her review of affirmative action and the occupational advancement of minorities and women during 1973–2003, Kurtulus (2012) showed that the advancement of black, Hispanic, and white women into management, professional, and technical occupations attributable to affirmative action occurred primarily during the 1970s and early 1980s. During this period, federal contractors grew their shares of these groups more rapidly than non-contractors because of the implementation of affirmative action. But the positive effect vanished entirely in the late 1980s, which Kurtulus suggests may be due to a slowdown in the advancement of women and minorities into these occupations following the political shift on affirmative action that began under President Reagan. Becoming a federal contractor increased white women's share of professional occupations by 0.183 percentage points (9.3 percent) on average over these three decades, and increased black women's share by 0.052 percentage points (3.9 percent). It also increased Hispanic women's and black men's shares of technical occupations on average by 0.058 and 0.109 percentage points respectively (7.7 and 4.2 percent). These figures represent a substantial contribution of affirmative action to the overall occupational advancement of women and minorities over the three decades under study. A reanalysis of multiple scholarly studies, especially in Asia, considered the impact of four primary factors on support for affirmative action programs for women: gender, political factors, psychological factors, and social structure. Kim and Kim (2014) found that "affirmative action both corrects existing unfair treatment and gives women equal opportunity in the future."
Quotas
Law regarding quotas and affirmative action varies widely from nation to nation.
Caste-based and other group-based quotas are used in the reservation system.
In 2012, the European Commission approved a plan for women to constitute 40% of non-executive board directorships in large listed companies in Europe by 2020. Directive (EU) 2022/2381 requires EU member states to adopt, by 28 December 2024, laws ensuring that by 30 June 2026 members of the underrepresented sex hold at least 40% of non-executive director positions and at least 33% of all director positions, including both executive and non-executive directors, in listed companies. The directive expires on 31 December 2038.
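To make the arithmetic of such a board quota concrete, here is a minimal Python sketch. It is illustrative only: the function name is invented, and the simple round-up rule is an assumption for the example (the directive specifies its own, more detailed rounding rules).

```python
import math

def min_underrepresented(n_directors: int, target: float = 0.40) -> int:
    # Smallest head-count that reaches the target share on a board of
    # n_directors, assuming the target is met by rounding up (an
    # assumption for illustration, not the directive's exact rule).
    return math.ceil(n_directors * target)

for n in (5, 8, 10, 12):
    print(n, min_underrepresented(n))
# 5 2
# 8 4
# 10 4
# 12 5
```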
In Sweden, the Supreme Court has ruled that "affirmative action" ethnic quotas in universities are discrimination and hence unlawful. It said that the requirements for the intake should be the same for all. The justice minister said that the decision left no room for uncertainty.
National approaches
In some countries that have laws on racial equality, affirmative action is rendered illegal because it does not treat all races equally. This approach of equal treatment is sometimes described as being "color blind", in hopes that it is effective against discrimination without engaging in reverse discrimination.
In such countries, the focus tends to be on ensuring equal opportunity and, for example, targeted advertising campaigns to encourage ethnic minority candidates to join the police force. This is sometimes called positive action.
Africa
South Africa
Apartheid
The apartheid government, as a matter of state policy, favoured white-owned, especially Afrikaner-owned companies. The aforementioned policies achieved the desired results, but in the process, they marginalised and excluded black people. Skilled jobs were also reserved for white people, and black people were largely used as unskilled labour, enforced by legislation including the Mines and Works Act, the Job Reservations Act, the Native Building Workers Act, the Apprenticeship Act and the Bantu Education Act, creating and extending the "colour bar" in South African labour. White workers also successfully persuaded the government to enact laws that severely restricted black people's employment opportunities.
From the 1960s onward, the apartheid labour laws were progressively weakened. Consequently, from 1975 to 1990 the real wages of black manufacturing workers rose by 50%, while those of white workers rose by 1%.
The variation in skills and productivity between groups of people ultimately caused disparities in employment, occupation, and income within labour markets, advantaging certain groups of people. This in due course became the motivation for introducing affirmative action in South Africa following the end of apartheid.
Post-apartheid – the Employment Equity Act
Following the transition to democracy in 1994, the African National Congress-led government chose to implement affirmative action legislation to correct previous imbalances (a policy known as employment equity). As such, all employers were compelled by law to employ previously disenfranchised groups (blacks, Indians, and Coloureds). A related, but distinct concept is Black Economic Empowerment.
The Employment Equity Act and the Broad Based Black Economic Empowerment Act aim to promote and achieve equality in the workplace (in South Africa termed "equity"), by advancing people from designated groups. The designated groups who are to be advanced include all people of colour, women (including white women) and people with disabilities (including white people). Employment Equity legislation requires companies employing more than 50 people to design and implement plans to improve the representativity of workforce demographics, and report them to the Department of Labour.
Employment Equity also forms part of a company's Black Economic Empowerment scorecard: in a relatively complex scoring system, which allows for some flexibility in the manner in which each company meets its legal commitments, each company is required to meet minimum requirements in terms of representation by previously disadvantaged groups. The matters covered include equity ownership, representation at employee and management level (up to the board of director level), procurement from black-owned businesses and social investment programs, amongst others.
The policies of Employment Equity and, particularly, Black Economic empowerment have been criticised both by those who view them as discriminatory against white people, and by those who view them as ineffectual.
These laws cause disproportionally high costs for small companies and reduce economic growth and employment. The laws may give the black middle-class some advantage but can make the worse-off blacks even poorer. Moreover, the Supreme Court has ruled that in principle blacks may be favored, but in practice this should not lead to unfair discrimination against the others.
Affirmative action purpose
Affirmative action was introduced through the Employment Equity Act, No. 55 of 1998, four years after the end of apartheid. The act was passed to promote the constitutional right of equality and to exercise true democracy. Its aims were to eliminate unfair discrimination in employment, to ensure the implementation of employment equity to redress the effects of discrimination, to achieve a diverse workforce broadly representative of the country's people, to promote economic development and efficiency in the workforce, and to give effect to the obligations of the Republic as a member of the International Labour Organisation.
Many embraced the act; however, some concluded that it contradicted itself: it eliminates unfair discrimination in certain sectors of the national labour market only by imposing similar constraints elsewhere.
With the introduction of affirmative action, black economic empowerment (BEE) rose further in South Africa. BEE was not a moral initiative to redress the wrongs of the past but a strategy to promote growth and realize the country's full potential, targeting the weakest link in the economy, inequality, in order to help develop it. This is evident in the statement by the Department of Trade and Industry: "As such, this strategy stresses a BEE process that is associated with growth, development and enterprise development, and not merely the redistribution of existing wealth". Similarities between BEE and affirmative action are apparent; however, there is a difference: BEE focuses more on employment equality than on taking wealth away from the skilled white labourers.
The main goal of affirmative action is for the country to reach its full potential. This would result in a completely diverse workforce in economic and social sectors, thus broadening the economic base and stimulating economic growth.
Outcome
Once affirmative action was applied within the country, many different outcomes arose, some positive and some negative, depending on one's approach to and view of the Employment Equity Act and affirmative action.
Positive:
Before democracy, the apartheid government discriminated against non-white races, so with affirmative action the country started to redress past discrimination. Affirmative action also focused on combating structural racism and racial inequality, hoping to maximize diversity at all levels of society and in all sectors. Achieving this would elevate the status of the perpetual underclass and restore equal access to the benefits of society.
Negative:
A quota system was implemented to achieve diversity targets in the workforce. These targets affected hiring and the level of skills in the workforce, ultimately impacting the free market. Affirmative action marginalised the Coloured and Indian communities in South Africa, and it developed and aided the middle and elite classes while leaving the lower class behind, widening the gap between the lower and middle classes and leading to class struggles and greater segregation. A sense of entitlement arose with the growth of the middle and elite classes, alongside racial entitlement. Some assert that affirmative action is discrimination in reverse. Negative consequences of affirmative action, specifically the quota system, drove skilled labour away and hampered economic growth, partly because very few international companies wanted to invest in South Africa. As a result of these outcomes, the concept is continually evolving.
South African jurist Martin van Staden argues that the way affirmative action and transformation policies have been implemented in South Africa has eroded state institutions, grown corruption, and undermined the rule of law in the country.
Ghana
The Parliament of Ghana passed the Affirmative Action Bill on July 30, 2024.
Asia
China
There is affirmative action in education for minority nationalities in China; this may equate to lowering the minimum requirements of the National University Entrance Examination, a mandatory exam for all students entering university. Liangshaoyikuan refers to a Chinese affirmative action policy in criminal justice.
Israel
A class-based affirmative action policy was incorporated into the admission practices of the four most selective universities in Israel during the early to mid-2000s. In evaluating the eligibility of applicants, neither their financial status nor their national or ethnic origins are considered. The emphasis, rather, is on structural disadvantages, especially neighborhood socioeconomic status and high school rigor, although several individual hardships are also weighed. This policy made the four institutions, especially their most selective departments, more diverse than they otherwise would have been. The rise in the geographic, economic, and demographic diversity of the student population suggests that the plan's focus on structural determinants of disadvantage yields broad diversity dividends.
Israeli citizens who are women, Arabs, black, or have disabilities are supported by affirmative action in civil service employment. Israeli citizens who are Arabs, black, or have disabilities are also entitled to full university scholarships from the state.
In her study of gender politics in Israel, Dafna Izraeli showed that the paradox of affirmative action for women directors is that the legitimation for legislating their inclusion on boards also resulted in the exclusion of women's interests as a legitimate issue on the boards' agendas. "The new culture of the men's club is seductive; token women are under pressure to become 'social males' and to prove their competence as directors, meaning that they are not significantly different from men. In the negotiation for status as worthy peers, emphasizing gender signals that a woman is an 'imposter', someone who does not rightfully belong in the position she is claiming to fill." Once affirmative action for women is fulfilled, it also feeds what Izraeli called the "group equality discourse", making it easier for other groups to claim a fairer distribution of resources. This suggests that affirmative action can have applications for different groups in Israel.
India
Reservation in India is a form of affirmative action designed to improve the well-being of Scheduled Castes and Scheduled Tribes (SC/ST), and Other Backward Classes (OBC), defined primarily by their caste. Members of these categories comprise about two-thirds of the population of India. According to the Constitution of India, up to 50% of all government-run higher education admissions and government job vacancies may be reserved for members of the SC/ST/OBC-NCL categories, and 10% for those in Economically Weaker Sections (EWS), with the remaining unreserved. In 2014, the Indian National Sample Survey found that 12% of surveyed Indian households had received academic scholarships, with 94% being on account of SC/ST/OBC membership, 2% based on financial weakness and 0.7% based on merit.
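To make the seat arithmetic implied by these caps concrete, the following minimal Python sketch splits a hypothetical intake. It is illustrative only: the function name, the flat 50/10 split, and the floor-division rounding are assumptions for the example, since actual reservation rules vary by state and institution.

```python
def split_seats(total_seats: int) -> dict:
    # Simplified split using the constitutional caps quoted above:
    # up to 50% of seats for SC/ST/OBC-NCL and 10% for EWS, with the
    # remainder unreserved. Floor division is an assumed rounding rule.
    sc_st_obc = total_seats * 50 // 100
    ews = total_seats * 10 // 100
    unreserved = total_seats - sc_st_obc - ews
    return {"SC/ST/OBC-NCL": sc_st_obc, "EWS": ews, "Unreserved": unreserved}

print(split_seats(200))
# {'SC/ST/OBC-NCL': 100, 'EWS': 20, 'Unreserved': 80}
```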
Indonesia
Indonesia has offered affirmative action for native Papuans in education, civil service selection, and police and army recruitment. After the 2019 Papua protests, many Papuan students chose to abandon their scholarships and return to their home provinces. The program has been subject to criticism, with complaints about insufficient quotas and alleged corruption. Prabowo Subianto, the Indonesian defense minister, has said he will direct more effort towards recruiting Papuans into the Indonesian National Armed Forces. The Ministry of Education and Culture also offers an education scholarship, called ADik, to native Papuans and to students from peripheral regions close to Indonesia's borders.
Malaysia
The Malaysian New Economic Policy (NEP) is a form of ethnicity-based affirmative action. Malaysia provides affirmative action to those deemed "Bumiputera", which includes the Malay population, the Orang Asli, and the indigenous peoples of Sabah and Sarawak, who together form a majority of the population. However, the indigenous Orang Asli do not have the same special rights as the rest of the Bumiputera, since they are not referenced within Article 153 itself.
The historical and common argument is that Malays have lower incomes than the Chinese and Indians, who have traditionally been involved in businesses and industries and who also arrived largely as migrant workers. Malaysia is a multi-ethnic country, with Malays making up close to 58% of the population; about 22% of the population is of Chinese descent, while those of Indian descent comprise about 6%.
In recent years the NEP has been dubbed a failure, as evidence points to an ever-growing wealth disparity among Malays that has widened the gap between rich and poor Malays, and the policy has been shown to benefit the already-rich Malays rather than achieving its intention of helping poor Malays.
(See also Bumiputra.) The mean incomes for Malays, Chinese, and Indians in 1957/58 were 134, 288, and 228 respectively; in 1967/68 they were 154, 329, and 245; and in 1970 they were 170, 390, and 300. The mean income disparity ratio for Chinese/Malays rose from 2.1 in 1957/58 to 2.3 in 1970, while the Indians/Malays disparity ratio rose from 1.7 to 1.8 over the same period.
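The disparity ratios quoted above follow directly from the mean incomes; this minimal Python check reproduces them (the dictionary layout is invented for the example, and the figures are the mean incomes given in the text, in the source's currency units):

```python
incomes = {
    "1957/58": {"Malay": 134, "Chinese": 288, "Indian": 228},
    "1967/68": {"Malay": 154, "Chinese": 329, "Indian": 245},
    "1970": {"Malay": 170, "Chinese": 390, "Indian": 300},
}

for year, group in incomes.items():
    # Disparity ratio = mean income of one group / mean income of Malays.
    chinese = group["Chinese"] / group["Malay"]
    indian = group["Indian"] / group["Malay"]
    print(f"{year}: Chinese/Malay = {chinese:.1f}, Indian/Malay = {indian:.1f}")
# 1957/58: Chinese/Malay = 2.1, Indian/Malay = 1.7
# 1967/68: Chinese/Malay = 2.1, Indian/Malay = 1.6
# 1970: Chinese/Malay = 2.3, Indian/Malay = 1.8
```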
Sri Lanka
In 1971 the Standardization policy of Sri Lankan universities was introduced as an affirmative action program for students from areas with lower rates of education than others, a consequence of missionary educational activity having been concentrated in the north and east, which were essentially the Tamil areas. After the colonial powers left, successive governments cultivated a historical myth that the British had practised communal favouritism towards Christians and the minority Tamil community for the entire period they controlled Sri Lanka. In fact, the Sinhalese benefitted from trade and plantation cultivation more than the other groups, and their language, culture, and the religion of Buddhism were fostered and made into mediums for schools, treatment the Tamil language did not receive; Tamils learned English instead, as there was no Tamil medium until near independence. Tamils' knowledge of English and education came from American missionary activity: the British, concerned that overseas Christian missionaries would anger the Sinhalese and destroy trading relationships, sent them to the Tamil areas instead to teach, thinking that, given their small numbers, this would have no consequences. Sending the missionaries to the north and east thus protected the Sinhalese and in fact showed favouritism to the majority group, in order to maintain trading relationships and the benefits derived from them, rather than to the minorities; the myth of divide and rule is untrue. Out of this incidental benefit of English and basic education, Tamils excelled, flourished, and were able to take many civil service jobs, to the chagrin of the Sinhalese. The 'policy of standardisation' was typical of affirmative action policies in that it required drastically lower standards for Sinhalese students than for the more academically prepared Tamils, who had to score about ten more marks to enter universities. The policy is in fact an example of discrimination against the Tamil ethnic group.
Taiwan
Legislation passed in 2004 requires that, for a firm with 100 or more employees wishing to compete for government contracts, at least 1 percent of its employees must be Taiwanese aborigines. The Ministry of Education and the Council of Aboriginal Affairs announced in 2002 that Taiwanese aboriginal students would have their high-school or undergraduate entrance exam scores boosted by 33% for demonstrating some knowledge of their tribal language and culture. The boost percentage has been revised several times; as of 2013 it is 35%.
Europe
Denmark
Greenlanders have special advantages when applying for university, college, or vocational degrees in Denmark. Under specific rules in force since 1 January 2014, Greenlanders can enter degree programs without the normally required grade averages by fulfilling certain criteria: they must have a grade average above 6.0 and have lived a certain number of years in Greenland.
Finland
In certain university education programs, including legal and medical education, there are quotas for persons who reach a certain standard of skills in the Swedish language; for students admitted in these quotas, the education is partially arranged in Swedish. The purpose of the quotas is to guarantee that a sufficient number of professionals with skills in Swedish are educated for nationwide needs. The quota system has met with criticism from the Finnish speaking majority, some of whom consider the system unfair. In addition to these linguistic quotas, women may get preferential treatment in recruitment for certain public sector jobs if there is a gender imbalance in the field.
France
No distinctions based on race, religion or sex are allowed under the 1958 French Constitution. Since the 1980s, a French version of affirmative action based on neighborhood has been in place for primary and secondary education. Some schools, in neighborhoods labeled "Priority Education Zones", are granted more funds than the others. Students from these schools also benefit from special policies in certain institutions (such as Sciences Po).
The French Ministry of Defence tried in 1990 to make it easier for young French soldiers of North-African descent to be promoted in rank and obtain driving licenses. After a strong protest by a young French lieutenant in the Ministry of Defence newspaper (Armées d'aujourd'hui), the driving license and rank plan was cancelled. After the Sarkozy election, a new attempt in favour of Arab-French students was made, but Sarkozy did not gain enough political support to change the French constitution. However, some French schools do implement affirmative action in that they are obligated to take a certain number of students from impoverished families.
Additionally, following the Norwegian example, from 27 January 2014 women must represent at least 20% of board members in all stock-exchange-listed or state-owned companies, rising to 40% from 27 January 2017. Appointments of men as directors are invalid as long as the quota is not met, and monetary penalties may apply to other directors.
Germany
Article 3 of the German Basic Law provides for equal rights of all people regardless of sex, race or social background. There are programs stating that if men and women have equal qualifications, women have to be preferred for a job; moreover, disabled people should be preferred to non-disabled people. This is typical for all positions in state and university service; postings typically use the phrase "We try to increase diversity in this line of work". In recent years, there has been a long public debate about whether to issue programs that would grant women privileged access to jobs in order to fight discrimination. Germany's Left Party brought up the discussion about affirmative action in Germany's school system. According to Stefan Zillich, quotas should be "a possibility" to help working-class children who did not do well in school gain access to a Gymnasium (university-preparatory school). Headmasters of Gymnasien have objected, saying that this type of policy would "be a disservice" to poor children.
Norway
On the boards of all public stock companies (ASA), each gender must be represented by at least 40%. This affects roughly 400 companies out of over 300,000 in total.
Seierstad & Opsahl in their study of the effects of affirmative action on presence, prominence, and social capital of women directors in Norway found that there are few boards chaired by a woman, from the beginning of the implementation of the affirmative action policy period to August 2009, the proportion of boards led by a woman has increased from 3.4% to 4.3%. This suggests that the law has had a marginal effect on the sex of the chair and the boards remain internally segregated. Although at the beginning of our observation period, only 7 of 91 prominent directors were women. The gender balance among prominent directors has changed considerably throughout the period, and at the end of the period, 107 women and 117 men were prominent directors. By applying more restrictive definitions of prominence, the proportion of directors who are women generally increases. If only considering directors with at least three directorships, 61.4% of them are women. When considering directors with seven or more directorships, all of them are women. Thus, affirmative action increases the female population in the director position.
A 2016 study found no effect of the ASA representation requirement on either valuation or profits of the affected companies, and also no correlation between the requirement and the restructuring of companies away from ASA.
Romania
Romani people are allocated quotas for access to public schools and state universities.
Soviet Union and Russia
Soon after the 1917 revolution, Inessa Armand, Lenin's secretary and lover, was instrumental in creating the Zhenotdel, which functioned until the 1930s as part of the international egalitarian and affirmative action movements. Quota systems existed in the USSR for various social groups, including ethnic minorities, women, and factory workers. Before 1934 ethnic minorities were officially described as culturally backward; in 1934 this term was deemed inappropriate. In the 1920s and early 1930s, the korenizatsiia policy applied affirmative action to ethnic minorities. Quotas existed for access to university education, offices in the Soviet system, and the Communist Party: for example, the position of First Secretary of a Soviet Republic's (or Autonomous Republic's) Party Committee was always filled by a representative of that republic's "titular ethnicity".
Russia retains this system partially. Quotas are abolished, but preferences for some ethnic minorities and inhabitants of certain territories remain.
Serbia
The Constitution of the Republic of Serbia from 2006 established the principles of equality and the prohibition of discrimination on any grounds. It also allows affirmative action as "special measures" for certain marginalized groups, such as national minorities, by specifically excluding it from the legal definition of discrimination. In Serbia the Roma national minority is enabled to enroll in public schools under more favorable conditions.
Slovakia
The Constitutional Court declared in October 2005 that affirmative action, i.e. "providing advantages for people of an ethnic or racial minority group", was against its Constitution.
United Kingdom
In the United Kingdom, hiring someone simply because of their protected-group status, without regard to their performance, is illegal. However, the law in the United Kingdom does allow for membership in a protected and disadvantaged group to be considered in hiring and promotion when the group is under-represented in a given area and if the candidates are of equal merit (in which case membership in a disadvantaged group can become a "tie-breaker").
The Equality Act 2010 established the principles of equality and their implementation in the UK. In the UK, any discrimination, quotas or favouritism due to sex, race and ethnicity among other "protected characteristics" is illegal by default in education, employment, during commercial transactions, in a private club or association, and while using public services, although exceptions exist, to wit: "Section 159 of the Equality Act 2010 allows an employer to treat an applicant or employee with a protected characteristic (eg race, sex or age) more favourably in connection with recruitment or promotion than someone without that characteristic who is as qualified for the role. The employer must reasonably think that people with the protected characteristic suffer a disadvantage or are under-represented in that particular activity. Taking the positive action must be a proportionate means of enabling or encouraging people to overcome the disadvantage or to take part in the activity."
Specific exemptions include:
As part of the Northern Ireland Peace Process, the Good Friday Agreement and the resulting Patten report required the Police Service of Northern Ireland to recruit 50% of its numbers from the Catholic community and 50% from the Protestant and other communities, in order to reduce any possible bias towards Protestants. This was later referred to as the '50:50' measure. (See also Independent Commission on Policing for Northern Ireland.)
The Sex Discrimination (Election Candidates) Act 2002 allowed the use of all-women shortlists to select more women as election candidates.
In 2019, an employment tribunal ruled that, while attempting to create a diverse force, the Cheshire Police had discriminated against a "well prepared" white heterosexual male. The ruling stated that "while positive action can be used to boost diversity, it should only be applied to distinguish between candidates who were all equally well qualified for a role".
North America
Canada
The equality section of the Canadian Charter of Rights and Freedoms explicitly permits affirmative action type legislation, although the Charter does not require legislation that gives preferential treatment. Subsection 2 of Section 15 states that the equality provisions do "not preclude any law, program or activity that has as its object the amelioration of conditions of disadvantaged individuals or groups including those that are disadvantaged because of race, national or ethnic origin, colour, religion, sex, age or mental or physical disability".
The Canadian Employment Equity Act requires employers in federally regulated industries to give preferential treatment to four designated groups: women, persons with disabilities, aboriginal peoples, and visible minorities. Fewer than one-third of Canadian universities offer alternative admission requirements for students of aboriginal descent. Some provinces and territories also have affirmative-action-type policies. For example, in the Northwest Territories in the Canadian north, aboriginal people are given preference for jobs and education and are considered to have P1 status. Non-aboriginal people who were born in the NWT or have resided half of their life there are considered P2, as are women and people with disabilities.
United States
The policy of affirmative action dates to the Reconstruction Era in the United States, 1863–1877. Current policy was introduced in the early 1960s in the United States, as a way to combat racial discrimination in the hiring process, with the concept later expanded to address gender discrimination. Affirmative action was first created from Executive Order 10925, which was signed by President John F. Kennedy on 6 March 1961 and required that government employers "not discriminate against any employee or applicant for employment because of race, creed, color, or national origin" and "take affirmative action to ensure that applicants are employed and that employees are treated during employment, without regard to their race, creed, color, or national origin" but did not require or permit group preferences.
On 24 September 1965, President Lyndon B. Johnson signed Executive Order 11246, thereby replacing Executive Order 10925, but continued to use the same terminology that did not require or permit group preferences. Affirmative action was extended to sex by Executive Order 11375, which amended Executive Order 11246 on 13 October 1967 by adding "sex" to the list of protected categories. In the U.S., affirmative action's original purpose was to pressure institutions into compliance with the nondiscrimination mandate of the Civil Rights Act of 1964. The Civil Rights Acts do not cover discrimination based on veteran status, disabilities, or age (40 years and older); these groups may be protected from discrimination under different laws.
Affirmative action has been the subject of numerous court cases and has been questioned on its constitutional legitimacy. In 2003, a Supreme Court decision regarding affirmative action in higher education (Grutter v. Bollinger, 539 U.S. 306 (2003)) permitted educational institutions to consider race as a factor when admitting students. Alternatively, some colleges use financial criteria to attract racial groups that have typically been under-represented and typically have lower living conditions. Some states such as California (California Civil Rights Initiative), Michigan (Michigan Civil Rights Initiative), and Washington (Initiative 200) have passed constitutional amendments banning public institutions, including public schools, from practicing affirmative action within their respective states. In 2014, the U.S. Supreme Court held that "States may choose to prohibit the consideration of racial preferences in governmental decisions". By that time eight states (Oklahoma, New Hampshire, Arizona, Nebraska, Michigan, Florida, Washington, and California) had already banned affirmative action. Critics report that colleges quietly use illegal quotas to discriminate against people of Asian, Jewish, and Caucasian backgrounds and have launched numerous lawsuits to stop them.
On June 29, 2023, the Supreme Court ruled 6–2 that the use of race in college admissions is unconstitutional under the Equal Protection Clause of the 14th Amendment in Students for Fair Admissions v. Harvard.
Oceania
New Zealand
Individuals of Māori or other Polynesian descent are often afforded improved access to university courses, or have scholarships earmarked specifically for them. Such preferential access has in the past faced criticism, particularly at the University of Auckland, where, citing the phenomenon known as mismatch theory, critics have accused the schemes of setting students up to fail, pointing to a lack of transparency about the preferred groups' graduation rates and the university's failure to inform students of such historical statistics, which date back to the 1970s. Affirmative action is provided for under section 73 of the Human Rights Act 1993 and section 19(2) of the New Zealand Bill of Rights Act 1990. Affirmative action in New Zealand is most often pursued indirectly, by encouraging members of groups favored by affirmative action to take jobs in sectors where they are underrepresented. Diversity Awards NZ is a New Zealand organization whose goal is to "celebrate excellence in workplace diversity, equity and inclusion".
Under section 73 of the Human Rights Act 1993, affirmative action would be permissible if:
Done in good faith;
For the purpose of assisting individuals or groups with a characteristic pertaining to a prohibited ground of discrimination; and
The individuals or groups in question need (or may reasonably be supposed to need) assistance in order to achieve an equal place with other members of the community.
South America
Brazil
Some Brazilian universities (state and federal) have created systems of preferred admissions (quotas) for racial minorities (blacks and Amerindians), the poor, and people with disabilities. There are also quotas of up to 20% of vacancies reserved for people with disabilities in the civil public services. The Democrats party, accusing the board of directors of the University of Brasília of "resurrecting Nazi ideals", challenged before the Supreme Federal Court the constitutionality of the quotas the university reserves for minorities. The Supreme Court unanimously upheld their constitutionality on 26 April 2012.
International organizations
United Nations
The International Convention on the Elimination of All Forms of Racial Discrimination stipulates (in Article 2.2) that affirmative action programs may be required of countries that ratified the convention, in order to rectify systematic discrimination. It states, however, that such programs "shall in no case entail as a consequence the maintenance of unequal or separate rights for different racial groups after the objectives for which they were taken have been achieved".
The United Nations Human Rights Committee states that "the principle of equality sometimes requires States parties to take affirmative action in order to diminish or eliminate conditions which cause or help to perpetuate discrimination prohibited by the Covenant. For example, in a State where the general conditions of a certain part of the population prevent or impair their enjoyment of human rights, the State should take specific action to correct those conditions. Such action may involve granting for a time to the part of the population concerned certain preferential treatment in specific matters as compared with the rest of the population. However, as long as such action is needed to correct discrimination, in fact, it is a case of legitimate differentiation under the Covenant."
Responses
The principle of affirmative action is to promote societal equality through the preferential treatment of socioeconomically disadvantaged people. Often, these people are disadvantaged for historical reasons, such as oppression or slavery.
Historically and internationally, support for affirmative action has sought to achieve a range of goals: bridging inequalities in employment and pay; increasing access to education; enriching state, institutional, and professional leadership with the full spectrum of society; redressing apparent past wrongs, harms, or hindrances, in particular addressing the apparent social imbalance left in the wake of slavery and slave laws.
A 2017 study of temporary federal affirmative action regulation in the United States estimated that the regulation "increases the black share of employees over time: in 5 years after an establishment is first regulated, the black share of employees increases by an average of 0.8 percentage points. Strikingly, the black share continues to grow at a similar pace even after an establishment is deregulated. [The author] argue[s] that this persistence is driven in part by affirmative action inducing employers to improve their methods for screening potential hires."
Critics of affirmative action offer a variety of arguments as to why it is counterproductive or should be discontinued. For example, critics may argue that affirmative action hinders reconciliation, replaces old wrongs with new wrongs, undermines the achievements of minorities, and encourages individuals to identify themselves as disadvantaged, even if they are not. It may increase racial tension and benefit the more privileged people within minority groups at the expense of the least fortunate within majority groups.
Legal scholar Stanley Fish suggests that opponents of affirmative action often argue it is a form of reverse discrimination, and that any effort to cure discrimination through affirmative action is wrong because it, in turn, is another form of discrimination. He says this is a false equivalence, since those opposed to affirmative action are motivated "not from any wrong done to [them]" but by a desire to continue marginalizing others. Journalist Vann R. Newkirk II says that critics of affirmative action often claim court cases such as Fisher v. University of Texas, which held that colleges have some discretion to consider race when making admissions decisions, demonstrate that discrimination occurs in the name of affirmative action. He says this is one of several "misconceptions" often used to engender "white resentment" in opposition to affirmative action.
According to scholar George Sher, some critics of affirmative action say that it devalues the accomplishments of individuals chosen on the basis of the social groups to which they belong rather than their qualifications. Legal scholar Tseming Yang and others have also discussed the challenge of fraudulent self-identification when implementing affirmative action policies: because some individuals from non-preferred groups may designate themselves as members of preferred groups to access the programs' benefits, Yang suggests, verifying individuals' race becomes a "necessary evil".
Critics of affirmative action suggest that programs may benefit the members of the targeted group that least need the benefit—that is, those who have the greatest social, economic and educational advantages within the targeted group. They may argue that at the same time the people who lose the most to affirmative action are the least fortunate members of non-preferred groups. Political scientist Charles Murray has said that beneficiaries are often wholly unqualified for the opportunity made available, citing his belief in the innate differences between races. He reaffirmed these views in his essay "The Advantages of Social Apartheid", in which he advocates separation of people based on race and intelligence.
Another criticism of affirmative action is that it may reduce the incentives of both the preferred and non-preferred to perform at their best. Beneficiaries of affirmative action may conclude that it is unnecessary to work as hard, and those who do not benefit may perceive hard work as futile.
Mismatching
Mismatching is the supposed negative effect that affirmative action has when, in order to meet quotas, it places a student into a college that is too difficult for them. In the absence of affirmative action, a student may be admitted to a college that matches their academic ability and therefore have a better chance of graduating; mismatch may instead increase the chance that the student drops out or fails the course, thus hurting the intended beneficiaries of affirmative action. In 2017, researcher Andrew J. Hill found that affirmative action bans resulted in a reduction in minority students completing four-year STEM degrees, and suggests this indicates that the mismatch hypothesis is unfounded. He says this is evidence that affirmative action may be effective in "some circumstances", such as in encouraging greater minority engagement in STEM degrees. In 2020, researcher Zachary Bleemer found that an affirmative action ban in California (Prop 209) had resulted in average wage drops of 5% annually among underrepresented minorities aged 24–34 in STEM industries, especially affecting Hispanic people.
In 2007, Gail Heriot, a professor of law at the University of San Diego and a member of the U.S. Commission on Civil Rights, discussed the evidence in support of mismatching in law courses. She pointed to a study by Richard Sander which suggests there were 7.9% fewer Black attorneys than if there had been no affirmative action. Sander suggests that mismatching meant Black students were more likely to drop out of law school and fail bar exams. Sander's paper on mismatching has been criticized by several law professors, including Ian Ayres and Richard Brooks from Yale, who argue that eliminating affirmative action would actually reduce the number of Black lawyers by 12.7%. Furthermore, they suggest that students attending higher-ranking colleges do better than those who don't. A 2008 study by Jesse Rothstein and Albert H. Yoon said Sander's results were "plausible", but said that eliminating affirmative action would "lead to a 63 percent decline in black matriculants at all law schools and a 90 percent decline at elite law schools". They dismissed the mismatch theory, concluding that "one cannot credibly invoke mismatch effects to argue that there are no benefits" to affirmative action. In a 2016 review of previous studies, Peter Arcidiacono and Michael Lovenheim suggested that more African-American students attending less-selective schools would significantly improve first-attempt pass rates at the state bar, but cautioned that such improvements could be outweighed by decreases in law school attendance.
A 2011 study of data held by Duke University said there was no evidence of mismatch, and proposed that mismatch could only occur if a selective school possessed private information about students' prospects at the college which it failed to share. Providing such information to prospective students would avoid mismatch because the students could choose another school that was a better match. A 2016 study on affirmative action in India said there was no evidence for the mismatching hypothesis.
Polls
According to a poll taken by USA Today in 2005, a majority of Americans supported affirmative action for women, while views on minority groups were more split. Men are only slightly more likely than women to support affirmative action for women, though a majority of both do. However, a slight majority of Americans believed that affirmative action went beyond ensuring access and into the realm of preferential treatment. A Quinnipiac poll from June 2009 found that 55% of Americans felt that affirmative action in general should be discontinued, though 55% supported it for people with disabilities. A Gallup poll from 2005 showed that 72% of black Americans and 44% of white Americans supported racial affirmative action (with 21% and 49% opposing respectively), with support and opposition among Hispanic people falling between those of black people and white people. Support among black people, unlike among white people, had almost no correlation with political affiliation.
A 2009 Quinnipiac University Polling Institute survey found 65% of American voters opposed the application of affirmative action to homosexuals, with 27% indicating they supported it.
A Leger poll taken in 2010 found 59% of Canadians opposed considering race, gender, or ethnicity when hiring for government jobs.
A 2014 Pew Research Center poll found that 63% of Americans thought affirmative action programs aimed at increasing minority representation on college campuses were "a good thing", compared to 30% who thought they were "a bad thing". The following year, Gallup released a poll showing that 67% of Americans supported affirmative action programs aimed at increasing female representation, compared to 58% who supported such programs aimed at increasing the representation of racial minorities.
A 2019 Pew Research Center poll found 73% of Americans believe race or ethnicity should not factor into college admissions decisions. A few years later in 2022, a Pew Research Center poll found that 74% of Americans believe race or ethnicity should not factor into college admissions decisions.
See also
Achievement gap in the United States
Affirmative action bake sale
Anti-discrimination law
Anti-racism
Civil and political rights
Civil liberties
History of civil rights in the United States
Civil rights movement
Disability rights movement
Discrimination in the United States
Racism in the United States
Religious discrimination in the United States
Diversity, equity, and inclusion, the successor policy to affirmative action
Diversity (business)
Diversity training
Economic discrimination
Equal opportunity
Ethnic penalty
Gender equality
Women's rights
Legacy preferences
Men's rights movement
Minority rights
Multiculturalism
Multiracialism
Numerus clausus
Jewish quota
Political correctness
Positive liberty
Progressive stack
Quotaism
Racial quota
Race and intelligence
Reasonable accommodation
Reverse racism
Special measures for gender equality in the United Nations
Strong-basis-in-evidence standard
Social justice
Substantive equality
Tokenism
White backlash
Angry white male
White guilt
Reparations for slavery
References
Further reading
Bernstein, David E. Classified: The Untold Story of Racial Classification in America (Bombardier Books, 2022). ISBN 1637581734.
Dobbin, Frank. Inventing equal opportunity (Princeton UP, 2009), scholarly history argues that Congress and the courts followed the lead of programs created by corporations.
Gillon, Steven M. "The strange career of affirmative action: the Civil Rights Act of 1964" in his "That's Not What We Meant to Do": Reform and Its Unintended Consequences in Twentieth-Century America (W. W. Norton, 2000) pp. 120–162.
Harper, Shannon, and Barbara Reskin. "Affirmative action at school and on the job." Annual Review of Sociology 31 (2005): 357–379.
Katznelson, Ira. When Affirmative Action Was White: An Untold History of Racial Inequality in Twentieth-Century America (W. W. Norton, 2006)
Okechukwu, Amaka. To fulfill these rights: Political struggle over affirmative action and open admissions (Columbia UP, 2019).
Parashar, Sakshi. "Affirmative Action and Social Discrimination: A Functional Comparative Study of India, USA and South Africa." in Comparative Approaches in Law and Policy (Springer Nature Singapore, 2023) pp. 171–187.
Pierce, Jennifer. Racing for Innocence: Whiteness, Gender, and the Backlash against Affirmative Action (Stanford University Press, 2012).
Rubio, Philip F. A History of Affirmative Action, 1619–2000 (University Press of Mississippi, 2001).
Sowell, Thomas. Affirmative Action Around the World: An Empirical Study (Yale University Press, 2004), an analysis by a conservative.
Thurber, Timothy M. "Racial Liberalism, Affirmative Action, and the Troubled History of the President's Committee on Government Contracts." Journal of Policy History 18.4 (2006): 446-476.
Primary sources
Robinson, Jo Ann, ed. Affirmative action : a documentary history (2001)
External links
"Affirmative Action: History and Analysis" (2003) for secondary and middle schools
Affirmative Action collected news and commentary at The Washington Post
Does the success of Barack Obama mean we no longer need affirmative action? NOW on PBS investigates
An interview with Professor Randall Kennedy about the presidency of Barack Obama and affirmative action, by Clifford Armion for La Clé des langues.
Substantive Equality, Positive Action and Roma Rights in the European Union, Report by Minority Rights Group International
Intelligence Squared debate: Affirmative Action on Campus Does More Harm than Good
Discrimination
Race and law
Social justice
Liberalism
Left-wing politics
Majority–minority relations
Industrial and organizational psychology | Affirmative action | [
"Biology"
] | 10,156 | [
"Behavior",
"Aggression",
"Discrimination"
] |
49,396 | https://en.wikipedia.org/wiki/Volunteer%20%28botany%29 | In gardening and agronomic terminology, a volunteer is a plant that grows on its own, rather than being deliberately planted by a farmer or gardener. The action of such plants — to sprout or grow in this fashion — may also be described as volunteering.
Background
Volunteers often grow from seeds that float in on the wind, are dropped by birds, or are inadvertently mixed into compost. Some volunteers may be encouraged by gardeners once they appear, being watered, fertilized, or otherwise cared for, unlike weeds, which are unwanted volunteers.
Volunteers that grow from the seeds of specific cultivars are not reliably identical or similar to their parent and often differ significantly from it. Such open pollinated plants, if they show desirable characteristics, may be selected to become new cultivars.
Law
The term also has a legal meaning: a drug-producing plant such as cannabis is classed as a "volunteer" if it grows of its own accord from seeds or roots and was not intentionally planted. Special rules may govern how such plants are managed when they appear after the cultivar has been grown legitimately under a license.
Agriculture
In agricultural rotations, self-set plants from the previous year's crop may become established as weeds in the current crop. For example, volunteer winter wheat will germinate to quite high levels in a following oilseed rape crop, usually requiring chemical control measures.
In agricultural research, high purity of a harvested crop is often desirable. To achieve this, a group of temporary workers typically walks the crop rows looking for volunteer, or "rogue", plants, an exercise referred to as "roguing".
See also
Domestication
Escaped plant
Hemerochory
Invasive species
Noxious weed
Weed
References
Botany
Crops
Horticulture
Drug control law | Volunteer (botany) | [
"Chemistry",
"Biology"
] | 365 | [
"Drug control law",
"Regulation of chemicals",
"Plants",
"Botany"
] |
49,399 | https://en.wikipedia.org/wiki/XY%20sex-determination%20system | The XY sex-determination system is a sex-determination system present in many mammals, including humans, some insects (Drosophila), some snakes, some fish (guppies), and some plants (Ginkgo tree).
In this system, the sex of an individual usually is determined by a pair of sex chromosomes. Typically, females have two of the same kind of sex chromosome (XX), and are called the homogametic sex. Males typically have two different kinds of sex chromosomes (XY), and are called the heterogametic sex. In humans, the presence of the Y chromosome is responsible for triggering male development; in the absence of the Y chromosome, the fetus will undergo female development. In most species with XY sex determination, an organism must have at least one X chromosome in order to survive.
The XY system contrasts in several ways with the ZW sex-determination system found in birds, some insects, many reptiles, and various other animals, in which the heterogametic sex is female. A temperature-dependent sex determination system is found in some reptiles and fish.
Mechanisms
All animals have a set of DNA coding for genes present on chromosomes. In humans, most mammals, and some other species, two of the chromosomes, called the X chromosome and Y chromosome, code for sex. In these species, one or more genes on the Y chromosome determine maleness. Offspring receive two sex chromosomes: an offspring with two X chromosomes (XX) will develop female characteristics, and an offspring with an X and a Y chromosome (XY) will develop male characteristics. Rare exceptions exist, such as individuals with Swyer syndrome, who have XY chromosomes and a female phenotype, and individuals with de la Chapelle syndrome, who have XX chromosomes and a male phenotype. In one instance, a seemingly normal female with a vagina, cervix, and ovaries had XY chromosomes; the details and mechanism behind this case are unknown, though it could potentially be due to chimerism.
Mammals
In most mammals, sex is determined by the presence of the Y chromosome. This makes individuals with XXY and XYY karyotypes males, and individuals with X0 (a single X) and XXX karyotypes females.
In the 1930s, Alfred Jost determined that the presence of testosterone was required for Wolffian duct development in the male rabbit.
SRY is a sex-determining gene on the Y chromosome in the therians (placental mammals and marsupials). Non-human mammals use several genes on the Y chromosome.
Not all male-specific genes are located on the Y chromosome. The platypus, a monotreme, uses five pairs of different XY chromosomes with six groups of male-linked genes, AMH being the master switch.
Humans
A single gene (SRY) present on the Y chromosome acts as a signal to set the developmental pathway towards maleness. Presence of this gene starts off the process of virilization. This and other factors result in the sex differences in humans. The cells in females, with two X chromosomes, undergo X-inactivation, in which one of the two X chromosomes is inactivated. The inactivated X chromosome remains within a cell as a Barr body.
Other animals
Some species of turtles have convergently evolved XY sex determination systems, specifically those in Chelidae and Staurotypinae.
Other species (including most Drosophila species) use the presence of two X chromosomes to determine femaleness: one X chromosome gives putative maleness, but the presence of Y chromosome genes is required for normal male development. In the fruit fly, individuals with XY are male and individuals with XX are female; however, individuals with XXY or XXX can also be female, and individuals with a single X (X0) can be male.
Plants
Angiosperms
Although dioecious angiosperms with XY sex determination account for less than 5% of all angiosperm species, the sheer diversity of angiosperms means that the total number of species with XY sex determination is actually quite high, estimated at around 13,000 species. Molecular and evolutionary studies also show that XY sex determination has evolved independently many times in upwards of 175 unique families, with a recent study suggesting its evolution has independently occurred hundreds to thousands of times.
Many economically important crops are known to have an XY system of sex determination, including kiwifruit, asparagus, grapes and date palms.
Gymnosperms
In sharp contrast to angiosperms, approximately 65% of gymnosperms are dioecious. Some families which contain members that are known to have a XY system of sex determination include the cycad families Cycadaceae and Zamiaceae, Ginkgoaceae, Gnetaceae and Podocarpaceae.
Other systems
Whilst XY sex determination is the most familiar, since it is the system that humans use, there are a range of alternative systems found in nature. The inverse of the XY system (called ZW to distinguish it) is used in birds and many insects, in which it is the females that are heterogametic (ZW), while males are homogametic (ZZ).
Many insects of the order Hymenoptera instead have a haplo-diploid system, where the females are full diploids (with all chromosomes appearing in pairs) but males are haploid (having just one copy of all chromosomes). Some other insects have the X0 sex-determination system, where just the sex-determining chromosome varies in ploidy (XX in females but X in males), while all other chromosomes appear in pairs in both sexes.
Influences
Genetic
In an interview for the Rediscovering Biology website, researcher Eric Vilain described how the paradigm changed since the discovery of the SRY gene:
In an interview by Scientific American in 2007, Vilain was asked: "It sounds as if you are describing a shift from the prevailing view that female development is a default molecular pathway to active pro-male and antimale pathways. Are there also pro-female and antifemale pathways?" He replied:
In mammals, including humans, the SRY gene triggers the development of non-differentiated gonads into testes rather than ovaries. However, there are cases in which testes can develop in the absence of an SRY gene (see sex reversal). In these cases, the SOX9 gene, involved in the development of testes, can induce their development without the aid of SRY. In the absence of SRY and SOX9, no testes can develop and the path is clear for the development of ovaries. Even so, the absence of the SRY gene or the silencing of the SOX9 gene is not enough to trigger sexual differentiation of a fetus in the female direction. A recent finding suggests that ovary development and maintenance is an active process, regulated by the expression of a "pro-female" gene, FOXL2. In an interview for the TimesOnline edition, study co-author Robin Lovell-Badge explained the significance of the discovery:
Implications
Looking into the genetic determinants of human sex can have wide-ranging consequences. Scientists have been studying different sex determination systems in fruit flies and animal models to attempt an understanding of how the genetics of sexual differentiation can influence biological processes like reproduction, ageing and disease.
Maternal
In humans and many other species of animals, the father determines the sex of the child. In the XY sex-determination system, the female-provided ovum contributes an X chromosome and the male-provided sperm contributes either an X chromosome or a Y chromosome, resulting in female (XX) or male (XY) offspring, respectively.
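Because each parent contributes one sex chromosome, the expected sex ratio can be illustrated with a short simulation. The following is a minimal sketch, assuming an idealized 1:1 mix of X- and Y-bearing sperm; the function name and trial count are invented for the example.

```python
import random

def simulate_offspring(trials=100_000):
    """Simulate XY inheritance: the ovum always contributes an X,
    while the sperm contributes an X or a Y with equal probability
    (an idealized assumption; real sperm ratios can deviate)."""
    counts = {"XX": 0, "XY": 0}
    for _ in range(trials):
        ovum = "X"                     # an XX mother can only pass on an X
        sperm = random.choice("XY")    # the father's gamete decides the sex
        counts["".join(sorted(ovum + sperm))] += 1
    return counts

print(simulate_offspring())
# e.g. {'XX': 50123, 'XY': 49877} -- roughly half female (XX), half male (XY)
```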
Hormone levels in the male parent affect the sex ratio of sperm in humans. Maternal influences also impact which sperm are more likely to achieve conception.
Human ova, like those of other mammals, are covered with a thick translucent layer called the zona pellucida, which the sperm must penetrate to fertilize the egg. Once viewed simply as an impediment to fertilization, recent research indicates the zona pellucida may instead function as a sophisticated biological security system that chemically controls the entry of the sperm into the egg and protects the fertilized egg from additional sperm.
Recent research indicates that human ova may produce a chemical which appears to attract sperm and influence their swimming motion. However, not all sperm are positively impacted; some appear to remain uninfluenced and some actually move away from the egg.
Maternal influences may also affect sex determination in such a way as to produce fraternal twins equally weighted between one male and one female.
The time at which insemination occurs during the estrus cycle has been found to affect the sex ratio of the offspring of humans, cattle, hamsters, and other mammals. Hormonal and pH conditions within the female reproductive tract vary with time, and this affects the sex ratio of the sperm that reach the egg.
Sex-specific mortality of embryos also occurs.
History
Ancient ideas on sex determination
Aristotle believed incorrectly that the sex of an infant is determined by how much heat a man's sperm had during insemination. He wrote:
Aristotle claimed in error that the male principle was the driver behind sex determination, such that if the male principle was insufficiently expressed during reproduction, the fetus would develop as a female.
20th century genetics
Nettie Stevens (working with beetles) and Edmund Beecher Wilson (working with hemiptera) are credited with independently discovering, in 1905, the chromosomal XY sex-determination system in insects: the fact that males have XY sex chromosomes and females have XX sex chromosomes. In the early 1920s, Theophilus Painter demonstrated that sex in humans (and other mammals) was also determined by the X and Y chromosomes, and the chromosomes that make this determination are carried by the spermatozoa.
The first clues to the existence of a factor that determines the development of testis in mammals came from experiments carried out by Alfred Jost, who castrated embryonic rabbits in utero and noticed that they all acquired a female phenotype.
In 1959, C. E. Ford and his team, in the wake of Jost's experiments, discovered that the Y chromosome was needed for a fetus to develop as male when they examined patients with Turner's syndrome, who grew up as phenotypic females, and found them to be X0 (hemizygous for X, with no Y). At the same time, Jacobs & Strong described a case of a patient with Klinefelter syndrome (XXY), which implicated the presence of a Y chromosome in the development of maleness.
All these observations led to a consensus that a dominant gene that determines testis development (TDF) must exist on the human Y chromosome. The search for this testis-determining factor (TDF) led Peter Goodfellow's team of scientists to discover, in 1990, a region of the Y chromosome that is necessary for male sex determination, which was named SRY (sex-determining region of the Y chromosome).
See also
Sexual differentiation (human)
Secondary sex characteristic (human)
Y-chromosomal Adam
Sex Determination in Silene
Sex-determination system
Haplodiploid sex-determination system
Z0 sex-determination system
ZW sex-determination system
Temperature-dependent sex determination
X chromosome
Y chromosome
XY gonadal dysgenesis
References
External links
Sex Determination and Differentiation
Can Mammalian Mothers Control the Sex of their Offspring? (KQED Science article on San Diego Zoo research.)
Maternal Diet and Other Factors Affecting Offspring Sex Ratio: A Review, published in Biology of Reproduction
Sex Determination and the Maternal Dominance Hypothesis
Sperm-Ovum Interactions at WikiGenes
Sex-determination systems
Reproduction in mammals | XY sex-determination system | [
"Biology"
] | 2,484 | [
"Sex-determination systems",
"Sex"
] |
49,400 | https://en.wikipedia.org/wiki/Window | A window is an opening in a wall, door, roof, or vehicle that allows the passage of light and may also allow the passage of sound and sometimes air. Modern windows are usually glazed or covered in some other transparent or translucent material, a sash set in a frame in the opening; the sash and frame are also referred to as a window. Many glazed windows may be opened, to allow ventilation, or closed, to exclude inclement weather. Windows may have a latch or similar mechanism to lock the window shut or to hold it open by various amounts.
Types include the eyebrow window, fixed windows, hexagonal windows, single-hung, and double-hung sash windows, horizontal sliding sash windows, casement windows, awning windows, hopper windows, tilt, and slide windows (often door-sized), tilt and turn windows, transom windows, sidelight windows, jalousie or louvered windows, clerestory windows, lancet windows, skylights, roof windows, roof lanterns, bay windows, oriel windows, thermal, or Diocletian, windows, picture windows, rose windows, emergency exit windows, stained glass windows, French windows, panel windows, double/triple-paned windows, and witch windows.
Etymology
The English-language word window originates from the Old Norse vindauga, from vindr ('wind') and auga ('eye'). In Norwegian Nynorsk and Icelandic, the Old Norse form has survived to this day (in Icelandic only as a less used word for a type of small open "window", not strictly a synonym for gluggi, the Icelandic word for 'window'). In Swedish, the word vindöga remains as a term for a hole through the roof of a hut, and in the Danish vindue and Norwegian Bokmål vindu, the direct link to eye is lost, just as for window. The Danish (but not the Norwegian) word is pronounced fairly similarly to window.
Window is first recorded in the early 13th century, and originally referred to an unglazed hole in a roof. Window replaced the Old English eagþyrl, which literally means 'eye-hole', and eagduru, 'eye-door'. Many Germanic languages, however, adopted the Latin word fenestra to describe a window with glass, such as standard Swedish fönster or German Fenster. The use of window in English is probably because of the Scandinavian influence on the English language by means of loanwords during the Viking Age. In English, the word fenester was used as a parallel until the mid-18th century. Fenestration is still used to describe the arrangement of windows within a façade, as well as defenestration, meaning 'to throw out of a window'.
History
The Romans were the first known to use glass for windows, a technology likely first produced in Roman Egypt, in Alexandria around 100 AD. Depictions of windows can be seen in ancient Egyptian wall art and in sculptures from Assyria. Paper windows were economical and widely used in ancient China, Korea, and Japan. In England, glass became common in the windows of ordinary homes only in the early 17th century, whereas windows made up of panes of flattened animal horn were used as early as the 14th century. In the 19th century American west, greased paper windows came to be used by pioneering settlers. Modern-style floor-to-ceiling windows became possible only after the industrial plate glass making processes were fully perfected.
Technologies
In the 13th century BC, the earliest windows were unglazed openings in a roof to admit light during the day. Later, windows were covered with animal hide, cloth, or wood. Shutters that could be opened and closed came next. Over time, windows were built that both protected the inhabitants from the elements and transmitted light, using multiple small pieces of translucent material, such as flattened pieces of translucent animal horn, paper sheets, thin slices of marble (such as fengite), or pieces of glass, set in frameworks of wood, iron or lead. In the Far East, paper was used to fill windows.
The Romans were the first known users of glass for windows, exploiting a technology likely first developed in Roman Egypt. Specifically, in Alexandria around 100 CE, cast-glass windows, albeit with poor optical properties, began to appear, but these were small thick productions, little more than blown-glass jars (cylindrical shapes) flattened out into sheets with circular striation patterns throughout. It would be over a millennium before window glass became transparent enough to see through clearly, as we expect now. In 1154, Al-Idrisi described glass windows as a feature of the palace belonging to the king of the Ghana Empire.
Over the centuries techniques were developed to shear through one side of a blown glass cylinder and produce thinner rectangular window panes from the same amount of glass material. This gave rise to tall narrow windows, usually separated by a vertical support called a mullion. Mullioned glass windows were the windows of choice among the European well-to-do, whereas paper windows were economical and widely used in ancient China, Korea, and Japan. In England, glass became common in the windows of ordinary homes only in the early-17th century, whereas windows made up of panes of flattened animal horn were used as early as the 14th century.
Modern-style floor-to-ceiling windows became possible only after the industrial plate glass-making processes were perfected in the late 19th century. Modern windows are usually filled with glass, although transparent plastic is also used.
Fashions and trends
The introduction of lancet windows into Western European church architecture from the 12th century CE built on a tradition of arched windows inserted between columns, and led not only to tracery and elaborate stained-glass windows but also to a long-standing motif of pointed or rounded window-shapes in ecclesiastical buildings, still seen in many churches today.
Peter Smith discusses overall trends in early-modern rural Welsh window architecture:
Up to about 1680 windows tended to be horizontal in proportion, a shape suitable for lighting the low-ceilinged rooms that had resulted from the insertion of the upper floor into the hall-house. After that date vertically proportioned windows came into fashion, partly at least as a response to the Renaissance taste for the high ceiling. Since 1914 the wheel has come full circle and a horizontally proportioned window is again favoured.
The spread of plate-glass technology made possible the introduction of picture windows (in Levittown, Pennsylvania, founded 1951–1952).
Many modern-day windows may have a window screen or mesh, often made of aluminum or fibreglass, to keep insects out when the window is opened. Windows are primarily designed to provide a vital connection with the outdoors, giving occupants visual access to the ever-changing events outside. This connection safeguards the health and well-being of a building's occupants, who would otherwise suffer the detrimental effects of enclosed, windowless spaces. Among the many criteria for the design of windows, several pivotal criteria have emerged in daylight standards: location, time, weather, nature, and people. Of these criteria, windows designed to provide views of nature are considered the most important.
Types
Cross
A cross-window is a rectangular window usually divided into four lights by a mullion and transom that form a Latin cross.
Eyebrow
The term eyebrow window is used in two ways: a curved top window in a wall or an eyebrow dormer; and a row of small windows usually under the front eaves such as the James-Lorah House in Pennsylvania.
Fixed
A fixed window is a window that cannot be opened, whose function is limited to allowing light to enter (unlike an unfixed window, which can open and close). Clerestory windows in church architecture are often fixed. Transom windows may be fixed or operable. This type of window is used in situations where light or vision alone is needed as no ventilation is possible in such windows without the use of trickle vents or overglass vents.
Single-hung sash
A single-hung sash window is a window that has one sash that is movable (usually the bottom one) and the other fixed. This is the earlier form of sliding sash window and is also cheaper.
Double-hung sash
A sash window is the traditional style of window in the United Kingdom, and many other places that were formerly colonized by the UK, with two parts (sashes) that overlap slightly and slide up and down inside the frame. The two parts are not necessarily the same size; where the upper sash is smaller (shorter) it is termed a cottage window. Currently, most new double-hung sash windows use spring balances to support the sashes, but traditionally, counterweights held in boxes on either side of the window were used. These were and are attached to the sashes using pulleys of either braided cord or, later, purpose-made chain. The three types of spring balance are the tape (or clock spring) balance, the channel (or block-and-tackle) balance, and the spiral (or tube) balance.
Double-hung sash windows were traditionally often fitted with shutters. Sash windows can be fitted with simplex hinges that let the window be locked into hinges on one side, while the rope on the other side is detached—so the window can be opened for fire escape or cleaning.
Foldup
A foldup has two equal sashes similar to a standard double-hung but folds upward allowing air to pass through nearly the full-frame opening. The window is balanced using either springs or counterbalances, similar to a double-hung. The sashes can be either offset to simulate a double-hung, or in-line. The inline versions can be made to fold inward or outward. The inward swinging foldup windows can have fixed screens, while the outward swinging ones require movable screens. The windows are typically used for screen rooms, kitchen pass-throughs, or egress.
Horizontal sliding sash
A horizontal sliding sash window has two or more sashes that overlap slightly but slide horizontally within the frame. In the UK, these are sometimes called Yorkshire sash windows, presumably because of their traditional use in that county.
Casement
A casement window is a window with a hinged sash that swings in or out like a door. It comprises a side-hung, top-hung (also called an "awning window"; see below), or occasionally bottom-hung sash, or a combination of these types, sometimes with fixed panels on one or more sides of the sash. In the US, these are usually opened using a crank, but in parts of Europe they tend to use projection friction stays and espagnolette locking. Formerly, plain hinges were used with a casement stay. Handing applies to casement windows to determine the direction of swing; a casement window may be left-handed, right-handed, or double. The casement window is the dominant type now found in modern buildings in the UK and many other parts of Europe.
Awning
An awning window is a casement window that is hung horizontally, hinged on top, so that it swings outward like an awning. In addition to being used independently, they can be stacked, several in one opening, or combined with fixed glass. They are particularly useful for ventilation.
Hopper
A hopper window is a bottom-pivoting casement window that opens by tilting vertically, typically to the inside, resembling a hopper chute.
Pivot
A pivot window is a window hung on one hinge on each of two opposite sides which allows the window to revolve when opened. The hinges may be mounted top and bottom (Vertically Pivoted) or at each jamb (Horizontally Pivoted). The window will usually open initially to a restricted position for ventilation and, once released, fully reverse and lock again for safe cleaning from inside. Modern pivot hinges incorporate a friction device to hold the window open against its weight and may have restriction and reversed locking built-in. In the UK, where this type of window is most common, they were extensively installed in high-rise social housing.
Tilt and slide
A tilt and slide window is a window (more usually a door-sized window) where the sash tilts inwards at the top similar to a hopper window and then slides horizontally behind the fixed pane.
Tilt and turn
A tilt and turn window can both tilt inwards at the top or open inwards from hinges at the side. This is the most common type of window in Germany, its country of origin, and it is also widespread in many other European countries. In Europe, it is usual for these to be of the "turn first" type, i.e. when the handle is turned to 90 degrees the window opens in the side-hung mode, and with the handle turned to 180 degrees the window opens in the bottom-hung mode. Most usually in the UK, the windows will be "tilt first", i.e. bottom-hung at 90 degrees for ventilation and side-hung at 180 degrees for cleaning the outer face of the glass from inside the building.
Transom
A transom window is a window above a door. In an exterior door the transom window is often fixed, in an interior door, it can open either by hinges at top or bottom, or rotate on hinges. It provided ventilation before forced air heating and cooling. A fan-shaped transom is known as a fanlight, especially in the British Isles.
Side light
Windows beside a door or another window are called side-lights, wing-lights, margin-lights, or flanking windows.
Jalousie window
Also known as a louvered window, the jalousie window consists of parallel slats of glass or acrylic that open and close like a Venetian blind, usually using a crank or a lever. They are used extensively in tropical architecture. A jalousie door is a door with a jalousie window.
Clerestory
A clerestory window is a window set in a roof structure or high in a wall, used for daylighting.
Skylight
A skylight is a window built into a roof structure. This type of window allows for natural daylight and moonlight.
Roof
A roof window is a sloped window used for daylighting, built into a roof structure. It is one of the few windows that could be used as an exit. Larger roof windows meet building codes for emergency evacuation.
Roof lantern
A roof lantern is a multi-paned glass structure, resembling a small building, built on a roof to admit daylight or moonlight. It sometimes includes an additional clerestory and may also be called a cupola.
Bay
A bay window is a multi-panel window, with at least three panels set at different angles to create a protrusion from the wall line.
Oriel
An oriel window is a form of bay window. This form most often appears in Tudor-style houses and monasteries. It projects from the wall and does not extend to the ground. Originally a form of porch, they are often supported by brackets or corbels.
Thermal
Thermal, or Diocletian, windows are large semicircular windows (or niches) which are usually divided into three lights (window compartments) by two mullions. The central compartment is often wider than the two side lights on either side of it.
Picture
A picture window is a large fixed window in a wall, typically without glazing bars, or glazed with only perfunctory glazing bars (muntins) near the edge of the window. Picture windows provide an unimpeded view, as if framing a picture.
Multi-lite
A multi-lite window is a window glazed with small panes of glass separated by wooden or lead glazing bars, or muntins, arranged in a decorative glazing pattern often dictated by the building's architectural style. Due to the historic unavailability of large panes of glass, the multi-lite (or lattice) window was the most common window style until the beginning of the 20th century, and is still used in traditional architecture.
Emergency exit/egress
An emergency exit window is a window big enough and low enough so that occupants can escape through the opening in an emergency, such as a fire. In many countries, exact specifications for emergency windows in bedrooms are given in many building codes. Specifications for such windows may also allow for the entrance of emergency rescuers. Vehicles, such as buses, aircraft, and trains frequently have emergency exit windows as well.
Stained glass
A stained glass window is a window composed of pieces of colored glass, transparent, translucent or opaque, frequently portraying persons or scenes. Typically the glass in these windows is separated by lead glazing bars. Stained glass windows were popular in Victorian houses and some Wrightian houses, and are especially common in churches.
French
A French door has two rows of upright rectangular glass panes (lights) extending its full length; and two of these doors on an exterior wall and without a mullion separating them, that open outward with opposing hinges to a terrace or porch, are referred to as a French window. Sometimes these are set in pairs or multiples thereof along the exterior wall of a very large room, but often, one French window is placed centrally in a typically sized room, perhaps among other fixed windows flanking the feature. French windows are known as porte-fenêtre in France and portafinestra in Italy, and frequently are used in modern houses.
Double-paned
Double-paned windows have two parallel panes (slabs of glass) with a separation of typically about 1 cm; this space is permanently sealed and filled at the time of manufacture with dry air or other dry nonreactive gas. Such windows provide a marked improvement in thermal insulation (and usually in acoustic insulation as well) and are resistant to fogging and frosting caused by temperature differential. They are widely used for residential and commercial construction in intemperate climates. In the UK, double-paned and triple-paned are referred to as double-glazing and triple-glazing. Triple-paned windows are now a common type of glazing in central to northern Europe. Quadruple glazing is now being introduced in Scandinavia.
Hexagonal window
A hexagonal window is a hexagon-shaped window, resembling a bee cell or crystal lattice of graphite. The window can be vertically or horizontally oriented, openable or dead. It can also be regular or elongately-shaped and can have a separator (mullion). Typically, the cellular window is used for an attic or as a decorative feature, but it can also be a major architectural element to provide the natural lighting inside buildings.
Guillotine window
A guillotine window is a window that opens vertically. Guillotine windows have more than one sliding frame, and open from bottom to top or top to bottom.
Terms
EN 12519 is the European standard that describes window terms officially used in EU Member States. The main terms are:
Light, or Lite, is the area between the outer parts of a window (transom, sill and jambs), usually filled with a glass pane. Multiple panes are divided by mullions when load-bearing, muntins when not.
Lattice light is a compound window pane made up of small pieces of glass held together in a lattice.
Fixed window is a unit of one non-moving lite. The terms single-light, double-light, etc., refer to the number of these glass panes in a window.
Sash unit is a window consisting of at least one sliding glass component, typically composed of two lites (known as a double-light).
Replacement window in the United States means a framed window designed to slip inside the original window frame from the inside after the old sashes are removed. In Europe, it usually means a complete window including a replacement outer frame.
New construction window, in the US, means a window with a nailing fin that is inserted into a rough opening from the outside before applying siding and inside trim. A nailing fin is a projection on the outer frame of the window in the same plane as the glazing, which overlaps the prepared opening, and can thus be 'nailed' into place. In the UK and mainland Europe, windows in new-build houses are usually fixed with long screws into expanding plastic plugs in the brickwork. A gap of up to 13 mm is left around all four sides, and filled with expanding polyurethane foam. This makes the window fixing weatherproof but allows for expansion due to heat.
Lintel is a beam over the top of a window, also known as a transom.
Window sill is the bottom piece in a window frame. Window sills slant outward to drain water away from the inside of the building.
Secondary glazing is an additional frame applied to the inside of an existing frame, usually used on protected or listed buildings to achieve higher levels of thermal and sound insulation without compromising the look of the building
Decorative millwork is the moulding, cornices and lintels often decorating the surrounding edges of the window.
Labeling
The United States NFRC Window Label lists the following terms (a worked numerical example follows the list):
Thermal transmittance (U-factor), best values are around U-0.15 (equal to 0.8 W/(m²·K))
Solar heat gain coefficient (SHGC), ratio of solar heat (infrared) passing through the glass to incident solar heat
Visible transmittance (VT), ratio of transmitted visible light divided by incident visible light
Air leakage (AL), measured in cubic foot per minute per linear foot of crack between sash and frame
Condensation resistance (CR), measured between 1 and 100 (the higher the number, the higher the resistance of the formation of condensation)
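As a rough illustration of how two of these label values translate into energy flows, the sketch below applies the steady-state relations Q = U·A·ΔT for conductive loss and Q = SHGC·A·I for solar gain. The window size, temperatures, and irradiance are invented example figures, and this simple model ignores air leakage and edge effects; it is not the NFRC rating procedure.

```python
def window_heat_flows(u_si, area_m2, delta_t_k, shgc, irradiance_w_m2):
    """Steady-state estimates for one window.

    u_si            -- thermal transmittance, W/(m^2*K)
    delta_t_k       -- indoor/outdoor temperature difference, K
    shgc            -- solar heat gain coefficient, 0..1
    irradiance_w_m2 -- solar power incident on the glazing, W/m^2
    """
    conductive_loss = u_si * area_m2 * delta_t_k      # Q = U * A * dT
    solar_gain = shgc * area_m2 * irradiance_w_m2     # admitted fraction of sun
    return conductive_loss, solar_gain

# A 2 m^2 window at the U-value quoted above (0.8 W/(m^2*K)),
# 20 K colder outside, 300 W/m^2 of sun on the glass:
loss, gain = window_heat_flows(0.8, 2.0, 20.0, shgc=0.4, irradiance_w_m2=300.0)
print(f"conductive loss: {loss:.0f} W, solar gain: {gain:.0f} W")
# conductive loss: 32 W, solar gain: 240 W
```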
The European harmonised standard hEN 14351-1, which deals with doors and windows, defines 23 characteristics (divided into essential and non-essential). Two other, preliminary European Norms under development deal with internal pedestrian doors (prEN 14351-2) and with smoke- and fire-resisting doors and openable windows (prEN 16034).
Construction
Windows can be a significant source of heat transfer. Therefore, insulated glazing units consist of two or more panes to reduce the transfer of heat.
Grids or muntins
These are the pieces of framing that separate a larger window into smaller panes. In older windows, large panes of glass were quite expensive, so muntins let smaller panes fill a larger space. In modern windows, light-colored muntins still provide a useful function by reflecting some of the light going through the window, making the window itself a source of diffuse light (instead of just the surfaces and objects illuminated within the room). By increasing the indirect illumination of surfaces near the window, muntins tend to brighten the area immediately around a window and reduce the contrast of shadows within the room.
Frame and sash construction
Frames and sashes can be made of the following materials:
Composites (also known as hybrid windows) first appeared around 1998 and combine materials such as aluminium and PVC, or wood, to obtain the aesthetics of one material with the functional benefits of another.
A special class of PVC window frames, uPVC window frames, became widespread beginning in the late 20th century, particularly in Europe: 83.5 million had been installed by 1998, with numbers still growing as of 2012.
Glazing and filling
Low-emissivity coated panes reduce heat transfer by radiation, which, depending on which surface is coated, helps prevent heat loss (in cold climates) or heat gains (in warm climates).
High thermal resistance can be obtained by evacuating or filling the insulated glazing units with gases such as argon or krypton, which reduces conductive heat transfer due to their low thermal conductivity. Performance of such units depends on good window seals and meticulous frame construction to prevent entry of air and loss of efficiency.
Modern double-pane and triple-pane windows often include one or more low-e coatings to reduce the window's U-factor (its insulation value, specifically its rate of heat loss). In general, soft-coat low-e coatings tend to result in a lower solar heat gain coefficient (SHGC) than hard-coat low-e coatings.
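A much-simplified way to see why the gas fill matters is to model the center of glass as thermal resistances in series. The sketch below counts only conduction through the panes and the cavity, plus conventional surface film resistances; real units also exchange heat by convection and radiation (which low-e coatings suppress), so the outputs are illustrative, not rated U-factors. The conductivities are standard textbook figures, and the function and constant names are invented for the example.

```python
# Center-of-glass U-value from a series thermal-resistance model.
# Simplified: pure conduction only; convection and radiation in the
# cavity are ignored, so rated U-values for real units will differ.
GLASS_K = 1.0                                               # W/(m*K), soda-lime glass
GAS_K = {"air": 0.026, "argon": 0.018, "krypton": 0.0095}   # W/(m*K)
R_SURFACE_FILMS = 0.13 + 0.04                               # m^2*K/W, interior + exterior

def u_value(panes=2, glass_t_m=0.004, gap_m=0.012, gas="air"):
    r = R_SURFACE_FILMS
    r += panes * glass_t_m / GLASS_K          # conduction through each pane
    r += (panes - 1) * gap_m / GAS_K[gas]     # conduction through each cavity
    return 1.0 / r

for gas in GAS_K:
    print(f"double glazing, {gas}: U = {u_value(gas=gas):.2f} W/(m^2*K)")
# Lower gas conductivity -> larger cavity resistance -> lower U-value,
# and adding a third pane (panes=3) lowers it further.
```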
Modern windows are usually glazed with one large sheet of glass per sash, while windows in the past were glazed with multiple panes separated by glazing bars, or muntins, due to the unavailability of large sheets of glass. Today, glazing bars tend to be decorative, separating windows into small panes of glass even though larger panes of glass are available, generally in a pattern dictated by the architectural style at use. Glazing bars are typically wooden, but occasionally lead glazing bars soldered in place are used for more intricate glazing patterns.
Other construction details
Many windows have movable window coverings such as blinds or curtains to keep out light, provide additional insulation, or ensure privacy.
Windows allow natural light to enter, but too much can have negative effects such as glare and heat gain. Additionally, while windows let the user see outside, there must be a way to maintain privacy on the inside. Window coverings are practical accommodations for these issues.
Impact of the sun
Sun incidence angle
Historically, windows are designed with surfaces parallel to vertical building walls. Such a design allows considerable solar light and heat penetration due to the most commonly occurring incidence of sun angles. In passive solar building design, an extended eave is typically used to control the amount of solar light and heat entering the window(s).
An alternative method is to calculate an optimum window mounting angle that accounts for summer sun load minimization, with consideration of actual latitude of the building. This process has been implemented, for example, in the Dakin Building in Brisbane, California—in which most of the fenestration is designed to reflect summer heat load and help prevent summer interior over-illumination and glare, by canting windows to nearly a 45 degree angle.
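The geometry behind such calculations is straightforward to sketch: at solar noon the sun's altitude is approximately 90° minus the latitude plus the solar declination, and a horizontal eave must project about height/tan(altitude) to shade a window fully. The following is a minimal sketch using that standard approximation; the latitude and window height are example values, not figures from the Dakin Building design.

```python
import math

def noon_altitude_deg(latitude_deg, declination_deg):
    """Approximate solar altitude at solar noon, in degrees."""
    return 90.0 - latitude_deg + declination_deg

def eave_projection_m(window_height_m, latitude_deg, declination_deg=23.44):
    """Horizontal eave depth that fully shades an equator-facing window
    of the given height at solar noon: depth = height / tan(altitude)."""
    alt = math.radians(noon_altitude_deg(latitude_deg, declination_deg))
    return window_height_m / math.tan(alt)

lat = 37.7  # roughly the latitude of Brisbane, California
print(f"summer noon sun: {noon_altitude_deg(lat, +23.44):.1f} deg")   # ~75.7
print(f"winter noon sun: {noon_altitude_deg(lat, -23.44):.1f} deg")   # ~28.9
print(f"eave to shade a 1.5 m window in midsummer: "
      f"{eave_projection_m(1.5, lat):.2f} m")                          # ~0.38 m
```

A shallow eave thus blocks the high summer sun while still admitting the low winter sun, which is the basic passive-solar trade-off the paragraph above describes.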
Solar window
Photovoltaic windows not only provide a clear view and illuminate rooms, but also convert sunlight to electricity for the building. In most cases, translucent photovoltaic cells are used.
Passive solar
Passive solar windows allow light and solar energy into a building while minimizing air leakage and heat loss. Properly positioning these windows in relation to sun, wind, and landscape—while properly shading them to limit excess heat gain in summer and shoulder seasons, and providing thermal mass to absorb energy during the day and release it when temperatures cool at night—increases comfort and energy efficiency. Properly designed in climates with adequate solar gain, these can even be a building's primary heating system.
Coverings
A window covering is a shade or screen that provides multiple functions. Some coverings, such as drapes and blinds provide occupants with privacy. Some window coverings control solar heat gain and glare. There are external shading devices and internal shading devices. Low-e window film is a low-cost alternative to window replacement to transform existing poorly-insulating windows into energy-efficient windows. For high-rise buildings, smart glass can provide an alternative.
See also
Airflow window
Architectural glass
Crown glass
Demerara window
Display window
Fortochka
Glass mullion system
Greased paper window
Insulated glazing
Plate glass
Porthole
Rose window
Window tax
Window treatment
Witch window
References
External links
Roman Glass from Metropolitan Museum of Art
Architectural elements
Glass | Window | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 5,617 | [
"Glass",
"Building engineering",
"Unsolved problems in physics",
"Homogeneous chemical mixtures",
"Architectural elements",
"Amorphous solids",
"Components",
"Architecture"
] |
49,401 | https://en.wikipedia.org/wiki/Hall | In architecture, a hall is a relatively large space enclosed by a roof and walls. In the Iron Age and early Middle Ages in northern Europe, a mead hall was where a lord and his retainers ate and also slept. Later in the Middle Ages, the great hall was the largest room in castles and large houses, and where the servants usually slept. As more complex house plans developed, the hall remained a large room for dancing and large feasts, often still with servants sleeping there. It was usually immediately inside the main door. In modern British houses, an entrance hall next to the front door remains an indispensable feature, even if it is essentially merely a corridor.
Today, the (entrance) hall of a house is the space next to the front door or vestibule leading to the rooms directly and/or indirectly. Where the hall inside the front door of a house is elongated, it may be called a passage, corridor (from Spanish corredor used in El Escorial and 100 years later in Castle Howard), or hallway.
History
In warmer climates, the houses of the wealthy were often built around a courtyard, but in northern areas manors were built around a great hall. The hall was home to the hearth and was where all the residents of the house would eat, work, and sleep. One common example of this form is the longhouse. Only particularly messy tasks would be done in separate rooms on the periphery of the hall. Still today the term hall is often used to designate a country house such as a hall house, or specifically a Wealden hall house, and manor houses.
In later medieval Europe, the main room of a castle or manor house was the great hall. In a medieval building, the hall was where the fire was kept. As heating technology improved and a desire for privacy grew, tasks moved from the hall to other rooms. First, the master of the house withdrew to private bedrooms and eating areas. Over time servants and children also moved to their own areas, while work projects were also given their own chambers leaving the hall for special functions. With time, its functions as dormitory, kitchen, parlour, and so on were divided into separate rooms or, in the case of the kitchen, a separate building.
Until the early modern era, the majority of the population lived in houses with a single room. In the 17th century, even lower classes began to have a second room, with the main chamber being the hall and the secondary room the parlor. The hall and parlor house was found in England and was a fundamental, historical floor plan in parts of the United States from 1620 to 1860.
In Europe, as the wealthy embraced multiple rooms initially the common form was the enfilade, with rooms directly connecting to each other. In 1597 John Thorpe is the first recorded architect to replace multiple connected rooms with rooms along a corridor each accessed by a separate door.
Other uses
Collegiate halls
Many institutions and buildings at colleges and universities are formally titled "___ Hall", typically being named after the person who endowed it, for example, King's Hall, Cambridge. Others, such as Lady Margaret Hall, Oxford, commemorate respected people. Between these in age, Nassau Hall at Princeton University began as the single building of the then college. In medieval origin, these were the halls in which the members of the university lived together during term time. In many cases, some aspect of this community remains.
Some of these institutions are titled "Hall" instead of "College" because at the time of their foundation they were not recognised as colleges (in some cases because their foundation predated the existence of colleges) and did not have the appropriate Royal Charter. Examples at the University of Oxford are:
St Edmund Hall
Hart Hall (now Hertford College)
Lady Margaret Hall
The (currently six) Permanent private halls.
In colleges of the universities of Oxford and Cambridge, the term "Hall" is also used for the dining hall for students, with High Table at one end for fellows. Typically, at "Formal Hall", gowns are worn for dinner during the evening, whereas for "informal Hall" they are not. The medieval collegiate dining hall, with a dais for the high table at the upper end and a screen passage at the lower end, is a modified or assimilated form of the Great hall.
Meeting hall
A hall is also a building consisting largely of a principal room, that is rented out for meetings and social affairs. It may be privately or government-owned, such as a function hall owned by one company used for weddings and cotillions (organized and run by the same company on a contractual basis) or a community hall available for rent to anyone, such as a British village hall.
Religious halls
In religious architecture, as in Islamic architecture, the prayer hall is a large room dedicated to the practice of worship. (example: the prayer hall of the Great Mosque of Kairouan in Tunisia). A hall church is a church with a nave and side aisles of approximately equal height. Many churches have an associated church hall used for meetings and other events.
Public buildings
Following a line of similar development, in office buildings and larger buildings (theatres, cinemas etc.), the entrance hall is generally known as the foyer (the French for fireplace). The atrium, a name sometimes used in public buildings for the entrance hall, was the central courtyard of a Roman house.
Types
In architecture, the term "double-loaded" describes corridors that connect to rooms on both sides. Conversely, a single-loaded corridor only has rooms on one side (and possible windows on the other). A blind corridor does not lead anywhere.
Billiard hall
City hall, town hall or village hall
Concert hall
Concourse (at a large transportation station)
Convention center (exhibition hall)
Dance hall
Dining hall
Firehall
Great room or great hall
Moot hall
Prayer hall, such as the sanctuary of a synagogue
Reading room
Residence hall
Trades hall (also called union hall, labour hall, etc.)
Waiting room (in large transportation stations)
See also
Hall of fame
References
External links
Rooms | Hall | [
"Engineering"
] | 1,237 | [
"Rooms",
"Architecture"
] |
49,402 | https://en.wikipedia.org/wiki/Closet | A closet (especially in North American English usage) is an enclosed space, with a door, used for storage, particularly that of clothes. Fitted closets are built into the walls of the house so that they take up no apparent space in the room. Closets are often built under stairs, thereby using awkward space that would otherwise go unused.
A piece of furniture such as a cabinet or chest of drawers serves the same purpose of storage, but is not a closet, which is an architectural feature rather than a piece of furniture. A closet always has space for hanging, where a cupboard may consist only of shelves for folded garments. Wardrobe can refer to a free-standing piece of furniture (also known as an armoire), but according to the Oxford English Dictionary, a wardrobe can also be a "large cupboard or cabinet for storing clothes or other linen", including "built-in wardrobe, fitted wardrobe, walk-in wardrobe, etc."
Other uses of the word
In Elizabethan and Middle English, closet referred to a small private room, an inner sanctum within a far larger house, used for prayer, reading, or study.
The use of "closet" for "toilet" dates back to 1662. In Indian English, this use continues. Related forms include earth closet and water closet (flush toilet). "Privy" meaning an outhouse derives from "private", making the connection with the Middle English use of "closet", above.
Types
Airing cupboard: A closet containing a water heater, with slatted shelves to allow air to circulate around the clothes or linen stored there.
Broom closet: A closet with top-to-bottom space used for storing cleaning items, like brooms, mops, vacuum cleaners, cleaning supplies, buckets, etc.
Coat closet: A closet located near the front door, usually used to store coats, jackets, hoodies, sweatshirts, gloves, hats, scarves, sunglasses, and boots or shoes. This kind of closet often has no shelving, only a rod and some floor space for items stored in boxes or drawers, though some have a top shelf for storage above the rod.
Custom closet: A closet that is made specifically to meet the needs of the user, like a kids closet.
Linen-press or linen closet: A tall, narrow closet. Typically located in or near bathrooms and/or bedrooms, such a closet contains shelves used to hold items such as toiletries and linens, including towels, washcloths, or sheets.
Pantry: A closet or cabinet in a kitchen used for storing food, dishes, linens, and provisions. The closet may have shelves for putting food on.
Spear closet: A closet made to use up otherwise unused space in a building.
Supply closet: A closet most commonly used for storing office supplies.
Utility closet: A closet most commonly used for storing house appliances and cleaning supplies
Walk-in closet: A storage room with enough space for someone to stand in it while accessing stored items. Larger ones used for clothes shade into dressing rooms.
Wall closet: A closet in a bedroom that is built into the wall. It may be closed by curtains or folding doors, in which clothes can be stored folded on shelves.
Wardrobe: A small closet used for storing clothes.
Closet tax question in colonial America
Though some sources claim that colonial American houses often lacked closets because of a "closet tax" imposed by the British crown, others argue that closets were absent in most houses simply because their residents had few possessions.
Closet organizers
Closet organizers are integrated shelving systems. Different materials have advantages and disadvantages:
Wire shelving: Moderately difficult to install; wire shelves are cheap but cannot hold much weight without sagging.
Wood shelving: Difficult to install, wood shelving is sturdier and more expensive than wire.
Tube shelving: Easy to install, tube shelving involves few pieces and requires no cutting or measuring.
See also
Cubby-hole, one name for the cupboard under the stairs
References
Home
Cabinets (furniture)
Clothing containers
Rooms | Closet | [
"Engineering"
] | 825 | [
"Rooms",
"Architecture"
] |
49,404 | https://en.wikipedia.org/wiki/Kitchen | A kitchen is a room or part of a room used for cooking and food preparation in a dwelling or in a commercial establishment. A modern middle-class residential kitchen is typically equipped with a stove, a sink with hot and cold running water, a refrigerator, and worktops and kitchen cabinets arranged according to a modular design. Many households have a microwave oven, a dishwasher, and other electric appliances. The main functions of a kitchen are to store, prepare and cook food (and to complete related tasks such as dishwashing). The room or area may also be used for dining (or small meals such as breakfast), entertaining and laundry. The design and construction of kitchens is a huge market all over the world.
Commercial kitchens are found in restaurants, cafeterias, hotels, hospitals, educational and workplace facilities, army barracks, and similar establishments. These kitchens are generally larger and equipped with bigger and more heavy-duty equipment than a residential kitchen. For example, a large restaurant may have a huge walk-in refrigerator and a large commercial dishwasher machine. In some instances, commercial kitchen equipment such as commercial sinks is used in household settings as it offers ease of use for food preparation and high durability.
In developed countries, commercial kitchens are generally subject to public health laws. They are inspected periodically by public-health officials, and forced to close if they do not meet hygienic requirements mandated by law.
History
Middle Ages
Early medieval European longhouses had an open fire under the highest point of the building. The "kitchen area" was between the entrance and the fireplace. In wealthy homes, there was typically more than one kitchen. In some homes, there were upwards of three kitchens. The kitchens were divided based on the types of food prepared in them.
The kitchen might be separate from the great hall due to the smoke from cooking fires and the chance the fires may get out of control. Few medieval kitchens survive as they were "notoriously ephemeral structures".
Colonial America
In Connecticut, as in other colonies of New England during Colonial America, kitchens were often built as separate rooms and were located behind the parlor and keeping room or dining room. One early record of a kitchen is found in the 1648 inventory of the estate of a John Porter of Windsor, Connecticut. The inventory lists goods in the house "over the kittchin" and "in the kittchin". The items listed in the kitchen were: silver spoons, pewter, brass, iron, arms, ammunition, hemp, flax and "other implements about the room".
Technological developments such as the Rumford roaster and the kitchen range enabled more efficient use of space and fuel.
Rationalization
A stepping stone to the modern fitted kitchen was the Frankfurt Kitchen, designed by Margarete Schütte-Lihotzky for social housing projects in 1926. This kitchen measured about 1.9 m by 3.4 m, and was built to optimize kitchen efficiency and lower building costs. The design was the result of detailed time-motion studies and interviews with future tenants to identify what they needed from their kitchens. Schütte-Lihotzky's fitted kitchen was built in some 10,000 apartments in housing projects erected in Frankfurt in the 1930s.
Materials
The Frankfurt Kitchen of 1926 was made of several materials depending on the application. Modern built-in kitchens use particle boards or MDF, decorated with a variety of materials and finishes including wood veneers, lacquer, glass, melamine, laminate, ceramic and eco gloss. Very few manufacturers produce home built-in kitchens from stainless steel. Until the 1950s, steel kitchens were used by architects, but this material was displaced by the cheaper particle board panels sometimes decorated with a steel surface.
Domestic kitchen planning
Domestic (or residential) kitchen design is a relatively recent discipline. The first ideas to optimize the work in the kitchen go back to Catharine Beecher's A Treatise on Domestic Economy (1843, revised and republished together with her sister Harriet Beecher Stowe as The American Woman's Home in 1869). Beecher's "model kitchen" propagated for the first time a systematic design based on early ergonomics. The design included regular shelves on the walls, ample workspace, and dedicated storage areas for various food items. Beecher even separated the functions of preparing food and cooking it altogether by moving the stove into a compartment adjacent to the kitchen.
Christine Frederick published from 1913 a series of articles on "New Household Management" in which she analyzed the kitchen following Taylorist principles of efficiency, presented detailed time-motion studies, and derived a kitchen design from them. Her ideas were taken up in the 1920s by architects in Germany and Austria, most notably Bruno Taut, Erna Meyer, Margarete Schütte-Lihotzky and Benita Otte, who designed the first fitted kitchen for the Haus am Horn, which was completed in 1923. Similar design principles were employed by Schütte-Lihotzky for her famous Frankfurt kitchen, designed for Ernst May's Römerstadt, a social housing project in Frankfurt, in 1927.
While this "work kitchen" and variants derived from it were a great success for tenement buildings, homeowners had different demands and did not want to be constrained by a kitchen. Nevertheless, the kitchen design was mostly ad-hoc following the whims of the architect. In the U.S., the "Small Homes Council", since 1993 the "Building Research Council", of the School of Architecture of the University of Illinois at Urbana–Champaign was founded in 1944 with the goal to improve the state of the art in home building, originally with an emphasis on standardization for cost reduction. It was there that the notion of the kitchen work triangle was formalized: the three main functions in a kitchen are storage, preparation, and cooking (which Catharine Beecher had already recognized), and the places for these functions should be arranged in the kitchen in such a way that work at one place does not interfere with work at another place, the distance between these places is not unnecessarily large, and no obstacles are in the way. A natural arrangement is a triangle, with the refrigerator, the sink, and the stove at a vertex each.
This observation led to a few common kitchen forms, commonly characterized by the arrangement of the kitchen cabinets and sink, stove, and refrigerator:
A single-file kitchen (also known as a one-way galley or a straight-line kitchen) has all of these along one wall; the work triangle degenerates to a line. This is not optimal, but often the only solution if space is restricted. This may be common in an attic space that is being converted into a living space, or a studio apartment.
The double-file kitchen (or two-way galley) has two rows of cabinets on opposite walls, one containing the stove and the sink, the other the refrigerator. This is the classical work kitchen and makes efficient use of space.
In the L-kitchen, the cabinets occupy two adjacent walls. Again, the work triangle is preserved, and there may even be space for an additional table at a third wall, provided it does not intersect the triangle.
A U-kitchen has cabinets along three walls, typically with the sink at the base of the "U". This is a typical work kitchen, too, unless the two other cabinet rows are short enough to place a table on the fourth wall.
A G-kitchen has cabinets along three walls, like the U-kitchen, and also a partial fourth wall, often with a double basin sink at the corner of the G shape. The G-kitchen provides additional work and storage space and can support two work triangles. A modified version of the G-kitchen is the double-L, which splits the G into two L-shaped components, essentially adding a smaller L-shaped island or peninsula to the L-kitchen.
The block kitchen (or island) is a more recent development, typically found in open kitchens. Here, the stove or both the stove and the sink are placed where an L or U kitchen would have a table, in a free-standing "island", separated from the other cabinets. In a closed room, this does not make much sense, but in an open kitchen, it makes the stove accessible from all sides such that two persons can cook together, and allows for contact with guests or the rest of the family since the cook does not face the wall any more. Additionally, the kitchen island's counter-top can function as an overflow surface for serving buffet-style meals or sitting down to eat breakfast and snacks.
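The work-triangle rule behind these layouts can be checked numerically. Below is a minimal Python sketch; the 4–9 ft per-leg and 13–26 ft total limits are the figures commonly cited in later kitchen-planning guidelines, assumed here for illustration rather than taken from this article.

```python
import math

def work_triangle_ok(fridge, sink, stove):
    """Check a layout against the classic work-triangle guidance:
    each leg between 4 and 9 feet, total between 13 and 26 feet.
    Appliance positions are (x, y) coordinates in feet."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    legs = [dist(fridge, sink), dist(sink, stove), dist(stove, fridge)]
    return all(4 <= leg <= 9 for leg in legs) and 13 <= sum(legs) <= 26

# An L-kitchen: sink and stove on one wall, refrigerator on the adjacent wall.
print(work_triangle_ok(fridge=(0, 6), sink=(0, 0), stove=(5, 0)))    # True
# A single-file kitchen with everything spread along one long wall:
print(work_triangle_ok(fridge=(0, 0), sink=(10, 0), stove=(20, 0)))  # False
```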
In the 1980s, there was a backlash against industrial kitchen planning and cabinets, with people installing a mix of work surfaces and free-standing furniture, led by kitchen designer Johnny Grey and his concept of the "unfitted kitchen". Modern kitchens often have enough informal space to allow for people to eat in them without having to use the formal dining room. Such areas are called "breakfast areas", "breakfast nooks" or "breakfast bars" if the space is integrated into a kitchen counter. Kitchens with enough space to eat in are sometimes called "eat-in kitchens". During the 2000s, flat pack kitchens were popular with people doing DIY renovations on a budget. The flat pack kitchen industry makes it easy to assemble and to mix and match doors, bench tops and cabinets. In flat pack systems, many components can be interchanged.
In larger homes, where the owners might have meals prepared by a household staff member, the home may have a chef's kitchen. This typically differs from a normal domestic kitchen by having multiple ovens (possibly of different kinds for different kinds of cooking), multiple sinks, and warming drawers to keep food heated between cooking and service.
Other types
Restaurant and canteen kitchens found in hotels, hospitals, educational and workplace facilities, army barracks, and similar institutions are generally (in developed countries) subject to public health laws. They are inspected periodically by public health officials and forced to close if they do not meet hygienic requirements mandated by law.
Canteen kitchens (and castle kitchens) were often the places where new technology was used first. For instance, Benjamin Thompson's "energy saving stove", an early 19th-century fully closed iron stove using one fire to heat several pots, was designed for large kitchens; another thirty years passed before they were adapted for domestic use.
As of 2017, restaurant kitchens usually have tiled walls and floors and use stainless steel for other surfaces (workbench, but also door and drawer fronts) because these materials are durable and easy to clean. Professional kitchens are often equipped with gas stoves, as these allow cooks to regulate the heat more quickly and more finely than electrical stoves. Some special appliances are typical for professional kitchens, such as large installed deep fryers, steamers, or a bain-marie.
The fast food and convenience food trends have changed the manner in which restaurant kitchens operate. Some of these types of restaurants may only "finish" convenience food that is delivered to them, or just reheat completely prepared meals; at most they may grill a hamburger or a steak. But in the early 21st century, c-stores (convenience stores) have attracted greater market share by performing more food preparation on-site and offering better customer service than some fast food outlets.
The kitchens in railway dining cars have presented special challenges: space is limited, and personnel must be able to serve a great number of meals quickly. Especially in the early history of railways, this required flawless organization of processes; in modern times, the microwave oven and prepared meals have made this task much easier. Kitchens aboard ships, aircraft and sometimes railcars are often referred to as galleys. On yachts, galleys are often cramped, with one or two burners fueled by an LP gas bottle. Kitchens on cruise ships or large warships, by contrast, are comparable in every respect with restaurants or canteen kitchens.
On passenger airliners, the kitchen is reduced to a pantry. The crew's role is to heat and serve in-flight meals delivered by a catering company. An extreme form of the kitchen occurs in space, e.g., aboard a Space Shuttle (where it is also called the "galley") or the International Space Station. The astronauts' food is generally completely prepared, dehydrated, and sealed in plastic pouches before the flight. The kitchen is reduced to a rehydration and heating module.
Outdoor areas where food is prepared are generally not considered kitchens, even though an outdoor area set up for regular food preparation, for instance when camping, might be referred to as an "outdoor kitchen". An outdoor kitchen at a campsite might be placed near a well, water pump, or water tap, and it might provide tables for food preparation and cooking (using portable camp stoves). Some campsite kitchen areas have a large tank of propane connected to burners so that campers can cook their meals. Military camps and similar temporary settlements of nomads may have dedicated kitchen tents, which have a vent to enable cooking smoke to escape.
In schools where home economics, food technology (previously known as "domestic science"), or culinary arts are taught, there is typically a series of kitchens with multiple sets of equipment (similar in some respects to laboratories) solely for the purpose of teaching. These consist of multiple workstations, each with its own oven, sink, and kitchen utensils, where the teacher can show students how to prepare food and cook it.
By region
China
Kitchens in China are called chúfáng (厨房). More than 3000 years ago, the ancient Chinese used the ding for cooking food. The ding was developed into the wok and pot used today. In Chinese spiritual tradition, a Kitchen God watches over the kitchen for the family and reports to the Jade Emperor annually about the family's behavior. On Chinese New Year's Eve, families gather to pray for the kitchen god to give a good report to heaven and wish for him to bring back good news on the fifth day of the New Year.
The most common cooking equipment in Chinese family kitchens and restaurant kitchens consists of woks, steamer baskets and pots. The fuel or heat source was also an important part of cooking technique: traditionally, the Chinese used wood or straw as fuel, and a Chinese chef had to master flaming and heat radiation to reliably prepare traditional recipes. Chinese cooking uses a pot or wok for pan-frying, stir-frying, deep-frying or boiling.
Japan
Kitchens in Japan are called Daidokoro (台所; lit. "kitchen"). Daidokoro is the place where food is prepared in a Japanese house. Until the Meiji era, a kitchen was also called kamado (かまど; lit. stove) and there are many sayings in the Japanese language that involve kamado as it was considered the symbol of a house and the term could even be used to mean "family" or "household" (similar to the English word "hearth"). When separating a family, it was called Kamado wo wakeru, which means "divide the stove". Kamado wo yaburu (lit. "break the stove") means that the family was bankrupt.
India
In India, a kitchen is called a "Rasoi" (in Hindi/Sanskrit) or a "Swayampak ghar" (in Marathi), and there exist many other names for it in the various regional languages. Many different methods of cooking exist across the country, and the structure and the materials used in constructing kitchens have varied depending on the region. For example, in north and central India, cooking used to be carried out in clay ovens called "chulha" (also chullha or chullah), fired by wood, coal or dried cow dung. In households where members observed vegetarianism, separate kitchens were maintained to cook and store vegetarian and non-vegetarian food. Religious families often treat the kitchen as a sacred space. Indian kitchens are traditionally laid out according to vastushastra, an Indian architectural doctrine, and many modern architects still follow its norms when designing Indian kitchens around the world.
While many kitchens belonging to poor families continue to use clay stoves and the older forms of fuel, the urban middle and upper classes usually have gas stoves with cylinders or piped gas attached. Electric cooktops are rarer, since they consume a great deal of electricity, but microwave ovens are gaining popularity in urban households and commercial enterprises. Indian kitchens are also supported by biogas and solar energy as fuel; the world's largest solar-energy kitchen was built in India, and, in association with government bodies, India is encouraging domestic biogas plants to support the kitchen system.
See also
Cooking techniques
Cuisine
Dirty kitchen
Hearth
Hoosier cabinet
Kitchen utensil
Kitchen ventilation
Universal design
Further reading
Beecher, C. E. and Beecher Stowe, H.: The American Woman's Home, 1869.
Cahill, Nicolas. Household and City Organization at Olynthus
Cromley, Elizabeth Collins. The Food Axis: Cooking, Eating, and the Architecture of American Houses (University of Virginia Press; 2011); 288 pages; Explores the history of American houses through a focus on spaces for food preparation, cooking, consumption, and disposal.
Harrison, M.: The Kitchen in History, Osprey; 1972.
Kinchin, Juliet and Aidan O'Connor, Counter Space: Design and the Modern Kitchen (MoMA: New York, 2011)
Lupton, E. and Miller, J. A.: The Bathroom, the Kitchen, and the Aesthetics of Waste, Princeton Architectural Press; 1996.
Snodgrass, M. E.: Encyclopedia of Kitchen History; Fitzroy Dearborn Publishers; November 2004.
External links
Photo History of the Kitchen 1860–1960
Sex-determination system

A sex-determination system is a biological system that determines the development of sexual characteristics in an organism. Most organisms that create their offspring using sexual reproduction have two common sexes, males and females, and in other species, there are hermaphrodites, organisms that can function reproductively as either female or male, or both.
There are also some species in which only one sex is present, temporarily or permanently. This can be due to parthenogenesis, the act of a female reproducing without fertilization. In some plants or algae the gametophyte stage may reproduce itself, thus producing more individuals of the same sex as the parent.
In some species, sex determination is genetic: males and females have different alleles or even different genes that specify their sexual morphology. In animals this is often accompanied by chromosomal differences, generally through combinations of XY, ZW, XO, ZO chromosomes, or haplodiploidy. The sexual differentiation is generally triggered by a main gene (a "sex locus"), with a multitude of other genes following in a domino effect.
In other cases, the sex of a fetus is determined by environmental variables (such as temperature). The details of some sex-determination systems are not yet fully understood. Researchers hope that future analysis of fetal biological systems will provide signals, measurable during pregnancy, that determine more accurately whether a fetus is male or female. Such analysis could also indicate whether the fetus is hermaphroditic, with total or partial development of both male and female reproductive organs.
Some species, such as various plants and fish, do not have a fixed sex, and instead go through life cycles and change sex based on genetic cues during the corresponding life stages. This can be due to environmental factors such as season and temperature. In some gonochoric species, a few individuals may have conditions that cause a mix of different sex characteristics.
Discovery
Sex determination was discovered in the mealworm by the American geneticist Nettie Stevens in 1903.
In 1694, J.R. Camerarius conducted early experiments on pollination and reported the existence of male and female characteristics in plants (maize).
In 1866, Gregor Mendel published on inheritance of genetic traits. This is known as Mendelian inheritance and it eventually established the modern understanding of inheritance from two gametes.
In 1902, C.E. McClung identified sex chromosomes in bugs.
In 1917, C.E. Allen discovered sex determination mechanisms in plants.
In 1922, C.B. Bridges put forth the Genic Balance Theory of sex determination.
Chromosomal systems
Among animals, the most common chromosomal sex determination systems are XY, XO, ZW, ZO, but with numerous exceptions.
According to the Tree of Sex database (as of 2023), the known sex determination systems include the chromosomal and environmental systems described below, as well as complex sex chromosomes, homomorphic sex chromosomes, and others.
XX/XY sex chromosomes
The XX/XY sex-determination system is the most familiar, as it is found in humans. The XX/XY system is found in most other mammals, as well as some insects. In this system, females have two of the same kind of sex chromosome (XX), while males have two distinct sex chromosomes (XY). The X and Y sex chromosomes are different in shape and size from each other, unlike the rest of the chromosomes (autosomes), and are sometimes called allosomes. In some species, such as humans, organisms remain sex indifferent for a time during development (embryogenesis); in others, however, such as fruit flies, sexual differentiation occurs as soon as the egg is fertilized.
Y-centered sex determination
Some species (including humans) have a gene SRY on the Y chromosome that determines maleness. Members of SRY-reliant species can have uncommon XY chromosomal combinations such as XXY and still live.
Human sex is determined by the presence or absence of a Y chromosome with a functional SRY gene. Once the SRY gene is activated, cells produce testosterone and anti-Müllerian hormone, which typically ensure the development of a male reproductive system. In typical XX embryos, cells secrete estrogen, which drives the body toward the female pathway.
In Y-centered sex determination, the SRY gene is the main gene determining male characteristics, but multiple genes are required to develop testes. In XY mice, lack of the gene DAX1 on the X chromosome results in sterility, but in humans it causes adrenal hypoplasia congenita. However, when an extra DAX1 gene is placed on the X chromosome, the result is a female, despite the existence of SRY, since it overrides the effects of SRY. Even when there are normal sex chromosomes in XX females, duplication or expression of SOX9 causes testes to develop. Gradual sex reversal in developed mice can also occur when the gene FOXL2 is removed from females. Even though the gene DMRT1 is used by birds as their sex locus, species that have XY chromosomes also rely upon DMRT1, located on chromosome 9, for sexual differentiation at some point in their development.
X-centered sex determination
Some species, such as fruit flies, use the presence of two X chromosomes to determine femaleness. Species that use the number of Xs to determine sex are nonviable with an extra X chromosome.
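This X-counting logic is classically summarized by Bridges' Genic Balance Theory (mentioned under Discovery) as a ratio of X chromosomes to sets of autosomes (A). The sketch below illustrates only that classical model; modern accounts describe the fly's X-counting in terms of X-linked signal elements rather than a literal ratio.

```python
def drosophila_sex(x_count, autosome_sets=2):
    """Classify sex by the classical X:A ratio of the Genic Balance
    Theory: ratio >= 1.0 -> female, ratio <= 0.5 -> male, and
    intermediate ratios -> intersex."""
    ratio = x_count / autosome_sets
    if ratio >= 1.0:
        return "female"
    if ratio <= 0.5:
        return "male"
    return "intersex"

print(drosophila_sex(2))                   # XX with 2 autosome sets: female
print(drosophila_sex(1))                   # XY or X0: male
print(drosophila_sex(2, autosome_sets=3))  # ratio 0.67: intersex
```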
Other variants of XX/XY sex determination
Some fish have variants of the XY sex-determination system, as well as the regular system. For example, while having an XY format, Xiphophorus nezahualcoyotl and X. milleri also have a second Y chromosome, known as Y', that creates XY' females and YY' males.
At least one monotreme, the platypus, presents a particular sex determination scheme that in some ways resembles that of the ZW sex chromosomes of birds, and lacks the SRY gene. The platypus has ten sex chromosomes: males have five X and five Y chromosomes (X1Y1X2Y2X3Y3X4Y4X5Y5), while females have five pairs of X chromosomes. During meiosis, the five X chromosomes form one chain and the five Y chromosomes form another. Thus, they behave effectively as a typical XY chromosomal system, except that each of X and Y is broken into five parts, with recombination occurring very frequently at four particular points. One of the X chromosomes is homologous to the human X chromosome, and another is homologous to the bird Z chromosome.
Although it is an XY system, the platypus' sex chromosomes share no homologues with eutherian sex chromosomes. Instead, homologues with eutherian sex chromosomes lie on the platypus chromosome 6, which means that the eutherian sex chromosomes were autosomes at the time that the monotremes diverged from the therian mammals (marsupials and eutherian mammals). However, homologues to the avian DMRT1 gene on platypus sex chromosomes X3 and X5 suggest that it is possible the sex-determining gene for the platypus is the same one that is involved in bird sex-determination. More research must be conducted in order to determine the exact sex determining gene of the platypus.
XX/X0 sex chromosomes
In this variant of the XY system, females have two copies of the sex chromosome (XX) but males have only one (X0). The 0 denotes the absence of a second sex chromosome. Generally in this system, sex is determined by the dose of genes expressed across the two chromosomes. This system is observed in a number of insects, including the grasshoppers and crickets of order Orthoptera and in cockroaches (order Blattodea). A small number of mammals also lack a Y chromosome. These include the Amami spiny rat (Tokudaia osimensis), the Tokunoshima spiny rat (Tokudaia tokunoshimensis), and Sorex araneus, a shrew species. Transcaucasian mole voles (Ellobius lutescens) also have a form of XO determination, in which both sexes lack a second sex chromosome. The mechanism of sex determination in these species is not yet understood.
The nematode C. elegans is male with one sex chromosome (X0); with a pair of chromosomes (XX) it is a hermaphrodite. Its main sex gene is XOL, which encodes XOL-1 and also controls the expression of the genes TRA-2 and HER-1. These genes reduce male gene activation and increase it, respectively.
ZW/ZZ sex chromosomes
The ZW sex-determination system is found in birds, some reptiles, and some insects and other organisms. The ZW sex-determination system is reversed compared to the XY system: females have two different kinds of chromosomes (ZW), and males have two of the same kind of chromosomes (ZZ). In the chicken, this was found to be dependent on the expression of DMRT1. In birds, the genes FET1 and ASW are found on the W chromosome for females, similar to how the Y chromosome contains SRY. However, not all species depend upon the W for their sex. For example, there are moths and butterflies that are ZW, but some have been found female with ZO, as well as female with ZZW. Also, while mammals deactivate one of their extra X chromosomes when female, it appears that in the case of Lepidoptera, the males produce double the normal amount of enzymes, due to having two Z's. Because the use of ZW sex determination is varied, it is still unknown how exactly most species determine their sex. However, reportedly, the silkworm Bombyx mori uses a single female-specific piRNA as the primary determiner of sex. Despite the similarities between the ZW and XY systems, these sex chromosomes evolved separately. In the case of the chicken, their Z chromosome is more similar to humans' autosome 9. The chicken's Z chromosome also seems to be related to the X chromosome of the platypus. When a ZW species, such as the Komodo dragon, reproduces parthenogenetically, usually only males are produced. This is due to the fact that the haploid eggs double their chromosomes, resulting in ZZ or WW. The ZZ become males, but the WW are not viable and are not brought to term.
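The parthenogenesis outcome described for the Komodo dragon follows mechanically from doubling the single sex chromosome of a haploid egg; a minimal sketch of that bookkeeping:

```python
def zw_parthenogenesis(egg_chromosome):
    """Double the single sex chromosome of a haploid egg, as described
    above for ZW species such as the Komodo dragon."""
    genotype = egg_chromosome * 2        # Z -> ZZ, W -> WW
    if genotype == "ZZ":
        return "male"
    if genotype == "WW":
        return "not viable"
    raise ValueError("egg must carry a single 'Z' or 'W'")

print(zw_parthenogenesis("Z"))  # male
print(zw_parthenogenesis("W"))  # not viable
```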
In both XY and ZW sex determination systems, the sex chromosome carrying the critical factors is often significantly smaller, carrying little more than the genes necessary for triggering the development of a given sex.
ZZ/Z0 sex chromosomes
The ZZ/Z0 sex-determination system is found in some moths. In these insects there is one sex chromosome, Z; males have two (ZZ), whereas females have only one (Z0).
UV sex chromosomes
In some bryophyte and some algae species, the gametophyte stage of the life cycle, rather than being hermaphrodite, occurs as separate male or female individuals that produce male and female gametes respectively. When meiosis occurs in the sporophyte generation of the life cycle, the sex chromosomes known as U and V assort in spores that carry either the U chromosome and give rise to female gametophytes, or the V chromosome and give rise to male gametophytes.
Mating types
The mating type in microorganisms is analogous to sex in multicellular organisms and is sometimes described using those terms, though they are not necessarily correlated with physical body structures. Some species have more than two mating types; Tetrahymena, a type of ciliate, has 7 mating types.
Mating types are extensively studied in fungi. Among fungi, mating type is determined by chromosomal regions called mating-type loci. Furthermore, it is not as simple as "two different mating types can mate"; rather, it is a matter of combinatorics. As a simple example, most basidiomycetes have a "tetrapolar heterothallism" mating system: there are two loci, and mating between two individuals is possible if the alleles on both loci are different. For example, if there are 3 alleles per locus, then there would be 9 mating types, each of which can mate with 4 other mating types. By multiplicative combination, this generates a vast number of mating types; Schizophyllum commune, a type of fungus, has tens of thousands of them.
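The combinatorics of the tetrapolar example can be enumerated directly; a minimal sketch confirming the 9-types, 4-partners arithmetic given above:

```python
from itertools import product

def mating_types(alleles_per_locus, loci=2):
    """One mating type per combination of alleles across the loci."""
    return list(product(range(alleles_per_locus), repeat=loci))

def can_mate(a, b):
    """Under tetrapolar heterothallism, two individuals can mate only
    if their alleles differ at every locus."""
    return all(x != y for x, y in zip(a, b))

types = mating_types(3)  # 3 alleles at each of 2 loci
print(len(types))        # 9 mating types
print(sum(can_mate(types[0], t) for t in types))  # each type has 4 partners
```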
Haplodiploidy
Haplodiploidy is found in insects belonging to Hymenoptera, such as ants and bees. Sex determination is controlled by the zygosity of a complementary sex determiner (csd) locus. Unfertilized eggs develop into haploid individuals which have a single, hemizygous copy of the csd locus and are therefore males. Fertilized eggs develop into diploid individuals which, due to high variability in the csd locus, are generally heterozygous females. In rare instances diploid individuals may be homozygous, these develop into sterile males.
The gene acting as a csd locus has been identified in the honeybee and several candidate genes have been proposed as a csd locus for other Hymenopterans.
Most females in the Hymenoptera order can decide the sex of their offspring by holding received sperm in their spermatheca and either releasing it into their oviduct or not. This allows them to create more workers, depending on the status of the colony.
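The csd logic described above can be sketched as a small simulation. The allele pool size below is an arbitrary placeholder; real csd loci are highly polymorphic, which is what makes homozygous (sterile diploid male) fertilized eggs rare.

```python
import random

def hymenopteran_sex(fertilized, n_csd_alleles=20, rng=random):
    """Complementary sex determination (csd): unfertilized eggs are
    haploid, hemizygous at csd, and develop as males; fertilized eggs
    are diploid and develop as females when heterozygous at csd, or
    as sterile diploid males in the rare homozygous case."""
    if not fertilized:
        return "haploid male"
    maternal = rng.randrange(n_csd_alleles)
    paternal = rng.randrange(n_csd_alleles)
    return "female" if maternal != paternal else "sterile diploid male"

random.seed(42)
print(hymenopteran_sex(fertilized=False))                     # haploid male
print([hymenopteran_sex(fertilized=True) for _ in range(5)])  # mostly females
```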
Polygenic sex determination
In polygenic sex determination, sex is primarily determined by genes occurring on multiple non-homologous chromosomes. The environment may have a limited, minor influence on sex determination. Examples include African cichlid fish (Metriaclima spp.), lemmings (Myopus schisticolor), the green swordtail, the medaka, and others. In such systems, there is typically a dominance hierarchy, where one system is dominant over another if they are in conflict. For example, in some species of cichlid fish from Lake Malawi, if an individual has both the XY locus (on one chromosome pair) and the WZ locus (on another chromosome pair), then the W is dominant and the individual has a female phenotype.
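The dominance hierarchy in the cichlid example amounts to a simple decision rule; a minimal sketch:

```python
def cichlid_sex(xy_locus, zw_locus):
    """Phenotypic sex for a two-locus system like the Lake Malawi
    cichlids described above: any W allele is dominant and feminizes;
    otherwise the XY locus decides."""
    if "W" in zw_locus:
        return "female"
    return "male" if "Y" in xy_locus else "female"

print(cichlid_sex("XY", "ZW"))  # female: the W overrides the Y
print(cichlid_sex("XY", "ZZ"))  # male
print(cichlid_sex("XX", "ZZ"))  # female
```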
The sex-determination system of the zebrafish is polygenic. Juvenile zebrafish (0–30 days after hatching) have ovary-like gonadal tissue, which in some individuals later transitions to testis tissue. They then develop into male or female adults, with the determination based on a complex interaction of genes on multiple chromosomes, but not affected by environmental variations.
Other chromosomal systems
In systems with two sex chromosomes, they can be heteromorphic or homomorphic. Homomorphic sex chromosomes are almost identical in size and gene content. The two familiar kinds of sex chromosome pairs (XY and ZW) are heteromorphic. Homomorphic sex chromosomes exist among pufferfish, ratite birds, pythons, and European tree frogs. Some are quite old, meaning that there is some evolutionary force that resists their differentiation. For example, three species of European tree frogs have homologous, homomorphic sex chromosomes, and this homomorphism was maintained for at least 5.4 million years by occasional recombination.
The Nematocera, particularly the Simuliids and Chironomus, have sex determination regions that are labile, meaning that one species may have the sex determination region in one chromosome, but a closely related species might have the same region moved to a different non-homologous chromosome. Some species even have the sex determination region different among individuals within the same species (intraspecific variation). In some species, some populations have homomorphic sex chromosomes while other populations have heteromorphic sex chromosomes.
The New Zealand frog, Leiopelma hochstetteri, uses a supernumerary sex chromosome. With zero of that chromosome, the frog develops into a male. With one or more, the frog develops into a female. One female had as many as 16 of that chromosome.
Different populations of the Japanese frog Rana rugosa use different systems: two use homomorphic male heterogamety, one uses XX/XY, and one uses ZZ/ZW. Remarkably, the X and Z chromosomes are homologous, as are the Y and W. Dmrt1 is on autosome 1 and is not sex-linked. This means that an XX female individual is genetically similar to a ZZ male individual, and an XY male individual to a ZW female individual. The mechanism behind this is yet unclear, but it is hypothesized that during the species' recent evolution, the XY-to-ZW transition occurred twice.
Clarias gariepinus uses both the XX/XY and ZW/ZZ systems within the species, with some populations using homomorphic XX/XY while others use heteromorphic ZW/ZZ. A population in Thailand appears to use both systems simultaneously, possibly because C. gariepinus is not native to Thailand and was introduced from different source populations, resulting in a mixture.
Multiple sex chromosomes like those of the platypus also occur in bony fish, and some moths and butterflies have multiple sex chromosomes as well.
The Southern platyfish has a complex sex determination system involving 3 sex chromosomes and 4 autosomal alleles.
Gastrotheca pseustes has C-banding heteromorphism, meaning that both males and females have XY chromosomes, but their Y chromosomes differ on one or more C-bands. Eleutherodactylus maussi has its own distinctive system.
Evolution
Origin of sex chromosomes
Sexual chromosome pairs can arise from an autosomal pair that, for various reasons, stopped recombination, allowing for their divergence. The rate at which recombination is suppressed, and therefore the rate of sex chromosome divergence, is very different across clades.
In analogy with geological strata, historical events in the evolution of sex chromosomes are called evolutionary strata. The human Y chromosome has had about 5 strata since the origin of the X and Y chromosomes about 300 Mya from a pair of autosomes. Each stratum was formed when a pseudoautosomal region (PAR) of the Y chromosome was inverted, stopping it from recombining with the X chromosome. Over time, each inverted region decays, possibly due to Muller's ratchet. Primate Y-chromosome evolution was rapid, with multiple inversions and shifts of the boundary of the PAR.
Among many species of salamanders, the two chromosomes are distinguished only by a pericentric inversion, so that the banding pattern of the X chromosome is the same as that of the Y, but with a region near the centromere reversed. In some species, the X is pericentrically inverted and the Y is ancestral; in other species it is the opposite.
The gene content of the X chromosome is almost identical among placental mammals. This is hypothesized to be because the X inactivation means any change would cause serious disruption, thus subjecting it to strong purifying selection. Similarly, birds have highly conserved Z chromosomes.
Neo-sex chromosomes
Neo-sex chromosomes are currently existing sex chromosomes that formed when an autosome pair fused to the previously existing sex chromosome pair. Following this fusion, the autosomal portion undergoes recombination suppression, allowing them to differentiate. Such systems have been observed in insects, reptiles, birds, and mammals. They are useful to the study of the evolution of Y chromosome degeneration and dosage compensation.
Sex-chromosome turnover
Sex-chromosome turnover is an evolutionary phenomenon in which sex chromosomes disappear or become autosomal, and autosomes become sexual, repeatedly over evolutionary time. Some lineages have extensive turnover, while others do not. Generally, in an XY system, if the Y chromosome is degenerate, mostly different from the X chromosome, and has X dosage compensation, then turnover is unlikely. In particular, this applies to humans.
The ZW and XY systems can evolve into each other due to sexual conflict.
Homomorphism and the fountain of youth
It is an evolutionary puzzle why certain sex chromosomes remain homomorphic over millions of years, especially among lineages of fishes, amphibians, and nonavian reptiles. The fountain-of-youth model states that heteromorphy results from recombination suppression, and recombination suppression results from the male phenotype, not the sex chromosomes themselves. Therefore, if some XY sex-reversed females are fertile and adaptive under some circumstances, then the X and Y chromosomes would recombine in these individuals, preventing Y chromosome decay and maintaining long-term homomorphism.
Sex reversal denotes a situation where the phenotypic sex is different from the genotypic sex. While in humans sex-reversed individuals (as in XX male syndrome) are often infertile, sex-reversed individuals of some species are fertile under some conditions. For example, some XY individuals in a population of Chinook salmon in the Columbia River became fertile females, producing YY sons. Since Chinook salmon have homomorphic sex chromosomes, such YY sons are healthy. When YY males mate with XX females, all their progeny will be XY males if grown under normal conditions.
Support for the hypothesis is found in the common frog, in which XX males and XY males both suppress sex chromosome recombination, but XX and XY females both recombine at the same rate.
Environmental systems
Temperature-dependent
Many other sex-determination systems exist. In some species of reptiles, including alligators, some turtles, and the tuatara, sex is determined by the temperature at which the egg is incubated during a temperature-sensitive period. There are no examples of temperature-dependent sex determination (TSD) in birds. Megapodes had formerly been thought to exhibit this phenomenon, but were found to actually have different temperature-dependent embryo mortality rates for each sex. For some species with TSD, exposure to hotter temperatures results in offspring of one sex and cooler temperatures in the other; this type of TSD is called Pattern I. For other species with TSD, exposure to temperatures at both extremes results in offspring of one sex, and exposure to moderate temperatures in offspring of the opposite sex; this is called Pattern II TSD. The specific temperatures required to produce each sex are known as the female-promoting temperature and the male-promoting temperature. When the temperature stays near the threshold during the temperature-sensitive period, the sex ratio varies between the two sexes. Some species' temperature standards are based on when a particular enzyme is created. Species that rely upon temperature for their sex determination do not have the SRY gene, but have other genes such as DAX1, DMRT1, and SOX9 that are expressed or not expressed depending on the temperature. The sex of some species, such as the Nile tilapia, Australian skink lizard, and Australian dragon lizard, has an initial bias set by chromosomes, but can later be changed by the temperature of incubation.
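Pattern I and Pattern II TSD can be summarized as threshold functions of incubation temperature. The sketch below is illustrative only: the threshold values are invented placeholders, and which sex is produced at which temperature varies by species.

```python
def tsd_sex(temp_c, pattern, pivot=29.0, upper=31.0):
    """Illustrative temperature-dependent sex determination.
    Pattern I: a single pivotal temperature separates the sexes.
    Pattern II: both extremes give one sex, moderate temperatures
    the other. Threshold values here are placeholders."""
    if pattern == "I":
        return "female" if temp_c >= pivot else "male"
    if pattern == "II":
        return "female" if temp_c < pivot or temp_c > upper else "male"
    raise ValueError("pattern must be 'I' or 'II'")

print(tsd_sex(27.0, "I"))   # male
print(tsd_sex(30.0, "I"))   # female
print(tsd_sex(30.0, "II"))  # male: moderate temperature
print(tsd_sex(33.0, "II"))  # female: hot extreme
```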
It is unknown how exactly temperature-dependent sex determination evolved. It could have evolved through certain sexes being more suited to certain areas that fit the temperature requirements. For example, a warmer area could be more suitable for nesting, so more females are produced to increase the amount that nest next season.
In amniotes, environmental sex determination preceded the genetically determined systems of birds and mammals; it is thought that a temperature-dependent amniote was the common ancestor of amniotes with sex chromosomes.
Other environmental systems
There are other environmental sex determination systems including location-dependent determination systems as seen in the marine worm Bonellia viridis – larvae become males if they make physical contact with a female, and females if they end up on the bare sea floor. This is triggered by the presence of a chemical produced by the females, bonellin. Some species, such as some snails, practice sex change: adults start out male, then become female. In tropical clownfish, the dominant individual in a group becomes female while the other ones are male, and bluehead wrasses (Thalassoma bifasciatum) are the reverse.
Clownfish live in colonies of several small undifferentiated fish and two large fish (male and female). The male and female are the only sexually mature fish to reproduce. Clownfish are protandrous hermaphrodites, which means after they mature into males, they eventually can transform into females. They develop undifferentiated until they are needed to fill a certain role in their environment, i.e., if they receive the social and environmental cues to do so.
Some species, however, have no sex-determination system. Hermaphrodite species include the common earthworm and certain species of snails. A few species of fish, reptiles, and insects reproduce by parthenogenesis and consist entirely of females. There are some reptiles, such as the boa constrictor and Komodo dragon, that can reproduce both sexually and asexually, depending on whether a mate is available.
Others
There are exceptional sex-determination systems, neither genetic nor environmental.
The Wolbachia genus of parasitic bacteria lives inside the cytoplasm of its host and is vertically transmitted from parent to offspring. They primarily infect arthropods and nematodes. Different Wolbachia can determine the sex of their host by a variety of means.
In some species, there is paternal genome elimination, in which sons lose the entire genome inherited from the father.
Mitochondrial male sterility: In many flowering plants, the mitochondria can cause hermaphrodite individuals to be unable to father offspring, effectively turning them into exclusive females. This is a form of mother's curse. It is an evolutionarily adaptive strategy for mitochondria, as mitochondrial inheritance is exclusively from mother to child. The first published case of mitochondrial male sterility among metazoans was reported in 2022 in the hermaphroditic snail Physa acuta.
In some flies and crustaceans, all offspring of a particular individual female are either exclusively male or exclusively female (monogeny).
Evolution
Sex determination systems may have evolved from mating type, which is a feature of microorganisms.
Chromosomal sex determination may have evolved early in the history of eukaryotes. But in plants it has been suggested to have evolved recently.
The accepted hypothesis of XY and ZW sex chromosome evolution in amniotes is that they evolved at the same time, in two different branches.
No genes are shared between the avian ZW and mammal XY chromosomes and the chicken Z chromosome is similar to the human autosomal chromosome 9, rather than X or Y. This suggests not that the ZW and XY sex-determination systems share an origin but that the sex chromosomes are derived from autosomal chromosomes of the common ancestor of birds and mammals. In the platypus, a monotreme, the X1 chromosome shares homology with therian mammals, while the X5 chromosome contains an avian sex-determination gene, further suggesting an evolutionary link.
However, there is some evidence to suggest that there could have been transitions between ZW and XY, such as in Xiphophorus maculatus, which has both ZW and XY systems in the same population, despite the fact that ZW and XY have different gene locations. A recent theoretical model raises the possibility of both transitions between the XY/XX and ZZ/ZW systems and environmental sex determination. The platypus' genes also back up the possible evolutionary link between XY and ZW, because platypuses have the DMRT1 gene possessed by birds on their X chromosomes. Regardless, XY and ZW follow a similar route. All sex chromosomes started out as an original autosome of an original amniote that relied upon temperature to determine the sex of offspring. After the mammals separated, the reptile branch further split into Lepidosauria and Archosauromorpha. These two groups both evolved the ZW system separately, as evidenced by the existence of different sex chromosomal locations. In mammals, one of the autosome pair, now Y, mutated its SOX3 gene into the SRY gene, causing that chromosome to designate sex. After this mutation, the SRY-containing chromosome inverted and was no longer completely homologous with its partner. The regions of the X and Y chromosomes that are still homologous to one another are known as the pseudoautosomal region. Once it inverted, the Y chromosome became unable to remedy deleterious mutations, and thus degenerated. There has been some concern that the Y chromosome will shrink further and stop functioning within ten million years, but the Y chromosome has been strictly conserved after its initial rapid gene loss.
There are some vertebrate species, such as the medaka fish, that evolved their sex chromosomes separately; their Y chromosome never inverted and can still swap genes with the X. These species' sex chromosomes are relatively primitive and unspecialized. Because the Y does not have male-specific genes and can interact with the X, XY and YY females can be formed, as well as XX males. Non-inverted Y chromosomes with long histories are found in pythons and emus, each system being more than 120 million years old, suggesting that inversions are not necessarily an eventuality. XO sex determination can evolve from XY sex determination within about 2 million years.
See also
Clarence Erwin McClung, who discovered the role of chromosomes in sex determination
Testis-determining factor
Maternal influence on sex determination
Sequential hermaphroditism
Sex determination and differentiation (human)
Cell autonomous sex identity
Extinction

Extinction is the termination of a taxon by the death of its last member. A taxon may become functionally extinct before the death of its last member if it loses the capacity to reproduce and recover. Because a species' potential range may be very large, determining this moment is difficult, and is usually done retrospectively. This difficulty leads to phenomena such as Lazarus taxa, where a species presumed extinct abruptly "reappears" (typically in the fossil record) after a period of apparent absence.
More than 99% of all species that ever lived on Earth, amounting to over five billion species, are estimated to have died out. It is estimated that there are currently around 8.7 million species of eukaryotes globally, and possibly many times more if microorganisms, such as bacteria, are included. Notable extinct animal species include non-avian dinosaurs, palaeotheres, saber-toothed cats, dodos, mammoths, ground sloths, thylacines, trilobites, golden toads, and passenger pigeons.
Through evolution, species arise through the process of speciation—where new varieties of organisms arise and thrive when they are able to find and exploit an ecological niche—and species become extinct when they are no longer able to survive in changing conditions or against superior competition. The relationship between animals and their ecological niches has been firmly established. A typical species becomes extinct within 10 million years of its first appearance, although some species, called living fossils, survive with little to no morphological change for hundreds of millions of years.
Mass extinctions are relatively rare events; however, isolated extinctions of species and clades are quite common and are a natural part of the evolutionary process. Only recently have extinctions been recorded, and scientists have become alarmed at the current high rate of extinctions. Most species that become extinct are never scientifically documented. Some scientists estimate that up to half of presently existing plant and animal species may become extinct by 2100. A 2018 report indicated that the phylogenetic diversity of 300 mammalian species erased during the human era since the Late Pleistocene would require 5 to 7 million years to recover.
According to the 2019 Global Assessment Report on Biodiversity and Ecosystem Services by IPBES, the biomass of wild mammals has fallen by 82%, natural ecosystems have lost about half their area and a million species are at risk of extinction—all largely as a result of human actions. Twenty-five percent of plant and animal species are threatened with extinction. In a subsequent report, IPBES listed unsustainable fishing, hunting and logging as being some of the primary drivers of the global extinction crisis.
In June 2019, one million species of plants and animals were at risk of extinction. At least 571 plant species have been lost since 1750, but likely many more. The main cause of the extinctions is the destruction of natural habitats by human activities, such as cutting down forests and converting land into fields for farming.
A dagger symbol (†) placed next to the name of a species or other taxon normally indicates its status as extinct.
Examples
Examples of species and subspecies that are extinct include:
Steller's sea cow (the last known member died circa 1768)
Dodo (the last confirmed sighting was in 1662)
Chinese paddlefish (last seen in 2003; declared extinct in 2022)
Great auk (last confirmed pair was killed in the 1840s)
Thylacine (the last thylacine killed in the wild was shot in 1930; the last captive tiger lived in Hobart Zoo until 1936)
Kauai O'o (last known member was heard in 1987; the entire Mohoidae family became extinct with it)
Spectacled cormorant (last known members were said to live in the 1850s)
Carolina parakeet (last known member named Incas died in captivity in 1918; declared extinct in 1939)
Passenger pigeon (last known member named Martha died in captivity in 1914)
Tasmanian emu (the last claimed sighting of the emu was in 1839)
Japanese sea lion (the last confirmed record was a juvenile specimen captured in 1974)
Schomburgk's deer (became extinct in the wild in 1932; the last captive deer was killed in 1938)
Quagga (hunted to extinction in the late 19th century; the last captive quagga died in Natura Artis Magistra in 1883)
Definition
A species is extinct when the last existing member dies. Extinction therefore becomes a certainty when there are no surviving individuals that can reproduce and create a new generation. A species may become functionally extinct when only a handful of individuals survive, which cannot reproduce due to poor health, age, sparse distribution over a large range, a lack of individuals of both sexes (in sexually reproducing species), or other reasons.
Pinpointing the extinction (or pseudoextinction) of a species requires a clear definition of that species. If it is to be declared extinct, the species in question must be uniquely distinguishable from any ancestor or daughter species, and from any other closely related species. Extinction of a species (or replacement by a daughter species) plays a key role in the punctuated equilibrium hypothesis of Stephen Jay Gould and Niles Eldredge.
In ecology, extinction is sometimes used informally to refer to local extinction, in which a species ceases to exist in the chosen area of study, despite still existing elsewhere. Local extinctions may be made good by the reintroduction of individuals of that species taken from other locations; wolf reintroduction is an example of this. Species that are not globally extinct are termed extant. Those species that are extant, yet are threatened with extinction, are referred to as threatened or endangered species.
Currently, an important aspect of extinction is human attempts to preserve critically endangered species. These are reflected by the creation of the conservation status "extinct in the wild" (EW). Species listed under this status by the International Union for Conservation of Nature (IUCN) are not known to have any living specimens in the wild and are maintained only in zoos or other artificial environments. Some of these species are functionally extinct, as they are no longer part of their natural habitat and it is unlikely the species will ever be restored to the wild. When possible, modern zoological institutions try to maintain a viable population for species preservation and possible future reintroduction to the wild, through use of carefully planned breeding programs.
The extinction of one species' wild population can have knock-on effects, causing further extinctions. These are also called "chains of extinction". This is especially common with extinction of keystone species.
A 2018 study indicated that the sixth mass extinction started in the Late Pleistocene could take up to 5 to 7 million years to restore mammal diversity to what it was before the human era.
Pseudoextinction
Extinction of a parent species where daughter species or subspecies are still extant is called pseudoextinction or phyletic extinction. Effectively, the old taxon vanishes, transformed (anagenesis) into a successor, or split into more than one (cladogenesis).
Pseudoextinction is difficult to demonstrate unless one has a strong chain of evidence linking a living species to members of a pre-existing species. For example, it is sometimes claimed that the extinct Hyracotherium, which was an early horse that shares a common ancestor with the modern horse, is pseudoextinct, rather than extinct, because there are several extant species of Equus, including zebra and donkey; however, as fossil species typically leave no genetic material behind, one cannot say whether Hyracotherium evolved into more modern horse species or merely evolved from a common ancestor with modern horses. Pseudoextinction is much easier to demonstrate for larger taxonomic groups.
Lazarus taxa
A Lazarus taxon or Lazarus species refers to instances where a species or taxon was thought to be extinct, but was later rediscovered. It can also refer to instances where large gaps in the fossil record of a taxon result in fossils reappearing much later, although the taxon may have ultimately become extinct at a later point.
The coelacanth, a fish related to lungfish and tetrapods, is an example of a Lazarus taxon that was known only from the fossil record and was considered to have been extinct since the end of the Cretaceous Period. In 1938, however, a living specimen was found off the Chalumna River (now Tyolomnqa) on the east coast of South Africa. Calliostoma bullatum, a species of deepwater sea snail originally described from fossils in 1844, proved to be a Lazarus species when extant individuals were described in 2019.
Attenborough's long-beaked echidna (Zaglossus attenboroughi) is an example of a Lazarus species from Papua New Guinea that had last been sighted in 1962 and believed to be possibly extinct, until it was recorded again in November 2023.
Some species currently thought to be extinct have had continued speculation that they may still exist, and in the event of rediscovery would be considered Lazarus species. Examples include the thylacine, or Tasmanian tiger (Thylacinus cynocephalus), the last known example of which died in Hobart Zoo in Tasmania in 1936; the Japanese wolf (Canis lupus hodophilax), last sighted over 100 years ago; the American ivory-billed woodpecker (Campephilus principalis), with the last universally accepted sighting in 1944; and the slender-billed curlew (Numenius tenuirostris), not seen since 2007.
Causes
As long as species have been evolving, species have been going extinct. It is estimated that over 99.9% of all species that ever lived are extinct. The average lifespan of a species is 1–10 million years, although this varies widely between taxa.
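These figures imply an order-of-magnitude estimate of the background extinction rate. A back-of-the-envelope sketch, assuming a steady state in which extinctions balance originations and reusing the roughly 8.7 million eukaryote species estimate quoted earlier in this article:

```python
# With N extant species each lasting L years on average, roughly
# N / L species are lost per year at steady state.
n_species = 8.7e6                    # estimated extant eukaryote species
for lifespan_years in (1e6, 1e7):    # 1 to 10 million year species lifespans
    rate = n_species / lifespan_years
    print(f"average lifespan {lifespan_years:.0e} yr -> "
          f"~{rate:.1f} background extinctions per year")
```

At these numbers the background rate works out to roughly 0.1 to 1 extinction per million species-years, the usual ballpark against which elevated modern rates are compared.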
A variety of causes can contribute directly or indirectly to the extinction of a species or group of species. "Just as each species is unique", write Beverly and Stephen C. Stearns, "so is each extinction ... the causes for each are varied—some subtle and complex, others obvious and simple". Most simply, any species that cannot survive and reproduce in its environment and cannot move to a new environment where it can do so, dies out and becomes extinct. Extinction of a species may come suddenly when an otherwise healthy species is wiped out completely, as when toxic pollution renders its entire habitat unliveable; or may occur gradually over thousands or millions of years, such as when a species gradually loses out in competition for food to better adapted competitors. Extinction may occur a long time after the events that set it in motion, a phenomenon known as extinction debt.
Assessing the relative importance of genetic factors compared to environmental ones as the causes of extinction has been compared to the debate on nature and nurture. The question of whether more extinctions in the fossil record have been caused by evolution or by competition or by predation or by disease or by catastrophe is a subject of discussion; Mark Newman, the author of Modeling Extinction, argues for a mathematical model that falls in all positions. By contrast, conservation biology uses the extinction vortex model to classify extinctions by cause. When concerns about human extinction have been raised, for example in Sir Martin Rees' 2003 book Our Final Hour, those concerns lie with the effects of climate change or technological disaster.
Human-driven extinction started as humans migrated out of Africa more than 60,000 years ago. Currently, environmental groups and some governments are concerned with the extinction of species caused by humanity, and they try to prevent further extinctions through a variety of conservation programs. Humans can cause extinction of a species through overharvesting, pollution, habitat destruction, introduction of invasive species (such as new predators and food competitors), overhunting, and other influences. Explosive, unsustainable human population growth and increasing per capita consumption are essential drivers of the extinction crisis. According to the International Union for Conservation of Nature (IUCN), 784 extinctions have been recorded since the year 1500, the arbitrary date selected to define "recent" extinctions, up to the year 2004; with many more likely to have gone unnoticed. Several species have also been listed as extinct since 2004.
Genetics and demographic phenomena
If adaptation increasing population fitness is slower than environmental degradation plus the accumulation of slightly deleterious mutations, then a population will go extinct. Smaller populations have fewer beneficial mutations entering the population each generation, slowing adaptation. It is also easier for slightly deleterious mutations to fix in small populations; the resulting positive feedback loop between small population size and low fitness can cause mutational meltdown.
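The positive feedback loop described here can be illustrated with a toy simulation in the spirit of mutational meltdown; all parameter values below (mutation rate, selection cost, carrying capacity) are invented for illustration, not drawn from this article.

```python
import random

def meltdown_generations(n0=100, mu=0.5, s=0.05, k=100,
                         max_gens=5000, seed=0):
    """Toy mutational-meltdown model. Each individual carries a count of
    slightly deleterious mutations, each multiplying fitness by (1 - s).
    Parents are sampled in proportion to fitness, but population size is
    capped by mean fitness, so accumulating load shrinks the population,
    weakening selection and speeding further accumulation."""
    rng = random.Random(seed)
    pop = [0] * n0  # per-individual mutation counts
    for gen in range(max_gens):
        weights = [(1 - s) ** m for m in pop]
        size = round(k * sum(weights) / len(pop))  # demographic cost of load
        if size == 0:
            return gen  # extinct
        parents = rng.choices(pop, weights=weights, k=size)
        pop = [p + (rng.random() < mu) for p in parents]
    return None  # survived the simulated window

print(meltdown_generations())             # generation of extinction, or None
print(meltdown_generations(n0=10, k=10))  # small populations typically die sooner
```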
Limited geographic range is the most important determinant of genus extinction at background rates but becomes increasingly irrelevant as mass extinction arises. Limited geographic range is a cause both of small population size and of greater vulnerability to local environmental catastrophes.
Extinction rates can be affected not just by population size, but by any factor that affects evolvability, including balancing selection, cryptic genetic variation, phenotypic plasticity, and robustness. A diverse or deep gene pool gives a population a higher chance in the short term of surviving an adverse change in conditions. Effects that cause or reward a loss in genetic diversity can increase the chances of extinction of a species. Population bottlenecks can dramatically reduce genetic diversity by severely limiting the number of reproducing individuals and make inbreeding more frequent.
Genetic pollution
Extinction sometimes results for species evolved to specific ecologies that are subjected to genetic pollution—i.e., uncontrolled hybridization, introgression and genetic swamping that lead to homogenization or out-competition from the introduced (or hybrid) species. Endemic populations can face such extinctions when new populations are imported or selectively bred by people, or when habitat modification brings previously isolated species into contact. Extinction is likeliest for rare species coming into contact with more abundant ones; interbreeding can swamp the rarer gene pool and create hybrids, depleting the purebred gene pool (for example, the endangered wild water buffalo is most threatened with extinction by genetic pollution from the abundant domestic water buffalo). Such extinctions are not always apparent from morphological (non-genetic) observations. Some degree of gene flow is a normal evolutionary process; nevertheless, hybridization (with or without introgression) threatens rare species' existence.
The gene pool of a species or a population is the variety of genetic information in its living members. A large gene pool (extensive genetic diversity) is associated with robust populations that can survive bouts of intense selection. Meanwhile, low genetic diversity (see inbreeding and population bottlenecks) reduces the range of adaptations possible. Replacing native with alien genes narrows genetic diversity within the original population, thereby increasing the chance of extinction.
Habitat degradation
Habitat degradation is currently the main anthropogenic cause of species extinctions. The main cause of habitat degradation worldwide is agriculture, with urban sprawl, logging, mining, and some fishing practices close behind. The degradation of a species' habitat may alter the fitness landscape to such an extent that the species is no longer able to survive and becomes extinct. This may occur by direct effects, such as the environment becoming toxic, or indirectly, by limiting a species' ability to compete effectively for diminished resources or against new competitor species.
Habitat destruction, particularly the removal of vegetation that stabilizes soil, enhances erosion and diminishes nutrient availability in terrestrial ecosystems. This degradation can lead to a reduction in agricultural productivity. Furthermore, increased erosion contributes to poorer water quality by elevating the levels of sediment and pollutants in rivers and streams.
Habitat degradation through toxicity can kill off a species very rapidly, by killing all living members through contamination or sterilizing them. It can also occur over longer periods at lower toxicity levels by affecting life span, reproductive capacity, or competitiveness.
Habitat degradation can also take the form of a physical destruction of niche habitats. The widespread destruction of tropical rainforests and replacement with open pastureland is widely cited as an example of this; elimination of the dense forest eliminated the infrastructure needed by many species to survive. For example, a fern that depends on dense shade for protection from direct sunlight can no longer survive without forest to shelter it. Another example is the destruction of ocean floors by bottom trawling.
Diminished resources or introduction of new competitor species also often accompany habitat degradation. Global warming has allowed some species to expand their range, bringing competition to other species that previously occupied that area. Sometimes these new competitors are predators and directly affect prey species, while at other times they may merely outcompete vulnerable species for limited resources. Vital resources including water and food can also be limited during habitat degradation, leading to extinction.
Predation, competition, and disease
In the natural course of events, species become extinct for a number of reasons, including but not limited to: extinction of a necessary host, prey or pollinator, interspecific competition, inability to deal with evolving diseases and changing environmental conditions (particularly sudden changes) which can act to introduce novel predators, or to remove prey. Recently in geological time, humans have become an additional cause of extinction of some species, either as a new mega-predator or by transporting animals and plants from one part of the world to another. Such introductions have been occurring for thousands of years, sometimes intentionally (e.g. livestock released by sailors on islands as a future source of food) and sometimes accidentally (e.g. rats escaping from boats). In most cases, the introductions are unsuccessful, but when an invasive alien species does become established, the consequences can be catastrophic. Invasive alien species can affect native species directly by eating them, competing with them, and introducing pathogens or parasites that sicken or kill them; or indirectly by destroying or degrading their habitat. Human populations may themselves act as invasive predators. According to the "overkill hypothesis", the swift extinction of the megafauna in areas such as Australia (40,000 years before present), North and South America (12,000 years before present), Madagascar, Hawaii (AD 300–1000), and New Zealand (AD 1300–1500), resulted from the sudden introduction of human beings to environments full of animals that had never seen them before and were therefore completely unadapted to their predation techniques.
Coextinction
Coextinction refers to the loss of a species due to the extinction of another; for example, the extinction of parasitic insects following the loss of their hosts. Coextinction can also occur when a species loses its pollinator, or when predators in a food chain lose their prey. "Species coextinction is a manifestation of one of the interconnectednesses of organisms in complex ecosystems ... While coextinction may not be the most important cause of species extinctions, it is certainly an insidious one." Coextinction is especially common when a keystone species goes extinct. Models suggest that coextinction is the most common form of biodiversity loss. There may be a cascade of coextinction across the trophic levels. Such effects are most severe in mutualistic and parasitic relationships. An example of coextinction is the Haast's eagle and the moa: the Haast's eagle was a predator that became extinct because its food source became extinct. The moa were several species of flightless birds that were a food source for the Haast's eagle.
Climate change
Extinction as a result of climate change has been confirmed by fossil studies, particularly the extinction of amphibians during the Carboniferous Rainforest Collapse, 305 million years ago. A 2003 review across 14 biodiversity research centers predicted that, because of climate change, 15–37% of land species would be "committed to extinction" by 2050. The ecologically rich areas that would potentially suffer the heaviest losses include the Cape Floristic Region and the Caribbean Basin. These areas might see a doubling of present carbon dioxide levels and rising temperatures that could eliminate 56,000 plant and 3,700 animal species. Climate change has also been found to be a factor in habitat loss and desertification.
Sexual selection and male investment
Studies of fossils that follow species from the time they evolved to their extinction show that species with high sexual dimorphism, especially characteristics in males used to compete for mating, are at a higher risk of extinction and die out faster than less sexually dimorphic species: the least sexually dimorphic species survive for millions of years, while the most sexually dimorphic die out within mere thousands of years. Earlier studies, based on counting the number of currently living species in modern taxa, found a higher number of species in more sexually dimorphic taxa, which was interpreted as higher survival in taxa with more sexual selection. However, such studies of modern species only measure indirect effects of extinction and are subject to error sources, such as dying and doomed taxa speciating more as habitat ranges split into small isolated groups during the habitat retreat of taxa approaching extinction. Possible causes of the higher extinction risk shown by the comprehensive fossil studies, which rule out such error sources, include expensive sexually selected ornaments reducing the ability to survive natural selection, as well as sexual selection removing a diversity of genes that are neutral under current ecological conditions but some of which may be important for surviving climate change.
Mass extinctions
There have been at least five mass extinctions in the history of life on earth, and four in the last 350 million years in which many species have disappeared in a relatively short period of geological time. A massive eruptive event that released large quantities of tephra particles into the atmosphere is considered to be one likely cause of the "Permian–Triassic extinction event" about 250 million years ago, which is estimated to have killed 90% of species then existing. There is also evidence to suggest that this event was preceded by another mass extinction, known as Olson's Extinction. The Cretaceous–Paleogene extinction event (K–Pg) occurred 66 million years ago, at the end of the Cretaceous period; it is best known for having wiped out non-avian dinosaurs, among many other species.
Modern extinctions
According to a 1998 survey of 400 biologists conducted by New York's American Museum of Natural History, nearly 70% believed that the Earth is currently in the early stages of a human-caused mass extinction, known as the Holocene extinction. In that survey, the same proportion of respondents agreed with the prediction that up to 20% of all living populations could become extinct within 30 years (by 2028). A 2014 special edition of Science declared there is widespread consensus on the issue of human-driven mass species extinctions. A 2020 study published in PNAS stated that the contemporary extinction crisis "may be the most serious environmental threat to the persistence of civilization, because it is irreversible."
Biologist E. O. Wilson estimated in 2002 that if current rates of human destruction of the biosphere continue, one-half of all plant and animal species on earth will be extinct in 100 years. More significantly, the current rate of global species extinctions is estimated as 100 to 1,000 times "background" rates (the average extinction rates in the evolutionary time scale of planet Earth), faster than at any other time in human history, while future rates are likely 10,000 times higher. However, some groups are going extinct much faster. Biologists Paul R. Ehrlich and Stuart Pimm, among others, contend that human population growth and overconsumption are the main drivers of the modern extinction crisis.
In January 2020, the UN's Convention on Biological Diversity drafted a plan to mitigate the contemporary extinction crisis by establishing a deadline of 2030 to protect 30% of the Earth's land and oceans and reduce pollution by 50%, with the goal of allowing for the restoration of ecosystems by 2050. The 2020 United Nations' Global Biodiversity Outlook report stated that of the 20 biodiversity goals laid out by the Aichi Biodiversity Targets in 2010, only 6 were "partially achieved" by the deadline of 2020. The report warned that biodiversity will continue to decline if the status quo is not changed, in particular the "currently unsustainable patterns of production and consumption, population growth and technological developments". In a 2021 report published in the journal Frontiers in Conservation Science, some top scientists asserted that even if the Aichi Biodiversity Targets set for 2020 had been achieved, it would not have resulted in a significant mitigation of biodiversity loss. They added that failure of the global community to reach these targets is hardly surprising given that biodiversity loss is "nowhere close to the top of any country's priorities, trailing far behind other concerns such as employment, healthcare, economic growth, or currency stability."
History of scientific understanding
For much of history, the modern understanding of extinction as the end of a species was incompatible with the prevailing worldview. Prior to the 19th century, much of Western society adhered to the belief that the world was created by God and as such was complete and perfect. This concept reached its heyday in the 1700s with the peak popularity of a theological concept called the great chain of being, in which all life on earth, from the tiniest microorganism to God, is linked in a continuous chain. The extinction of a species was impossible under this model, as it would create gaps or missing links in the chain and destroy the natural order. Thomas Jefferson was a firm supporter of the great chain of being and an opponent of extinction, famously denying the extinction of the woolly mammoth on the grounds that nature never allows a race of animals to become extinct.
A series of fossils was discovered in the late 17th century that appeared unlike any living species. As a result, the scientific community embarked on a voyage of creative rationalization, seeking to understand what had happened to these species within a framework that did not account for total extinction. In October 1686, Robert Hooke presented an impression of a nautilus to the Royal Society that was more than two feet in diameter, and morphologically distinct from any known living species. Hooke theorized that this was simply because the species lived in the deep ocean and no one had discovered them yet. While he contended that it was possible a species could be "lost", he thought this highly unlikely. Similarly, in 1695, Sir Thomas Molyneux published an account of enormous antlers found in Ireland that did not belong to any extant taxa in that area. Molyneux reasoned that they came from the North American moose and that the animal had once been common on the British Isles. Rather than suggest that this indicated the possibility of species going extinct, he argued that although organisms could become locally extinct, they could never be entirely lost and would continue to exist in some unknown region of the globe. The antlers were later confirmed to be from the extinct deer Megaloceros. Hooke and Molyneux's line of thinking was difficult to disprove. When parts of the world had not been thoroughly examined and charted, scientists could not rule out that animals found only in the fossil record were simply "hiding" in unexplored regions of the Earth.
Georges Cuvier is credited with establishing the modern conception of extinction in a 1796 lecture to the French Institute, though he would spend most of his career trying to convince the wider scientific community of his theory. Cuvier was a well-regarded geologist, lauded for his ability to reconstruct the anatomy of an unknown species from a few fragments of bone. His primary evidence for extinction came from mammoth skulls found near Paris. Cuvier recognized them as distinct from any known living species of elephant, and argued that it was highly unlikely such an enormous animal would go undiscovered. In 1798, he studied a fossil from the Paris Basin that was first observed by Robert de Lamanon in 1782, first hypothesizing that it belonged to a canine but then deciding that it instead belonged to an animal that was unlike living ones. His study paved the way to his naming of the extinct mammal genus Palaeotherium in 1804 based on the skull and additional fossil material along with another extinct contemporary mammal genus Anoplotherium. In both genera, he noticed that their fossils shared some similarities with other mammals like ruminants and rhinoceroses but still had distinct differences. In 1812, Cuvier, along with Alexandre Brongniart and Geoffroy Saint-Hilaire, mapped the strata of the Paris basin. They saw alternating saltwater and freshwater deposits, as well as patterns of the appearance and disappearance of fossils throughout the record. From these patterns, Cuvier inferred historic cycles of catastrophic flooding, extinction, and repopulation of the earth with new species.
Cuvier's fossil evidence showed that very different life forms existed in the past than those that exist today, a fact that was accepted by most scientists. The primary debate focused on whether this turnover caused by extinction was gradual or abrupt in nature. Cuvier understood extinction to be the result of cataclysmic events that wipe out huge numbers of species, as opposed to the gradual decline of a species over time. His catastrophic view of the nature of extinction garnered him many opponents in the newly emerging school of uniformitarianism.
Jean-Baptiste Lamarck, a gradualist and colleague of Cuvier, saw the fossils of different life forms as evidence of the mutable character of species. While Lamarck did not deny the possibility of extinction, he believed that it was exceptional and rare and that most of the change in species over time was due to gradual change. Unlike Cuvier, Lamarck was skeptical that catastrophic events of a scale large enough to cause total extinction were possible. In his geological history of the earth titled Hydrogeologie, Lamarck instead argued that the surface of the earth was shaped by gradual erosion and deposition by water, and that species changed over time in response to the changing environment.
Charles Lyell, a noted geologist and founder of uniformitarianism, believed that past processes should be understood using present-day processes. Like Lamarck, Lyell acknowledged that extinction could occur, noting the total extinction of the dodo and the extirpation of horses indigenous to the British Isles. He similarly argued against mass extinctions, believing that any extinction must be a gradual process. Lyell also showed that Cuvier's original interpretation of the Parisian strata was incorrect. Instead of the catastrophic floods inferred by Cuvier, Lyell demonstrated that patterns of saltwater and freshwater deposits, like those seen in the Paris basin, could be formed by a slow rise and fall of sea levels.
The concept of extinction was integral to Charles Darwin's On the Origin of Species, with less fit lineages disappearing over time. For Darwin, extinction was a constant side effect of competition. Because of the wide reach of On the Origin of Species, it was widely accepted that extinction occurred gradually and evenly (a concept now referred to as background extinction). It was not until 1982, when David Raup and Jack Sepkoski published their seminal paper on mass extinctions, that Cuvier was vindicated and catastrophic extinction was accepted as an important mechanism. The current understanding of extinction is a synthesis of the cataclysmic extinction events proposed by Cuvier, and the background extinction events proposed by Lyell and Darwin.
Human attitudes and interests
Extinction is an important research topic in the field of zoology, and biology in general, and has also become an area of concern outside the scientific community. A number of organizations, such as the World Wide Fund for Nature, have been created with the goal of preserving species from extinction. Governments have attempted, through enacting laws, to avoid habitat destruction, agricultural over-harvesting, and pollution. While many human-caused extinctions have been accidental, humans have also engaged in the deliberate destruction of some species, such as dangerous viruses, and the total destruction of other problematic species has been suggested. Other species were deliberately driven to extinction, or nearly so, due to poaching or because they were "undesirable", or to push for other human agendas. One example was the near extinction of the American bison, which was nearly wiped out by mass hunts sanctioned by the United States government, to force the removal of Native Americans, many of whom relied on the bison for food.
Biologist Bruce Walsh states three reasons for scientific interest in the preservation of species: genetic resources, ecosystem stability, and ethics; and today the scientific community "stress[es] the importance" of maintaining biodiversity.
In modern times, commercial and industrial interests often have to contend with the effects of production on plant and animal life. However, some technologies with minimal, or no, proven harmful effects on Homo sapiens can be devastating to wildlife (for example, DDT). Biogeographer Jared Diamond notes that while big business may label environmental concerns as "exaggerated", and often cause "devastating damage", some corporations find it in their interest to adopt good conservation practices, and even engage in preservation efforts that surpass those taken by national parks.
Governments sometimes see the loss of native species as a loss to ecotourism, and can enact laws with severe punishment against the trade in native species in an effort to prevent extinction in the wild. Nature preserves are created by governments as a means to provide continuing habitats to species crowded by human expansion. The 1992 Convention on Biological Diversity has resulted in international Biodiversity Action Plan programmes, which attempt to provide comprehensive guidelines for government biodiversity conservation. Advocacy groups, such as The Wildlands Project and the Alliance for Zero Extinctions, work to educate the public and pressure governments into action.
People who live close to nature can be dependent on the survival of all the species in their environment, leaving them highly exposed to extinction risks. However, people prioritize day-to-day survival over species conservation; with human overpopulation in tropical developing countries, there has been enormous pressure on forests due to subsistence agriculture, including slash-and-burn agricultural techniques that can reduce endangered species' habitats.
Antinatalist philosopher David Benatar concludes that any popular concern about non-human species extinction usually arises out of concern about how the loss of a species will impact human wants and needs, that "we shall live in a world impoverished by the loss of one aspect of faunal diversity, that we shall no longer be able to behold or use that species of animal." He notes that typical concerns about possible human extinction, such as the loss of individual members, are not considered in regards to non-human species extinction. Anthropologist Jason Hickel speculates that the reason humanity seems largely indifferent to anthropogenic mass species extinction is that we see ourselves as separate from the natural world and the organisms within it. He says that this is due in part to the logic of capitalism: "that the world is not really alive, and it is certainly not our kin, but rather just stuff to be extracted and discarded – and that includes most of the human beings living here too."
Planned extinction
Completed
The smallpox virus is now extinct in the wild, although samples are retained in laboratory settings.
The rinderpest virus, which infected domestic cattle, is now extinct in the wild.
Proposed
Disease agents
The poliovirus is now confined to small parts of the world due to extermination efforts.
Dracunculus medinensis, or Guinea worm, a parasitic worm which causes the disease dracunculiasis, is now close to eradication thanks to efforts led by the Carter Center.
Treponema pallidum pertenue, a bacterium which causes the disease yaws, is in the process of being eradicated.
Disease vectors
Biologist Olivia Judson has advocated the deliberate extinction of certain disease-carrying mosquito species. In a September 25, 2003 article in The New York Times, she advocated "specicide" of thirty mosquito species by introducing a genetic element that can insert itself into another crucial gene, to create recessive "knockout genes". She says that the Anopheles mosquitoes (which spread malaria) and Aedes mosquitoes (which spread dengue fever, yellow fever, elephantiasis, and other diseases) represent only 30 of around 3,500 mosquito species; eradicating these would save at least one million human lives per year, at a cost of reducing the genetic diversity of the family Culicidae by only 1%. She further argues that since species become extinct "all the time" the disappearance of a few more will not destroy the ecosystem: "We're not left with a wasteland every time a species vanishes. Removing one species sometimes causes shifts in the populations of other species—but different need not mean worse." In addition, anti-malarial and mosquito control programs offer little realistic hope to the 300 million people in developing nations who will be infected with acute illnesses this year. Although trials are ongoing, she writes that if they fail "we should consider the ultimate swatting."
Biologist E. O. Wilson has advocated the eradication of several species of mosquito, including malaria vector Anopheles gambiae. Wilson stated, "I'm talking about a very small number of species that have co-evolved with us and are preying on humans, so it would certainly be acceptable to remove them. I believe it's just common sense."
There have been many campaigns – some successful – to locally eradicate tsetse flies and their trypanosomes in areas, countries, and islands of Africa (including Príncipe). There are currently serious efforts to do away with them all across Africa, and this is generally, though not universally, viewed as beneficial and morally necessary.
Cloning
Some, such as Harvard geneticist George M. Church, believe that ongoing technological advances will let us "bring back to life" an extinct species by cloning, using DNA from the remains of that species. Proposed targets for cloning include the mammoth, the thylacine, and the Pyrenean ibex. For this to succeed, enough individuals would have to be cloned, from the DNA of different individuals (in the case of sexually reproducing organisms) to create a viable population. Though bioethical and philosophical objections have been raised, the cloning of extinct creatures seems theoretically possible.
In 2003, scientists tried to clone the extinct Pyrenean ibex (C. p. pyrenaica). This attempt failed: of the 285 embryos reconstructed, 54 were transferred to 12 Spanish ibexes and ibex–domestic goat hybrids, but only two survived the initial two months of gestation before they, too, died. In 2009, a second attempt was made to clone the Pyrenean ibex: one clone was born alive, but died seven minutes later, due to physical defects in the lungs.
See also
References
Further reading
External links
Committee on recently extinct organisms
The age of extinction series in The Guardian
Biota by conservation status
Environmental conservation
Evolutionary biology
IUCN Red List | Extinction | [
"Biology"
] | 7,953 | [
"Evolutionary biology",
"Biota by conservation status",
"Biodiversity"
] |
49,420 | https://en.wikipedia.org/wiki/CMOS | Complementary metal–oxide–semiconductor (CMOS, pronounced "sea-moss") is a type of metal–oxide–semiconductor field-effect transistor (MOSFET) fabrication process that uses complementary and symmetrical pairs of p-type and n-type MOSFETs for logic functions. CMOS technology is used for constructing integrated circuit (IC) chips, including microprocessors, microcontrollers, memory chips (including CMOS BIOS), and other digital logic circuits. CMOS technology is also used for analog circuits such as image sensors (CMOS sensors), data converters, RF circuits (RF CMOS), and highly integrated transceivers for many types of communication.
In 1948, Bardeen and Brattain patented an insulated-gate transistor (IGFET) with an inversion layer. Bardeen's concept forms the basis of CMOS technology today. The CMOS process was presented by Fairchild Semiconductor's Frank Wanlass and Chih-Tang Sah at the International Solid-State Circuits Conference in 1963. Wanlass later filed US patent 3,356,858 for CMOS circuitry and it was granted in 1967. RCA commercialized the technology with the trademark "COS-MOS" in the late 1960s, forcing other manufacturers to find another name, leading to "CMOS" becoming the standard name for the technology by the early 1970s. CMOS overtook NMOS logic as the dominant MOSFET fabrication process for very large-scale integration (VLSI) chips in the 1980s, also replacing earlier transistor–transistor logic (TTL) technology. CMOS has since remained the standard fabrication process for MOSFET semiconductor devices in VLSI chips. As of 2011, 99% of IC chips, including most digital, analog and mixed-signal ICs, were fabricated using CMOS technology.
Two important characteristics of CMOS devices are high noise immunity and low static power consumption.
Since one transistor of the MOSFET pair is always off, the series combination draws significant power only momentarily during switching between on and off states. Consequently, CMOS devices do not produce as much waste heat as other forms of logic, like NMOS logic or transistor–transistor logic (TTL), which normally have some standing current even when not changing state. These characteristics allow CMOS to integrate a high density of logic functions on a chip. It was primarily for this reason that CMOS became the most widely used technology to be implemented in VLSI chips.
The phrase "metal–oxide–semiconductor" is a reference to the physical structure of MOS field-effect transistors, having a metal gate electrode placed on top of an oxide insulator, which in turn is on top of a semiconductor material. Aluminium was once used but now the material is polysilicon. Other metal gates have made a comeback with the advent of high-κ dielectric materials in the CMOS process, as announced by IBM and Intel for the 45 nanometer node and smaller sizes.
History
The principle of complementary symmetry was first introduced by George Sziklai in 1953, who then discussed several complementary bipolar circuits. Paul Weimer, also at RCA, invented thin-film transistor (TFT) complementary circuits in 1962, a close relative of CMOS. He invented complementary flip-flop and inverter circuits, but did not work on more complex complementary logic. He was the first person able to put p-channel and n-channel TFTs in a circuit on the same substrate. Three years earlier, John T. Wallmark and Sanford M. Marcus published a variety of complex logic functions implemented as integrated circuits using JFETs, including complementary memory circuits. Frank Wanlass was familiar with work done by Weimer at RCA.
In 1955, Carl Frosch and Lincoln Derick accidentally grew a layer of silicon dioxide over the silicon wafer, for which they observed surface passivation effects. By 1957, Frosch and Derick, using masking and predeposition, were able to manufacture silicon dioxide transistors and showed that silicon dioxide insulated and protected silicon wafers and prevented dopants from diffusing into the wafer. J.R. Ligenza and W.G. Spitzer studied the mechanism of thermally grown oxides and fabricated a high quality Si/SiO2 stack in 1960.
Following this research, Mohamed Atalla and Dawon Kahng proposed a silicon MOS transistor in 1959 and successfully demonstrated a working MOS device with their Bell Labs team in 1960. Their team included E. E. LaBate and E. I. Povilonis, who fabricated the device; M. O. Thurston, L. A. D'Asaro, and J. R. Ligenza, who developed the diffusion processes; and H. K. Gummel and R. Lindner, who characterized the device. There were originally two types of MOSFET logic, PMOS (p-type MOS) and NMOS (n-type MOS). Both types were developed by Frosch and Derick in 1957 at Bell Labs.
In 1948, Bardeen and Brattain patented the progenitor of MOSFET, an insulated-gate FET (IGFET) with an inversion layer. Bardeen's patent, and the concept of an inversion layer, forms the basis of CMOS technology today. A new type of MOSFET logic combining both the PMOS and NMOS processes was developed, called complementary MOS (CMOS), by Chih-Tang Sah and Frank Wanlass at Fairchild. In February 1963, they published the invention in a research paper. In both the research paper and the patent filed by Wanlass, the fabrication of CMOS devices was outlined, on the basis of thermal oxidation of a silicon substrate to yield a layer of silicon dioxide located between the drain contact and the source contact.
CMOS was commercialised by RCA in the late 1960s. RCA adopted CMOS for the design of integrated circuits (ICs), developing CMOS circuits for an Air Force computer in 1965 and then a 288-bit CMOS SRAM memory chip in 1968. RCA also used CMOS for its 4000-series integrated circuits in 1968, starting with a 20 μm semiconductor manufacturing process before gradually scaling to a 10 μm process over the next several years.
CMOS technology was initially overlooked by the American semiconductor industry in favour of NMOS, which was more powerful at the time. However, CMOS was quickly adopted and further advanced by Japanese semiconductor manufacturers due to its low power consumption, leading to the rise of the Japanese semiconductor industry. Toshiba developed C2MOS (Clocked CMOS), a circuit technology with lower power consumption and faster operating speed than ordinary CMOS, in 1969. Toshiba used its C2MOS technology to develop a large-scale integration (LSI) chip for Sharp's Elsi Mini LED pocket calculator, developed in 1971 and released in 1972. Suwa Seikosha (now Seiko Epson) began developing a CMOS IC chip for a Seiko quartz watch in 1969, and began mass-production with the launch of the Seiko Analog Quartz 38SQW watch in 1971. The first mass-produced CMOS consumer electronic product was the Hamilton Pulsar "Wrist Computer" digital watch, released in 1970. Due to low power consumption, CMOS logic has been widely used for calculators and watches since the 1970s.
The earliest microprocessors in the early 1970s were PMOS processors, which initially dominated the early microprocessor industry. By the late 1970s, NMOS microprocessors had overtaken PMOS processors. CMOS microprocessors were introduced in 1975, with the Intersil 6100, and RCA CDP 1801. However, CMOS processors did not become dominant until the 1980s.
CMOS was initially slower than NMOS logic, thus NMOS was more widely used for computers in the 1970s. The Intel 5101 (1 kb SRAM) CMOS memory chip (1974) had an access time of 800 ns, whereas the fastest NMOS chip at the time, the Intel 2147 (4 kb SRAM) HMOS memory chip (1976), had an access time of 55/70 ns. In 1978, a Hitachi research team led by Toshiaki Masuhara introduced the twin-well Hi-CMOS process, with its HM6147 (4 kb SRAM) memory chip, manufactured with a 3 μm process. The Hitachi HM6147 chip was able to match the performance (55/70 ns access) of the Intel 2147 HMOS chip, while the HM6147 also consumed significantly less power (15 mA) than the 2147 (110 mA). With comparable performance and much less power consumption, the twin-well CMOS process eventually overtook NMOS as the most common semiconductor manufacturing process for computers in the 1980s.
In the 1980s, CMOS microprocessors overtook NMOS microprocessors. NASA's Galileo spacecraft, sent to orbit Jupiter in 1989, used the RCA 1802 CMOS microprocessor due to low power consumption.
Intel introduced a 1.5 μm process for CMOS semiconductor device fabrication in 1983. In the mid-1980s, Bijan Davari of IBM developed high-performance, low-voltage, deep sub-micron CMOS technology, which enabled the development of faster computers as well as portable computers and battery-powered handheld electronics. In 1988, Davari led an IBM team that demonstrated a high-performance 250 nanometer CMOS process.
Fujitsu commercialized a 700 nm CMOS process in 1987, and then Hitachi, Mitsubishi Electric, NEC and Toshiba commercialized 500 nm CMOS in 1989. In 1993, Sony commercialized a 350 nm CMOS process, while Hitachi and NEC commercialized 250 nm CMOS. Hitachi introduced a 160 nm CMOS process in 1995, then Mitsubishi introduced 150 nm CMOS in 1996, and then Samsung Electronics introduced 140 nm in 1999.
In 2000, Gurtej Singh Sandhu and Trung T. Doan at Micron Technology invented atomic layer deposition high-κ dielectric films, leading to the development of a cost-effective 90 nm CMOS process. Toshiba and Sony developed a 65 nm CMOS process in 2002, and then TSMC initiated the development of 45 nm CMOS logic in 2004. The development of pitch double patterning by Gurtej Singh Sandhu at Micron Technology led to the development of 30 nm class CMOS in the 2000s.
CMOS is used in most modern LSI and VLSI devices. As of 2010, CPUs with the best performance per watt each year have been CMOS static logic since 1976. As of 2019, planar CMOS technology is still the most common form of semiconductor device fabrication, but is gradually being replaced by non-planar FinFET technology, which is capable of manufacturing semiconductor nodes smaller than 20 nm.
Technical details
"CMOS" refers to both a particular style of digital circuitry design and the family of processes used to implement that circuitry on integrated circuits (chips). CMOS circuitry dissipates less power than logic families with resistive loads. Since this advantage has increased and grown more important, CMOS processes and variants have come to dominate, thus the vast majority of modern integrated circuit manufacturing is on CMOS processes. CMOS logic consumes around one seventh the power of NMOS logic, and about 10 million times less power than bipolar transistor-transistor logic (TTL).
CMOS circuits use a combination of p-type and n-type metal–oxide–semiconductor field-effect transistors (MOSFETs) to implement logic gates and other digital circuits. Although CMOS logic can be implemented with discrete devices for demonstrations, commercial CMOS products are integrated circuits composed of up to billions of transistors of both types, on a rectangular piece of silicon often between 10 and 400 mm² in area.
CMOS always uses enhancement-mode MOSFETs (in other words, a zero gate-to-source voltage turns the transistor off).
Inversion
CMOS circuits are constructed in such a way that all P-type metal–oxide–semiconductor (PMOS) transistors must have either an input from the voltage source or from another PMOS transistor. Similarly, all NMOS transistors must have either an input from ground or from another NMOS transistor. The composition of a PMOS transistor creates low resistance between its source and drain contacts when a low gate voltage is applied and high resistance when a high gate voltage is applied. On the other hand, the composition of an NMOS transistor creates high resistance between source and drain when a low gate voltage is applied and low resistance when a high gate voltage is applied. CMOS accomplishes current reduction by complementing every nMOSFET with a pMOSFET and connecting both gates and both drains together. A high voltage on the gates will cause the nMOSFET to conduct and the pMOSFET not to conduct, while a low voltage on the gates causes the reverse. This arrangement greatly reduces power consumption and heat generation. However, during the switching time, both pMOS and nMOS MOSFETs conduct briefly as the gate voltage transitions from one state to another. This induces a brief spike in power consumption and becomes a serious issue at high frequencies.
The adjacent image shows what happens when an input is connected to both a PMOS transistor (top of diagram) and an NMOS transistor (bottom of diagram). Vdd is some positive voltage connected to a power supply and Vss is ground. A is the input and Q is the output.
When the voltage of A is low (i.e. close to Vss), the NMOS transistor's channel is in a high resistance state, disconnecting Vss from Q. The PMOS transistor's channel is in a low resistance state, connecting Vdd to Q. Q, therefore, registers Vdd.
On the other hand, when the voltage of A is high (i.e. close to Vdd), the PMOS transistor is in a high resistance state, disconnecting Vdd from Q. The NMOS transistor is in a low resistance state, connecting Vss to Q. Now, Q registers Vss.
In short, the outputs of the PMOS and NMOS transistors are complementary such that when the input is low, the output is high, and when the input is high, the output is low. No matter what the input is, the output is never left floating (charge is never stored due to wire capacitance and lack of electrical drain/ground). Because of this behavior of input and output, the CMOS circuit's output is the inverse of the input.
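The complementary switching just described can be captured in a few lines of code. The following is a minimal switch-level sketch, an illustration rather than a circuit simulator: each transistor is treated as an ideal switch, with the PMOS conducting on a low gate and the NMOS on a high gate.

```python
# Switch-level sketch of the inverter described above: the PMOS device
# conducts when its gate is low, the NMOS when its gate is high, so
# exactly one of them drives the output at any time.
def cmos_inverter(a: int) -> int:
    pmos_on = (a == 0)   # low input: low-resistance path from Vdd to Q
    nmos_on = (a == 1)   # high input: low-resistance path from Q to Vss
    assert pmos_on != nmos_on   # statically, never both on, never floating
    return 1 if pmos_on else 0  # 1 stands for Vdd, 0 for Vss

for a in (0, 1):
    print(f"A={a} -> Q={cmos_inverter(a)}")  # A=0 -> Q=1, A=1 -> Q=0
```

The assertion makes the key property explicit: for any valid input, one and only one of the two transistors conducts, so the output is always driven.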
The transistors' resistances are never exactly equal to zero or infinity, so Q will never exactly equal Vss or Vdd, but Q will always be closer to Vss than A was to Vdd (or vice versa if A were close to Vss). Without this amplification, there would be a very low limit to the number of logic gates that could be chained together in series, and CMOS logic with billions of transistors would be impossible.
Power supply pins
The power supply pins for CMOS are called VDD and VSS, or VCC and Ground (GND), depending on the manufacturer. VDD and VSS are carryovers from conventional MOS circuits and stand for the drain and source supplies. These do not apply directly to CMOS, since both supplies are really source supplies. VCC and Ground are carryovers from TTL logic, and that nomenclature has been retained with the introduction of the 54C/74C line of CMOS.
Duality
An important characteristic of a CMOS circuit is the duality that exists between its PMOS transistors and NMOS transistors. A CMOS circuit is constructed so that a path always exists from the output to either the power source or ground. To accomplish this, the set of all paths to the voltage source must be the complement of the set of all paths to ground. This can be easily accomplished by defining one in terms of the NOT of the other. By De Morgan's laws, PMOS transistors in parallel have corresponding NMOS transistors in series, while PMOS transistors in series have corresponding NMOS transistors in parallel.
Logic
More complex logic functions such as those involving AND and OR gates require manipulating the paths between gates to represent the logic. When a path consists of two transistors in series, both transistors must have low resistance to the corresponding supply voltage, modelling an AND. When a path consists of two transistors in parallel, either one or both of the transistors must have low resistance to connect the supply voltage to the output, modelling an OR.
Shown on the right is a circuit diagram of a NAND gate in CMOS logic. If both of the A and B inputs are high, then both the NMOS transistors (bottom half of the diagram) will conduct, neither of the PMOS transistors (top half) will conduct, and a conductive path will be established between the output and Vss (ground), bringing the output low. If both of the A and B inputs are low, then neither of the NMOS transistors will conduct, while both of the PMOS transistors will conduct, establishing a conductive path between the output and Vdd (voltage source), bringing the output high. If either of the A or B inputs is low, one of the NMOS transistors will not conduct, one of the PMOS transistors will, and a conductive path will be established between the output and Vdd (voltage source), bringing the output high. As the only configuration of the two inputs that results in a low output is when both are high, this circuit implements a NAND (NOT AND) logic gate.
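Extending the same switch-level sketch to the NAND gate makes the series/parallel structure visible: the two NMOS transistors in series become a logical AND on the pull-down condition, and the two PMOS transistors in parallel become a logical OR on the pull-up condition. As before, this is an idealized illustration, not a simulator.

```python
# Switch-level sketch of the CMOS NAND gate described above.
def cmos_nand(a: int, b: int) -> int:
    pull_down = (a == 1) and (b == 1)  # series NMOS: both must conduct
    pull_up = (a == 0) or (b == 0)     # parallel PMOS: either suffices
    assert pull_up != pull_down        # output always driven, never floating
    return 1 if pull_up else 0

for a in (0, 1):
    for b in (0, 1):
        print(f"A={a} B={b} -> out={cmos_nand(a, b)}")
# Only A=1 B=1 gives out=0, the NAND truth table.
```

The assertion encodes the duality property from the previous section: because the pull-up network is the De Morgan complement of the pull-down network, exactly one of them conducts for every input combination.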
An advantage of CMOS over NMOS logic is that both low-to-high and high-to-low output transitions are fast since the (PMOS) pull-up transistors have low resistance when switched on, unlike the load resistors in NMOS logic. In addition, the output signal swings the full voltage between the low and high rails. This strong, more nearly symmetric response also makes CMOS more resistant to noise.
See Logical effort for a method of calculating delay in a CMOS circuit.
Example: NAND gate in physical layout
This example shows a NAND logic device drawn as a physical representation as it would be manufactured. The physical layout perspective is a "bird's eye view" of a stack of layers. The circuit is constructed on a P-type substrate. The polysilicon, diffusion, and n-well are referred to as "base layers" and are actually inserted into trenches of the P-type substrate. (See steps 1 to 6 in the process diagram below right) The contacts penetrate an insulating layer between the base layers and the first layer of metal (metal1) making a connection.
The inputs to the NAND (illustrated in green color) are in polysilicon. The transistors (devices) are formed by the intersection of the polysilicon and diffusion; N diffusion for the N device & P diffusion for the P device (illustrated in salmon and yellow coloring respectively). The output ("out") is connected together in metal (illustrated in cyan coloring). Connections between metal and polysilicon or diffusion are made through contacts (illustrated as black squares). The physical layout example matches the NAND logic circuit given in the previous example.
The N device is manufactured on a P-type substrate while the P device is manufactured in an N-type well (n-well). A P-type substrate "tap" is connected to VSS and an N-type n-well tap is connected to VDD to prevent latchup.
Power: switching and leakage
CMOS logic dissipates less power than NMOS logic circuits because CMOS dissipates power only when switching ("dynamic power"). On a typical ASIC in a modern 90 nanometer process, switching the output might take 120 picoseconds, and happens once every ten nanoseconds. NMOS logic dissipates power whenever the transistor is on, because there is a current path from Vdd to Vss through the load resistor and the n-type network.
Static CMOS gates are very power efficient because they dissipate nearly zero power when idle. Earlier, the power consumption of CMOS devices was not the major concern while designing chips. Factors like speed and area dominated the design parameters. As the CMOS technology moved below sub-micron levels the power consumption per unit area of the chip has risen tremendously.
Broadly classifying, power dissipation in CMOS circuits occurs because of two components, static and dynamic:
Static dissipation
Both NMOS and PMOS transistors have a gate–source threshold voltage (Vth), below which the current (called subthreshold current) through the device drops exponentially. Historically, CMOS circuits operated at supply voltages much larger than their threshold voltages (Vdd might have been 5 V, and Vth for both NMOS and PMOS might have been 700 mV). A special type of transistor used in some CMOS circuits is the native transistor, with near zero threshold voltage.
SiO2 is a good insulator, but at very small thickness levels electrons can tunnel across the very thin insulation; the probability drops off exponentially with oxide thickness. Tunnelling current becomes very important for transistors below 130 nm technology with gate oxides of 20 Å or thinner.
Small reverse leakage currents are formed due to the formation of reverse bias between diffusion regions and wells (e.g., p-type diffusion vs. n-well), and between wells and the substrate (e.g., n-well vs. p-substrate). In modern processes, diode leakage is very small compared to subthreshold and tunnelling currents, so it may be neglected during power calculations.
If the PMOS and NMOS transistors are not sized so that their drive strengths match, their currents may differ; the resulting imbalance causes improper current flow, which heats the CMOS device and dissipates power unnecessarily. Furthermore, recent studies have shown that leakage power reduces due to aging effects as a trade-off for devices becoming slower.
To speed up designs, manufacturers have switched to constructions that have lower voltage thresholds, but because of this a modern NMOS transistor with a Vth of 200 mV has a significant subthreshold leakage current. Designs (e.g. desktop processors) which include vast numbers of circuits which are not actively switching still consume power because of this leakage current. Leakage power is a significant portion of the total power consumed by such designs. Multi-threshold CMOS (MTCMOS), now available from foundries, is one approach to managing leakage power. With MTCMOS, high Vth transistors are used when switching speed is not critical, while low Vth transistors are used in speed-sensitive paths. Further technology advances that use even thinner gate dielectrics have an additional leakage component because of current tunnelling through the extremely thin gate dielectric. Using high-κ dielectrics instead of silicon dioxide, the conventional gate dielectric, allows similar device performance with a thicker gate insulator, thus avoiding this current. Leakage power reduction using new material and system designs is critical to sustaining scaling of CMOS.
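The exponential dependence of subthreshold leakage on the threshold voltage can be illustrated with a back-of-envelope calculation. In the subthreshold region, off-state current scales roughly as exp(−Vth/(n·kT/q)); the slope factor n below is an illustrative assumption, not data for any real process.

```python
import math

# Rough scaling of off-state subthreshold current with threshold voltage:
# I_off is proportional to exp(-Vth / (n * kT/q)). Device-specific
# prefactors cancel in the ratio, so only the exponent matters here.
THERMAL_VOLTAGE = 0.0259  # kT/q at about 300 K, in volts
N_FACTOR = 1.5            # subthreshold slope factor (assumed)

def relative_leakage(vth_volts: float) -> float:
    return math.exp(-vth_volts / (N_FACTOR * THERMAL_VOLTAGE))

ratio = relative_leakage(0.200) / relative_leakage(0.700)
print(f"200 mV vs 700 mV threshold: ~{ratio:.1e}x more leakage")
# With these assumptions, cutting Vth from the historical 700 mV to a
# modern 200 mV raises off-state leakage by a factor of a few hundred
# thousand -- which is why low-Vth designs leak significantly.
```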
Dynamic dissipation
Charging and discharging of load capacitances
CMOS circuits dissipate power by charging the various load capacitances (mostly gate and wire capacitance, but also drain and some source capacitances) whenever they are switched. In one complete cycle of CMOS logic, current flows from VDD to the load capacitance to charge it and then flows from the charged load capacitance (CL) to ground during discharge. Therefore, in one complete charge/discharge cycle, a total charge Q = CL·VDD is transferred from VDD to ground. Multiplying by the switching frequency f of the load capacitances gives the current used, and multiplying by the average voltage again gives the characteristic switching power dissipated by a CMOS device: P = CL·VDD²·f.
Since most gates do not operate/switch at every clock cycle, they are often accompanied by a factor α, called the activity factor. Now, the dynamic power dissipation may be re-written as P = α·CL·VDD²·f.
A clock in a system has an activity factor α=1, since it rises and falls every cycle. Most data has an activity factor of 0.1. If correct load capacitance is estimated on a node together with its activity factor, the dynamic power dissipation at that node can be calculated effectively.
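Plugging representative numbers into P = α·CL·VDD²·f shows the scale involved. The load capacitance, supply voltage, and frequency below are illustrative assumptions, not figures for any particular process.

```python
# Dynamic power at one switching node, P = alpha * C_L * Vdd^2 * f.
alpha = 0.1      # activity factor typical of data nodes (see text above)
c_load = 10e-15  # 10 fF load capacitance (assumed)
vdd = 1.0        # supply voltage in volts (assumed)
freq = 1e9       # 1 GHz clock (assumed)

power = alpha * c_load * vdd**2 * freq
print(f"dynamic power per node: {power * 1e6:.1f} uW")  # 1.0 uW
# A chip with a million such actively switching nodes would dissipate
# about a watt in switching power alone.
```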
Short-circuit power
Since there is a finite rise/fall time for both pMOS and nMOS transistors, during a transition (for example, from off to on) both transistors will be on for a small period of time, in which current finds a path directly from VDD to ground, creating a short-circuit current, sometimes called a crowbar current. Short-circuit power dissipation increases with the rise and fall time of the transistors.
This form of power consumption became significant in the 1990s as wires on chip became narrower and the long wires became more resistive. CMOS gates at the end of those resistive wires see slow input transitions. Careful design which avoids weakly driven long skinny wires reduces this effect, but crowbar power can be a substantial part of dynamic CMOS power.
Input protection
Parasitic transistors that are inherent in the CMOS structure may be turned on by input signals outside the normal operating range, e.g. electrostatic discharges or line reflections. The resulting latch-up may damage or destroy the CMOS device. Clamp diodes are included in CMOS circuits to deal with these signals. Manufacturers' data sheets specify the maximum permitted current that may flow through the diodes.
Analog CMOS
Besides digital applications, CMOS technology is also used in analog applications. For example, there are CMOS operational amplifier ICs available in the market. Transmission gates may be used as analog multiplexers instead of signal relays. CMOS technology is also widely used for RF circuits all the way to microwave frequencies, in mixed-signal (analog+digital) applications.
RF CMOS
RF CMOS refers to RF circuits (radio frequency circuits) which are based on mixed-signal CMOS integrated circuit technology. They are widely used in wireless telecommunication technology. RF CMOS was developed by Asad Abidi while working at UCLA in the late 1980s. This changed the way in which RF circuits were designed, leading to the replacement of discrete bipolar transistors with CMOS integrated circuits in radio transceivers. It enabled sophisticated, low-cost and portable end-user terminals, and gave rise to small, low-cost, low-power and portable units for a wide range of wireless communication systems. This enabled "anytime, anywhere" communication and helped bring about the wireless revolution, leading to the rapid growth of the wireless industry.
The baseband processors and radio transceivers in all modern wireless networking devices and mobile phones are mass-produced using RF CMOS devices. RF CMOS circuits are widely used to transmit and receive wireless signals, in a variety of applications, such as satellite technology (such as GPS), bluetooth, Wi-Fi, near-field communication (NFC), mobile networks (such as 3G and 4G), terrestrial broadcast, and automotive radar applications, among other uses.
Examples of commercial RF CMOS chips include Intel's DECT cordless phone, and 802.11 (Wi-Fi) chips created by Atheros and other companies. Commercial RF CMOS products are also used for Bluetooth and Wireless LAN (WLAN) networks. RF CMOS is also used in the radio transceivers for wireless standards such as GSM, Wi-Fi, and Bluetooth, transceivers for mobile networks such as 3G, and remote units in wireless sensor networks (WSN).
RF CMOS technology is crucial to modern wireless communications, including wireless networks and mobile communication devices. One of the companies that commercialized RF CMOS technology was Infineon. Its bulk CMOS RF switches sell over 1 billion units annually, reaching a cumulative total of 5 billion units.
Temperature range
Conventional CMOS devices work over a range of −55 °C to +125 °C.
There were theoretical indications as early as August 2008 that silicon CMOS will work down to −233 °C (40 K). Functioning temperatures near 40 K have since been achieved using overclocked AMD Phenom II processors with a combination of liquid nitrogen and liquid helium cooling.
Silicon carbide CMOS devices have been tested for a year at 500 °C.
Single-electron MOS transistors
Ultra small (L = 20 nm, W = 20 nm) MOSFETs achieve the single-electron limit when operated at cryogenic temperature over a range of −269 °C (4 K) to about −258 °C (15 K). The transistor displays Coulomb blockade due to progressive charging of electrons one by one. The number of electrons confined in the channel is driven by the gate voltage, starting from an occupation of zero electrons, and it can be set to one or many.
See also
References
Further reading
External links
CMOS gate description and interactive illustrations
Electronic design
Digital electronics
Logic families
Integrated circuits | CMOS | [
"Technology",
"Engineering"
] | 6,207 | [
"Computer engineering",
"Digital electronics",
"Electronic design",
"Electronic engineering",
"Design",
"Integrated circuits"
] |
49,434 | https://en.wikipedia.org/wiki/Conjunction%20%28astronomy%29 | In astronomy, a conjunction occurs when two astronomical objects or spacecraft appear to be close to each other in the sky. This means they have either the same right ascension or the same ecliptic longitude, usually as observed from Earth.
When two objects always appear close to the ecliptic—such as two planets, the Moon and a planet, or the Sun and a planet—this fact implies an apparent close approach between the objects as seen in the sky. A related word, appulse, is the minimum apparent separation in the sky of two astronomical objects.
Conjunctions involve either two objects in the Solar System or one object in the Solar System and a more distant object, such as a star. A conjunction is an apparent phenomenon caused by the observer's perspective: the two objects involved are not actually close to one another in space. Conjunctions between two bright objects close to the ecliptic, such as two bright planets, can be seen with the naked eye.
The astronomical symbol for conjunction is ☌ (Unicode U+260C).
The conjunction symbol is not used in modern astronomy. It continues to be used in astrology.
Passing close
In the particular case of two planets, a conjunction means that they merely have the same right ascension (and hence the same hour angle). This is called conjunction in right ascension. However, there is also the term conjunction in ecliptic longitude, at which both objects have the same ecliptic longitude. Conjunction in right ascension and conjunction in ecliptic longitude do not normally take place at the same time, but in most cases nearly at the same time. However, at triple conjunctions, it is possible that a conjunction only in right ascension (or only in ecliptic longitude) occurs. At the time of conjunction – whether in right ascension or in ecliptic longitude – the involved planets are close together on the celestial sphere. In the vast majority of such cases, one of the planets will appear to pass north or south of the other.
Passing closer
However, if two celestial bodies attain the same declination at the time of a conjunction in right ascension (or the same ecliptic latitude at a conjunction in ecliptic longitude), the one that is closer to the Earth will pass in front of the other. In such a case, a syzygy takes place. If one object moves into the shadow of another, the event is an eclipse. For example, if the Moon passes into the shadow of Earth and disappears from view, this event is called a lunar eclipse. If the visible disk of the nearer object is considerably smaller than that of the farther object, the event is called a transit. When Mercury passes in front of the Sun, it is a transit of Mercury, and when Venus passes in front of the Sun, it is a transit of Venus. When the nearer object appears larger than the farther one, it will completely obscure its smaller companion; this is called an occultation. An example of an occultation is when the Moon passes between Earth and the Sun, causing the Sun to disappear either entirely or partially. This phenomenon is commonly known as a solar eclipse. Occultations in which the larger body is neither the Sun nor the Moon are very rare. More frequent, however, is an occultation of a planet by the Moon. Several such events are visible every year from various places on Earth.
Position of the observer
A conjunction, as a phenomenon of perspective, is an event that involves two astronomical bodies seen by an observer on the Earth. Times and details depend only very slightly on the observer's location on the Earth's surface, with the differences being greatest for conjunctions involving the Moon because of its relative closeness, but even for the Moon the time of a conjunction never differs by more than a few hours.
Superior and inferior conjunctions with the Sun
As seen from a planet that is superior, if an inferior planet is on the opposite side of the Sun, it is in superior conjunction with the Sun. An inferior conjunction occurs when the two planets lie in a line on the same side of the Sun. In an inferior conjunction, the superior planet is "in opposition" to the Sun as seen from the inferior planet.
The terms "inferior conjunction" and "superior conjunction" are used in particular for the planets Mercury and Venus, which are inferior planets as seen from Earth. However, this definition can be applied to any pair of planets, as seen from the one farther from the Sun.
A planet (or asteroid or comet) is simply said to be in conjunction, when it is in conjunction with the Sun, as seen from Earth. The Moon is in conjunction with the Sun at New Moon.
Multiple conjunctions and quasiconjunctions
Conjunctions between two planets can be single, triple, or even quintuple. Quintuple conjunctions involve Mercury, because it moves rapidly east and west of the sun, in a synodic cycle just 116 days in length. An example will occur in 2048, when Venus, moving eastward behind the Sun, encounters Mercury five times (February 16, March 16, May 27, August 13, and September 5).
There is also a so-called quasiconjunction, when a planet in retrograde motion — always either Mercury or Venus, from the point of view of the Earth — will "drop back" in right ascension until it almost allows another planet to overtake it, but then the former planet will resume its forward motion and thereafter appear to draw away from it again. This will occur in the morning sky, before dawn. The reverse may happen in the evening sky after dusk, with Mercury or Venus entering retrograde motion just as it is about to overtake another planet (often Mercury and Venus are both of the planets involved, and when this situation arises they may remain in very close visual proximity for several days or even longer). The quasiconjunction is reckoned as occurring at the time the distance in right ascension between the two planets is smallest, even though, when declination is taken into account, they may appear closer together shortly before or after this.
Average interval between conjunctions
The interval between two conjunctions involving the same two planets is not constant, but the average interval between two similar conjunctions can be calculated from the periods of the planets. The "speed" at which a planet goes around the Sun, in terms of revolutions per time, is given by the inverse of its period, and the speed difference between two planets is the difference between these. For conjunctions of two planets beyond the orbit of Earth, the average time interval between two conjunctions is the time it takes for 360° to be covered by that speed difference, so the average interval is

$$T = \frac{1}{\left|\dfrac{1}{P_1} - \dfrac{1}{P_2}\right|},$$

where $P_1$ and $P_2$ are the orbital periods of the two planets.
This does not apply of course to the intervals between the individual conjunctions of a triple conjunction.
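As a quick illustration, the formula above can be evaluated directly. The following Python sketch uses approximate sidereal orbital periods in Julian years; these round-number values are assumptions of this example, not figures taken from this article.

```python
def average_conjunction_interval(p1: float, p2: float) -> float:
    """Average time between conjunctions of two bodies orbiting the Sun,
    in the same unit as the input periods: 1 / |1/p1 - 1/p2|."""
    return 1.0 / abs(1.0 / p1 - 1.0 / p2)

# Jupiter (~11.862 yr) and Saturn (~29.447 yr): great conjunctions
print(average_conjunction_interval(11.862, 29.447))  # ~19.86 years

# Venus (~0.615 yr) and Earth (1.0 yr): the synodic cycle of Venus
print(average_conjunction_interval(0.615, 1.0))      # ~1.60 years
```

The second value reproduces the roughly 1.6-year synodic cycle of Venus discussed below.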
Conjunctions between a planet inside the orbit of Earth (Venus or Mercury) and a planet outside are a bit more complicated. As the outer planet swings around from being in opposition to the Sun to being east of the Sun, then in superior conjunction with the Sun, then west of the Sun, and back to opposition, it will be in conjunction with Venus or Mercury an odd number of times. So the average interval between, say, the first conjunction of one set and the first of the next set will be equal to the average interval between its oppositions with the Sun. Conjunctions between Mercury and Mars are usually triple, and those between Mercury and planets beyond Mars may also be. Conjunctions between Venus and the planets beyond Earth may be single or triple.
As for conjunctions between Mercury and Venus, each time Venus goes from maximum elongation east of the Sun to maximum elongation west of the Sun and then back to east of the Sun (a so-called synodic cycle of Venus), an even number of conjunctions with Mercury takes place. There are usually four, but sometimes just two, and sometimes six, as in the cycle mentioned above with a quintuple conjunction as Venus moves eastward, preceded by a single conjunction on August 6, 2047, as Venus moves westward. The average interval between corresponding conjunctions (for example the first of one set and the first of the next) is 1.599 years (583.9 days), based on the orbital speeds of Venus and Earth, but arbitrary conjunctions occur at least twice as often. The synodic cycle of Venus (1.599 years) is close to five times as long as that of Mercury (0.317 years). When they are in phase and move between the Sun and the Earth at the same time, they remain close together in the sky for weeks.
The following table gives these average intervals, between corresponding conjunctions, in Julian years of 365.25 days, for combinations of the nine traditional planets. Conjunctions with the Sun are also included. Since Pluto is in resonance with Neptune, the period used is 1.5 times that of Neptune, slightly different from the current value. The interval is then exactly three times the period of Neptune.
Notable conjunctions
1953 BC
On February 27, 1953 BC, Mercury, Venus, Mars and Saturn formed a group with an angular diameter of 26.45 arcminutes. Jupiter was only a few degrees away on the same day, so that all five bright planets could be found in an area measuring only 4.33 degrees. David Pankenier and David Nivison have suggested that this conjunction occurred at the beginning of the Xia dynasty in China.
929
A triple conjunction between Mars and Jupiter occurred. At the first conjunction, on May 26, 929, Mars, at magnitude −1.8, stood 3.1 degrees south of Jupiter, which shone at magnitude −2.6. At the second conjunction, on July 4, 929, Mars stood 5.7 degrees south of Jupiter; both planets shone at magnitude −2.8. On August 18, 929, Mars, then at magnitude −1.9, stood 4.7 degrees south of Jupiter, which was at magnitude −2.6.
Of all conjunctions between outer planets since the birth of Christ, this second conjunction may have been the one at which both planets were at their greatest brightness; at every other conjunction between outer planets, at least one of the planets was dimmer.
1054
On July 5, 1054, a supernova brighter than Venus appeared in the eastern part of the constellation Taurus, close to the waning crescent Moon. The exact geocentric conjunction in right ascension took place at 07:58 UTC on that day, with an angular separation of 3 degrees. It was perhaps the brightest star-like object in recorded history to pass in such close conjunction with the Moon. The event is possibly also depicted on two petroglyphs in Arizona.
1503
Between December 22 and December 27, 1503, all three bright outer planets, Mars, Jupiter and Saturn, reached opposition to the Sun and therefore stood close together in the night sky. During the 1503 opposition period, Mars was in conjunction with Jupiter three times (October 5, 1503, January 19, 1504, and February 8, 1504) and in conjunction with Saturn three times (October 14, 1503, December 26, 1503, and March 7, 1504). On May 24, 1504, Jupiter and Saturn stood in close conjunction with an angular separation of 19 arcminutes.
1604
On October 9, 1604, a conjunction between Mars and Jupiter took place, in which Mars passed 1.8 degrees south of Jupiter. On the same day, Kepler's Supernova appeared only two degrees away from Jupiter. This was perhaps the only time in recorded history that a supernova appeared near a conjunction of two planets.
On December 12, 1604, Saturn passed 33 arcminutes south of Kepler's Supernova, although this was unobservable because the elongation from the Sun was just 3.1 degrees. On December 24, 1604, Mercury stood in conjunction with Kepler's Supernova, 1.8 degrees south of it; as the elongation of this event from the Sun was 15 degrees, it was in principle observable. On January 20, 1605, Venus passed 29 arcminutes north of Kepler's Supernova at an elongation of 43.1 degrees from the Sun.
1899
In early December 1899 the Sun and the naked-eye planets appeared to lie within a band 35 degrees wide along the ecliptic as seen from the Earth. As a consequence, over the period 1–4 December 1899, the Moon reached conjunction with, in order, Jupiter, Uranus, the Sun, Mercury, Mars, Saturn and Venus. Most of these conjunctions were not visible because of the glare of the Sun.
1962
Over the period 4–6 February 1962, in a rare series of events, Mercury and Venus reached conjunction as observed from the Earth, followed by Venus and Jupiter, then by Mars and Saturn. Conjunctions took place between the Moon and, in turn, Mars, Saturn, the Sun, Mercury, Venus and Jupiter. Mercury also reached inferior conjunction with the Sun. The conjunction between the Moon and the Sun at new Moon produced a total solar eclipse visible in Indonesia and the Pacific Ocean, at a time when all five naked-eye planets were visible in the vicinity of the Sun in the sky.
1987
Mercury, Venus and Mars separately reached conjunction with each other, and each separately with the Sun, within a 7-day period in August 1987 as seen from the Earth. The Moon also reached conjunction with each of these bodies on 24 August. However, none of these conjunctions were observable due to the glare of the Sun.
2000
In May 2000, in a very rare event, several planets lay in the vicinity of the Sun in the sky as seen from the Earth, and a series of conjunctions took place. Jupiter, Mercury and Saturn each reached conjunction with the Sun in the period 8–10 May. These three planets in turn were in conjunction with each other and with Venus over a period of a few weeks. However, most of these conjunctions were not visible from the Earth because of the glare from the Sun. NASA referred to May 5 as the date of the conjunction.
2002
Venus, Mars and Saturn appeared close together in the evening sky in early May 2002, with a conjunction of Mars and Saturn occurring on 4 May. This was followed by a conjunction of Venus and Saturn on 7 May, and another of Venus and Mars on 10 May when their angular separation was only 18 arcminutes. A series of conjunctions between the Moon and, in order, Saturn, Mars and Venus took place on 14 May, although it was not possible to observe all these in darkness from any single location on the Earth.
2007
A conjunction of the Moon and Mars took place on 24 December 2007, very close to the time of the full Moon and at the time when Mars was at opposition to the Sun. Mars and the full Moon appeared close together in the sky worldwide, with an occultation of Mars occurring for observers in some far northern locations.
Similar conjunctions took place on 21 May 2016 and on 8 December 2022.
2008
A conjunction of Venus and Jupiter occurred on 1 December 2008, and several hours later both planets separately reached conjunction with the crescent Moon. An occultation of Venus by the Moon was visible from some locations. The three objects appeared close together in the sky from any location on the Earth.
2012
On June 5–6, Venus passed across the face of the Sun in a transit of Venus, a particularly close inferior conjunction; the next such transit will not occur until December 2117.
2013
At the end of May, Mercury, Venus and Jupiter went through a series of conjunctions only a few days apart.
2015
June 30 – Venus and Jupiter came close together in a planetary conjunction, approaching to about one-third of a degree apart. The conjunction was nicknamed the "Star of Bethlehem".
2016
On the morning of January 9, Venus and Saturn came together in a conjunction.
On August 27, Mercury and Venus were in conjunction, followed by a conjunction of Venus and Jupiter, meaning that the three planets were very close together in the evening sky.
2017
On the morning of November 13, Venus and Jupiter were in conjunction, meaning that they appeared close together in the morning sky.
2018
In the early hours of January 7, Mars and Jupiter were in conjunction. The pair was only 0.25 degrees apart in the sky at their closest.
2020
During most of February, March, and April, Mars, Jupiter, and Saturn were close to each other, and so they underwent a series of conjunctions: on March 20, Mars was in conjunction with Jupiter, and on March 31, Mars was in conjunction with Saturn.
On December 21, Jupiter and Saturn appeared at their closest separation in the sky since 1623, in an event known as a great conjunction.
2022
On October 9, the planetoid Pallas passed 8.5 arcminutes south of Sirius, the brightest star in the night sky (source: Astrolutz 2022, ISBN 978-3-7534-7124-2). As Sirius lies far south of the ecliptic, only a few Solar System objects can ever be seen from Earth close to it.
On this occasion Pallas made not only its closest angular approach to Sirius in the 21st century, but its closest since the asteroid's discovery in 1802.
In the 19th century, the closest approach of Pallas and Sirius took place on October 11, 1879, when Pallas, at magnitude 8.6, passed 1.3° southwest of Sirius. In the 20th century, the smallest separation was reached on October 12, 1962, when Pallas, again at magnitude 8.6, stood 1.4° southwest of the brightest star in the sky.
Conjunctions of planets in right ascension 2005–2020
See also
Appulse
Astrometry
Astronomical transit
Transit of Earth from Mars
Transit of Mercury
Transit of Venus
Occultation
Elongation (astronomy)
Great conjunction
Opposition (astronomy)
Spherical astronomy
Syzygy (astronomy)
Triple conjunction
References
External links
Venus – Jupiter 2015 & 2016 conjunctions
Planets conjunctions and mutual occultations 1000BC to 3000AD
Conjunctions of planets with the main asteroids
Astrometry | Conjunction (astronomy) | [
"Astronomy"
] | 3,667 | [
"Astrometry",
"Astronomical sub-disciplines"
] |
49,492 | https://en.wikipedia.org/wiki/Divisor | In mathematics, a divisor of an integer $n$, also called a factor of $n$, is an integer $m$ that may be multiplied by some integer to produce $n$. In this case, one also says that $n$ is a multiple of $m$. An integer $n$ is divisible or evenly divisible by another integer $m$ if $m$ is a divisor of $n$; this implies dividing $n$ by $m$ leaves no remainder.
Definition
An integer $n$ is divisible by a nonzero integer $m$ if there exists an integer $k$ such that $n = km$. This is written as

$$m \mid n.$$

This may be read as: $m$ divides $n$, $m$ is a divisor of $n$, $m$ is a factor of $n$, or $n$ is a multiple of $m$. If $m$ does not divide $n$, then the notation is $m \nmid n$.
There are two conventions, distinguished by whether $m$ is permitted to be zero:
With the convention without an additional constraint on $m$, $m \mid 0$ for every integer $m$.
With the convention that $m$ be nonzero, $m \mid 0$ for every nonzero integer $m$.
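In code, this definition reduces to a remainder test. The following Python sketch is an illustration, not part of the mathematical definition, and adopts the convention that the divisor must be nonzero:

```python
def divides(m: int, n: int) -> bool:
    """Return True if m | n, i.e. n = k*m for some integer k.
    Uses the convention that the divisor m must be nonzero."""
    if m == 0:
        raise ValueError("divisor must be nonzero under this convention")
    return n % m == 0

print(divides(7, 42))   # True: 42 = 6 * 7
print(divides(5, 42))   # False: 42 = 8 * 5 + 2 leaves a remainder
```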
General
Divisors can be negative as well as positive, although often the term is restricted to positive divisors. For example, there are six divisors of 4; they are 1, 2, 4, −1, −2, and −4, but only the positive ones (1, 2, and 4) would usually be mentioned.
1 and −1 divide (are divisors of) every integer. Every integer (and its negation) is a divisor of itself. Integers divisible by 2 are called even, and integers not divisible by 2 are called odd.
1, −1, $n$ and $-n$ are known as the trivial divisors of $n$. A divisor of $n$ that is not a trivial divisor is known as a non-trivial divisor (or strict divisor). A nonzero integer with at least one non-trivial divisor is known as a composite number, while the units −1 and 1 and prime numbers have no non-trivial divisors.
There are divisibility rules that allow one to recognize certain divisors of a number from the number's digits.
Examples
7 is a divisor of 42 because $7 \times 6 = 42$, so we can say $7 \mid 42$. It can also be said that 42 is divisible by 7, 42 is a multiple of 7, 7 divides 42, or 7 is a factor of 42.
The non-trivial divisors of 6 are 2, −2, 3, −3.
The positive divisors of 42 are 1, 2, 3, 6, 7, 14, 21, 42.
The set of all positive divisors of 60, partially ordered by divisibility, forms a lattice whose Hasse diagram connects each divisor to the divisors immediately above and below it; the sketch below computes its covering relations.
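A brute-force Python sketch of this partial order; the helper names here are illustrative, not standard-library functions:

```python
def positive_divisors(n: int) -> list[int]:
    """All positive divisors of n, in increasing order."""
    return [d for d in range(1, n + 1) if n % d == 0]

def hasse_edges(divs: list[int]) -> list[tuple[int, int]]:
    """Covering pairs (a, b): a divides b with no divisor strictly between."""
    edges = []
    for a in divs:
        for b in divs:
            if a < b and b % a == 0:
                between = any(a < c < b and c % a == 0 and b % c == 0
                              for c in divs)
                if not between:
                    edges.append((a, b))
    return edges

divs = positive_divisors(60)
print(divs)               # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
print(hasse_edges(divs))  # (1, 2), (1, 3), (1, 5), (2, 4), (2, 6), ...
```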
Further notions and facts
There are some elementary rules:
If $a \mid b$ and $b \mid c$, then $a \mid c$; that is, divisibility is a transitive relation.
If $a \mid b$ and $b \mid a$, then $a = b$ or $a = -b$. (That is, $a$ and $b$ are associates.)
If $a \mid b$ and $a \mid c$, then $a \mid (b + c)$ holds, as does $a \mid (b - c)$. However, if $a \mid b$ and $c \mid b$, then $(a + c) \mid b$ does not always hold (for example, $2 \mid 6$ and $3 \mid 6$, but 5 does not divide 6).
If $a \mid b$, then $|a| \leq |b|$ for nonzero $b$. This follows immediately from writing $b = qa$ with $q$ a nonzero integer, so that $|b| = |q|\,|a| \geq |a|$.
If $a \mid bc$ and $\gcd(a, b) = 1$, then $a \mid c$. This is called Euclid's lemma.
If $p$ is a prime number and $p \mid ab$, then $p \mid a$ or $p \mid b$.
A positive divisor of $n$ that is different from $n$ is called a proper divisor or an aliquot part of $n$ (for example, the proper divisors of 6 are 1, 2, and 3). A number that does not evenly divide $n$ but leaves a remainder is sometimes called an aliquant part of $n$.
An integer whose only proper divisor is 1 is called a prime number. Equivalently, a prime number is a positive integer that has exactly two positive factors: 1 and itself.
Any positive divisor of $n$ is a product of prime divisors of $n$ raised to some power. This is a consequence of the fundamental theorem of arithmetic.
A number $n$ is said to be perfect if it equals the sum of its proper divisors, deficient if the sum of its proper divisors is less than $n$, and abundant if this sum exceeds $n$.
The total number of positive divisors of $n$ is a multiplicative function $d(n)$, meaning that when two numbers $m$ and $n$ are relatively prime, $d(mn) = d(m) \times d(n)$. For instance, $d(42) = 8 = 2 \times 2 \times 2 = d(2) \times d(3) \times d(7)$; the eight divisors of 42 are 1, 2, 3, 6, 7, 14, 21 and 42. However, the number of positive divisors is not a totally multiplicative function: if the two numbers $m$ and $n$ share a common divisor, then it might not be true that $d(mn) = d(m) \times d(n)$. The sum of the positive divisors of $n$ is another multiplicative function $\sigma(n)$ (for example, $\sigma(42) = 96 = 3 \times 4 \times 8 = \sigma(2) \times \sigma(3) \times \sigma(7)$). Both of these functions are examples of divisor functions.
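Both functions, and the perfect/deficient/abundant classification described above, are easy to check by brute force. A minimal Python sketch, where `d` and `sigma` are illustrative names for the divisor-counting and divisor-sum functions:

```python
def divisors(n: int) -> list[int]:
    """All positive divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

def d(n: int) -> int:
    """Number-of-divisors function."""
    return len(divisors(n))

def sigma(n: int) -> int:
    """Sum-of-divisors function."""
    return sum(divisors(n))

def classify(n: int) -> str:
    """Classify n by the sum of its proper divisors, sigma(n) - n."""
    aliquot = sigma(n) - n
    if aliquot == n:
        return "perfect"
    return "deficient" if aliquot < n else "abundant"

print(d(42), sigma(42))           # 8 96, matching the examples above
print(d(6) * d(7) == d(42))       # True: 6 and 7 are relatively prime
print(classify(6), classify(8), classify(12))  # perfect deficient abundant
```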
If the prime factorization of $n$ is given by

$$n = p_1^{\nu_1} p_2^{\nu_2} \cdots p_k^{\nu_k},$$

then the number of positive divisors of $n$ is

$$d(n) = (\nu_1 + 1)(\nu_2 + 1) \cdots (\nu_k + 1),$$

and each of the divisors has the form

$$p_1^{\mu_1} p_2^{\mu_2} \cdots p_k^{\mu_k},$$

where $0 \leq \mu_i \leq \nu_i$ for each $1 \leq i \leq k$.
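The divisor-count formula translates directly into code. In the Python sketch below, the trial-division factorizer is an assumption of this example (adequate only for small $n$), not part of the formula itself:

```python
def prime_factorization(n: int) -> dict[int, int]:
    """Map each prime p to its exponent in n, found by trial division."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:                      # whatever remains is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors

def num_divisors(n: int) -> int:
    """d(n) as the product of (exponent + 1) over the factorization."""
    result = 1
    for exponent in prime_factorization(n).values():
        result *= exponent + 1
    return result

print(prime_factorization(60))  # {2: 2, 3: 1, 5: 1}
print(num_divisors(60))         # (2+1)(1+1)(1+1) = 12
```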
For every natural $n$, $d(n) < 2\sqrt{n}$.
Also,

$$\sum_{k=1}^{n} d(k) = n \ln n + (2\gamma - 1)n + O(\sqrt{n}),$$

where $\gamma$ is the Euler–Mascheroni constant. One interpretation of this result is that a randomly chosen positive integer $n$ has an average number of divisors of about $\ln n$. However, this is a result from the contributions of numbers with "abnormally many" divisors.
In abstract algebra
Ring theory
Division lattice
In definitions that allow the divisor to be 0, the relation of divisibility turns the set of non-negative integers into a partially ordered set that is a complete distributive lattice. The largest element of this lattice is 0 and the smallest is 1. The meet operation ∧ is given by the greatest common divisor and the join operation ∨ by the least common multiple. This lattice is isomorphic to the dual of the lattice of subgroups of the infinite cyclic group Z.
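In this lattice the meet and join are just the gcd and lcm, both available in Python's standard library (math.gcd, and math.lcm since Python 3.9). A small sketch of the lattice operations and of its top and bottom elements:

```python
import math

meet = math.gcd  # greatest common divisor = meet in the division lattice
join = math.lcm  # least common multiple = join in the division lattice

print(meet(12, 18), join(12, 18))  # 6 36

# 1 is the smallest element and 0 the largest:
print(meet(1, 42), join(1, 42))    # 1 42  (1 divides every integer)
print(meet(0, 42), join(0, 42))    # 42 0  (every integer divides 0)
```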
See also
Arithmetic functions
Euclidean algorithm
Fraction (mathematics)
Integer factorization
Table of divisors – A table of prime and non-prime divisors for 1–1000
Table of prime factors – A table of prime factors for 1–1000
Unitary divisor
Notes
Citations
References
Elementary number theory
Division (mathematics) | Divisor | [
"Mathematics"
] | 1,189 | [
"Elementary number theory",
"Elementary mathematics",
"Number theory"
] |